About Us

We turn customer challenges into growth opportunities.

Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences.

We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.

Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and strategic partnerships with top-tier technology providers. Be a part of an awesome tribe!

Role Summary

We are looking for a highly skilled AI Engineer to build, optimize, and deploy production-grade AI solutions. In this role, you will be the engine room of our AI initiatives: taking architectural blueprints and turning them into scalable, functional systems. While you will partner with Data Engineering and Platform teams, your focus is on implementing LLM-based applications (including RAG), agentic workflows, and fine-tuned models to meet the specific needs of our healthcare and pharmaceutical clients.

What You’ll Do

AI Development & Implementation

  • Build & Optimize: Develop and maintain LLM-powered applications using frameworks such as Microsoft Agent Framework, AutoGen, LangChain, LangGraph, and LlamaIndex.
  • RAG Pipelines: Implement and fine-tune Retrieval-Augmented Generation (RAG) pipelines, including sophisticated chunking strategies, embedding selection, and vector database management.
  • Agentic Systems: Develop autonomous agents capable of tool-use, multi-step planning, and human-in-the-loop interactions.
  • Model Fine-tuning: Execute fine-tuning and optimization tasks (quantization, PEFT/LoRA) to adapt models for specific domain tasks.

MLOps & Productionization

  • Deployment: Deploy models into production environments using Docker and Kubernetes, ensuring high availability and low latency.
  • Monitoring: Implement observability for AI systems, tracking accuracy, hallucinations, cost, and latency.
  • CI/CD: Maintain CI/CD pipelines for ML, ensuring automated testing (unit, contract, and model-quality tests) is integrated into the workflow.

Domain-Specific Execution (Pharma/Healthcare)

  • Data Integrity: Work with sensitive pharmaceutical datasets, ensuring all AI outputs comply with data privacy standards and PII masking requirements.
  • Validation: Support the rigorous validation processes required in life sciences, including reproducibility and statistical validity of model outputs.

Collaboration & Mentorship

  • Teamwork: Work closely with Senior Architects and Engagement Managers to translate business requirements into technical tasks.
  • Code Quality: Participate in code reviews and contribute to the team’s internal library of reusable AI patterns and playbooks.

Must-Have Qualifications

  • Experience: 3-6 years of experience in Software Engineering or Data Science, with at least 2-3 years focused specifically on deploying AI/ML models in production.
  • Tech Stack: Expert-level Python skills and proficiency with deep learning frameworks (PyTorch or TensorFlow).
  • Generative AI: Hands-on experience building applications with LLMs (OpenAI, Claude, Gemini, etc.) and vector databases.
  • Cloud & DevOps: Strong experience with AWS/Azure/GCP and containerization (Docker/Kubernetes).
  • Engineering Fundamentals: Strong SQL skills, API design (FastAPI/Flask), and a "software engineering first" approach to ML (testing, modularity, and documentation).

Preferred Qualifications

  • Advanced RAG: Experience with advanced retrieval techniques (reranking, hybrid search, query expansion).
  • Evaluation Frameworks: Experience using G-Eval, RAGAS, or TruLens to quantify LLM performance.
  • Performance Tuning: Knowledge of vLLM, DeepSpeed, or Triton Inference Server for high-throughput serving.