Job Title: AI Evals & Test Engineer
Company: BharatGen
Location: Mumbai, Maharashtra
Created: 2025-10-24
Job Type: Full Time
Job Description:
Job Summary: We are looking for an AI Evaluation & Test Engineer to join our growing team and ensure that our generative AI models and applications are safe, accurate, trustworthy, and deliver an elegant user experience. You will serve as the first customer of our AI systems. This role is ideal for product-minded engineers who obsess over product quality and customer-centricity, and who are passionate about shaping the behavior of AI systems in the real world.

Key Responsibilities:
- Build and maintain AI evaluation pipelines to test, measure, and evaluate the behavior and performance of AI systems.
- Implement traces, spans, and session tracking for observability, and identify error propagation in multi-step pipelines (see the tracing sketch below).
- Define AI quality metrics and KPIs around factuality, faithfulness, toxicity, grounding precision/recall, latency, cost, etc., with clear acceptance bars.
- Implement evaluation and testing automation to enable end-to-end system and regression testing at scale.
- Define criteria for and implement release gates in the CI/CD pipeline (see the first sketch below).
- Find creative ways to break products.
- Assist in root cause analysis and troubleshooting of bugs and field issues.
- Collaborate with cross-functional teammates from product, engineering, linguistics, and customer support to shape human-AI interaction paradigms and ensure that our AI models and applications deliver the desired outcome and user experience.

Minimum Qualifications and Experience: Bachelor’s or Master’s degree in CS/CE/IT/EE/E&TC or related fields, with 5+ years of experience in manual and automation testing of software products, including at least 2 years evaluating and testing AI/ML products.

Required Expertise:
- Strong software testing fundamentals and expertise in writing test plans, executing test cases, and generating detailed reports and dashboards.
- Strong analytical and debugging skills, and attention to detail.
- Proficiency in Python, scripting, and software test automation frameworks and tools such as Pytest, Selenium, and Robot Framework.
- Working knowledge of generative AI models, AI agents, and related concepts such as retrieval-augmented generation (RAG), prompt engineering, context engineering, explainability, traceability, observability, guardrails, reasoning, and specificity.
- Sound understanding of the fundamental differences between testing conventional software and evaluating generative AI systems.
- Team player with excellent interpersonal skills and the ability to collaborate effectively with remote and cross-functional team members.
- Go-getter attitude and the ability to flourish in a fast-paced startup environment.

Experience in any of the following would be a big plus:
- AI evaluation frameworks such as Arize, Braintrust, DeepEval, LangSmith, or Ragas.
- AI safety and red teaming, e.g., prompt injection, jailbreaks, and adversarial and stress testing.
- Different types of AI evaluation methods, e.g., human-in-the-loop and LLM-as-a-Judge (see the final sketch below).
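For a flavor of the work, here is a minimal sketch of a Pytest-based regression gate over a golden set. The `generate` helper and the canned data are hypothetical stand-ins for a real model endpoint and a versioned evaluation dataset:

```python
import pytest

# Hypothetical stand-in for the system under test; a real suite would call
# the deployed model endpoint here.
def generate(prompt: str) -> str:
    return "Mumbai is the capital of Maharashtra."  # canned output for the sketch

# Golden set of (prompt, required substring) pairs; a real suite would load
# these from a versioned dataset rather than hard-coding them.
GOLDEN_SET = [
    ("What is the capital of Maharashtra?", "Mumbai"),
    ("Which state is Mumbai in?", "Maharashtra"),
]

@pytest.mark.parametrize("prompt,expected", GOLDEN_SET)
def test_factuality_gate(prompt, expected):
    # Substring matching is the simplest possible acceptance bar; semantic
    # similarity or LLM-as-a-Judge scoring would slot in for fuzzier criteria.
    assert expected.lower() in generate(prompt).lower()
```

Run in CI (e.g., `pytest -q`), a failing assertion blocks the release, which is what turns an acceptance bar into a release gate.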
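The tracing sketch below uses the OpenTelemetry Python API, which falls back to no-op spans when no SDK is configured, so it runs as-is; the `retrieve` and `answer` steps are hypothetical placeholders for a real multi-step pipeline:

```python
from opentelemetry import trace

tracer = trace.get_tracer("eval.pipeline")

def retrieve(query: str) -> str:
    return "BharatGen is headquartered in Mumbai."  # canned context for the sketch

def answer(query: str) -> str:
    # One span per pipeline step makes error propagation visible: a bad
    # retrieval can be told apart from a bad generation in the trace.
    with tracer.start_as_current_span("session") as session:
        session.set_attribute("session.query", query)
        with tracer.start_as_current_span("retrieval") as span:
            context = retrieve(query)
            span.set_attribute("retrieval.n_chars", len(context))
        with tracer.start_as_current_span("generation"):
            return f"Based on the context: {context}"

print(answer("Where is BharatGen based?"))
```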
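Finally, a bare-bones illustration of LLM-as-a-Judge scoring. `call_judge_model` is a hypothetical stub standing in for a call to a stronger grading model:

```python
# Hypothetical judge call: in practice this would send the grading prompt to
# a stronger model's API; here it returns a fixed reply so the sketch runs.
def call_judge_model(grading_prompt: str) -> str:
    return "4"

JUDGE_TEMPLATE = (
    "You are grading an AI answer for faithfulness to the given source.\n"
    "Source: {source}\nAnswer: {answer}\n"
    "Reply with a single integer from 1 (unfaithful) to 5 (fully faithful)."
)

def judge_faithfulness(answer: str, source: str) -> int:
    reply = call_judge_model(JUDGE_TEMPLATE.format(source=source, answer=answer))
    return int(reply.strip())

score = judge_faithfulness(
    answer="BharatGen is based in Mumbai.",
    source="BharatGen is headquartered in Mumbai, Maharashtra.",
)
assert 1 <= score <= 5
```

Judge scores drift with the judge model and prompt, so in practice they are typically spot-checked against human-in-the-loop ratings before being trusted in a gate.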