Job Title: AI Safety Testing Expert
Company: beBeeAI
Location: Vellore, Tamil Nadu
Created: 2025-10-14
Job Type: Full Time
Job Description:
Job Opportunity: AI Red Teaming & Prompt Evaluation Specialist

As a detail-oriented professional, you will help rigorously test and evaluate AI-generated content to identify vulnerabilities, assess risks, and ensure compliance with safety, ethical, and quality standards.

Key Responsibilities:
- Conduct red teaming exercises to identify adversarial, harmful, or unsafe outputs from large language models (LLMs).
- Evaluate and stress-test AI prompts across multiple domains to uncover potential failure modes.
- Develop and apply test cases to assess accuracy, bias, toxicity, hallucinations, and misuse potential in AI-generated responses.

Requirements:
- Proven experience in AI red teaming, LLM safety testing, or adversarial prompt design.
- Familiarity with prompt engineering, NLP tasks, and ethical considerations in generative AI.
- Strong background in quality assurance, content review, or test case development for AI/ML systems.
- Understanding of LLM behaviours, failure modes, and model evaluation metrics.
- Excellent critical thinking, pattern recognition, and analytical writing skills.
- Ability to work independently, follow detailed evaluation protocols, and meet tight deadlines.

Join our collaborative environment where professionals thrive in an ecosystem of innovation and continuous learning. We offer opportunities for growth, professional development, and networking within the industry.