Job Title: AI Content Safety Specialist
Company: beBeeContentSpecialist
Location: Kannur, Kerala
Created: 2025-10-14
Job Type: Full Time
Job Description:
As a specialist in AI-generated content, you will play a vital role in testing and evaluating the safety and quality of our language models. We are seeking highly analytical, detail-oriented professionals to collaborate with our data scientists and safety researchers in identifying vulnerabilities and assessing risks in AI-generated responses.

Key responsibilities include:
- Conducting red-teaming exercises to identify potential failure modes and adversarial outputs from large language models.
- Evaluating and stress-testing AI prompts across multiple domains to uncover vulnerabilities and biases.
- Developing and applying test cases to assess accuracy, bias, toxicity, hallucinations, and misuse potential in AI-generated responses.
- Collaborating with data scientists and safety researchers to report risks and suggest mitigations.
- Performing manual QA and content validation across model versions, ensuring factual consistency and coherence.

Required Skills & Qualifications:
- Proven experience in AI red teaming, LLM safety testing, or adversarial prompt design.
- Familiarity with prompt engineering, NLP tasks, and ethical considerations in generative AI.
- Strong background in quality assurance, content review, or test-case development for AI/ML systems.
- Understanding of LLM behaviors and failure modes.
- Excellent critical thinking, pattern recognition, and analytical writing skills.