
Job Title


Senior AI Security Engineer


Company: IDecisions


Location: Panipat, Haryana


Created: 2026-03-25


Job Type: Full Time


Job Description

Company

We partner with enterprises to advise, build, secure, and operationalize AI systems at scale. Our focus is on developing Generative AI (GenAI), Agentic AI, and Reinforcement Learning-driven systems, while embedding security, governance, and risk controls directly into AI workflows. We enable organizations to safely deploy LLMs, autonomous agents, and adaptive decisioning systems in regulated, mission-critical environments.

Job Description

As a Senior AI Security Engineer (GenAI, Agentic AI & Reinforcement Learning), you will lead the design and implementation of secure, scalable, and adaptive AI systems, including LLM-based applications, agentic workflows, and RL-driven decision engines.

This role goes beyond traditional security: you will build intelligent, self-improving security review systems using agentic frameworks (LangGraph, LangChain, LangSmith) and reinforcement learning techniques to continuously enhance AI risk evaluation, policy enforcement, and approval workflows.

You will collaborate closely with AI/ML engineers, platform teams, and governance stakeholders to embed autonomous, learning-based security mechanisms into enterprise AI ecosystems.

Key Responsibilities

GenAI, Agentic AI & RL Security Architecture
- Design and secure LLM, RAG, multi-agent, and RL-driven systems
- Implement security controls for:
  - Autonomous decision-making agents
  - RL-based adaptive systems
  - Tool-using and API-integrated agents
- Ensure safe exploration and bounded behavior in RL environments

Agentic AI + Reinforcement Learning for Security Automation (Core Focus)
- Build agentic AI pipelines using:
  - LangGraph → multi-step, stateful security workflows
  - LangChain → LLM orchestration and tool integration
  - LangSmith → observability, tracing, and evaluation
- Develop RL-enhanced security agents that:
  - Learn from past approval decisions
  - Optimize risk scoring and classification over time
  - Continuously improve policy enforcement accuracy
- Implement feedback loops (human-in-the-loop + automated) to train:
  - Risk evaluation agents
  - Compliance validation agents
- Automate end-to-end intake → evaluation → approval pipelines for GenAI and Agentic AI use cases

Reinforcement Learning Implementation & Governance
- Design and implement RL models for adaptive security decisioning:
  - Policy optimization
  - Risk-based prioritization
  - Dynamic access control adjustments
- Apply safe RL techniques:
  - Reward shaping aligned with compliance and security policies
  - Constraint-based RL (safe exploration boundaries)
- Monitor and mitigate risks such as:
  - Reward hacking
  - Unsafe policy learning
  - Drift in learned behaviors
- Integrate RL models into AI governance workflows for continuous improvement

AI Risk, Governance & Compliance
- Translate frameworks such as NIST AI RMF, the EU AI Act, and the OWASP Top 10 for LLMs into automated, adaptive controls
- Build dynamic risk scoring systems enhanced by RL:
  - Adversarial Risk Score
  - Model Drift Index
  - Policy Compliance Confidence Score
- Generate real-time AI risk heat maps and approval recommendations
- Implement policy-as-code + policy-learning systems

Security Assessment & Red Teaming
- Conduct AI/LLM/RL system security assessments
- Perform red teaming across:
  - Prompt injection scenarios
  - Agent tool misuse
  - RL policy exploitation
- Evaluate vulnerabilities in:
  - RAG pipelines
  - Multi-agent coordination
  - RL training environments

AI/ML Lifecycle & LLMOps/RLOps Security
- Secure the full lifecycle:
  - Data ingestion, labeling, and validation
  - Model training (LLM + RL) with GPU isolation and sandboxing
  - Deployment, inference, and continuous learning loops
- Implement RLOps + LLMOps security controls
- Ensure:
  - Model lineage and provenance
  - Secure feedback loops
  - Version control for policies and learned behaviors

Monitoring, Incident Response & Observability
- Build AI + RL-aware monitoring systems
- Detect anomalies in:
  - LLM outputs
  - Agent decisions
  - RL policy shifts
- Develop incident response playbooks for autonomous systems
- Create executive dashboards linking AI + RL risk to business KPIs

Data Security & Access Control
- Implement fine-grained and adaptive access controls
- Secure:
  - RAG knowledge bases
  - Vector databases
  - RL training datasets
- Ensure compliance with data privacy and residency requirements

Thought Leadership
- Act as an SME in:
  - AI Security
  - Agentic AI systems
  - Reinforcement Learning security
- Research emerging risks in:
  - Autonomous AI systems
  - Self-improving models
  - Multi-agent + RL ecosystems

Qualifications

Required
- Bachelor’s degree in Computer Science, Engineering, or a related field
- 3–5+ years of experience in cybersecurity (application, cloud, or data security)
- Strong experience in automation, scripting, and security tool development
- Hands-on experience with:
  - GenAI / LLM applications
  - AI threat modeling and risk assessment
- Deep understanding of AI threat vectors:
  - Prompt injection
  - Data leakage
  - Adversarial attacks
- Experience with Azure or AWS cloud security ecosystems

Preferred (Strong Differentiators)

GenAI & Agentic AI
- Hands-on experience with:
  - LangChain
  - LangGraph
  - LangSmith
- Experience building agentic workflows and multi-agent systems
- Experience securing RAG pipelines and LLM applications

Reinforcement Learning (Highly Valued)
- Experience implementing Reinforcement Learning models:
  - Policy optimization
  - Reward function design
  - Decision-making systems
- Familiarity with:
  - RLHF (Reinforcement Learning from Human Feedback)
  - Safe RL and constrained optimization
- Experience integrating RL into:
  - Automation workflows
  - Security decision systems
- Understanding of RLOps pipelines and lifecycle management

Security & Governance
- Familiarity with:
  - OWASP Top 10 for LLMs
  - NIST AI RMF, EU AI Act, ISO 42001
- Experience with:
  - Microsoft Sentinel, Azure Monitor, Purview, Key Vault
  - Policy-as-code and automated compliance frameworks
- Knowledge of data privacy regulations (GDPR, DORA, etc.)
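For candidates wondering what "RL-enhanced security agents that learn from past approval decisions" might look like in practice, here is a minimal plain-Python sketch. Everything in it is hypothetical and not part of the role description: the class name, feature names, the 0.6 escalation threshold, and the learning rule (a simple bandit-style weight update standing in for a full RL policy) are all illustrative assumptions.

```python
class AdaptiveRiskScorer:
    """Illustrative sketch: scores AI use-case intake requests and
    adapts its feature weights from human approval feedback.
    All names and thresholds here are hypothetical."""

    # Hard constraint: these (made-up) categories always escalate to a
    # human, regardless of the learned score -- a crude stand-in for
    # the "safe exploration boundaries" of constraint-based RL.
    BLOCKED_CATEGORIES = {"autonomous_tool_use", "pii_processing"}

    def __init__(self, learning_rate: float = 0.2):
        self.learning_rate = learning_rate
        # Learned risk weight per feature; unseen features start neutral (0.5).
        self.weights: dict[str, float] = {}

    def score(self, features: dict[str, float]) -> float:
        """Weighted average of request features, clamped to [0, 1]."""
        raw = sum(self.weights.get(f, 0.5) * v for f, v in features.items())
        denom = sum(features.values()) or 1.0
        return max(0.0, min(1.0, raw / denom))

    def decide(self, category: str, features: dict[str, float]) -> str:
        """Approve low-risk requests; escalate the rest to a human."""
        if category in self.BLOCKED_CATEGORIES:
            return "escalate"  # constraint overrides anything learned
        return "approve" if self.score(features) < 0.6 else "escalate"

    def feedback(self, features: dict[str, float], human_approved: bool):
        """Nudge feature weights toward the human decision, the
        'learn from past approval decisions' loop in miniature."""
        target = 0.0 if human_approved else 1.0
        for f, v in features.items():
            if v > 0:
                w = self.weights.get(f, 0.5)
                self.weights[f] = w + self.learning_rate * (target - w)


if __name__ == "__main__":
    scorer = AdaptiveRiskScorer()
    request = {"external_api_calls": 1.0, "internet_access": 1.0}
    print(scorer.decide("rag_chatbot", request))       # neutral weights: approve
    for _ in range(10):                                # humans keep rejecting
        scorer.feedback(request, human_approved=False)
    print(scorer.decide("rag_chatbot", request))       # learned weights: escalate
```

The design choice worth noting is the split between the learned score and the hard-coded blocked categories: the constraint list always wins, which mirrors the posting's emphasis on bounded behavior and safe exploration rather than letting a learned policy make unilateral approval decisions.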