Job Title: Senior Data Engineer
Company: Mitra AI
Location: New Delhi, Delhi
Created: 2026-04-03
Job Type: Full Time
Job Description:
This role requires hands-on experience in managing Snowflake environments, supporting data pipeline orchestration, enforcing platform-level standards, and ensuring observability, performance, and security across environments. You will collaborate with architects, engineers, and DevOps teams to operationalize the platform’s design and drive its long-term stability and scalability in a cloud-native ecosystem.

JOB SPECIFIC DUTIES & RESPONSIBILITIES
- Design and manage schema isolation, role-based access control (RBAC), masking policies, and cost-optimized Snowflake architecture for multiple nonprofit tenants
- Implement and maintain CI/CD pipelines for dbt, Snowflake objects, and metadata-driven ingestion processes using GitHub Actions or similar tools
- Develop and maintain automation accelerators for data ingestion, schema validation, error handling, and onboarding new clients at scale
- Collaborate with architects and data engineers to ensure seamless integration with source CRMs, ByteSpree connectors, and downstream BI/reporting layers
- Monitor and optimize performance of Snowflake workloads (e.g., query tuning, warehouse sizing, caching strategy) to ensure reliability and scalability
- Establish and maintain observability and monitoring practices across data pipelines, ingestion jobs, and platform components (e.g., error tracking, data freshness, job status dashboards)
- Manage infrastructure-as-code (IaC), configuration templates, and version control practices across the data stack
- Ensure robust data validation, quality checks, and observability mechanisms are in place across all platform services
- Support incident response, pipeline failures, and technical escalations in production, coordinating across engineering and client teams
- Contribute to data governance compliance by implementing platform-level policies for PII, lineage tracking, and tenant-specific metadata tagging

REQUIRED EXPERIENCE AND QUALIFICATIONS
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related technical field
- 6+ years of experience in data engineering or platform delivery, including 3+ years of hands-on Snowflake experience in production environments
- Proven expertise in building and managing multitenant data platforms, including schema isolation, RBAC, and masking policies
- Solid knowledge of CI/CD practices for data projects, with experience guiding pipeline implementations using tools like GitHub Actions
- Hands-on experience with dbt, SQL, and metadata-driven pipeline design for large-scale ingestion and transformation workloads
- Strong understanding of AWS cloud services relevant to data platforms (e.g., S3, IAM, Lambda, CloudWatch, Secrets Manager)
- Experience optimizing Snowflake performance, including warehouse sizing, caching, and cost control strategies
- Familiarity with setting up observability frameworks, monitoring tools, and data quality checks across complex pipeline ecosystems
- Proficiency in infrastructure-as-code (IaC) concepts and managing configuration/versioning across environments
- Awareness of data governance principles, including lineage, PII handling, and tenant-specific metadata tagging