Job Title: Data Engineer
Company: TP
Location: Nizamabad, Telangana
Created: 2026-03-15
Job Type: Full Time
Job Description:
The Data Engineer plays a central role in building the data infrastructure that powers Teleperformance's TP ecosystem. This position is responsible for designing, developing, and maintaining robust data pipelines that enable analytics, machine learning, and product intelligence across the Foundation, Enablement, and Blueprint (FAB) layers. The engineer ensures high data availability, security, and performance to support real-time decisioning, model training, and AI orchestration at scale.

Key Responsibilities:

Data Architecture & Pipeline Development
- Design, implement, and optimize scalable ETL/ELT pipelines to ingest, clean, and transform large structured and unstructured datasets.
- Build streaming and batch data workflows supporting analytics and ML workloads.
- Implement efficient storage models for data lakes and warehouses (Delta Lake, Snowflake, BigQuery, Synapse).

Integration & Automation
- Develop APIs and data connectors to integrate multiple data sources (CRM, ERP, product logs, etc.).
- Work with AI/ML Engineers to automate feature extraction and training-data generation pipelines.
- Support integration with model serving endpoints and observability dashboards.

Data Quality & Governance
- Establish validation and monitoring frameworks for data accuracy, lineage, and freshness.
- Ensure compliance with Responsible AI, data privacy, and security standards (GDPR, SOC 2, etc.).
- Collaborate with Security and Legal teams to maintain data anonymization and retention controls.

Collaboration & Enablement
- Partner with Data Scientists, ML Engineers, and Product Managers to deliver reliable datasets for analytics and modeling.
- Document data models, schemas, and workflows within FAB repositories.
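To illustrate the kind of pipeline work this role involves, here is a minimal batch ETL sketch: extract raw records, clean and deduplicate them, and load them into a warehouse-style table. All names (`RAW_RECORDS`, the `orders` table) and the use of in-memory SQLite are illustrative only; the actual stack spans tools such as Delta Lake, Snowflake, BigQuery, and Synapse.

```python
import sqlite3

# Illustrative raw data standing in for CRM/product-log extracts.
RAW_RECORDS = [
    {"id": 1, "email": " Alice@Example.com ", "amount": "19.99"},
    {"id": 2, "email": "bob@example.com", "amount": "5.00"},
    {"id": 2, "email": "bob@example.com", "amount": "5.00"},  # duplicate id
    {"id": 3, "email": None, "amount": "7.50"},               # missing email
]

def transform(records):
    """Clean and normalize: drop rows missing an email, dedupe by id, fix types."""
    seen, cleaned = set(), []
    for r in records:
        if r["email"] is None or r["id"] in seen:
            continue
        seen.add(r["id"])
        cleaned.append((r["id"], r["email"].strip().lower(), float(r["amount"])))
    return cleaned

def load(rows, conn):
    """Load cleaned rows into a warehouse-style target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, email TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(RAW_RECORDS), conn)
count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 2 rows survive cleaning: the duplicate and the null-email row are dropped
```

In production the same extract/transform/load steps would typically be orchestrated as scheduled tasks (e.g. in Airflow or Prefect) rather than run as a script.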
- Participate in the FAB Engineering Guild to standardize data practices across squads.

Education:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.

Experience:
- 5+ years of experience in data engineering, data platform development, or data-intensive software projects.
- Proven experience with modern data stack tools and cloud data architectures.
- Experience supporting AI/ML or analytics platforms in production environments is preferred.

Technical Skills:
- Proficiency in Python, SQL, and data pipeline orchestration tools (Airflow, Prefect, dbt, etc.).
- Hands-on experience with cloud ecosystems (Azure Data Factory, AWS Glue, or GCP Dataflow).
- Familiarity with data streaming technologies (Kafka, Kinesis, or Pub/Sub).
- Knowledge of data warehousing, feature stores, and vector databases.
- Understanding of MLOps, CI/CD, and data observability concepts.
- Strong background in data modeling, schema design, and performance optimization.

Soft Skills:
- Analytical and methodical approach to problem-solving.
- Strong communication and collaboration skills across cross-functional teams.
- Accountability, autonomy, and a continuous-learning mindset.
- Adaptability to fast-changing business and technology environments.
- Passion for enabling data-driven innovation and AI scalability.
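The data-quality responsibilities above (validating accuracy, lineage, and freshness) can be sketched as a simple metrics check. The field names (`id`, `value`, `updated_at`), the 24-hour freshness threshold, and the metric names are hypothetical choices for illustration, not part of any actual monitoring framework used in the role.

```python
from datetime import datetime, timedelta, timezone

def check_quality(rows, max_age_hours=24):
    """Compute simple data-quality metrics: null rate, duplicate ids, stale rows."""
    now = datetime.now(timezone.utc)
    ids = [r["id"] for r in rows]
    nulls = sum(1 for r in rows if r["value"] is None)
    stale = sum(1 for r in rows if now - r["updated_at"] > timedelta(hours=max_age_hours))
    return {
        "null_rate": nulls / len(rows) if rows else 0.0,
        "duplicate_ids": len(ids) - len(set(ids)),
        "stale_rows": stale,
    }

now = datetime.now(timezone.utc)
sample = [
    {"id": 1, "value": 10, "updated_at": now},
    {"id": 1, "value": None, "updated_at": now - timedelta(hours=48)},  # dup + null + stale
    {"id": 2, "value": 5, "updated_at": now},
]
report = check_quality(sample)
print(report)
```

A real observability setup would emit such metrics to dashboards and alerting rather than returning a dict, but the checks themselves (nulls, duplicates, freshness) are the same.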