
Job Title:

AWS Data Engineer

Company: Capco

Location: Bengaluru, Karnataka

Created: 2026-03-10

Job Type: Full Time

Job Description:

About Us

Capco, a Wipro company, is a global technology and management consulting firm. We were named Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients in the banking, financial services, and energy sectors, and we are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO?
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry: projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence, and thought leadership help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

Location: Bangalore

Role: Data Engineer
We are looking for a Data Engineer with strong experience in building and operationalizing data pipelines, ETL workflows, and analytics platforms using PySpark, Apache Airflow, and AWS data services.

Key Responsibilities
- Build scalable ETL/ELT pipelines using PySpark on distributed processing frameworks
- Orchestrate workflows using Apache Airflow (DAG design, scheduling, monitoring)
- Develop data ingestion and transformation jobs using AWS Glue
- Manage secure, compliant data access using AWS Lake Formation
- Maintain and optimize the AWS Glue Data Catalog for metadata, schema, and table management
- Work with analytics teams to publish datasets for BI and dashboards
- Build and support visualizations using Amazon QuickSight
- Ensure data quality, performance, and reliability across all pipelines

Required Skills
- Strong hands-on experience with PySpark for large-scale data processing
- Deep knowledge of Airflow DAGs, operators, sensors, and CI/CD integration
- Expertise in AWS Glue (ETL jobs, crawlers, Glue Studio, Glue Job Bookmarks)
- Experience with Lake Formation permissions, governance, and data lakes
- Familiarity with the Glue Data Catalog for metadata management
- Ability to build dashboards in Amazon QuickSight
- Understanding of data modeling, partitioning, and performance optimization

Nice to Have
- Experience with S3, Athena, Redshift, or EMR
- Knowledge of Python-based automation and testing
- Exposure to cloud-native DevOps (IaC, Terraform/CloudFormation)
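For candidates unfamiliar with the "partitioning" skill called out above, here is a minimal sketch of the idea: a small, hypothetical Python helper that builds the Hive-style year=/month=/day= S3 key layout that AWS Glue crawlers and Athena recognize for partition pruning. The bucket, table, and file names are invented for illustration.

```python
from datetime import date

# Hypothetical helper: build a Hive-style partitioned S3 key, the layout
# that Glue crawlers register in the Data Catalog and that Athena uses to
# skip irrelevant prefixes (partition pruning) at query time.
def partition_key(bucket: str, table: str, event_date: date, filename: str) -> str:
    return (
        f"s3://{bucket}/{table}/"
        f"year={event_date.year:04d}/"
        f"month={event_date.month:02d}/"
        f"day={event_date.day:02d}/"
        f"{filename}"
    )

key = partition_key("analytics-lake", "orders", date(2026, 3, 10), "part-0000.parquet")
print(key)
# s3://analytics-lake/orders/year=2026/month=03/day=10/part-0000.parquet
```

Partitioning on the column that queries most often filter by (here, the event date) is usually the single biggest performance lever in a Glue/Athena data lake, since unmatched prefixes are never read at all.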

