Job Title: Data Engineer

Company: Confidential

Location: India

Created: 2025-11-18

Job Type: Full Time

Job Description:

Azure Data Engineer

Locations: Bengaluru, Chennai, Delhi, Mumbai, Pune, Hyderabad
Experience: 10 years

Expertise:
· Good knowledge of the Databricks Lakehouse and Azure Data Lake concepts
· Knowledge of Databricks Delta concepts, including Delta Live Tables (DLT)
· Strong hands-on experience in ELT pipeline development using Azure Data Factory, Databricks Auto Loader, notebook scripting, and the Azure Synapse Copy activity and Data Flow task
· Strong knowledge of metadata-driven data pipelines, metadata management, and dynamic logic (see the second sketch below)
· In-depth knowledge of data storage solutions, including Azure Data Lake Storage (ADLS) and Azure Synapse serverless SQL pool
· Experience with data transformation using Spark and SQL technologies
· Solid understanding of design patterns and best practices for the cloud stack
· Experience with code management and version control using Git or similar tools
· Strong problem-solving and debugging skills in ETL workflows and data pipelines
· Strong understanding of Azure Databricks and Azure Synapse internals, features, and capabilities
· Knowledge of Azure DevOps and the continuous integration and deployment (CI/CD) process
· Knowledge of data quality and data profiling techniques, with experience in data validation and data cleansing

Duties:
· Conducting technical sessions, design reviews, code reviews, and demos of pipelines and their functionality
· Developing technical specifications for data pipelines and workflows and obtaining sign-off from the architect and Stryker leads
· Developing, deploying, and maintaining workflows and data pipelines using Azure Databricks and Azure Synapse
· Developing pipelines/notebooks for Delta Live Tables (DLT), Databricks Auto Loader, notebook scripting, and the Azure Synapse Copy activity and Data Flow task (see the first sketch after this list)
· Collaborating with data architects, data analysts, and other stakeholders to design and implement ETL solutions that meet business requirements
· Writing efficient, high-performing ETL code using PySpark and SQL technologies
· Building and testing data pipelines using Azure Databricks and Azure Synapse
· Ensuring the accuracy, completeness, and timeliness of data being processed and integrated
· Troubleshooting and resolving issues related to data pipelines and notebooks
· Benchmarking the performance of data ingestion and data flow pipelines/notebooks and ensuring consistency
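For illustration only: a minimal sketch of the kind of DLT table fed by Databricks Auto Loader that the duties above describe. It assumes a JSON landing zone in ADLS; the table name, storage paths, and schema location are hypothetical placeholders, and the code runs only inside a Databricks Delta Live Tables pipeline, where the dlt module and a spark session are provided.

```python
# Sketch only: a DLT table that ingests JSON files incrementally with
# Auto Loader. All paths and names are hypothetical placeholders; this
# code executes only inside a Databricks DLT pipeline.
import dlt
from pyspark.sql import functions as F

LANDING = "abfss://lake@mystorage.dfs.core.windows.net/raw/orders/"      # hypothetical
SCHEMAS = "abfss://lake@mystorage.dfs.core.windows.net/_schemas/orders"  # hypothetical

@dlt.table(
    name="raw_orders",
    comment="Order files ingested incrementally from ADLS via Auto Loader",
)
def raw_orders():
    return (
        spark.readStream.format("cloudFiles")              # Auto Loader source
        .option("cloudFiles.format", "json")               # format of incoming files
        .option("cloudFiles.schemaLocation", SCHEMAS)      # tracks the inferred schema
        .load(LANDING)
        .withColumn("ingested_at", F.current_timestamp())  # audit column
    )
```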

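The expertise list also calls out metadata-driven pipelines together with data validation and cleansing. Below is a minimal PySpark sketch of one common shape for that pattern, assuming a Spark environment with Delta Lake enabled and an existing "silver" database; every path, table name, and rule in it is a made-up placeholder, not something taken from the posting.

```python
# Sketch only: a metadata-driven batch pipeline in PySpark. A small config
# list drives which sources are loaded, cleansed, and written to Delta.
# All paths, table names, and rules below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("metadata_driven_elt").getOrCreate()

# Hypothetical metadata: one entry per dataset to process. In practice this
# would typically live in a control table rather than in code.
pipeline_config = [
    {"source_path": "/mnt/raw/customers", "fmt": "parquet",
     "key_cols": ["customer_id"], "target": "silver.customers"},
    {"source_path": "/mnt/raw/orders", "fmt": "json",
     "key_cols": ["order_id"], "target": "silver.orders"},
]

for cfg in pipeline_config:
    df = spark.read.format(cfg["fmt"]).load(cfg["source_path"])

    # Basic validation and cleansing: drop rows missing a key column,
    # deduplicate on the keys, and stamp a processing timestamp.
    cleansed = (
        df.dropna(subset=cfg["key_cols"])
          .dropDuplicates(cfg["key_cols"])
          .withColumn("processed_at", F.current_timestamp())
    )

    # Idempotent full overwrite into the Delta target table.
    cleansed.write.format("delta").mode("overwrite").saveAsTable(cfg["target"])
```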