Job Title:
Cloud Data Engineer with PL/SQL, SQL, AWS, Databricks - 5+ Years - Immediate Joiners Only
Company: Blue Cloud Softech Solutions Limited
Location: Bareilly, Uttar Pradesh
Created: 2025-08-23
Job Type: Full Time
Job Description:
About the Company
At BCSS we strongly believe that data and analytics are strategic drivers of future success. We are building a world-class advanced analytics team that will solve some of our most complex strategic problems and deliver top-line growth and operational efficiencies across the business. The Analytics team at BCSS is part of the organization and is responsible for driving organic growth by leveraging big data and advanced analytics. The team reports to the VP and Chief Data Officer at BCSS, works closely with the SVP of Corporate Strategy, and interacts regularly with the company's C-suite.

About the Role
We are on an exciting journey to build and scale our advanced analytics practice. BCSS is looking for a Cloud Data Engineer who has experience building data products using Databricks and related technologies. In this position you will apply your skills to make the existing cloud data platform more scalable, reliable, and cost-efficient. You will also work on additional projects that leverage the existing architecture where possible and adopt newer technologies where needed.

Responsibilities
> Analyze and understand existing data warehouse implementations to support migration and consolidation efforts.
> Reverse-engineer legacy stored procedures (PL/SQL, SQL) and translate their business logic into scalable Spark SQL code within Databricks notebooks.
> Design and develop data lake solutions on AWS using S3 and a Delta Lake architecture, leveraging Databricks for processing and transformation.
> Build and maintain robust data pipelines using ETL tools, with ingestion into S3 and processing in Databricks.
> Collaborate with data architects to implement ingestion and transformation frameworks aligned with enterprise standards.
> Evaluate and optimize data models (star, snowflake, flattened) for performance and scalability on the new platform.
> Document ETL processes, data flows, and transformation logic to ensure transparency and maintainability.
> Perform foundational data administration tasks, including job scheduling, error troubleshooting, performance tuning, and backup coordination.
> Work closely with cross-functional teams to ensure a smooth transition and integration of data sources into the unified platform.
> Participate in Agile ceremonies and contribute to sprint planning, retrospectives, and backlog grooming.
> Triage, debug, and fix technical issues related to data lakes.
> Maintain and manage code repositories such as Git.

Qualifications
> Bachelor's degree in Computer Science, Information Technology, Data Engineering, or a related field.

Required Skills
> 5+ years of experience working with Databricks, including Spark SQL and Delta Lake implementations.
> 3+ years of experience designing and implementing data lake architectures on Databricks.
> Strong SQL and PL/SQL skills, with the ability to interpret and refactor legacy stored procedures.
> Hands-on experience with data modeling and warehouse design principles.
> Proficiency in at least one programming language (Python, Scala, or Java).
> Experience working in Agile environments and contributing to iterative development cycles.

Preferred Skills
> A Databricks cloud certification is a big plus.
> Exposure to enterprise data governance and metadata management practices.
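To make the core responsibility concrete, the sketch below illustrates what "translating legacy stored-procedure logic into scalable Spark SQL" typically means: replacing a row-by-row cursor loop with a single set-based statement. This is a minimal illustration, not code from the role itself; the table `orders` and its columns are hypothetical, and the stdlib `sqlite3` module stands in for the database engine so the example runs anywhere. The set-based query at the end is standard SQL that could be passed unchanged to `spark.sql(...)` in a Databricks notebook.

```python
import sqlite3

# Hypothetical legacy pattern: a PL/SQL procedure opens a cursor over the
# orders table and accumulates a per-customer total row by row. The
# refactor expresses the same logic as one GROUP BY, which a distributed
# engine like Spark can parallelize.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("c1", 10.0), ("c1", 5.0), ("c2", 7.5)],
)

# 1) Direct translation of the cursor loop (what the legacy procedure does).
totals = {}
for customer_id, amount in conn.execute("SELECT customer_id, amount FROM orders"):
    totals[customer_id] = totals.get(customer_id, 0.0) + amount

# 2) Set-based rewrite. In Databricks this string would go to spark.sql(...)
#    and the result would be written to a Delta table rather than fetched.
SET_BASED_SQL = """
    SELECT customer_id, SUM(amount) AS total_amount
    FROM orders
    GROUP BY customer_id
"""
set_based = dict(conn.execute(SET_BASED_SQL))

# Both formulations agree on the aggregates.
assert totals == set_based
print(set_based)
```

The key design point when reverse-engineering such procedures is separating the business rule (per-customer totals) from the procedural plumbing (cursors, loop variables, intermediate commits); only the former needs to survive the migration.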