Job Title:
Senior Machine Learning Engineer
Company: Philodesign Technologies Inc
Location: Bhubaneswar, Odisha
Created: 2026-01-09
Job Type: Full Time
Job Description:
Job Title: MLOps Engineer (Databricks)
Experience: 4–6 Years
Work Mode: Remote
Budget: ₹1 LPM

Job Overview
We are seeking an experienced MLOps Engineer with strong hands-on expertise in the Databricks ecosystem to join our Data & AI Engineering team. The selected candidate will be responsible for building, deploying, operationalizing, and monitoring machine learning models in production environments, working closely with cross-functional teams such as Data Science, Data Engineering, and DevOps. This role requires deep technical competence in ML lifecycle management, model observability, CI/CD, cloud-native deployment, and modern data tooling.

Key Responsibilities
- Manage Databricks workspaces, jobs, workflows, Delta Lake, Unity Catalog, and MLflow.
- Implement and optimize end-to-end MLOps workflows, including model training, deployment, monitoring, and retraining.
- Develop scalable ML pipelines using Python, PySpark, SQL, and cloud-native services.
- Deploy ML models on AWS (SageMaker preferred) with Docker/Kubernetes-based orchestration.
- Build and manage CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
- Design infrastructure with Terraform for reproducibility and scalability.
- Configure monitoring for model performance, drift, and data quality using Databricks Lakehouse Monitoring.
- Use the Databricks Feature Store and hyperparameter tuning frameworks (Optuna, Ray Tune, etc.).
- Document ML processes and collaborate with Data Scientists, ML Engineers, and DevOps teams.

Required Skill Set (Must-Have)
- Databricks (core expertise)
- MLflow for experiment tracking and model lifecycle management
- Python (pandas, scikit-learn, PyTorch/TensorFlow)
- PySpark
- End-to-end MLOps lifecycle experience
- Cloud platform: AWS (SageMaker experience preferred)
- Containerization & orchestration: Docker / Kubernetes
- CI/CD: Jenkins, GitHub Actions, GitLab CI

Preferred Skills (Good to Have)
- Terraform or similar Infrastructure as Code tools
- Distributed training and hyperparameter tuning frameworks (Horovod, Optuna, Ray Tune)
- Model explainability tools (SHAP, LIME)
- Knowledge of multi-cloud environments (Azure / GCP)

Eligibility Criteria
- Minimum 4–6 years of relevant industry experience
- Hands-on experience with production-grade ML deployments
- Strong problem-solving and analytical skills
- Ability to work independently in a remote setup

How to Apply
Send your resume to: priyanka.jadhav@