Job Title:
Senior Web Scraping Engineer
Company: Sasvat Infotech
Location: New Delhi, Delhi
Created: 2026-05-12
Job Type: Full Time
Job Description:
Experience: 4 to 7 years
Location: Vadodara / Ahmedabad or Remote
Job Type: Full Time
Work Hours: 1:30 PM IST – 10:00 PM IST (US Eastern overlap preferred)

Company Description
Sasvat Infotech specializes in high-end application development, offering secure, scalable, and feature-rich solutions. Our applications are designed to enhance user experience and distinctly represent your brand. We are committed to delivering responsive, functional digital solutions tailored to client needs.

Role Description
We are hiring a Senior Web Scraping Engineer to help us migrate and rebuild a large-scale production crawling ecosystem. We are accelerating and modernizing an existing distributed crawling platform that must survive blocking, scale horizontally on Azure, and deliver clean, reliable data at high throughput.

We need an engineer who treats spiders as distributed systems, not scripts: someone who understands anti-blocking, system design, observability, and production stability. This role is about making crawlers stable at scale, not just making them work once.

Key Responsibilities
- Rebuild and migrate existing crawling systems into a scalable, production-grade architecture.
- Design and develop industrial-grade spiders using Python and Scrapy.
- Integrate Playwright for JS-heavy, dynamic, and protected environments.
- Engineer advanced unblocking strategies, including session lifecycle control, traffic shaping and throttling, fingerprint consistency, a structured retry taxonomy, and stateful browser flows when required.
- Design crawlers that are stateless wherever possible, queue-driven, and horizontally scalable on Azure.
- Optimize Scrapy internals, including concurrency, middleware, pipelines, and scheduling.
- Deploy and scale crawlers using Azure containers and cloud-native infrastructure.
- Own system reliability, including structured logging, metrics collection, failure classification, and observability/monitoring.
- Ensure data quality, validation, and structured output pipelines.
- Troubleshoot blocking, performance bottlenecks, and scaling limitations.
- Contribute through disciplined GitHub PR workflows and maintain clean, extensible code.
- Write code that another senior engineer can extend without rewriting.

Required Skills
- 4–7 years of hands-on experience in web scraping and crawler engineering.
- Strong production-level Python expertise.
- Deep understanding of Scrapy internals: concurrency, middleware, throttling, and pipelines.
- Hands-on production experience with Playwright.
- Strong knowledge of the HTTP protocol, sessions, cookies, headers, and the request lifecycle.
- Proven experience handling bot detection and anti-scraping mechanisms.
- Experience designing systems that balance throughput, stealth, and cost.
- Experience deploying and scaling systems on Azure (containers, scaling, monitoring).
- Experience with SQL and/or NoSQL data storage.
- Strong debugging mindset and systems-thinking approach.
- Experience with Git and structured PR/code review workflows.

Good to Have
- Experience migrating legacy crawling systems to a distributed cloud architecture.
- Exposure to proxy orchestration and IP rotation strategies.
- Experience designing distributed crawler clusters.
- CI/CD experience for crawler deployment pipelines.
- Familiarity with observability tools and monitoring frameworks.
- Experience working on large-scale data migration or platform modernization projects.

Interested candidates are invited to send their resumes to