Job Title:
Cerebry — GenAI Implementation Engineer (AI Growth Lead)
Company: Cerebry
Location: Kurnool, Andhra Pradesh
Created: 2025-10-21
Job Type: Full Time
Job Description:
Mission
Transform Cerebry Research's designs into production-grade GenAI features: retrieval-grounded, safe, observable, and ready for seamless product rollout. Architect, code, evaluate, and package GenAI services that power Cerebry end-to-end.

Why this is exciting (Ownership-Forward)
- Founder-mindset equity. We emphasize meaningful ownership from day one.
- Upside compounds with impact. Initial grants are designed for real participation in value creation, with refresh opportunities tied to scope and milestones.
- Transparent offers. We share the full comp picture (salary, equity targets, vesting cadence, strike/valuation context) during the process.
- Long-term alignment. Packages are crafted for builders who want to grow the platform and their stake as it scales.

What you'll build
- Retrieval & data grounding: connectors for warehouses/blobs/APIs; schema validation and PII-aware pipelines; chunking/embeddings; hybrid search with rerankers; multi-tenant index management.
- Orchestration & reasoning: function/tool calling with structured outputs; controller logic for agent workflows; context/prompt management with citations and provenance.
- Evaluation & observability: gold sets + LLM-as-judge; regression suites in CI; dataset/version tracking; traces with token/latency/cost attribution.
- Safety & governance: input/output filtering, policy tests, prompt hardening, auditable decisions.
- Performance & efficiency: streaming, caching, prompt compression, batching; adaptive routing across models/providers; fallback and circuit-breaker strategies.
- Product-ready packaging: versioned APIs/SDKs/CLIs, Helm/Terraform, config schemas, feature flags, progressive delivery playbooks.
(A minimal code sketch of the retrieval-plus-tool-calling pattern appears below, after the success metrics.)

Outcomes you'll drive
- Quality: higher factuality, task success, and user trust across domains.
- Speed: rapid time-to-value via templates, IaC, and repeatable rollout paths.
- Unit economics: measurable gains in latency and token efficiency at scale.
- Reliability: clear SLOs, rich telemetry, and smooth, regression-free releases.
- Reusability: template repos, connectors, and platform components adopted across product teams.

How you'll work
- Collaborate asynchronously with Research, Product, and Infra/SRE.
- Share designs via concise docs and PRs; ship behind flags; measure, iterate, and document.
- Enable product teams through well-factored packages, SDKs, and runbooks.

Tech you'll use
- LLMs & providers: OpenAI, Anthropic, Google, Azure OpenAI, AWS Bedrock; targeted OSS where it fits.
- Orchestration/evals: LangChain/LlamaIndex or lightweight custom layers; test/eval harnesses.
- Retrieval: pgvector/FAISS/Pinecone/Weaviate; hybrid search + rerankers.
- Services & data: Python (primary), TypeScript; FastAPI/Flask/Express; Postgres/BigQuery; Redis; queues.
- Ops: Docker, CI/CD, Terraform/CDK, metrics/logs/traces; deep experience in at least one of AWS/Azure/GCP.

What you bring
- A track record of shipping and operating GenAI/ML-backed applications in production.
- Strong Python, solid SQL, and systems-design skills (concurrency, caching, queues, backpressure).
- Hands-on RAG experience (indexing quality, retrieval/reranking) and function/tool-use patterns.
- Experience designing eval pipelines and using telemetry to guide improvements.
- Clear, concise technical writing (design docs, runbooks, PRs).

Success metrics
- Evaluation scores (task success, factuality) trending upward
- Latency and token-cost improvements per feature
- SLO attainment and incident trends
- Adoption of templates/connectors/IaC across product teams
- Clarity and usage of documentation and recorded walkthroughs
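To make the day-to-day concrete, below is a minimal, dependency-free Python sketch of the retrieval-plus-tool-calling flow described under "What you'll build". It is illustrative only: the toy corpus, the blended scoring, and the lookup_student_mastery tool are hypothetical stand-ins for the warehouse connectors, hybrid search with rerankers, and provider function-calling SDKs named above.

"""
Minimal sketch of a retrieval-grounded, tool-calling request flow.
Illustrative only: corpus, scoring, and the tool registry are toy
stand-ins for the connectors, hybrid search, and provider SDKs
described in this posting.
"""
from dataclasses import dataclass
from collections import Counter
from typing import Callable
import json
import math
import re


@dataclass
class Chunk:
    doc_id: str
    text: str


def _tokens(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def lexical_score(query: str, chunk: Chunk) -> float:
    """Keyword overlap, standing in for BM25."""
    q, c = _tokens(query), _tokens(chunk.text)
    return sum((q & c).values()) / (1 + sum(q.values()))


def vector_score(query: str, chunk: Chunk) -> float:
    """Cosine over bag-of-words counts, standing in for embeddings."""
    q, c = _tokens(query), _tokens(chunk.text)
    dot = sum(q[t] * c[t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in c.values()))
    return dot / norm if norm else 0.0


def hybrid_retrieve(query: str, corpus: list[Chunk], k: int = 3) -> list[Chunk]:
    """Blend lexical and vector scores; a reranker would reorder the top-k."""
    scored = sorted(
        corpus,
        key=lambda ch: 0.5 * lexical_score(query, ch) + 0.5 * vector_score(query, ch),
        reverse=True,
    )
    return scored[:k]


# Structured tool calling: a registry plus validation of model-produced calls.
TOOLS: dict[str, Callable[..., dict]] = {}


def tool(fn: Callable[..., dict]) -> Callable[..., dict]:
    TOOLS[fn.__name__] = fn
    return fn


@tool
def lookup_student_mastery(student_id: str, topic: str) -> dict:
    """Hypothetical tool; a real implementation would query Postgres/BigQuery."""
    return {"student_id": student_id, "topic": topic, "mastery": 0.72}


def call_tool(tool_call_json: str) -> dict:
    """Validate a model-produced structured tool call before executing it."""
    call = json.loads(tool_call_json)
    name, args = call["name"], call.get("arguments", {})
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name](**args)


def answer(query: str, corpus: list[Chunk]) -> dict:
    """Assemble a grounded payload with citations; in a real service an LLM
    call would sit between retrieval and the response."""
    hits = hybrid_retrieve(query, corpus)
    return {
        "query": query,
        "citations": [h.doc_id for h in hits],
        "context": [h.text for h in hits],
    }


if __name__ == "__main__":
    corpus = [
        Chunk("doc-1", "Fraction addition requires a common denominator."),
        Chunk("doc-2", "Photosynthesis converts light energy into chemical energy."),
        Chunk("doc-3", "To add fractions, rewrite them over the least common denominator."),
    ]
    print(json.dumps(answer("how do I add fractions", corpus), indent=2))
    print(call_tool('{"name": "lookup_student_mastery", "arguments": {"student_id": "s1", "topic": "fractions"}}'))

In the real service, the blended retrieval would sit behind pgvector/FAISS (or Pinecone/Weaviate) with a reranker on top, and the tool registry would be driven by the providers' structured function-calling schemas rather than hand-parsed JSON.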
Hiring process
- Focused coding exercise (2–3h): ingestion → retrieval → tool-calling endpoint with tests, traces, and evals
- Systems design (60m): multi-tenant GenAI service, reliability, and rollout strategy
- GenAI deep dive (45m): RAG, guardrails, eval design, and cost/latency tradeoffs
- Docs review (30m): discuss a short design doc or runbook you've written (or one from the exercise)
- Founder conversation (30m)

Apply
Share links to code (GitHub/PRs/gists) or architecture docs you authored, plus a brief note on a GenAI system you built: the problem, approach, metrics, and improvements over time.

Email: