Role Description
As a Middle ML Engineer at Provectus, you will design, build, and deploy production ML solutions for our clients, working independently on most tasks while growing toward senior technical ownership. You'll use AI coding tools daily, mentor junior engineers, and contribute to Provectus's internal AI toolkit.
What You'll Do:
- Build & Ship ML (55%)
  - Design and deliver ML pipelines from experimentation to production;
  - Build and optimize models: supervised, unsupervised, and generative AI;
  - Write clean, tested, modular Python code;
  - Deploy and monitor models; track performance and prevent drift;
  - Contribute to LLM applications: RAG systems and agent workflows;
  - Use AI coding tools on every task to move faster and write better code.
- Agentic & AI-Assisted Engineering (20%)
  - Use Claude Code or similar AI tools to deliver client projects;
  - Build with agent frameworks (Bedrock AgentCore, Strands, CrewAI, or similar);
  - Integrate or build MCP servers for internal and client use;
  - Contribute features, bug fixes, or docs to the Provectus AI toolkit.
- Collaborate & Mentor (15%)
  - Mentor junior engineers and give actionable code review feedback;
  - Work closely with DevOps, Data Engineering, and Solutions Architects;
  - Share knowledge through docs, presentations, or internal workshops.
- Learn & Innovate (10%)
  - Stay current with ML research, GenAI, and agentic frameworks;
  - Propose process improvements and reusable ML accelerators;
  - Participate in architectural design and trade-off discussions.
Qualifications
- Solid grasp of supervised/unsupervised ML: algorithms, evaluation, trade-offs;
- Deep learning hands-on experience: CNNs, RNNs, Transformers (training and fine-tuning);
- Depth in at least one domain: NLP, Computer Vision, Recommendation, or Time Series.
- Experience building LLM apps with OpenAI, Anthropic, or Hugging Face APIs;
- Hands-on RAG design: chunking, embedding, retrieval, generation;
- Familiarity with vector databases (OpenSearch, Pinecone, Chroma, FAISS);
- Understanding of prompt engineering and LLM evaluation.
- Proficient with AI coding tools (Claude Code, Cursor, Copilot, etc.) beyond autocomplete;
- Experience building tool-using, stateful agents with an orchestration framework;
- Understanding of Model Context Protocol (MCP): able to consume or build MCP servers;
- Can write technical specs for AI execution and review/correct AI-generated output;
- Aware of agent monitoring, evaluation, and cost optimization in production.
- Solid AWS: SageMaker, Lambda, S3, ECR, ECS, API Gateway;
- Familiarity with Amazon Bedrock (model invocation, Knowledge Bases, Agents);
- Basic awareness of Infrastructure as Code (Terraform or CloudFormation).
- Production ML deployment experience;
- Experiment tracking with MLflow, W&B, or similar;
- CI/CD pipelines for ML; model monitoring and drift detection;
- Advanced Python (async/await, OOP, packaging); strong pandas, NumPy, SQL;
- Docker for containerized ML workloads.
- 1–3 years of hands-on ML engineering experience;
- At least one ML model deployed to production (or near-production);
- Team-based or client-facing project experience;
- Demonstrated use of AI-assisted development tools;
- Education: Bachelor's/Master's in CS, Data Science, Math, or equivalent practical experience.
Requirements
- Strong problem-solver: breaks complexity into testable pieces;
- Clear communicator: written docs, PRs, and explanations to non-technical stakeholders;
- Fluent English (B2+);
- Proactive: raises blockers early and comes with proposed solutions;
- Collaborative mentor who helps without creating dependency.
Benefits
- Competitive salary based on competencies and market rates;
- Premium AI tooling: Claude Code, Cursor, and the Provectus AI toolkit;
- Mentorship from Senior ML Engineers and Tech Leads;
- Clear growth path: Mid-Level → Senior ML Engineer → Tech Lead;
- Learning budget for courses, certifications, and conferences;
- Remote-first culture; work on projects across LATAM, North America, and Europe;
- Health benefits.