Machine Learning

Discover the best of tech. Machine learning recruitment for next-gen breakthroughs.

There's a world-class candidate behind every innovation. We specialise in connecting them with the startups and scaleups shaping the future of machine learning. 

With several decades of collective experience in tech recruitment, our ML consultants have developed the knowledge, networks, and industry insight needed to source and secure game-changing talent. We're proud to partner with the world's machine learning innovators, from startups to tech giants across the UK, Ireland, the US, Switzerland, and Germany.

Whether you're building bleeding-edge multimodal AI systems or you're hoping to find a meaningful new career in Machine Learning, DeepRec.ai’s ML recruiters have the means to support you. 

Hoping to hire? 

Chat with an ML Consultant

Looking for a fresh start?

Discover the best ML jobs

The roles we cover in Machine Learning include:

  • Senior Machine Learning Engineer

  • Machine Learning Engineer

  • Head of AI

  • Head of Deep Learning

  • Head of Machine Learning

  • Deep Learning Engineer

  • Head of Product - AI

  • Product Owner - AI

  • Project Manager - AI

  • Senior Deep Learning Engineer

  • MLOps Developer

  • MLOps Engineer

  • Machine Learning Ops Engineer

  • Kubeflow / MLflow

  • Machine Learning Researcher

  • Machine Learning Team Lead

MACHINE LEARNING CONSULTANTS

Anthony Kelly

Co-Founder & MD EU/UK

Hayley Killengrey

Co-Founder & MD USA

Nathan Wills

Senior Consultant | Switzerland

Sam Oliver

Senior Consultant | Contract DACH

Sam Warwick

Senior Consultant – Geospatial, Earth, & Defence Technology

Jacob Graham

Senior Consultant

LATEST JOBS

Boston, Massachusetts, United States
Senior MLOps Engineer
Senior MLOps Engineer – GPU Infrastructure & Inference

Our client is building AI-native systems at the intersection of machine learning, scientific computing, and materials innovation, applying large-scale ML to solve complex, real-world problems with global impact. They are seeking a Senior MLOps Engineer to own and operate a production-grade GPU platform supporting large-scale model training and low-latency inference for computational chemistry and LLM workloads serving thousands of users.

This role holds end-to-end responsibility for the ML platform, spanning Kubernetes-based GPU orchestration, cloud infrastructure and Infrastructure-as-Code, ML pipelines, CI/CD, observability, reliability, and disaster recovery. You will design and operate hardened, multi-tenant ML systems on AWS, build and optimize high-performance inference stacks using vLLM and TensorRT-based runtimes, and drive measurable improvements in latency, throughput, and GPU utilization through batching, caching, quantization, and kernel-level optimizations. You will also establish SLO-driven operational standards, robust monitoring and alerting, on-call readiness, and repeatable release and rollback workflows.

The position requires deep hands-on experience running GPU workloads on Kubernetes, including scheduling, autoscaling, multi-tenancy, and debugging GPU runtime issues, alongside strong Terraform and cloud-native fundamentals. You will work closely with research scientists and product teams to reliably productionize models, support distributed training and inference across multi-node GPU clusters, and ensure high-throughput data pipelines for large scientific datasets.

Ideal candidates bring 5+ years of experience in MLOps, platform, or infrastructure engineering, strong proficiency in Python and modern DevOps practices, and a proven track record of operating scalable, high-performance ML systems in production. Experience supporting scientific, computational chemistry, or other physics-based workloads is highly desirable, as is prior exposure to large-scale LLM serving, distributed training frameworks, and regulated production environments.
Sam Warwick
Spain
MLOps Engineer
MLOps Engineer
Location: Barcelona (Hybrid)
Contract: Fixed-term until June 2026
Salary: €55,000 base pro rata
Bonuses: €3,000 sign-on + €500/month retention bonus
Relocation: €2,000 package available
Eligibility: EU work authorisation required

The opportunity
We're hiring an MLOps Engineer to join a fast-scaling European deep-tech company working at the forefront of AI model efficiency and deployment. This team is solving a very real problem: how to take large, cutting-edge language models and run them reliably, efficiently, and cost-effectively in production. Their technology is already live with major enterprise customers and is reshaping how AI systems are deployed at scale. This is a hands-on engineering role with real ownership. You'll sit close to both research and production, helping turn advanced ML into systems that actually work in the real world.

What you'll be working on
  • Building and operating end-to-end ML and LLM pipelines, from data ingestion and training through to deployment and monitoring
  • Deploying production-grade AI systems for large enterprise customers
  • Designing robust automation using CI/CD, GitOps, Docker, and Kubernetes
  • Monitoring model performance, drift, latency, and cost, and improving reliability over time
  • Working with distributed training and serving setups, including model and data parallelism
  • Collaborating closely with ML researchers, product teams, and DevOps engineers to optimise performance and infrastructure usage
  • Managing and scaling cloud infrastructure (primarily Azure, with some AWS exposure)

Tech you'll be exposed to
  • Python for ML and backend systems
  • Cloud platforms: Azure (AKS, ML services, CycleCloud, Managed Lustre), plus AWS
  • Containerisation and orchestration: Docker, Kubernetes
  • Automation and DevOps: CI/CD pipelines, GitOps
  • Distributed ML tooling: Ray, DeepSpeed, FSDP, Megatron-LM
  • Large language models such as GPT-style models, Llama, Mistral, and similar

What they're looking for
  • 3+ years' experience in MLOps, ML engineering, or LLM-focused roles
  • Strong experience running ML workloads in public cloud environments
  • Hands-on background with production ML pipelines and monitoring
  • Solid understanding of distributed training, parallelism, and optimisation
  • Comfortable working across infrastructure, ML, and engineering teams
  • Strong English communication skills; Spanish is a plus but not required

Nice to have
  • Experience with mixture-of-experts models
  • LLM observability, inference optimisation, or API management
  • Exposure to hybrid or multi-cloud environments
  • Real-time or streaming ML systems

Why this role stands out
  • Work on AI systems that are already in production with global customers
  • Tackle real infrastructure and scaling challenges, not toy problems
  • Competitive salary plus meaningful bonuses
  • Hybrid setup in Spain with relocation support
  • Join a well-funded, high-growth deep-tech environment with long-term impact
Jacob Graham
Frankfurt am Main, Hessen, Germany
Senior / Principal Research Scientist – Core AI Algorithms (Autonomous Systems)
Senior / Principal Research Scientist – Core AI Algorithms (Autonomous Systems)
Location: Germany (remote-first within Germany, on-site in Frankfurt every 2–4 weeks)

About the Role
We are partnering with a global automotive OEM building a core AI research and algorithm team responsible for the foundational intelligence behind next-generation automated driving systems. This role is research-driven and sits upstream of product teams. The focus is on inventing, validating, and transitioning new perception and world-modeling algorithms from research into production-ready systems. The team operates similarly to a big-tech research lab, but with a clear path to real-world deployment.

Research Focus Areas
Depending on background and interest, you may work on topics such as:
  • 3D scene understanding and world modeling
  • Occupancy, motion forecasting, and dynamic scene reconstruction
  • Multi-sensor perception (camera, LiDAR, radar)
  • Representation learning for autonomous systems (BEV, implicit / generative 3D, Gaussian models, foundation models)
  • Robustness, generalization, and long-tail perception
  • Learning under weak, sparse, or noisy supervision
  • Bridging offline training with real-world deployment constraints

Key Responsibilities
  • Conduct original research in perception and autonomous systems with clear technical ownership
  • Design and prototype novel algorithms and learning frameworks
  • Publish at or contribute toward top-tier conferences and journals (e.g., CVPR, ICCV, ECCV, NeurIPS, ICRA, IROS)
  • Translate research ideas into scalable, production-oriented implementations
  • Collaborate with applied ML, systems, and hardware teams to ensure feasibility
  • Shape the long-term technical roadmap of the core AI organization
  • Mentor junior researchers and engineers where appropriate

Required Background
  • PhD (or equivalent research experience) in Computer Vision, Machine Learning, Robotics, or a related field
  • Strong publication record at top-tier conferences or journals
  • Experience conducting research within an industrial or applied setting
  • Excellent understanding of modern deep learning methods and 3D perception
  • Strong programming skills in Python and/or C++
  • Ability to work across the full spectrum from theory to implementation

Strongly Preferred
  • Research experience in autonomous driving, robotics, or embodied AI
  • Work on 3D perception, tracking, SLAM, or world models
  • Experience at big-tech research labs, industrial AI labs, or advanced OEM R&D
  • Familiarity with real-world constraints such as runtime, memory, and system integration
  • Prior collaboration with product or engineering teams

What's on Offer
  • A research-first role with real influence on production systems
  • The opportunity to define core algorithms, not just incremental improvements
  • A team culture that values publications, patents, and long-term thinking
  • Remote-first working model within Germany, with regular in-person collaboration in Frankfurt
  • Competitive compensation aligned with senior / principal research profiles

Who This Role Is For
  • Researchers who want their work to ship into real vehicles
  • Industry researchers seeking greater technical ownership
  • PhD-level candidates who enjoy both publishing and building
  • Profiles combining academic depth with practical engineering maturity

Looking forward to seeing your profile!
Paddy Hobson
Remote work, England
Lead AI Developer
I am working on a Lead AI Developer role for a UK-based team delivering AI solutions into complex, non-technical environments. This is a hands-on role for someone who codes daily but also leads from the front. You would sit between senior stakeholders and delivery teams, shaping requirements, explaining trade-offs, and guiding technical direction without relying on formal authority.

What the role actually needs
You are still a builder: strong in C#, .NET, and Python, comfortable shipping production systems, deploying to cloud, and working with modern AI patterns like LLMs, RAG, and agent-based workflows. At the same time, you are confident in front of clients. You can run a requirements session, challenge vague asks, surface constraints early, and translate technical decisions into language that non-engineers trust. You have led delivery through influence: mentoring developers, setting standards, steering architecture discussions, and handling competing priorities when stakeholders want different outcomes.

What you would be doing
  • Working directly with clients to turn real-world problems into clear technical designs and delivery plans
  • Leading backlog refinement, sprint planning, and technical prioritisation
  • Building and deploying AI-enabled features across a Microsoft and Azure focused stack
  • Explaining feasibility, risk, and trade-offs in a way that helps stakeholders make decisions
  • Raising the bar for engineering quality through reviews, coaching, and example

What tends to work well here
  • People who have been the technical lead in client-facing environments
  • Developers who enjoy ambiguity and creating clarity rather than waiting for perfect specs
  • Engineers who can say no when needed, and explain why in a constructive way

If you are interested in this position, feel free to send your updated CV and we'll be in touch if this is a match.
Nathan Wills
California, United States
Senior ML Infra Engineer
Senior Machine Learning Infra Engineer | San Francisco | Competitive Salary + Equity

Our client is an early-stage AI company building foundation models for physics to enable end-to-end industrial automation, from simulation and design through optimization, validation, and production. They are assembling a small, elite, founder-led team focused on shipping real systems into production, backed by world-class investors and technical advisors.

They are hiring a Machine Learning Cloud Infrastructure Engineer to own the full ML infrastructure stack behind physics-based foundation models. Working directly with the CEO and founding team, you will build, scale, and operate production-grade ML systems used by real customers.

What you will do
  • Own distributed training and fine-tuning infrastructure across multi-GPU and multi-node clusters
  • Design and operate low-latency, highly reliable inference and model serving systems
  • Build secure fine-tuning pipelines allowing customers to adapt models to their data and workflows
  • Deliver deployments across cloud and on-prem environments, including enterprise and air-gapped setups
  • Design data pipelines for large-scale simulation and CFD datasets
  • Implement observability, monitoring, and debugging across training, serving, and data pipelines
  • Work directly with customers on deployment, integration, and scaling challenges
  • Move quickly from prototype to production infrastructure

What our client is looking for
  • 3+ years building and scaling ML infrastructure for training, fine-tuning, serving, or deployment
  • Strong experience with AWS, GCP, or Azure
  • Hands-on expertise with Kubernetes, Docker, and infrastructure-as-code
  • Experience with distributed training frameworks such as PyTorch Distributed, DeepSpeed, or Ray
  • Proven experience building production-grade inference systems
  • Strong Python skills and deep understanding of the end-to-end ML lifecycle
  • High execution velocity, strong debugging instincts, and comfort operating in ambiguity

Nice to have
  • Background in physics, simulation, or computer-aided engineering software
  • Experience deploying ML systems into enterprise or regulated environments
  • Foundation model fine-tuning infrastructure experience
  • GPU performance optimization experience (CUDA, Triton, etc.)
  • Large-scale ML data engineering and validation pipelines
  • Experience at high-growth AI startups or leading AI research labs
  • Customer-facing or forward-deployed engineering experience
  • Open-source contributions to ML infrastructure

This role suits someone who earns respect through hands-on technical contribution, thrives in intense, execution-driven environments, values deep focused work, and takes full ownership of outcomes. The company offers ownership of core infrastructure, direct collaboration with the CEO and founding team, work on high-impact AI and physics problems, competitive compensation with meaningful equity, an in-person-first culture five days a week, strong benefits, daily meals, stipends, and immigration support.
Sam Warwick
Berlin, Germany
Training Infrastructure Engineer
Training Infrastructure Engineer
Salary: €80,000 to €150,000 + equity
Location: Fully remote within Europe (CET ±2 hours)
Stage: Recently funded Series A AI startup

We are partnering with a fast-growing generative AI company building the next generation of creative tooling. Their platform generates hyper-realistic sound, speech, and music directly from video, effectively bringing silent content to life. The technology is already being used across gaming, video platforms, and creator ecosystems, with a clear ambition to become foundational infrastructure for audio-visual storytelling. Backed by top-tier venture capital and fresh Series A funding, the company is now scaling its core engineering group. This is a chance to join at a point where the technical challenges are deep, the scope is wide, and individual impact is unmistakable.

The Role
As a Training Infrastructure Engineer, you will own and evolve the full model training stack. This is a hands-on, systems-level role focused on making large-scale training fast, reliable, and efficient. You will work close to the hardware and close to the models, shaping how cutting-edge generative systems are trained and iterated.

What You Will Do
  • Design and evaluate optimal training strategies, including parallelism approaches and precision trade-offs across different model sizes and workloads
  • Profile, debug, and optimise GPU workloads at single- and multi-GPU level, using low-level tooling to understand real hardware behaviour
  • Improve the entire training pipeline end to end, from data storage and loading through distributed training, checkpointing, and logging
  • Build scalable systems for experiment tracking, model and data versioning, and training insights
  • Design, deploy, and maintain large-scale training clusters orchestrated with SLURM

What We Are Looking For
  • Proven experience optimising training and inference workloads through hands-on implementation, not just theory
  • Deep understanding of GPU memory hierarchy and compute constraints, including the gap between theoretical and practical performance
  • Strong intuition for memory-bound vs compute-bound workloads and how to optimise for each
  • Expertise in efficient attention mechanisms and how their performance characteristics change at scale

Nice to Have
  • Experience writing custom GPU kernels and integrating them into PyTorch
  • Background working with diffusion or autoregressive models
  • Familiarity with high-performance storage systems such as VAST or large-scale object storage
  • Experience managing SLURM clusters in production environments

Why This Role
  • Join at a pivotal growth stage with fresh funding and strong momentum
  • Genuine ownership and autonomy from day one, with direct influence over technical direction
  • Competitive salary and equity so you share in the upside you help create
  • Work on technology that is redefining how creators produce and experience content

If you want to operate at the intersection of deep systems engineering and frontier generative AI, this is one of the strongest opportunities in the European market right now.
Anthony Kelly
Greater London, South East, England
VLA Engineer
Deep Learning Engineer – Advanced Robotics & VLM/VLA
Location: Flexible / Remote (UK or Europe preferred)
Employment Type: Full-time

About the Company
Our client is an ambitious AI and robotics company developing next-generation humanoid systems designed to transform how intelligent automation supports industrial and everyday environments. Their mission is to advance human potential through robotics that are scalable, safe, and capable of performing complex real-world tasks. This is a rare opportunity to work at the intersection of deep learning, multimodal AI, and robotic embodiment, helping shape the foundations of a truly intelligent automation platform.

The Role
As a Deep Learning Engineer, you'll design and train large-scale models that power robotic control and perception, from foundational representation learning to behaviour cloning and reinforcement learning. You'll work across the full data-to-deployment lifecycle, experimenting with cutting-edge multimodal architectures and building robust pipelines for high-performance, real-time systems.

Key Responsibilities
  • Develop and train deep learning models for manipulation, navigation, and general policy learning.
  • Collaborate with teleoperations and simulation teams to define data collection goals and bridge the sim-to-real gap.
  • Train and fine-tune multimodal LLMs, VLMs, and VLAs, integrating diverse sensory modalities (vision, audio, proprioception, LiDAR, etc.).
  • Build scalable data pipelines for continuous ingestion, curation, weak supervision, and retraining.
  • Partner with MLOps and infrastructure teams to enable distributed training and optimize models for real-time deployment.
  • Contribute to shaping the next generation of embodied AI systems for safe, efficient automation.

About You
  • 3+ years of experience building and deploying deep learning systems (industry or research).
  • Strong proficiency in Python and PyTorch or JAX.
  • Hands-on experience with LLMs, VLMs, or generative models for image/video.
  • Deep understanding of training infrastructure (streaming datasets, checkpointing, distributed compute).
  • Strong communicator with clear experiment documentation and the ability to explain complex technical decisions.

Bonus Points
  • Experience in robotics, autonomous driving, or other embodied AI domains.
  • Background in reinforcement learning (PPO, DPO, SAC, etc.) or RL for LLMs.
  • Experience optimizing deep nets for production (latency, telemetry, on-device inference).
  • Publications at top-tier ML conferences (ICLR, NeurIPS, ICML) or significant open-source contributions.
  • Familiarity with OpenVLA, π models, or similar embodied AI frameworks.

What's on Offer
  • Competitive compensation including stock options.
  • Flexible remote-first setup with opportunities for international collaboration.
  • Work with world-class researchers and engineers building truly transformative technology.
  • A fast-paced, innovation-driven culture where ideas move quickly from concept to prototype.
Paddy Hobson
Zürich, Switzerland
Senior AI Engineer – VLAs (Vision-Language-Action) & Dexterous Manipulation
We are looking for an experienced engineer/scientist to join a cutting-edge robotics team in Zurich, focused on humanoid manipulation through state-of-the-art AI. You will contribute to developing and deploying learning-based controllers that enable robots to interact intelligently, reliably, and flexibly with complex environments. This role is deeply hands-on with real robot hardware, leveraging your expertise across vision-language-action (VLA) models, diffusion models, reinforcement learning, and dexterous manipulation.

What we're looking for
  • PhD or Master's in Robotics, AI, or related fields
  • Strong expertise in learning-based dexterous manipulation (multi-fingered grasping, task-oriented manipulation)
  • Practical experience running real-time controllers on robotic hardware
  • Deep knowledge of diffusion models, transformers, CVAEs, normalizing flows
  • Experience working with ROS, PyTorch, and Python
  • Familiarity with vision-language models (VLAs/VLMs/LLMs) for robotic planning or failure detection
  • Bonus: experience with synthetic data generation, simulation environments, teleoperation systems

Why this role?
  • Join a high-calibre team (ex-ETH / ex-NVIDIA founders) backed by significant investment (Seed + Series A secured)
  • Work at the intersection of simulation, reinforcement learning, and robotics hardware
  • Contribute to next-generation humanoid robots without reliance on traditional imitation learning paradigms
  • Significant stock offering

If you're passionate about building more intelligent robots through cutting-edge AI and want to work in a fast-moving, well-resourced environment, we'd love to hear from you.
Paddy Hobson