Machine Learning

Discover the best of tech. Machine learning recruitment for next-gen breakthroughs.

There's a world-class candidate behind every innovation. We specialise in connecting them with the startups and scaleups shaping the future of machine learning. 

With several decades of collective experience in tech recruitment, our ML consultants have developed the knowledge, networks, and industry insight needed to source and secure game-changing talent. We’re proud to partner with the world’s Machine Learning innovators, ranging from startups to tech giants across the UK, Ireland, the US, Switzerland, and Germany. 

Whether you're building bleeding-edge multimodal AI systems or looking for a meaningful new career in Machine Learning, DeepRec.ai’s ML recruiters have the means to support you.

Hoping to hire? 

Chat with an ML Consultant

Looking for a fresh start?

Discover the best ML jobs

The roles we cover in Machine Learning include:

  • Senior Machine Learning Engineer

  • Machine Learning Engineer

  • Head of AI

  • Head of Deep Learning

  • Head of Machine Learning

  • Deep Learning Engineer

  • Head of Product - AI

  • Product Owner - AI

  • Project Manager - AI

  • Senior Deep Learning Engineer

  • MLOps Developer

  • MLOps Engineer

  • Machine Learning Ops Engineer

  • Kubeflow / MLflow

  • Machine Learning Researcher

  • Machine Learning Team Lead

MACHINE LEARNING CONSULTANTS

Anthony Kelly

Co-Founder & MD EU/UK

Hayley Killengrey

Co-Founder & MD USA

Nathan Wills

Senior Consultant | Switzerland

Sam Oliver

Senior Consultant | Contract DACH

Sam Warwick

Senior Consultant | Geospatial, Earth & Defence Technology

Jacob Graham

Senior Consultant

LATEST JOBS

Frankfurt am Main, Hessen, Germany
Senior / Principal Research Scientist – Core AI Algorithms (Autonomous Systems)
Location: Germany (remote-first within Germany, on-site in Frankfurt every 2–4 weeks)

About the Role

We are partnering with a global automotive OEM building a core AI research and algorithm team responsible for the foundational intelligence behind next-generation automated driving systems.

This role is research-driven and sits upstream of product teams. The focus is on inventing, validating, and transitioning new perception and world-modeling algorithms from research into production-ready systems. The team operates similarly to a big-tech research lab, but with a clear path to real-world deployment.

Research Focus Areas

Depending on background and interest, you may work on topics such as:

  • 3D scene understanding and world modeling
  • Occupancy, motion forecasting, and dynamic scene reconstruction
  • Multi-sensor perception (camera, LiDAR, radar)
  • Representation learning for autonomous systems (BEV, implicit/generative 3D, Gaussian models, foundation models)
  • Robustness, generalization, and long-tail perception
  • Learning under weak, sparse, or noisy supervision
  • Bridging offline training with real-world deployment constraints

Key Responsibilities

  • Conduct original research in perception and autonomous systems with clear technical ownership
  • Design and prototype novel algorithms and learning frameworks
  • Publish at or contribute toward top-tier conferences and journals (e.g., CVPR, ICCV, ECCV, NeurIPS, ICRA, IROS)
  • Translate research ideas into scalable, production-oriented implementations
  • Collaborate with applied ML, systems, and hardware teams to ensure feasibility
  • Shape the long-term technical roadmap of the core AI organization
  • Mentor junior researchers and engineers where appropriate

Required Background

  • PhD (or equivalent research experience) in Computer Vision, Machine Learning, Robotics, or a related field
  • Strong publication record at top-tier conferences or journals
  • Experience conducting research within an industrial or applied setting
  • Excellent understanding of modern deep learning methods and 3D perception
  • Strong programming skills in Python and/or C++
  • Ability to work across the full spectrum from theory to implementation

Strongly Preferred

  • Research experience in autonomous driving, robotics, or embodied AI
  • Work on 3D perception, tracking, SLAM, or world models
  • Experience at big-tech research labs, industrial AI labs, or advanced OEM R&D
  • Familiarity with real-world constraints such as runtime, memory, and system integration
  • Prior collaboration with product or engineering teams

What’s on Offer

  • A research-first role with real influence on production systems
  • The opportunity to define core algorithms, not just incremental improvements
  • A team culture that values publications, patents, and long-term thinking
  • Remote-first working model within Germany, with regular in-person collaboration in Frankfurt
  • Competitive compensation aligned with senior/principal research profiles

Who This Role Is For

  • Researchers who want their work to ship into real vehicles
  • Industry researchers seeking greater technical ownership
  • PhD-level candidates who enjoy both publishing and building
  • Profiles combining academic depth with practical engineering maturity

Looking forward to seeing your profile!
Paddy Hobson
Remote work, England
Lead AI Developer
I am working on a Lead AI Developer role for a UK-based team delivering AI solutions into complex, non-technical environments. This is a hands-on role for someone who codes daily but also leads from the front. You would sit between senior stakeholders and delivery teams, shaping requirements, explaining trade-offs, and guiding technical direction without relying on formal authority.

What the role actually needs

You are still a builder: strong in C#, .NET, and Python, comfortable shipping production systems, deploying to cloud, and working with modern AI patterns like LLMs, RAG, and agent-based workflows. At the same time, you are confident in front of clients. You can run a requirements session, challenge vague asks, surface constraints early, and translate technical decisions into language that non-engineers trust. You have led delivery through influence: mentoring developers, setting standards, steering architecture discussions, and handling competing priorities when stakeholders want different outcomes.

What you would be doing

  • Working directly with clients to turn real-world problems into clear technical designs and delivery plans
  • Leading backlog refinement, sprint planning, and technical prioritisation
  • Building and deploying AI-enabled features across a Microsoft- and Azure-focused stack
  • Explaining feasibility, risk, and trade-offs in a way that helps stakeholders make decisions
  • Raising the bar for engineering quality through reviews, coaching, and example

What tends to work well here

  • People who have been the technical lead in client-facing environments
  • Developers who enjoy ambiguity and creating clarity rather than waiting for perfect specs
  • Engineers who can say no when needed, and explain why in a constructive way

If you are interested in this position, feel free to send your updated CV and we'll be in touch if this is a match.
Nathan Wills
California, United States
Senior ML Infra Engineer
Senior Machine Learning Infra Engineer | San Francisco | Competitive Salary + Equity

Our client is an early-stage AI company building foundation models for physics to enable end-to-end industrial automation, from simulation and design through optimization, validation, and production. They are assembling a small, elite, founder-led team focused on shipping real systems into production, backed by world-class investors and technical advisors.

They are hiring a Machine Learning Cloud Infrastructure Engineer to own the full ML infrastructure stack behind physics-based foundation models. Working directly with the CEO and founding team, you will build, scale, and operate production-grade ML systems used by real customers.

What you will do

  • Own distributed training and fine-tuning infrastructure across multi-GPU and multi-node clusters
  • Design and operate low-latency, highly reliable inference and model serving systems
  • Build secure fine-tuning pipelines allowing customers to adapt models to their data and workflows
  • Deliver deployments across cloud and on-prem environments, including enterprise and air-gapped setups
  • Design data pipelines for large-scale simulation and CFD datasets
  • Implement observability, monitoring, and debugging across training, serving, and data pipelines
  • Work directly with customers on deployment, integration, and scaling challenges
  • Move quickly from prototype to production infrastructure

What our client is looking for

  • 3+ years building and scaling ML infrastructure for training, fine-tuning, serving, or deployment
  • Strong experience with AWS, GCP, or Azure
  • Hands-on expertise with Kubernetes, Docker, and infrastructure-as-code
  • Experience with distributed training frameworks such as PyTorch Distributed, DeepSpeed, or Ray
  • Proven experience building production-grade inference systems
  • Strong Python skills and deep understanding of the end-to-end ML lifecycle
  • High execution velocity, strong debugging instincts, and comfort operating in ambiguity

Nice to have

  • Background in physics, simulation, or computer-aided engineering software
  • Experience deploying ML systems into enterprise or regulated environments
  • Foundation model fine-tuning infrastructure experience
  • GPU performance optimization experience (CUDA, Triton, etc.)
  • Large-scale ML data engineering and validation pipelines
  • Experience at high-growth AI startups or leading AI research labs
  • Customer-facing or forward-deployed engineering experience
  • Open-source contributions to ML infrastructure

This role suits someone who earns respect through hands-on technical contribution, thrives in intense, execution-driven environments, values deep, focused work, and takes full ownership of outcomes. The company offers ownership of core infrastructure, direct collaboration with the CEO and founding team, work on high-impact AI and physics problems, competitive compensation with meaningful equity, an in-person-first culture five days a week, strong benefits, daily meals, stipends, and immigration support.
Sam Warwick
Berlin, Germany
Training Infrastructure Engineer
Salary: €80,000 to €150,000 + equity
Location: Fully remote within Europe (CET ±2 hours)
Stage: Recently funded Series A AI startup

We are partnering with a fast-growing generative AI company building the next generation of creative tooling. Their platform generates hyper-realistic sound, speech, and music directly from video, effectively bringing silent content to life. The technology is already being used across gaming, video platforms, and creator ecosystems, with a clear ambition to become foundational infrastructure for audio-visual storytelling.

Backed by top-tier venture capital and fresh Series A funding, the company is now scaling its core engineering group. This is a chance to join at a point where the technical challenges are deep, the scope is wide, and individual impact is unmistakable.

The Role:

As a Training Infrastructure Engineer, you will own and evolve the full model training stack. This is a hands-on, systems-level role focused on making large-scale training fast, reliable, and efficient. You will work close to the hardware and close to the models, shaping how cutting-edge generative systems are trained and iterated.

What You Will Do:

  • Design and evaluate optimal training strategies, including parallelism approaches and precision trade-offs across different model sizes and workloads
  • Profile, debug, and optimise GPU workloads at single- and multi-GPU level, using low-level tooling to understand real hardware behaviour
  • Improve the entire training pipeline end to end, from data storage and loading through distributed training, checkpointing, and logging
  • Build scalable systems for experiment tracking, model and data versioning, and training insights
  • Design, deploy, and maintain large-scale training clusters orchestrated with SLURM

What We Are Looking For:

  • Proven experience optimising training and inference workloads through hands-on implementation, not just theory
  • Deep understanding of GPU memory hierarchy and compute constraints, including the gap between theoretical and practical performance
  • Strong intuition for memory-bound vs compute-bound workloads and how to optimise for each
  • Expertise in efficient attention mechanisms and how their performance characteristics change at scale

Nice to Have:

  • Experience writing custom GPU kernels and integrating them into PyTorch
  • Background working with diffusion or autoregressive models
  • Familiarity with high-performance storage systems such as VAST or large-scale object storage
  • Experience managing SLURM clusters in production environments

Why This Role:

  • Join at a pivotal growth stage with fresh funding and strong momentum
  • Genuine ownership and autonomy from day one, with direct influence over technical direction
  • Competitive salary and equity, so you share in the upside you help create
  • Work on technology that is redefining how creators produce and experience content

If you want to operate at the intersection of deep systems engineering and frontier generative AI, this is one of the strongest opportunities in the European market right now.
Anthony Kelly
Greater London, South East, England
AI Engineer - Infrastructure
AI Engineer (Infrastructure) – London – Hybrid, £85k + Stock Options

Are you ready to help world-leading enterprises deploy cutting-edge AI at scale? We’re looking for an AI Infrastructure Engineer to join a pioneering AI company that’s transforming the way businesses operate. This isn’t just AI - it’s agentic AI that can drive decisions, optimize workflows, and unlock insights that were previously impossible. Trusted by Fortune 500 clients across finance, healthcare, and enterprise, this company has grown 500% year-on-year since emerging from stealth mode in 2021, backed by over $50M from top-tier investors and visionary angels.

The Role

You’ll be the bridge between enterprise infrastructure and AI innovation. Your focus will be on deploying advanced AI solutions into complex client environments, ensuring reliability, security, and scalability. Your expertise will determine how seamlessly the technology integrates into real-world systems.

What You’ll Do

  • Collaborate with top-tier enterprise clients to design and deploy bespoke AI workflows.
  • Build robust architectures leveraging the latest AI technologies, including LLMs, RAG, MCPs, and agentic workflows.
  • Ensure enterprise-grade deployment across cloud and on-prem environments, with a focus on availability, observability, and security.
  • Translate complex business challenges into scalable, intelligent AI solutions.
  • Serve as a forward-deployed engineer, working closely with both technical teams and business stakeholders.

What You’ll Bring

  • Proven experience deploying enterprise-grade AI or machine learning solutions in cloud and/or on-premise environments.
  • Hands-on familiarity with at least one major cloud provider and strong DevOps skills.
  • Client-facing experience with the ability to design, deliver, and support production-grade AI solutions.
  • Strong solution architecture skills and a track record of driving measurable impact.
  • Bonus points for personal AI projects or agentic AI experience.

Why This Role Is Exciting

  • Work on high-visibility projects that truly change how enterprises operate.
  • Join a company backed by world-class investors and trusted by global Fortune 500 clients.
  • Be part of a team where 50% of members are PhDs, selected from over 50,000 applicants.
  • Enjoy a collaborative, hybrid work environment (office Tues & Weds, optional additional days).

Benefits

  • Competitive salary (flexible for exceptional candidates)
  • Share options and pension scheme
  • 25 days’ paid holiday plus bank holidays (carry over/sell up to 5 days)
  • Work abroad days and flexible benefits, including Health/Dental Insurance
  • Learning & development budget, tech purchase support, office snacks, team events, referral bonuses

Interview Process

  • Initial screening call
  • Hiring manager/team interview (30-45 mins)
  • Take-home challenge (3 hrs)
  • In-office interview - speak with 3 members of the team (30 mins each), then a final chat with founders

This is your chance to join a company that’s shaping the future of AI in enterprise. If you thrive in a high-impact environment where your work directly drives client success, this role is for you.
Jacob Graham
Boston, Massachusetts, United States
Machine Learning Engineer (LLM)
$170,000 - $200,000 (DOE)
Boston OR Berkeley, 2-3 days per week in-office

We’re working with a fast-growing AI company on a mission to automate complex workflows in the financial services sector, starting with insurance. Their technology leverages cutting-edge AI to simplify high-value processes, from multi-turn conversations to full workflow automation.

As an ML Engineer working on LLMs, you’ll be building and scaling advanced AI systems that power intelligent, multi-agent workflows. You’ll take ownership of designing, fine-tuning, and productionizing large language models, integrating them with backend systems, and optimizing their performance. You’ll collaborate closely with data science, DevOps, and leadership to shape the AI infrastructure that drives the company’s automation solutions.

What You’ll Do:

  • Build, fine-tune, and productionize large language model (LLM) pipelines, including PEFT, RLHF, and DPO workflows.
  • Develop APIs, data pipelines, and orchestration systems for multi-agent, multi-turn AI conversations.
  • Integrate models with backend services, including voice orchestration platforms and transcript generation.
  • Optimize model usage and efficiency, transitioning from external APIs to in-house solutions.
  • Collaborate cross-functionally with data scientists, DevOps, and leadership to deliver scalable machine learning solutions.

What We’re Looking For:

Essential Skills & Experience:

  • Strong proficiency in Python and ML frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
  • Hands-on experience fine-tuning and training LLMs.
  • PEFT, DPO, preference optimization, post-training, supervised fine-tuning, RLHF.
  • Familiarity with the AWS suite and deploying ML models to production.
  • Ability to reason deeply about ML principles, architectures, and design choices.
  • Knowledge of multi-agent orchestration and conversational AI systems.

Desirable Skills & Experience:

  • Background in voice AI, speech-to-text, or text-to-speech systems.
  • Exposure to financial services or insurance applications.
  • Familiarity with optimizing models for long-context scenarios.

If you’d like to hear more, please apply or get in touch!
Benjamin Reavill