
Boston, Massachusetts, United States
Senior MLOps Engineer
Senior MLOps Engineer – GPU Infrastructure & Inference

Our client is building AI-native systems at the intersection of machine learning, scientific computing, and materials innovation, applying large-scale ML to solve complex, real-world problems with global impact. They are seeking a Senior MLOps Engineer to own and operate a production-grade GPU platform supporting large-scale model training and low-latency inference for computational chemistry and LLM workloads serving thousands of users.

This role holds end-to-end responsibility for the ML platform, spanning Kubernetes-based GPU orchestration, cloud infrastructure and Infrastructure-as-Code, ML pipelines, CI/CD, observability, reliability, and disaster recovery. You will design and operate hardened, multi-tenant ML systems on AWS, build and optimize high-performance inference stacks using vLLM and TensorRT-based runtimes, and drive measurable improvements in latency, throughput, and GPU utilization through batching, caching, quantization, and kernel-level optimizations. You will also establish SLO-driven operational standards, robust monitoring and alerting, on-call readiness, and repeatable release and rollback workflows.

The position requires deep hands-on experience running GPU workloads on Kubernetes, including scheduling, autoscaling, multi-tenancy, and debugging GPU runtime issues, alongside strong Terraform and cloud-native fundamentals. You will work closely with research scientists and product teams to reliably productionize models, support distributed training and inference across multi-node GPU clusters, and ensure high-throughput data pipelines for large scientific datasets.

Ideal candidates bring 5 years of experience in MLOps, platform, or infrastructure engineering, strong proficiency in Python and modern DevOps practices, and a proven track record of operating scalable, high-performance ML systems in production. Experience supporting scientific, computational chemistry, or other physics-based workloads is highly desirable, as is prior exposure to large-scale LLM serving, distributed training frameworks, and regulated production environments.
Sam Warwick
Greng, Switzerland
AI Program Manager
We’re hiring an AI Program Manager to take ownership of a central AI delivery function and ensure high-impact AI initiatives move from idea to production at pace. This role is focused on execution, coordination, and decision-making across a broad set of stakeholders, rather than hands-on technical delivery.

The role: You’ll be accountable for running a multi-stream AI program, balancing delivery momentum with governance, risk control, and transparency. Acting as the connective tissue between business leaders and technical teams, you’ll help shape how AI work is assessed, prioritised, and delivered across the organisation.

What you’ll do
- Lead the planning and execution of a portfolio of AI initiatives, with full accountability for timelines, funding, risks, and outcomes
- Bring together teams across product, data, AI/ML, engineering, and security to deliver against shared objectives
- Put in place clear intake and decision frameworks to evaluate AI opportunities and focus effort where it delivers the most value
- Actively manage delivery constraints, interdependencies, and trade-offs across multiple workstreams
- Continuously evolve delivery processes to improve throughput, predictability, and stakeholder confidence

What you bring
- Extensive experience leading large-scale programs in complex, matrixed organisations
- A strong track record of managing ambiguity, competing priorities, and senior expectations
- Working knowledge of how AI and data products are developed, validated, and deployed into live environments
- Experience designing operating models, governance forums, and prioritisation mechanisms
- Clear, confident communication style with the ability to influence at executive level
- A practical, results-oriented mindset with a bias toward action over theory
- AI program delivery experience is a must have
Sam Oliver
Spain
MLOps Engineer
MLOps Engineer
Location: Barcelona (Hybrid)
Contract: Fixed-term until June 2026
Salary: €55,000 base pro rata
Bonuses: €3,000 sign-on, €500/month retention bonus
Relocation: €2,000 package available
Eligibility: EU work authorisation required

The opportunity
We’re hiring an MLOps Engineer to join a fast-scaling European deep-tech company working at the forefront of AI model efficiency and deployment. This team is solving a very real problem: how to take large, cutting-edge language models and run them reliably, efficiently, and cost-effectively in production. Their technology is already live with major enterprise customers and is reshaping how AI systems are deployed at scale. This is a hands-on engineering role with real ownership. You’ll sit close to both research and production, helping turn advanced ML into systems that actually work in the real world.

What you’ll be working on
- Building and operating end-to-end ML and LLM pipelines, from data ingestion and training through to deployment and monitoring
- Deploying production-grade AI systems for large enterprise customers
- Designing robust automation using CI/CD, GitOps, Docker, and Kubernetes
- Monitoring model performance, drift, latency, and cost, and improving reliability over time
- Working with distributed training and serving setups, including model and data parallelism
- Collaborating closely with ML researchers, product teams, and DevOps engineers to optimise performance and infrastructure usage
- Managing and scaling cloud infrastructure (primarily Azure, with some AWS exposure)

Tech you’ll be exposed to
- Python for ML and backend systems
- Cloud platforms: Azure (AKS, ML services, CycleCloud, Managed Lustre), plus AWS
- Containerisation and orchestration: Docker, Kubernetes
- Automation and DevOps: CI/CD pipelines, GitOps
- Distributed ML tooling: Ray, DeepSpeed, FSDP, Megatron-LM
- Large language models such as GPT-style models, Llama, Mistral, and similar

What they’re looking for
- 3 years’ experience in MLOps, ML engineering, or LLM-focused roles
- Strong experience running ML workloads in public cloud environments
- Hands-on background with production ML pipelines and monitoring
- Solid understanding of distributed training, parallelism, and optimisation
- Comfortable working across infrastructure, ML, and engineering teams
- Strong English communication skills; Spanish is a plus but not required

Nice to have
- Experience with mixture-of-experts models
- LLM observability, inference optimisation, or API management
- Exposure to hybrid or multi-cloud environments
- Real-time or streaming ML systems

Why this role stands out
- Work on AI systems that are already in production with global customers
- Tackle real infrastructure and scaling challenges, not toy problems
- Competitive salary plus meaningful bonuses
- Hybrid setup in Spain with relocation support
- Join a well-funded, high-growth deep-tech environment with long-term impact
Jacob Graham
Greng, Switzerland
AI Data Engineer
We’re looking for a Data Engineer to help build and scale the data foundations that power modern AI and generative AI solutions. This role is focused on designing resilient data pipelines that support advanced analytics, ML, and LLM-driven use cases across a range of data types.

The role: You’ll work closely with AI, ML, and platform teams to shape how data is collected, processed, and made available for downstream intelligence. The focus is on robust engineering, clean data, and systems that can scale as AI use cases move into production.

What you’ll be doing:
- Building and maintaining Python-based data pipelines that handle ingestion, transformation, and enrichment of both structured and unstructured data
- Applying AI-assisted techniques to data preparation, including classification, extraction, and feature creation to support ML and LLM workflows
- Connecting data pipelines into Azure-based platforms, including data lakes and cloud-native services
- Ensuring pipelines are reliable and performant through testing, monitoring, and continuous optimisation
- Partnering with data scientists, AI engineers, and platform teams to support end-to-end AI delivery

What we’re looking for:
- Solid hands-on experience as a data engineer, with Python as a core language
- Proven experience delivering data pipelines in production environments at scale
- Exposure to AI, ML, or generative AI use cases within data platforms
- Practical experience working with Azure Data Lake and related Azure data services
- A strong engineering mindset with attention to data quality, system reliability, and performance
- Comfortable operating in collaborative, cross-functional teams
Sam Oliver
Boston, Massachusetts, United States
ML Scientist in AI Explainability
ML Scientist in AI Explainability
Location: Boston, Massachusetts
Type: Full-time

Machine Learning Scientist, AI Explainability and Scientific Discovery

We are working with a publicly listed deep tech company operating at the intersection of machine learning, materials science, and next-generation battery technology. The team is applying AI directly to scientific discovery, with real-world impact across energy storage, transportation, robotics, and aerospace. This role sits within an advanced AI research group focused on Large Language Models, AI agents, and explainability in scientific problem solving. Your work will directly influence how new battery materials are discovered and validated using AI. The position can be fully remote.

What you will work on
- You will lead research into machine learning methods for scientific discovery, with a strong focus on multimodal Large Language Models and agent-based systems.
- You will study how LLMs reason, plan, and generate solutions when applied to core scientific and engineering questions, particularly in battery and material design.
- You will design and optimize training pipelines for large models, tackling challenges around data quality, architecture, scalability, and compute efficiency.
- You will integrate domain-specific data sources such as scientific literature and internal research documents into model training and inference.
- Your research will be deployed into a production multi-agent AI system used for real battery technology discovery.
- You will collaborate closely with researchers, engineers, and external academic labs, and contribute to publications and conference presentations.

What we are looking for
- An MSc or PhD in Computer Science, Statistics, Computational Neuroscience, Cognitive Science, or a related field, or equivalent industry experience.
- Strong grounding in machine learning, deep learning, and Large Language Models, with hands-on research experience.
- Solid Python skills and experience with frameworks such as PyTorch or TensorFlow.
- Experience working with causal graphs and explainability-focused AI methods.
- A proven research track record, ideally including peer-reviewed publications.
- The ability to explain complex technical ideas clearly to both technical and non-technical stakeholders.

Nice to have
- Exposure to AI applied to materials science, chemistry, or battery systems.
- Familiarity with recent research methods in LLM optimization and reinforcement learning approaches such as GRPO.

What is on offer
- A highly competitive salary and benefits package, including equity in a publicly listed company.
- The chance to work on AI for science problems with visible global impact.
- A collaborative research environment alongside experienced ML scientists, engineers, and domain experts.
- Strong support for professional development, publishing, and long-term career growth.
Nathan Wills
Zürich, Switzerland
Mid / Senior SLAM Engineer
Senior SLAM Engineer
Location: Zurich
Type: Full-time, On-site

Company Overview
Our client is an early-stage robotics company developing autonomy and intelligent assistance systems for large-scale mobile machinery. By combining learning-based automation with advanced remote operation, their technology enables a single operator to safely supervise and control multiple machines in complex, real-world environments. The team brings deep academic and industrial expertise in large-scale robotics and perception, and is focused on transitioning state-of-the-art research into production systems deployed on real machines operating in demanding conditions.

Role Overview
This role sits at the intersection of perception, state estimation, and real-world deployment. You will contribute to the design, implementation, and deployment of advanced localization and mapping solutions for autonomous and semi-autonomous heavy machines. The systems you work on integrate multiple sensing modalities, spanning lidar, vision, inertial sensing, and satellite positioning, into a hardware-agnostic autonomy stack that can be adapted to a wide range of machine types and vintages. The role requires not only strong algorithmic expertise, but also a focus on production-quality software and system robustness.

Key Responsibilities
- Design, prototype, and deploy real-time localization, mapping, state estimation, and calibration algorithms for large autonomous mobile platforms
- Develop SLAM pipelines leveraging lidar, inertial, visual, and GNSS data sources
- Optimize system performance, robustness, and reliability under real-world operating conditions
- Define and maintain testing procedures, validation strategies, and performance metrics
- Collaborate closely with engineers across perception, controls, systems, and hardware to improve end-to-end autonomy performance
- Ensure high-quality, maintainable implementations suitable for deployment on production systems

Required Qualifications
- Master’s or PhD in Computer Science, Robotics, Electrical Engineering, Mechanical Engineering, or a related field
- 3 years of hands-on experience developing and deploying localization and mapping systems
- Strong experience implementing SLAM and state estimation algorithms using lidar-inertial-visual sensor fusion
- Proficiency in C and Python, with a focus on production-grade software development
- Experience working in Linux-based development environments
- Ability to manage technical risk, re-prioritize work, and meet deadlines in a fast-paced engineering environment
- Strong communication skills, with the ability to explain complex technical concepts to both technical and non-technical audiences

Nice to Have
- Experience integrating RADAR and/or GPS/GNSS into localization or SLAM systems
- Familiarity with ROS2 and modern robotics middleware
Paddy Hobson
London, Greater London, South East, England
Agentic AI Engineer
Applied AI Engineer

I am working with a fast-growing AI company building an enterprise-grade AI workspace used by major financial institutions to produce and validate client-ready work. The platform replaces complex manual workflows with automated AI systems that scale across global teams and has grown rapidly with backing from top-tier investors. This role is for engineers who want to build and ship production systems. You will own core parts of the AI agent infrastructure, including multi-agent systems, RAG pipelines, and evaluation frameworks. The work is hands-on and production-focused, covering backend services, AI infrastructure, and delivery at scale.

What you will do
- Build and deploy backend services and APIs, Python preferred, using Django or FastAPI
- Productionise AI features including RAG, agent orchestration, and evals
- Create data pipelines for training, evaluation, and continuous improvement
- Ensure performance, reliability, and security across the stack
- Work closely with founders, engineers, and product teams

What we are looking for
- Five plus years of software engineering experience
- Proven experience deploying AI applications into production
- Strong backend engineering skills and database fundamentals
- Experience with cloud infrastructure, Docker, Kubernetes, and CI/CD
- Background workers, task queues, and Redis experience
- Familiarity with LLM evaluation, monitoring, and safety
- Degree from a Russell Group university or equivalent top-tier academic background, or alternatively extensive engineering expertise with clear, relevant production experience

This is a demanding, in-office environment with high ownership, shifting priorities, and strong technical standards. You will work directly with founders who have built and exited venture-backed companies. If you are an Applied or Agentic AI Engineer looking for real ownership and the chance to build core systems from the ground up, this is worth a conversation.
Nathan Wills
Zürich, Switzerland
GenAI Engineer
We are looking for a GenAI Engineer to join a growing consulting organisation focused on AI solutions across a variety of industries. You will play a key role in developing and integrating enterprise-level AI systems, contributing to the next generation of intelligent tools used by their clients.

What You’ll Do
- Design, build, and deploy GenAI applications using OpenAI APIs and LLM frameworks
- Develop and optimise RAG pipelines for production use
- Collaborate with cross-functional teams to integrate AI into existing SaaS products
- Write clean, efficient, and scalable code, primarily in Python
- Contribute to architecture and design discussions around AI deployment and automation
- Engage with clients and internal teams to ensure alignment on project goals

What We’re Looking For
- Proven background in software development within SaaS or enterprise environments
- Strong practical experience using OpenAI APIs in commercial or large-scale settings
- Solid understanding of LLMs, prompt engineering, and model deployment
- Hands-on experience with RAG pipelines and data retrieval optimisation
- Excellent communication and stakeholder management skills
- Able to work independently and within collaborative teams

Nice to Have
- French for collaboration with teams in Lausanne
- German for client interactions in Zurich

Why Join
- Fully remote flexibility with the option to work near Zurich or Lausanne
- Stable, long-term AI projects within the financial sector
- Clear growth trajectory with opportunities to contribute to upcoming initiatives
- Supportive, collaborative environment with positive team sentiment
Nathan Wills


We are Deeprec.ai

BRIA.AI
Permanent
Hire
2:1
CV to Interview
Bria.ai develop Visual Generative AI for commercial use. Partnering with like-minded clients, they aim to democratize this technology. They empower organisations with Visual Generative AI to enhance products and set new industry benchmarks, ensuring responsible and sustainable growth.
 
Bria.ai approached DeepRec.ai for support in hiring talent with niche skillsets, and had a high bar. They needed someone exceptional to join the team in a leadership capacity before hiring a team below them. We submitted 12 candidates, secured 6 interviews, and received an offer for their perfect candidate.
 
We are delighted to be continuing our relationship with Bria.ai due to our strong performance and are working on a range of roles for them. 
Bria.ai
Tali Kadish

Jonathan followed up and maintained constant communication.

Tali Kadish
Human Resources
EY
NLP
Three Senior NLP Hires Made
2:1
CV to Placement Ratio

EY, one of the world’s leading professional services firms, engaged DeepRec.ai to support the strategic hiring of specialised talent within the field of Natural Language Processing. The focus was on securing senior-level NLP talent capable of driving innovation in a fast-moving technical landscape. DeepRec.ai’s NLP specialists leveraged a global Deep Tech network to deliver a precise and effective search, identifying and submitting seven relevant candidates for the first two roles released to us. EY then retained the team for an additional hire, a third senior-level appointment. All three roles were successfully filled: one Partner, one Director, and one Manager, including one hire from an underrepresented background.

EY
Chief AI Officer - Switzerland

The candidate capabilities and fit have been excellent - DeepRec.ai really understood what we're looking for and delivered candidates who align well with our needs. Their approach has been refreshingly timely and proactive, with regular check-ins and discussions about how things are progressing, which I really appreciate. Sam Oliver has been particularly great to work with. Very proactive in understanding our requirements and does a good job of aligning internally to reduce complexity for us, which makes the whole process much smoother.

Chief AI Officer - Switzerland
FLAGSHIP PIONEERING
4
Female Candidates Shortlisted
3
Key Hires Made
1.6
CV to Interview Ratio
Flagship Pioneering invent platforms and build companies that change the world. They have founded more than 100 first-in-category bioplatform companies designed to generate multiple products that secure a healthier and more sustainable future. They engaged Hayley Killengrey to hire two Senior Scientists and one Machine Learning Scientist, and were passionate about receiving a diverse shortlist. All hires were relocating across the US and Hayley supported their transition. We are delighted to have supported various companies in their portfolio.
Flagship Pioneering
Mary Jacobs

Working with Hayley has been great. She has been responsive and proactive – integrating with multiple systems we have. She’s introduced us to some excellent team members in the Machine Learning space. Her ability to find quality candidates and encourage us throughout the process has made a real difference. I appreciate her dedication and support in our hiring efforts!

Mary Jacobs
Director, Talent Acquisition
HUAWEI
1:1:1
CV-Interview-Offer Ratio

Founded in 1987, Huawei is a leading global provider of information and communications technology (ICT) infrastructure and smart devices. Huawei has over 207,000 employees and operates in over 170 countries and regions, serving more than three billion people around the world. Having previously delivered successful hiring projects for Huawei Ireland, the DeepRec.ai team were brought on to fill several niche positions, including a Lab Director and a Principal Researcher (Data Centre Network Architecture). Our consultants used this opportunity to better understand Huawei’s unique needs in Cold and Warm Media Storage Facilities, ensuring we could assign the right delivery specialist to the project. Given the scarcity of this talent, we built a global candidate map, targeted competitors with similar functions, and extended the search parameters. By leaning on our communities, newsletters, and international talent network, we identified and engaged with candidates in Japan and the USA. DeepRec.ai supported with the relocation packages, ultimately filling the key roles with a CV-to-Interview-to-Offer Ratio of 1:1:1. As a result, DeepRec.ai is now the exclusive talent supplier to Huawei Switzerland Storage Lab.

Huawei
Vanessa Sanchez

I recommend DeepRec.ai on the quality of the candidates presented, the quality of the communication (both with us and the candidate), the responsiveness, and the great follow-up overall. 

Vanessa Sanchez
HR Business Partner - R&D
LAUNCHDARKLY
28
Hires Made
6
Women Hired
5
Function Areas Supported
LaunchDarkly helps some of the biggest companies in the world take total control over software launches, get deeper, actionable insights into how users experience their products, and revolutionize the way technical and business teams work independently. They enlisted Hayley Killengrey to build an entire team from scratch. Hayley recruited 28 people across DevOps, Security, Data, Technical Support Engineering, and Product Design, staffing their engineering, infrastructure, and data teams in their Oakland office.
LaunchDarkly
Head of Talent

Trinnovo Group jumpstarted my hiring program and was able to help me build the foundation of my organization at LaunchDarkly. The team brought great candidates to the table, particularly in the areas of devops, infrastructure, and data and were invaluable in helping me quickly fill those initial critical senior level foundational hires. A pleasure to work with and an expert in the close, I don't recall losing a single candidate at offer stage. Highest recommendation, work with Trinnovo Group, you won't regret it!

Head of Talent
SYNTHESIA
1.5:1
CV to Interview Ratio
Senior Research Engineer
Role
Exclusive
Voice & Video Supplier

Synthesia was founded in 2017 by a team of AI researchers and entrepreneurs from UCL, Stanford, TUM and Cambridge. Its mission is to empower everyone to make video content - without cameras, microphones, or studios. Using AI, Synthesia radically changes the process of content creation and unleashes human creativity for good.

Following a £180M funding raise, Synthesia needed strong engineers to move from Scale-Up to Enterprise. Having worked with 27 agencies in 2 years without ever making a hire, the team engaged DeepRec.ai to work on these incredibly niche roles, which other recruitment agencies couldn't get a grip on.
 
It was a tough interview process, with only the top 1% of candidates getting to interview. There were six stages: Intro, Tech Test, Take-Home Test, Tech Interview, CEO Meeting, and CTO Meeting. We were delighted to have made the hire.
Synthesia
Mark Deubel
Anthony and Jonathan are knowledgeable in the field they recruit for. They understand the challenge and the hiring bar. They do not push irrelevant candidates, know when to make a gamble, and are humble in their approach. Deeprec.ai is the only agency that is not costing me time.
Mark Deubel
Global Manager
WORLDCOIN
EHS
Embedded Hiring Solution
8
Key Hires Made
1.43
CV to Interview Ratio
Worldcoin is a cutting-edge blockchain technology company co-founded by Sam Altman. Looking to establish a presence in Berlin, Germany, they engaged Anthony Kelly as an Embedded Talent Partner to source, evaluate, and onboard top-tier engineering talent. Through close collaboration, integration into Worldcoin’s operations, and a tailored approach to sourcing and interviewing, the engagement played a pivotal role in building a high-performing engineering team for Worldcoin’s Berlin office.
Worldcoin
Head of HR

Anthony was thrown one of our toughest roles and navigated it like a champ. He quickly calibrated the profile and found us one of the strongest candidates on the market. He’s highly communicative and fast-paced. 10/10 would work with him again.

Head of HR