We are fully licensed across the UK, Ireland, Switzerland, Germany and the USA, enabling us to support customers with compliant cross-border talent acquisition.
The East Coast home of DeepRec.ai. From our Boston office, we provide staffing solutions for North America's best-in-class Deep Tech ecosystem.
SALARY GUIDE
Built with fresh insights from our talent network, this guide is for anyone hoping to benchmark salaries, align remuneration with the wider market, or learn more about the trends and opportunities across the German Deep Tech space. Download your copy here:
INSIGHTS
Blog
54 Days and Counting: Why Early-Stage Startups Keep Losing ML Systems Talent
30 days ago
Blog
AI Is Rewiring the Staffing Ecosystem - and Embedding DE&I at Its Core
2 months ago
Blog
Hiring in Quantum Computing: Why Traditional Recruitment Models Break Down
4 months ago
LATEST JOBS
London, Greater London, South East, England
Data Science Consultant
Permanent | £55000 - £88000 per annum
Job Title: Data Science Consultant (GenAI & AI Consulting)
Location: London (Hybrid)
Roles: Multiple
Seniority: Junior to Experienced

The Opportunity
Work at the intersection of Data Science and GenAI: not just using AI tools, but shaping how they are evaluated, trusted, and delivered to enterprise clients. This is a consulting role for Data Scientists who can go beyond models and clearly explain why they work, how they're validated, and the business impact they drive.

The Role
- Deliver end-to-end Data Science and GenAI solutions
- Build and evaluate ML and LLM-based models
- Define metrics, validation approaches, and evaluation frameworks
- Translate complex AI outputs into clear business insights
- Work on PoCs, prototypes, and client-facing innovation projects
- (Senior/Managing) Lead stakeholders and mentor junior consultants

What You'll Bring
- Strong Python and Machine Learning foundations
- Experience across the full Data Science lifecycle
- Solid understanding of model evaluation, validation, and accuracy metrics
- Exposure to GenAI (LLMs, agentic AI, or similar)
- Ability to confidently explain technical concepts to clients
- Consulting or client-facing experience preferred

What Makes You Stand Out
- You understand why models perform the way they do
- You can challenge outputs, not just generate them
- You can link AI solutions to real business value and ROI

Benefits
- Bonus scheme
- Up to 10% pension
- Private medical
- 25 days holiday, with the option to buy more
- Life assurance & income protection
- Funded certifications
Posted 2 minutes ago
Massachusetts, United States
Senior Machine Learning Engineer
Permanent | $175000 - $200000 per annum
Senior Machine Learning Engineer
Fully Remote (United States) | up to $200k base + equity

The role
Our client is hiring a Senior Machine Learning Engineer to own the end-to-end development and deployment of large language and machine learning models, with a primary focus on data preprocessing, model training, and fine-tuning across large-scale healthcare datasets. This is a hands-on, builder-focused role. You will be designing and training models that solve real clinical and operational problems, integrating structured and unstructured data, and shaping the long-term ML roadmap as the company scales its US product.

What you will be doing

Data preprocessing
- Clean, transform, and prepare large, complex healthcare datasets for ML model development.
- Handle missing values, outlier detection, feature engineering, and normalization at scale.
- Identify, collect, and curate relevant industry-specific datasets for retraining and fine-tuning.
- Format data appropriately for the chosen LLM and training pipeline.

Model training and fine-tuning
- Design, train, and fine-tune LLMs on extensive healthcare data to solve specific clinical or operational problems.
- Set up and manage the training environment, including GPU instances and supporting tooling.
- Fine-tune pre-trained LLMs on custom datasets to hit specific objectives.
- Run hyperparameter experiments (learning rate, batch size, training epochs) to optimize performance.
- Integrate structured and unstructured data into multimodal and multi-input models.

Evaluation, optimization, and pipelines
- Evaluate model performance using appropriate metrics, identify gaps, and implement targeted optimizations.
- Build and maintain robust, scalable data and ML pipelines spanning training, inference, and deployment.
- Collaborate closely with data scientists, clinicians, and software engineers to integrate models into production.
- Maintain clear documentation of models, pipelines, and experimental results.

What we are looking for

Essential
- 5 years of experience in Machine Learning Engineering or a comparable role.
- Proven experience with large-scale data preprocessing, LLM and model training, and fine-tuning.
- Distributed training experience with PyTorch Distributed, DeepSpeed, Ray, or Hugging Face Accelerate.
- GPU/TPU optimization and memory management for large language models.
- Strong Python and core ML stack: PyTorch, TensorFlow, Scikit-learn, Pandas, NumPy.
- Solid grasp of ML algorithms, large language models, and deep learning architectures.

Nice to have
- Hands-on healthcare data experience.
- Experience with cloud platforms (GCP strongly preferred; AWS considered) and distributed compute frameworks like Spark.
- Familiarity with MLOps practices and tooling.
- Bachelor's or Master's in Computer Science, Machine Learning, AI, or a related quantitative field.

Work authorization
Open to US Citizens, Green Card holders, and candidates already in the US on a valid H-1B (transfers considered).

About the company
Our client is an AI-first healthtech company on a mission to detect cancer earlier and prevent it where possible. Their platform has already assessed over 700,000 patients and identified more than 75,000 cancers, and they are now expanding their US footprint with a greenfield product build off the back of a fresh Series A round, backed by one of the most respected VCs in the world. Most of the cancer industry focuses on treatment. This team is focused on detection and prevention, where the impact on survival rates is greatest. The founders are practising doctors who have lived in the problem space first-hand, and the company is tech-first, with the majority of headcount sitting in engineering, data, and ML.

Why join
- Real-world impact: AI that directly contributes to earlier cancer detection and improved patient outcomes.
- Greenfield US build at a critical inflection point, with high ownership from day one.
- Series A backing from a top-tier global VC.
- Builder culture: production-grade work, not research or prototypes.
- Direct exposure to the CTO and senior AI leadership in a flat, fast-moving environment.
- Continuous learning, with access to the latest tools and methods in AI and healthcare.

Benefits
- Competitive base salary plus meaningful equity.
- Fully remote across the United States.
- Flexible working arrangements.
- Continuous learning opportunities and access to leading AI tooling.
- The chance to do work that genuinely matters: building AI that helps save lives.

How to apply
This search is being run on a confidential basis by Sam Warwick at DeepRec.ai. To apply or learn more about the company before going forward, please get in touch directly; full details will be shared once an initial conversation has taken place.
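The hyperparameter experiments this role describes (learning rate, batch size, training epochs) reduce to a sweep loop over configurations. A minimal sketch, where `run_experiment` is a hypothetical stand-in with a toy scoring function; in practice it would launch a fine-tuning run and return the validation metric:

```python
import itertools

def run_experiment(learning_rate, batch_size, epochs):
    """Hypothetical stand-in for a fine-tuning run.

    A real implementation would train with these settings and return a
    validation metric; this toy function peaks at lr=3e-5, batch_size=32
    so the sweep has something to find.
    """
    return 1.0 / (1.0 + abs(learning_rate - 3e-5) * 1e4 + abs(batch_size - 32) / 32)

def sweep(grid):
    """Run every combination in the grid, returning (score, params) pairs."""
    keys = list(grid)
    return [
        (run_experiment(**dict(zip(keys, values))), dict(zip(keys, values)))
        for values in itertools.product(*(grid[k] for k in keys))
    ]

grid = {
    "learning_rate": [1e-5, 3e-5, 5e-5],
    "batch_size": [16, 32],
    "epochs": [3],
}
results = sweep(grid)
best_score, best_params = max(results, key=lambda r: r[0])
```

Grid search is shown for simplicity; at LLM scale teams typically swap in random or Bayesian search over the same loop, since full grids get expensive fast.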
Posted about 3 hours ago
Massachusetts, United States
Senior MLOps Engineer
Permanent | $175000 - $200000 per annum
Senior MLOps Engineer
Fully Remote (United States) | up to $200k base + equity

The role
Our client is hiring a Senior MLOps Engineer to build and operate the production platform powering their ML and LLM-driven healthcare workflows. You will design reliable, secure, and compliant systems for model development, evaluation, deployment, monitoring, and continuous improvement, working closely with ML, data, security, and product teams. This is the right seat for someone who has shipped ML systems in production and is excited about LLM orchestration, RAG, evaluations, guardrails, and observability inside a regulated healthcare environment.

What you will be doing

MLOps and ML platform
- Design and operate ML platforms supporting end-to-end workflows: data ingestion, feature engineering, training, evaluation, deployment, and monitoring.
- Build and maintain CI/CD for ML, including testing, packaging, versioning, reproducibility, automated rollbacks, and approvals.
- Implement MLOps best practices: model registry, experiment tracking, lineage, governance, and reproducible training environments.
- Develop scalable training infrastructure: distributed training, GPU scheduling, cost controls, and auto-scaling.
- Build and maintain feature pipelines and feature stores, ensuring consistency between training and inference.
- Establish model monitoring and observability: performance, drift, fairness signals where relevant, latency, throughput, and data quality.
- Own end-to-end LLM delivery pipelines: prompt versioning, retrieval, orchestration, evaluation, deployment, monitoring, and iterative improvement.
- Build LLM evaluation harnesses, both offline and online: golden datasets, automated regression testing, human-in-the-loop review, and risk scoring.
- Implement cost controls: token and cost budgeting, caching, autoscaling, and performance tuning.

Deployment, reliability, and operations
- Productionize ML models on GCP using containers and orchestration (GKE, Cloud Run).
- Build CI/CD for ML and LLM systems with automated tests and safe rollouts.
- Implement observability: tracing, metrics, logs, dashboards, and alerting for model and system health, including hallucination indicators and retrieval quality.

Data, governance, and healthcare compliance
- Design systems with security and privacy by default: IAM, least privilege, secrets management, audit logs, encryption, retention, and PHI/PII handling.
- Implement governance: model and prompt lineage, dataset provenance, evaluation traceability, and approval workflows aligned with healthcare compliance expectations.
- Integrate guardrails: content filters, policy checks, prompt injection defenses, structured output validation, and fallback strategies.

What we are looking for

Essential
- 6 years in software or platform engineering, including 4 years operating ML systems in production.
- Strong ML engineering background: training pipelines, evaluation, deployment patterns, monitoring, and iteration loops.
- Demonstrated hands-on experience with LLM systems in production.
- Strong Python plus production-grade experience building APIs and services.
- Strong experience with GCP services and cloud-native patterns.
- Production experience with Vertex AI (pipelines, endpoints, feature store, model registry, evaluation) and/or managed vector search on GCP.
- Containerization and orchestration with Docker, Kubernetes/GKE, and/or Cloud Run.

Work authorization
Open to US Citizens, Green Card holders, and candidates already in the US on a valid H-1B (transfers considered).

About the company
Our client is an AI-first healthtech company on a mission to detect cancer earlier and prevent it where possible. Their platform has already assessed over 700,000 patients and identified more than 75,000 cancers, and they are now expanding their US footprint with a greenfield product build off the back of a fresh Series A round, backed by one of the most respected VCs in the world. Most of the cancer industry focuses on treatment. This team is focused on detection and prevention, where the impact on survival rates is greatest. The founders are practising doctors who have lived in the problem space first-hand, and the company is tech-first, with the majority of headcount sitting in engineering, data, and ML.

Why join
- Real-world impact: AI that directly contributes to earlier cancer detection and improved patient outcomes.
- Greenfield US build at a critical inflection point, with high ownership from day one.
- Series A backing from a top-tier global VC.
- Builder culture: production-grade work, not research or prototypes.
- Direct exposure to the CTO and senior AI leadership in a flat, fast-moving environment.
- Continuous learning, with access to the latest tools and methods in AI and healthcare.

Benefits
- Competitive base salary plus meaningful equity.
- Fully remote across the United States.
- Flexible working arrangements.
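The offline LLM evaluation harness this role calls for (golden datasets plus automated regression testing) can be sketched in a few lines. Everything here is illustrative: `fake_model` stands in for a real inference client, and the golden set, substring-matching rule, and 90% threshold are placeholder choices, not the client's actual setup:

```python
GOLDEN_SET = [
    # Hypothetical prompt/expected pairs; a real golden set would be
    # domain-expert-reviewed and versioned alongside the prompts.
    {"prompt": "Does the report mention a follow-up scan?", "expected": "yes"},
    {"prompt": "Summarize the referral urgency.", "expected": "urgent"},
]

def fake_model(prompt):
    """Stand-in for the real inference client (an API call in practice)."""
    canned = {
        "Does the report mention a follow-up scan?": "Yes, a follow-up scan is scheduled.",
        "Summarize the referral urgency.": "The referral is marked urgent.",
    }
    return canned[prompt]

def evaluate(model, golden_set, threshold=0.9):
    """Score responses against golden answers; gate releases on pass rate."""
    failures = []
    for case in golden_set:
        response = model(case["prompt"])
        if case["expected"] not in response.lower():
            failures.append(case["prompt"])
    pass_rate = 1.0 - len(failures) / len(golden_set)
    return pass_rate, pass_rate >= threshold, failures

pass_rate, release_ok, failures = evaluate(fake_model, GOLDEN_SET)
```

Wired into CI/CD, a harness like this runs on every prompt or model change and blocks the rollout when `release_ok` is false, which is how prompt regressions get caught before production.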
Posted about 3 hours ago
California, United States
Founding Member of Technical Staff (Research/Post-training)
Permanent | $200000 - $275000 per annum
Founding Member of Technical Staff (Research / Post-Training)
Applied AI / RL | San Francisco (onsite) | $200k–$275k + 0.25–0.50% equity

DeepRec is partnered with a YC-backed (S25), seed-stage applied AI and data company working at the cutting edge of reinforcement learning and agentic systems. They collaborate closely with leading AI labs to train models capable of executing complex, real-world workflows across financial services. Their core platform focuses on building high-quality RL environments that simulate tasks across investment banking, private equity, and hedge funds (e.g. financial modelling, presentations, etc.). Following a recent seed raise, they're now building out their founding research and engineering team.

The Opportunity
This is a Founding Member of Technical Staff hire focused on research and post-training. You'll take ownership of training and evaluating frontier models, shaping external benchmarks, and contributing to the company's research presence.

What You'll Be Doing
- Training open-source / frontier models on proprietary RL environments to validate performance and generate insights
- Leading public-facing benchmarks and leaderboard initiatives for frontier models
- Publishing research (blogs, papers) to engage both industry and academic communities
- Contributing to core platform work where needed (AI tooling, data pipelines, environment/reward systems)
- Helping establish engineering and research culture from day one

What They're Looking For
- Experience in model post-training (fine-tuning, RLHF, or similar)
- Track record of publishing research or contributing to open research communities
- Familiarity with RL, evaluations, or benchmarking for AI agents
- Strong startup mindset: high velocity, high ownership
- Product awareness and ability to prioritise across a broad roadmap
- Comfortable engaging with users, customers, and subject matter experts

Nice to Have
- Previous startup or founding experience

Compensation & Benefits
- $200k–$275k base + 0.25–0.50% equity
- Fully covered healthcare (including dependents)
- Relocation support
- 401(k)
- Meals, gym, and transport fully covered
- Visa sponsorship available

Location
San Francisco (full-time, onsite)
Posted about 3 hours ago
Munich, Bayern, Germany
Staff/Principal AI Researcher
Permanent | €90000 - €150000 per annum
Staff / Principal AI Researcher

About the company
A deep-tech startup building ultra energy-efficient computing infrastructure for new-generation AI inference. Our brain-inspired chip architecture is purpose-built for dynamically sparse and event-driven algorithms, delivering an order of magnitude better energy efficiency than today's GPU-based systems. Our hardware is already deployed at leading research institutions across Europe and the US, and we are scaling rapidly as the industry wakes up to the energy bottleneck that GenAI is creating. We are looking for talented and passionate people with a real appetite for problem solving to help shape the future of AI hardware.

About the role
As a Staff/Principal AI Researcher, you will lead the design and development of advanced AI algorithms tailored for our sparse, brain-inspired computing systems. This is a senior, high-autonomy role sitting at the intersection of frontier AI research and novel hardware, with direct influence over both the technical roadmap and how our algorithms reach customers in production. You will lead technically, mentor other researchers, and work closely with our compiler, hardware, and systems teams to make sure our models actually ship and scale on real silicon.

What you'll be doing
- Leading the design, development, and optimization of advanced AI algorithms tailored for sparse hardware and brain-inspired computing systems
- Architecting and implementing efficient machine learning and deep learning models, with a focus on scalability, performance, and hardware-awareness
- Driving innovation in algorithmic approaches for sparse and event-driven computing paradigms, especially but not limited to Transformer-based architectures
- Mentoring and managing a team of AI engineers and researchers, fostering technical excellence and professional growth
- Developing robust, scalable, and maintainable algorithmic frameworks that integrate cleanly with the wider software and hardware ecosystem
- Defining and implementing benchmarking methodologies to evaluate model accuracy, efficiency, latency, and energy consumption
- Collaborating with compiler, hardware, and systems teams to ensure seamless integration and co-optimization of algorithms and execution pipelines
- Contributing to technical documentation, research publications, and demonstrators that showcase the team's capabilities

What we're looking for
- Proven experience leading cross-functional technical teams in AI/ML development or applied research
- Deep expertise in machine learning and deep learning algorithms, including model design, training, and optimization
- 5 years of relevant industry experience developing production-grade AI solutions
- Expert-level proficiency with ML frameworks such as PyTorch or TensorFlow, and familiarity with modern model deployment workflows
- Hands-on experience optimizing models for hardware acceleration (CPU, GPU, or specialized accelerators)
- Strong analytical and problem-solving skills with a track record of tackling genuinely hard algorithmic challenges
- BSc or MSc in Computer Science, Artificial Intelligence, Applied Mathematics, or a related field

Nice to have
- PhD or Dr.-Ing. in a computationally intensive discipline
- Hands-on experience with DevOps tools and CI/CD pipelines
- Familiarity with MAMBA or other state-space model architectures
- Hands-on experience with model compression, quantization, or pruning
- Background in computer architecture
- Contributions to open-source projects or publications at leading AI venues
- Familiarity with multi-chip computing concepts and techniques

What we offer
A highly competitive salary, relocation support, and a flexible, inclusive work environment. We are an equal opportunity employer and welcome people of different backgrounds, nationalities, and experiences.
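The pruning experience listed as a nice-to-have maps onto a simple idea: zero out the smallest-magnitude weights so that sparse, event-driven hardware can skip them entirely. A minimal, framework-free sketch of unstructured magnitude pruning (real pipelines would prune framework tensors, usually layer by layer, and fine-tune afterwards to recover accuracy):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of a flat weight list.

    sparsity=0.5 drops the bottom half of weights by absolute value,
    producing the kind of unstructured sparsity that zero-skipping
    hardware can exploit.
    """
    if not 0.0 <= sparsity < 1.0:
        raise ValueError("sparsity must be in [0, 1)")
    k = int(len(weights) * sparsity)  # number of weights to drop
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    # Keep only weights strictly above the threshold magnitude.
    return [w if abs(w) > threshold else 0.0 for w in weights]

pruned = magnitude_prune([0.9, -0.05, 0.4, -0.8, 0.01, 0.3, -0.2, 0.7], 0.5)
```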
Posted about 3 hours ago
Munich, Bayern, Germany
Senior Software Engineer, ML Infrastructure
Permanent | €80000 - €120000 per annum
Senior Software Engineer

About the Company
We're partnered with a stealth-stage robotics and embodied AI company building toward production-grade physical systems. They sit at the intersection of two demanding worlds: high-performance distributed computing for AI research and real-time execution on resource-constrained robot hardware. They are now hiring the founding members of their Platform Team.

About the Role
As a founding member of the Platform Team, you'll architect and build the software backbone of the company. The software needs to span two very different worlds: high-performance distributed computing for AI research (training, massive data ingestion, RL simulation) and resource-constrained, real-time execution on physical robot hardware. This is a high-impact, greenfield opportunity. You won't be maintaining legacy code; you'll be making critical architectural decisions that define how the company scales from prototype to production. You'll act as the bridge between research scientists and hardware, ensuring that state-of-the-art models can be trained efficiently and deployed reliably to the real world.

Your Responsibilities
- Architect and build. Design and implement a scalable software platform that unifies research workflows (training, simulation) with production realities (real-time inference, data collection).
- Bridge the gap. Develop seamless tooling that facilitates the transition of models from Python-heavy research environments to performant C++/Rust runtimes on hardware.
- Performance optimization. Optimize the stack's critical path, focusing on inference latency, distributed training throughput, and system resource management.
- Infrastructure and tooling. Establish engineering excellence by setting up robust CI/CD pipelines, build systems (Bazel), and containerization strategies (Docker).
- Reliability. Engineer fault-tolerant systems capable of handling long-running experiments and safety-critical operations on physical robots.

Essential Skills
- Education. MS in Computer Science or a comparable technical field.
- Software engineering. 5 years shipping high-quality software, with a track record of owning large features from design through deployment.
- Language proficiency. Expert-level fluency in Python (for tooling and ML infrastructure), plus strong proficiency in either modern C++ or Rust.
- System architecture. Demonstrated experience designing scalable software architectures, including microservices, API design (gRPC/REST), and distributed systems.
- Engineering rigor. A commitment to automated testing, code reviews, and writing maintainable, modular code.

Beneficial Skills
- Machine learning systems. Experience building ML frameworks (PyTorch), MLOps infrastructure, data pipelines, or deploying models.
- Build and deploy. Hands-on experience with Docker and Bazel. Experience with orchestration (Kubernetes) or job schedulers (SLURM) is a plus.
- Robotics middleware. Familiarity with ROS2, DDS, or similar message-passing frameworks.
- Cloud infrastructure. Experience managing compute resources on AWS, GCP, or Azure using Infrastructure-as-Code (Terraform, Ansible).
- Simulation. Experience integrating with simulation environments (Isaac Sim, MuJoCo) for Reinforcement Learning.
Posted about 4 hours ago
Munich, Bayern, Germany
Senior IT Infrastructure Engineer
Permanent | €80000 - €120000 per annum
Senior IT Infrastructure Engineer

About the Company
We're partnered with an AI and robotics company building infrastructure from the ground up to support advanced development workflows in machine learning and embodied AI. They're at a stage where the IT foundation is being established and shaped, giving this role unusual scope and ownership over the long-term direction of the environment.

About the Role
As an IT Infrastructure Engineer, you'll be responsible for establishing infrastructure from the ground up, including capacity planning, disaster recovery, and day-to-day operations. You'll manage, configure, and monitor the company's IT infrastructure, including automated backups, ensure the security and availability of resources, and work closely with engineering and operations teams to provide a robust, scalable IT environment that supports AI and robotics development workflows.

Your Responsibilities
- Infrastructure architecture and operations. Design, implement, and maintain on-premise IT infrastructure across compute, storage, and networking. Perform capacity planning, develop and execute backup and disaster recovery strategies, and maintain comprehensive infrastructure documentation.
- Physical data center and cloud infrastructure. Manage and monitor on-premise IT facilities (servers, cooling, power) and hardware. Design and provision storage and compute/GPU infrastructure for high-performance ML and AI workloads.
- Enterprise networking. Design and implement WAN/LAN/WiFi network topology with proper segmentation and security controls (firewalls, IDS/IPS). Configure and manage enterprise networking equipment including switches, routers, and load balancers.
- System administration and support. Deploy and manage Linux server infrastructure. Configure and deploy employee workstations across Linux, macOS, and Windows, and manage IT equipment procurement. Provide technical troubleshooting and support, and manage user accounts with SSO.
- Vendor management. Establish and manage relationships with technology vendors, negotiate contracts, and coordinate with service providers including ISPs and colocation partners.

Requirements
- Proven track record in building or transforming infrastructure
- Deep expertise in enterprise networking (WAN/LAN, VLANs, routing, switching, firewalls, VPNs)
- Strong hands-on experience with server hardware assembly, configuration, and maintenance
- Expert knowledge of storage (RAID, SAN/NAS) and backup and recovery solutions
- Experience with Linux server administration and troubleshooting
- Solid understanding of data center operations (power, cooling, security)
- Hands-on experience provisioning and managing GPU infrastructure
- Scripting skills in Python and Bash for automation
- Experience with Infrastructure-as-Code tools such as Terraform and Ansible
- Strong problem-solving and troubleshooting skills for complex hardware and network issues
- Excellent documentation and communication skills
- Self-motivated and able to work independently in a fast-paced environment
Posted about 4 hours ago
Berlin, Germany
Distributed Training Infrastructure Engineer
Permanent | €150000 - €200000 per annum
Training Infrastructure Engineer

About the Company
We're partnered with a generative AI lab building the next generation of creative tools by producing realistic sound, speech, and music from video. They're developing cutting-edge foundational generative models that "unmute" silent video content and create custom, hyper-realistic audio for gaming, video platforms, and creators, empowering global storytellers to transform their content. They recently closed a $41 million Seed round co-led by two top-tier US venture firms, with participation from a leading global investor, and are rapidly expanding across Product, Engineering, Go-to-Market, and Growth.

About the Role
You'll focus on the full training stack: profiling GPU behavior, debugging training pipelines, improving throughput, choosing the right parallelism strategies, and designing the infrastructure that lets the team train models efficiently at scale. The work spans cluster management, model training, efficient data pipelines for video and audio, inference, and optimizing PyTorch code. Your contribution will shape the foundation on which all of their generative models are built and iterated.

Key Responsibilities
- Identify ideal training strategies (parallelism approaches, precision trade-offs) for a variety of model sizes and compute loads
- Profile, debug, and optimize single and multi-GPU operations using tools like Nsight and stack trace viewers to understand what's actually happening at the hardware level
- Analyze and improve the entire training pipeline end to end, including efficient data storage, data loading, distributed training, checkpoint and artifact saving, and logging
- Set up scalable systems for experiment tracking, data and model versioning, and experiment insights
- Design, deploy, and maintain large-scale ML training clusters running SLURM for distributed workload orchestration

Ideal Candidate Profile
- Familiarity with the latest and most effective techniques for optimizing training and inference workloads, not from reading papers but from implementing them
- Deep understanding of GPU memory hierarchy and computation capabilities, knowing what the hardware can do in theory and what prevents you from achieving it in practice
- Experience optimizing for both memory-bound and compute-bound operations, with a clear sense of when each constraint matters
- Expertise with efficient attention algorithms and their performance characteristics at different scales

Nice to Have
- Experience implementing custom GPU kernels and integrating them into PyTorch
- Experience with diffusion and autoregressive models and an understanding of their specific optimization challenges
- Familiarity with high-performance storage solutions (VAST, blob storage) and their performance characteristics for ML workloads
- Experience managing SLURM clusters at scale

Why Join?
- Pivotal moment. Fresh funding is secured and traction is building; this is the point where your contributions can make a real difference to the company's trajectory.
- True ownership from day one. Genuine autonomy and responsibility, with ideas and work that directly shape both product and company direction.
- Competitive compensation and equity. Strong packages that ensure you share in the success you help create.
- Build for the next generation of creators. Be part of the innovation that will transform how creators work and thrive.
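The memory-bound versus compute-bound distinction in the candidate profile is usually reasoned about with the roofline model: compare an op's arithmetic intensity (FLOPs per byte moved) against the machine balance (peak FLOPs divided by peak bandwidth). A back-of-the-envelope sketch, using assumed, roughly A100-class figures rather than any specific hardware spec:

```python
def roofline(flops, bytes_moved, peak_flops, peak_bandwidth):
    """Classify an op via the roofline model.

    An op whose arithmetic intensity (FLOPs per byte) falls below the
    machine balance (peak FLOPs / peak bandwidth) cannot saturate the
    compute units no matter how good the kernel is: it is memory-bound.
    """
    intensity = flops / bytes_moved
    balance = peak_flops / peak_bandwidth
    bound = "memory" if intensity < balance else "compute"
    attainable_flops = min(peak_flops, intensity * peak_bandwidth)
    return bound, attainable_flops

# Assumed figures, roughly A100-class: ~312 TFLOP/s FP16, ~2 TB/s HBM.
PEAK_FLOPS = 312e12
PEAK_BW = 2e12

# Elementwise add of two fp16 tensors: n FLOPs, 6n bytes (2 reads + 1 write).
n = 1 << 24
add_bound, _ = roofline(n, 6 * n, PEAK_FLOPS, PEAK_BW)

# Square fp16 matmul, m = 4096: 2*m^3 FLOPs vs roughly 3 * m^2 * 2 bytes.
m = 4096
mm_bound, _ = roofline(2 * m**3, 3 * m * m * 2, PEAK_FLOPS, PEAK_BW)
```

This is why fusing elementwise ops into adjacent kernels pays off (they are deeply memory-bound) while large matmuls are tuned for compute utilization instead.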
Posted about 4 hours ago