We are fully licensed across the UK, Ireland, Switzerland, Germany and the USA, enabling us to support customers with compliant cross-border talent acquisition.
The East Coast home of DeepRec.ai. From our Boston office, we provide staffing solutions for North America's best-in-class Deep Tech ecosystem.
CUSTOMERS SUPPORTED IN BOSTON
MEET THE TEAM
Market Guide
We build our annual market guides with fresh insights from our global talent network to support anyone hoping to benchmark salaries in the US, align remuneration with the wider market, or learn more about the Deep Tech trends shaping North America. Download your free copy here:

INSIGHTS
Blog
AI Is Rewiring the Staffing Ecosystem - and Embedding DE&I at Its Core
9 days ago
The Leadership Lab Podcast
Earth Observed | Reducing Friction Between EO Providers
3 months ago
LATEST JOBS
California, United States
Senior Agentic AI Engineer
Permanent | $300,000 - $400,000 per annum
Senior Agentic AI Engineer
$300,000 - $400,000
Onsite, Palo Alto (Remote for exceptional talent)
Full time / Permanent

A well-known, frontier GenAI company is undergoing a major product pivot, moving from single-modal generative experiences toward a consumer multi-agent ecosystem designed to feel genuinely autonomous, useful, and alive. They're building the core infrastructure that will define how millions of users interact with AI agents daily, from planning and execution to memory, creativity, and proactive behaviour. This role sits at the heart of the shift: designing and shipping the systems that make intelligent agents function for 1M users.

What You'll Do
- Design and evolve the agent runtime: the core loop handling reasoning, tool use, planning, memory retrieval, and response generation
- Build and ship agent capabilities across modalities (e.g. image/video generation, voice interaction, browsing, code execution)
- Own LLM orchestration and model routing across multiple providers, optimising latency, cost, reliability, and quality
- Implement memory systems that allow agents to learn from interactions (long-term memory, episodic recall, semantic retrieval)
- Prototype and productionize autonomous behaviours such as proactive task execution, scheduling, and goal-directed workflows
- Create evaluation frameworks and metrics that measure agent quality, personality consistency, and real user impact

What "Great" Looks Like
- You've personally built and shipped agentic systems, not just prompt wrappers or demos
- You're comfortable owning ambiguous, greenfield problems and turning ideas into working product fast
- You think in systems: distributed workflows, multi-step reasoning, orchestration, reliability
- You code daily and care deeply about performance, UX feel, and real-world usefulness

(If you're looking for a narrowly scoped role, heavy process, or a pure research track, this won't be the right fit.)

Why Join
- Join at a genuine product inflection point: early access launch, new architecture direction, and strong internal momentum
- Work in a small, elite engineering cohort where each senior hire has outsized ownership and influence
- Help define the company's next-generation agent platform and model infrastructure from the ground up
- Collaborate closely with product leadership and shape how consumer AI agents evolve in the real world
- Clear trajectory toward technical leadership and founding-level impact as the organisation scales

If you've built real agent systems and want to work on problems that don't have playbooks yet, please apply with your resume!
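The model-routing responsibility above (optimising latency, cost, reliability, and quality across multiple providers) can be pictured as constraint filtering followed by cost minimisation. This is a minimal sketch; the model names and numbers are invented for illustration, not taken from the listing:

```python
# Toy model router: pick the cheapest model that satisfies latency and
# quality floors. All entries here are hypothetical placeholders.
MODELS = [
    {"name": "fast-small",  "cost": 1,  "latency_ms": 200,  "quality": 0.70},
    {"name": "mid",         "cost": 4,  "latency_ms": 600,  "quality": 0.85},
    {"name": "frontier-xl", "cost": 20, "latency_ms": 1500, "quality": 0.95},
]

def route(min_quality, max_latency_ms):
    # Filter by hard constraints, then minimise cost among survivors.
    candidates = [m for m in MODELS
                  if m["quality"] >= min_quality
                  and m["latency_ms"] <= max_latency_ms]
    if not candidates:
        raise ValueError("no model satisfies the constraints")
    return min(candidates, key=lambda m: m["cost"])

choice = route(min_quality=0.8, max_latency_ms=1000)
```

In production the quality and latency numbers would come from live telemetry rather than a static table, which is where the reliability aspect of the role enters.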
Posted about 6 hours ago
San Francisco, California, United States
Senior ML Infra Engineer
Permanent | $200,000 - $300,000 per annum
Senior Machine Learning Infra Engineer | San Francisco | Competitive Salary + Equity

Our client is an early-stage AI company building foundation models for physics to enable end-to-end industrial automation, from simulation and design through optimization, validation, and production. They are assembling a small, elite, founder-led team focused on shipping real systems into production, backed by world-class investors and technical advisors.

They are hiring a Machine Learning Cloud Infrastructure Engineer to own the full ML infrastructure stack behind physics-based foundation models. Working directly with the CEO and founding team, you will build, scale, and operate production-grade ML systems used by real customers.

What you will do
- Own distributed training and fine-tuning infrastructure across multi-GPU and multi-node clusters
- Design and operate low-latency, highly reliable inference and model serving systems
- Build secure fine-tuning pipelines allowing customers to adapt models to their data and workflows
- Deliver deployments across cloud and on-prem environments, including enterprise and air-gapped setups
- Design data pipelines for large-scale simulation and CFD datasets
- Implement observability, monitoring, and debugging across training, serving, and data pipelines
- Work directly with customers on deployment, integration, and scaling challenges
- Move quickly from prototype to production infrastructure

What our client is looking for
- 3 years building and scaling ML infrastructure for training, fine-tuning, serving, or deployment
- Strong experience with AWS, GCP, or Azure
- Hands-on expertise with Kubernetes, Docker, and infrastructure-as-code
- Experience with distributed training frameworks such as PyTorch Distributed, DeepSpeed, or Ray
- Proven experience building production-grade inference systems
- Strong Python skills and deep understanding of the end-to-end ML lifecycle
- High execution velocity, strong debugging instincts, and comfort operating in ambiguity

Nice to have
- Background in physics, simulation, or computer-aided engineering software
- Experience deploying ML systems into enterprise or regulated environments
- Foundation model fine-tuning infrastructure experience
- GPU performance optimization experience (CUDA, Triton, etc.)
- Large-scale ML data engineering and validation pipelines
- Experience at high-growth AI startups or leading AI research labs
- Customer-facing or forward-deployed engineering experience
- Open-source contributions to ML infrastructure

This role suits someone who earns respect through hands-on technical contribution, thrives in intense, execution-driven environments, values deep focused work, and takes full ownership of outcomes. The company offers ownership of core infrastructure, direct collaboration with the CEO and founding team, work on high-impact AI and physics problems, competitive compensation with meaningful equity, an in-person-first culture five days a week, strong benefits, daily meals, stipends, and immigration support.
Posted 12 days ago
San Mateo, California, United States
Senior MLOps Engineer
Permanent | $200,000 - $250,000 per annum
Senior MLOps / ML Infrastructure Engineer

About the Company
Our client is a Series B, venture-backed deep-tech company building a Physics AI platform that helps engineering teams bring products to market faster, reduce development risk, and explore better designs with greater confidence. The platform combines large-scale simulation data with modern machine learning to generate high-fidelity predictions of physical behavior in near real time. Customers include leading organizations across aerospace, automotive, and advanced manufacturing, working on some of the most demanding real-world engineering problems.

The Role
This role focuses on building and operating the infrastructure that powers physics-based AI systems at scale. The position enables ML engineers and scientists to train, track, deploy, and monitor models reliably without managing low-level infrastructure. The work sits at the intersection of ML systems, cloud infrastructure, and large-scale simulation data, with a strong emphasis on performance, reliability, and developer productivity. It is a hands-on engineering role in a fast-moving, in-office environment, working closely with ML researchers, platform engineers, and product teams.

What You'll Do
- Design, build, and maintain robust MLOps infrastructure supporting the full ML lifecycle, from experimentation and training through to production deployment and monitoring
- Implement automated training pipelines, experiment tracking, and model lifecycle management using tools such as Kubeflow, MLflow, and Argo Workflows
- Develop scalable data pipelines capable of handling large volumes of unstructured data, particularly 3D geometric data and physics simulation outputs
- Deploy machine learning models into production inference systems with strong standards for performance, reliability, and observability
- Manage model registries and integrate them with CI/CD workflows to support consistent and reliable model releases
- Implement monitoring systems that continuously track model health and performance in production
- Collaborate closely with ML researchers, platform engineers, and product teams to evolve the infrastructure platform for physics-based AI applications
- Write production-grade code and optimize cloud infrastructure, primarily on Google Cloud Platform, making thoughtful trade-offs around scalability, cost, and operational simplicity using Docker and Kubernetes

What We're Looking For
- Bachelor's degree or higher in Computer Science, Data Science, Applied Mathematics, or a closely related field
- 5 years of industry experience building MLOps platforms or ML systems in production environments
- Strong proficiency in Python, with working knowledge of Bash and SQL
- Hands-on experience with cloud infrastructure such as GCP, AWS, or Azure
- Experience with containerization and orchestration tools including Docker and Kubernetes
- Familiarity with modern MLOps frameworks such as Kubeflow, MLflow, and Argo Workflows
- Experience building and maintaining scalable data pipelines, ideally working with unstructured or high-dimensional data
- Ability to independently deploy models and implement monitored inference systems in production
- Comfortable troubleshooting complex distributed systems and building reliable infrastructure that other teams depend on

Nice to Have
- Interest in physics simulation, scientific computing, or HPC environments
- Experience building production MLOps platforms in deep-tech or simulation-heavy environments
- Familiarity with additional programming languages such as Go or C

Working Style and Culture
This role suits someone who enjoys startup environments, learns quickly, and communicates clearly across disciplines. The team works on-site five days a week and values close collaboration, fast feedback loops, and hands-on problem solving. There is a strong belief that great infrastructure should be largely invisible, enabling engineers and scientists to move faster without friction.
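One way to picture the "monitoring systems that continuously track model health" responsibility is a rolling-window accuracy alarm. This is a minimal sketch with invented names and thresholds, not the client's actual stack:

```python
from collections import deque

# Toy model-health monitor: flag unhealthy when rolling accuracy over the
# last `window` predictions drops below `min_accuracy`. (Illustrative only;
# real systems also track drift, latency, and data quality.)
class HealthMonitor:
    def __init__(self, window=100, min_accuracy=0.9):
        self.window = deque(maxlen=window)  # oldest results fall off automatically
        self.min_accuracy = min_accuracy

    def record(self, correct: bool) -> None:
        self.window.append(1 if correct else 0)

    def healthy(self) -> bool:
        if not self.window:
            return True  # no evidence yet; don't alarm on an empty window
        return sum(self.window) / len(self.window) >= self.min_accuracy

mon = HealthMonitor(window=10, min_accuracy=0.8)
for _ in range(9):
    mon.record(True)
mon.record(False)          # rolling accuracy 9/10 = 0.9, still healthy
ok_before = mon.healthy()
for _ in range(3):
    mon.record(False)      # window slides: 6 correct / 10 = 0.6
ok_after = mon.healthy()
```

The `deque(maxlen=...)` gives the sliding window for free; a production version would emit an alert or page rather than return a boolean.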
Posted 12 days ago
California, United States
Founding Machine Learning Engineer
Permanent | $200,000 - $250,000 per annum
Founding Machine Learning Research Engineer (Evaluation & Model Iteration Focus)
Location: Bay Area, Onsite

We're working with a pioneering stealth-stage company in the Bay Area that is redefining how AI is evaluated in healthcare. It was founded by ex-Stanford AI Lab and ex-AWS researchers with deep expertise in representation learning and LLM interpretability.

We are looking for a Founding ML Engineer to:
- Lead investigations into model behavior, failure modes, and uncertainty
- Deliver decision-grade evidence that informs FDA submissions and hospital adoption
- Work directly with medical imaging vendors and hospitals
- Combine hands-on ML skills with strong customer-facing judgment

To succeed in this role, you will need a genuine interest in rigorous evaluation and testing of ML systems, especially in medical AI. This is a high-impact, high-ownership role: your work will directly influence real-world outcomes, FDA approvals, and how high-stakes AI is governed. Compensation includes a competitive salary ($200k - $250k) and meaningful early-stage equity (1-3%). If this sounds like something you'd be excited about, please apply with your resume and we can set up a quick conversation to share more details.
Posted 14 days ago
San Francisco, California, United States
Speech Algorithm Engineer
Permanent | $200,000 - $250,000 per annum
Speech Algorithm Engineer (Speech LLM / SpeechLLM)
$150,000 - $250,000
San Francisco, Hybrid (3x per week in office)
Full time / Permanent

About the Role
This company is already profitable, growing fast, and used by over 1.5M professionals globally. Revenue is tracking at ~$250M in under three years. The product works and is highly marketable; the next step is making its speech system significantly more accurate across languages, industries, and real-world conversations. We're hiring a speech algorithm engineer to improve speaker diarization and keyword recognition in production. This is applied, high-impact work that ships.

What You'll Do
- Improve speaker diarization and multi-language speech recognition accuracy in real customer conversations
- Design and optimize hotword and terminology recognition systems for industry-specific use cases
- Fine-tune and train large speech models on substantial audio datasets
- Build clear evaluation frameworks to measure keyword accuracy and speaker separation performance
- Compare open-source and commercial ASR systems and push performance beyond them
- Work closely with product and engineering to deploy models into live systems used daily

What "Great" Looks Like
- You've trained or fine-tuned speech models on large-scale datasets (not small research-only projects)
- You understand how speech systems behave in noisy, real-world conditions
- You've improved measurable production metrics (accuracy, diarization quality, keyword recall)
- You can read research and turn it into working systems
- You take ownership when performance drops

Note: if your experience is limited to light experimentation or purely academic research without production exposure, this likely won't be a fit.

Why Join
- Profitable company at ~$250M run rate
- Hybrid San Francisco team building both hardware and AI systems
- Real ownership and visibility, not one engineer in a large org
- Global product scale and meaningful datasets
- Clear growth path toward senior technical leadership as the audio function expands
- Strong data security and compliance standards; this is enterprise-grade infrastructure
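At its simplest, the keyword-accuracy evaluation mentioned in the role reduces to recall over a set of expected terms in the ASR transcript. A toy sketch; `keyword_recall` is an invented helper name, and real evaluation would also handle normalisation, multi-word terms, and alignment:

```python
def keyword_recall(reference_keywords, transcript):
    """Fraction of expected keywords that appear in the transcript (toy metric)."""
    words = set(transcript.lower().split())
    hits = [k for k in reference_keywords if k.lower() in words]
    return len(hits) / len(reference_keywords) if reference_keywords else 0.0

# One domain term recognised, one missed by the (hypothetical) ASR output:
score = keyword_recall(["stent", "catheter"], "place the stent near the valve")
```

Tracked per industry vertical over time, a metric like this is what lets a team claim keyword recognition actually improved in production.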
Posted 19 days ago
Massachusetts, United States
Machine Learning Research Scientist
Permanent | $150,000 - $200,000 per annum
Machine Learning Research Scientist
Location: Waltham, MA (Hybrid. Open to exceptional candidates outside Boston willing to spend approximately one week per month on site)

Our client is an early-stage, venture-backed deep-tech company developing next-generation tools for subsurface characterization to accelerate clean energy deployment. Their work sits at the intersection of numerical physics, geoscience, and advanced machine learning, with a specific focus on reducing the cost and uncertainty of geothermal exploration.

Founded by experts in physics and computation, the team is intentionally small, highly technical, and academically rigorous. They value first-principles thinking, intellectual curiosity, and a deep personal commitment to climate and clean energy impact. The company has over two years of runway following a recent pre-seed raise and is preparing for its next funding round.

As a Machine Learning Research Scientist, you will help build research-grade machine learning models that tightly integrate physical laws with data. You will work closely with domain experts in physics simulation and software engineering to translate geophysical insight into principled ML architectures that can be trusted in real-world energy decisions.

This is a selective, fundamentals-driven research role. Our client is not looking for a tooling-only ML profile, but for someone who thinks in mathematics and physics first.

Key Responsibilities
- Develop machine learning models grounded in mathematical and physical principles to augment numerical physics simulations
- Design and implement algorithms that explicitly incorporate differential equations and physical constraints
- Collaborate closely with physicists and engineers to translate geophysical understanding into ML architectures
- Influence the direction of core ML research within a lean, mission-driven team
- Build reproducible research workflows that feed directly into tools for clean energy deployment

Required Experience

Must-Haves
- PhD or equivalent research experience in Mathematics, Physics, or a closely related quantitative field
- Strong mathematical maturity with regular use of linear algebra, differential equations, and numerical methods
- First-principles problem-solving approach rather than reliance on high-level ML abstractions
- Strong Python skills and experience writing clean, research-grade ML code
- Genuine motivation for climate, clean energy, and scientifically meaningful work

Nice-to-Haves
- Experience in scientific machine learning, including PINNs, operator learning, or surrogate modeling
- Background in numerical simulation or high-performance computing
- Exposure to geophysics, subsurface modeling, or energy-domain problems

What Success Looks Like
- You can clearly articulate the why, how, and what of your modeling decisions, particularly where physics and ML intersect
- You produce reproducible research that improves the speed and quality of subsurface predictions
- You contribute to both foundational algorithms and practical tools used by scientists and engineers

Interview Process
- Video interview with the founding team
- On-site interview with the technical team over one full day
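The "algorithms that explicitly incorporate differential equations" responsibility can be illustrated with a minimal physics-informed fit: a quadratic ansatz u(x) = w2·x² + w1·x + w0 trained by gradient descent against the residual of the ODE u'' = 2 plus boundary penalties u(0) = 0, u(1) = 1. Everything here (the ODE, the ansatz, the learning rate) is an invented toy, not the client's method; the exact solution is u(x) = x²:

```python
# Physics-informed loss: (u'' - 2)^2 + boundary penalties, with hand-derived
# gradients. For a quadratic ansatz, u'' = 2*w2 is constant everywhere.
def loss_and_grads(w2, w1, w0):
    r = 2.0 * w2 - 2.0             # ODE residual u'' - 2
    b0 = w0 - 0.0                  # boundary residual u(0) - 0
    b1 = (w2 + w1 + w0) - 1.0      # boundary residual u(1) - 1
    loss = r * r + b0 * b0 + b1 * b1
    g2 = 4.0 * r + 2.0 * b1        # dL/dw2
    g1 = 2.0 * b1                  # dL/dw1
    g0 = 2.0 * b0 + 2.0 * b1       # dL/dw0
    return loss, (g2, g1, g0)

w2 = w1 = w0 = 0.0
lr = 0.1
for _ in range(2000):
    _, (g2, g1, g0) = loss_and_grads(w2, w1, w0)
    w2 -= lr * g2
    w1 -= lr * g1
    w0 -= lr * g0
# Converges to w2 = 1, w1 = 0, w0 = 0, i.e. u(x) = x^2.
```

A real PINN replaces the closed-form ansatz with a neural network, the hand-derived gradients with autodiff, and the single residual with residuals evaluated at many collocation points, but the loss structure is the same.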
Posted 26 days ago
San Francisco, California, United States
LLM Algorithm Tech Lead
Permanent | $200,000 - $300,000 per annum
LLM Algorithm Lead
$200,000 - $300,000
San Francisco, Hybrid
Full-time / Permanent

A product-focused AI start-up is building LLM systems that run in production and are used daily by over a million professionals. This role is responsible for designing, shipping, and maintaining applied LLM systems that support real product features, with an emphasis on reliability, cost, and scale rather than experimentation.

Why This Role Matters
- Own how LLM systems behave in a large, user-facing product
- Make architectural decisions that affect reliability, latency, and cost
- Move LLM features from prototype to stable production systems
- Set technical direction for applied LLM algorithms and evaluation practices

What You'll Do
- Design structured LLM workflows, including planning, reasoning, and multi-step execution
- Build and maintain core components such as memory, personalization, and reusable LLM modules
- Lead development of LLM-powered product features from design through production
- Build and optimize retrieval pipelines (RAG) via chunking, indexing, reranking, and evaluation
- Select and route between models based on performance, cost, and latency constraints
- Define evaluation metrics, monitoring, and feedback loops
- Debug production issues and drive algorithm-level improvements

What You Bring
- Experience shipping LLM-based systems into production
- Strong understanding of prompting, reasoning workflows, and system design
- Hands-on experience with RAG systems
- Experience building evaluation, monitoring, or safety mechanisms
- Ability to lead technical decisions and guide other engineers
- Experience with inference optimization, efficiency, or large-scale systems is a plus
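The retrieval-pipeline responsibility mentioned for this role (chunking, indexing, reranking, evaluation) can be sketched end to end. This toy version uses bag-of-words cosine overlap in place of a real embedding model and vector index, and all names and documents are illustrative:

```python
import math
from collections import Counter

def chunk(text, size=40):
    """Split text into fixed-size word chunks (a deliberately naive strategy)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Bag-of-words counts standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Score every chunk against the query and return the top-k.
    (Here 'reranking' is just the sort; real systems rerank a candidate set
    from an approximate index with a stronger model.)"""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

docs = [
    "Invoices are due within 30 days of issue.",
    "Refunds are processed in 5 business days.",
    "Support is available 24/7 via chat.",
]
chunks = [c for d in docs for c in chunk(d)]
top = retrieve("when are invoices due", chunks, k=1)
```

Each stage (chunking granularity, embedding model, reranker, evaluation set) is independently tunable, which is why the role frames RAG optimisation as its own workstream.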
Posted 30 days ago
San Francisco, California, United States
Applied AI Engineer
Permanent | $200,000 - $300,000 per annum
AI Applied Engineer
$200,000 - $300,000
San Francisco, Hybrid
Permanent / Full-time

A product-led AI start-up is building one of the most widely adopted AI work companions in the world, operating at massive real-user scale with millions of professionals relying on it daily. The challenge now is designing AI systems that reliably support complex knowledge work across preparation, collaboration, and follow-through, inside products people trust. This role is ideal for someone who wants to work across AI engineering and product thinking, and ultimately shape how AI actually shows up in day-to-day professional workflows.

Why This Role Matters
- Own how AI supports high-stakes knowledge work
- Design multi-step AI workflows that users rely on repeatedly
- Help define how agent-like systems behave inside a consumer-grade product
- Work beyond prompt design into evaluation, iteration, and reliability

What You'll Do
- Own the end-to-end design of AI-first workflows for preparation, collaboration, and follow-up
- Design and iterate multi-step LLM / agentic systems, spanning intent understanding, planning, tool invocation, memory usage, and refinement loops
- Build reusable AI skills, prompts, templates, and evaluation pipelines that can power multiple product experiences
- Define success metrics for AI behaviour, run experiments, and use real interaction data to improve usefulness and reliability
- Partner closely with engineering and ML teams to ship quickly while maintaining a high bar for product quality and user experience

What You Bring
- Proven experience shipping AI/ML powered products end to end
- Strong working understanding of LLM systems: prompting, tool calling, retrieval, context construction, evaluation, and common failure modes
- Ability to translate user needs into clear flows, specs, and examples, including edge cases and expected behaviours
- Comfort working directly with data and interaction logs to debug issues and compare variants
- Hands-on experience designing agent-like workflows involving multi-step plans, multiple tools, and refinement or self-correction
Posted 30 days ago