We are fully licensed across the UK, Ireland, Switzerland, Germany and the USA, enabling us to support customers with compliant cross-border talent acquisition.

The East Coast home of DeepRec.ai. From our Boston office, we provide staffing solutions for North America's best-in-class Deep Tech ecosystem.
Hi, I'm Hayley Killengrey
Co-Founder & MD USA

CUSTOMERS SUPPORTED IN BOSTON

MEET THE TEAM

Anthony Kelly

Co-Founder & MD EU/UK

Hayley Killengrey

Co-Founder & MD USA

Nathan Wills

Senior Consultant | Switzerland

Paddy Hobson

Senior Consultant | DACH

Sam Oliver

Senior Consultant | Contract DACH

Jonathan Harrold

Consultant | Germany

Harry Crick

Consultant | USA

Sam Warwick

Senior Consultant | Geospatial, Earth & Defence Technology

Benjamin Reavill

Consultant | USA

Viki Dowthwaite

Commercial Director

Helena Sullivan

CMO

Marita Harper

HR Partner

Micha Swallow

Head of Talent, People, & Performance

Aaron Gonsalves

Head of Talent

Market Guide

Our annual market guides are built with fresh insights from our global talent network to support anyone hoping to benchmark salaries in the US, align remuneration with the wider market, or learn more about the Deep Tech trends shaping North America. Please let the team know if you'd like a copy.

CONTACT THE TEAM

INSIGHTS

Earth Observed: Accountability from Above

Earth Observed: Spatial Thinking

What Does Product Market Fit Look Like for Earth Observation?

LATEST JOBS

San Francisco, California, United States
Member of Technical Staff (Pre-Training)
Member of Technical Staff - Pre-Training (Remote US)

This is an opportunity to join one of the smartest, most ambitious teams in the AI space. Founded in 2023, this fast-growing research and product company is already being talked about alongside some of the biggest names in foundational model development. They're building powerful, intelligent agent systems and frontier-scale models, and they believe software engineering is the most direct path toward achieving AGI. With major backing from industry leaders, significant compute infrastructure, and a focus on mission-critical enterprise and public-sector environments, they're tackling some of the hardest AI challenges out there.

The Role
As a Member of Technical Staff (Pre-Training / Data), you'll be part of a high-performing Data team inside the Applied Research machinery that powers the company's pre-training and reinforcement learning breakthroughs. Your goal: build the datasets that make better models possible. This is a hands-on, deeply technical role at the intersection of data engineering, research, and large-scale systems.

What You'll Do
- Build, scale, and refine huge datasets of natural language and source code to train next-gen language models
- Work closely with pre-training, RL, and infrastructure teams to validate your work through fast feedback loops
- Stay ahead of the curve on data generation, curation, and pre-training strategies
- Develop systems to ingest, filter, and structure billions of tokens across diverse sources
- Design controlled experiments that help uncover what works and what doesn't
- Be a core voice in shaping how the team approaches data for model training, a vital part of their long-term AGI mission

What You Bring
- Solid hands-on experience with large language models or large-scale ML systems
- Strong track record building or working with massive datasets, from raw extraction through to filtering and packaging
- Exposure to training models from scratch, ideally using distributed GPU clusters
- Proficiency in Python and ML frameworks like PyTorch or JAX, plus confidence working in Linux, Git, Docker, and cloud/HPC environments
- Great if you also have some C++/CUDA, Triton kernel, or GPU debugging background
- A thinker and a builder: someone who can read the latest paper and turn it into something real, quickly

What's In It for You
- Fully remote (US)
- 37 days of paid time off annually
- Comprehensive health cover for you and your dependents
- Monthly team meetups, with travel, accommodation, and even family attendance covered
- Home office and wellbeing budget
- A competitive salary plus meaningful equity
- The chance to work with some of the brightest minds in AGI and do genuinely original work

What the Process Looks Like
- Recruiter intro call
- First technical interview focused on LLMs, performance, or core engineering skills
- Second technical deep dive into your domain (pre-training, data, scaling, etc.)
- Culture conversation with the founding engineers
- Final discussion on compensation and alignment

If you're driven by building systems that could reshape how intelligence works, and you want to be surrounded by people who share that fire, this team is where you belong.
Sam Warwick
Toronto, Ontario, Canada
Member of Technical Staff (Frontend)
Member of Technical Staff - Frontend (React.js, Next.js)
Location: Toronto, Canada (Hybrid)
Type: Full-time, Permanent

Overview
Our client (Series A, GenAI Content Platform) is hiring a core frontend engineer in Toronto to architect and scale their browser-based animation and video generation interface. You'll own the React.js / Next.js web app powering AI-driven content creation for a fast-growing global user base.

Responsibilities
- Lead frontend feature development using React.js and Next.js (SSR, ISR, SSG)
- Implement state management patterns (Zustand, Redux, Jotai, etc.)
- Integrate with REST/GraphQL APIs and real-time ML-driven backend endpoints
- Optimise bundle size, rendering, hydration, and caching across devices and network profiles
- Build robust testing pipelines (Jest, React Testing Library, Cypress / Playwright)
- Establish observability for UI performance, error tracking, and release health
- Refactor and modularise code for scaling and improved developer experience
- Collaborate closely with backend and ML teams on product UX and performance

Requirements
- 5+ years' professional frontend experience
- Expert-level skills in React.js, Next.js, TypeScript, and modern web standards (ES6+, CSS-in-JS, etc.)
- Track record building and deploying production-grade, customer-facing applications
- Strong grasp of rendering lifecycles, VDOM internals, hydration, and frontend performance tuning
- Familiarity with edge compute and deployment (Vercel, Cloudflare Workers) and caching (SWR, ISR, CDNs)
- Bonus: experience with browser media pipelines (Canvas, WebGL, streaming, WebCodecs)
- Previous start-up or 0-1 product engineering experience preferred
Sam Warwick
California, United States
Member of Technical Staff (ML Infrastructure/Inference)
Member of Technical Staff - Machine Learning Infrastructure / High-Performance Inference Engine

I'm working with a well-funded AI research company building the technical foundations for a new class of embodied agents and digital humans: systems designed with genuine, human-like qualities that can interact, collaborate, and form real connections with people. Their long-term aim is to scale this work into multi-agent simulations and entire societies of autonomous AI entities.

As their Member of Technical Staff (ML Infrastructure), you'd design and scale the platforms that make this possible, from high-performance inference engines to distributed training pipelines and large-scale compute clusters that power intelligent, interactive AI systems. You'd work closely with researchers and product engineers to push the limits of inference performance, strengthen the foundations for agentic AI, and evolve the next generation of training and post-training pipelines.

Responsibilities:
- Accelerate research velocity by enabling SOTA experimentation from day one
- Build and optimize the full model training pipeline, including data collection, data loading, SFT, and RL
- Design and optimize a high-performance inference platform leveraging both open-source and proprietary engines
- Develop and scale technologies for large-scale cluster scheduling, distributed training, and high-performance AI networking
- Drive engineering excellence across observability, reliability, and infrastructure performance
- Partner with research and product teams to turn cutting-edge ideas into robust, production-ready systems

Qualifications:
- Expertise in one or more of: inference engines, GPU optimization, cluster scheduling, or cloud-native infrastructure
- Proficiency with modern ML frameworks such as PyTorch, vLLM, Verl, or similar
- Experience building scalable, high-performance systems used in production
- Start-up mindset: adaptable, fast-moving, and high-ownership

Why This Opportunity Stands Out:
- Elite founding team: engineers and researchers from MIT, Stanford, Google X, Citadel, and top AI labs
- Strong funding and backing: over $40M raised from Prosus, First Spark Ventures, Patron, and notable investors including Patrick Collison and Eric Schmidt
- Serious traction: their flagship AI companion product has already achieved significant user growth and is generating real revenue
- Impact and autonomy: a flat, fast-moving environment where you'll own critical systems and ship meaningful work within weeks
- Longevity in vision: this company is not chasing quick exits; they're deliberately building what they believe will be a historic company, with long-lasting influence on how humans and AI interact
Sam Warwick
San Jose, California, United States
ML Manager
Head of Machine Learning - Video Generation
Bay Area, Hybrid | Up to $300K + Equity

We're building next-gen video generation models and looking for a Head of ML to lead our growing research team. You'll define research direction, drive product integration, and scale frontier visual models into production.

What you'll do:
- Lead and mentor a team of 5+ ML engineers and researchers
- Set the research roadmap and strategy for video/image generation models
- Guide end-to-end development from experimentation to deployment
- Collaborate with product teams to ship scalable, high-impact features

What we're looking for:
- Strong background in video or image generation (diffusion, transformers, or similar)
- Proven ability to lead and inspire ML research teams
- Hands-on experience building and deploying generative models in production
- Passion for pushing boundaries in visual AI

This is a chance to shape the technical vision of a frontier AI company at the cutting edge of creativity and computation.
Harry Crick
New York, United States
AI Customer Success Engineer
AI Solutions Engineer / AI Solutions Manager - Visual Generative AI
$150,000 - $180,000
New York - 3x days WFH

We're partnered with a pioneering enterprise-grade Visual GenAI platform that has been shaping the responsible AI landscape for over five years. Backed by leading global investors and trusted by some of the world's largest brands, this company is building the infrastructure for the next generation of safe, scalable, and brand-consistent AI-driven products and services.

Now they're looking for a Technical AI Solutions Engineer / Solutions Manager to help enterprise customers unlock the full value of their platform. You'll be the bridge between customers and internal teams, ensuring adoption, ROI, and long-term success. You'll work hands-on with developers, product managers, and technical decision-makers, helping them integrate cutting-edge generative visual AI into their workflows.

Why Join?
- Be part of a category-defining AI company that's shaping the future of responsible GenAI
- Work with globally recognized enterprise clients across multiple industries
- Have a direct impact on product evolution through customer insights
- Join a team at the forefront of AI innovation with deep expertise in generative visual technologies

What You'll Be Doing
- Own the post-sales relationship with a portfolio of technical enterprise customers
- Guide onboarding, technical enablement, and adoption of the platform
- Act as a trusted advisor, helping customers translate technical capabilities into business impact
- Work directly with client engineering teams to integrate APIs and platform capabilities
- Troubleshoot across the stack (APIs, model behaviour, latency, deployment)
- Lead workshops, demos, and training sessions for technical stakeholders
- Gather and translate customer feedback into product improvements
- Collaborate with Product, Engineering, Sales, and Support to advocate for customer needs

Requirements
Must-Haves
- 3-6 years in a SaaS technical customer-facing role (Customer Success Engineer, Solutions Engineer, or Technical Account Manager)
- Proficiency with API integrations, scripting (Python or JavaScript), and cloud platforms (AWS, GCP, or Azure)
- Strong grasp of machine learning and generative AI fundamentals, especially computer vision
- Ability to translate technical capabilities into business value
- Proven success supporting technical customers at a B2B SaaS or AI/ML platform company
- Excellent communication, relationship management, and organizational skills

Nice-to-Haves
- Hands-on experience with generative models (diffusion, transformers) or visual AI systems
- Familiarity with MLOps workflows, model deployment, or on-device inference
- Exposure to creative industries (media, marketing, design)
- Background in data science, software engineering, or applied AI
Benjamin Reavill
Boston, Massachusetts, United States
Machine Learning Engineer (LLM)
Machine Learning Engineer (LLM)
$160,000 - $200,000+ (DOE)
Boston, 2 days per week in-office

We're working with a fast-growing AI company on a mission to automate complex workflows in the financial services sector, starting with insurance. Their technology leverages cutting-edge AI to simplify high-value processes, from multi-turn conversations to full workflow automation.

As an ML Engineer focused on LLMs, you'll be building and scaling advanced AI systems that power intelligent, multi-agent workflows. You'll take ownership of designing, fine-tuning, and productionizing large language models, integrating them with backend systems, and optimizing their performance. You'll collaborate closely with data science, DevOps, and leadership to shape the AI infrastructure that drives the company's automation solutions.

What You'll Do:
- Build, fine-tune, and productionize large language model (LLM) pipelines, including PEFT, RLHF, and DPO workflows
- Develop APIs, data pipelines, and orchestration systems for multi-agent, multi-turn AI conversations
- Integrate models with backend services, including voice orchestration platforms and transcript generation
- Optimize model usage and efficiency, transitioning from external APIs to in-house solutions
- Collaborate cross-functionally with data scientists, DevOps, and leadership to deliver scalable machine learning solutions

What We're Looking For:
Essential Skills & Experience:
- Strong proficiency in Python and ML frameworks (e.g., scikit-learn, TensorFlow, PyTorch)
- Hands-on experience fine-tuning LLMs (Hugging Face, PEFT)
- Familiarity with AWS (especially S3 for model management) and deploying ML models to production
- Ability to reason deeply about ML principles, architectures, and design choices
- Knowledge of multi-agent orchestration and conversational AI systems

Desirable Skills & Experience:
- Experience with RLHF or preference optimization
- Background in voice AI, speech-to-text, or text-to-speech systems
- Exposure to financial services or insurance applications
- Familiarity with optimizing models for long-context scenarios

If you'd like to hear more, please apply or get in touch!
Benjamin Reavill
Remote work, United States
AI Evaluation Engineer
AI Evaluation Engineer
$160,000 - $180,000
Remote (US-based)

Are you passionate about shaping how AI is deployed safely, reliably, and at scale? This is a rare opportunity to join a mission-driven tech company as their first AI Evaluation Engineer, a foundational role where you'll design, build, and own the evaluation systems that safeguard every AI-powered feature before it reaches the real world.

This organization builds AI-enabled products that directly help governments, nonprofits, and agencies deliver financial support to the people who need it most. As AI capabilities race forward, ensuring these systems are safe, accurate, and resilient is critical. That's where you come in.

You won't just be testing models; you'll be creating the frameworks, pipelines, and guardrails that make advanced LLM features safe to ship. You'll collaborate with engineers, PMs, and AI safety experts to stress-test boundaries, uncover weaknesses, and design scalable evaluation systems that protect end users while enabling rapid innovation.

What You'll Do
- Own the evaluation stack: design frameworks that define "good," "risky," and "catastrophic" outputs
- Automate at scale: build data pipelines and LLM judges, and integrate with CI to block unsafe releases
- Stress testing: red-team AI systems with challenge prompts to expose brittleness, bias, or jailbreaks
- Track and monitor: establish model/prompt versioning, build observability, and create incident response playbooks
- Empower others: deliver tooling, APIs, and dashboards that put evals into every engineer's workflow

Requirements:
- Strong software engineering background (TypeScript a plus)
- Deep experience with the OpenAI API or similar LLM ecosystems
- Practical knowledge of prompting, function calling, and eval techniques (e.g. LLM grading, moderation APIs)
- Familiarity with statistical analysis and validating data quality/performance
- Bonus: experience with observability, monitoring, or data science tooling
Benjamin Reavill