Jacob Graham


Jacob specialises in relationship-driven recruitment focused on MLOps, Infrastructure, and Engineering across the DACH region for DeepRec.

He began his recruitment career in CRM, successfully placing across the full spectrum of Salesforce specialists into partners, ISVs, and end users throughout multiple geographies. Jacob then transitioned into data recruitment, delivering niche SME hires and high-volume contract placements, primarily within investment banking.

Today, he combines market intelligence with targeted headhunting, systematically mapping the market to understand who’s building, hiring, and advancing ML infrastructure. 

Every introduction is intentional, informed, and backed by data.

JOBS FROM JACOB

Spain
MLOps Engineer
Location: Barcelona (Hybrid)
Contract: Fixed-term until June 2026
Salary: €55,000 base pro rata
Bonuses: €3,000 sign-on; €500/month retention bonus
Relocation: €2,000 package available
Eligibility: EU work authorisation required

The opportunity

We’re hiring an MLOps Engineer to join a fast-scaling European deep-tech company working at the forefront of AI model efficiency and deployment. This team is solving a very real problem: how to take large, cutting-edge language models and run them reliably, efficiently, and cost-effectively in production. Their technology is already live with major enterprise customers and is reshaping how AI systems are deployed at scale.

This is a hands-on engineering role with real ownership. You’ll sit close to both research and production, helping turn advanced ML into systems that actually work in the real world.

What you’ll be working on

- Building and operating end-to-end ML and LLM pipelines, from data ingestion and training through to deployment and monitoring
- Deploying production-grade AI systems for large enterprise customers
- Designing robust automation using CI/CD, GitOps, Docker, and Kubernetes
- Monitoring model performance, drift, latency, and cost, and improving reliability over time
- Working with distributed training and serving setups, including model and data parallelism
- Collaborating closely with ML researchers, product teams, and DevOps engineers to optimise performance and infrastructure usage
- Managing and scaling cloud infrastructure (primarily Azure, with some AWS exposure)

Tech you’ll be exposed to

- Python for ML and backend systems
- Cloud platforms: Azure (AKS, ML services, CycleCloud, Managed Lustre), plus AWS
- Containerisation and orchestration: Docker, Kubernetes
- Automation and DevOps: CI/CD pipelines, GitOps
- Distributed ML tooling: Ray, DeepSpeed, FSDP, Megatron-LM
- Large language models such as GPT-style models, Llama, Mistral, and similar

What they’re looking for

- 3 years’ experience in MLOps, ML engineering, or LLM-focused roles
- Strong experience running ML workloads in public cloud environments
- Hands-on background with production ML pipelines and monitoring
- Solid understanding of distributed training, parallelism, and optimisation
- Comfortable working across infrastructure, ML, and engineering teams
- Strong English communication skills; Spanish is a plus but not required

Nice to have

- Experience with mixture-of-experts models
- LLM observability, inference optimisation, or API management
- Exposure to hybrid or multi-cloud environments
- Real-time or streaming ML systems

Why this role stands out

- Work on AI systems that are already in production with global customers
- Tackle real infrastructure and scaling challenges, not toy problems
- Competitive salary plus meaningful bonuses
- Hybrid setup in Spain with relocation support
- Join a well-funded, high-growth deep-tech environment with long-term impact
Jacob Graham

INSIGHTS FROM JACOB

Earth Observed | Reducing Friction Between EO Providers

Trinnovo Group Impact Report 2025 | How We Work