MLOps Engineer

Location: Barcelona (Hybrid)
Contract: Fixed-term until June 2026
Salary: €55,000 base pro rata
Bonuses: €3,000 sign-on; €500/month retention bonus
Relocation: €2,000 package available
Eligibility: EU work authorisation required

The opportunity

We're hiring an MLOps Engineer to join a fast-scaling European deep-tech company working at the forefront of AI model efficiency and deployment. This team is solving a very real problem: how to take large, cutting-edge language models and run them reliably, efficiently, and cost-effectively in production. Their technology is already live with major enterprise customers and is reshaping how AI systems are deployed at scale.

This is a hands-on engineering role with real ownership. You'll sit close to both research and production, helping turn advanced ML into systems that actually work in the real world.

What you'll be working on

- Building and operating end-to-end ML and LLM pipelines, from data ingestion and training through to deployment and monitoring
- Deploying production-grade AI systems for large enterprise customers
- Designing robust automation using CI/CD, GitOps, Docker, and Kubernetes
- Monitoring model performance, drift, latency, and cost, and improving reliability over time
- Working with distributed training and serving setups, including model and data parallelism
- Collaborating closely with ML researchers, product teams, and DevOps engineers to optimise performance and infrastructure usage
- Managing and scaling cloud infrastructure (primarily Azure, with some AWS exposure)

Tech you'll be exposed to

- Python for ML and backend systems
- Cloud platforms: Azure (AKS, ML services, CycleCloud, Managed Lustre), plus AWS
- Containerisation and orchestration: Docker, Kubernetes
- Automation and DevOps: CI/CD pipelines, GitOps
- Distributed ML tooling: Ray, DeepSpeed, FSDP, Megatron-LM
- Large language models such as GPT-style models, Llama, Mistral, and similar

What they're looking for

- 3 years' experience in MLOps, ML engineering, or LLM-focused roles
- Strong experience running ML workloads in public cloud environments
- Hands-on background with production ML pipelines and monitoring
- Solid understanding of distributed training, parallelism, and optimisation
- Comfortable working across infrastructure, ML, and engineering teams
- Strong English communication skills; Spanish is a plus but not required

Nice to have

- Experience with mixture-of-experts models
- LLM observability, inference optimisation, or API management
- Exposure to hybrid or multi-cloud environments
- Real-time or streaming ML systems

Why this role stands out

- Work on AI systems that are already in production with global customers
- Tackle real infrastructure and scaling challenges, not toy problems
- Competitive salary plus meaningful bonuses
- Hybrid setup in Spain with relocation support
- Join a well-funded, high-growth deep-tech environment with long-term impact
Jacob Graham