THANK YOU FOR YOUR APPLICATION

One of our DeepRec Consultants will be in touch with you soon.

OTHER JOBS YOU MAY LIKE

San Francisco, California, United States
Simulation Engineer
Simulation Engineer
Location: Onsite - Bay Area
Company: High-growth AI startup (stealth / early-stage)
Focus: Physics-based simulation to ML-driven systems

Overview
Our client is building a new class of AI systems designed to understand and operate within real-world physical environments. The company sits at the intersection of simulation, machine learning, and industrial systems, with a focus on turning high-fidelity simulation data into scalable, production-grade intelligence.
They are hiring Simulation Engineers across multiple domains who can bring deep subject-matter expertise and translate complex physical systems into computational models that can be learned, optimised, and deployed. This is not a pure research role. It is for engineers who have built and used simulation systems in real-world environments and understand how those systems behave under production constraints.

Key Areas of Hiring
Candidates should come from one of the following domains:
- Bioreactors / Bioengineering (top priority)
- CFD / Fluid Dynamics (medical devices or industrial systems)
- Aerospace (flight physics, aerodynamics, control systems)
- Fixed-Wing Drones / UAVs
- Aviation (commercial or defence aircraft systems)
- Space / Rocket Systems

What You’ll Do
- Develop and apply high-fidelity simulation models across fluid, structural, thermal, biological, or aerodynamic systems
- Translate simulation outputs into ML-compatible datasets and representations
- Work closely with ML and AI teams to enable surrogate modelling, optimisation, and system-level learning
- Improve simulation performance, scalability, and reliability across large-scale compute environments
- Design end-to-end pipelines from simulation through to data generation, model training, and deployment
- Validate and calibrate models against real-world data where available

What They’re Looking For
Core Requirements:
- Strong background in simulation engineering within a real-world domain
- Experience with tools such as OpenFOAM, ANSYS Fluent, STAR-CCM+, Abaqus, ANSYS Mechanical, COMSOL
- Experience building or working with custom simulation frameworks (C++, Python, MATLAB, or similar)
- Solid understanding of physics-based modelling (fluids, thermodynamics, structural mechanics, control systems, or bio-systems)
- Experience working with large-scale simulations or HPC environments
Preferred:
- Exposure to ML workflows (PyTorch, TensorFlow, surrogate models, optimisation loops)
- Experience generating or working with synthetic data from simulations
- Familiarity with distributed compute, GPU acceleration, or cloud-based simulation pipelines
- Background in companies such as:
  Medical Devices: Stryker, Medtronic, Boston Scientific, Zimmer Biomet
  Drones/UAVs: Skydio, DJI, Autel, Parrot
  Aerospace/Aviation: Boeing, Airbus, Joby, defence organisations
  Space: SpaceX, Relativity Space, NASA, Project Kuiper, Muon Space

What Makes This Different
- You are helping turn simulation into intelligence, not just running models
- Direct exposure to next-generation AI systems grounded in physics
- Opportunity to work across multiple industries and problem domains
- High ownership in shaping how simulation integrates into AI systems for the physical world

Ideal Profile
- Domain expert first, not a generalist
- Has built simulations that informed real-world decisions
- Comfortable operating in ambiguous, early-stage environments
- Interested in bridging physics and machine learning

Hiring Priority
1. Bioreactors / Bio-simulation (urgent)
2. CFD / Fluid systems
3. Aerospace / UAV
4. Aviation
5. Space systems
Sam Warwick
Egg b. Zürich, Switzerland
Lead MLOps Engineer
Lead MLOps Engineer
Zurich, Switzerland | Hybrid (1 day on-site per week)
AI Consultancy
CHF 130k – 136k + Bonus

Overview
This is a hands-on MLOps / software engineering role within an AI consultancy that delivers production ML systems into enterprise environments. You’ll be embedded inside the pricing team of a large insurance company, working on an existing project where more engineering capacity is needed to get models into production and keep them running. The core problem is straightforward: models exist, and now they need to be deployed, scaled, and made reliable on GCP.

What you’ll be doing
You own the path from trained model to production system. That includes building MLOps pipelines, deploying models into GCP, and ensuring they’re stable, monitored, and usable once live. You’ll work closely with data scientists on one side and stakeholders on the other to make sure what’s built actually runs and delivers value.

Environment
You’ll join a lean team: the Head of AI Engineering plus one engineer. The consultancy model means you’re embedded with the client for long-term engagements (typically 1–2 years), so you stay close to the systems you build and continue improving them over time. You’ll work directly with both technical and non-technical stakeholders, with no layers in between.

What this offers you
- Ownership of ML deployment into production on GCP
- Work on systems used day-to-day by the business (pricing decisions)
- Small team, high visibility, direct access to leadership
- Long-term project continuity (1–2 years)
- Clear progression into an Architect / Principal or consulting track

What you bring
You’ve deployed ML systems into production and know where things tend to break. You’re strong on GCP, can operate independently, and are comfortable working directly with stakeholders. You care about system reliability, clean architecture, and making ML systems actually work in production.

Compensation
- Base: CHF 130k – 136k
- Bonus: typically ~1 month’s salary
- CHF 2,500 annual learning budget
- CHF 1,000 annual wellness budget
Jacob Graham
London, Greater London, South East, England
AI Researcher
AI Research Engineer – Contract
Location: London – 3 days onsite
Contract: 6 months (extensions possible)
Start: ASAP

Overview
Leading tech company hiring a Research Engineer to work on LLM agents and machine learning systems within an AI research team.

Key Responsibilities
- Build and optimise machine learning models and LLM-based agents
- Design experiments, tools, and infrastructure for large-scale AI systems
- Write production-quality Python code alongside researchers and engineers
- Translate research into practical, scalable solutions

Skills & Experience
- 8+ years’ experience in software engineering / machine learning
- Strong Python and PyTorch experience
- Experience with LLMs, generative AI, or large-scale ML systems
- Ability to work at pace in a research-driven environment

Qualifications
- Degree in Computer Science, Machine Learning, or related field
- Advanced degree (PhD/Master’s) in ML, Maths, or Physics is a plus
Sam Oliver
Beijing, China
Humanoid Robotics Product Manager (Mandarin Speaking) RELOCATION TO DUBAI
Job Title: Humanoid Robotics Product Manager (Mandarin Speaking)
📍 Location: Dubai, UAE (Relocation Required)
💰 Salary: $5,000 – $10,000 per month (tax-free)

A global industrial technology organisation is building a new robotics and automation division in Dubai, focused on deploying humanoid robots in real industrial environments such as warehouses, factories, construction sites, and maintenance operations. We are looking for a Humanoid Robotics Product Manager to lead the development and deployment of these systems, translating cutting-edge robotics capabilities into scalable commercial products. This role sits at the intersection of robotics engineering, product strategy, and industrial operations, working closely with robotics engineers, AI teams, and enterprise customers. Fluent Mandarin Chinese and English are mandatory for this role.

Key Responsibilities
• Define the product strategy and roadmap for humanoid robotics solutions in industrial environments
• Identify high-value use cases across logistics, manufacturing, construction, and inspection
• Translate real-world operational challenges into product requirements
• Work closely with robotics hardware, AI/software, and autonomy teams
• Lead product development from concept → pilot → large-scale deployment
• Manage industrial pilots and convert them into scalable commercial offerings
• Support go-to-market strategy, customer engagements, and strategic enterprise deals

Requirements
• Fluent Mandarin Chinese and English
• 5+ years of experience in product management or technical leadership in robotics / automation
• Strong understanding of industrial environments (manufacturing, logistics, construction, etc.)
• Experience delivering hardware + software products from concept to deployment
• Degree in Robotics, Engineering, Computer Science, or related field
• Willingness to relocate to Dubai and travel internationally when required

Nice to Have
• Experience with humanoid robots or mobile manipulation systems
• Familiarity with robot autonomy, perception, and control architectures
• Experience launching robotics products into industrial markets

🚀 Why this role?
• Work on cutting-edge humanoid robotics deployments
• Join a fast-growing robotics initiative within a major global industrial group
• Tax-free salary in Dubai
• Opportunity to build products deployed across global industrial markets
Paddy Hobson
Greater London, South East, England
ML Tech Lead - Multimodal AI
Job Title: ML Tech Lead – Multimodal AI
Location: London / Remote (Europe considered)
Compensation: Competitive Base Salary + Bonus + Travel Allowance

About the Role
We are seeking a hands-on ML Tech Lead to build and lead a brand-new team in a recently created, well-funded AI initiative. You’ll be responsible for shaping the direction of a cutting-edge platform for AI-driven video search and discovery, combining audio, video, and text data. This is a high-visibility role with the chance to impact creative teams and artists globally.

Key Responsibilities
- Lead a multidisciplinary team of backend, frontend, and AI engineers
- Architect and develop a multimodal AI search platform (video, audio, text)
- Design and build scalable content ingestion, indexing, and retrieval systems
- Integrate ML models into production search infrastructure (vector search, Elasticsearch/OpenSearch)
- Mentor engineers and foster a high-impact, collaborative team environment
- Deliver robust, production-ready systems on modern cloud infrastructure

What We’re Looking For
- Strong experience in machine learning, especially multimodal models
- Hands-on technical expertise in building large-scale search or recommendation systems
- Proficiency in cloud-based architectures and scalable production systems
- Leadership experience: building, mentoring, and guiding engineering teams
- Passion for music, media, or creative technology is a plus

Why This Role is Exciting
- Lead a newly created, high-impact initiative within a global entertainment leader
- Work with massive audiovisual datasets and state-of-the-art AI technology
- Shape tools that directly support artists, creative teams, and content discovery
- Be part of a well-funded, forward-thinking AI lab with long-term growth opportunities
Jonathan Harrold
Hertfordshire, South East, England
Lead AI Engineer
Lead AI Engineer
Location: United Kingdom, Remote or Hybrid
Salary: Competitive, dependent on experience

About the Company
An established global technology platform specialising in digital publishing and print infrastructure is launching a new AI venture focused on transforming how printed content is created. The company is building an AI-powered design editor that allows users to generate fully customised, print-ready designs directly from natural language prompts. Instead of relying on templates, the platform generates original layouts, graphics, and structured outputs automatically. This is a newly formed AI venture backed by an existing global platform, offering the opportunity to build a product from the ground up while benefiting from the support and infrastructure of an established organisation.

The Role
The company is now hiring a Lead AI Engineer to design and build the backend systems that power the AI workflows behind the product. This role sits at the intersection of backend engineering and applied AI. You will be responsible for architecting scalable AI systems that translate user prompts into structured, production-ready design outputs. You will work closely with product and engineering teams to develop the core AI infrastructure, ensuring the system is reliable, scalable, and capable of supporting high-traffic production workloads.

Key Responsibilities
- Design and build backend systems supporting AI-driven design generation
- Develop and maintain LLM pipelines and agent-based AI workflows
- Architect scalable APIs and services using Python frameworks
- Build infrastructure that supports high-volume AI inference and real-time workloads
- Collaborate with product teams to translate user prompts into structured outputs and workflows
- Ensure reliability, observability, and performance across AI systems
- Contribute to the technical direction of a new AI product from its earliest stages

Required Experience
- Strong experience building production backend systems using Python
- Experience working with FastAPI, Pydantic, or similar modern Python frameworks
- Hands-on experience building or deploying LLM-based systems
- Experience with agent architectures, RAG systems, or LLM pipelines
- Strong understanding of scalable system design and distributed infrastructure
- Experience deploying machine learning or AI systems into production environments

Desirable Experience
- Experience with generative AI systems or content generation workflows
- Experience with containerisation and cloud infrastructure
- Previous experience in startup or early-stage product environments

Why Join
- Opportunity to help build a new AI product from the ground up
- Backed by an established global technology platform
- Work on cutting-edge generative AI systems applied to real-world creative workflows
- Small, highly technical team with significant ownership and impact
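To make "translate user prompts into structured, production-ready design outputs" concrete, here is a minimal sketch of the validation pattern that roles like this typically rely on, using Pydantic as the listing mentions. The schema, field names, and units are hypothetical illustrations, not this company's actual product:

```python
# Hypothetical schema for a print-ready design spec. An LLM pipeline would
# emit JSON matching this shape; Pydantic validation rejects malformed
# model output before it ever reaches layout or rendering code.
from typing import List
from pydantic import BaseModel


class TextBlock(BaseModel):
    content: str
    font_size_pt: float
    x_mm: float  # position on the page, millimetres
    y_mm: float


class DesignSpec(BaseModel):
    page_width_mm: float
    page_height_mm: float
    blocks: List[TextBlock]


# Simulated LLM output (in production this would be parsed model JSON).
raw = {
    "page_width_mm": 210.0,   # A4
    "page_height_mm": 297.0,
    "blocks": [
        {"content": "Hello", "font_size_pt": 24.0, "x_mm": 20.0, "y_mm": 30.0},
    ],
}

# Nested dicts are coerced into the typed models; a missing or
# wrongly-typed field raises a ValidationError here instead of downstream.
spec = DesignSpec(**raw)
print(spec.blocks[0].content)
```

The same schema can serve double duty as the response model of a FastAPI endpoint, which is one common way to wire such a pipeline into a backend service.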
Nathan Wills
Baden-Württemberg, Germany
LLM Performance Engineer
LLM Performance Engineer
Baden-Württemberg
Remote, with quarterly in-person engineering workshops
€110,000

The work
Most ML engineers never see what actually happens on the GPU. They train models, call an inference API, and trust the framework. If you have ever opened Nsight or Torch Profiler, followed a request through kernel launches and communication calls, and wondered why half the GPU time disappears into overhead, this work will feel very familiar.

The problem
Large language models behave very differently in production than they do in benchmarks. Token generation patterns change. Prefill and decode phases behave unpredictably. Communication overhead quietly kills throughput. Schedulers make decisions based on incomplete information. Most infrastructure platforms cannot see any of this. So they optimise the wrong things. Your work changes that.

What you will actually build
You will make the entire LLM execution path observable, from the moment a request hits the system to the moment CUDA kernels execute on the GPU. That means generating traces that capture:
- token-level model behaviour
- kernel launches and GPU utilisation
- runtime scheduling decisions
- memory movement and communication between GPUs

You will use those traces to answer questions like: Why is a GPU only 55% utilised? Where does latency appear between prefill and decode? Why does a supposedly optimised attention kernel stall under load? Then you turn those answers into improvements. Better kernel behaviour. Better runtime execution. Better scheduling decisions across GPU fleets. The results show up in real numbers: higher GPU utilisation, lower latency, and more throughput on production workloads.

Why this work is different
Most ML roles sit above the framework layer. This sits underneath it. You will spend your time inside PyTorch execution paths, CUDA behaviour, inference runtimes, and distributed communication. The interesting problems live in the gaps between those layers.

The systems you work on also run at meaningful scale. Clusters range from small internal deployments to environments with tens of thousands of GPUs. Performance improvements do not save milliseconds. They change how large fleets of hardware are used.

The environment
Small engineering team of around sixty people. No layers of product managers translating problems for you. Engineers talk directly to each other and to the system. Work is fully remote, with occasional engineering sessions in Heidelberg focused on deep technical work rather than company rituals. Performance improvements are measured, validated, and shipped to production systems used by paying customers.

You will likely enjoy this if
You like profiling GPU workloads. You have dug into CUDA kernels, PyTorch internals, or distributed training behaviour to understand why something performs poorly. You prefer investigating real systems over building ML features or training models. You care more about how models run than about how they are trained.
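For a sense of the starting point this listing describes, here is a minimal Torch Profiler sketch: capturing an operator-level trace of a transformer forward pass and aggregating where the time goes. The model, shapes, and CPU-only fallback are illustrative assumptions; production work of this kind would trace a real inference runtime with CUDA activities enabled:

```python
# Capture an execution trace of one forward pass with torch.profiler.
# On a GPU box this records kernel launches and device-side time; on a
# CPU-only machine it still shows per-operator host time.
import torch
from torch.profiler import profile, ProfilerActivity

# Stand-in for an LLM layer: a small transformer encoder block.
model = torch.nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
x = torch.randn(8, 128, 256)  # (batch, sequence, hidden)

activities = [ProfilerActivity.CPU]
if torch.cuda.is_available():
    activities.append(ProfilerActivity.CUDA)

with profile(activities=activities, record_shapes=True) as prof:
    with torch.no_grad():
        out = model(x)

# Aggregate recorded events per operator; this is the table you would
# read to see where the time disappears (attention, matmuls, copies).
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```

Questions like "why is this GPU only 55% utilised" start from exactly this kind of trace, exported to Chrome trace format or Nsight for timeline inspection rather than a summary table.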
Jacob Graham