Computer Vision

Experts in computer vision recruitment, we connect top talent with companies that build the future

Self-driving cars, facial recognition systems, sports performance analysis, medical imaging, and precision agriculture – computer vision is no longer the technology of tomorrow; it’s a reality of the here and now. Demand for skilled computer vision professionals has never been higher.

Computer vision is also playing a critical role in the advancement of robotics, from autonomous navigation and object detection to human/machine interaction. As robotics applications become more sophisticated, the need for engineers who can combine visual intelligence with real-time decision-making is growing rapidly, and so is the competition to hire them.

Finding candidates with expertise in computer science, Python and C++, digital image processing, deep learning, robotics, and more is an impossible ask without the right support. This is where DeepRec.ai comes in.

Our specialist recruitment consultants have built the trust and technical knowledge needed to connect job seekers with the best opportunities in this vibrant corner of the tech world.

Why Choose DeepRec.ai?

We’re proud to help our global network of computer vision candidates find fulfilling work, and we've got the tools to do it. From our dedicated AI community to our events programme and inclusive hiring methodology, our aim is to provide lasting value to the customers and candidates we serve.

We’re part of Trinnovo Group, a B Corp accredited recruitment specialist committed to making a positive impact. Contact the team to find out how we can help you thrive in the computer vision space.

The roles we recruit for in Computer Vision include:

  • Head of Computer Vision

  • Senior Computer Vision Engineer

  • Computer Vision Engineer

  • Senior Machine Learning Engineer - Computer Vision 

  • Machine Learning Engineer - Computer Vision 

  • Computer Vision Scientist 

  • Computer Vision Researcher

 

COMPUTER VISION CONSULTANTS

Anthony Kelly

Co-Founder & MD EU/UK

Paddy Hobson

Senior Consultant | DACH

Harry Crick

Consultant | USA

LATEST JOBS

Ontario, Canada
MLOps Engineer
Job Title: MLOps Engineer
Work Arrangement: Remote
Location: Toronto, Canada
Salary: Up to $125,000 CAD

MLOps Engineer – Real-Time AI Systems

We're looking for an experienced MLOps Engineer to help deploy and scale cutting-edge ML models for real-time video and audio applications. You'll work alongside data scientists and engineers to build fast, reliable, and automated ML infrastructure.

Key Responsibilities
• Build and manage ML pipelines for training, validation, and inference.
• Automate deployment of deep learning and generative AI models.
• Ensure model versioning, rollback, and reproducibility.
• Deploy models on AWS, GCP, or Azure using Docker and Kubernetes.
• Optimize real-time inference using TensorRT, ONNX Runtime, or PyTorch.
• Use GPUs, distributed systems, and parallel computing for performance.
• Create CI/CD workflows (GitHub Actions, Jenkins, ArgoCD) for ML.
• Automate model retraining, validation, and monitoring.
• Address data drift, latency, and compliance concerns.

What You Bring
• 3+ years in MLOps, DevOps, or model deployment roles.
• Strong Python and experience with ML frameworks (PyTorch, TensorFlow, ONNX).
• Proficiency with cloud platforms, Docker, and Kubernetes.
• Experience with ML tools like MLflow, Airflow, Kubeflow, or Argo.
• Knowledge of GPU acceleration (CUDA, TensorRT, DeepStream).
• Understanding of scalable, low-latency ML infrastructure.

Nice to Have
• Experience with Ray, Spark, or edge AI tools (Triton, TFLite, CoreML).
• Basic networking knowledge or CUDA programming skills.
Harry Crick
San Jose, California, United States
AI Researcher - 3D
Job Title: AI Researcher
Work Arrangement: Hybrid
Location: San Jose

We are seeking a talented Computer Vision AI Researcher with expertise in stable diffusion, diffusion models, GANs, NeRFs, and text-to-image/video synthesis.

Key Responsibilities:
• Conduct pioneering research in computer vision, focusing on diffusion models and text-to-image/video synthesis.
• Develop advanced GAN architectures tailored for specific computer vision applications.
• Investigate NeRFs and related techniques to enhance 3D scene understanding.
• Drive advancements in text-to-image and text-to-video synthesis using deep learning methods.
• Collaborate closely with cross-functional teams to productise research.

Qualifications:
• Strong research background in stable diffusion, GANs, NeRFs, and text-to-image/video synthesis, evidenced by publications in top-tier conferences or journals.
• Proficiency in Python and deep learning frameworks (e.g., TensorFlow, PyTorch).
• Experience with large-scale data processing and model training.
Harry Crick
California, United States
Computer Vision Engineer
Computer Vision Engineer – Hybrid, Los Angeles – Up to $220k + Bonus + Equity

Startup of 10+ engineers building next-stage real-time vision and sensing systems for defense and autonomy. Profitable and customer-funded with no outside investment. Their platforms span digital terrain mapping, classical computer vision, and high-performance signal processing, with hardware already deployed in the field.

What You’ll Be Doing
• Designing and implementing low-latency vision pipelines in C++ for mission-critical defense applications (e.g., region-of-interest tracking, camera calibration, sensor fusion).
• Collaborating closely with hardware teams to integrate cameras, IMUs, and other sensors on embedded platforms, pushing algorithms from prototype to production.
• Conducting field testing (flight tests, desert range tuning) to validate and optimize real-world performance.
• Working on digital terrain modeling, object-tracking algorithms (drone-to-drone intercept, remote weapons platforms), and AR/display research.
• Writing reusable, well-documented code that ensures reliability under harsh conditions and tight real-time constraints.

Must-Haves
• 3+ years of professional experience developing C++ systems (preferably on Linux).
• Strong background in classical computer vision or signal processing (Kalman filters, ROI tracking, camera calibration, etc.).
• Comfortable working hands-on with sensors, cameras, and embedded hardware.
• Interest in field testing and tuning algorithms in live environments.
• U.S. Citizen or Green Card holder (required for defense-related work).

Nice-to-Haves
• CUDA/OpenCL experience or other GPU-accelerated development.
• Prior work on SLAM or visual-inertial odometry for airborne or ground-robot platforms.
• Experience architecting software for long-range tracking (drones, aircraft, missiles).
• Familiarity with ROS, RTOS, or other robotics middleware.
Harry Crick
Zürich, Switzerland
AI Engineer - Diffusion Models
Join a company working on the technology that allows an LLM prompt to translate into humanoid robot manipulation.

AI Engineer - Diffusion Models
Location: Zurich

All of the founders hold PhDs in Robotics and Simulation with 15+ years' experience at a global technology leader. They already have a working demo and three humanoid robots on-site. The team has an abundance of experience in robotics, reinforcement learning, dexterous manipulation, and diffusion models, and they're now searching for experts in VLMs/LLMs.

Requirements:
• Strong academic background with a PhD or Master’s in diffusion models, flow matching, or learning-based trajectory generation, including relevant project experience.
• Technical expertise in Python, PyTorch, and training/fine-tuning diffusion or flow matching models, with experience deploying these for robotic systems.
• Practical experience with GPU-based simulation platforms (e.g., Omniverse or Genesis) and a solid understanding of modern ML architectures.

Robotics experience is NOT needed for this position; we need someone who understands cutting-edge generative models – the team already has the experience needed to translate this to the world of robotics.

If you're interested in bringing your experience to the robotics domain with a fast-growing and extremely well-backed company, then please apply!
Anthony Kelly
Berlin, Germany
Scene Understanding Engineer
Our client is building certifiable Level 4 autonomous driving systems for local public transport – designed and developed in Germany. Their mission is to connect people, no matter where they live, by enabling self-determined and sustainable mobility through cognitive artificial intelligence. Their unique approach, rooted in neuroscience and explainable AI, enables real-time decision-making in complex and unknown traffic scenarios – without relying solely on data from millions of kilometres of driving.

As a Scene Understanding Engineer, you will play a vital role in shaping the perception and cognition systems that allow the autonomous driver to interpret and interact with its environment.

Responsibilities:
• Develop and enhance scene understanding algorithms for complex, real-world environments.
• Design and implement modular, explainable systems that integrate sensor data and support perception and localization modules.
• Lead small development teams and contribute to overall system architecture and software integration.
• Collaborate with cross-functional teams to ensure seamless interaction between perception, planning, and control modules.
• Participate in testing and validation of autonomous systems in both simulated and real-world environments, including field testing.
• Support the certification process by developing traceable and explainable logic for perception systems.

Requirements:
• Degree in Robotics, Localization, Sensor Fusion, or a related field.
• Strong software development skills with C++ and Python.
• Proven experience in leading small engineering teams and managing complex software systems.
• Solid understanding of model-based design and modular system architecture.
• Experience with robotics or autonomous vehicle platforms in real-world or motorsport environments.
• Good grasp of deep learning principles, especially as applied to perception.
• Fluent in written and spoken English.
• Willingness to travel for testing and collaborative projects.
• Familiarity with sensor fusion, object fusion, and localization algorithms is a plus.

Note: Some technical experience (e.g., deep learning, motorsport testing, or control systems) may be negotiable depending on your background and ability to learn quickly.

Why you should join us:
• Work in an intellectually stimulating and innovative environment where you can take full ownership of your projects at every stage of development.
• Enjoy flat hierarchies, an open culture, and fast decision-making processes.
• Collaborate with a skilled and dedicated team eager to share their knowledge and expertise.
• Be part of a multinational workplace that values diversity and integrates different backgrounds and perspectives.
• Work in the vibrant heart of Berlin, in the dynamic Kreuzberg district.
Paddy Hobson
Berlin, Germany
Reinforcement Learning Engineer
Our client is pioneering Level 4 certifiable autonomous driving solutions, tailored for public transport and designed with safety at the core. By leveraging cognitive intelligence and cutting-edge AI based on German research, they create autonomous systems that make logical, explainable decisions in complex road scenarios. Their mission is to enable sustainable, safe, and scalable mobility solutions, ensuring that autonomous technology can connect people everywhere – especially in rural areas and underserved communities.

As a Reinforcement Learning Engineer, you'll be instrumental in advancing their unique decision-making framework based on cognitive neuroscience. Your expertise in inference-driven AI, probabilistic modelling, and goal-directed behaviour will help develop explainable, adaptive systems for autonomous driving.

Responsibilities:
• Design and implement decision-making architectures based on Active Inference, Bayesian models, and reinforcement learning principles.
• Develop generative models and inference-based systems to guide autonomous agents under uncertainty.
• Integrate concepts from cognitive robotics, predictive coding, and goal-directed behaviour into scalable autonomous driving modules.
• Apply and extend the Free Energy Principle and planning-as-inference frameworks for real-world applications in perception and control.
• Model and simulate agent-based, hierarchical inference systems to support adaptive, real-time decision-making.
• Collaborate cross-functionally with neuroscience-inspired perception, planning, and systems teams to ensure coherence in cognitive modelling.
• Analyse and validate behaviour of autonomous systems in both simulation and field test environments.

Requirements:
• Solid background in reinforcement learning, probabilistic inference, or computational neuroscience.
• Experience with Active Inference, Bayesian inference, or hierarchical generative models.
• Proficiency in Python (PyTorch, TensorFlow, or JAX), with the ability to implement and train complex inference systems.
• Familiarity with decision-making under uncertainty, cognitive architectures, or embodied cognition frameworks.
• Strong theoretical foundation in neuro-inspired AI, behavioural modelling, or theoretical neuroscience.
• Experience integrating sensorimotor control, action selection, or adaptive control in real-time systems.
• Background in robotics, autonomous agents, or AI planning systems is a strong plus.

Note: Experience with interdisciplinary AI combining machine learning, neuroscience, and robotics is highly valued, but not strictly required.

Why you should join us:
• Work in an intellectually stimulating and innovative environment where you can take full ownership of your projects at every stage of development.
• Enjoy flat hierarchies, an open culture, and fast decision-making processes.
• Collaborate with a skilled and dedicated team eager to share their knowledge and expertise.
• Be part of a multinational workplace that values diversity and integrates different backgrounds and perspectives.
• Work in the vibrant heart of Berlin, in the dynamic Kreuzberg district.
Paddy Hobson