Location: Munich (Hybrid/On-site depending on team setup)
Type: Full-time

Company Overview

We are working with a fast-growing vision/AI company building production software for the food and retail industry. Their systems help customers reduce food waste and improve operational efficiency, supporting sustainability goals through real-time computer vision and automation.

With teams across Europe, the US, and Asia, the company combines startup pace with real-world deployments at enterprise customers.

The Role

We are hiring a hands-on engineer to support the delivery of our computer vision / ML products into production. This role sits at the intersection of software engineering and applied machine learning, with a strong focus on making ML models run fast, reliably, and at scale on edge devices.

You will be responsible for our core video processing framework and deployment stack, working closely with senior ML engineers to ensure inference performance, stability, monitoring coverage, and success in the field. While you won’t be expected to design new ML algorithms or lead model training, you will be involved in diagnosing model issues in the field and improving real-world performance through optimization and iteration.

This is a great fit for someone who enjoys real-world ML delivery: video streams, edge devices, inference performance, and production debugging.

Key Responsibilities

  • ML Model Runtime & Edge Performance
    • Make ML models run efficiently on edge devices (latency, throughput, CPU/GPU utilization, memory constraints)
    • Support inference optimization and troubleshooting (profiling, batching, pipeline tuning, runtime constraints)
    • Investigate real-world model failures (data quality, camera placement, lighting, drift, edge-case behaviour) and work with ML engineers on mitigation strategies
    • Ensure robust model rollout processes: versioning, validation, safe deployment cycles
  • Video Pipeline Engineering (Core Focus)
    • Design and optimize real-time video processing pipelines using GStreamer
    • Integrate and manage streams from IP cameras (RTSP/ONVIF) and USB cameras
    • Debug complex video stream issues (encoding/decoding, dropped frames, jitter, latency, network instability)
  • Deployment & Production Operations
    • Package and deploy services using Docker/Podman on Linux-based edge systems
    • Troubleshoot issues directly on production/staging Linux hosts (logs, profiling, system-level debugging)
    • Implement and maintain monitoring and device health checks (e.g., Checkmk or similar)
  • Event Streaming & Interfaces
    • Build interfaces between edge devices and online tools / connected machines
    • Work with event streaming systems (Kafka or similar) for detections, events, and telemetry; deep Kafka expertise isn’t required, but a strong conceptual understanding is
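
To give a flavour of the pipeline work above, here is a minimal sketch of building a GStreamer launch description for an RTSP camera in Python. The element chain (rtspsrc, rtph264depay, h264parse, avdec_h264, videoconvert, appsink) is one common decode path; a real deployment would tune latency, caps, and hardware decoders per device.

```python
def rtsp_decode_pipeline(url: str, latency_ms: int = 200) -> str:
    """Build a gst-launch-style description that pulls an RTSP stream,
    depayloads and decodes H.264, and hands raw BGR frames to an appsink.
    Illustrative only; element choices vary by codec and hardware."""
    return (
        f"rtspsrc location={url} latency={latency_ms} ! "
        "rtph264depay ! h264parse ! avdec_h264 ! "
        "videoconvert ! video/x-raw,format=BGR ! "
        "appsink name=sink max-buffers=1 drop=true"
    )

print(rtsp_decode_pipeline("rtsp://camera.local/stream"))
```

Setting `max-buffers=1 drop=true` on the appsink is a typical low-latency choice: stale frames are dropped rather than queued when inference falls behind.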

Must-Have Skills

  • 2–5 years of professional experience in software engineering / applied ML engineering
  • Strong Python skills (asyncio, threading, multiprocessing)
  • Strong Linux skills: CLI, systemd, bash scripting, networking fundamentals
  • Solid experience with containerization (Docker or Podman)
  • Comfortable debugging real systems remotely and working end-to-end (not just coding isolated modules)
  • Interest in ML delivery and computer vision systems in production
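
As a toy illustration of the asyncio style this work involves, the sketch below runs one task per (simulated) camera concurrently on a single event loop; a real reader would await network I/O instead of sleeping.

```python
import asyncio

async def poll_camera(name: str, frames: int) -> int:
    """Simulate reading frames from one camera stream."""
    processed = 0
    for _ in range(frames):
        await asyncio.sleep(0)  # yield so other camera tasks can run
        processed += 1
    return processed

async def main() -> list[int]:
    # One task per camera, gathered concurrently.
    return await asyncio.gather(
        poll_camera("cam-a", 3),
        poll_camera("cam-b", 5),
    )

print(asyncio.run(main()))  # → [3, 5]
```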

Nice to Have

  • Experience with GStreamer (big plus)
  • Familiarity with computer vision pipelines (OpenCV, image processing)
  • Experience with FFmpeg, RTSP, H.264/H.265, ONVIF
  • WebRTC exposure (low-latency streaming)
  • Kafka / message broker familiarity
  • German language skills (corporate language is English)
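
For the event streaming side, a hypothetical sketch of the kind of detection payload an edge device might publish to a Kafka-style topic; the field names and schema here are purely illustrative, not the company's actual format.

```python
import json
import time

def detection_event(device_id: str, label: str, confidence: float) -> bytes:
    """Serialize a detection as a UTF-8 JSON payload suitable for a
    message broker. Schema is illustrative only."""
    event = {
        "device_id": device_id,
        "label": label,
        "confidence": round(confidence, 3),
        "ts": time.time(),  # event timestamp, seconds since epoch
    }
    return json.dumps(event).encode("utf-8")

payload = detection_event("edge-01", "bread", 0.9271)
print(json.loads(payload)["label"])  # → bread
```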

Why This Role is Interesting

  • You’ll work at the “real ML” layer: getting models running in production environments where conditions are messy
  • Strong collaboration with senior ML engineers, with room to grow into more ML responsibility over time
  • Direct ownership of the edge inference video stack powering real customer deployments
  • International team, low bureaucracy, hands-on culture