Role: AI Engineering Manager/Staff Engineer (Python / LLMs / Infrastructure)
Location: Fully Remote (Europe)
Salary: €110k-€135k
Employment Type: Full-time

*Please note: Only candidates with Staff-level experience or above will be considered. Proven team leadership - whether as a Tech Lead, Staff Engineer, or Engineering Manager - is a core requirement for this role.*

Join a fast-growing product company at the cutting edge of AI technology. This is an opportunity to lead a talented, cross-functional engineering team while staying hands-on with a modern, high-performance tech stack. Our client’s mission is to build one of the most human-like AI platforms in the world, with millions of users and a strong reputation across academia and media.

We're seeking an AI Engineering Manager or Staff Engineer who combines strong backend engineering and infrastructure skills with proven leadership experience. You’ll play a pivotal role in scaling production AI systems, guiding technical direction, and helping drive delivery of sophisticated, LLM-powered solutions.

Key Responsibilities:
  • Lead and mentor a high-performing team of AI and backend engineers
  • Own and evolve the system architecture for AI/ML deployment at scale
  • Build and maintain FastAPI-based microservices with Python async patterns (see the sketch after this list)
  • Manage AI-related infrastructure: containerization (Docker), CI/CD (GitHub Actions), observability (Datadog)
  • Design and support scalable data pipelines using Redis, MongoDB, and Kafka
  • Integrate with LLMs (OpenAI, Anthropic, LLaMA) and vector databases (e.g. Pinecone)
  • Oversee structured logging and system monitoring
  • Collaborate cross-functionally across AI research, DevOps, and product teams
  • Support a robust, high-scale environment (serving 500K+ users)
  • Help shape best practices in software engineering and team culture
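
To give a sense of the day-to-day stack, below is a minimal sketch of the kind of FastAPI service with an async Redis client that this role owns; the routes, key names, and Redis URL are illustrative assumptions, not details of the client's codebase.

```python
# Illustrative sketch only: a small FastAPI app with a shared async Redis client.
# Routes, key names, and the Redis URL are placeholder assumptions.
from contextlib import asynccontextmanager

import redis.asyncio as redis
from fastapi import FastAPI


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Open one async Redis connection pool for the app's lifetime.
    app.state.redis = redis.from_url("redis://localhost:6379", decode_responses=True)
    yield
    await app.state.redis.aclose()  # aclose() is the redis-py 5+ async close


app = FastAPI(lifespan=lifespan)


@app.get("/health")
async def health() -> dict:
    # Liveness check that also confirms Redis connectivity.
    return {"status": "ok", "redis": await app.state.redis.ping()}


@app.get("/users/{user_id}/profile")
async def get_profile(user_id: str) -> dict:
    # Cache-aside read: return whatever is cached for this user, if anything.
    cached = await app.state.redis.get(f"profile:{user_id}")
    return {"user_id": user_id, "cached_profile": cached}
```
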
Core Requirements:
  • 5+ years of backend development experience in Python
  • Leadership background: experience managing engineering teams or squads
  • Deep knowledge of Redis (including its async Python client), MongoDB schema design, and FastAPI
  • Hands-on experience with LLM APIs (OpenAI, Anthropic, etc.) and vector databases such as Pinecone (see the sketch after this list)
  • Familiarity with LLaMA models and deployment patterns
  • Proficient in Docker and docker-compose for environment management
  • Solid experience with Kafka in event-driven architectures
  • Expertise in CI/CD with GitHub Actions and observability tooling (e.g., Datadog)
  • Track record of shipping systems at scale (500K+ users)
  • Excellent communication skills and stakeholder collaboration abilities
  • A “startup mindset”: proactive, adaptable, and comfortable with ambiguity
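
For the LLM API and vector database requirements, here is a minimal retrieval sketch assuming OpenAI embeddings and a Pinecone index; the index name, embedding model, and environment-variable names are illustrative assumptions rather than details of the client's stack.

```python
# Illustrative sketch only: embed a query with the OpenAI API and look up
# similar documents in a Pinecone index. Index name, model choice, and
# env-var handling are placeholder assumptions.
import os

from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("example-docs")  # hypothetical index name


def retrieve(query: str, top_k: int = 5) -> list[dict]:
    # 1) Embed the query text.
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small",
        input=query,
    ).data[0].embedding

    # 2) Query the vector index for the nearest neighbours.
    result = index.query(vector=embedding, top_k=top_k, include_metadata=True)

    # 3) Return id, similarity score, and stored metadata for each match.
    return [
        {"id": m.id, "score": m.score, "metadata": m.metadata or {}}
        for m in result.matches
    ]


if __name__ == "__main__":
    for match in retrieve("How do I reset my password?"):
        print(match["id"], round(match["score"], 3))
```
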
Nice to Have:
  • Experience with Kubernetes and deployment orchestration; familiarity with additional vector databases (e.g. Qdrant)
  • Scala familiarity or willingness to learn
  • Previous work in AI/ML product teams or research-led environments