Engineering Manager, Inference ML Runtime
An exceptional opportunity to work at a unicorn company that competes directly with NVIDIA. The high level of responsibility, cutting-edge technology, and the strategic partnership with OpenAI make this one of the top openings on the market.
Role complexity
The role requires a rare combination of deep expertise in distributed systems and ML inference with team management experience. Working with Cerebras' unique hardware adds complexity compared to standard GPU stacks.
Salary analysis
An Engineering Manager role in ML Systems based in Silicon Valley or Toronto typically commands above-market compensation due to the talent shortage. Expected total compensation (TC) can significantly exceed the base salary thanks to equity.
Cover letter
I am writing to express my strong interest in the Engineering Manager, Inference ML Runtime position at Cerebras Systems. With over 8 years of experience in large-scale software engineering and a proven track record of leading high-performance teams, I am excited by the opportunity to contribute to a company that is fundamentally redefining AI compute through its wafer-scale architecture.
In my previous roles, I have focused on building and scaling distributed systems and ML inference pipelines, balancing the need for low latency with high throughput. I have extensive experience with Python and C++ in production environments and have successfully led teams through the complexities of deploying state-of-the-art LLMs. The prospect of working beyond the constraints of traditional GPUs to deliver 10x faster inference speeds is incredibly compelling.
I am particularly drawn to Cerebras' culture of engineering excellence and its mission to empower ML users with seamless, high-performance infrastructure. I am confident that my technical background in ML systems and my leadership experience in fostering high-velocity teams make me an ideal fit to help scale Cerebras’ inference platform and maintain its industry-leading performance advantage.
Join the team building the world's fastest AI inference solution on a unique wafer-scale chip!
Job description
Cerebras Systems builds the world's largest AI chip, 56 times larger than the largest GPU. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of compute capacity, transforming key workloads with ultra-high-speed inference.
Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
About the Role
The Inference ML Engineering team at Cerebras builds the runtime, APIs, and systems that power the fastest generative AI inference platform in the world.
As an Engineering Manager, Inference ML Runtime, you will lead a team responsible for designing and scaling the systems that enable seamless execution of state-of-the-art AI models on Cerebras hardware. You will operate at the intersection of machine learning, distributed systems, and high-performance runtime engineering, translating cutting-edge research into production-ready infrastructure to serve a variety of text-only and multimodal models.
This role combines technical leadership, people management, and execution ownership, with direct impact on Cerebras’ core inference platform.
What You’ll Do
Technical Leadership
- Own the architecture and evolution of the ML inference runtime and serving systems.
- Guide the design of:
- high-throughput, low-latency inference pipelines (see the micro-batching sketch at the end of this section);
- multimodal model execution (text, image, audio, video);
- scalable serving infrastructure for concurrent workloads.
- Partner with cloud, compiler, core runtime, hardware, and ML teams to optimize end-to-end performance.
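As referenced in the pipeline bullet above, one common building block of high-throughput, low-latency serving is dynamic micro-batching: concurrent requests are collected for a few milliseconds and executed as a single device call, trading a small latency hit for much higher throughput. The sketch below is a generic illustration of that idea only, not Cerebras' runtime; `run_model` is a hypothetical stand-in for the real accelerator call, and the batch size and wait window are arbitrary.

```python
# Generic dynamic micro-batching loop for an inference server.
# Purely illustrative: run_model() is a hypothetical stand-in for the
# real accelerator call, and the limits below are arbitrary.
import asyncio

MAX_BATCH = 32       # largest batch a single device call accepts
MAX_WAIT_MS = 2      # how long to wait for more requests before flushing


def run_model(prompts: list[str]) -> list[str]:
    # Placeholder for the actual model execution on the hardware.
    return [f"completion for: {p}" for p in prompts]


async def batching_loop(queue: asyncio.Queue) -> None:
    loop = asyncio.get_running_loop()
    while True:
        # Block until at least one request arrives, then open a short window
        # to collect more requests so they can share one device call.
        batch = [await queue.get()]
        deadline = loop.time() + MAX_WAIT_MS / 1000
        while len(batch) < MAX_BATCH:
            timeout = deadline - loop.time()
            if timeout <= 0:
                break
            try:
                batch.append(await asyncio.wait_for(queue.get(), timeout))
            except asyncio.TimeoutError:
                break
        outputs = run_model([prompt for prompt, _ in batch])
        for (_, future), output in zip(batch, outputs):
            future.set_result(output)


async def infer(queue: asyncio.Queue, prompt: str) -> str:
    # Called concurrently by many request handlers.
    future = asyncio.get_running_loop().create_future()
    await queue.put((prompt, future))
    return await future


async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    asyncio.create_task(batching_loop(queue))
    results = await asyncio.gather(*(infer(queue, f"prompt {i}") for i in range(8)))
    print(results)


if __name__ == "__main__":
    asyncio.run(main())
```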
Team Leadership
- Build, manage, and grow a team of ML systems and infrastructure engineers.
- Provide technical direction, mentorship, and career development.
- Foster a culture of ownership, velocity, and engineering excellence.
- Recruit top talent in ML systems, distributed systems, and runtime engineering.
Execution & Delivery
- Drive execution of complex, cross-functional initiatives across:
- ML engineering;
- compiler/runtime teams;
- cloud and infrastructure teams.
- Own delivery of features such as:
- advanced inference capabilities (structured outputs, sampling strategies; sampling is sketched at the end of this section);
- heterogeneous model types, including text-only and multimodal;
- performance optimization (latency, throughput, memory efficiency);
- observability and reliability across the inference stack.
- Ensure high-quality releases through strong testing, validation, and operational rigor.
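To make the sampling-strategies item above concrete, here is a minimal, framework-agnostic sketch of two common decoding-time strategies, temperature scaling and top-p (nucleus) filtering, applied to a logit vector before drawing the next token. It is an illustrative assumption of how such a feature might look, not the team's actual implementation.

```python
# Minimal sketch of two common decoding-time sampling strategies:
# temperature scaling and top-p (nucleus) filtering. Generic illustration,
# not tied to any particular runtime or model.
import numpy as np


def sample_next_token(logits: np.ndarray, temperature: float = 0.8,
                      top_p: float = 0.95, seed: int | None = None) -> int:
    rng = np.random.default_rng(seed)
    # Temperature scaling: values < 1 sharpen the distribution, > 1 flatten it.
    scaled = logits / max(temperature, 1e-5)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Top-p (nucleus) filtering: keep the smallest set of highest-probability
    # tokens whose cumulative mass reaches top_p, drop the rest, renormalize.
    order = np.argsort(probs)[::-1]
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), top_p)) + 1
    kept = order[:cutoff]
    truncated = np.zeros_like(probs)
    truncated[kept] = probs[kept]
    truncated /= truncated.sum()
    # Draw the next token id from the truncated distribution.
    return int(rng.choice(len(probs), p=truncated))


# Toy example over a 5-token vocabulary.
print(sample_next_token(np.array([2.0, 1.0, 0.5, -1.0, -3.0]), seed=0))
```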
Platform & Performance Ownership
- Scale Cerebras’ inference platform to handle large volumes of concurrent requests at very high speed.
- Drive improvements in:
- latency;
- throughput;
- compute efficiency.
- Identify and prioritize technical debt and system bottlenecks.
- Maintain Cerebras’ industry-leading inference speed advantage.
Cross-Functional Collaboration
- Partner with:
- ML researchers (model enablement);
- compiler teams (model execution optimization);
- cloud/platform teams (deployment and scaling).
- Act as a bridge between research, infrastructure, and production systems.
What You Bring
Required
- 8+ years of experience in:
- large-scale software engineering;
- ML systems or distributed systems.
- 2+ years of engineering management experience.
- Strong programming skills in:
- Python (production systems);
- C++ (performance-critical systems).
- Experience building and scaling large-scale inference systems (LLMs or multimodal).
- Experience working with cloud infrastructure and following best practices for building scalable microservices and applications.
Preferred
- Experience with:
- LLM serving frameworks, e.g. vLLM, TensorRT-LLM, SGLang (a usage sketch follows this section);
- PyTorch and deep learning frameworks;
- distributed systems and high-performance computing.
- Familiarity with:
- ML runtime systems;
- model execution pipelines;
- performance optimization for AI workloads.
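For orientation on the serving frameworks named above, the snippet below follows the shape of vLLM's offline-inference quickstart. The model name, sampling parameters, and exact API surface should be treated as assumptions to verify against the documentation of the vLLM version you install.

```python
# Minimal offline-inference sketch in the style of vLLM's quickstart
# (pip install vllm). The model name and parameters are examples only;
# verify the exact API against the docs of the version you install.
from vllm import LLM, SamplingParams

prompts = [
    "Explain wafer-scale integration in one sentence.",
    "What typically limits LLM inference throughput?",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")   # small model, just for a local smoke test
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt)
    print(output.outputs[0].text)
```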
Why This Role Matters
This team is central to Cerebras’ mission of delivering the fastest AI inference in the world. Your work will directly enable real-time AI applications and unlock new capabilities across enterprise and frontier AI use cases.
Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Enjoy a simple, non-corporate work culture that respects individual beliefs.
Read our blog: Five Reasons to Join Cerebras in 2026.
Apply today and become part of the forefront of groundbreaking advancements in AI!
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
Skills
- Python
- C++
- Machine Learning
- Distributed Systems
- LLM
- PyTorch
- Microservices
- Inference
- High Performance Computing
- vLLM
- TensorRT-LLM
Possible interview questions
Assessing management experience in a high-load environment.
Tell us about the most difficult technical disagreement in your team while designing a runtime system, and how you resolved it.
Assessing understanding of LLM inference specifics.
Which latency and throughput optimization strategies do you consider most critical for multimodal models?
Assessing team-scaling skills.
How do you approach hiring and retaining engineers in a niche as narrow and competitive as ML Systems?
Assessing experience with distributed systems.
Describe your experience with LLM serving frameworks (for example, vLLM or TensorRT-LLM) and their limitations at scale.
Assessing the ability to work at the intersection of software and hardware.
How do you organize collaboration between your team and the compiler/hardware engineers to achieve maximum performance?
Similar vacancies
Engineering Manager – Maintenance of Line (MOL)
Manager Applications Design Engineering
Electrical Design Engineering Manager
Engineering Manager
Software Development Manager, Compliance and Multi-Region
Engineering Manager - Adaptive Telemetry | USA | Remote