- Country
- Netherlands
Senior ML Engineer (Token Factory)
The high score reflects hands-on work with a cutting-edge stack (H100, FP8, Triton) at one of the fastest-growing AI infrastructure companies. The position offers a unique scale of problems (tens of thousands of GPUs) and a strong engineering culture.
Role difficulty
The role demands exceptional knowledge at the intersection of ML architectures and GPU systems programming. The candidate must not merely train models but also understand GPU memory behavior, profile workloads, and optimize inference at the kernel level.
Salary analysis
Senior-level compensation at Nebius in Europe typically sits in the top tier of the market (Tier 1), often including a substantial bonus component or stock options, comparable to Big Tech.
Cover letter
I am writing to express my strong interest in the Senior ML Engineer position within the Token Factory team at Nebius. With a deep background in optimizing LLM inference and a passion for pushing hardware to its limits, I am excited by Nebius's mission to build a high-performance platform across tens of thousands of GPUs. My experience in profiling GPU workloads and working with transformer architectures aligns perfectly with your goals of maximizing throughput and minimizing cost-per-token.
In my previous roles, I have focused on squeezing performance out of large-scale models, utilizing tools like Nsight and deep learning frameworks to identify and resolve bottlenecks. I am particularly impressed by Nebius's work with cutting-edge architectures like DeepSeek and MoE models. I am eager to bring my expertise in low-precision inference and distributed systems to help Token Factory set new industry standards for efficiency and scalability.
Apply to Nebius now
Join the Nebius team to optimize Europe's largest GPU clusters and shape the future of LLM inference!
Job description
Why work at Nebius
Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.
Where we work
Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 1400 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.
The role
Token Factory is a part of Nebius Cloud, one of the world's largest GPU clouds, running tens of thousands of GPUs. We are building a high-performance inference and fine-tuning platform designed to push foundation models to their hardware limits. Our mission is to maximize throughput, minimize latency, and optimize cost-per-token across tens of thousands of GPUs.
Some directions we are currently working on, and which you can be a part of:
- Inference optimization: identifying LLM inference bottlenecks to drive production speedups and squeezing maximum performance out of a wide range of LLM architectures at scale (e.g., GPT-OSS, Kimi K2.5, DeepSeek V3.1/V3.2, GLM-5).
- Inference engine support: implementing novel speculative decoding architectures, optimizing components of various LLM designs (dense/MoE, autoregressive/parallel), and contributing to open-source inference engines.
- Low-precision training & inference: designing and productionizing low-precision (FP8, NVFP4/MXFP4) training and inference pipelines with measurable gains in throughput and cost-efficiency.
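The low-precision direction can be sketched in miniature. The snippet below simulates the scale/round/dequantize round trip behind FP8-style quantization with a uniform integer grid; this is only an illustrative stand-in, since real E4M3 uses a non-uniform floating-point grid and production pipelines rely on hardware dtypes and per-block scales. All numbers here are made up for the example.

```python
# Sketch of per-tensor symmetric quantization: rescale into the FP8 E4M3
# dynamic range, round to a finite grid, then dequantize. A uniform
# 256-level grid stands in for the real (non-uniform) E4M3 value set,
# purely to keep the example dependency-free.

E4M3_MAX = 448.0  # largest finite magnitude representable in FP8 E4M3

def quantize(values, levels=256):
    """Return grid indices plus the scale/step needed to dequantize."""
    amax = max(abs(v) for v in values)
    scale = amax / E4M3_MAX if amax > 0 else 1.0
    step = 2 * E4M3_MAX / (levels - 1)
    q = [round((v / scale) / step) for v in values]
    return q, scale, step

def dequantize(q, scale, step):
    return [i * step * scale for i in q]

weights = [0.013, -0.402, 0.251, -0.007, 0.330]  # toy values
q, scale, step = quantize(weights)
restored = dequantize(q, scale, step)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(f"max round-trip error: {max_err:.5f}")
```

The per-tensor `amax` scan is exactly the step that makes FP8 pipelines sensitive to activation outliers, which is why production systems often move to finer-grained (per-channel or per-block) scales.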
We expect you to have:
- A profound understanding of the theoretical foundations of machine learning and the transformer architecture.
- Experience profiling GPU workloads using Nsight, the PyTorch profiler, or similar tools.
- An understanding of the GPU memory hierarchy and compute/memory tradeoffs.
- Familiarity with key ideas in the LLM space, such as MHA, RoPE, KV-cache, Flash Attention, and quantization.
- An understanding of the performance aspects of large neural network training (sharding strategies, custom kernels, hardware features, etc.).
- Strong software engineering skills (we mostly use Python).
- Deep experience with modern deep learning frameworks.
- Proficiency in contemporary software engineering practices, including CI/CD, version control, and unit testing.
- Strong communication and leadership abilities.
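The compute/memory-tradeoff requirement lends itself to a quick roofline estimate: compare a GEMM's arithmetic intensity to the hardware's ridge point to predict whether it is memory-bandwidth-bound or compute-bound. The peak-throughput and bandwidth figures below are approximate public numbers for an H100 SXM, and the matrix shapes are arbitrary, so treat this as a back-of-the-envelope sketch rather than a measurement.

```python
# Back-of-the-envelope roofline check: is a BF16 GEMM compute- or
# memory-bound on an H100? Hardware figures are approximate public specs.

PEAK_FLOPS = 989e12   # ~989 TFLOP/s dense BF16 (H100 SXM, approx.)
PEAK_BW = 3.35e12     # ~3.35 TB/s HBM3 bandwidth (approx.)

def arithmetic_intensity(m, n, k, bytes_per_elem=2):
    """FLOPs per byte moved for one (m x k) @ (k x n) GEMM in BF16."""
    flops = 2 * m * n * k
    traffic = bytes_per_elem * (m * k + k * n + m * n)
    return flops / traffic

ridge = PEAK_FLOPS / PEAK_BW  # ~295 FLOP/byte: below this, memory-bound

# Decode-time GEMV (batch of 1 token): the classic memory-bound case.
decode_ai = arithmetic_intensity(1, 4096, 4096)
# Prefill over a long batch of tokens: comfortably compute-bound.
prefill_ai = arithmetic_intensity(8192, 4096, 4096)

print(f"ridge point ~ {ridge:.0f} FLOP/byte")
print(f"decode AI ~ {decode_ai:.1f} (memory-bound: {decode_ai < ridge})")
print(f"prefill AI ~ {prefill_ai:.0f} (compute-bound: {prefill_ai > ridge})")
```

The same arithmetic explains why batching, KV-cache quantization, and speculative decoding all matter at decode time: they raise effective arithmetic intensity or cut bytes moved per generated token.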
Nice to have:
- Experience working with open-source inference engines (vLLM, SGLang, TensorRT-LLM), including contributions.
- Experience with kernel languages or DSLs such as Triton, CuTe, CUTLASS, or CUDA.
- A track record of building and delivering products (not necessarily ML-related) in a dynamic, startup-like environment.
- Strong engineering skills, including experience developing large distributed systems or high-load web services.
- Open-source projects that showcase your engineering prowess.
- An excellent command of English, alongside superior writing, articulation, and communication skills.
What we offer
- Competitive salary and comprehensive benefits package.
- Opportunities for professional growth within Nebius.
- Flexible working arrangements.
- A dynamic and collaborative work environment that values initiative and innovation.
We’re growing and expanding our products every day. If you’re up to the challenge and are as excited about AI and ML as we are, join us!
Skills
- Python
- PyTorch
- LLM
- CUDA
- Triton
- GPU Profiling
- Nsight
- Flash Attention
- Quantization
- Distributed Systems
- CI/CD
- vLLM
- TensorRT-LLM
- Transformer Architecture
Possible interview questions
Tests understanding of a key LLM inference optimization mechanism.
How would you optimize KV-cache usage for Mixture-of-Experts (MoE) models at very long context lengths?
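One standard lever behind the KV-cache question is bounding cache growth with a sliding window. Below is a minimal sketch, with strings standing in for per-layer key/value tensors; real engines (e.g. vLLM's PagedAttention) store these in paged GPU memory blocks, not Python lists, and combine windowing with quantization and block reuse.

```python
# Toy sliding-window KV cache: cap memory at long contexts by evicting
# the oldest entries. Strings stand in for (key, value) tensors.

from collections import deque

class SlidingWindowKVCache:
    def __init__(self, window: int):
        self.window = window
        self.entries = deque(maxlen=window)  # deque evicts beyond maxlen

    def append(self, key, value):
        self.entries.append((key, value))

    def __len__(self):
        return len(self.entries)

cache = SlidingWindowKVCache(window=4096)
for step in range(10_000):               # decode 10k tokens
    cache.append(f"k{step}", f"v{step}")

print(len(cache))            # capped at the window size (4096)
print(cache.entries[0][0])   # oldest surviving key after eviction
```

The tradeoff is accuracy at long range versus memory: without eviction the cache grows linearly with context length per layer and head, which is exactly what makes long-context MoE serving memory-bound.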
Assesses low-level GPU performance skills.
Describe your experience using NVIDIA Nsight to locate bottlenecks and to distinguish memory-bandwidth-bound workloads from compute-bound ones.
Tests knowledge of the modern quantization methods mentioned in the posting.
What are the main challenges of deploying FP8 or NVFP4 inference in production compared to traditional FP16?
Assesses understanding of distributed systems.
Which sharding strategies (tensor parallelism vs. pipeline parallelism) would you choose for serving a DeepSeek V3-class model on an H100 cluster?
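The tensor-parallelism half of the sharding question boils down to partitioning weight matrices across devices. Here is a toy sketch of column-wise sharding, with plain Python lists standing in for device tensors and list concatenation standing in for the all-gather; all shapes are arbitrary.

```python
# Toy column-wise tensor parallelism: each "device" holds a shard of the
# weight's columns, computes its partial output, and results are
# concatenated (an all-gather in a real system).

def matvec(W_cols, x):
    """x @ W where W is stored as a list of columns."""
    return [sum(xi * wi for xi, wi in zip(x, col)) for col in W_cols]

def shard_columns(W_cols, n_shards):
    per = len(W_cols) // n_shards
    return [W_cols[i * per:(i + 1) * per] for i in range(n_shards)]

# 4-dim input, 8 output columns, sharded across 2 "devices".
W = [[float(r + c) for r in range(4)] for c in range(8)]  # 8 columns
x = [1.0, 2.0, 3.0, 4.0]

full = matvec(W, x)                      # single-device reference
shards = shard_columns(W, n_shards=2)
parallel = [y for shard in shards for y in matvec(shard, x)]

assert parallel == full                  # sharded result matches
print(parallel)
```

Column sharding needs no communication during the matmul itself, only at the gather; row sharding instead requires an all-reduce of partial sums. That communication pattern, against pipeline parallelism's bubble overhead, is the core of the tradeoff the question probes.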
Tests ability to build custom solutions.
In which cases would you prefer writing a custom Triton kernel over using off-the-shelf libraries such as vLLM or TensorRT-LLM?