- Country
- USA
- Salary
- $200,000 – $260,000
Senior Machine Learning Engineer, Voice AI
The high rating reflects the competitive salary, work with cutting-edge accelerators (H200, B200), and the opportunity to be one of the first hires in a strategically important area of the company.
Role Complexity
The role requires deep knowledge of inference optimization (TensorRT-LLM, CUDA) and low-level GPU work. The high complexity stems from the specifics of real-time streaming audio and the need for a deep understanding of SOTA model architectures.
Salary Analysis
The offered salary ($200k–$260k) sits at the upper end of market rates for senior ML roles in San Francisco, especially given the additional equity package.
Cover Letter
I am writing to express my strong interest in the Senior Machine Learning Engineer position for Voice AI at Together AI. With over five years of experience in ML engineering and a deep focus on inference optimization, I have consistently pushed the boundaries of model performance. My background includes extensive work with TensorRT-LLM and vLLM, where I have optimized large-scale models for production environments, focusing on minimizing latency and maximizing GPU utilization.
The challenges unique to voice AI—such as real-time streaming, audio tokenization, and the transition toward end-to-end speech-to-speech architectures—align perfectly with my technical expertise and interests. I am particularly impressed by Together AI's contributions to the open-source community, such as FlashAttention, and I am eager to bring my skills in GPU profiling and kernel-level debugging to a team that is setting the standard for inference infrastructure. I look forward to the possibility of helping Together AI build the fastest and most reliable voice platform in the industry.
Job Description
About the Role
Together AI is building the best inference infrastructure for voice applications. Our Voice AI platform powers production-grade, real-time voice agents and applications — serving speech-to-text and text-to-speech models with best-in-class latency and reliability.
We're looking for a Senior ML Engineer to drive the model serving layer for voice workloads. You'll work hands-on with inference engines like TRT-LLM and SGLang to optimize how we serve models like Whisper, Parakeet, Orpheus, and Kokoro — pushing latency and throughput to the frontier. You'll profile GPU utilization, design batching strategies for streaming audio, and ensure new model architectures can go from research to production quickly.
This is a foundational hire on a small, high-impact team. Voice inference has unique challenges — streaming audio, tokenization, real-time latency budgets — that require dedicated ML engineering focus. You'll shape how Together serves voice models as the industry moves from pipeline architectures (ASR → LLM → TTS) toward end-to-end speech-to-speech.
- Own the model serving stack that powers Together's voice platform across STT, TTS, and speech-to-speech.
- Work directly with state-of-the-art accelerators (H100s, H200s, B200s) to optimize voice model inference.
- Collaborate with model partners (Cartesia, Deepgram, Rime, and others) to bring their models to production on Together's infrastructure.
- Build quality evaluation frameworks that guide model selection for customers and inform the roadmap.
- Join a small, early-stage team with outsized impact on a fast-growing product area.
Responsibilities
- Optimize inference performance for voice models (STT, TTS, speech-to-speech) — targeting best-in-class TTFB, throughput, and GPU utilization across our curated model set.
- Productionize voice models on serverless and dedicated endpoints, including batching strategies, streaming inference, and memory management tailored to audio workloads.
- Build and maintain a voice model evaluation framework — measuring WER across accents, languages, and noise conditions for STT; naturalness, latency, and pronunciation accuracy for TTS.
- Enable new model architectures in our serving stack as the field evolves, including audio-native LLMs, codec-based models (SNAC), and speech-to-speech systems.
- Collaborate with model partners to integrate and optimize their models (Cartesia, Deepgram, Rime, and others) running on Together's infrastructure.
- Profile and debug performance across the full inference stack — from GPU kernels to framework-level bottlenecks — and ship measurable improvements.
- Work with the platform engineering side of the team to ensure the serving layer meets the latency and reliability requirements of real-time voice APIs.
- Contribute to voice model fine-tuning capabilities (STT and TTS) as we enable customers to build differentiated voice experiences on Together.
- Lay the groundwork for multiple new products down the line.
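The evaluation work described in the responsibilities above centers on word error rate (WER) for STT. As a minimal sketch of the metric itself (not Together's actual framework), WER is word-level edit distance normalized by the reference length:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # Dynamic-programming table for edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution out of four reference words
print(wer("the cat sat down", "the cat sat dawn"))  # 0.25
```

A production framework would layer this metric over per-accent, per-language, and per-noise-condition test sets, as the responsibilities above describe.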
Requirements
- 5+ years of experience in ML engineering, with a focus on model serving, inference optimization, or ML infrastructure.
- Hands-on experience with LLM serving engines (vLLM, SGLang, TensorRT-LLM, or similar) — comfortable reading and modifying engine internals, not just using APIs.
- Strong proficiency in Python and PyTorch; experience with GPU profiling and optimization (CUDA, memory management, kernel-level debugging).
- Track record of shipping ML systems to production with measurable performance improvements.
- Strong product sense — you think about what developers building voice apps actually need, not just what's technically interesting.
- Comfort working on a small, early-stage team where you'll wear multiple hats and move fast.
- Experience with speech and audio ML (ASR, TTS architectures, audio signal processing) is a strong plus but not required — you can learn this quickly if you have strong ML engineering fundamentals.
- Familiarity with audio codecs and tokenization schemes (SNAC, Encodec, DAC) is a plus.
- Experience training or fine-tuning speech models is a plus.
- Bachelor's or Master's degree in Computer Science, Electrical Engineering, or related field, or equivalent practical experience.
About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers and engineers on our journey to build the next generation of AI infrastructure.
Compensation
We offer competitive compensation, startup equity, health insurance and other competitive benefits. The US base salary range for this full-time position is: $200,000 - $260,000 + equity + benefits. Our salary ranges are determined by location, level and role. Individual compensation will be determined by experience, skills, and job-related knowledge.
Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
Please see our privacy policy at https://www.together.ai/privacy
Skills
- Python
- PyTorch
- Machine Learning Infrastructure
- CUDA
- vLLM
- TensorRT-LLM
- GPU Profiling
- SGLang
- Text-to-Speech
- Automatic Speech Recognition
- Audio Signal Processing
Possible Interview Questions
Probing experience with the optimization tools named in the posting.
Tell us about your experience optimizing models with TensorRT-LLM or SGLang. What latency figures did you achieve?
Voice AI must run in real time, which is critical to user experience.
How would you design a batching strategy for streaming audio to minimize time-to-first-byte (TTFB)?
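One common answer shape for a question like this is deadline-based micro-batching: collect concurrent streaming requests into a batch, but never hold the oldest request longer than a fixed deadline, so queueing delay (one component of TTFB) stays bounded. A toy sketch, with all class and parameter names hypothetical:

```python
import time
from collections import deque
from typing import Optional

class MicroBatcher:
    """Toy deadline-based micro-batcher for streaming audio chunks.

    A batch is flushed when either max_batch requests are queued or the
    oldest request has waited max_wait_ms; the deadline caps the queueing
    component of time-to-first-byte.
    """

    def __init__(self, max_batch: int = 8, max_wait_ms: float = 10.0):
        self.max_batch = max_batch
        self.max_wait_ms = max_wait_ms
        self.queue: deque = deque()  # (arrival_time, chunk) pairs

    def submit(self, chunk) -> None:
        self.queue.append((time.monotonic(), chunk))

    def maybe_flush(self, now: Optional[float] = None):
        """Return a list of chunks if a flush condition is met, else None."""
        if not self.queue:
            return None
        if now is None:
            now = time.monotonic()
        oldest_wait_ms = (now - self.queue[0][0]) * 1000.0
        if len(self.queue) >= self.max_batch or oldest_wait_ms >= self.max_wait_ms:
            batch = [chunk for _, chunk in self.queue]
            self.queue.clear()
            return batch
        return None
```

In a real serving stack this trade-off usually lives inside the engine's continuous-batching scheduler rather than application code, but the size/deadline pair is the knob being tuned either way.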
The role calls for GPU profiling skills.
Describe your process for finding performance bottlenecks in an inference stack. Which profiling tools do you use most often?
Testing understanding of the specifics of audio data.
What are the main challenges of working with audio tokenizers (e.g., SNAC or Encodec) compared with text tokenizers?
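For intuition on that comparison, a back-of-the-envelope token-rate calculation (all numbers here are illustrative assumptions, not the published specs of SNAC or Encodec) shows one reason audio tokenization is harder to serve: a codec emits several parallel codebook streams at a fixed frame rate, so token throughput far exceeds that of text.

```python
# Illustrative, assumed numbers -- not the published SNAC/Encodec specs.
frame_rate_hz = 75       # codec frames per second of audio (assumed)
num_codebooks = 4        # parallel residual codebooks per frame (assumed)

audio_tokens_per_sec = frame_rate_hz * num_codebooks
text_tokens_per_sec = 4  # rough rate of spoken text after BPE (assumed)

print(audio_tokens_per_sec)                        # 300
print(audio_tokens_per_sec / text_tokens_per_sec)  # 75.0
```

Under these assumed numbers, one second of audio costs roughly 75x more tokens than the same second of transcribed speech, which directly pressures batching and KV-cache budgets.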
Assessing the ability to adapt to new architectures.
How do you approach bringing a new research architecture (e.g., an audio-native LLM) into high-performance production?