- Country
- USA
- Salary
- $166,000 – $225,000
Senior Software Engineer, Model Serving
An exceptional opening at an industry-leading company (Databricks) with a competitive salary, work on cutting-edge technology (LLMs, GPU serving), and a strong engineering culture. High product impact and the chance to work alongside the creators of Apache Spark.
Role complexity
The complexity is high: the role demands deep knowledge of distributed systems, GPU/CPU performance optimization, and experience with high-load ML infrastructure. It involves not only hands-on development but also architectural leadership in the fast-growing Model Serving segment.
Salary analysis
The offered range of $166k – $225k is fully in line with market rates for senior engineers in San Francisco. The upper bound even sits slightly above the median, which is typical of Tier-1 technology companies.
Cover letter
I am writing to express my strong interest in the Senior Software Engineer, Model Serving position at Databricks. With over five years of experience in building large-scale distributed systems and a deep focus on high-throughput, low-latency infrastructure, I have closely followed Databricks' innovations in the Data Intelligence Platform. My background in optimizing GPU/CPU inference workloads and developing intelligent autoscaling systems aligns perfectly with your mission to provide a world-class serving platform for LLMs and traditional ML models.
In my previous roles, I have successfully led technical initiatives that improved system availability and reduced operational costs, which are key objectives for the Model Serving team. I am particularly drawn to this role because of the opportunity to work at the intersection of infrastructure and AI research, collaborating with the creators of Apache Spark and MLflow. I am confident that my expertise in system design and my customer-focused approach will allow me to make immediate contributions to the scalability and reliability of Databricks' serving infrastructure.
Job description
At Databricks, we are passionate about enabling data teams to solve the world's toughest problems — from making the next mode of transportation a reality to accelerating the development of medical breakthroughs. We do this by building and running the world's best data and AI infrastructure platform so our customers can use deep data insights to improve their business.
Databricks’ Model Serving product provides enterprises with a unified, scalable, and governed platform to deploy and manage AI/ML models — from traditional ML to fine-tuned and proprietary large language models. It offers real-time, low-latency inference, governance, monitoring, and lineage. As AI adoption accelerates, Model Serving is a core pillar of the Databricks platform, enabling customers to operationalize models at scale with strong SLAs and cost efficiency.
As a Senior Engineer, you’ll play a critical role in shaping both the product experience and the foundational infrastructure of Model Serving. You will design and build systems that enable high-throughput, low-latency inference across CPU and GPU workloads, influence architectural direction, and collaborate closely across platform, product, infrastructure, and research teams to deliver a world-class serving platform.
The impact you will have:
- Design and implement core systems and APIs that power Databricks Model Serving, ensuring scalability, reliability, and operational excellence.
- Drive architectural decisions and trade-offs to optimize performance, throughput, autoscaling, and operational efficiency for CPU and GPU serving workloads.
- Contribute directly to key components across the serving infrastructure — from model container builds and deployment workflows to runtime systems like routing, caching, observability, and intelligent autoscaling — ensuring smooth and efficient operations at scale.
- Collaborate cross-functionally with product, platform, and research teams to translate customer needs into reliable and performant systems.
- Lead technical initiatives that improve latency, availability, and cost-effectiveness across both customer-facing and foundational serving layers.
- Establish best practices for code quality, testing, and operational readiness, and mentor other engineers through design reviews and technical guidance.
What we look for:
- 5+ years of experience building and operating large-scale distributed systems.
- Experience in model serving, inference systems, or related infrastructure (e.g., routing, scheduling, autoscaling, and observability).
- Strong foundation in algorithms, data structures, and system design as applied to large-scale, low-latency serving systems.
- Proven ability to deliver technically complex, high-impact initiatives that create measurable customer or business value.
- Experience building architecture for large-scale, performance-sensitive CPU/GPU inference systems.
- Strong communication skills and ability to collaborate across teams in fast-moving environments.
- Customer-focused mindset with the ability to align implementation details with product goals.
- Passion for mentoring, growing engineers, and fostering technical excellence.
Pay Range Transparency
Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.
Local Pay Range
$166,000—$225,000 USD
About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.
Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.
Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Skills
- System Design
- Distributed Systems
- API Design
- Observability
- Machine Learning Infrastructure
- Data Structures
- Algorithms
- GPU
- Inference
- Autoscaling
Possible interview questions
Tests understanding of the fundamentals of operating models in production.
How would you design an autoscaling system for Model Serving, given sharp traffic spikes and the long warm-up times of GPU containers?
Important for delivering the low latency promised in the job description.
Which caching strategies and network optimizations would you apply to minimize latency when serving large language models (LLMs)?
Assesses the ability to design fault-tolerant systems.
Describe your experience ensuring high availability in distributed systems. How do you handle partial node failures in an inference cluster?
Tests the ability to work with resource-intensive workloads.
What are the key differences between designing serving infrastructure for CPU versus GPU models from the scheduler's perspective?
Assesses leadership and collaboration skills.
Tell us about a time you had to make a difficult architectural decision that traded performance against total cost of ownership (TCO). How did you justify your choice?
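For the autoscaling question above, one common answer pattern combines reactive scaling on load with a pre-warmed pool of replicas that hides GPU container cold-start latency during traffic spikes. The sketch below is purely illustrative (the function name, parameters, and thresholds are assumptions, not Databricks' actual design):

```python
import math

def desired_replicas(qps: float, capacity_per_replica: float,
                     warm_pool: int = 1,
                     min_replicas: int = 1, max_replicas: int = 16) -> int:
    """Target replica count for a serving deployment.

    Scales reactively to current load, then adds a warm pool of
    pre-provisioned replicas so that a sudden traffic spike lands on
    already-warm GPU containers instead of waiting out a cold start.
    """
    # Replicas needed to absorb the current request rate.
    needed = math.ceil(qps / capacity_per_replica)
    # Add warm headroom, then clamp to the allowed range.
    return max(min_replicas, min(max_replicas, needed + warm_pool))
```

A production system would typically also add a scale-down stabilization window (hysteresis) to avoid thrashing, and possibly predictive scaling from traffic history; a strong interview answer would discuss those trade-offs.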
Similar vacancies
AI Engineer (CV & Navigation)
Senior / Lead LLM Engineer
Middle, Middle+, Senior GenAI/LLM Developer
Senior Python AI Developer
GenAI/LLM Developer
Middle / Senior GenAI Engineer (CV)