- Country
- USA
- Salary
- $225,000 – $315,000

HPC Specialist Solutions Architect
An exceptional opening at a fast-growing company working with cutting-edge hardware (H200, B200). High compensation (up to $315k OTE), a strong benefits package, and the option of remote work make this one of the best offers on the AI infrastructure market.
Role complexity
The role demands a rare combination of deep knowledge of HPC (high-performance computing), networking protocols (InfiniBand/RoCE), and modern cloud technologies (Kubernetes, Terraform). Candidates must have expert-level knowledge of NVIDIA architectures and be able to optimize distributed training of AI models.
Salary analysis
The offered range of $225k–$315k OTE matches, and even slightly exceeds, market standards for highly specialized HPC architects in the US. The upper end of the range is typical of Senior/Staff positions at leading technology companies.
Cover letter
I am writing to express my strong interest in the HPC Specialist Solutions Architect position at Nebius. With extensive experience in designing and optimizing large-scale GPU clusters and a deep understanding of NVIDIA’s Hopper and Blackwell architectures, I am eager to contribute to your mission of leading the new era of cloud computing for the global AI economy. My background in Kubernetes orchestration, InfiniBand networking, and MLOps toolchains aligns perfectly with the technical requirements of this role.
Throughout my career, I have focused on bridging the gap between complex infrastructure and high-performance AI workloads. I have a proven track record of implementing scalable HPC environments and tuning resource utilization to achieve maximum throughput for distributed training. I am particularly impressed by Nebius's commitment to innovation with minimal bureaucracy, and I am confident that my expertise in RDMA stacks, Terraform, and container runtimes will help drive the success of your AI Cloud customers.
Join Nebius to design the future of AI infrastructure on the latest NVIDIA technology!
Job description
Why work at Nebius
Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.
Where we work
Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 1400 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.
Customer experience:
Customer experience at Nebius AI Cloud involves tackling customers’ challenges and directly impacting their success by solving real-world AI and ML problems at massive GPU cloud scale. You’ll not only resolve issues, but play a key role in shaping clients’ business success by optimizing their AI solutions. Working with advanced GPUs such as H200, B200 and GB200, as well as modern ML frameworks, you’ll influence the development of the Nebius AI Cloud and gain experience at the intersection of infrastructure and AI. With minimal bureaucracy, you’ll have the freedom to innovate, take ownership and drive change. Opportunities for growth are abundant in this vibrant and supportive professional community.
The role
We are seeking a Specialist HPC Infrastructure Solutions Architect to design, build, and optimize next-generation high-performance computing (HPC) platforms for AI, simulation, and large-scale data processing workloads. The ideal candidate combines deep knowledge of cloud-native architecture, Kubernetes orchestration, networking, and HPC system design with hands-on experience implementing NVIDIA GPU-based compute environments and MLOps toolchains. This role sits at the intersection of infrastructure engineering, accelerated computing, and AI systems design, shaping the foundation for high-throughput, low-latency distributed workloads in cloud environments.
You’re welcome to work remotely from the United States or Canada.
Your responsibilities will include:
- Architect and implement scalable HPC clusters optimized for AI, simulation, and distributed training, leveraging container orchestration frameworks and schedulers (e.g., Kubernetes, Slurm).
- Design and integrate GPU-accelerated compute infrastructures featuring NVIDIA Hopper and Blackwell architectures, NVLink/NVSwitch, and InfiniBand/RoCE interconnects.
- Deploy and manage GPU Operator and Network Operator stacks for automated lifecycle management of GPU and high-speed networking components.
- Design and validate cloud HPC environments, focusing on low-latency, high-bandwidth networking, multi-GPU scaling, and efficient workload scheduling.
- Lead reference architectures for AI/ML model training, data pipelines, and MLOps integrations using modern observability and CI/CD tooling.
- Collaborate with hardware vendors (e.g., NVIDIA) and cloud providers to evaluate and optimize emerging HPC and GPU technologies.
- Benchmark system performance, identify bottlenecks, and tune resource utilization across compute, network, and storage tiers.
- Provide expert-level technical guidance to customers, internal teams, and partners on HPC architecture patterns, operational excellence reviews, and customer engagements.
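As a hedged illustration of the distributed-training side of these responsibilities: torchrun-style launchers export per-process rendezvous variables (RANK, WORLD_SIZE, LOCAL_RANK) that every training worker reads before joining its process group. A minimal stdlib sketch of that handshake, with the node-sizing arithmetic an architect performs when planning a multi-node job:

```python
import os

def launcher_env() -> dict[str, int]:
    """Read the rendezvous variables that torchrun-style launchers
    export for each worker process. The defaults model a single-process
    run with no launcher present."""
    return {
        "rank": int(os.environ.get("RANK", "0")),              # global worker index
        "world_size": int(os.environ.get("WORLD_SIZE", "1")),  # total workers in the job
        "local_rank": int(os.environ.get("LOCAL_RANK", "0")),  # GPU index on this node
    }

def gpus_per_node(world_size: int, num_nodes: int) -> int:
    """Capacity check used when sizing a multi-node job: the global
    world size must divide evenly across the nodes."""
    if world_size % num_nodes != 0:
        raise ValueError("world_size must be a multiple of num_nodes")
    return world_size // num_nodes
```

For example, a 16-GPU job spread over two 8-GPU nodes yields `gpus_per_node(16, 2) == 8`, which is the `--nproc-per-node` value a launcher would be given.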
We expect you to have:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field (Ph.D. a plus)
- 3+ years of hands-on experience architecting HPC or large-scale GPU clusters.
- Expertise in Linux systems, Kubernetes, container runtimes (containerd, CRI-O, Docker), and related CI/CD practices.
- Strong understanding of HPC networking protocols and RDMA stacks (InfiniBand, NVLink/NVSwitch)
- Deep understanding of storage and I/O optimization for large datasets (Ceph, Lustre, NFS, GPUDirect Storage)
- Familiarity with Terraform, Ansible, Helm, and GitOps workflows.
- Strong scripting skills in Python or Bash for automation and tool integration.
- Excellent communication and documentation skills; ability to lead design reviews and customer engagements.
It will be an added bonus if you have:
- Proficiency with the NVIDIA GPU ecosystem: GPU Operator, MIG, DCGM, NCCL, Nsight, and CUDA stack management.
- Experience designing or managing AI/ML pipelines via MLflow, Kubeflow, NeMo, or similar frameworks.
- Experience with cloud-native HPC offerings (Slurm, LSF, PBS, etc.).
- Background in designing multi-tenant GPU infrastructures or AI training farms.
- Exposure to distributed ML frameworks (PyTorch DDP, DeepSpeed, Megatron).
- Knowledge of observability for HPC (Prometheus, DCGM Exporter, Grafana, NVIDIA NGC monitoring tools)
- Contribution to open-source HPC/CUDA/Kubernetes projects is a strong plus
Key Employee Benefits:
- Health Insurance: 100% company-paid medical, dental, and vision coverage for employees and families.
- 401(k) Plan: Up to 4% company match with immediate vesting.
- Parental Leave: 20 weeks paid for primary caregivers, 12 weeks for secondary caregivers.
- Remote Work Reimbursement: Up to $85/month for mobile and internet.
- Disability & Life Insurance: Company-paid short-term, long-term, and life insurance coverage.
Compensation
We offer competitive salaries ranging from $225k to $315k OTE (On-Target Earnings), plus equity, based on your experience, skills, and location.
Join Nebius Today!
What we offer
- Competitive salary and comprehensive benefits package.
- Opportunities for professional growth within Nebius.
- Flexible working arrangements.
- A dynamic and collaborative work environment that values initiative and innovation.
We’re growing and expanding our products every day. If you’re up to the challenge and are excited about AI and ML as much as we are, join us!
Skills
- Kubernetes
- NVIDIA GPU
- InfiniBand
- RoCE
- Terraform
- Ansible
- Python
- Bash
- Docker
- Slurm
- PyTorch
- CUDA
- Prometheus
- Grafana
- Ceph
- Lustre
- Helm
- GitOps
Possible interview questions
Testing understanding of network communication specifics in HPC clusters.
Can you explain the difference between InfiniBand and RoCE v2 in the context of distributed AI training, and how the choice of protocol affects NCCL performance?
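A candidate might anchor the answer in NCCL's tuning surface: native InfiniBand gets credit-based, lossless transport for free, while RoCE v2 carries RDMA over routable UDP/IP and needs the right GID entry plus a lossless fabric (PFC/ECN) to match it. A hedged sketch of the corresponding environment setup (HCA names, interfaces, and the GID index are placeholders that vary per cluster):

```python
import os

# Illustrative NCCL settings for a native InfiniBand fabric.
INFINIBAND = {
    "NCCL_IB_HCA": "mlx5_0,mlx5_1",  # HCAs NCCL may use for RDMA traffic
    "NCCL_IB_DISABLE": "0",          # keep the RDMA transport enabled
}

# RoCE v2 settings: select the RoCE v2 GID entry (index 3 on many NICs)
# and name the Ethernet interface NCCL should use for its TCP bootstrap.
ROCE_V2 = {
    "NCCL_IB_HCA": "mlx5_0",
    "NCCL_IB_GID_INDEX": "3",        # pick the RoCE v2 GID entry
    "NCCL_SOCKET_IFNAME": "eth0",    # interface for NCCL's bootstrap channel
}

def apply_transport(env: dict[str, str]) -> None:
    """Export the chosen transport settings before the first NCCL init;
    NCCL reads them once when communicators are created."""
    os.environ.update(env)
```

The performance point the question is after: with a properly tuned lossless Ethernet fabric, RoCE v2 collectives can approach InfiniBand throughput, but misconfigured PFC/ECN shows up directly as NCCL all-reduce stalls at scale.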
Assessing GPU optimization skills.
What strategies do you use to minimize latency when moving data to GPUs (GPUDirect Storage), and how do you configure the GPU Operator in Kubernetes?
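For the GPU Operator half of the question, the answer usually comes down to Helm chart values. A hypothetical sketch of values one might pass (the key names here are assumptions for illustration and should be verified against the chart's own values.yaml), expressed as a Python dict plus a small helper that renders `--set`-style overrides:

```python
# Hypothetical GPU Operator Helm values, expressed as a Python dict for
# illustration; in practice this would be a values.yaml handed to
# `helm install gpu-operator nvidia/gpu-operator -f values.yaml`.
gpu_operator_values = {
    "driver": {"enabled": True},        # let the operator manage the GPU driver
    "toolkit": {"enabled": True},       # container toolkit for GPU-aware runtimes
    "dcgmExporter": {"enabled": True},  # expose GPU metrics to Prometheus
    "mig": {"strategy": "single"},      # MIG partitioning policy
}

def flatten(values: dict, prefix: str = "") -> list[str]:
    """Render nested values as Helm --set style key=value pairs."""
    out: list[str] = []
    for key, val in values.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(val, dict):
            out.extend(flatten(val, path))
        else:
            out.append(f"{path}={str(val).lower() if isinstance(val, bool) else val}")
    return out
```

The GPUDirect Storage half is then about cutting the CPU bounce buffer out of the storage-to-GPU path, which pairs naturally with the operator-managed driver stack above.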
Testing experience with job schedulers.
How would you implement a hybrid environment combining the flexibility of Kubernetes with the batch-processing capabilities of Slurm for LLM training?
Assessing troubleshooting and monitoring skills.
Describe your approach to identifying performance bottlenecks when scaling model training to hundreds of nodes using DCGM and Grafana.
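A concrete way to frame the monitoring answer is via PromQL over DCGM Exporter metrics in Grafana. A hedged sketch (the metric names below are DCGM Exporter defaults; the `job` label is an assumption that depends on the scrape configuration):

```python
def sm_util(job: str) -> str:
    """Average GPU utilization across all GPUs of one training job."""
    return f'avg(DCGM_FI_DEV_GPU_UTIL{{job="{job}"}})'

def straggler_spread(job: str) -> str:
    """Max-minus-min utilization gap: a widening spread across ranks
    often points at a straggler node or an unbalanced data pipeline."""
    return (f'max(DCGM_FI_DEV_GPU_UTIL{{job="{job}"}}) - '
            f'min(DCGM_FI_DEV_GPU_UTIL{{job="{job}"}})')

def fb_used(job: str) -> str:
    """Peak framebuffer memory in use, to catch memory-bound scaling limits."""
    return f'max(DCGM_FI_DEV_FB_USED{{job="{job}"}})'
```

Uniformly low utilization tends to implicate the input pipeline or interconnect, while a large straggler spread implicates individual nodes; either signal narrows where to apply Nsight or NCCL-level profiling next.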
Testing architectural thinking.
What major challenges have you encountered when deploying NVIDIA Blackwell or Hopper architectures, and how did you resolve them at the infrastructure level?