yandex
anthropic
Страна
США
Зарплата
350 000 $ – 500 000 $
+500% приглашений

Откликайтесь
на вакансии с ИИ

Ускорим процесс поиска работы
ГибридПолная занятость

Research Engineer / Scientist, Alignment Science

Оценка ИИ

Это одна из самых престижных позиций в индустрии ИИ на данный момент. Сочетание миссии по спасению человечества, работы с топовыми исследователями и чрезвычайно высокой компенсации делает эту вакансию исключительной.


Вакансия из Quick Offer Global, списка международных компаний
Пожаловаться

Сложность вакансии

ЛегкоСложно
Оценка ИИ

Роль требует редкого сочетания навыков: глубокой экспертизы в ML-инженерии, опыта в исследованиях безопасности ИИ (AI Safety) и способности работать с огромными вычислительными кластерами. Высокая планка отбора обусловлена необходимостью решать задачи, для которых еще не существует стандартных решений.

Анализ зарплаты

Медиана400 000 $
Рынок300 000 $ – 550 000 $
Оценка ИИ

Предлагаемая зарплата ($350k - $500k) находится на верхнем пределе рынка даже для Сан-Франциско. Она значительно превышает средние показатели для Senior/Staff инженеров в обычных технологических компаниях, соответствуя уровню элитных AI-лабораторий (OpenAI, Google DeepMind).

Сопроводительное письмо

I am writing to express my strong interest in the Research Engineer / Scientist position within the Alignment Science team at Anthropic. With a background in machine learning and a deep commitment to the development of safe, steerable AI systems, I have followed Anthropic’s work on Constitutional AI and Scalable Oversight with great admiration. My experience in building robust ML experiments and my familiarity with technical AI safety research align perfectly with your mission to create beneficial AI that remains helpful and honest as it scales.

In my previous work, I have focused on empirical research and the practical challenges of training large-scale models. I am particularly drawn to your current focus on 'AI Control' and 'Automated Alignment Research,' as I believe that developing automated systems to detect and mitigate risks is crucial for the next generation of frontier models. I am a collaborative engineer who thrives in fast-paced environments and enjoys the 'big science' approach that Anthropic champions. I look forward to the possibility of contributing to your Responsible Scaling Policy and helping ensure that Claude remains a leader in safety and reliability.

+250% к просмотрам

Составьте идеальное письмо к вакансии с ИИ-агентом

Составьте идеальное письмо к вакансии с ИИ-агентом

Откликнитесь в anthropic уже сейчас

Присоединяйтесь к Anthropic, чтобы формировать будущее безопасного ИИ и работать над самыми сложными задачами выравнивания нейросетей.

Описание вакансии

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role:

You want to build and run elegant and thorough machine learning experiments to help us understand and steer the behavior of powerful AI systems. You care about making AI helpful, honest, and harmless, and are interested in the ways that this could be challenging in the context of human-level capabilities. You could describe yourself as both a scientist and an engineer. As a Research Engineer on Alignment Science, you'll contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems (like those we would designate as ASL-3 or ASL-4 under our Responsible Scaling Policy), often in collaboration with other teams including Interpretability, Fine-Tuning, and the Frontier Red Team.

Our blog provides an overview of topics that the Alignment Science team is either currently exploring or has previously explored. Our current topics of focus include...

  • Scalable Oversight:Developing techniques to keep highly capable models helpful and honest, even as they surpass human-level intelligence in various domains.
  • AI Control:Creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.
  • Alignment Stress-testing: Creating model organisms of misalignment to improve our empirical understanding of how alignment failures might arise.
  • Automated Alignment Research:Building and aligning a system that can speed up & improve alignment research.
  • Alignment Assessments: Understanding and documenting the highest-stakes and most concerning emerging properties of models through pre-deployment alignment and welfare assessments (see our Claude 4 System Card), misalignment-risk safety cases, and coordination with third-party evaluators.
  • Safeguards Research: Developing robust defenses against adversarial attacks, comprehensive evaluation frameworks for model safety, and automated systems to detect and mitigate potential risks before deployment.
  • Model Welfare:Investigating and addressing potential model welfare, moral status, and related questions. See our program announcement and welfare assessment in the Claude 4 system card for more.

Note: For this role, we conduct all interviews in Python and prefer candidates to be based in the Bay Area.

Representative projects:

  • Testing the robustness of our safety techniques by training language models to subvert our safety techniques, and seeing how effective they are at subverting our interventions.
  • Run multi-agent reinforcement learning experiments to test out techniques like AI Debate.
  • Build tooling to efficiently evaluate the effectiveness of novel LLM-generated jailbreaks.
  • Write scripts and prompts to efficiently produce evaluation questions to test models’ reasoning abilities in safety-relevant contexts.
  • Contribute ideas, figures, and writing to research papers, blog posts, and talks.
  • Run experiments that feed into key AI safety efforts at Anthropic, like the design and implementation of our Responsible Scaling Policy.

You may be a good fit if you:

  • Have significant software, ML, or research engineering experience
  • Have some experience contributing to empirical AI research projects
  • Have some familiarity with technical AI safety research
  • Prefer fast-moving collaborative projects to extensive solo efforts
  • Pick up slack, even if it goes outside your job description
  • Care about the impacts of AI

Strong candidates may also:

  • Have experience authoring research papers in machine learning, NLP, or AI safety
  • Have experience with LLMs
  • Have experience with reinforcement learning
  • Have experience with Kubernetes clusters and complex shared codebases

Candidates need not have:

  • 100% of the skills needed to perform the job
  • Formal certifications or education credentials

The annual compensation range for this role is listed below.

For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.

Annual Salary:

$350,000—$500,000 USD

Logistics

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience. Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.  Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.

How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process

+400% к собеседованиям

Создайте идеальное резюме с помощью ИИ-агента

Создайте идеальное резюме с помощью ИИ-агента

Навыки

  • Python
  • Machine Learning
  • AI Safety
  • Research Engineering
  • Large Language Models
  • Kubernetes
  • NLP
  • Reinforcement Learning

Возможные вопросы на собеседовании

Проверка понимания ключевой концепции Anthropic по обеспечению честности моделей.

Как бы вы спроектировали систему масштабируемого надзора (Scalable Oversight) для оценки ответов модели в области, где экспертных знаний человека уже недостаточно?

Оценка практических навыков работы с RL, который критически важен для выравнивания моделей.

Опишите ваш опыт работы с обучением с подкреплением (RL). С какими основными трудностями вы сталкивались при настройке функций вознаграждения для предотвращения нежелательного поведения модели?

Проверка навыков 'Red Teaming' и понимания уязвимостей LLM.

Предложите метод автоматизированного поиска 'jailbreaks' (взломов) для модели, который был бы эффективнее простого перебора промптов.

Оценка инженерных навыков работы с инфраструктурой.

Расскажите о вашем опыте работы с Kubernetes и распределенным обучением. Как вы оптимизируете использование ресурсов при проведении масштабных экспериментов?

Проверка соответствия философии безопасности компании.

Что вы считаете самым большим риском при переходе от моделей уровня ASL-3 к ASL-4 и как эмпирические исследования могут помочь его минимизировать?

Похожие вакансии

Itvolna.tech
400 000 ₽ – 430 000 ₽

MLOps Engineer (Python)

УдалённоРоссия
Python · FastAPI · aiohttp · SQLAlchemy · asyncio · Docker · Kubernetes · Kafka · Redis · PostgreSQL · MLOps · LLM · RAG · AutoML
+14 навыков
JETLYN
310 000 ₽ – 430 000 ₽

AI Engineer (CV & Navigation)

SeniorУдалённоРоссия
Computer Vision · Python · PyTorch · TensorFlow · SLAM · Deep Learning · Augmented Reality
+7 навыков
NDA
Не указана

Middle, Middle+, Senior GenAI/LLM Разработчик

SeniorУдалённоРоссия
n8n · JSON · PostgreSQL · REST · GraphQL · OAuth2 · FastAPI · JavaScript · TypeScript · React · Python · LangChain · RAG · pgvector · Qdrant · Milvus · Prompt Engineering
+17 навыков
QLAN
Не указана

Middle / Senior GenAI Engineer (CV)

SeniorУдалённоРоссия
Computer Vision · Diffusion Models · Stable Diffusion · SDXL · LoRA · UNet · Python · PyTorch · Machine Learning · Image Generation · Video Generation
+11 навыков
Academy of Digital Industries (ADI)
960 $ – 1 680 $

AI Engineer / AI Mentor

УдалённоКазахстан
Python · NumPy · Pandas · PyTorch · TensorFlow · LLM · NLP · Computer Vision · Machine Learning · Data Science
+10 навыков
NDA
90 000 ₽

Junior разработчик agent AI-систем

JuniorУдалённоРоссия
Python · FastAPI · OpenAI · PostgreSQL · Nginx · Ubuntu · RAG · Vector Database · Embeddings · Figma
+10 навыков
более 1000 офферов получено
4.9

1000+ офферов получено

Устали искать работу? Мы найдём её за вас

Quick Offer улучшит ваше резюме, подберёт лучшие вакансии и откликнется за вас. Результат — в 3 раза больше приглашений на собеседования и никакой рутины!

anthropic
Страна
США
Зарплата
350 000 $ – 500 000 $