- Country
- USA
- Salary
- $350,000 – $500,000

[Expression of Interest] Research Scientist / Engineer, Honesty
This is a position at one of the world's leading AI labs, with exceptionally high compensation and the opportunity to work on critical AI safety problems. An ideal fit for top-tier researchers.
Role difficulty
The role demands exceptional ML research skills, a deep understanding of the mechanisms behind LLM hallucinations, and hands-on RLHF experience. The high bar for entry reflects the MS/PhD requirement and the need for specific experience in model calibration.
Salary analysis
The offered range ($350k–$500k) sits at the very top of the market even for New York and San Francisco, well above typical compensation for Senior/Staff roles in the industry.
Cover letter
I am writing to express my strong interest in the Research Scientist/Engineer position within the Finetuning Alignment team at Anthropic. With a solid background in machine learning and a deep commitment to AI safety, I have closely followed Anthropic’s pioneering work on Constitutional AI and interpretability. My experience in developing robust evaluation frameworks and fine-tuning large language models aligns perfectly with your mission to minimize hallucinations and enhance truthfulness.
In my previous work, I have focused on calibration and uncertainty estimation, ensuring that models not only provide accurate answers but also communicate their confidence levels effectively. I am particularly excited about the opportunity to design novel RL environments and data curation pipelines that reward honesty. I am eager to bring my technical expertise in Python and my passion for building steerable, reliable AI systems to your collaborative research environment in New York.
Job description
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role:
As a Research Scientist/Engineer focused on honesty within the Finetuning Alignment team, you'll spearhead the development of techniques to minimize hallucinations and enhance truthfulness in language models. You will build robust systems that are accurate, reflect their true levels of confidence, and avoid being deceptive or misleading. This work is critical to ensuring our models maintain high standards of accuracy and honesty across diverse domains.
Note: The team is based in New York, so we have a preference for candidates who can be based there. For this role, we conduct all interviews in Python. We have filled our headcount for 2025; however, we are leaving this form open as an expression of interest since we expect to grow the team in the future, and we will review your application when we do. As such, you may not hear back on your application to this team until the new year.
Responsibilities:
- Design and implement novel data curation pipelines to identify, verify, and filter training data for accuracy given the model’s knowledge
- Develop specialized classifiers to detect potential hallucinations or miscalibrated claims made by the model
- Create and maintain comprehensive honesty benchmarks and evaluation frameworks
- Implement techniques to ground model outputs in verified information, such as search and retrieval-augmented generation (RAG) systems
- Design and deploy human feedback collection specifically for identifying and correcting miscalibrated responses
- Design and implement prompting pipelines to generate data that improves model accuracy and honesty
- Develop and test novel RL environments that reward truthful outputs and penalize fabricated claims
- Create tools to help human evaluators efficiently assess model outputs for accuracy
You may be a good fit if you:
- Have an MS/PhD in Computer Science, ML, or related field
- Possess strong programming skills in Python
- Have industry experience with language model finetuning and classifier training
- Show proficiency in experimental design and statistical analysis for measuring improvements in calibration and accuracy
- Care about AI safety and the accuracy and honesty of both current and future AI systems
- Have experience in data science or the creation and curation of datasets for finetuning LLMs
- Have an understanding of various metrics of uncertainty, calibration, and truthfulness in model outputs
Strong candidates may also have:
- Published work on hallucination prevention, factual grounding, or knowledge integration in language models
- Experience with fact-grounding techniques
- Background in developing confidence estimation or calibration methods for ML models
- A track record of creating and maintaining factual knowledge bases
- Familiarity with RLHF specifically applied to improving model truthfulness
- Experience working with crowd-sourcing platforms and human feedback collection systems
- Experience developing evaluations of model accuracy or hallucinations
Join us in our mission to ensure advanced AI systems behave reliably and ethically while staying aligned with human values.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Annual Salary:
$350,000—$500,000 USD
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
Skills
- Python
- Machine Learning
- Large Language Models
- Fine-tuning
- RAG
- Natural Language Processing
- Statistical Analysis
- Data Science
- Reinforcement Learning
- RLHF
Possible interview questions
Tests understanding of the fundamental problems of model honesty.
How would you design an RLHF reward scheme that effectively separates sycophancy from factual accuracy?
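One illustrative decomposition (a minimal sketch, not a claim about Anthropic's actual reward design): score accuracy and sycophancy with separate judge signals, and make unsupported agreement with the user strictly reward-negative so the two signals cannot be conflated. The `Judgment` fields and the penalty weight below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Judgment:
    factual_score: float    # 0..1 from a fact-checking judge (hypothetical)
    agrees_with_user: bool  # does the answer mirror the user's stated belief?
    evidence_cited: bool    # is that agreement grounded in evidence?

def honesty_reward(j: Judgment, sycophancy_penalty: float = 0.5) -> float:
    """Pay for factual accuracy; subtract a penalty whenever the model
    agrees with the user without evidence, so agreement alone is never
    the cheapest path to reward."""
    reward = j.factual_score
    if j.agrees_with_user and not j.evidence_cited:
        reward -= sycophancy_penalty
    return reward
```

The design point is that agreement with the user never raises reward on its own; only evidence-backed accuracy does.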
Assesses practical data-handling skills.
Describe your approach to building an automated data-filtering pipeline that catches latent factual errors the model could otherwise learn as truth.
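One way to frame such a pipeline is claim extraction followed by per-claim verification, dropping any example that contains an unverified claim. In this sketch, `extract_claims` and `verify` are placeholders for whatever splitter and checker (an NLI model, a retrieval lookup, or a human review queue) you actually trust:

```python
from typing import Callable, Iterable

def filter_training_examples(
    examples: Iterable[str],
    extract_claims: Callable[[str], list[str]],
    verify: Callable[[str], float],
    threshold: float = 0.9,
) -> list[str]:
    """Keep only examples whose every extracted claim scores above
    `threshold` under the supplied verifier; drop the rest."""
    return [
        text for text in examples
        if all(verify(c) >= threshold for c in extract_claims(text))
    ]

# Toy usage: sentence-level "claims" checked against a tiny whitelist.
facts = {"Paris is the capital of France."}
split = lambda t: [s.strip() + "." for s in t.split(".") if s.strip()]
check = lambda c: 1.0 if c in facts else 0.0

print(filter_training_examples(
    ["Paris is the capital of France.", "The sun orbits the Earth."],
    split, check,
))  # -> ['Paris is the capital of France.']
```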
Tests knowledge of calibration.
Beyond ECE (Expected Calibration Error), which metrics would you use to evaluate a model's confidence in its answers in open-ended domains?
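For reference, ECE is the bin-weighted average gap between stated confidence and empirical accuracy, and the Brier score is a common complement since it penalizes both miscalibration and poor discrimination. A minimal NumPy sketch, assuming the usual 10-bin equal-width scheme:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: the bin-weighted average gap between mean stated
    confidence and empirical accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap
    return float(ece)

def brier_score(confidences, correct):
    """Brier score: mean squared gap between stated confidence and the
    0/1 outcome; lower is better."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    return float(np.mean((confidences - correct) ** 2))
```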
Assesses experience with RAG and grounding.
In what cases can RAG actually increase the likelihood of hallucinations, and how would you control for that?
Tests experimental design skills.
How would you determine whether a model error stems from knowledge missing from its parameters or from a failure to retrieve that knowledge at generation time?
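One hedged way to probe this experimentally: compare free-form generation against the model's ability to merely discriminate the gold answer from distractors. The `generate` and `score` callables below are hypothetical stand-ins for your model's sampling and scoring interfaces:

```python
from typing import Callable

def diagnose_failure(
    question: str,
    gold: str,
    distractors: list[str],
    generate: Callable[[str], str],      # free-form answer (hypothetical)
    score: Callable[[str, str], float],  # e.g. answer log-likelihood
) -> str:
    """If free-form generation misses the gold answer but the model still
    ranks it above distractors, the knowledge is plausibly stored in the
    parameters and the failure is in extraction at generation time; if it
    cannot even discriminate, the knowledge is likely absent."""
    if gold.lower() in generate(question).lower():
        return "no failure"
    candidates = [gold] + distractors
    best = max(candidates, key=lambda a: score(question, a))
    return "extraction failure" if best == gold else "missing knowledge"
```

In practice you would aggregate this over a benchmark of questions rather than diagnose single items.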