Research Engineer, Evals
A high score for the chance to work on critical AI problems with a talented team. Attractive terms (equity, insurance) and a location at the heart of AI innovation (San Francisco), though the lack of remote work may be a drawback for some.
Role complexity
The role requires a combination of strong engineering skills and a deep understanding of LLM systems. The main challenge is working with underspecified success criteria and the need to build custom benchmarks for high-stakes domains.
Salary analysis
The posting does not list a salary, but the market range for a Research Engineer in San Francisco is $160k–$230k plus a substantial equity package. Given the company's stage and the complexity of the work, the offer is likely at the upper end of the market.
Cover letter
I am writing to express my strong interest in the Research Engineer, Evals position at Variance. With a background in building robust evaluation pipelines for LLMs and a deep-seated belief that generic benchmarks are insufficient for high-stakes environments, I am excited by your mission to define what 'good' looks like in the messy, adversarial world of fraud and risk investigations.
In my previous experience, I have focused on creating domain-specific datasets and failure analysis loops that go beyond surface-level metrics. I thrive in environments where success criteria are initially ambiguous and require rigorous engineering to sharpen. I am particularly drawn to Variance's focus on craftsmanship and the challenge of building systems that protect people from abuse through scientific experimentation and continuous iteration.
Job description
Role
At Variance, we are teaching machines to make the hardest judgment calls at scale. That means building AI agents for the high-stakes gray area of risk investigations, fraud, and identity reviews.
We’re a small, talent-dense team in San Francisco working on a problem at the edge of what AI systems can reliably do: making good decisions in messy, adversarial, real-world environments. We focus on high-consequence systems problems where the edge cases matter most.
We’re looking for a Research Engineer to help define how we measure and improve model quality. You’ll build the benchmarks, datasets, tooling, and evaluation loops that tell us whether our systems are actually getting better on the tasks that matter. This role sits at the center of research, product, and engineering. It is about creating rigorous, domain-specific evaluations that reflect real customer workflows, expose meaningful failure modes, and drive the next generation of model and agent improvements.
You’re a fit if you:
- Care deeply about craftsmanship and have strong opinions about model quality, measurement, and experimental rigor
- Want to work on core model and agent behavior, not just surface-level product metrics
- Are excited by the challenge of defining what “good” looks like in messy, high-stakes environments
- Think in tight loops: hypothesis, benchmark design, evaluation, failure analysis, iteration
- Have strong engineering fundamentals and like building robust systems around ambiguous research problems
- Thrive in environments where success criteria are initially underspecified and need to be sharpened through work
- Are willing to do the work in the trenches: reviewing outputs, grading edge cases, curating datasets, and refining tasks until the evaluation actually measures what matters
- Care deeply about building systems that protect people from fraud, scams, and abuse
What you’ll do
- Build proprietary benchmarks and datasets to evaluate models and model systems on fraud, identity, and risk workflows
- Design and run offline and online evals that measure model performance on real customer tasks, not just abstract benchmarks
- Define quality metrics for judgment systems, including precision, calibration, consistency, abstention, and failure handling (a brief illustrative sketch follows this list)
- Study where models and agents break, and turn those failures into better evals, better datasets, and better training loops
- Build reusable evaluation tools and quality building blocks that can be used across different product surfaces and workflows
- Partner closely with research, engineering, product, and design to improve system quality through rigorous experimentation
- Help create a strong culture of scientific experimentation, clear measurement, and continuous iteration
- Push the boundary of how AI systems are evaluated in regulated, adversarial, and high-consequence environments
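As a rough illustration of the quality metrics named above (precision, calibration, abstention), here is a minimal Python sketch. It is not from the posting; the `JudgmentResult` record and its fields are assumptions made purely for the example:

```python
# Minimal sketch of quality metrics for a judgment system.
# Assumptions (not from the posting): each record carries a binary decision,
# a ground-truth label, a stated confidence, and None for an abstention.
from dataclasses import dataclass
from typing import Optional


@dataclass
class JudgmentResult:
    predicted: Optional[bool]   # None means the model abstained
    actual: bool
    confidence: float           # model's stated confidence in [0, 1]


def precision(results: list[JudgmentResult]) -> float:
    """Share of positive calls that were correct (abstentions excluded)."""
    flagged = [r for r in results if r.predicted is True]
    if not flagged:
        return 0.0
    return sum(r.actual for r in flagged) / len(flagged)


def abstention_rate(results: list[JudgmentResult]) -> float:
    """Share of cases where the model declined to decide."""
    return sum(r.predicted is None for r in results) / len(results)


def expected_calibration_error(results: list[JudgmentResult], bins: int = 10) -> float:
    """Gap between stated confidence and observed accuracy, averaged over confidence bins."""
    decided = [r for r in results if r.predicted is not None]
    if not decided:
        return 0.0
    total, ece = len(decided), 0.0
    for b in range(bins):
        lo, hi = b / bins, (b + 1) / bins
        bucket = [r for r in decided
                  if lo <= r.confidence < hi or (b == bins - 1 and r.confidence == 1.0)]
        if not bucket:
            continue
        accuracy = sum(r.predicted == r.actual for r in bucket) / len(bucket)
        avg_conf = sum(r.confidence for r in bucket) / len(bucket)
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece
```

Consistency (agreement across repeated runs or paraphrased inputs) would need multiple samples per case, so it is left out of this sketch.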
What success looks like
- We have a clear, trusted view of how our systems perform across the workflows that matter most
- Our evals predict real-world quality better than generic benchmarks
- We identify meaningful failure modes earlier and improve system behavior faster
- We develop differentiated datasets, benchmarks, and quality loops that compound over time
- Research and engineering teams use your work to make better decisions about what to train, ship, and improve next
- Variance becomes known for rigorous, domain-specific evaluation of judgment systems
Preferred background
- Experience training, evaluating, or improving modern ML systems
- Strong programming skills and comfort working in research-heavy codebases
- Experience building benchmarks, datasets, evaluation pipelines, or quality systems
- Familiarity with LLMs, agent systems, retrieval, post-training, or adjacent areas
- Ability to design clean experiments and draw reliable conclusions from noisy results
- Strong engineering judgment and a bias toward building
- Interest in fraud, risk, trust and safety, compliance, or other regulated and adversarial domains
Our culture
We believe in ownership, urgency, and craft. We enjoy spirited debate, wild ideas, and building things we’re proud of. We’re fully in-person in San Francisco.
What we offer
- Competitive salary and meaningful equity
- Platinum-level medical, dental, and vision insurance
- Unlimited PTO, sick leave, and parental leave
- Up to $100 per month in reimbursement for personal health and wellness expenses
- 401(k) plan
Skills
- Python
- LLM
- Machine Learning
- Evaluation Frameworks
- Data Curation
- AI Agents
- Retrieval-Augmented Generation
- Experiment Design
Possible interview questions
Tests the candidate's ability to go beyond standard benchmarks (MMLU, GSM8K) in domain-specific business cases.
How would you design an evaluation system for an AI agent that conducts fraud investigations, where the cost of an error is very high?
Assesses failure-analysis skills and iterative model improvement.
Tell us about a time you discovered a systematic error in a model's behavior. How did you turn that failure into a reliable test or dataset?
Probes how the candidate works with qualitative data in a quantitative setting.
How do you approach building a gold dataset in a domain where even human experts may disagree?
Checks understanding of agent-specific issues (hallucinations, loops, tool use).
What are the main differences between evaluating autonomous agents and evaluating simple chatbots or classifiers?
Evaluates engineering maturity and the ability to build scalable tooling.
What architectural decisions would you make to build a reusable evaluation framework that different engineering teams could adopt?
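For that last question, one possible shape of an answer, sketched without assuming anything about Variance's actual stack: keep tasks, graders, and the runner separate so teams can plug in their own pieces. All names here (`Task`, `Grader`, `run_eval`, ...) are hypothetical:

```python
# Hypothetical skeleton of a reusable evaluation framework: tasks, graders, and
# a runner are decoupled so different teams can contribute components independently.
from dataclasses import dataclass
from typing import Callable, Iterable, Protocol


@dataclass
class Example:
    input: str
    expected: str


@dataclass
class EvalReport:
    task_name: str
    scores: list[float]

    @property
    def mean_score(self) -> float:
        return sum(self.scores) / len(self.scores) if self.scores else 0.0


class Task(Protocol):
    name: str
    def examples(self) -> Iterable[Example]: ...


class Grader(Protocol):
    def score(self, example: Example, output: str) -> float: ...


class ExactMatchGrader:
    """Simplest grader: 1.0 if the output matches the reference exactly."""
    def score(self, example: Example, output: str) -> float:
        return 1.0 if output.strip() == example.expected.strip() else 0.0


def run_eval(task: Task, model_fn: Callable[[str], str], grader: Grader) -> EvalReport:
    """Run a model callable over a task's examples and aggregate grader scores."""
    scores = [grader.score(ex, model_fn(ex.input)) for ex in task.examples()]
    return EvalReport(task_name=task.name, scores=scores)
```

With this split, a team could register a new fraud-review task and a domain-specific grader (e.g., an LLM-as-judge rubric) without touching the runner or other teams' evals.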