anthropic
Country
USA
Salary
$230,000 – $270,000
Lead · Hybrid · Full-time

Enforcement Operations Lead

AI Assessment

An exceptional opportunity to work at a leading AI lab with very high compensation and meaningful social impact. The position offers a chance to help shape the future of AI safety.


This vacancy is from Quick Offer Global, a list of international companies.

Vacancy Difficulty

AI Assessment

The role demands a high level of responsibility, as it involves managing external vendors, complying with regulatory requirements, and working with potentially distressing content. The candidate must combine operational efficiency with a deep understanding of AI ethics.

Salary Analysis

Median: $210,000
Market: $180,000 – $250,000
AI Assessment

The offered salary ($230k–$270k) sits at the upper end of the market range for Lead-level Trust & Safety positions in the US, especially in tech hubs like San Francisco. This reflects the high importance of the role and the prestige of the company.

Cover Letter

I am writing to express my strong interest in the Enforcement Operations Lead position at Anthropic. With over five years of experience in Trust and Safety operations and a proven track record of managing complex vendor relationships, I am eager to contribute to your mission of building reliable and steerable AI systems. My background in developing scalable SOPs and navigating regulatory reporting requirements aligns perfectly with the needs of your Safeguards team.

In my previous roles, I have successfully transitioned manual enforcement workflows into robust, automated systems, significantly improving operational efficiency and data accuracy. I am particularly drawn to Anthropic's commitment to AI safety and the challenge of building enforcement infrastructure for rapidly evolving product surfaces. I am confident that my detail-oriented approach and experience in cross-functional coordination will allow me to drive meaningful improvements to your model behavior and policy enforcement.


Join Anthropic to shape AI safety standards and manage mission-critical moderation processes at one of the most influential companies in the industry.

Job Description

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the Role

Anthropic's Safeguards team is responsible for enforcing our policies, protecting users, and ensuring our platform is not misused. As a Safeguards Enforcement Analyst focused on Safety Evaluations, you'll play a central role in ensuring our models meet safety and policy standards before and after launch. You'll run and monitor evaluations, drive mitigations when issues surface, coordinate the creation of new evals, and help build the processes and documentation that allow the team to scale this work over time.

This role requires someone who is detail-oriented, comfortable navigating ambiguity, and capable of coordinating across teams to break new ground and drive work to completion. This work is deeply cross-functional — you'll partner closely with policy experts, Safeguards engineering teams, and many other stakeholders throughout the organization to ensure our evaluations are comprehensive and current, and that findings translate into meaningful improvements to model behavior.

Responsibilities

*Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.*

Vendor Operations

  • Own end-to-end management of content moderation vendor relationships, including onboarding, performance management, quality assurance, and capacity planning
  • Partner with internal stakeholders to define vendor scope, set SLAs, and evaluate vendor output quality on an ongoing basis
  • Identify opportunities to scale content review operations efficiently as Anthropic's product surface area grows
  • Develop and maintain standard operating procedures (SOPs) for all vendor-executed review workflows, ensuring consistency and accuracy across content

Regulatory Reporting and Enforcement

  • Partner with Regulatory Operations to ensure that new product features and content surfaces are incorporated into Safeguards reporting workflows as they launch
  • Own enforcement reporting for Regulatory Operations requirements, including maintaining and updating dashboards and tracking mechanisms that provide accurate, timely data to regulatory bodies
  • Produce on-request read-outs of enforcement metrics over specified time ranges to support regulatory reporting obligations
  • Identify and drive improvements to existing reporting infrastructure — including transitioning manual, spreadsheet-based workflows to more robust and scalable solutions
  • Oversee the user-reported content review pipeline, including reviews submitted via the Content Reporting Form across all supported content surfaces
  • Ensure SOPs for content review workflows are kept current as new features and surfaces are added
  • Work collaboratively with the RegOps team to ensure intake processes are prepared to handle emerging report types (e.g., third-party MCP server reports)
  • Maintain a strong understanding of Anthropic's policy framework to provide informed operational guidance and escalation support
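The reporting bullets above mention producing on-request read-outs of enforcement metrics over specified time ranges. A minimal sketch of that idea in Python (the record fields and action names here are illustrative assumptions, not Anthropic's actual schema) might look like:

```python
from collections import Counter
from datetime import date

# Hypothetical enforcement-event records; in practice these would come
# from a database or logging pipeline, not a hard-coded list.
EVENTS = [
    {"date": date(2024, 5, 1), "action": "account_ban"},
    {"date": date(2024, 5, 3), "action": "content_removal"},
    {"date": date(2024, 6, 2), "action": "content_removal"},
]

def enforcement_readout(events, start, end):
    """Count enforcement actions taken within the inclusive range [start, end]."""
    in_range = (e for e in events if start <= e["date"] <= end)
    return dict(Counter(e["action"] for e in in_range))

# Read-out for May 2024: only the first two events fall in range.
print(enforcement_readout(EVENTS, date(2024, 5, 1), date(2024, 5, 31)))
# {'account_ban': 1, 'content_removal': 1}
```

The posting's point about moving from "manual, spreadsheet-based workflows to more robust and scalable solutions" is essentially about replacing ad-hoc filtering like this with versioned, automated pipelines and dashboards.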

Copyright Operations

  • Oversee Safeguards copyright systems, ensuring the right operational processes are in place to handle copyright-related enforcement at scale
  • Partner closely with the Regulatory Operations team to scale copyright operations as Anthropic's products grow, with a particular focus on reducing false positives and improving the accuracy of copyright enforcement workflows
  • Identify gaps in current copyright operational processes and drive cross-functional solutions in collaboration with policy, legal, and engineering stakeholders

You may be a good fit if you:

  • Have 5+ years of experience in trust and safety operations, content moderation program management, or a related field
  • Have managed external vendor or contractor relationships, including performance management and quality assurance
  • Are comfortable working across policy, legal, and operations teams to translate compliance requirements into practical workflows
  • Have experience building or improving operational reporting, dashboards, or enforcement tracking systems
  • Are highly organized, with a track record of maintaining rigorous documentation and SOPs in fast-moving environments
  • Communicate clearly and precisely — both in writing and verbally — across technical and non-technical audiences
  • Are energized by the challenge of building scalable systems in an environment where not everything is already figured out
  • Care deeply about the responsible deployment of AI and the role enforcement operations plays in that mission

Strong candidates may also have:

  • Experience working with regulatory reporting requirements, particularly in the context of online platforms or AI systems
  • Familiarity with content moderation tooling and review workflows at scale
  • Experience with copyright enforcement operations, including false positive mitigation strategies
  • Background in policy enforcement, legal operations, or compliance program management
  • Experience supporting or standing up a new operational function, including writing foundational SOPs and building institutional knowledge from scratch
  • Comfort working with data and metrics to inform operational decisions and surface trends to leadership

The annual compensation range for this role is listed below.

For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.

Annual Salary:

$230,000—$270,000 USD

Logistics

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.

How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process


Skills

  • Data Analysis
  • Content Moderation
  • Project Management
  • Vendor Management
  • Compliance
  • SOP Development
  • Regulatory Reporting
  • Trust and Safety
  • Copyright Law

Possible Interview Questions

Tests experience managing external teams and controlling the quality of their work.

Tell us about your experience managing external moderation vendors: how did you set up SLAs and monitor the quality of their decisions?

Assesses the candidate's ability to operate under ambiguity and rapidly changing requirements.

How do you approach creating SOPs (standard operating procedures) for entirely new product features where the rules are not yet fully defined?

Tests skills in preparing data and reports for regulatory bodies.

Describe your experience preparing data for regulatory reporting. What challenges did you encounter when scaling these processes?

Assesses psychological resilience and understanding of what working with sensitive content entails.

This role involves exposure to objectionable content. How do you organize workflows to minimize the risk of burnout, both your own and your team's?

Tests the ability to balance rights protection against user experience.

How would you approach reducing false positives in a copyright protection system without compromising safety?

