
Safeguards Policy Analyst, Fraud & Scams

Anthropic
Remote

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the Role

As a Safeguards Policy Analyst focused on Fraud & Scams, you will be responsible for designing, building, and executing enforcement workflows that detect and mitigate fraud and scam-related harms on Anthropic's products. You will serve as the subject matter expert on fraud typologies, scam ecosystems, and the threat actors who perpetrate them — translating that expertise into durable and scalable policies. 

This role sits within the Integrity & Authenticity (I&A) team. You will function as a policy owner while working closely with threat investigation and enforcement teams. You will also develop the guidelines that power classifiers and serve as our point of contact for cross-functional workstreams. No two days will look the same.

Important context: In this position you may be exposed to and engage with explicit content spanning a range of topics, including material of a financial, psychological, or otherwise disturbing nature, such as detailed fraud schemes and scam content.

Responsibilities: 

Policy Design & Ownership

  • Draft, maintain, and iterate on Fraud & Scams policies governing Anthropic's products and APIs, with clarity for both model enforcement and human reviewers
  • Conduct regular structured policy reviews to identify gaps, ambiguities, and coverage failures, and lead the process to close them
  • Develop detailed threat models for fraud and scam vectors — including social engineering, financial fraud, impersonation scams, phishing, and AI-enabled fraud — and translate these into enforceable policy language
  • Stay current on the fraud and scam landscape, including emerging typologies, regulatory shifts, and threat actor tactics, techniques, and procedures (TTPs)

Enforcement Strategy & Operations

  • Design and architect automated enforcement systems and human review workflows that scale effectively while maintaining high precision and recall
  • Review flagged content to drive enforcement decisions and surface policy improvements grounded in real-world cases
  • Define and manage precision/recall tradeoffs in enforcement, working with data science teams to continuously tune classifiers and detection signals (see the illustrative sketch after this list)
  • Build and maintain an effective feedback loop between threat intelligence, policy, and enforcement operations to ensure timely response to novel and evolving fraud threats
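
To make the precision/recall tradeoff concrete, here is a minimal illustrative sketch; the classifier scores, labels, and thresholds below are invented for illustration and are not drawn from Anthropic's systems.

```python
# Hypothetical sketch: sweeping an enforcement threshold over classifier
# scores to see how precision and recall trade off. All data is invented.

def precision_recall(scores, labels, threshold):
    """Precision and recall when flagging items with score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    precision = tp / (tp + fp) if tp + fp else 1.0  # no flags -> vacuously precise
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Invented fraud-classifier scores and ground-truth labels (True = confirmed scam).
scores = [0.95, 0.80, 0.65, 0.40, 0.30, 0.10]
labels = [True, True, False, True, False, False]

for threshold in (0.25, 0.50, 0.75):
    p, r = precision_recall(scores, labels, threshold)
    print(f"threshold={threshold:.2f}  precision={p:.2f}  recall={r:.2f}")
```

Raising the threshold flags less content (higher precision, fewer false enforcement actions) at the cost of missing more violations (lower recall); this role involves choosing and defending that operating point.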

Technical & Cross-functional Collaboration

  • Serve as the primary policy point of contact for ML and Engineering teams developing fraud detection classifiers, working to translate policy intent into technical artifacts and training signals (a hypothetical sketch follows this list)
  • Partner with Product, Engineering, and Data Science teams to optimize detection models, automated enforcement pipelines, and tooling for fraud-specific policy violations
  • Collaborate with external researchers, law enforcement liaisons, and fraud SMEs to gather feedback on policy effectiveness and emerging risk areas
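
As a hypothetical sketch of what translating policy intent into training signals can look like: the taxonomy, definitions, and labels below are invented for illustration and are not Anthropic's actual policy artifacts.

```python
# Hypothetical sketch: a machine-readable labeling taxonomy derived from
# written policy, used to generate training examples for a classifier.
# All category names, definitions, and labels are invented.

FRAUD_POLICY_TAXONOMY = {
    "phishing": {
        "definition": "Content soliciting credentials or payment details under false pretenses.",
        "label": "VIOLATION",
    },
    "romance_scam": {
        "definition": "Deceptive relationship-building aimed at extracting money from a target.",
        "label": "VIOLATION",
    },
    "fraud_education": {
        "definition": "Discussion of scam tactics for awareness or research, with no operational uplift.",
        "label": "ALLOWED",
    },
}

def training_example(text: str, category: str) -> dict:
    """Pair raw text with the label a reviewer would assign under the taxonomy."""
    return {"text": text, "label": FRAUD_POLICY_TAXONOMY[category]["label"]}

print(training_example("Verify your account at this link to avoid suspension.", "phishing"))
```

Keeping definitions and enforcement labels in a single artifact helps policy authors, human reviewers, and model training stay consistent as the policy iterates.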

Stakeholder Alignment & Education

  • Educate and align internal stakeholders — including Legal, Public Policy, and Go-to-Market teams — around Anthropic's fraud and scams policies and enforcement approach
  • Serve as an internal resource on fraud risk, briefing leadership and cross-functional partners as threats evolve
  • Contribute to Anthropic's external communications and policy documentation related to fraud and platform integrity where relevant

You may be a good fit if you have experience:

  • Working as a Trust & Safety professional with a focused background in fraud, scams, or financial crime — particularly in a tech platform or AI context
  • Writing, iterating on, and managing operational policies for fraud or abuse prevention at scale
  • Threat modeling for fraud and scam ecosystems, including social engineering, romance scams, investment fraud, impersonation, and phishing
  • Identifying and articulating common fraud tactics (e.g., pig butchering, advance fee fraud, account takeover facilitation) and how they manifest on AI platforms
  • Using SQL or other data analysis tools to identify trends, measure enforcement efficacy, and surface policy gaps (an illustrative query follows this list)
  • Collaborating cross-functionally with Engineering, ML, Legal, and Policy teams on safety initiatives
  • Working with generative AI products, including writing effective prompts for content review and enforcement use cases
  • Thriving in a fast-paced, ambiguous environment where priorities shift and the threat landscape evolves rapidly
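
As an illustration of the SQL bullet above, here is a hypothetical sketch of measuring enforcement precision over time; the table, columns, and data are invented for the example, and SQLite via Python is used only to keep it self-contained.

```python
# Hypothetical sketch: weekly precision of a fraud policy's automated flags,
# computed from a human-review log. Schema and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE flag_reviews (
    flagged_at TEXT,    -- ISO date the item was flagged
    policy     TEXT,    -- e.g. 'fraud_scams'
    upheld     INTEGER  -- 1 if human review confirmed the violation
);
INSERT INTO flag_reviews VALUES
    ('2025-01-06', 'fraud_scams', 1),
    ('2025-01-07', 'fraud_scams', 0),
    ('2025-01-14', 'fraud_scams', 1);
""")

# Weekly precision = upheld flags / total flags for the policy.
query = """
SELECT strftime('%Y-%W', flagged_at) AS week,
       AVG(upheld)                   AS precision
FROM flag_reviews
WHERE policy = 'fraud_scams'
GROUP BY week
ORDER BY week;
"""
for week, precision in conn.execute(query):
    print(week, round(precision, 2))
```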

Preferred Qualifications: 

  • Experience at a major technology platform, financial institution, or fraud intelligence firm in a policy, operations, or investigative capacity
  • Familiarity with the generative AI risk landscape and how large language models can be exploited for fraud and social engineering
  • Background in threat intelligence, financial crimes compliance (AML/KYC), or law enforcement focused on cyber-enabled fraud
  • Demonstrated ability to develop and communicate policy positions to diverse stakeholders including legal counsel and executive leadership

The annual compensation range for this role is listed below. 

For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.

Annual Salary:
$245,000 - $285,000 USD

Logistics

Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience

Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience

Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification; not all strong candidates will. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We believe this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.

How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.