Build the workforce that drives better governance across AI development and deployment.
We're a nonprofit startup building the workforce that protects humanity.
AI will reshape civilisation within the next decade. If humanity navigates this transition safely, we could build a society where all our needs are met, and poverty, disease, and famine become history. However, AI could also concentrate power in authoritarian regimes, empower terrorists with weapons of mass destruction, and disempower most humans through mass job losses. Which future we get depends on what we do right now.
Steering this technology wisely requires expertise and action that don't currently exist at anywhere near the required scale. In an age where companies are racing to train AI, we're betting on training people who can use their skills and energy to make a positive difference.
Since 2022, we've trained over 5,000 people across technical AI safety, AI policy, and biosecurity. Around 1,000 graduates now work in high-impact jobs at organisations like Anthropic, the UK AI Security Institute, and Google DeepMind.
We're based in London with 5 full-time team members, with plans to expand to Washington DC and the Bay Area. We've raised $10M from Open Philanthropy.
But we're just getting started. By 2027, we plan to:
Train ~100,000 people annually through our flagship Future of AI course (~300 people daily)
Build out the course portfolio to cover all the defensive layers in our "skill tree", spanning every aspect of AI safety, from technical research to policy to societal resilience
Hire 10+ specialists, each owning a part of the tree
Raise $25M+ to expand our impact
Your mission: Build the workforce that drives better governance across AI development and deployment.
You'll reimagine how we train people to shape AI policy — whether that's through intensive bootcamps, longer programmes, or entirely new formats — based on what the field needs.
You'll become a leading expert in AI governance within 1-2 years by focusing on:
Strategy development (20%) - Map out work being done in the field, identify bottlenecks, prioritise which interventions to pursue
Talent sourcing (20%) - Recruit and evaluate candidates through application systems
Training design (30%) - Design courses to train and onboard high-potential people, working closely with world-leading experts, and run a new iteration every 1-4 weeks
Placement (30%) - Connect course graduates to impactful opportunities, cultivate relationships with hiring managers, track graduate impact
You'll own everything from identifying gaps in AI governance to training people who can fill those gaps and placing them in positions where they can be impactful.
Our past course specialists have leveraged their roles to become leaders in their domains.
Luke Drago began as a participant in our AI governance course. At BlueDot, he redesigned the curriculum in his first month, then compressed course delivery from 12 weeks to 5 weeks. He worked closely with experts at MIT, Oxford and OpenAI, and recruited a team of 20-30 facilitators to lead discussions, including former senior diplomats and technical leaders from major AI companies. One year later, he has co-authored The Intelligence Curse (exploring AGI economic transitions), appeared on major platforms like Bankless, and written for Time magazine.
Adam Jones led BlueDot's AI Alignment course. He's written several influential resources on topics like AI alignment and AI safety strategy, which thousands of people have read. He advised several participants on starting their own organisations and directly connected dozens of promising graduates to impactful roles. He now applies this expertise at Anthropic.
You might be a great fit if you:
Have policymaking experience: You've worked directly within government departments, regulatory bodies, or closely with policymakers in the UK, US, EU, or China on AI-related policy.
Have strong AI governance knowledge: You're familiar with key AI governance proposals and can discuss trade-offs between different approaches. We particularly value deep expertise in one specific area over broad, shallow knowledge.
Have high agency: You take ownership of outcomes and find creative ways to overcome obstacles without waiting for perfect conditions.
Enjoy user-centred design: You like conducting user research, prototyping, and iterating to build highly effective products.
Have facilitation experience: You've designed and facilitated discussions or workshops.
Are mission-driven: You care a huge amount about protecting humanity and making the future awesome.
We encourage speculative applications; we expect many strong candidates will not meet all of the criteria listed here.
Massive impact: Opportunity to shape how humanity coordinates its response to AGI risks
Freedom and autonomy: Our expense policy is "act in BlueDot's best interest", and we offer unlimited PTO and minimal bureaucracy
Leadership accelerator: Become a recognised leader in your field within 1-2 years, with access to leading experts and networks
£70-100k salary: Competitive compensation that reflects the stakes, not nonprofit rates
Generous benefits: 10% employer pension contribution and private healthcare
London-based role: We have a strong preference for working in person in our London office. UK visa sponsorship available. Some remote exceptions possible but shouldn't be assumed.
If you're energised by this challenge and want to ensure AI benefits rather than destroys humanity, we want to hear from you.
Applying takes 20-30 minutes, and we encourage you to apply as soon as possible.
We'll be evaluating candidates on a rolling basis and expect to make our first offers in early August 2025. We're looking to hire 1-3 people for this role, each working independently on different aspects of AI governance.
Interested in a different part of our AI safety skill tree? If you'd like to become a specialist for technical AI safety, AGI strategy, or another area, register your interest here.
Our application process:
Initial application
4-hour work test (paid)
45-minute interview
Work trial
If you have any questions about the role, email [email protected]