Application deadline: We are actively conducting interviews and aim to fill this role as soon as we find a suitable candidate.
ABOUT THE OPPORTUNITY
We want to develop a “Science of Scheming”. The goal is ambitious and we’re looking for Research Scientists and Research Engineers who are excited to build a new hard science from the ground up.
YOU WILL HAVE THE OPPORTUNITY TO
- Collaborate with leading AI developers. We partner with multiple labs, giving you access to a breadth of models that no single AI lab could offer. Through long-term research collaborations, your work directly impacts how the most capable AI systems are built and deployed.
- Deeply study the RL dynamics that lead to the emergence of reward seeking, evaluation awareness, or misaligned preferences. Design and train model organisms, and scale your insights to frontier systems.
- Work towards “Scaling laws of scheming”. Build the empirical foundations to predict how scheming risks evolve as models scale in capability.
- Develop novel and ambitious evaluation techniques that have a chance of scaling to highly evaluation aware models.
- Deep dive into AI cognition. Discover patterns in the reasoning processes of frontier AI systems that no one else has ever observed before.
Note: We are not hiring for interpretability roles.
KEY REQUIREMENTS
A diverse range of skill sets will be required to drive our research agenda forward and we don’t expect any single candidate to fulfill all the characteristics below. That being said, a successful candidate likely displays excellence at one or several of the following:
- Fast-paced empirical research: You can design and execute experiments. You always strive to speed up iteration cycles and relentlessly drive progress towards the next empirical milestone.
- Conceptual insights about scheming: You have deeply thought about the problem of AI scheming and are familiar with all the relevant literature. You are able to turn vague and undefined concepts into concrete and insightful experiment proposals.
- Software engineering skills: Strong software engineering skills correlate highly with effective execution, even in an era of AI agents. Our entire stack uses Python.
- Intense interest in AI progress: You always stay up to date on the latest model releases, and continuously tinker with new and creative AI workflows to speed up your work. You are fascinated by AI cognition and actively spend time trying to understand how these systems think.
- Experience RL-training LLMs: You have hands-on experience in training LLMs via reinforcement learning. You have encountered and resolved countless painful issues from GPU failures to debugging learning instabilities.
- Strong analytical skills: You bring rigorous quantitative chops from working in fields such as LLM scaling laws, statistical physics, dynamical systems, or applied statistics. You're comfortable building mathematical models of empirical phenomena and know how to extract signal from noisy data.
We want to emphasize that people who feel they don't fulfill all of these characteristics, but nonetheless think they would be a good fit for the position, are strongly encouraged to apply. We believe that excellent candidates can come from a variety of backgrounds and are excited to give you opportunities to shine. We don't require a formal background or industry experience and welcome self-taught candidates.
ABOUT APOLLO RESEARCH
The rapid rise in AI capabilities offers tremendous opportunities but also presents significant risks. At Apollo Research, we're primarily concerned with risks from Loss of Control, i.e. risks coming from the model itself rather than from, e.g., humans misusing the AI. We're particularly concerned with deceptive alignment / scheming, a phenomenon where a model appears to be aligned but is, in fact, misaligned and capable of evading human oversight.
We work on the detection of scheming (e.g., building evaluations and novel evaluation techniques), the science of scheming (e.g., model organisms and the study of scaling trends), and scheming mitigations (e.g., control). We work closely with multiple frontier AI companies, e.g. to test their models before deployment and to collaborate on fundamental research.
At Apollo, we aim for a culture that emphasizes truth-seeking, being goal-oriented, giving and receiving constructive feedback, and being friendly and helpful. If you’re interested in more details about what it’s like working at Apollo, you can find more information here.
ABOUT THE TEAM
The current evals team consists of Jérémy Scheurer, Alex Meinke, Bronson Schoen, Felix Hofstätter, Axel Højmark, Teun van der Weij, Alex Lloyd, and Mia Hopman. Alex Meinke coordinates the research agenda with guidance from Marius Hobbhahn, though team members lead individual projects. You will mostly work with the evals team as well as our team of software engineers, but you will likely sometimes interact with the governance team to translate technical knowledge into concrete recommendations. You can find our full team here.
Equality Statement: Apollo Research is an Equal Opportunity Employer. We value diversity and are committed to providing equal opportunities to all, regardless of age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, or sexual orientation.
How to apply: Please complete the application form with your CV. A cover letter is optional. Please also feel free to share links to relevant work samples.
About the interview process: Our multi-stage process includes a screening interview, a take-home test (approx. 2.5 hours), 3 technical interviews, and a final interview with Marius (CEO). The technical interviews will be closely related to tasks the candidate would do on the job. There are no LeetCode-style general coding interviews. If you want to prepare for the interviews, we suggest working on hands-on LLM evals projects (e.g. as suggested in our starter guide), such as building LM agent evaluations in Inspect.
Your Privacy and Fairness in Our Recruitment Process: We are committed to protecting your data, ensuring fairness, and adhering to workplace fairness principles in our recruitment process. To enhance hiring efficiency, we use AI-powered tools to assist with tasks such as resume screening. These tools are designed and deployed in compliance with internationally recognized AI governance frameworks. Your personal data is handled securely and transparently. We adopt a human-centred approach: all resumes are screened by a human and final hiring decisions are made by our team. If you have questions about how your data is processed or wish to report concerns about fairness, please contact us at [email protected].