In a rare show of industry collaboration, OpenAI, Anthropic, and Google DeepMind jointly announced a $200 million initiative to fund 500 AI safety researcher positions over the next two years. The program will place researchers at universities, independent safety organizations, and within the companies themselves.
The initiative addresses a well-documented bottleneck: there are approximately 400 full-time AI safety researchers globally, a number that has not kept pace with the explosive growth in AI capabilities. By comparison, an estimated 300,000 AI/ML engineers are focused on capabilities development, roughly 750 engineers for every safety researcher. That imbalance has become a growing concern among policymakers and the AI research community.
Positions will span several specializations, including alignment research, interpretability, robustness testing, evaluation methodology, and AI governance. Compensation will be competitive with industry roles, a deliberate choice to keep safety research from being perceived as a career sacrifice relative to capabilities work.
For technically inclined job seekers, this represents an unusual opportunity. The initiative includes 50 positions specifically reserved for career changers from adjacent fields: philosophy, cognitive science, mathematics, and cybersecurity. These roles include structured mentorship and a two-year training pathway designed to bring domain expertise to bear on safety problems that benefit from diverse perspectives.
The participating universities include MIT, Stanford, Oxford, UC Berkeley, and Tsinghua University in Beijing. Independent organizations receiving funding include the Center for AI Safety, Redwood Research, and the Alignment Research Center. Applications for the first cohort open on May 15, 2026.