What's Happening?
OpenAI has announced a new safety-focused fellowship program, offering external researchers up to $15,000 in compute resources per month. The fellowship, running from September 2026 to February 2027, aims
to support independent research on AI safety and alignment. This initiative comes shortly after a New Yorker investigation raised questions about OpenAI's commitment to AI safety, following the dissolution of its superalignment and AGI-readiness teams. The fellowship will provide participants with a stipend and mentorship, focusing on areas such as safety evaluation, robustness, and privacy-preserving methods.
Why Is It Important?
The introduction of the OpenAI Safety Fellowship highlights the company's effort to address concerns about AI safety and alignment, especially in light of recent criticisms. By funding external researchers, OpenAI aims to foster independent exploration of critical safety issues, potentially leading to advancements in AI governance and risk mitigation. This move could influence other tech companies to prioritize safety in AI development, impacting the broader AI research community and public trust in AI technologies.