OpenAI has announced a new program inviting external researchers, engineers, and practitioners to pursue research on essential safety precautions and the alignment of advanced AI systems. The program will run from September 14, 2026, through February 5, 2027. The company says it is looking for applicants to work on safety questions that matter for both existing and future systems.
Researchers will be asked to focus tightly on areas such as safety evaluations, ethics and robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains.
The company has further expressed a keen interest in work that is empirically grounded, technically strong, and relevant to the broader research community.
Researchers will work closely with OpenAI mentors and engage with peers. Workspace will be available in Berkeley, alongside other fellows at Constellation, though fellows may also work remotely. While OpenAI is willing to support researchers in their endeavours, fellows are expected to produce substantial research output by the end of the program, such as a paper, benchmark, or dataset. The fellowship also includes a monthly stipend, compute support, and mentorship.
OpenAI is accepting applicants from a wide range of backgrounds, including computer science, social science, cybersecurity, privacy, human-computer interaction (HCI), and other related fields. It has also clarified that research ability, technical judgment, and execution will be prioritised over specific credentials.













