The Fellowship's Purpose
OpenAI has launched a Safety Fellowship program for 2026, inviting researchers from a wide range of disciplines to work on critical problems in artificial intelligence safety and alignment. The program responds to growing concerns about AI risks, potential misuse, and the long-term societal impact of increasingly powerful AI systems. Its overarching goal is to broaden community involvement in addressing the most significant challenges AI presents. Unlike more traditional, highly specialized research programs, the fellowship is intentionally inclusive, welcoming not only AI experts but also professionals from fields such as cybersecurity, the social sciences, and human-computer interaction. Through this interdisciplinary approach, OpenAI aims to bring greater diversity to AI safety research and encourage innovative, multifaceted solutions.
Who Can Join?
The OpenAI Safety Fellowship is open to a broad spectrum of individuals, including engineers, academics, and independent researchers, provided they possess strong analytical skills and can translate conceptual ideas into practical applications. A conventional career trajectory is not a prerequisite, but candidates must demonstrate clear potential for impactful research, and applicants are required to submit letters of recommendation in support of their qualifications. Selected fellows will concentrate on critical AI safety issues such as robust evaluation methodologies, AI system robustness, ethical frameworks, privacy-preserving systems, and misuse prevention. OpenAI emphasizes practical, high-impact research that delivers tangible benefits to the wider AI community, aiming for advances that are both novel and immediately useful.
Support and Resources
Fellows will receive comprehensive support, including dedicated mentorship from experienced professionals in the field and the opportunity to collaborate within a cohort of peers. OpenAI provides a communal research facility in Berkeley, California, and also accommodates remote participation, offering flexibility to fellows worldwide. Beyond this collaborative environment, fellows will receive a monthly stipend to cover living expenses and research costs, along with API credits and access to OpenAI's advanced computing resources, which are essential for complex AI research. At the conclusion of the fellowship, each participant is expected to produce a concrete output, such as a research paper, a novel dataset, or a benchmark tool, contributing directly to the collective knowledge in AI safety.
Application Timeline
The 2026 cohort of the OpenAI Safety Fellowship will run from September 14, 2026, to February 5, 2027. The application window is currently open and closes on May 3, 2026. After applications close, OpenAI will review all submissions and announce the selected candidates on July 25. To apply, visit the official application portal on OpenAI's website and complete the required forms, which include a detailed research proposal. OpenAI encourages applicants to apply promptly and to ensure their proposals clearly articulate their research concepts and their potential to contribute significantly to AI safety.