Introducing Mini & Nano
OpenAI has released its most capable small models to date: GPT-5.4 mini and GPT-5.4 nano. GPT-5.4 mini is available to all ChatGPT users and delivers performance remarkably close to the full GPT-5.4; the nano variant is available only through the OpenAI API. The release targets the growing demand for capable yet efficient AI models across a range of applications.
Enhanced Performance Leap
The new GPT-5.4 mini is more than twice as fast as its predecessor, GPT-5 mini, and rivals the full-sized GPT-5.4 on key benchmarks such as SWE-Bench Pro and OSWorld-Verified. Those gains carry across a wide range of tasks, including coding, complex reasoning, multimodal understanding of inputs such as images, and the use of external tools and computer interfaces.
Designed for Speed
OpenAI engineered both models for workloads where response time is paramount: coding assistants that need immediate feedback, subagents that execute quick ancillary tasks, systems that capture and interpret screenshots in real time, and multimodal applications that analyze visual input on the fly. In these scenarios, speed and reliability matter more than raw model size.
Cost-Effective Solutions
GPT-5.4 mini and nano offer a cost-effective alternative to the full GPT-5.4. They handle demanding coding tasks with minimal delay, including precise edits, navigating large codebases, generating user interfaces, and running fast debugging loops, and they are priced considerably lower through OpenAI's API. GPT-5.4 mini is also available in Codex, broadening its accessibility for developers.
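For latency-sensitive work, one way to use the cheaper model is to route requests to it by default and reserve the full model for harder tasks. The sketch below builds Chat Completions parameters for the official OpenAI Python SDK; the model identifiers are assumptions taken from this article's naming and should be checked against the current model list before use.

```python
# Minimal sketch: choose the smaller model when low latency matters more
# than maximum capability. Model names ("gpt-5.4-mini", "gpt-5.4") are
# assumptions based on this article, not confirmed API identifiers.

def build_request(prompt: str, low_latency: bool = True) -> dict:
    """Build Chat Completions parameters, defaulting to the mini model."""
    model = "gpt-5.4-mini" if low_latency else "gpt-5.4"  # assumed names
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

params = build_request("Summarize this diff in one sentence.")
# With the official SDK, this payload would be sent as:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   response = client.chat.completions.create(**params)
```

Keeping model selection in one helper like this makes it easy to A/B the mini and full models on real traffic before committing to the cheaper tier.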