New Models Launched
OpenAI has introduced two new AI models, GPT-5.4 mini and GPT-5.4 nano, engineered for high-volume computational tasks.
The goal of these releases is strong performance at significantly lower operating cost. According to the company's official statement, the scaled-down versions retain many of the advanced capabilities of the larger GPT-5.4 system while being optimized for environments where rapid processing and extensive scalability are paramount. GPT-5.4 mini in particular improves markedly on its predecessor in code generation, logical deduction, and image interpretation, while running at more than double the previous speed. On certain benchmarks its results closely approach those of the full GPT-5.4 system, a notable efficiency gain for a more compact model.
Tailored for Speed & Scale
GPT-5.4 nano is the most compact model in the new lineup, built for applications that demand rapid throughput at minimal expense. It is well suited to tasks such as categorizing data, extracting specific information from text, and performing straightforward coding operations; the company positions it for scenarios where both per-interaction cost and response speed are critical. Both models target applications where instantaneous response times directly shape the user experience, including sophisticated coding assistants, automated workflow systems, and platforms that analyze visual data such as images or screenshots.
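To make the classification use case concrete, here is a minimal sketch of how a bulk job might build one request per item. This only constructs a Chat Completions-style request body and makes no network call; the model identifier `gpt-5.4-nano`, the helper function, and the payload shape are assumptions for illustration, not a confirmed API contract.

```python
def build_classification_request(text, labels, model="gpt-5.4-nano"):
    """Build a chat-style payload asking the model to assign exactly one
    of `labels` to `text`. Constructs the request body only; sending it
    requires an API client and credentials."""
    label_list = ", ".join(labels)
    return {
        "model": model,  # hypothetical model id from the announcement
        "messages": [
            {
                "role": "system",
                "content": (
                    f"Classify the user's text as exactly one of: {label_list}. "
                    "Reply with the label only."
                ),
            },
            {"role": "user", "content": text},
        ],
        # A tight output cap keeps per-call cost minimal for bulk jobs,
        # which is where a nano-class model is meant to shine.
        "max_tokens": 5,
        "temperature": 0,
    }

payload = build_classification_request(
    "The checkout page crashes when I click Pay.",
    ["bug_report", "feature_request", "question"],
)
```

Keeping temperature at 0 and capping output tokens is a common pattern for high-volume classification, since the response only needs to be a single label.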
Coding & Multimodal Prowess
GPT-5.4 mini performs well in complex coding environments, including editing and debugging existing code and navigating large codebases. It is also deployed in layered setups where larger AI models handle high-level strategy or planning while the mini model executes specific sub-tasks. A significant advancement is support for multimodal inputs: the model can process information from both text and visual sources such as images, which lets it interpret screenshots and assist with computer-based tasks and user-interface interactions. Internal evaluations show it outperforms earlier iterations on system-interaction and UI-understanding tasks.
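A screenshot-plus-question request like the ones described above is typically sent as a mixed text-and-image message. The sketch below builds such a message using the data-URL image format common in multimodal chat APIs; the exact content-part schema is an assumption for illustration, and no network call is made.

```python
import base64

def build_screenshot_message(question, png_bytes):
    """Pair a text question with a PNG screenshot in one user message,
    encoding the image as a base64 data URL. Constructs the message
    only; the surrounding request and API client are not shown."""
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            # Assumed content-part shape for image input.
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{b64}"},
            },
        ],
    }

msg = build_screenshot_message("Which button is highlighted?", b"\x89PNG...")
```

Embedding the image as a data URL avoids hosting the screenshot separately, which suits UI-automation loops that capture and query many screens in quick succession.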
Availability & Pricing
GPT-5.4 mini is available through OpenAI's API, Codex, and ChatGPT. It supports tool integration, web browsing, and file management, and offers a 400,000-token context window, allowing it to retain a large amount of information within a single interaction. Pricing is $0.75 per million input tokens and $4.50 per million output tokens. GPT-5.4 nano, with more focused functionality, is priced lower still: $0.20 per million input tokens and $1.25 per million output tokens via the API, making it an exceptionally cost-effective option for large-scale, repetitive tasks.
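The per-million-token rates above translate into per-request costs as follows; a quick sketch, using only the prices quoted in this article (the model identifiers are assumptions):

```python
# Per-million-token rates in USD, as quoted in the announcement.
PRICES = {
    "gpt-5.4-mini": {"input": 0.75, "output": 4.50},
    "gpt-5.4-nano": {"input": 0.20, "output": 1.25},
}

def request_cost(model, input_tokens, output_tokens):
    """Return the USD cost of a single request at the published rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a request with 2,000 input tokens and 500 output tokens.
mini_cost = request_cost("gpt-5.4-mini", 2000, 500)  # $0.00375
nano_cost = request_cost("gpt-5.4-nano", 2000, 500)  # $0.001025
```

At these rates a million such requests would cost about $3,750 on mini versus $1,025 on nano, which illustrates why the smaller model is pitched at large-scale, repetitive workloads.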