Cybersecurity Firm Identifies Vulnerability in AI Assistant ChatGPT, Raising Trust Concerns
Cybersecurity firm Check Point Software Technologies has identified a vulnerability in the AI assistant ChatGPT that could allow unauthorized data extraction. The flaw, found in the sandboxed runtime the assistant uses for data analysis and Python-based tasks, lets attackers use DNS tunneling to exfiltrate information from the otherwise isolated environment without detection.

The discovery highlights potential risks for users who rely on ChatGPT for sensitive tasks such as drafting agreements, analyzing medical data, and writing personal documents. OpenAI, the developer of ChatGPT, has acknowledged the issue and confirmed that a fix was deployed in February 2026.

Despite the resolution, the incident underscores the importance of trust and security in AI systems, which are increasingly used for critical and private functions.
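To see why DNS tunneling is hard to detect, consider how the technique works in general: data is encoded into the subdomain labels of DNS lookups for an attacker-controlled domain, so the secret rides along inside ordinary-looking name-resolution traffic. The sketch below illustrates only the encoding and decoding steps; the domain name is a hypothetical placeholder, and no network traffic is generated.

```python
import base64

# Hypothetical attacker-controlled domain, for illustration only.
EXFIL_DOMAIN = "exfil.example.com"
MAX_LABEL = 63  # DNS limits each label to 63 characters

def encode_queries(secret: bytes) -> list[str]:
    """Encode data as DNS hostnames: each chunk becomes a subdomain label."""
    # Base32 keeps labels within DNS's case-insensitive alphanumeric rules.
    payload = base64.b32encode(secret).decode().rstrip("=").lower()
    chunks = [payload[i:i + MAX_LABEL]
              for i in range(0, len(payload), MAX_LABEL)]
    # In a real attack these names would be resolved; the attacker's DNS
    # server logs the incoming queries and reassembles the data.
    return [f"{i}.{chunk}.{EXFIL_DOMAIN}" for i, chunk in enumerate(chunks)]

def decode_queries(queries: list[str]) -> bytes:
    """Reassemble the original data from the observed query names."""
    ordered = sorted(queries, key=lambda q: int(q.split(".")[0]))
    payload = "".join(q.split(".")[1] for q in ordered).upper()
    payload += "=" * (-len(payload) % 8)  # restore base32 padding
    return base64.b32decode(payload)
```

Because each query looks like a routine lookup, perimeter defenses that block outbound web traffic but permit DNS resolution will let the encoded data pass, which is why sandboxes typically need to restrict or monitor DNS itself.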