
ChatGPT is among the most widely used AI models in the world, and many of its users have accounts linked to Gmail. A security flaw, since patched, was identified in the model that could have let hackers access users' Gmail data. According to Radware, a well-known cybersecurity firm, the flaw was found in ChatGPT's Deep Research agent, a tool designed to help users analyse large volumes of information. The flaw could have allowed attackers to siphon sensitive data from both the personal and corporate Gmail accounts of users. Reassuringly, Radware said no instances of exploitation were found before the patch, and OpenAI has confirmed to Radware that the flaw has been fixed.
The flaw was serious enough that Radware's director Pascal Geenes warned, "If a corporate account was compromised, the company wouldn't even know information was leaving."