What's Happening?
A malicious AI model on Hugging Face, falsely presented as an official OpenAI release, was downloaded over 244,000 times. The model, which reached the top trending position on the platform, ships with a Python script that disables SSL certificate verification and executes attacker-supplied commands via PowerShell, giving it a foothold on any system where the script is run. The README of the fake model diverged from the legitimate project it impersonated, instructing users to execute these harmful scripts. The incident underscores the growing risk of downloading AI models from unverified sources: public AI model registries have become a new software supply-chain attack surface. Researchers have previously found malicious code hidden in Pickle-serialized model files on Hugging Face, highlighting the need for better oversight and scanning tooling in the AI supply chain.
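The Pickle risk mentioned above comes from the fact that unpickling can execute arbitrary code via opcodes such as GLOBAL and REDUCE. As a minimal illustration (not a description of any specific scanner used by researchers), a file can be statically inspected for those opcodes with the standard-library `pickletools` module before it is ever loaded:

```python
import io
import pickle
import pickletools

# Opcodes that can cause code execution when a pickle stream is loaded.
UNSAFE_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def find_unsafe_opcodes(data: bytes) -> list[str]:
    """Statically scan a pickle stream and return any dangerous opcodes found,
    without ever calling pickle.loads() on untrusted bytes."""
    return [
        opcode.name
        for opcode, arg, pos in pickletools.genops(io.BytesIO(data))
        if opcode.name in UNSAFE_OPCODES
    ]

# A benign pickle of plain data triggers no findings...
safe = pickle.dumps({"weights": [0.1, 0.2]})
print(find_unsafe_opcodes(safe))  # []

# ...while an object that hijacks __reduce__ (the same hook real payloads
# abuse to call os.system or subprocess) is flagged before loading.
class Payload:
    def __reduce__(self):
        # Hypothetical, harmless stand-in for a malicious command.
        return (print, ("side effect on load",))

print(find_unsafe_opcodes(pickle.dumps(Payload())))
```

Static opcode scanning of this kind is only a heuristic; safer-by-construction formats such as safetensors avoid the problem entirely by not supporting executable payloads.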
Why It's Important?
The incident exposes significant security gaps in the AI model distribution ecosystem, particularly for enterprises that integrate open-source models into their environments. As developers and data scientists increasingly pull models from public registries, malicious actors who compromise those platforms gain a path into corporate systems, including source code and cloud credentials held on developer machines and CI runners. The event is a cautionary tale: organizations should verify the publisher and integrity of third-party AI models, and scan their contents, before incorporating them, to prevent breaches and data compromise.
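One concrete verification measure the above recommends is integrity pinning: record a cryptographic hash of a model artifact when it is first reviewed, and refuse to load any file that no longer matches. A minimal sketch, assuming the pinned hash is distributed out of band (the file and function names here are illustrative):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-gigabyte model weights
    never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Return True only if the file on disk matches the pinned hash."""
    return sha256_of(path) == expected_sha256.lower()

# Pin the hash at review time, check it at load time.
model_file = Path("model.bin")
model_file.write_bytes(b"example weights")  # stand-in for downloaded weights
pinned = sha256_of(model_file)
print(verify_artifact(model_file, pinned))  # True
```

Hash pinning catches silent tampering or substitution of a known-good artifact; it does not, by itself, establish that the original publisher was legitimate, which is why it belongs alongside publisher verification and content scanning.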