What's Happening?
A vulnerability has been identified in the open-source deep learning library Keras that could let attackers read arbitrary local files or conduct server-side request forgery (SSRF) attacks. Keras provides a Python API for building artificial neural networks and runs on top of JAX, TensorFlow, and PyTorch. The flaw, tracked as CVE-2025-12058 with a CVSS score of 5.9, arises from the library's StringLookup and IndexLookup preprocessing layers, which accept file paths or URLs as inputs for defining vocabularies. Because those paths were honored even under safe deserialization, loading a malicious model could expose sensitive data or trigger remote network requests. Attackers could exploit this by uploading malicious Keras models to public repositories; a model crafted to read a victim's SSH keys, for instance, could compromise their access to servers, code repositories, and cloud infrastructure.
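To make the vulnerable pattern concrete, here is a minimal sketch of the mechanism the advisory describes, assuming a Keras 3 install with TensorFlow available (StringLookup depends on it): the layer accepts a plain file path as its vocabulary, and that path is exactly the input a crafted model archive could point at a sensitive local file or a URL. The file name and tokens below are illustrative, and this shows the legitimate use, not the exploit itself.

    import keras

    # Write a small, illustrative vocabulary file: one token per line.
    with open("vocab.txt", "w") as f:
        f.write("alpha\nbeta\ngamma\n")

    # StringLookup accepts a plain file path as its vocabulary, so the file
    # is read when the layer is built, including during model deserialization.
    # Per the advisory, a crafted model config could point this argument at a
    # sensitive local file (e.g. an SSH private key) or a URL instead.
    layer = keras.layers.StringLookup(vocabulary="vocab.txt")
    print(layer.get_vocabulary())  # ['[UNK]', 'alpha', 'beta', 'gamma']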
Why Is It Important?
The discovery of this vulnerability in Keras is significant because it threatens both AI models and the infrastructure they run on. Organizations using Keras for AI development could face serious breaches, including unauthorized access to servers and cloud resources, leading to data theft, service disruption, and financial loss. Attackers who harvest credentials this way could pivot from passive file theft to active intrusion, cloning private repositories and injecting malicious code into CI/CD pipelines, which underscores the potential for widespread impact across industries that rely on AI. Resolving the vulnerability is crucial to maintaining the integrity and security of AI systems and protecting sensitive information.
What's Next?
The vulnerability has been addressed in Keras version 3.11.4, which embeds vocabulary files directly into the .keras archive and restricts the loading of arbitrary vocabulary files when safe_mode is enabled. Organizations using Keras are advised to update to version 3.11.4 or later to mitigate the risk of exploitation. Continuous monitoring and security assessments of AI models and their dependencies are recommended to prevent similar vulnerabilities in the future. Stakeholders in the AI and cybersecurity communities may collaborate to enhance security protocols and safeguard against emerging threats.
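As a minimal sketch of the recommended posture, assuming a Keras 3 environment, the snippet below upgrades to the patched release and keeps safe_mode enabled when loading a model from an untrusted source; the model filename is hypothetical.

    # Upgrade first: pip install --upgrade "keras>=3.11.4"
    import keras

    print(keras.__version__)  # confirm the patched release is installed

    # safe_mode=True (the default) rejects unsafe deserialization; on the
    # patched release it also blocks models from loading arbitrary
    # vocabulary files. The filename here is hypothetical.
    model = keras.saving.load_model("untrusted_model.keras", safe_mode=True)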
Beyond the Headlines
This vulnerability underscores the importance of secure software development practices and the need for robust security measures in AI technologies. As AI becomes increasingly integrated into various sectors, ensuring the security of AI models and their infrastructure is paramount. The incident may prompt discussions on the ethical implications of AI security and the responsibilities of developers in safeguarding user data and system integrity.