What's Happening?
A vulnerability has been identified in the open-source library Keras that could allow attackers to read arbitrary local files or conduct server-side request forgery (SSRF) attacks. Keras, a deep learning API, provides a Python interface for artificial neural networks and is used to build AI models that run on JAX, TensorFlow, and PyTorch. The flaw, tracked as CVE-2025-12058 with a CVSS score of 5.9, arises from the library's StringLookup and IndexLookup preprocessing layers, which accept file paths or URLs as vocabulary sources. Because those paths are resolved during model deserialization, a crafted model can read local files or trigger outbound network requests on the machine that loads it, potentially exposing sensitive data.

In a practical attack, an adversary uploads a malicious Keras model to a public repository with its vocabulary path pointed at a sensitive file such as an SSH private key. When a victim downloads and loads the model, the file's contents are read line by line into the model's vocabulary, from which the attacker can later retrieve them. The vulnerability was resolved in Keras version 3.11.4, which embeds vocabulary files directly into the Keras archive and disallows arbitrary vocabulary file loading when safe_mode is enabled.
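The vulnerable pattern is visible in the layer API itself. The sketch below, assuming Keras 3.x with a TensorFlow backend (StringLookup is implemented with TensorFlow ops), shows how the vocabulary argument can be a file path that is read when the layer is built; the malicious path in the final comment is purely illustrative and not taken from the advisory.

    # A minimal sketch of the vulnerable pattern, assuming Keras 3.x with a
    # TensorFlow backend. All file paths here are illustrative.
    import keras

    # StringLookup accepts either an in-memory token list or a path to a
    # newline-delimited vocabulary file, read when the layer is built.
    with open("vocab.txt", "w") as f:
        f.write("cat\ndog\nbird\n")

    lookup = keras.layers.StringLookup(vocabulary="vocab.txt")
    print(lookup.get_vocabulary())  # ['[UNK]', 'cat', 'dog', 'bird']

    # A malicious serialized model could instead carry a config such as
    #     {"vocabulary": "/home/victim/.ssh/id_rsa", ...}
    # so that, on load, each line of the victim's private key becomes a
    # vocabulary token retrievable via get_vocabulary().

Before the fix, nothing restricted that path to the model archive itself, which is exactly what the 3.11.4 change addresses.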
Why Is It Important?
This vulnerability poses significant risks to cloud security because it gives attackers a path to sensitive data and, through it, to cloud resources. Organizations using Keras for AI model development could face severe breaches, including exposure of SSH keys and IAM credentials, which in turn grant unauthorized access to servers, code repositories, and cloud infrastructure. From there, attackers could execute code in production environments, inject backdoors, or push malicious commits through CI/CD pipelines. Patching promptly is therefore essential, and the incident underscores the broader importance of regular updates and security patches in software development.
What's Next?
Organizations using Keras should update to version 3.11.4 or later to close the vulnerability. Security teams should review their environments for signs of compromise and confirm that safe_mode remains enabled wherever models are loaded, since it restricts arbitrary file loading. Developers should treat models from public repositories as untrusted input and verify their integrity before loading them. Continuous monitoring and threat intelligence remain essential for detecting and responding to exploitation attempts.
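For teams applying these mitigations, a defensive loading routine might look like the following sketch. safe_mode is a documented parameter of keras.saving.load_model and defaults to True; the model path and checksum value are hypothetical placeholders, not values from the advisory.

    # A defensive-loading sketch, assuming Keras >= 3.11.4 is installed
    # (e.g. via: pip install --upgrade "keras>=3.11.4").
    # MODEL_PATH and EXPECTED_SHA256 are hypothetical placeholders.
    import hashlib
    import keras

    MODEL_PATH = "downloaded_model.keras"
    EXPECTED_SHA256 = "replace-with-publisher-supplied-digest"

    # 1. Verify the artifact's integrity before handing it to Keras.
    with open(MODEL_PATH, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != EXPECTED_SHA256:
        raise RuntimeError(f"Checksum mismatch for {MODEL_PATH}: {digest}")

    # 2. Load with safe_mode explicitly enabled; it defaults to True, but
    #    passing it guards against call sites that have disabled it.
    model = keras.saving.load_model(MODEL_PATH, safe_mode=True)

Verifying the checksum first means an untrusted artifact is rejected before any Keras deserialization code runs at all.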
Beyond the Headlines
The Keras flaw underscores the broader challenge of securing open-source software, which is foundational to AI and machine learning applications. It highlights the need for secure coding, regular vulnerability assessments, and community collaboration to find and fix flaws in the development and deployment of AI models. As AI technologies continue to evolve, their security will be critical to maintaining trust and reliability in AI-driven solutions.