What's Happening?
Security researchers at Cyata have identified vulnerabilities in Anthropic's MCP server, which connects large language models (LLMs) to local data. These flaws, tracked as CVE-2025-68143, CVE-2025-68144, and CVE-2025-68145, could be exploited for remote code execution and unauthorized file access. The vulnerabilities stem from the server's failure to validate or sanitize certain arguments, allowing attackers to manipulate the model's actions through prompt injection. Anthropic addressed these issues in the latest server update, released in December 2025.
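The core issue described above, tool arguments passed to the server without validation, can be sketched defensively. The snippet below is an illustrative Python example, not Anthropic's actual code: the names `ALLOWED_ROOT`, `safe_resolve`, and `read_file_tool` are hypothetical stand-ins for an MCP-style file tool that canonicalizes a model-supplied path and rejects anything escaping an allowed directory.

```python
import os

# Hypothetical sandbox root for an MCP-style file tool (illustrative only).
ALLOWED_ROOT = os.path.realpath("/srv/mcp-data")

def safe_resolve(user_path: str) -> str:
    """Resolve a model-supplied path and ensure it stays inside ALLOWED_ROOT.

    Canonicalizing with realpath defeats '../' traversal and symlink tricks;
    the commonpath check rejects absolute paths and any escape from the root.
    """
    resolved = os.path.realpath(os.path.join(ALLOWED_ROOT, user_path))
    if os.path.commonpath([resolved, ALLOWED_ROOT]) != ALLOWED_ROOT:
        raise PermissionError(f"path escapes allowed root: {user_path!r}")
    return resolved

def read_file_tool(user_path: str) -> str:
    """A minimal file-read tool handler that validates before opening."""
    with open(safe_resolve(user_path), "r", encoding="utf-8") as f:
        return f.read()
```

Validating at the server boundary matters because a prompt injection can make the model emit arbitrary argument values; the server, not the model, is the last line of defense.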
Why Is It Important?
The discovery of these vulnerabilities highlights the security challenges of integrating AI systems with local data environments. Exploiting these flaws could lead to significant data breaches and unauthorized system access, posing risks to organizations that rely on AI-driven solutions. Resolving them promptly is crucial for maintaining trust in AI technologies and protecting sensitive data. The incident underscores the need for robust security measures in AI deployments and for timely updates to mitigate emerging threats.