What's Happening?
Researchers at LayerX have discovered a technique in which attackers use custom fonts to deceive AI web assistants into misclassifying phishing pages as safe. The method exploits a disconnect between the HTML that AI tools analyze and what users see rendered in the browser: with custom fonts and CSS, attackers can visually alter a page's text for users while leaving it unchanged in the Document Object Model (DOM), which the AI inspects. Because the assistants never see the hidden malicious content, the discrepancy lets social engineering attacks slip past them. LayerX tested the technique with a fake phishing page and found that several AI tools, including ChatGPT and Copilot, failed to recognize the threat. The company has notified the affected vendors, but most, with the exception of Microsoft, have not addressed the issue, deeming it outside the scope of AI model security.
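The core trick can be sketched in a few lines. In this hypothetical simulation (not LayerX's actual test page), a Python dict stands in for a custom font's character map: the DOM contains innocuous-looking gibberish, the font renders it as a phishing lure, and a scanner that reads only the DOM string sees nothing suspicious:

```python
# Hypothetical sketch of the font-remapping trick described above.
# A real attack ships an @font-face whose character map draws
# unexpected glyphs; a plain dict stands in for that table here.

# What the attacker's font *renders* for each DOM character.
GLYPH_MAP = {"q": "p", "w": "a", "e": "s", "r": "s",
             "t": "w", "y": "o", "u": "r", "i": "d"}

def rendered_text(dom_text: str) -> str:
    """What the user actually sees once the custom font is applied."""
    return "".join(GLYPH_MAP.get(ch, ch) for ch in dom_text)

def naive_dom_scanner(dom_text: str) -> str:
    """Stand-in for an AI assistant that only reads the DOM string."""
    return "suspicious" if "password" in dom_text else "safe"

dom_text = "qwertyui"                 # gibberish in the page source
print(naive_dom_scanner(dom_text))    # prints "safe"
print(rendered_text(dom_text))        # prints "password"
```

The scanner classifies the page as safe because the keyword never appears in the DOM; only the browser, applying the font, shows the user the word "password".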
Why It's Important?
This development highlights a significant vulnerability in AI-based security systems, which users increasingly rely on for protection from cyber threats. The ability of attackers to bypass AI defenses with custom fonts poses a risk to individuals and organizations alike, as it undermines trust in the AI tools meant to guard against phishing. When AI assistants fail to detect these threats, the result can be more data breaches and financial losses. As AI is integrated ever more deeply into security protocols, understanding and mitigating such vulnerabilities is crucial to maintaining robust cybersecurity measures.
What's Next?
The discovery of this technique may prompt AI developers and cybersecurity firms to reassess and enhance their security models to better detect and respond to such sophisticated attacks. Companies may need to implement additional layers of security that do not solely rely on AI analysis. Users are advised to remain vigilant and skeptical of unexpected web content, even if AI tools indicate it is safe. The ongoing dialogue between researchers and AI vendors will be essential in addressing these vulnerabilities and improving the resilience of AI systems against evolving cyber threats.











