London, May 6 (PTI) Cybercriminals are still struggling to make effective use of AI tools despite widespread experimentation since the launch of ChatGPT, according to a new peer-reviewed study analysing more than 100 million posts from underground cybercrime forums.
Researchers from the University of Edinburgh, the University of Cambridge and the University of Strathclyde have found that many cybercrime actors lack the skills and resources needed to turn AI tools into major new criminal capabilities.
The study found that AI was being used most effectively to hide patterns that cybersecurity systems are designed to detect, and to run automated social media bots linked to harassment and fraud.
The researchers analysed discussions from the CrimeBB database, which contains posts scraped from underground and dark web cybercrime forums. They examined conversations from November 2022 onwards, when ChatGPT was publicly released, to understand how cybercriminals were experimenting with AI tools.
The study found that AI coding assistants were proving most useful for users who were already skilled, rather than making cybercrime easier for beginners. Researchers said the tools still required significant technical knowledge to use effectively.
They also found some evidence of AI being used in more advanced forms of automation, particularly in social engineering and bot farming.
Because many forms of cybercrime already rely heavily on automated tools and pre-made software, researchers said AI currently appeared to represent “an evolution rather than a revolution” in criminal activity.
Ben Collier, senior lecturer in digital methods at the University of Edinburgh, said: “Cybercriminals are experimenting with these tools, but as far as we can tell it’s not delivering them real benefits in their own work.” The researchers said safeguards built into major chatbots appeared to be limiting some harmful uses.
However, they also found early signs that cybercrime communities were attempting to manipulate chatbot responses.
The study said some users in cybercrime forums were also expressing concern about losing technology sector jobs because of AI disruption, which researchers said could potentially push more people towards cybercrime.
Daniel Thomas from the department of computer and information sciences at Strathclyde said: “The more immediate risk is the rapid adoption of poorly secured AI systems by organisations and individuals, which could create new vulnerabilities that criminals can exploit.” PTI HSR ABD