Hackers Hate AI Slop Even More Than You Do

The complaint sounds familiar. “I’m disappointed that you are working to incorporate AI garbage into the site,” one annoyed person, posting anonymously, said in an online message. “No-one is asking for this—we want you to improve the site, stop charging for new features.”

Only, this is not a regular internet user moaning about AI being forced into their favorite app. Instead, they are complaining about a cybercrime forum’s plans to introduce more generative AI. Like millions of others, scammers, grifters, and low-level hackers are getting annoyed about AI encroaching into their lives and the rise of low-quality AI slop being posted in their online communities.

“People don’t like it,” says Ben Collier, a security researcher and senior lecturer at the University of Edinburgh. As part of a recent study into how low-level cybercriminals are using AI, Collier and fellow researchers spotted an increasing pushback over the use of generative AI in underground cybercrime forums and hacking groups.

During the generative AI boom and hype cycles of the past couple of years, some people posting on hacking forums have shifted from optimism about how AI can help hacking to greater skepticism about the technology, according to the study, which also involved researchers from the University of Cambridge and the University of Strathclyde.


The researchers analyzed 97,895 AI-related conversations on cybercrime forums, from the launch of ChatGPT in late 2022 until the end of last year. They found complaints about people dumping “bullet-pointed explainers” of basic cybersecurity concepts, gripes about the volume of low-quality posts, and concerns about Google’s AI search overviews driving down the number of visitors to the forums.

For decades cybercrime message boards and marketplaces, often Russian in origin, have allowed scammers to do business together. They are places where stolen data can be traded, hacking jobs are advertised, and fraudsters shitpost about their rivals. While scammers often try to scam each other, the forums also have a sense of community. For example, users build up reputations for being reliable, and forum owners hold writing competitions.

“These are essentially social spaces. They really hate other people using [AI] on the forums,” Collier says. He says the social dynamic of the groups can be messed up by potential cybercriminals trying to gain a better reputation by posting AI-generated hacking explainers. “I think a lot of them are a bit ambivalent about AI because it undermines their claim to be a skilled person.”

Posts reviewed by WIRED on Hack Forums, a self-styled space for those interested in talking about hacking and sharing techniques, show an irritation caused by people creating posts with AI. “I see a lot of members using AI for making their threads/posts and it pisses me off since they don’t even take the time to write a simple sentence or two,” one poster wrote. Another put it more bluntly: “Stop posting AI shit.”

In several instances, Collier says, users across multiple forums appear to be irritated by AI posts because they come to the forums to make friends. “If I wanted to talk to an AI chatbot, there are many websites for me to do so … I come here for human interaction,” one post cited in the research says.

Since ChatGPT emerged toward the end of 2022, there has been significant interest in AI-hacking capabilities and how the technology can transform online crime. Both sophisticated hackers and those less capable have been trying to use AI in their attacks. While some organized fraudsters have boosted their operations with ever-more realistic AI face-swapping technology and social engineering messages translated using AI, a lot of attention has been on generative AI’s capabilities to write malicious code and discover vulnerabilities.
