The digital underground, long regarded as an eager early adopter of new technology for illicit ends, is currently experiencing a wave of internal dissent mirroring the frustrations of the mainstream internet. While the global discourse surrounding generative artificial intelligence (AI) has focused on its potential to revolutionize productivity or automate cyberattacks, a growing segment of the cybercriminal community is expressing profound irritation toward the technology. On encrypted messaging platforms and long-standing hacking forums, the sentiment is becoming increasingly clear: the influx of AI-generated "slop" is degrading the quality of social interaction, undermining established reputation systems, and threatening the human-centric nature of these illicit communities.

The Shift from Hype to Skepticism

Since the public release of ChatGPT in late 2022, the cybersecurity landscape has been dominated by predictions of an AI-driven crime wave. Early reports focused on the ability of large language models (LLMs) to write malicious code, craft flawless phishing emails, and identify software vulnerabilities at scale. However, a comprehensive study conducted by researchers from the University of Edinburgh, the University of Cambridge, and the University of Strathclyde suggests that the reality on the ground is more nuanced and, for many forum participants, increasingly annoying.

The study analyzed nearly 100,000 AI-related conversations on cybercrime forums spanning from the launch of ChatGPT through the end of 2023. The data reveals a significant shift in tone. Initially characterized by curiosity and experimentation, the discourse has pivoted toward skepticism and outright hostility. Researchers identified a recurring pattern of complaints regarding "low-quality" content. Users are increasingly vocal about their distaste for members who "dump" AI-generated explainers on basic cybersecurity concepts—often presented in sterile, bulleted lists—without contributing original thought or practical expertise.
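The article does not detail the researchers' methodology beyond the scale of the dataset, but the shape of such a longitudinal tone analysis is easy to picture. Purely as an illustration, using hypothetical keyword lists and a made-up file of posts rather than the study's actual data or classification approach, a minimal Python sketch might bucket AI-related posts by month and track how often they contain complaint language:

    # Illustrative sketch only: hypothetical keywords and file name, not the
    # Edinburgh/Cambridge/Strathclyde study's actual data or methodology.
    import csv
    from collections import defaultdict
    from datetime import datetime

    AI_TERMS = {"chatgpt", "gpt-4", "llm", "openai"}  # assumed topic filter
    COMPLAINT_TERMS = {"slop", "low quality", "lazy", "stop posting ai"}  # assumed tone markers

    monthly = defaultdict(lambda: {"ai_posts": 0, "complaints": 0})

    with open("forum_posts.csv", newline="", encoding="utf-8") as f:  # hypothetical export
        for row in csv.DictReader(f):              # expects columns: date, body
            body = row["body"].lower()
            if not any(term in body for term in AI_TERMS):
                continue                           # keep only AI-related posts
            month = datetime.fromisoformat(row["date"]).strftime("%Y-%m")
            monthly[month]["ai_posts"] += 1
            if any(term in body for term in COMPLAINT_TERMS):
                monthly[month]["complaints"] += 1

    for month in sorted(monthly):
        stats = monthly[month]
        share = stats["complaints"] / stats["ai_posts"]
        print(f"{month}: {stats['ai_posts']} AI posts, {share:.0%} complaint share")

A rising complaint share across 2023 would be the kind of signal the researchers describe; their actual classification was presumably far more rigorous than simple keyword matching.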

Ben Collier, a security researcher and senior lecturer at the University of Edinburgh, notes that these underground spaces are essentially social environments. For decades, forums—many of Russian origin—have served as marketplaces where stolen data is traded and hacking services are advertised. Yet, they also function as communities where status is earned through demonstrated skill and "proof of work." The introduction of generative AI threatens this social fabric by allowing unskilled actors to mimic the language of experts, thereby diluting the value of hard-earned reputations.

The Reputation Economy and the Threat of "AI Slop"

In the world of cybercrime, reputation is the primary currency. On platforms such as Hack Forums or the more exclusive Russian-language boards, users build "rep" through successful trades, the sharing of proprietary exploits, and participation in community events like writing competitions. Generative AI provides a shortcut for "script kiddies"—inexperienced hackers—to appear more knowledgeable than they are.

This phenomenon has led to a surge in what critics call "AI slop." Forum members have expressed frustration over the proliferation of threads that lack human nuance. One poster on a prominent hacking board recently stated, "I see a lot of members using AI for making their threads and it pisses me off since they don’t even take the time to write a simple sentence or two." Another user was more succinct, demanding that peers "stop posting AI shit."

The pushback is rooted in a desire for genuine human interaction. One post cited in the Edinburgh research emphasized this sentiment: "If I wanted to talk to an AI chatbot, there are many websites for me to do so… I come here for human interaction." The fear among veterans is that these communities will devolve into "clanker forums," where automated bots and AI-assisted accounts interact with one another, effectively hollowing out the communal value of the space.

Chronology of AI Integration and Backlash

The timeline of AI’s integration into the cybercrime ecosystem highlights the rapid transition from novelty to nuisance:

  • November 2022 – Early 2023: The Discovery Phase. Following the launch of ChatGPT, hackers began testing the model’s "guardrails." Discussions centered on "jailbreaking" prompts to bypass safety protocols against generating malware.
  • Mid-2023: The Tooling Phase. Developers within the underground launched specialized, supposedly "unfiltered" models such as WormGPT and FraudGPT. These were marketed as revolutionary tools for business-email compromise (BEC) and malware development.
  • Late 2023: The Saturation Phase. Forums became flooded with AI-generated tutorials and "get rich quick" schemes. It was during this period that the University of Edinburgh researchers noted the first significant uptick in community pushback.
  • Early 2024 – Present: The Era of Skepticism. Sophisticated threat actors began publicly disparaging peers for over-reliance on AI. Concerns emerged regarding the security of AI-generated code, with some warning that LLMs often introduce vulnerabilities that could expose the hacker’s own infrastructure.

Technical Limitations and Security Risks

Beyond the social annoyance, sophisticated cybercriminals are wary of AI for practical technical reasons. Ian Gray, vice president of intelligence at the security firm Flashpoint, explains that high-level threat actors are acutely aware of the deficiencies in current commercial models. While these actors know how to prompt their way past a model's safety guardrails, they are equally cautious about what the models produce.

"They’re cautious of AI-generated projects in forums or marketplaces," Gray noted. "There are weaknesses and vulnerabilities, sometimes exposing the underlying infrastructure."

The cybersecurity industry recently observed a spike in concern regarding "Claude Mythos Preview," a frontier AI model from Anthropic. While some in the industry feared the model could spark a new wave of crime—prompting the Indian government to issue a "red alert" to its information security sector—the response within the criminal underground was mixed. Some groups used the existence of such models to mock their competitors, claiming that "all they can do is use AI," implying a lack of fundamental technical skill.

The research indicates that AI has not yet significantly lowered the barrier to entry for high-level cybercrime. Instead, its primary impact has been felt in sectors that were already highly automated. This includes Search Engine Optimization (SEO) fraud, social media botting, and certain types of romance scams where AI is used to translate messages or generate realistic "deepfake" personas for social engineering.

Official Reactions and Industry Implications

The response from the broader cybersecurity community and government entities has been one of cautious observation. While the "AI revolution" in cybercrime has been slower to materialize in a destructive sense than initially feared, officials remain on high alert. The panic surrounding the potential closure of open-source repositories, such as the UK National Health Service (NHS) reportedly moving its GitHub repositories to closed source over AI security concerns, reflects the high stakes involved.

In the underground, the debate over AI integration has even reached the administrative level of marketplaces. Flashpoint researchers recently observed a heated discussion regarding the creation of an "AI-enhanced" cybercrime market. The goal was to use AI to help buyers navigate stolen data and account credentials more efficiently. The proposal was met with fierce resistance, with one participant bluntly retorting, "IT’S A STUPID FUCKING IDEA TO PUT AI INTO YOUR MARKET."

This resistance stems from a fundamental distrust of the technology’s reliability and the potential for it to draw unwanted attention from law enforcement through predictable patterns or "hallucinations" that could compromise a transaction.

Broader Impact and Future Outlook

The internal friction within cybercrime forums suggests that the human element of digital crime remains its most critical component. While AI can assist with structural tasks—such as improving grammar or organizing data—the "core" of the cybercriminal enterprise relies on trust, intuition, and technical mastery that current AI models cannot replicate.

The findings of the University of Edinburgh and its partners highlight a crucial paradox: as AI makes the internet more "artificial," the value of "human" spaces—even those dedicated to illegal activities—increases. The researchers concluded that AI has not yet led to serious disruptions of established criminal business models. Instead, it has created a new layer of noise that the community must now filter out.

Looking forward, the tension between automation and human-led operations is likely to persist. While some forum members may eventually accept AI assistants to help structure posts, the line is firmly drawn at full automation. The fear of a "clanker forum" is a powerful deterrent against the total adoption of generative AI.

As the cybersecurity industry continues to develop defenses against AI-powered threats, it may find an unlikely ally in the inherent conservatism and elitism of the hacking community itself. The very "slop" that annoys the average internet user is currently serving as a friction point for the next generation of cybercriminals, potentially slowing the democratization of high-level hacking techniques. For now, the digital underground remains a place where human reputation still outweighs algorithmic efficiency.
