The digital underground, a realm traditionally associated with the rapid adoption of cutting-edge technology, is experiencing an unexpected wave of internal friction. For decades, cybercrime forums and clandestine marketplaces have served as the incubators for sophisticated malware and elaborate social engineering schemes. However, a new study suggests that the very technology once touted as a revolutionary force for threat actors—generative artificial intelligence—is becoming a primary source of resentment within these communities. Much like the broader internet, the dark web is being inundated with "AI slop," leading to a significant pushback from scammers, grifters, and hackers who argue that the technology is degrading the quality of their social and professional spaces.

This phenomenon marks a departure from the initial hype that followed the public release of ChatGPT in late 2022. While early discourse in the underground focused on the potential for AI to automate the creation of malicious code or craft perfect phishing emails, the reality has proven more mundane and irritating for the average forum user. Security researchers have documented a growing trend of "AI skepticism" among low-level cybercriminals, who now find themselves wading through bot-generated posts, repetitive tutorials, and low-quality material that mimics the junk currently plaguing mainstream platforms like Reddit, Pinterest, and Google Search.

Quantifying the Discontent: A Deep Dive into Forum Sentiment

The scale of this shift in sentiment was recently illuminated by a comprehensive study conducted by researchers from the University of Edinburgh, the University of Cambridge, and the University of Strathclyde. Led by Ben Collier, a security researcher and senior lecturer at the University of Edinburgh, the team analyzed 97,895 AI-related conversations across various cybercrime forums. The data set spans from the launch of ChatGPT in November 2022 through the end of 2023, providing a longitudinal look at how the "honeymoon phase" of generative AI transitioned into a period of frustration.
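The paper's exact pipeline is not described here, but the shape of such a longitudinal analysis is easy to sketch. What follows is a hypothetical Python fragment, not the researchers' actual method: it flags AI-related posts with a naive keyword list and buckets them by month to chart how discussion volume shifted over the study window. The post records, field names, and keyword list are all invented for illustration.

    from collections import Counter
    from datetime import datetime

    # Hypothetical post records; a real study would work from forum scrapes or dumps.
    posts = [
        {"date": "2022-12-03", "text": "Sharing a ChatGPT jailbreak prompt"},
        {"date": "2023-09-14", "text": "Another AI-generated tutorial, pure slop"},
    ]

    # Assumed keyword list for flagging AI-related conversations.
    AI_TERMS = ("chatgpt", "gpt", "llm", "ai-generated", "wormgpt", "fraudgpt")

    def is_ai_related(text: str) -> bool:
        """Naive substring match; a real study would likely use a tuned classifier."""
        lowered = text.lower()
        return any(term in lowered for term in AI_TERMS)

    # Bucket AI-related posts by calendar month to trace the hype cycle over time.
    monthly = Counter()
    for post in posts:
        if is_ai_related(post["text"]):
            month = datetime.fromisoformat(post["date"]).strftime("%Y-%m")
            monthly[month] += 1

    for month, count in sorted(monthly.items()):
        print(month, count)

A real analysis would also need to separate genuine discussion from the bot-generated noise it is trying to measure, which is part of what makes studies like this one difficult.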

The research identified several key categories of complaints. A significant portion of the discourse involved users criticizing "bullet-pointed explainers" of basic cybersecurity concepts: content clearly generated by large language models (LLMs) to bolster the reputation of users who lack actual technical expertise. Forum participants also expressed concern that the "dead internet" theory is becoming a reality within their own niche communities. The influx of low-quality, AI-generated threads has made it increasingly difficult for genuine threat actors to find reliable information, trade stolen data, or collaborate on complex operations.

Perhaps most tellingly, the study found that the introduction of AI has not yet led to the "democratization of cybercrime" that many experts feared. While AI can assist in basic tasks, it has not significantly lowered the barrier to entry for high-level hacking. Instead, its most visible impact has been the pollution of the information ecosystem that these criminals rely on for survival.

A Chronology of the AI Hype Cycle in the Underground

To understand the current backlash, one must examine the timeline of AI integration within the cybercriminal world. The evolution of this sentiment followed a distinct trajectory:

  1. The Discovery Phase (Late 2022 – Early 2023): Following the launch of OpenAI’s ChatGPT, cybercrime forums were abuzz with experimentation. Users shared "jailbreak" prompts designed to bypass safety filters, attempting to force the AI to write ransomware or find software vulnerabilities.
  2. The Productization Phase (Mid 2023): Developers began marketing "malicious" versions of AI, such as WormGPT and FraudGPT. These tools were advertised as LLMs without guardrails, specifically designed for phishing and malware development. However, many in the community soon realized these were often just wrappers for existing commercial models with little added value.
  3. The Saturation Phase (Late 2023 – Early 2024): The forums became flooded with AI-generated "noise" as users turned to AI to automate their forum activity and farm "rep" (reputation) points. This ushered in the current era of "AI slop," in which the volume of content increased while its utility plummeted.
  4. The Resistance Phase (Present): Forum moderators and veteran users have begun pushing back in earnest. In some cases, forums have considered banning AI-generated posts entirely to preserve the "human" element of the community (a toy illustration of the detection problem such a ban raises appears below).
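Neither the study nor the forums describe any concrete moderation tooling, but the enforcement problem in the Resistance Phase can be made concrete with a toy heuristic. The Python sketch below is purely hypothetical: it flags posts that resemble the "bullet-pointed explainer" style users complain about, using invented thresholds and stock phrases. Real AI-text detection is an open problem, and a filter this crude would produce plenty of false positives.

    import re

    # Stock phrases common in boilerplate LLM output (illustrative, not exhaustive).
    TELLTALE_PHRASES = (
        "in conclusion",
        "it is important to note",
        "as an ai language model",
        "delve into",
    )

    def looks_like_ai_slop(post: str) -> bool:
        """Crude heuristic: heavy bullet usage plus stock phrasing.

        This only mimics the surface features forum users say they are
        tired of seeing; it is not a reliable detector."""
        lines = [line for line in post.splitlines() if line.strip()]
        if not lines:
            return False
        bullets = sum(1 for line in lines if re.match(r"\s*([-*\u2022]|\d+\.)\s", line))
        lowered = post.lower()
        stock_hits = sum(phrase in lowered for phrase in TELLTALE_PHRASES)
        return bullets / len(lines) > 0.5 or stock_hits >= 2

    sample = ("Top OPSEC tips:\n- Use a VPN\n- Rotate handles\n- Encrypt everything\n"
              "In conclusion, it is important to note that safety matters.")
    print(looks_like_ai_slop(sample))  # True: bullet-heavy and full of stock phrasing

A forum that banned AI-generated posts outright would face exactly this detection problem at scale, which may help explain why most have so far settled for social pressure instead.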

The Erosion of Hacker Meritocracy

The primary reason for the hostility toward AI in these spaces is rooted in the unique social structure of the cybercriminal world. Unlike legitimate social media, where engagement is the primary currency, cybercrime forums operate on a strict meritocracy based on technical skill and reliability.

"These are essentially social spaces," Ben Collier noted in the research. For a hacker to be successful, they must build a reputation. This reputation allows them to sell stolen credentials, recruit partners for "big game hunting" (targeting large corporations), and avoid being scammed by their peers. When a novice user utilizes an AI to generate a complex-looking technical guide, they are essentially "faking" a skill level they do not possess. This undermines the trust that the entire ecosystem is built upon.

On platforms like Hack Forums, the irritation is palpable. Long-time members have expressed disdain for the "clanker" nature of modern discussions, where it feels as though chatbots are simply talking to other chatbots. One user summarized the sentiment bluntly, stating that they come to these forums for "human interaction" and real-world expertise, not to read structured, sterile AI output that offers no original insight.

Sophisticated Threats and the "Claude Mythos" Factor

While low-level scammers grapple with forum clutter, more sophisticated threat actors are viewing AI through a lens of cautious pragmatism. According to Ian Gray, vice president of intelligence at the security company Flashpoint, professional hackers are acutely aware of the limitations of commercial AI models.

"More sophisticated threat actors are aware of the shortfalls of commercial models that have guardrails," Gray says. These actors are less concerned with using AI to write posts and more focused on "jailbreaking" frontier models to assist in high-stakes operations. Recently, the cybersecurity community has been on high alert following the emergence of "Claude Mythos Preview," a frontier model from Anthropic. The potential capabilities of such advanced models have caused significant concern; for instance, India’s cybersecurity agencies recently issued a "red alert" regarding the model’s potential to spark a new wave of cybercrime.

However, even at this high level, there is a culture of elitism that rejects total reliance on AI. Flashpoint’s analysis found that some hacking groups have begun mocking their rivals for using AI, claiming that their dependence on the technology proves they lack "real" hacking talent. "All they can do is use AI," one group reportedly said of its competitors, wielding the accusation as a pejorative that signals a lack of fundamental knowledge.

The Impact on Business Models and Marketplaces

The friction extends beyond social interaction and into the commercial heart of the underground: the marketplaces. These are the platforms where billions of dollars in stolen credit card data, "logs" from infected computers, and illicit services are traded.

There have been documented attempts to create "AI-enhanced" cybercrime markets, intended to streamline the process of searching through massive databases of stolen information. Proponents argued that AI could help buyers find specific types of data, such as accounts belonging to a certain demographic or geographic region, more quickly. These proposals, however, met with fierce resistance, rooted in the fear that AI integration would introduce new vulnerabilities into the marketplaces themselves, potentially exposing the underlying infrastructure to law enforcement or rival hackers.

The study by Collier and his colleagues confirmed that while AI has found a niche in highly automated sectors—such as SEO fraud, social media botting, and certain types of romance scams—it has failed to disrupt established business models. The "skill barrier" remains a formidable defense; while an AI can write a phishing template, it cannot yet manage the complex infrastructure required to execute a large-scale data breach or navigate the nuances of a multi-stage ransomware negotiation.

Broader Implications for Global Cybersecurity

The underground’s rejection of AI slop provides a fascinating case study for the broader tech industry. It suggests that even in environments where rules are non-existent and ethics are discarded, there is an inherent value placed on human-generated content and authentic expertise.

For cybersecurity defenders, this internal friction is a double-edged sword. On one hand, the "pollution" of cybercrime forums may hinder the ability of low-level actors to learn and collaborate, effectively acting as a self-imposed "denial of service" attack on their own communities. On the other hand, the move toward "closed" or more exclusive human-only circles could make it harder for researchers and law enforcement to monitor underground activity.

The "Claude Mythos" panic and the subsequent closing of open-source repositories—such as those by the UK’s National Health Service (NHS) over AI security concerns—demonstrate that the threat of AI in the hands of the wrong person remains high. However, the "wrong person" in this scenario is rarely the one posting AI-generated summaries on a forum. Instead, the real danger lies in the quiet, sophisticated use of AI by nation-states and high-tier criminal syndicates who have the discipline to avoid the "slop" that is currently irritating their lower-level counterparts.

Ultimately, the cybercriminal underground is mirroring the mainstream world in its struggle to balance the efficiency of AI with the necessity of human trust. As one forum user warned, if the trend continues, these historic hubs of digital rebellion risk becoming "clanker forums," devoid of the human ingenuity that made them dangerous in the first place. For now, the "human" hacker remains the most significant threat in the digital landscape, and they are increasingly tired of the machine.
