A comprehensive investigation by the European nonprofit research group AI Forensics has revealed that Telegram has become a primary marketplace for a sophisticated infrastructure of digital harassment, where thousands of men trade surveillance services and abusive content. The research, conducted over a six-week period earlier this year, analyzed approximately 2.8 million messages across 16 prominent Italian and Spanish Telegram communities. These groups, which count more than 24,000 active participants, are reportedly used to facilitate the stalking and harassment of women, including wives, girlfriends, and former partners, through the sale of hacking tools and the distribution of nonconsensual intimate imagery.

According to the study, the digital environment within these channels is characterized by a high volume of illegal and harmful material. Researchers documented the posting of 82,723 unique media files, including images, videos, and audio recordings. While high-profile celebrities and social media influencers are frequent targets, a significant portion of the abuse is directed at "ordinary" women who are personally known to the perpetrators. In many instances, the victims remain entirely unaware that their private information or manipulated images are being circulated within these clandestine networks.

The Infrastructure of Digital Harassment

The AI Forensics report identifies 13 distinct categories of abusive content and services offered within these Telegram ecosystems. These range from "nudifying" services—AI-powered bots that generate fake nude images of women—to the sale of folders containing child sexual abuse material (CSAM) and depictions of extreme violence. A particularly concerning aspect of the findings is the commercialization of surveillance. Researchers observed numerous advertisements for "professional hacking on commission," where sellers claimed the ability to provide unauthorized access to victims’ phone galleries, social media accounts, and real-time location data.

One specific message intercepted by researchers touted the ability to "spy on your partner’s account," inviting users to send private messages for pricing. Another post advertised a bot designed specifically to "spy on a girl’s gallery." Across the analyzed dataset, there were more than 18,000 references to spying or the exchange of "spy content." This ecosystem operates on a tiered financial model; access to certain channels reportedly costs between €20 and €50 (approximately $23 to $55 USD), while others offer monthly subscriptions starting at €5.

Silvia Semenzin, a lead researcher at AI Forensics who has tracked similar behavior on the platform since 2019, noted that the anonymity provided by Telegram creates a sense of impunity for users. "They feel safe in offering these types of services, which deal directly with controlling your partner or stealing her personal information," Semenzin stated. She emphasized that the majority of this violence is interpersonal, suggesting that the platform’s architecture facilitates a transition from digital harassment to potential physical danger.

A Chronology of Platform Proliferation and Legal Friction

The emergence of these harassment networks on Telegram is not an isolated phenomenon but rather the latest escalation in a long-standing pattern of platform misuse. The timeline of Telegram’s involvement in these issues reflects a broader struggle between the company’s commitment to absolute privacy and the requirements of global law enforcement.

  • 2019: Researchers first exposed large-scale Italian Telegram channels engaged in "virtual rape" and revenge porn, marking the beginning of documented systematic abuse on the platform.
  • 2022-2023: Investigations in the United Kingdom revealed Telegram groups were being used to dox and degrade women who participated in Facebook groups like "Are We Dating the Same Guy?"
  • January 2024: Reports surfaced from China detailing Telegram groups with up to 65,000 members dedicated to selling intimate images of women obtained through hacking and hidden cameras.
  • August 2024: French authorities initiated a criminal investigation into Telegram founder Pavel Durov. The charges, while denied by Durov, relate to the platform’s alleged complicity in facilitating illegal transactions, CSAM distribution, and organized crime due to a lack of moderation.
  • Late 2024: AI Forensics released its latest findings, highlighting the persistent and evolving nature of these networks in Europe, specifically targeting Spanish and Italian speakers.

Throughout this period, Telegram has consistently positioned itself as a bastion of free speech, often resisting government requests for data or moderation. However, this stance has made it a preferred haven for malicious actors who have migrated from more strictly moderated platforms like Facebook or X (formerly Twitter).

Supporting Data: The Scale of the Problem

The data provided by AI Forensics paints a stark picture of the platform’s moderation challenges. In the 16 groups studied, dozens of abusive images were shared every hour. The researchers noted that victims are frequently "named, tagged, and locatable" via shared social media profile links, which bridges the gap between digital abuse and real-world stalking.

The technical tools advertised—often referred to as "stalkerware"—are designed to run invisibly on a victim’s device. While the researchers could not independently verify the efficacy of every tool sold, the demand for such software is high. Stalkerware typically allows a perpetrator to monitor text messages, call logs, photos, and GPS locations. The prevalence of these advertisements on Telegram suggests the platform has become a de facto marketplace for tools that facilitate domestic abuse and coercive control.

Furthermore, the "nudifying" AI services represent a growing frontier of tech-enabled abuse. These bots allow users to upload a clothed photo of a woman and receive an AI-generated nude version in seconds. This technology lowers the barrier to entry for harassment, allowing perpetrators to victimize individuals even without access to actual intimate photos.

Official Responses and Regulatory Pressure

In response to the findings, a Telegram spokesperson emphasized that the company takes proactive measures to combat illegal content. The company claims to remove "millions" of pieces of content daily using custom AI tools and maintains strict policies against the promotion of violence, CSAM, and nonconsensual imagery. According to Telegram’s publicly available data, the platform has blocked nearly 12 million groups and channels this year, including over 153,000 linked to child sexual abuse.

"We firmly reject the idea that Telegram profits from content we are actively taking down," the spokesperson stated. The company also asserted its compliance with the European Union’s Digital Services Act (DSA) and noted ongoing communication with the European Commission.

However, advocates and researchers argue that Telegram’s current moderation efforts are insufficient given its scale of over 1 billion monthly active users. Silvia Semenzin and other experts are calling for Telegram to be officially designated as a "Very Large Online Platform" (VLOP) under the DSA. Such a classification would subject the company to more stringent oversight, requiring it to perform systemic risk assessments and provide greater transparency regarding its moderation algorithms and data sharing with researchers.

Broader Impact and Fact-Based Analysis

The implications of the AI Forensics report extend beyond the immediate victims. The existence of these marketplaces reflects a normalization of digital misogyny and the "gamification" of harassment. Adam Dodge, a lawyer and founder of EndTAB (End Technology-Enabled Abuse), noted that Telegram’s unique combination of anonymity, speed, and large network effects makes it an ideal environment for image-based abuse to thrive.

The transition of harassment from public social media platforms to encrypted or semi-private messaging apps like Telegram presents a significant hurdle for law enforcement. While end-to-end encryption is a vital tool for journalists and activists in repressive regimes, the same privacy features are being leveraged by predatory networks to shield their activities from scrutiny.

The financial aspect of these groups also suggests a shift toward organized digital crime. When harassment is monetized through subscriptions and service fees, it creates a self-sustaining ecosystem where "vendors" are incentivized to find new victims and develop more invasive hacking tools. This commercialization elevates the threat from individual bad actors to a structured industry of abuse.

As European regulators continue to evaluate Telegram’s status under the Digital Services Act, the findings of AI Forensics serve as a critical data point. The report concludes that without significant structural changes to how Telegram manages its "discovery" features—such as public channels and searchable groups—the platform will likely remain a central node in the global infrastructure of digital violence against women. The challenge for policymakers remains balancing the fundamental right to private communication with the urgent need to protect individuals from tech-enabled stalking and exploitation.
