A comprehensive investigation by European nonprofit researchers has revealed that thousands of men are active participants in Telegram communities dedicated to the sale of hacking services, surveillance tools, and the systematic distribution of nonconsensual intimate imagery. The report, compiled by the algorithmic auditing group AI Forensics, highlights a sophisticated infrastructure of harassment where digital tools are marketed specifically to target friends, spouses, and former partners. Beyond the sale of technical exploits, these groups serve as hubs for the promotion of deepfake "nudifying" services and the exchange of folders containing highly abusive content, including depictions of sexual violence and child sexual abuse material.

The findings come at a critical juncture for Telegram and its founder, Pavel Durov, who is currently navigating complex legal challenges in France and ongoing regulatory pressure from the European Union. The research underscores the platform’s dual nature: while it positions itself as a bastion of free speech and privacy, it has simultaneously become a primary conduit for localized and global networks of gender-based abuse.

The Scope and Scale of Digital Harassment Networks

The AI Forensics study involved a granular six-week analysis of nearly 2.8 million messages sent across 16 Telegram communities based in Italy and Spain. These groups, which the researchers identified as regular distributors of abusive content, proved remarkably active: over the course of the monitoring period, more than 24,000 unique members participated in the dissemination of 82,723 files, including images, videos, and audio recordings.

While high-profile celebrities and social media influencers are frequent targets of these groups, a significant portion of the abuse is directed at "ordinary" women. Researchers observed that perpetrators often share content featuring women they know personally, leveraging the anonymity of the platform to humiliate or control them. Silvia Semenzin, a lead researcher at AI Forensics, noted that victims are frequently unaware that their private images have been harvested, manipulated, or shared. The report indicates that victims are often doxed—their private information, such as home addresses or links to social media profiles, is published alongside the abusive content to facilitate further harassment.

The Marketplace for Hacking and Surveillance

One of the most alarming aspects of the report is the overt commercialization of "spy" services. Telegram channels act as digital storefronts for services that promise to compromise the privacy of individuals. Researchers documented numerous advertisements for "professional hacking on commission," with sellers claiming they can grant customers access to a target’s phone gallery, social media accounts, and private messages.

Specific offerings identified in the dataset include:

  • Social Media Compromise: Claims of being able to "spy" on a partner’s account or recover deleted messages.
  • Gallery Access: Bots and software marketed as tools to extract photos and videos directly from a victim’s mobile device.
  • Information Gathering: Services that link phone numbers to Instagram accounts or provide geolocated data on specific individuals.

Across the analyzed dataset, there were more than 18,000 references to spying or spy-related content. While researchers could not independently verify the efficacy of every tool sold, the prevalence of these advertisements points to a robust demand for "stalkerware"—software designed to monitor a person’s digital life without their consent. Experts warn that even if some of these services are fraudulent, their existence fosters a culture of entitlement and domestic surveillance that poses a tangible threat to women’s safety.

A Chronology of Telegram’s Contentious Moderation History

The current crisis is not an isolated incident but part of a decade-long pattern of Telegram being utilized by fringe and criminal elements. To understand the current landscape, it is necessary to examine the platform’s history regarding content moderation and its relationship with global authorities.

  • 2013: Telegram is launched by brothers Pavel and Nikolai Durov, emphasizing privacy and resistance to government overreach following their departure from the Russian social media site VK.
  • 2015-2016: The platform gains notoriety as a communication tool for ISIS recruitment and propaganda. Under international pressure, Telegram begins shutting down public channels associated with terrorism but maintains a strict "hands-off" policy for private chats.
  • 2018-2020: Russia attempts to block Telegram after the company refuses to hand over encryption keys to the FSB. The platform successfully evades the ban through domain fronting (disguising its traffic as requests to major cloud services), solidifying its reputation as a "censorship-proof" app.
  • 2019: Researchers, including Silvia Semenzin, first expose massive Italian Telegram channels engaged in "virtual rape" and the sharing of nonconsensual imagery, involving tens of thousands of users.
  • 2021: Following the January 6 Capitol riot in the United States, far-right groups migrate to Telegram after being deplatformed from mainstream sites like Twitter and Facebook, leading to renewed calls for moderation.
  • 2024: Pavel Durov is detained and placed under formal investigation in France. The charges relate to the platform’s alleged complicity in facilitating organized crime, drug trafficking, and the distribution of child sexual abuse material due to a lack of moderation and cooperation with law enforcement.

The Technological Infrastructure of Abuse

The AI Forensics report highlights how modern technology has lowered the barrier for digital abuse. The rise of "nudifying" bots is a primary example. These AI-driven tools allow users to upload a clothed photo of a woman and receive an AI-generated nude version in seconds. Because these bots are hosted within Telegram, they are easily accessible to anyone with a smartphone, requiring no specialized technical knowledge.

Furthermore, the ecosystem is highly monetized. Access to "premium" channels where the most explicit or "exclusive" content is shared often requires a subscription fee, typically ranging from €5 to €50. This creates a financial incentive for administrators to continually source and distribute new, more "extreme" content. The researchers noted that in some Spanish-language groups, dozens of abusive images were shared every hour, demonstrating a high-volume, automated approach to harassment.

Official Responses and the Regulatory Battleground

In response to the findings, a Telegram spokesperson emphasized that the company employs "custom AI tools" to remove millions of pieces of harmful content daily. The platform's terms of service explicitly prohibit the promotion of violence, illegal sexual content, and doxing. According to Telegram's publicly available moderation data, the platform has blocked nearly 12 million groups and channels this year, including more than 153,000 linked to child sexual abuse material.

"We firmly reject the idea that Telegram profits from content we are actively taking down," the spokesperson stated, adding that the company abides by European Union laws, including the Digital Services Act (DSA).

However, digital rights advocates and the AI Forensics team argue that Telegram’s self-reported moderation figures are insufficient. There is an ongoing debate regarding whether Telegram should be classified as a "Very Large Online Platform" (VLOP) under the DSA. Currently, Telegram claims to have fewer than 45 million monthly active users in the EU, which is the threshold for the stricter VLOP designation. Critics suggest the platform may be underreporting its user base to avoid the more rigorous transparency and systemic risk assessment requirements that come with the label.

Broader Impact and Implications for Digital Safety

The findings from Italy and Spain are emblematic of a global trend. Similar networks have been documented in China, where groups of up to 65,000 people trade intimate images, and in the United Kingdom, where Telegram was used to dox women who participated in Facebook "dating safety" groups.

Adam Dodge, founder of EndTAB (End Technology-Enabled Abuse), suggests that Telegram’s specific architecture makes it uniquely dangerous for women. The combination of anonymity, high-speed file sharing, and the ability to create massive, unmoderated broadcast channels allows "image-based abuse marketplaces" to thrive. Unlike traditional social media platforms, which have faced years of public and political pressure to refine their reporting tools, Telegram’s "freedom-first" philosophy often results in a slower response to individual reports of harassment.

The implications of this research extend beyond digital privacy. The normalization of these groups contributes to a broader culture of gender-based violence. When men are able to purchase hacking tools to "spy" on their partners or share manipulated images with thousands of strangers with impunity, the line between digital harassment and physical danger becomes increasingly blurred.

As European regulators continue to scrutinize Telegram’s compliance with the Digital Services Act and French authorities proceed with their criminal investigation into Pavel Durov, the AI Forensics report serves as a stark reminder of the human cost of unmoderated digital spaces. For the thousands of women whose lives have been cataloged and sold in these channels, the debate over platform regulation is not a theoretical legal exercise, but a matter of urgent personal safety. The challenge for the international community remains finding a balance between protecting legitimate privacy and preventing platforms from becoming safe havens for systemic abuse.
