Telegram Reports Taking Down 15.4 Million Groups and Channels Sharing Harmful Content in 2024
Telegram, the popular messaging platform, announced it has blocked 15.4 million groups and channels in 2024 that were sharing harmful content, including fraud, terrorism, and child sexual abuse material (CSAM).
The announcement came through the platform's newly launched moderation page, which aims to improve transparency around content moderation practices. The company revealed it now employs artificial intelligence tools to identify and remove content that violates its Terms of Service.
Among the removed content, Telegram blocked 705,688 groups and channels linked to CSAM in 2024. The platform has used hash databases since 2018 to detect such content and has strengthened its partnerships with organizations like the Internet Watch Foundation to improve detection capabilities.
The company also removed 129,986 terrorist-related communities in 2024. Through its partnership with ETIDAL, the Global Center for Combating Extremist Ideology, Telegram reports removing over 100 million pieces of terrorist content since 2022.
This intensified moderation effort comes amid increased scrutiny, particularly in Europe. In August 2024, Telegram's founder Pavel Durov was arrested in France and charged with failing to curb illegal and extremist content on the platform. He was released on €5 million bail but must report to police twice weekly while the case continues.
Durov defended Telegram's moderation measures while acknowledging the challenges of monitoring a platform with over 900 million active users. He highlighted the company's EU compliance officer and questioned the approach of holding tech founders responsible for platform misuse.
Tech experts suggest these enhanced moderation efforts reflect growing regulatory pressure on digital platforms. While crediting Telegram's adoption of AI tools, analysts note that the sheer volume of harmful content removed points to ongoing challenges in content moderation.
The platform's latest actions demonstrate the growing push for tech companies to balance user privacy with effective content moderation while meeting global regulatory requirements.