NAIROBI, Kenya — Social media giant TikTok removed over 450,000 short videos published in Kenya between January and March 2024 for violating its community guidelines.
This localized action is part of a massive global moderation effort that saw the platform take down a total of more than 211 million videos during the same period.
According to TikTok’s Community Guidelines Enforcement report for the first quarter of the year, over 187 million of the videos were removed by automation. The platform also took down over 6.4 million accounts and 1.1 billion comments that were found to have violated its policies.
In the same period, 19.1 million live sessions were suspended, while 7.5 million videos were restored after further review. The report also highlights the platform’s efforts in preventing fake engagement, with billions of fake likes and follow requests stopped before they could reach users.
TikTok says its moderation technology is becoming increasingly efficient and proactive. "The vast majority of violations (94%) were removed within 24 hours. This was also a quarter where automated moderation technologies removed more violative content than ever: over 87% of all video removals. In addition, TikTok's moderation technologies helped identify violative livestreamed content faster and more consistently," the company stated.
The videos were removed for violating policies across a range of categories, including Integrity and Authenticity, Safety and Civility, Privacy and Security, Mental and Behavioural Health, Regulated Goods and Commercial Activities, and Sensitive and Mature Themes.
The report further notes that over 99% of violating content was removed before it was reported by a user, and over 90% was taken down before gaining any views.
To further enhance its moderation efforts, TikTok has begun testing large language models (LLMs) to support proactive moderation at scale, starting with a pilot that uses LLMs to help enforce its rules for comments.
“LLMs can comprehend human language and perform highly specific, complex tasks. This can make it possible to moderate content with high degrees of precision, consistency, and speed,” said TikTok.
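TikTok has not published technical details of the pilot. Purely as an illustration of the general approach described above, a comment check built on a general-purpose LLM could look something like the sketch below; it uses OpenAI's public chat API as a stand-in, and the policy summary, labels, and function names are invented for the example rather than drawn from TikTok's system.

```python
# Illustrative sketch only; not TikTok's implementation.
# Uses OpenAI's public chat API as a stand-in for a moderation LLM.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A made-up, highly simplified policy summary for the example.
POLICY_SUMMARY = (
    "Remove comments containing harassment, hate speech, threats, "
    "or the sale of regulated goods. Allow everything else."
)

def classify_comment(comment: str) -> str:
    """Ask the model to label a comment ALLOW or REMOVE under the policy."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do here
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a content moderation assistant. "
                    f"Policy: {POLICY_SUMMARY} "
                    "Reply with exactly one word: ALLOW or REMOVE."
                ),
            },
            {"role": "user", "content": comment},
        ],
        temperature=0,  # deterministic output for consistent decisions
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(classify_comment("Great video, thanks for sharing!"))
```

In practice, a deployment at TikTok's scale would likely sit behind additional safeguards, such as human review of borderline cases, rather than acting on a single model response.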
The platform argues that these automation efforts, including the use of AI models, are also aimed at supporting the well-being of content moderators by reducing their exposure to distressing content and requiring them to review less content overall.