The Communications Authority of Kenya (CA) has launched an investigation into TikTok following allegations that the platform has been profiting from sexual content on livestreams involving teenagers.
The regulator has also ordered TikTok to remove all sexually explicit content involving minors from its platform.
The probe comes after a BBC investigation published on March 3, 2025, revealed that TikTok was allegedly earning money from livestreams featuring teenagers as young as 15 in Kenya.
The report included testimonies from three Kenyan women who admitted they started engaging in explicit activities as teenagers, using TikTok to promote their content and negotiate payments for more explicit material shared on other messaging platforms.

According to the BBC, some livestreams used coded sexual language to promote services, with viewers sending emoji gifts as payment.
Since TikTok prohibits outright nudity, these livestreams acted as advertisements for more explicit content that was later exchanged elsewhere.
The platform reportedly takes a cut of about 70% from all livestream earnings, despite its official ban on solicitation.
Kenyan livestreams are particularly popular on TikTok, with investigators finding up to a dozen suggestive performances every night, watched by global audiences.
These findings have raised concerns about TikTok's ability to enforce its policies and protect vulnerable users.
The CA stated that these allegations highlight significant gaps in TikTok’s content moderation, particularly in preventing the exploitation of minors.
As a result, the regulator has directed TikTok to explain how inappropriate content continues to bypass its moderation systems and to submit a plan detailing how it intends to improve child protection measures.
TikTok employs a combination of human moderators and AI to identify and remove content that violates its policies, including its rules against sexual solicitation and content involving minors.
Despite these safeguards, content moderators have expressed concerns about their effectiveness.
They noted that AI moderation often struggles to detect local slang and subtle cues of solicitation.
Human moderators have also flagged a high number of livestreams from Kenya related to sexual content, raising further doubts about TikTok’s enforcement capabilities.