Disinfo and Hate Speech Flood TikTok Ahead of Kenya’s Elections
Mozilla researchers identified accounts with millions of views spreading hate speech and disinformation
Last August the TikTok account @aironixon shared a video intercutting scenes from Netflix’s docuseries How to Become a Tyrant with videos and screenshots of Kenyan deputy president and presidential candidate William Ruto. An added caption read, “Is Ruto a tyrant?”
The video is one of 130 identified by Mozilla Foundation fellow Odanga Madung, who has detailed his findings in a new report. Altogether, Madung found hate speech and disinformation in videos that accrued over 4 million views after being shared by 33 TikTok accounts. All of them appear to violate the company’s terms of service, and target Kenya’s upcoming August 9 elections.
“What we noticed is that there are very many cases where a certain page, for example, might have 5,000 followers, but the kind of the content that it posts ends up getting [more than] 500,000 views, because it has been supercharged by the platform itself,” says Madung.
Though Facebook, WhatsApp, and Instagram remain the most popular platforms in Kenya, TikTok’s popularity has risen steadily over the past two years, and it is among the most-downloaded social apps in the country.
For Kenyans heading into an election season, disinformation is hardly new, especially as millions of its citizens have come online over the past decade. Madung worries that the country’s history of electoral violence makes it particularly combustible. In 2007, a contested presidential election between Raila Odinga and Mwai Kibaki led to widespread violence that left more than 1,000 people dead and 600,000 displaced. The country’s current president, Uhuru Kenyatta, was accused of helping to incite the violence by charging armed groups of Kikuyus, the ethnic group to which he and Kibaki both belong, to target Luos, Odinga’s ethnic group.
Kenyatta ascended to the presidency in 2013. His reelection in 2017 also sparked protests that were met with swift police crackdowns. Human Rights Watch documented at least 42 people killed by police, though it estimated the real number to be higher.
Earlier this year, the country’s National Cohesion and Integration Commission, which was formed after the 2007 violence to ensure peaceful elections, warned of the “misuse of social media platforms to perpetuate ethnic hate speech and incitement to violence,” saying that in 2022 hate speech on social platforms had increased 20 percent. The commission also cited increasing communal conflict and personal attacks on politicians, in addition to other conditions that could make this year’s elections particularly tumultuous.
A TikTok video of a speech given by Ruto included the caption, “Ruto hates Kikuyus and wants to take revenge come 2022.” It has garnered more than 400,000 views.
“There’s a very clear attempt to try and use the ghosts of 2007 to move voters one way or the other, to take advantage of or glorify past violence,” says Madung. “That’s something that is completely not taken into account in TikTok’s own hate speech content guidelines.”
Unlike Facebook or Twitter, TikTok serves users content based not on who they follow but what the platform believes their interests are. This can make it hard for researchers like Madung to map out how content spreads and to whom. “There’s no tool like Crowdtangle for TikTok,” he says. “Doing research on TikTok is tiresome, sometimes gruesome, because I had to watch all the videos all the way through to perform content analysis.”
By its very nature, TikTok is harder to moderate than many other social media platforms, according to Cameron Hickey, project director at the Algorithmic Transparency Institute. The brevity of the videos, and the fact that many combine audio, visual, and text elements, makes human discernment even more necessary when deciding whether something violates platform rules. Even advanced artificial intelligence tools, like speech-to-text systems used to quickly identify problematic words, become less effective “when the audio that you’re dealing with also has music behind it,” says Hickey. “The default mode for people creating content on TikTok is to also embed music.”
That becomes even more difficult in languages other than English.
“What we know generally is that platforms do best at the work of addressing problematic content in the places where they are based or within the languages in which the people who created them speak,” says Hickey. “And there are more people making bad stuff than there are people at these companies trying to get rid of the bad stuff.”
Much of the disinformation Madung found was “synthetic content”: videos created to look like clips from an old news broadcast, or posts using screenshots that appear to come from legitimate news outlets.
“Since 2017, we’ve noticed that there was a burgeoning trend at the time to appropriate the identities of mainstream media brands,” says Madung. “We are seeing rampant usage of this tactic on the platform, and it seems to do exceptionally well.”
Madung also spoke with former TikTok content moderator Gadear Ayed to get a better understanding of the company’s moderation efforts more broadly. Although Ayed did not moderate TikToks from Kenya, she told Madung that she was often asked to moderate content in languages or contexts she was not familiar with, and would not have had the context to tell whether a piece of media had been manipulated.
“It’s common to find moderators being asked to moderate videos that were in languages and contexts that were different from what they understood,” Ayed told Madung. “For example, I at one time had to moderate videos that were in Hebrew despite me not knowing the language or the context. All I could rely on was the visual image of what I could see but anything written I couldn’t moderate.”
A TikTok spokesperson told WIRED that the company prohibits election misinformation and the promotion of violence and is “committed to protecting the integrity of [its] platform and have a dedicated team working to safeguard TikTok during the Kenyan elections.” The spokesperson also said that it works with fact-checking organizations, including Agence France-Presse in Kenya, and plans to roll out features to connect its “community with authoritative information about the Kenyan elections in our app.”
But even if TikTok removes the offending content, Hickey says that may not be enough. “One person can remix, duet, reshare someone else’s content,” says Hickey. That means that even if the original video is removed, other versions can live on, undetected. TikTok videos can also be downloaded and shared on other platforms, like Facebook and Twitter, which is how Madung first encountered some of them.
Several of the videos flagged in the Mozilla Foundation report have since been removed, but TikTok did not respond to questions about whether it has removed other videos or whether the videos themselves were part of a coordinated effort.
But Madung suspects that they might be. “Some of the most egregious hashtags were things I would find researching coordinated campaigns on Twitter, and then I would think, what if I searched for this on TikTok?”