Why dangerous content thrives on Kenyan social media

NAIROBI — The shooter approaches from behind, raising a gun to his victim’s head. He pulls the trigger and, with a “pop,” a lifeless body slumps forward. The video cuts to another execution, and another.

The video was posted on Facebook, in a large group of al-Shabab and Islamic State supporters, where different versions were viewed thousands of times before being deleted.

As Facebook and its competitor TikTok grow at breakneck speed in Kenya and across Africa, researchers say the tech companies are failing to keep pace with a proliferation of terrorist content, hate speech and misinformation, and are taking advantage of weak regulatory frameworks to avoid stricter oversight.

“It’s a deliberate choice to maximize labor and profit extraction because they see southern societies as markets, not societies,” said Nanjala Nyabola, a Kenyan technology and social sciences researcher.

About 1 in 5 Kenyans use Facebook, whose parent company rebranded itself as Meta last year, and TikTok has become one of the country’s most downloaded apps. The prevalence of violent and inflammatory content on the platforms poses real risks in the East African country as it prepares for a hard-fought presidential election next month and confronts the threat of a resurgent al-Shabab.

“Our approach to content moderation in Africa is no different than anywhere else in the world,” wrote Kojo Boakye, Meta’s director of public policy for Africa, the Middle East and Turkey, in an email to The Washington Post. “We prioritize safety on our platforms and have taken aggressive action to combat misinformation and harmful content.”

Fortune Mgwili-Sibanda, head of government relations and public policy for TikTok in sub-Saharan Africa, also responded to The Post by email: “We have thousands of people working on safety around the world, and we continue to grow this function in our African markets in line with the continued growth of our TikTok community on the continent.”

The companies’ content moderation strategy is two-pronged: artificial intelligence (AI) algorithms provide a first line of defense. But Meta has acknowledged that it is difficult to teach AI to recognize hate speech across multiple languages and contexts, and reports show that posts in languages other than English often fall through the cracks.

In June, researchers from the London-based Institute for Strategic Dialogue (ISD) published a report describing how al-Shabab and the Islamic State use Facebook to spread extremist content, such as the execution video.

ISD’s two-year investigation found at least 30 public al-Shabab and Islamic State propaganda pages with nearly 40,000 followers combined. The groups posted videos depicting gruesome killings, suicide bombings, attacks on Kenyan military forces and training exercises of Islamist militants. Some content had been live on the platform for more than six years.

Overreliance on AI was a central problem, said report co-author Moustafa Ayad, as bad actors learned to outsmart the system.

If terrorists know the AI is looking for the word “jihad,” Ayad explained, they can “split JIHAD with dots between the letters, so now it’s not read correctly by [the] AI system.”
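A minimal sketch of why this evasion works, assuming a simple keyword-based filter (the blocklist term and function names here are hypothetical, not any platform’s actual code): a word-level match misses dotted spellings, while normalizing the text first catches them.

```python
import re

BLOCKLIST = {"jihad"}  # hypothetical flagged keyword

def naive_flag(text: str) -> bool:
    # Matches whole alphabetic words only, so "j.i.h.a.d" slips through.
    words = re.findall(r"[a-z]+", text.lower())
    return any(w in BLOCKLIST for w in words)

def normalized_flag(text: str) -> bool:
    # Strip all non-letters before matching, defeating dot or
    # space insertion (at the cost of more false positives).
    collapsed = re.sub(r"[^a-z]", "", text.lower())
    return any(term in collapsed for term in BLOCKLIST)

print(naive_flag("j.i.h.a.d"))       # False - evades the filter
print(normalized_flag("j.i.h.a.d"))  # True  - caught after normalization
```

Collapsing punctuation this way also inflates false positives, since a flagged term can appear by accident inside unrelated text, which is one reason automated filters are paired with human review.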

Ayad said most of the accounts flagged in the report have since been taken down, but similar content has surfaced again, such as a video posted in July featuring Fuad Mohamed Khalaf, an al-Shabab leader wanted by the U.S. government. It garnered 141,000 views and 1,800 shares before being deleted 10 days later.

Terrorist groups can also circumvent human moderation, the companies’ second line of defense, by exploiting language and cultural gaps, according to the report. Kenya’s national languages are English and Swahili, but Kenyans speak dozens of other languages and dialects, as well as Sheng, a local slang.

Meta said it has a multidisciplinary team of 350 people, including native speakers of Arabic, Somali and Swahili, who monitor and deal with terrorist content. Between January and March, the company said, it removed 15 million pieces of content that violated its anti-terrorism policies, but it did not say how much terrorist content it believes remains on the platform.

In January 2019, al-Shabab attacked the DusitD2 complex in Nairobi, killing 21 people. A government inquiry later revealed that the attackers planned the assault using a Facebook account that went undetected for six months, according to local media.

During Kenya’s last elections, in 2017, journalists documented how Facebook struggled to curb the spread of ethnic hate speech, a problem researchers say the company is still failing to address. Adding to their concerns is the growing popularity of TikTok, which is also being used to stoke tensions ahead of the August 9 presidential vote.

In June, the Mozilla Foundation published a report describing how election-related misinformation took root on TikTok. The report examined more than 130 videos from 33 accounts that had been viewed more than 4 million times, finding ethnic hate speech, as well as manipulated and fake content that violated TikTok’s own policies.

One music video mimicked a detergent commercial, with a narrator telling viewers that the “detergent” could eliminate “madoadoa,” naming the Kikuyu, Luhya, Luo and Kamba communities. Taken literally, “madoadoa” is an innocuous word meaning blemish or stain, but it can also be a coded ethnic slur and a call to violence. The video contained graphic images of post-election clashes from previous years.

After the report, TikTok deleted the video and flagged the term “madoadoa,” but the episode showed how nuances of language can elude human moderators. A TikTok whistleblower told report author Odanga Madung that she had been asked to watch videos in languages she did not speak and determine, from the images alone, whether they violated the company’s guidelines.

TikTok did not directly respond to this allegation when asked by The Washington Post, but the company recently posted a statement on its efforts to address problematic election-related content.

TikTok said it moderates content in more than 60 languages, including Swahili, but declined to give further details about its moderators in Kenya or the languages they cover. It has also launched a Kenya-specific operations center with experts who detect and remove posts that violate its policies. And on July 14, it rolled out an in-app user guide with information on the elections and media literacy.

“[We] have a dedicated team working to protect TikTok during the elections in Kenya,” Mgwili-Sibanda wrote. “We prohibit and remove election misinformation, promotion of violence and other violations of our policies.”

But researchers still fear that violent rhetoric online could lead to actual violence.

“You will see these lies turn into very tragic consequences for people attending rallies,” said Irungu Houghton, director of Amnesty International Kenya.

Researchers say TikTok and Meta can get away with lower content moderation standards in Kenya in part because Kenyan law does not directly hold social media companies liable for harmful content on their platforms. In contrast, Germany’s Network Enforcement Act, often called the “Facebook Act,” imposes fines of up to 50 million euros on companies that fail to remove “manifestly illegal” content within 24 hours of a user complaint.

“It’s a pretty gray area,” said Mugambi Laibuta, a Kenyan lawyer. “[W]hen you talk about hate speech, there is no law in Kenya that says these sites have to apply content moderation.”

If Meta and TikTok don’t police themselves, experts warn, African governments will do it for them, perhaps in undemocratic and dangerous ways.

“If the platforms don’t get their act together, they become a convenient excuse for authoritarians to crack down on them across the continent … a convenient excuse for them to disappear,” Madung said. “And we all need these platforms to survive. We need them to thrive.”