Photos of beheadings, extremist propaganda and violent hate speech related to Islamic State and the Taliban were shared for months within Facebook groups over the past year despite the social networking giant’s claims it had increased efforts to remove such content.
The posts — some tagged as “insightful” and “engaging” via new Facebook tools to promote community interactions — championed the Islamic extremists’ violence in Iraq and Afghanistan, including videos of suicide bombings and calls to attack rivals across the region and in the West, according to a review of social media activity between April and December. At least one of the groups contained more than 100,000 members.
In several Facebook groups, competing Sunni and Shia militias trolled each other by posting pornographic images and other obscene photos into rival groups in the hope Facebook would remove those communities.
In others, Islamic State supporters openly shared links to websites with reams of online terrorist propaganda, while pro-Taliban Facebook users posted regular updates about how the group took over Afghanistan during much of 2021, according to POLITICO’s analysis.
During that period, Facebook said it had invested heavily in artificial intelligence tools to automatically remove extremist content and hate speech in more than 50 languages. Since early 2021, the company told POLITICO, it had added more Pashto and Dari speakers — the main languages spoken in Afghanistan — but it declined to say how large those staffing increases were.
Yet the scores of Islamic State and Taliban posts still on the platform show those efforts have failed to stop extremists from exploiting it. Internal documents, made public three months ago by Frances Haugen, a Facebook whistleblower, showed the company’s researchers had warned that Facebook routinely failed to protect its users in some of the world’s most unstable countries, including Syria, Afghanistan and Iraq.
“It’s just too easy for me to find this stuff online,” said Moustafa Ayad, executive director for Africa, the Middle East and Asia at the Institute for Strategic Dialogue, a think tank that tracks online extremism, who discovered the Facebook extremist groups and shared his findings with POLITICO. “What happens in real life happens in the Facebook world.”
Many countries across the Middle East and Central Asia are torn by sectarian violence, and Islamic extremists have turned to Facebook as a weapon to promote their hate-filled agenda and rally supporters to their cause. Hundreds of these groups, varying in size from a few hundred members to tens of thousands of users, have sprouted up across the platform — in Arabic, Pashto and Dari — over the last 18 months.
When POLITICO flagged the open Facebook groups promoting Islamic extremist content to Meta, the parent company of Facebook, it removed them, including a pro-Taliban group that was created in the spring and had grown to 107,000 members.
Yet within hours of its removal, a separate group supportive of the Islamic State had reappeared on Facebook, and again began to publish posts and images in favor of the banned extremist organization in direct breach of Facebook’s terms of service. Those groups were eventually removed after also being flagged.
“We recognize that our enforcement isn’t always perfect, which is why we’re reviewing a range of options to address these challenges,” Ben Walters, a Meta spokesperson, said in a statement.
A problem not solved
Much of the Islamic extremist content targeting these war-torn countries was written in local languages — an issue that researchers also flagged in internal documents made public by Haugen, who submitted them as disclosures made to the Securities and Exchange Commission and provided to the U.S. Congress. POLITICO and a consortium of news outlets reviewed the documents.
In late 2020, for instance, Facebook engineers discovered that just 6 percent of Arabic-language hate speech was flagged on Instagram, the photo-sharing service owned by Meta, before it was published online. That compared to a 40 percent takedown rate for similar material on Facebook.
In Afghanistan, where roughly five million people log onto the platform each month, the company had few local-language speakers to police content, according to a separate internal document published on December 17, 2020. Because of this lack of local personnel, less than 1 percent of hate speech was removed.
“There is a huge gap in the hate speech reporting process in local languages in terms of both accuracy and completeness of the translation of the entire reporting process,” the Facebook researchers concluded.
Yet a year after those findings, pro-Taliban content is routinely slipping through the net.
In the now-deleted open Facebook group of roughly 107,000 members reviewed by POLITICO, scores of graphic videos and photos, captioned in local languages, had been uploaded throughout much of 2021 in support of the Taliban, which remains officially banned from the platform because of its international designation as a terrorist group.
That included footage of Taliban fighters attacking forces loyal to the now-ousted Afghan government, while other pro-Taliban users praised such violence in comments that escaped moderation.
“There’s clearly a problem here,” said Adam Hadley, director of Tech Against Terrorism, a nonprofit organization that works with smaller social networks, but not Facebook, in combating the rise of extremist content online.
He added he was not surprised that the social network was struggling to detect the extremist content because its automated content filters were not sophisticated enough to flag hate speech in Arabic, Pashto or Dari.
“When it comes to non-English language content, there’s a failure to focus enough machine language algorithm resources to combat this,” he added.
Battle between cyber armies
A significant portion of the recent Facebook group activity centered on digital fights between rival Sunni and Shia militias in Iraq — a country where widespread sectarian violence has migrated onto the world’s largest social network.
That comes after separate internal Facebook documents from late 2020 raised concerns that so-called “cyber armies” of rival Sunni and Shia groups were using the platform in Iraq to attack each other online.
In several Facebook groups over at least the last 90 days, these battles played out in near real time, as Iran- and Islamic State-backed extremists peppered each other’s online communities with sexual images and other graphic content, according to Ayad, the Institute for Strategic Dialogue researcher.
In one, which included militants from both sides of the fight, Shia Iraqi militants goaded Islamic State rivals with photos of scantily-clad women and sectarian slurs, while in the same Facebook group, Islamic State supporters similarly posted derogatory memes attacking local rivals.
“It’s essentially trolling,” said Ayad. “It annoys the group members and similarly gets someone in moderation to take note, but the groups often don’t get taken down. That’s what happens when there’s a lack of content moderation.”