According to a report, Facebook has hosted photos of ISIS and Taliban beheadings and violent hate speech posted by supporters of both groups.

According to a review of activities between April and December, extremists used social media as a tool to ‘promote their hate-filled agenda and rally support’ for hundreds of groups. 

These groups have sprouted up across the platform over the last 18 months and vary in size from a few hundred to tens of thousands of members, the review found. 

A pro-Taliban group created in the spring had grown to 107,000 members by the time it was taken down, according to a review by Politico. 

Overall, extremist content is ‘routinely getting through the net’, despite claims from Meta – the company that owns Facebook – that it’s cracking down on extremists.

There were reportedly 'scores of groups' allowed to operate on Facebook that were supportive of either Islamic State or the Taliban, according to a new report


We do not allow individuals or organisations involved in organised crime, including those designated by the US government as specially designated narcotics trafficking kingpins (SDNTKs); hate; or terrorism, including entities designated by the US government as foreign terrorist organisations (FTOs) or specially designated global terrorists (SDGTs), to have a presence on the platform. We also prohibit others from representing these individuals or organisations. 

“We don’t allow prominent leaders of these organisations to have a presence on the platform. We do not permit symbols or content that represents them to be displayed on the platform. Additionally, we do not allow any coordination of substantive support for these individuals or organisations.”

Taken from Meta’s transparency centre  

The groups were discovered by Moustafa Ayad, an executive director at the Institute for Strategic Dialogue, a think tank that tracks online extremism. 

MailOnline has contacted Meta – the company led by CEO Mark Zuckerberg which owns several social media platforms including Facebook – for comment. 

Ayad shared his findings with Politico, saying it was far too easy to find this material online. ‘What happens in real life happens on Facebook.’

‘It’s essentially trolling – it annoys the group members and sometimes prompts someone in moderation to take note, but the groups often don’t get taken down. 

“That’s what happens if there isn’t enough content moderation.” 

Some offensive posts were marked ‘insightful’ and ‘engaging’ by new Facebook tools released in November that were intended to promote community interactions. 

Politico discovered that the posts promoted violence by Islamic extremists from Iraq and Afghanistan. They included videos of suicide bombings and calls for attacks on rivals throughout the region and the West. 

In several groups, competing Sunni and Shia militia reportedly trolled each other by posting pornographic images, while in others, Islamic State supporters shared links to terrorist propaganda websites and ‘derogatory memes’ attacking rivals.     

Meta (or Facebook as it was known until the end of October) is led by Mark Zuckerberg (pictured)


In October, Facebook (the company, not the product) changed its name to ‘Meta’. 

The name is part of Zuckerberg’s new ambition to transform the social media platform into a ‘metaverse’ – a collective virtual shared space featuring avatars of real people.

However, the move was widely seen as an attempt by Zuckerberg to distance his company from mounting scandals, following leaked documents from a whistleblower claiming that its platforms had harmed users.   

Former employee turned whistleblower Frances Haugen leaked confidential company documents and made the shocking claim that the firm ‘puts profits above people’, knowingly harming teenagers with its content and stoking anger.  

Meta deleted Facebook pages that promoted Islamic extremist content after they were flagged to Politico. 

Politico says scores of pieces of Taliban and ISIS content remain on Facebook, indicating that the company is failing to stop extremists exploiting the platform. 

In response, Meta said it had invested heavily in artificial intelligence (AI) tools to automatically remove extremist content and hate speech in more than 50 languages. 

“We recognize that enforcement can be imperfect, so we’re looking at a variety of options to resolve these problems,” Meta spokesperson Ben Walters said in a statement. 

The problem is that much of the Islamic extremist content is written in local languages, which is harder for Meta’s predominantly English-speaking staff and English-trained detection algorithms to detect. 

Politico says that in Afghanistan, where the platform has approximately five million monthly users, the company lacked local-language speakers to enforce its content rules.

“Because there was not enough local staff, less than 1% of hate speech was eliminated.”  

Adam Hadley, director of Tech Against Terrorism, said he was not surprised that Facebook fails to spot extremist content, because its automated filters are not sophisticated enough to flag hate speech in Pashto, Arabic or Dari. 

‘When it comes to non-English language content, there’s a failure to focus enough machine language algorithm resources to combat this,’ Hadley told Politico. 

Meta previously stated that it has ‘identified many groups as terrorist organizations based on the way they behave, not their ideologies’, and that it does not allow them to have a presence on its services.

In July, Facebook began sending some users messages asking whether their friends had ‘become extremists’. 

Screenshots shared on Twitter showed one notice asking: “Are you worried that someone you know might become an extremist?” 

In July, Facebook users started receiving creepy notifications asking them if their friends are 'becoming extremists'

Another notice alerted users: “You might have been exposed to dangerous extremist content recently.” Both included links to get support.

Meta said at the time that the small trial, run in the US, was a pilot for a worldwide approach to preventing radicalisation. 

Legislators and civil rights organizations have long pressed the world’s biggest social media network to fight extremism.

That pressure may have intensified in 2021 following the January 6 Capitol riot, when Trump supporters tried to stop the US Congress from certifying Joe Biden’s November election win.   


April 2021: Hackers access the personal data and phone numbers of 533 million Facebook users.

July 2019: Facebook is fined $5 billion for inappropriately sharing the personal information of its users.

March 2019: Facebook CEO Mark Zuckerberg promises to rebuild the company around six privacy-focused principles:

  • Private interactions
  • Encryption
  • Reduced permanence
  • Safety
  • Interoperability
  • Secure data storage

Zuckerberg pledged to bring end-to-end encryption to all of the company’s messaging services, allowing users to communicate across WhatsApp, Instagram Direct and Facebook Messenger.

December 2018: Facebook comes under fire after a bombshell report reveals the firm allowed more than 150 companies, including Netflix, Spotify and Bing, to access unprecedented amounts of user data, such as private messages. 

Some of these “partners” were able to access, edit, delete and read private Facebook messages as well as see every participant in a thread. 

This also enabled Microsoft’s search engine Bing to view the names of Facebook users’ friends without their consent.

Amazon was permitted to access users’ names and contact details through their friends, while Yahoo could view streams of friends’ posts.

September 2018: Facebook discloses it has been hit by its worst-ever data breach, affecting 50 million users – including the accounts of Zuckerberg and COO Sheryl Sandberg.

Attackers exploited the site’s ‘View As’ feature, which lets people see what their profiles look like to other users.  

Facebook (file image) made headlines in March 2018  after the data of 87 million users was improperly accessed by Cambridge Analytica, a political consultancy

Unknown attackers exploited a code feature called ‘access tokens’ to gain control of accounts. This could have allowed them to view private messages and photos, but Facebook denied that this had happened. 

They also sought to extract people’s personal information such as their name, address, and city from Facebook’s system.

Zuckerberg gave assurances that users’ credit card numbers and passwords were not compromised.

In response to the breach, the company logged approximately 90 million people out of their accounts as a precaution.

March 2018: Facebook makes headlines after Cambridge Analytica, a political consultancy, improperly obtains the personal data of 87 million Facebook users.

The revelation sparked government inquiries into the company’s privacy practices around the world, as well as a #deleteFacebook movement among consumers.

Cambridge Analytica was a communications company with offices in London, New York and Washington.

Using data-driven campaigns run by a team of behavioural psychologists and data scientists, it claimed to be able to ‘find your voter and get them to act’.

Cambridge Analytica stated on its website: ‘Within the United States only, we have played an important role in winning presidential and congressional races, as well as state and federal elections.’ It held data covering more than 220 million American voters.

The company took advantage of a Facebook feature that allowed apps to ask permission to access not only your personal data, but also the data of all your Facebook friends.

The data firm suspended its chief executive, Alexander Nix (pictured), after recordings emerged of him making a series of controversial claims, including boasts that Cambridge Analytica had a pivotal role in the election of Donald Trump

This allowed the firm to mine the information of 87 million Facebook users, even though only 270,000 of them had given permission.

The data was used to help create software that could predict and influence voters’ choices at the ballot box.

According to some reports, this information was also used to aid the UK’s Brexit campaign.