Facebook whistleblower Frances Haugen today told MPs ‘anger and hate’ is the ‘best way to grow’ on Facebook and said she had seen data suggesting it removes only ‘three to five per cent’ of hate speech. 

Ms Haugen stated that Facebook’s algorithms ‘prioritise extreme content’ and that the company was ‘very skilled at dancing with data’ to show it was successfully clamping down on harmful content. 

The former Facebook data scientist said Mark Zuckerberg’s social-media colossus was making online hate worse, and that the world was in the ‘opening stages’ of a ‘horrific novel’ that will see more ethnic cleansing and political violence if regulators do not act. 

Ms Haugen is appearing before a parliamentary committee scrutinising the government’s Online Safety Bill, which would place a duty of care on social media companies to protect users – with the threat of substantial fines of up to 10% of their global revenue if they fail to do so.  

She opened the session by saying: ‘I am extremely, extremely concerned about the state of our societies.’ She added that engagement-based ranking is deeply worrying because it prioritises extreme content. 

Ms Haugen said she was also concerned about Facebook’s underinvestment in non-English languages. She said: ‘I am deeply concerned about their underinvestment in non-English languages. They mislead people about how they are supporting them.’

‘UK English is sufficiently different from American English that I wouldn’t be surprised if safety systems they created primarily for American English were not enforced in the UK.’ Facebook should disclose these dialect differences, she added.

Responding to Ms Haugen’s testimony this afternoon, Home Secretary Priti Patel said ‘tech companies have a moral duty to keep their users safe’. Following a meeting with the whistleblower, Ms Patel described it as a constructive discussion on online safety. 

Ms Haugen levelled a torrent of accusations that will further damage Facebook’s already battered reputation, including claims that: 

  • Facebook’s algorithm prioritises hate speech by showing people content based on how much engagement it has received.
  • ‘Anger and hate’ is the ‘best way to grow’ on Facebook, with bad actors gaming the algorithm by making their content more hateful.
  • Without action from regulators, the world is in the ‘opening stages’ of a ‘horrific novel’, as extremism spreads via social media.
  • Facebook is reluctant to sacrifice ‘even slithers of profit’ to prioritise online safety and is ‘unquestionably’ making online hate worse.

Frances Haugen, the Facebook whistleblower, is appearing before a parliamentary panel examining the government’s draft legislation to crack down on harmful online content.

The ex-staffer from Facebook’s integrity division secretly copied internal documents which she says reveal the dangers she believes the company poses, including providing a platform to child groomers, inciting political violence and fuelling misinformation.   

Ms Haugen spoke to MPs today and compared Facebook’s failures to an oil spillage.

She said she came forward because now is the most crucial time to act, adding that an oil spill does not make it any harder for society to regulate oil companies.

“But right now, the failures in Facebook are making it more difficult for us to regulate Facebook.”

The whistleblower stated that she had ‘no doubt’ that events like the storming of the US Capitol would take place in the future, due to Facebook’s ranking system prioritising offensive content. 

She said the problem could get worse because Facebook prioritises the creation of large groups so that people spend more time on the platform.  

She said Facebook has been trying to increase the time people spend on the site, and that the only way it can do this is by multiplying the content on the platform through reshares and groups. 

Ministers fear plans for social media regulation could be leaked to Facebook by civil servants who ‘want to get a job at a tech giant’ 

By David Wilcock, Whitehall correspondent for MailOnline 

Ministers are worried that plans for tighter regulation of social networking sites could be leaked to Facebook by former mandarins now working for the company.

The alarm was raised after a senior Facebook executive brought up an online harms issue that was known to only a few people at the Department for Digital, Culture, Media and Sport.

Mark Zuckerberg’s social media empire is under increasing pressure over misinformation and harmful material shared by its users, and ministers are working on tighter regulation.

The Advisory Committee on Business Appointments is supposed to scrutinise private-sector jobs taken by former senior civil servants, but its powers are limited and less senior appointments are not vetted.

One source attacked the department’s mandarins, telling The Times: ‘The problem with DCMS officials is that they think it’s their job to work there for four years and then get a job at Facebook.

‘They don’t get scrutinised by Acoba except at senior levels.’

DCMS civil servants are among the highest-paid mandarins: median wages there were just below £50,000 last year, compared with a cross-Whitehall median of under £30,000, according to the Institute for Government.

Average pay for the Civil Service is around £30,000, but at Facebook’s UK arm in 2019 it was more than £117,000.

Several DCMS staff have recently joined Facebook after working elsewhere first, and there is no evidence that they sought information from former Civil Service colleagues.

Nicola Aitken, who previously ‘led UK Government efforts against disinformation’, is now working as a misinformation manager at Facebook. She also spent a year at Full Fact, an independent organisation that highlights misinformation online.

Farzana Dudhwala has been Facebook’s privacy manager since January. She spent a year at DCMS’s Government Office for Artificial Intelligence in 2018-19, followed by two years at the Centre for Data Ethics and Innovation.  

A single group may produce hundreds of pieces of content every day, but only three are delivered, and only the most widely shared content is distributed. 

Ms Haugen claimed that Facebook groups are acting more like echo chambers and pushing people towards extreme beliefs.  

She said, “You see a normalization of hate and dehumanizing others, and that’s the cause of violent incidents.” 

The whistleblower suggested that regulation could benefit Facebook in the long term by making the platform a more pleasant place to be.   

She said Google and Twitter were ‘far more transparent’ than Facebook. In a call to Mr Zuckerberg, she asked for 10,000 additional engineers to work on safety rather than the 10,000 hired to build the new ‘metaverse’. 

Ms Haugen said anger and hate are the best way to grow on Facebook, and that bad actors are playing the algorithm by making their content more hateful. 

‘The current system is biased towards bad actors and those who push Facebook towards the extremes.’ 

The whistleblower asked ministers to consider the harm Facebook causes to society in general, and not just to individuals, when considering new regulation. 

‘Situations like [the ethnic violence in] Ethiopia are just the beginning chapters of a horror novel,’ she said. 

‘Facebook is closing the door on us being able to act,’ Ms Haugen said, adding that the world has a small window of time to regain control over AI and must take advantage of it.

Ms Haugen called on MPs to regulate Facebook’s paid-for ads, as hateful advertisements were drawing more users. 

She said it is cheaper to run an angry, hateful and divisive ad on Facebook, so ‘we are subsidising hate’. 

The whistleblower claimed Facebook was reluctant to sacrifice ‘even slithers’ of profit to prioritise online safety. 

Ms Haugen stated that Facebook’s systems for reporting employee concerns were a “huge weakness” at the company.

She said that when she worked on counter-espionage she saw situations that raised national security concerns, but did not know how best to escalate them because her chain of command was not reliable at that point.

‘We were told that we should accept less than optimal resourcing.

‘I flagged repeatedly when working on civic integrity that I felt critical teams were understaffed.

‘Right now there are no incentives internally; if you make noise saying we need more help, people will not get rallied around to help, because everyone is underwater.’ 

Ms Haugen first made her shocking revelations to the US Senate earlier this year. There, she argued that a federal regulator is necessary to oversee digital giants like Facebook. 

The Online Safety Bill draft proposes something comparable by creating a regulator to monitor Big Tech’s progress in removing harmful or illegal material from their platforms.

Ministers also want social media companies to stop anonymous trolls subjecting individuals to online abuse. 

Damian Collins, chair of the Joint Committee on the Draft Online Safety Bill, called Ms Haugen’s appearance ‘quite a big moment’. 

He said, “This is a moment, kind of like Cambridge Analytica but maybe bigger in that it provides a real insight into the soul of these businesses.” 

Mr Collins was referring to the 2018 scandal involving the data-mining company Cambridge Analytica, which gathered information on as many as 87 million Facebook users without their permission. 

Ms Haugen is testifying on the same day that Facebook is expected to release its latest earnings. Pictured: its CEO, Mark Zuckerberg

Sophie Zhang, another Facebook whistleblower, has already spoken to the committee. She raised the alarm after discovering evidence of online manipulation in countries including Honduras and Azerbaijan before she was fired. 

It comes as concerns were raised that details of the new legislation could be leaked to Facebook by civil servants who ‘want to work for government for four years before getting a job at tech giants’. 

Facebook whistleblower docs show it has known for YEARS that hate speech is not stopped and that it is unpopular with youth, but it ‘lies to investors’. Apple threatened to remove the app over human trafficking, and staff failed to see the January 6 riot coming 

By Jack Newman for MailOnline

A trove of documents from Facebook whistleblower Frances Haugen has revealed in detail how the tech firm ignored internal complaints from staff for years, putting profits first, ‘lying’ to investors and shielding CEO Mark Zuckerberg. 

The documents were published in depth on Monday morning by a consortium of media organisations, as Haugen was preparing to testify before the British Parliament about her concerns. 

Last month, she spoke out publicly about her concerns regarding the company’s absolute power in the tech and telecommunications sector. 

The documents stem from her internal research, which she decided to make public. They are now being called the ‘Facebook Papers’ by the US media.  

They claim, among other things, that:

  • For years, Facebook staff have reported to the company their concerns about its inability to police hate speech 
  • Facebook executives knew the platform was losing popularity among young people, but kept the numbers secret from investors 
  • Despite monitoring various right-wing accounts, staff failed to predict the Capitol riot of January 6, 2021 
  • Apple threatened to remove the app from the App Store over its failure to police the trafficking of Filipina maids 
  • Mark Zuckerberg’s public comments about the company are often at odds with internal messaging 
The documents are among a cache of disclosures made to the US Securities and Exchange Commission and Congress by Facebook whistleblower Frances Haugen (pictured)

They were made public shortly before she testified before Congress on October 5, 2021.

Haugen, an ex-manager at Facebook, provided the documents to media outlets. She testified before Congress earlier in the year about her concerns regarding the company. 

Apple threatened to remove Instagram and Facebook from its App Store over concerns the platform was being used to traffic Filipina maids in the Middle East 

Apple threatened to remove Instagram and Facebook from its App Store two years ago over concerns the platform was being used to trade and sell maids in the Mideast.

After promising to crack down, Facebook admitted in internal documents obtained by The Associated Press that it was ‘under-enforcing confirmed abusive activity’ that led to Filipina maids complaining of being abused on the social media platform. 

Apple relented, and Facebook and Instagram remained in the App Store.

But Facebook’s crackdown appears to have had limited results. 

A quick search for ‘khadima’, or ‘maids’ in Arabic, will turn up accounts featuring posed photographs of Africans and South Asians, with ages and prices listed next to the images. 

This is despite the Philippine government having a team that scours Facebook every day to protect job seekers from criminal gangs and unscrupulous recruiters.

Facebook acknowledged that the Mideast remains a key destination for women from Asia and Africa seeking work to provide for their families back home, while noting that certain countries in the region have ‘egregious’ human rights issues when it comes to protecting labourers.

‘In our investigation, domestic employees frequently complained to their recruiting agencies that they were locked in their homes, starved, forced by their employers to extend their contracts indefinitely without pay, and repeatedly sold to other employers,’ one Facebook document read. ‘In response, agencies often told them to be more accommodating.’

The report also said recruitment agencies often dismissed even more serious crimes, such as physical or sexual assault, rather than helping domestic workers.

In a statement to the AP, Facebook said it takes the problem seriously, despite the continued spread of ads exploiting foreign workers in the Mideast.

Facebook stated that it prohibits human exploitation in clear terms. “We have been fighting human trafficking on Facebook for many years. Our goal is to stop anyone from exploiting others on our platform.”

One of her complaints was that staff had warned the company for years about its inability to police hate speech properly.  

One problem is that its AI tools are unable to pick out hateful comments, and there are not enough staff with the language skills to do it manually.   

Real-world violence could result from the failure to block hate speech in volatile areas like Myanmar, the Middle East and Ethiopia.  

In a review posted to Facebook’s internal message board last year regarding ways the company identifies abuses, one employee reported ‘significant gaps’ in certain at-risk countries. 

Mavis Jones, a Facebook spokesperson, said in a statement that the company has native speakers reviewing content in more than 70 languages, as well as experts in humanitarian and human rights issues. 

She said these teams work to stop abuse on Facebook’s platform in places where there is a heightened risk of conflict and violence.

Jones said the company recognised the challenges and was proud of the work it had done.

The cache of internal Facebook documents gives a detailed look at how employees have raised alarms in recent years about problems with company tools – both technological and human – that are aimed at removing or blocking speech that is not within its standards. 

The report expands on Reuters’ previous reporting on Myanmar and other countries, where the world’s largest social network has repeatedly failed to protect users and has struggled to monitor content across languages.

Among the weaknesses cited was a lack of screening algorithms for languages used in countries that Facebook has deemed most at risk of real-world harm and violence stemming from abuses on its site.

Two former employees told Reuters that the company labels countries ‘at-risk’ based on variables such as ethnic violence, unrest, and the number of users. 

According to those people, the system is designed to direct resources to places where abuses could have the most severe impact.

Spokesperson Jones said Facebook reviews and prioritises these countries every six months, in line with United Nations guidelines designed to help companies prevent and remedy human rights abuses in their business operations.

According to United Nations experts, Facebook was widely used to spread hate speech against Myanmar’s Rohingya Muslim population in 2018. 

That prompted the company to increase its staffing in vulnerable countries, a former employee said. 

Facebook has said it should have done more to prevent the platform being used to incite offline violence in the country.

Ashraf Zeitoon, Facebook’s former head of policy for the Middle East and North Africa, who left in 2017, said the company’s approach to global growth was ‘colonial’ and focused on monetisation without safety precautions.

More than 90 per cent of Facebook’s monthly active users are outside the United States and Canada.

Facebook has long touted the importance of its artificial-intelligence (AI) systems, in combination with human review, as a way of tackling objectionable and dangerous content on its platforms. 

Machine-learning systems can detect such content at varying levels of accuracy.

Languages spoken outside of the United States, Canada, and Europe are a hurdle for Facebook’s automatic content moderation, as the documents provided by Haugen demonstrate. 

The company lacks AI systems that can detect abusive posts in a variety of languages on its platform. 

Zuckerberg ‘personally decided’ that his company would accept demands from the Vietnamese government to tighten censorship of ‘anti-state’ posts 

Insiders claim Mark Zuckerberg personally consented to requests from Vietnam’s ruling Communist Party to censor anti-government dissidents.

Facebook was threatened with being kicked out of the country, where it earns $1billion a year, if it did not agree.

Zuckerberg, seen in the West as a champion of free speech for refusing to remove harmful content, agreed to Hanoi’s demands.

Sources claim that ahead of the Communist Party congress in January, the Vietnamese government was given effective control over the social media platform as activists were silenced online.

Facebook allowed dissidents’ ‘anti-state’ posts to be removed. 

Facebook told The Washington Post that the decision was justified because it ‘ensures our services remain accessible for millions of people who rely upon them every day’. 

In Myanmar, where Facebook-based misinformation has repeatedly been linked to ethnic and religious violence, the company acknowledged that it had failed to stop hate speech targeting the minority Rohingya Muslim community.

The persecution of the Rohingya, which the US has described as ethnic cleansing, led Facebook to publicly promise in 2018 that it would hire 100 native Myanmar-language speakers to police its platforms. 

However, the company has never disclosed how many content moderators it actually hired or which of the country’s dialects they covered.

Global Witness, a rights group, said Facebook’s recommendation algorithm continued to amplify army propaganda and other content that breaches its Myanmar policies after the military coup in February. 

One document revealed that in 2020 the company had no screening algorithms, known as ‘classifiers’, to detect misinformation in Burmese, the language of Myanmar, or hate speech in the Ethiopian languages of Oromo and Amharic.

These gaps can permit abusive posts to spread in countries where Facebook has determined that there is a high risk of real-world harm.

This month, Reuters found posts in Amharic, an Ethiopian language, referring to different ethnic groups as the enemy and issuing death threats against them. 

Ethiopia has been gripped for nearly a year by conflict between the government in Addis Ababa and rebel forces in the Tigray region, a conflict that has killed thousands and displaced more than two million people.

Jones, the Facebook spokesperson, said the company now has proactive detection technology to detect hate speech in Oromo and Amharic, and has hired more people with ‘language, country and topic expertise’, including people who have worked in Myanmar and Ethiopia.

In one document, Facebook employees shared examples of anti-Muslim narratives and fear-mongering on the site in India, including one example from 2021. 

According to the document, ‘our inability to classify Hindi and Bengali content means that much of it is never flagged or acted upon’. 

In internal posts this year, employees also pointed out the absence of classifiers for the Urdu and Pashto languages to screen potentially harmful content from users in Afghanistan, Iran and Pakistan.

Jones said Facebook added hate speech classifiers for Hindi and Bengali in 2020, and classifiers for violence and incitement in both languages this past year. She said Facebook now also has hate speech classifiers in Urdu, but not Pashto.

The documents show that Facebook’s human review of posts also has gaps across key languages. 

An undated document described how the company’s content moderation operation struggled to deal with Arabic-language dialects from multiple ‘at-risk’ countries, leaving it constantly ‘playing catch-up’. 

The document acknowledged that, even among its Arabic-speaking reviewers, Yemeni and Libyan dialects (really all Gulf countries) were either missing or had very low representation.

Jones, the Facebook spokesperson, acknowledged that Arabic-language content moderation presents a ‘very difficult problem’. She said Facebook has made significant investments in staff over recent years, but recognised there is still more to be done.

Three former Facebook employees who worked in the company’s Asia Pacific and Middle East and North Africa offices over the past five years told Reuters they believed content moderation in their regions had not been a priority for Facebook management. 

They said leadership did not understand the issues and failed to devote sufficient staff and resources.

Jones said the California-based company cracks down on abuse by users outside the United States with the same intensity it applies domestically.

The company said it uses AI to detect hate speech in more than 50 languages. 

Facebook said it bases its decisions about where to deploy AI on factors such as the size of the market and its assessment of each country’s risks. It declined to say in which countries it lacked working hate speech classifiers.

The company knew Facebook was losing popularity among young people, but kept this information secret from investors

In March, Facebook researchers prepared a report for chief product officer Chris Cox containing data indicating that the site was losing popularity among teenagers and young adults.

One graphic, reported by Bloomberg, showed that time spent on Facebook by US teenagers was down 16 per cent year-on-year. 

It also found that young adults were spending five per cent less time on the social network.

Fewer teenagers were signing up to the site, and the numbers were continuing to decline.

The report estimated that the average age at which people joined Facebook had risen to 24.

Despite extensive research into the decline in popularity, Facebook executives have stayed quiet about the concern.

It is claimed that the fall in young users has remained largely hidden because the overall audience has continued to grow, often inflated by duplicate profiles, leading to misrepresentations of audience size.      

This discrepancy is part of Haugen’s argument that Facebook ‘misrepresented core metrics to advertisers and investors’ by showing overall growth while excluding factors such as the decline in key demographics.  

Facebook also says it has 15,000 content moderators reviewing material from its global users. Jones said adding more language expertise has been a key focus.

Over the past two years, it has hired people who can review content in Somali, Oromo and Tigrinya, and this year it added moderators in 12 new languages, including Haitian Creole.

Facebook declined to say whether it requires a minimum number of content moderators for any language offered on the platform.

Facebook’s users can be a powerful resource for identifying content that violates the company’s standards. 

The company has built a system to allow them to report such content, but it has acknowledged that the process can be costly and time-consuming for users in countries with poor internet access. 

According to the documents and to digital rights advocates who spoke with Reuters, the reporting tool has also had bugs and design flaws.

The Next Billion Network, an alliance of tech civil society groups working mostly across Asia, Africa and the Middle East, said it had repeatedly raised concerns about the reporting system with Facebook management in recent years. 

Those concerns included a technical flaw that prevented Facebook’s content review system from seeing objectionable text accompanying the videos and photos in certain posts reported by users. 

That issue prevented serious violations, such as death threats in the text of a post, from being properly evaluated, according to the group and a former Facebook employee who spoke to Reuters. They said the issue was fixed in 2020.

Facebook stated that it continues to improve its reporting system and takes feedback seriously.

Language coverage remains a problem. A Facebook presentation from January, included in the documents, concluded that there is a significant gap in the hate speech reporting process in local languages for users in Afghanistan. 

The recent withdrawal of US forces after two decades of military presence in Afghanistan has sparked an internal power struggle. According to the presentation’s author, Facebook’s so-called ‘community standards’ – the rules that govern what users can post – are not available in Afghanistan’s main languages, Pashto and Dari.

A Reuters review this month found that the community standards were not available in roughly half of the languages Facebook supports with features such as prompts and menus.

Facebook said it aims to have these rules available in 59 languages by the end of this year, and in another 20 languages by the end of 2022.