It was claimed that Facebook spent three years making clickbait and misinformation more prominent in its users’ news feeds to keep them engaged on its network.

The algorithm the firm uses to decide what people see in their news feeds was programmed to treat reaction emojis as a signal to push more provocative content.

Five years ago, five emojis were introduced: ‘love’, ‘haha’, ‘wow’, ‘sad’ and ‘angry’. They allow users to react to content in ways the traditional ‘like’ does not.

Internal papers obtained by the Washington Post revealed that a ranking algorithm valued emoji reactions five times more highly than ‘likes’.

This was because high numbers of emoji reactions on posts kept users more engaged, and keeping users engaged is a key element of Facebook’s business model.
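The effect of that weighting can be illustrated with a minimal sketch. The function and the numbers below are hypothetical; Facebook’s actual ranking system is far more complex and has not been made public. Only the five-to-one ratio comes from the leaked documents.

```python
# Hypothetical illustration of reaction-weighted ranking.
# Per the leaked documents, an emoji reaction ('love', 'haha',
# 'wow', 'sad', 'angry') was worth five times a plain 'like'.

def engagement_score(likes: int, emoji_reactions: int) -> int:
    """Toy scoring: each emoji reaction counts 5x a 'like'."""
    return likes + 5 * emoji_reactions

# A provocative post with many emoji reactions outranks a milder
# post that collected far more plain likes:
provocative = engagement_score(likes=100, emoji_reactions=200)  # 1100
mild = engagement_score(likes=500, emoji_reactions=20)          # 600
assert provocative > mild
```

Under such a scheme, content that provokes strong reactions, including anger, is systematically pushed higher in the feed than content that merely earns likes.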

The five Facebook emojis of 'love,' 'haha,' 'wow,' 'sad' and 'angry' were launched five years ago to give users an alternative way to react to content aside from the normal 'like'

However, scientists and researchers at the company found that posts attracting ‘angry’ reactions were more likely to contain misinformation and low-quality news.

One employee warned that favouring ‘controversial’ posts, such as those that make people angry, could ‘open the door to more spam/abuse/clickbait’.

A colleague is said to have replied: ‘It’s possible.’ In 2019, the company’s data scientists confirmed that posts attracting ‘angry’ reactions were linked to toxicity on the platform.

Facebook is therefore accused of promoting the worst aspects of its site for three years, making such content more prominent and allowing it to reach a wider audience.

It would also have undermined the efforts of its content moderators, who were trying to reduce the number of harmful and toxic posts that users saw.

Facebook whistleblower Frances Haugen told MPs yesterday that the firm is 'unquestionably' making online hate worse because it is programmed to prioritise extreme content

The staff discussions were documented in papers that were provided to the Securities and Exchange Commission and given to Congress by Frances Haugen’s attorneys.

How Facebook’s profits soared as daily active users reached 1.93 billion

Facebook’s daily active users averaged 1.93 billion in September, up 6 per cent on the same period last year.

Around 3.6 billion people used Facebook and its other platforms last month.

Facebook’s profits shot 17 per cent higher to £6.7 billion in the third quarter amid the jump in users.

The company’s revenues nevertheless fell short of Wall Street forecasts due to Apple’s new privacy rules.

Apple now requires all apps to ask users whether they wish to be tracked, which has made it more difficult for advertisers to target the right audiences. Facebook said Apple’s new policy would continue to weigh on business for the remainder of the year.

Facebook’s total revenue – most of which comes from advertising – rose to £21 billion in the third quarter.

This was £400 million below expectations – though it was more than a third higher than the same period last year, when companies had put their marketing budgets on ice during the pandemic.

Speaking in London yesterday, the whistleblower said Facebook was ‘unquestionably making online hate worse’ because it is programmed to prioritise extreme content.

Miss Haugen told MPs and peers that the firm’s bosses were guilty of ‘negligence’ for failing to accept the harm caused by their algorithms.

The American data scientist claimed the tech giant was subsidising hatred because its business model made it cheaper to run angry and divisive adverts.

She stated that there was no doubt that the platform’s systems would lead to more violent events, as its most extreme content targets the most impressionable.

Miss Haugen also warned parents that Instagram, which is owned by Facebook, may never be safe for their children, with the company’s own research showing it to be addictive.

She also told the joint committee on the Draft Online Safety Bill that it was an ‘essential moment’ for the UK to stand strong and improve social networks.

The Bill will impose a duty on social media companies to protect users from harmful content. Ofcom, the watchdog, will have the power to fine them up to 10 per cent of their global turnover.

Facebook is currently facing a crisis after Miss Haugen, a former product manager at the firm, leaked thousands of internal documents revealing its inner workings.

Its founder Mark Zuckerberg has previously rejected her claims, saying her attacks on the company were ‘misrepresenting’ the work it does.

Yesterday, the committee pointed out that the tech giant had previously claimed it removes 97 per cent of hateful posts from the platform.

However, leaked research revealed that staff had estimated the company was removing only 3 to 5 per cent of hate speech and 0.6 per cent of content inciting violence.

Facebook founder Mark Zuckerberg (pictured) has previously rejected the claims made by Miss Haugen, saying her attacks on the company were 'misrepresenting' the work it does

Miss Haugen responded to a question on hate speech: “Unquestionably, it is making hate worse.” 

She claimed that Facebook was ‘very good at dancing with data’ to make it seem like it was on top of problems, but was reluctant to sacrifice even one ‘sliver’ of profit to make the platform safer.

The committee also heard how Facebook’s research found that 10 to 15 per cent of ten-year-olds were on the platform – despite the minimum age being 13.

Lord Black of Brentwood observed that while the Bill exempts legitimate news publishers from its scope, Facebook and other platforms are under no obligation to carry such journalism, and could remove it rather than adhere to regulators’ codes.

These decisions would effectively be made by AI, he said, and he asked Miss Haugen whether AI could be trusted to make such judgments.

The thumbs up 'Like' logo is shown on a sign at Facebook's offices in Menlo Park, California

Miss Haugen said the Bill should not treat a ‘random blog’ in the same way as a recognised news source, as this would limit users’ access to quality news on the platform.

She said: “I’m very worried that if you just exempt across the board you will make regulation ineffective.”

She warned that any system in which AI is the solution is going to fail.

A Facebook spokesperson said last night that the company has always had a commercial incentive to remove harmful content from its sites.

“People don’t like to see it when they use our apps and advertisers don’t want their ads next to it.”

MailOnline also contacted the company today about the latest report on emoji reactions.