Facebook has been under constant scrutiny since reports of foreign interference in the 2016 US Presidential election through ads on its social media platform, along with the Cambridge Analytica scandal, broke a couple of years ago. Now, the technology giant has made a big announcement about sweeping changes it plans to bring in to curb fake news, propaganda, hate speech and other malpractices online.

In a blog post, Guy Rosen, VP of Integrity, and Tessa Lyons, Head of News Feed Integrity at Facebook, shared the company's three-pronged strategy—remove, reduce and inform—to counter these menaces across its platforms, including Instagram and the Facebook Messenger app.

Highlights of new Facebook initiatives:

Facebook is cracking down on fake news peddlers in India ahead of the 2019 Lok Sabha elections. (Reuters)


Firstly, under the 'remove' prong, Facebook is rolling out a new section in its Community Standards that outlines what is and isn't allowed on its platforms. Content that violates the standards—including bullying, harassment and hate speech—will be removed faster than before.

It will use a combination of technology, human review and user reports to identify and remove harmful groups, whether they are public, closed or secret.

Furthermore, as part of the Safe Communities Initiative, the company has promised to come down hard on Facebook group admins, who will be held accountable for Community Standards violations. It plans to start reviewing Facebook groups in the coming weeks.


The second prong, 'reduce', focuses on misinformation and clickbait headlines, tactics usually employed by media companies and bloggers to drive viewing traffic. These violations are usually not severe enough to warrant removal, but Facebook will seek input from users. The company has already given users the option to unfollow a page or report its content.

However, Facebook knows this is not enough; it plans to improve the experience and reduce misinformation on its platform.

"We're getting better at enforcing against fake accounts and coordinated inauthentic behaviour; we're using both technology and people to fight the rise in photo and video-based misinformation; we've deployed new measures to help people spot false news and get more context about the stories they see in News Feed; and we've grown our third-party fact-checking program to include 45 certified fact-checking partners who review content in 24 languages," Facebook representatives said.

A Facebook logo reflects in a woman's eye. (Reuters/Dado Ruvic)

Facebook has also roped in The Associated Press as part of its third-party fact-checking program. The AP will expand its efforts to debunk false and misleading video content and Spanish-language misinformation appearing on Facebook in the US.

Also, Facebook will curb the reach of groups that intentionally spread misinformation by reducing their overall News Feed distribution.

On Instagram, Facebook has begun reducing the spread of posts that are inappropriate but do not violate Instagram's Community Guidelines, limiting such posts from being recommended on its Explore and hashtag pages. For instance, a sexually suggestive post will still appear in the Feed if you follow the account that posted it, but that type of content may not appear to the broader community on Explore or hashtag pages.


Under the third prong, 'inform', Facebook wants to help users understand the context of articles pushed to the News Feed so they can better decide what to read, trust and share. The 'Context Button', available on Facebook since April 2018, will go on to offer more background information about publishers and articles appearing in the News Feed. Facebook is also testing this feature for images that have been reviewed by third-party fact-checkers.

Facebook also announced changes to the Messenger app to curb impersonation: it will offer a verified badge, give users more options in settings to block an annoying person, and, as on WhatsApp, add a 'forwarded' tag to messages along with a context button to curb the spread of misinformation.