Facebook’s fact-checking policy is working

FACEBOOK'S FACT-CHECKING POLICY

In 2016, following the US presidential election, Facebook decided to run a policy to reduce the distribution of false reports on the platform. The policy takes the form of labels indicating the validity and reliability of the posts users share. Misleading stories are labeled as “Disputed by [third-party fact-checkers]”.

Is the policy working? The evidence suggests it is: since the policy was put in place, more people have refrained from re-sharing news and reports labeled as unreliable.

A study conducted by California University reported the same finding among its 500 participants: since Facebook’s move, people have become more careful about which news they share on social media.

When the fact-checking policy launched in 2016, it used labels that announced the authenticity status of reports and news items. In 2017, this changed: instead of a single tag, a set of related articles and pages was displayed under the post, indicating whether the report was true or false.

Facebook claims that this later change helped even more users recognize whether what they are reading reflects reality.

Image: Facebook checks the validity of stories published by users.

Reportedly, almost 60% of Americans are now less likely to re-share stories and news without first checking their validity. This correlates with the fact that Facebook users care about the credibility of their own pages and profiles, and hold back from sharing anything that might make them look unreliable.

Although Facebook’s fact-checking policy is working, there remains the issue of scale. Despite the positive impact the move has had on reducing the distribution of false stories, the effect is so small that, across the platform as a whole, it can look as though almost nothing has been done.

Covering 42 languages, Facebook has already paid third-party fact-checkers a large amount of money to monitor the content posted daily by the platform’s 2.4 billion users. Yet the stories these groups have checked so far amount to as little as 5% of what Facebook users publish each day, which is an insignificant share.

Facebook is currently looking into technological and automated solutions to apply the idea more broadly. The fact-checking policy is still in its infancy, but it shows a promising future, going hand in hand with computational linguistics technology.
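As a rough illustration of how such automation could help with the problem of scale, the sketch below trains a tiny text classifier that routes suspicious posts to human reviewers. This is a minimal, hypothetical example built with scikit-learn; the toy training data, the flag_for_review helper, and the 0.5 threshold are assumptions for illustration only, not Facebook’s actual system.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = previously disputed, 0 = not disputed.
posts = [
    "Miracle cure eliminates all known diseases overnight",
    "Local council approves new library budget",
    "Celebrity secretly replaced by a body double, insiders say",
    "Weather service forecasts rain for the weekend",
]
labels = [1, 0, 1, 0]

# A simple TF-IDF + logistic regression model stands in for whatever
# automated screening system a platform might build.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

def flag_for_review(text, threshold=0.5):
    """Return True if the post should be routed to human fact-checkers."""
    prob_disputed = model.predict_proba([text])[0][1]
    return prob_disputed >= threshold

print(flag_for_review("Secret cure doctors don't want you to know about"))

In practice, a filter like this would only prioritize posts for the third-party fact-checkers, not replace them.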
