Adding “trust” and “distrust” buttons to social media sites like Facebook could reduce the spread of online misinformation, new research from a London university has found.
A study by researchers at University College London found that adding “trust” and “distrust” buttons alongside the standard “like” button incentivised people to be trustworthy and substantially reduced the amount of misinformation being shared.
UCL’s Professor Tali Sharot, co-lead author of the study published today in eLife, said: “Part of why misinformation spreads so readily is that users are rewarded with ‘likes’ and ‘shares’ for popular posts, but without much incentive to share only what’s true.
“Here, we have designed a simple way to incentivise trustworthiness, which we found led to a large reduction in the amount of misinformation being shared.”
For the study, UCL researchers used a simulated social media platform and asked participants to share news articles, half of which were inaccurate. Some participants were able to respond with “trust” or “distrust” reactions, as well as the typical “like” or “dislike” reactions.
The study found that participants began sharing more true than false information in order to earn “trust” reactions. Participants also paid closer attention to how reliable a news story appeared to be when deciding whether to repost it.
Participants who had the option of using “trust” or “distrust” buttons also ended up with more accurate beliefs, researchers said.
Professor Sharot added: “Over the past few years, the spread of misinformation, or ‘fake news’, has skyrocketed, contributing to the polarisation of the political sphere and affecting people’s beliefs on anything from vaccine safety to climate change to tolerance of diversity. Existing ways to combat this, such as flagging inaccurate posts, have had limited impact.”
Co-lead author Laura Globig, a PhD student at UCL, said: “Buttons indicating the trustworthiness of information could easily be incorporated into existing social media platforms, and our findings suggest they could be a worthwhile way to reduce the spread of misinformation without reducing user engagement.
“While it’s difficult to predict how this would play out in the real world with a wider range of influences, given the grave risks of online misinformation, this could be a valuable addition to ongoing efforts to combat misinformation.”