AI and Humans Join Forces to Combat Misinformation on Social Media

Illustration: AI (Artificial Intelligence) letters and a miniature robot hand

In today's digital age, information is the currency that shapes our societies. Those who control the flow of information hold immense influence over what people believe and how they act. Much of that responsibility now falls on social media platforms like Facebook, which have become a primary source of news for many people. To fulfill their duty to provide accurate information, these platforms must confront the twin challenges of misinformation and disinformation.

Misinformation is false or misleading information shared unintentionally, while disinformation is spread deliberately and with malicious intent. False narratives long predate social media: the very first colonial American newspaper carried a fabricated story aimed at discrediting the French monarch Louis XIV. What online platforms have changed is the speed with which false information spreads and how long it persists.

The impact of misinformation and disinformation is evident in episodes such as Cambridge Analytica's exploitation of Facebook user data during the 2016 presidential election and the flood of Covid-19 vaccine misinformation during the pandemic. These incidents show how easily people can fall prey to false claims amplified by algorithms. Yet humans are also instrumental in combating misinformation.

With artificial intelligence (AI) playing a growing role in generating and disseminating false claims, access to accurate information has become harder. To preserve the truth, social media platforms must combine human intervention with advanced technology.

Moderating misinformation is a complex task, made harder still by new AI-powered tools. First, platforms need to swiftly identify and verify false information amid rapidly shifting narratives. This is particularly difficult during breaking news events, when even reputable news outlets can mischaracterize crucial details.

Once false information is identified, platforms face a dilemma over how to address it. Remove the content immediately and they invite accusations of censorship and violations of free speech; leave it up and the misinformation keeps spreading. Often the decision comes too late, because there is a significant delay between the moment information surfaces and the moment it is fact-checked.

Unfortunately, AI tools are exacerbating these challenges. AI can generate new spins on old false narratives, rapidly spreading them across the web. Unreliable AI-generated news websites have emerged, continuously publishing articles mostly written by bots and easily exploited for malicious purposes. Additionally, AI can create convincing deepfakes, such as manipulated photos or videos, making immediate verification even more challenging.

To effectively combat the spread of misinformation, platforms need to adopt novel strategies that not only rely on AI technology but also leverage the expertise of humans. AI-driven fact-check tools, such as Factinsect, can compare claims against verified data. AI can also be used to identify patterns in how misinformation spreads. However, AI tools are still unable to catch every fake, especially when it comes to more nuanced cases.
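
To make the idea concrete, here is a minimal sketch of how a claim might be scored against a small body of verified statements. It is an illustrative assumption, not a description of Factinsect or any platform's actual system: the verified statements, the similarity method, and the threshold are all placeholders, and real fact-check tools are far more sophisticated.

```python
# Illustrative sketch only: score a claim against verified statements and
# route low-confidence cases to human fact-checkers. The statements,
# threshold, and TF-IDF similarity are assumptions for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

VERIFIED_STATEMENTS = [
    "The vaccine was authorized after large-scale clinical trials.",
    "State officials certified the election results.",
]

def check_claim(claim: str, threshold: float = 0.35) -> str:
    """Return a rough verdict; anything below the threshold goes to humans."""
    vectorizer = TfidfVectorizer().fit(VERIFIED_STATEMENTS + [claim])
    verified_vecs = vectorizer.transform(VERIFIED_STATEMENTS)
    claim_vec = vectorizer.transform([claim])
    best_score = cosine_similarity(claim_vec, verified_vecs).max()
    if best_score >= threshold:
        return f"close to verified data (similarity {best_score:.2f})"
    return f"no close match -- flag for human review (similarity {best_score:.2f})"

print(check_claim("Officials certified the election results in every state."))
```

A system like this can only catch claims that resemble material it already knows about, which is why the nuanced cases described above still fall to human reviewers.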

Human intervention is crucial in interpreting gray areas and analyzing complex perspectives. Third-party fact-checkers play a vital role in providing unbiased assessments of new narratives and alerting platforms to emerging threats. Human content reviewers can also collaborate with AI tools to flag intricate cases for a final decision.

Furthermore, the role of platform users themselves should not be underestimated. Research shows that digital literacy interventions can help individuals better discern false information. Platforms can support users by labeling the credibility of information sources and indicating potential AI modifications. This empowers users to make their own informed judgments about the truthfulness of content.

Rather than reducing staff in the fight against misinformation, social media platforms must invest in hiring the right personnel to tackle this pressing issue. This includes experts in trust and safety who can anticipate and respond to the onslaught of false election stories and unexpected hoaxes.

In a digital landscape increasingly influenced by AI, people are the key to combating misinformation. By harnessing the power of human judgment alongside advanced technologies, social media platforms can protect free speech while preserving the integrity of information. This collaborative effort is essential for maintaining a healthy national discourse.
