Reason
Elizabeth Nolan Brown

Studies Keep Finding That Social Media Algorithms Don't Increase Polarization. Why Is the Press So Skeptical?

New research looking at Facebook in the run-up to the 2020 election finds scant evidence to suggest that social media algorithms are to blame for political polarization, extremism, or belief in misinformation. The findings are part of a project in which Meta opened its internal data to academic researchers. The results of this collaboration will be published in 16 papers, the first four of which just appeared in the journals Science and Nature.

One of the studies found that switching users from an algorithmic feed to a reverse chronological feed—something suggested by many social media foes as the responsible thing to do—actually led to users seeing more political content and more potential misinformation. The change did lead to users seeing less content "classified as uncivil or containing slur words" and more content "from moderate friends." But none of these shifts made a significant difference in users' political knowledge, attitudes, or polarization levels.

"Algorithms are extremely influential in terms of…shaping their on-platform experience," researcher Joshua Tucker, co-director of the Center for Social Media and Politics at New York University, told The Washington Post. Despite this, "we find very little impact in changes to people's attitudes about politics and even people's self-reported participation around politics."

Another of the experiments involved limiting reshared content in some users' feeds. Resharing—a measure of social media virality—"is a key feature of social platforms that could plausibly drive" political polarization and political knowledge, the researchers suggest. Users who saw no reshared content for three months wound up with less news knowledge, as well as lower engagement with the platform and less exposure to "untrustworthy content." But the change made no difference in political attitudes or polarization levels.

Nor did increasing users' exposure to ideologically diverse views—as another of the experiments did—wind up significantly shifting "affective polarization, ideological extremity, candidate evaluations and belief in false claims."

Taken together, the studies strike a strong blow against the "zombie bite" theory of algorithmic exposure, in which people are passive vessels easily infected by divisive content, fake news, and whatever else social media platforms throw at them.

They're the latest in a long line of papers and reports casting doubt on the now-conventional wisdom that social media platforms—and particularly their algorithms—are at fault for a range of modern political and cultural problems, from political polarization to extremism to misinformation and much more. (Reason highlighted a lot of this research in its January 2023 cover story, "In Defense of Algorithms.")

Yet despite a substantial body of research challenging such assumptions, much of the press remains credulous about claims of tech company culpability and villainy while treating any evidence to the contrary with deep skepticism. This media bias is on full display in the coverage of the new Facebook and Instagram studies.

The Post's piece on them contains this in-article ad after the first paragraph: "Tech is not your friend. We are. Sign up for The Tech Friend newsletter."

It's an almost perfect distillation of the larger dynamic at play here, in which traditional media—having lost plenty of eyeballs and advertising dollars to social media—seems intent on casting tech platforms as untrustworthy, unscrupulous, and dangerous for democracy, in contrast to the honest, virtuous, and democracy-protecting members of the mainstream press.

The Post piece goes on to quote three people uninvolved in the Facebook studies who have qualms about them, including "Facebook whistleblower" Frances Haugen. "She argued that by the time the researchers evaluated the chronological approach during the fall of 2020, thousands of users already had joined mega groups that would have flooded their feeds with potentially problematic content," the Post reports.

This is, of course, a very different complaint than the one typically heard from Haugen and her ilk—that Facebook's algorithms deliberately push divisive and extreme content. Here Haugen shifts the goal posts, complaining about groups that people self-select into and the fact that Facebook showed them content from these groups at all.

And the Post also moves the goal posts, describing the study as being "conducted in a world in which, in many ways, the cat was already out of the bag. A three-month switch in how information is served on a social network occurred in the context of a long-standing change in how people share and find information." Tucker tells the Post: "This finding cannot tell us what the world would have been like if we hadn't had social media around for the last 10 to 15 years."

Of course, the big fear for years has been about bursts of election-time information—pushed by hostile foreign actors, U.S. political groups, etc.—and their potential ability to tilt political outcomes thanks to algorithmic amplification. These new studies squarely strike at such fears, while any "long-standing change" in information finding is, in this context, utterly irrelevant, as is some hypothetical world in which social media never existed. The only purpose statements like these seem designed to serve is to minimize the findings in question.

The coverage in Science—which published three of the new papers—is even weirder. The journal has packaged the studies in a special issue with the cover line "Wired to Split" and an introduction titled "Democracy Intercepted."

The cover features two groups of people—one dressed in red, one dressed in blue—sitting on opposite sides of the Meta logo, facing in opposite directions. Each member of each group is intently looking at a laptop or tablet or smartphone, with several members appearing outraged. The design seems to illustrate the exact opposite of what was actually found in the studies, as do the slogans and introductory text associated with the new studies.

"Can a business model that prioritizes 'engagement algorithms' pose a threat to democracy?" asks Science in the introduction. It goes on to state that "tech companies have a public responsibility to understand how design features of platforms may affect users and, ultimately, democracy. The time is now to motivate substantive changes and reforms." It teases the research in question—without once mentioning that the findings go against more hysterical interpretations. It's as if the whole package was designed with a preferred narrative in mind but no regard for the actual research at hand.

Because what the actual research found—as Talia Stroud, a lead researcher on the project and director of the Center for Media Engagement at the University of Texas at Austin, puts it—is that experimenting with popular ideas to tweak algorithms in a supposedly socially responsible way simply "did not sway political attitudes."

Stroud is quoted in Nature, which does a better overall job of framing the research realistically ("Tweaking Facebook feeds is no easy fix for polarization, studies find," reads the headline of one article). But even Nature can't resist quotes that minimize the findings. "The science is nice, but the generalizability seems limited," Northwestern University political scientist James Druckman told the journal. "This is just another data point in that discussion."

