Input
Technology
J. Fergus

Facebook could make a nicer news feed, but it would hurt revenue

The pressure cooker of a high-stakes election and a deadly pandemic has only amplified criticisms of Facebook this year. It’s no secret that, behind closed doors, even employees are divided over how the company has handled misinformation and hate speech. The New York Times now reports that concerned employees built multiple effective tools this year. The best of them were watered down, axed, or enacted only temporarily to appease hyperpartisan pages or protect the company’s revenue.

A nicer Facebook —

As the Trump administration tries to unearth nonexistent proof of a stolen election, Facebook has started to use at least one of its post-election contingency features. The company increased the weight it gives to publishers’ secret news ecosystem quality (NEQ) scores to surface authoritative news sources more prominently in users’ feeds. Employees have argued for this “nicer news feed” to become permanent, according to the NYT, but the company fully intends to halt or reverse its streak of positive policy changes.
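Facebook hasn’t disclosed how NEQ actually enters its ranking system, so the following is only a minimal sketch, assuming the score appears as one weighted term in a feed-ranking formula; every name and number in it is hypothetical. It illustrates why simply turning up that one weight can push authoritative publishers above more “engaging” but lower-quality posts.

```python
# Hypothetical illustration only: Facebook has not published how NEQ factors
# into feed ranking. This just shows how raising one weight in a ranking
# score changes which posts surface first.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement_score: float  # predicted likes/comments/shares (assumed stand-in)
    publisher_neq: float     # publisher quality score on a 0-1 scale (assumed)

def rank_feed(posts: list[Post], neq_weight: float) -> list[Post]:
    """Order posts by a weighted blend of predicted engagement and publisher quality."""
    return sorted(
        posts,
        key=lambda p: p.engagement_score + neq_weight * p.publisher_neq,
        reverse=True,
    )

posts = [
    Post("viral-rumor", engagement_score=9.0, publisher_neq=0.2),
    Post("wire-report", engagement_score=6.0, publisher_neq=0.9),
]

# With a small NEQ weight, raw engagement dominates the ordering.
print([p.post_id for p in rank_feed(posts, neq_weight=1.0)])  # ['viral-rumor', 'wire-report']

# Raising the NEQ weight, as the post-election change reportedly did,
# lets the authoritative source outrank the more "engaging" post.
print([p.post_id for p in rank_feed(posts, neq_weight=5.0)])  # ['wire-report', 'viral-rumor']
```

Under that assumption, making the “nicer news feed” permanent would amount to leaving the higher weight in place rather than dialing it back after the election.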

This month, Facebook engineers and data scientists shared the results of a series of experiments called “P(Bad for the World).” Posts were designated as good or bad for the world through user surveys, and the employees found that posts seen by large numbers of people were more likely to be in the bad column.

Shocked into action, the team built an AI model that could predict these negative posts and apply lower rankings to them in the news feed. Initial tests showed that the algorithm successfully limited harmful posts’ visibility, but it also decreased how often users opened Facebook — a metric of great value to executives, likely because it is tied to how often the company can serve users ads. The algorithm was ultimately changed to apply lighter demotions to a larger swath of harmful posts.
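The NYT report doesn’t describe the model’s internals, but the trade-off the team faced can be made concrete with a small, purely hypothetical sketch: a post whose predicted probability of being “bad for the world” crosses a threshold gets its ranking score scaled down, and the question is whether to pair a high threshold with a heavy penalty or a low threshold with a light one. All thresholds and penalties below are invented for illustration.

```python
# Hypothetical sketch, not Facebook's actual system: it only illustrates the
# reported trade-off between a strong demotion applied to fewer posts and a
# lighter demotion applied to a larger swath.

def demote(ranking_score: float, p_bad: float, threshold: float, penalty: float) -> float:
    """Scale a post's ranking score down when the model's predicted probability
    that it is 'bad for the world' crosses the threshold."""
    if p_bad >= threshold:
        return ranking_score * (1.0 - penalty)
    return ranking_score

score, p_bad = 8.0, 0.6  # invented numbers for one borderline post

# Initial approach (assumed): a high bar, but a heavy penalty for posts that cross it.
print(demote(score, p_bad, threshold=0.7, penalty=0.8))  # 8.0: below the bar, untouched

# Revised approach (assumed): a lower bar catches more posts, including this one,
# but the lighter penalty barely moves each of them.
print(demote(score, p_bad, threshold=0.5, penalty=0.2))  # 6.4: demoted, but only slightly
```

The revised settings touch more posts but barely move each one, which is one way to read “lighter demotions to a larger swath”: the visibility of harmful content drops less, and sessions (and the ads served during them) take a smaller hit.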

Didn’t make the cut —

Several other proposals never made it onto the platform at all. One feature would have retroactively alerted users that they had shared false information, and another algorithm would have deprioritized “hate baiting” posts, which seem relatively innocuous but inspire vitriolic comments.

While employees felt these proposals were knocked down because they would disproportionately affect right-wing users and publishers, as well as the bottom line, Facebook integrity executive Guy Rosen told the NYT that these assertions were incorrect. Rosen claims the first tool simply wasn’t effective, and that the hate-baiting tool could be weaponized against pages if users spammed the comments on their posts.

Ignoring the bad to focus on the worse —

Reportedly, at a recent virtual meeting, Facebook executives extolled the positive effects of the company’s misinformation policies, even as the company allows “Stop the Steal” groups to swell unchecked. As far as decision-makers are concerned, their work here is done. Now, the company needs to shore up its defenses against inevitable antitrust charges surrounding its aggressive acquisitions and data collection policies. Even at the risk of having a profitable acquisition like WhatsApp cleaved from its portfolio, Facebook isn’t in the business of making its core platform less engaging — regardless of the societal costs.
