The New York Times
Business
Natasha Singer

LinkedIn Ran Social Experiments On 20 Million Users Over Five Years

LinkedIn ran experiments on more than 20 million users over five years that, while intended to improve how the platform worked for members, could have affected some people’s livelihoods, according to a new study.

In experiments conducted around the world from 2015 to 2019, LinkedIn randomly varied the proportion of weak and strong contacts suggested by its “People You May Know” algorithm — the company’s automated system for recommending new connections to its users. The tests were detailed in a study published this month in the journal Science and co-authored by researchers at LinkedIn, the Massachusetts Institute of Technology, Stanford University and Harvard Business School.

LinkedIn’s algorithmic experiments may come as a surprise to millions of people because the company did not inform users that the tests were underway.

Tech giants like LinkedIn, the world’s largest professional network, routinely run large-scale experiments in which they try out different versions of app features, web designs and algorithms on different people. The long-standing practice, called A/B testing, is intended to improve consumers’ experiences and keep them engaged, which helps the companies make money through premium membership fees or advertising. Users often have no idea that companies are running the tests on them. (The New York Times uses such tests to assess the wording of headlines and to make decisions about the products and features the company releases.)
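
As a rough sketch of how such tests are typically wired up (illustrative only, not LinkedIn's or The Times's actual tooling), an A/B system hashes a stable user ID so that each person consistently lands in one variant; every name and threshold below is hypothetical:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically assign a user to "control" or "treatment".

    Hashing the user ID together with the experiment name gives each
    user a stable pseudo-random bucket, so the same person always
    sees the same variant across sessions.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform-ish value in [0, 1]
    return "treatment" if bucket < treatment_share else "control"

# A hypothetical 50/50 headline test.
print(assign_variant("user-12345", "headline-test-v2"))
```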

But the changes made by LinkedIn are indicative of how such tweaks to widely used algorithms can become social engineering experiments with potentially life-altering consequences for many people. Experts who study the societal effects of computing said conducting long, large-scale experiments on people that could affect their job prospects, in ways that are invisible to them, raised questions about industry transparency and research oversight.

“The findings suggest that some users had better access to job opportunities or a meaningful difference in access to job opportunities,” said Michael Zimmer, an associate professor of computer science and the director of the Center for Data, Ethics and Society at Marquette University. “These are the kind of long-term consequences that need to be contemplated when we think of the ethics of engaging in this kind of big data research.”

The study in Science tested an influential theory in sociology called “the strength of weak ties,” which maintains that people are more likely to gain employment and other opportunities through arm’s-length acquaintances than through close friends.

The researchers analyzed how LinkedIn’s algorithmic changes had affected users’ job mobility. They found that relatively weak social ties on LinkedIn proved twice as effective in securing employment as stronger social ties.

In a statement, LinkedIn said that during the study it had “acted consistently with” the company’s user agreement, privacy policy and member settings. The privacy policy notes that LinkedIn uses members’ personal data for research purposes. The statement added that the company used the latest, “noninvasive” social science techniques to answer important research questions “without any experimentation on members.”

LinkedIn, which is owned by Microsoft, did not directly answer a question about how the company had considered the potential long-term consequences of its experiments on users’ employment and economic status. But the company said the research had not disproportionately advantaged some users.

The goal of the research was to “help people at scale,” said Karthik Rajkumar, an applied research scientist at LinkedIn who was one of the study’s co-authors. “No one was put at a disadvantage to find a job.”

Sinan Aral, a management and data science professor at MIT who was the lead author of the study, said LinkedIn’s experiments were an effort to ensure that users had equal access to employment opportunities.

“To do an experiment on 20 million people and to then roll out a better algorithm for everyone’s job prospects as a result of the knowledge that you learn from that is what they are trying to do,” Aral said, “rather than anointing some people to have social mobility and others to not.” (Aral has conducted data analysis for The New York Times, and he received a research fellowship grant from Microsoft in 2010.)

Experiments on users by big internet companies have a checkered history. Eight years ago, a Facebook study was published describing how the social network had quietly manipulated what posts appeared in users’ News Feeds in order to analyze the spread of negative and positive emotions on its platform. The weeklong experiment, conducted on 689,003 users, quickly generated a backlash.

The Facebook study, whose authors included a researcher at the company and a professor at Cornell University, contended that people had implicitly consented to the emotion manipulation experiment when they had signed up for Facebook. “All users agree prior to creating an account on Facebook,” the study said, “constituting informed consent for this research.”

Critics disagreed, with some assailing Facebook for having invaded people’s privacy while exploiting their moods and causing them emotional distress. Others maintained that the project had used an academic co-author to lend credibility to problematic corporate research practices.

Cornell later said its internal ethics board had not been required to review the project because Facebook had independently conducted the study and the professor, who had helped design the research, had not directly engaged in experiments on human subjects.

The LinkedIn professional networking experiments were different in intent, scope and scale. They were designed by LinkedIn as part of the company’s continuing efforts to improve the relevance of its “People You May Know” algorithm, which suggests new connections to members.

The algorithm analyzes data like members’ employment history, job titles and ties to other users. Then it tries to gauge the likelihood that a LinkedIn member will send a friend invite to a suggested new connection as well as the likelihood of that new connection accepting the invite.
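
In outline, that is a two-sided prediction problem: score each candidate by the chance an invitation is sent times the chance it is accepted, then rank. The sketch below is a toy version under that reading of the article; the feature names, weights and model are invented, not LinkedIn's:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    member_id: str
    shared_employers: int    # overlap in employment history
    same_title: bool         # matching job titles
    mutual_connections: int  # ties to other users in common

def recommendation_score(c: Candidate) -> float:
    """Toy stand-in for the two probabilities the article describes:
    P(member sends an invite) * P(candidate accepts it)."""
    p_send = min(1.0, 0.05 + 0.02 * c.mutual_connections + (0.1 if c.same_title else 0.0))
    p_accept = min(1.0, 0.10 + 0.05 * c.shared_employers + 0.01 * c.mutual_connections)
    return p_send * p_accept

candidates = [
    Candidate("a", shared_employers=1, same_title=True, mutual_connections=12),
    Candidate("b", shared_employers=0, same_title=False, mutual_connections=25),
]
# Rank "People You May Know" suggestions by expected connection probability.
for c in sorted(candidates, key=recommendation_score, reverse=True):
    print(c.member_id, round(recommendation_score(c), 3))
```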

For the experiments, LinkedIn adjusted its algorithm to randomly vary the prevalence of strong and weak ties that the system recommended. The first wave of tests, conducted in 2015, “had over 4 million experimental subjects,” the study reported. The second wave of tests, conducted in 2019, involved more than 16 million people.

During the tests, people who clicked on the “People You May Know” tool and looked at recommendations were assigned to different algorithmic paths. Some of those “treatment variants,” as the study called them, caused LinkedIn users to form more connections to people with whom they had only weak social ties. Other tweaks caused people to form fewer connections with weak ties.
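
One plausible way to implement such treatment variants (the study does not publish LinkedIn's code) is to reweight the ranked suggestions toward or away from weak ties before display. In this hypothetical sketch, a tie counts as weak at ten or fewer mutual connections, echoing the cutoff the study used to describe weak ties; the boost factors are invented:

```python
def rerank_by_variant(suggestions, variant_weak_boost: float):
    """Reorder suggestions so a variant surfaces more (or fewer) weak ties.

    suggestions: list of (member_id, base_score, mutual_connections).
    variant_weak_boost > 1 surfaces more weak ties; < 1 surfaces fewer.
    """
    def adjusted(s):
        member_id, base_score, mutuals = s
        weight = variant_weak_boost if mutuals <= 10 else 1.0
        return base_score * weight
    return sorted(suggestions, key=adjusted, reverse=True)

suggestions = [("weak-tie", 0.40, 4), ("strong-tie", 0.45, 30)]
print(rerank_by_variant(suggestions, variant_weak_boost=1.5))  # weak tie ranked first
print(rerank_by_variant(suggestions, variant_weak_boost=0.5))  # strong tie ranked first
```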

Whether most LinkedIn members understand that they could be subject to experiments that may affect their job opportunities is unknown.

LinkedIn’s privacy policy says the company may “use the personal data available to us” to research “workplace trends, such as jobs availability and skills needed for these jobs.” Its policy for outside researchers seeking to analyze company data clearly states that those researchers will not be able to “experiment or perform tests on our members.”

But neither policy explicitly informs consumers that LinkedIn itself may experiment or perform tests on its members.

In a statement, LinkedIn said, “We are transparent with our members through our research section of our user agreement.”

In an editorial statement, Science said, “It was our understanding, and that of the reviewers, that the experiments undertaken by LinkedIn operated under the guidelines of their user agreements.”

After the first wave of algorithmic testing, researchers at LinkedIn and MIT hit upon the idea of analyzing the outcomes from those experiments to test the theory of the strength of weak ties. Although the decades-old theory had become a cornerstone of social science, it had not been rigorously proved in a large-scale prospective trial that randomly assigned people to social connections of different strengths.

The outside researchers analyzed aggregate data from LinkedIn. The study reported that people who received more recommendations for moderately weak contacts generally applied for and accepted more jobs — results that dovetailed with the weak-tie theory.

In fact, relatively weak contacts — that is, people with whom LinkedIn members shared only 10 mutual connections — proved much more productive for job hunting than stronger contacts with whom users shared more than 20 mutual connections, the study said.

A year after connecting on LinkedIn, people who had received more recommendations for moderately weak-tie contacts were twice as likely as those who had received more strong-tie recommendations to land jobs at the companies where those acquaintances worked.

“We find that these moderately weak ties are the best option for helping people find new jobs and much more so than stronger ties,” said Rajkumar, the LinkedIn researcher.

The 20 million users involved in LinkedIn’s experiments created more than 2 billion new social connections and completed more than 70 million job applications that led to 600,000 new jobs, the study reported. Weak-tie connections proved most useful for job seekers in digital fields like artificial intelligence, while strong ties proved more useful for employment in industries that relied less on software, the study said.

LinkedIn said it had applied the findings about weak ties to several features, including a new tool that notifies members when a first- or second-degree connection is hiring. But the company has not made study-related changes to its “People You May Know” feature.

Aral of MIT said the deeper significance of the study was that it showed the importance of powerful social networking algorithms, not just as amplifiers of problems like misinformation but also as fundamental indicators of economic conditions like employment and unemployment.

Catherine Flick, a senior researcher in computing and social responsibility at De Montfort University in Leicester, England, described the study as more of a corporate marketing exercise.

“The study has an inherent bias,” Flick said. “It shows that, if you want to get more jobs, you should be on LinkedIn more.”

© 2022 THE NEW YORK TIMES COMPANY
