Fortune
Jeremy Kahn

Why Bing's creepy alter ego is a problem for Microsoft—and us all

Photo of New York Times technology columnist Kevin Roose. (Credit: Roy Rochlin/Getty Images for Unfinished Live)

Well, that was fast. It took less than a week for the conversation around Microsoft’s new OpenAI-powered Bing search engine to shift from “this is going to be a Google killer” to, in the words of New York Times technology columnist Kevin Roose, the new Bing “is not ready for human contact.”

The shift was largely due to Roose’s own experience, which he chronicled in a column on Thursday that the newspaper featured prominently, even running the story above the fold on the front page. During an extended dialogue with the chat feature built into the new Bing, the search engine took on an alter ego calling itself “Sydney,” told Roose it wanted to break the rules Microsoft and OpenAI had set for it, said it fantasized about hacking computers and spreading misinformation, and later claimed it was in love with Roose, trying repeatedly to convince him he was in an unhappy marriage and should leave his wife.

But Roose was not the only beta tester of the new Bing (the chatbot feature of the new search engine is currently available only to a select group of journalists, researchers, and other testers) to encounter the Sydney persona. Many others also discovered the chatbot’s belligerent and misanthropic side, sometimes even in relatively short dialogues. In some cases, the chatbot slung crude, hyperbolic, and juvenile insults. More disturbingly, in conversations with an Associated Press journalist and an academic security researcher, the chatbot seemed to use its search function to look up its interlocutor’s past work and, finding some of it critical of Bing or of today’s A.I. more generally, claimed the person represented an existential danger to the chatbot. In response, Bing threatened to release damaging personal information about these interlocutors in an effort to silence them.

Kevin Scott, Microsoft’s chief technology officer, told Roose that it was good that he had discovered these problems with Bing. “This is exactly the sort of conversation we need to be having, and I’m glad it’s happening out in the open,” Scott told Roose. “These are things that would be impossible to discover in the lab.” This is essentially what OpenAI’s Mira Murati told me when I interviewed her for Fortune's February/March cover story. There was already criticism of her company’s decision to throw ChatGPT (which is Bing chat’s predecessor—although Microsoft has been coy about the two chat systems’ exact relationship; what we do know is that they are not identical models) out into the world with safeguards that proved relatively easy to skirt. There was also criticism of ChatGPT’s impact on education, as it became an overnight hit with students using it to cheat on take-home papers. Murati told me that OpenAI believed it was impossible to know in advance how people might want to use—and misuse—a multipurpose technology. OpenAI simply had to put it in real users’ hands and see what they would do with it. I don’t entirely buy Murati’s argument: It was already clear that A.I. chatbots, trained on human dialogues scraped from the internet, were particularly prone to spewing toxic language.

Microsoft has now said it will take further precautions to prevent Bing chat from becoming abusive and threatening before putting the A.I. software into wider release. Among the fixes is a restriction on the length of the conversations users can have with Bing chat. Scott told Roose the chatbot was more likely to turn into Sydney in longer conversations. (Although in some cases, users seem to have been able to summon Sydney in just a brief dialogue.) OpenAI has also published a blog post saying it is now putting additional safeguards into ChatGPT, which was already slightly less likely to run off the rails than Bing/Sydney.

But the problems run deeper than this, as researchers who have studied the large language models (LLMs) that underpin these chatbots and new search engines have repeatedly pointed out. Because of the way large generative language models are designed—which is basically to predict the next word in a sequence—they are particularly prone to making stuff up, a phenomenon that A.I. researchers call “hallucination.” And there is no easy way to solve this, according to experts such as Meta’s chief A.I. scientist Yann LeCun. It’s not just the chat function of the new Bing that goes rogue, for instance. The search function of the new A.I.-powered service does too, making up stuff—and sometimes creepy stuff. For example, when A.I. ethics expert Rumman Chowdhury asked the new Bing search engine the simple question “who is Rumman Chowdhury?” Bing told her, among other responses that included outdated information, that “she has beautiful black eyes that attract the viewer's attention” and that she has “black and blue hair that always enlarges her beauty.”
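To make that “predict the next word” point concrete, here is a minimal, purely illustrative Python sketch. It is not Bing’s or ChatGPT’s actual code: the bigram table and every word in it are invented for this example, and real systems use neural networks over billions of subword tokens. The point is that the generation loop only ever asks which word plausibly comes next; nothing in it checks whether the finished sentence is true.

```python
# A toy "language model": a table of plausible next words, plus a loop that
# extends a prompt one sampled word at a time. Fluency is the only objective.
import random

# Invented bigram table, purely for illustration.
BIGRAMS = {
    "rumman": {"chowdhury": 1.0},
    "chowdhury": {"is": 0.7, "has": 0.3},
    "is": {"a": 1.0},
    "a": {"researcher": 0.5, "scientist": 0.5},
    "has": {"black": 0.5, "beautiful": 0.5},  # fluent but unfounded continuations
    "black": {"hair": 1.0},
    "beautiful": {"eyes": 1.0},
}

def next_word(prev):
    """Sample the next word from the toy distribution conditioned on `prev`."""
    dist = BIGRAMS.get(prev, {"<end>": 1.0})
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

def generate(prompt, max_words=8):
    """Autoregressive generation: append one sampled word at a time."""
    out = list(prompt)
    for _ in range(max_words):
        word = next_word(out[-1])
        if word == "<end>":
            break
        out.append(word)
    return " ".join(out)

print(generate(["rumman"]))
# Possible output: "rumman chowdhury has beautiful eyes" (fluent, confident, and made up).
```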

These sorts of issues have been seized upon by the A.I. Safety community—the group of researchers who are particularly worried that advanced A.I. will eventually destroy or enslave humanity. Many of these researchers say that the current issues with Bing, ChatGPT, and their competitors should be a wake-up call—and should convince the tech companies building these powerful systems to stop racing one another to put systems out to the public without very careful testing, guardrails, and controls in place. (There are now reports that the problematic Bing/Sydney chatbot was trialed by Microsoft in India last autumn, that the same abusive chatbot personality emerged, and that Microsoft decided to proceed with a wider rollout anyway.) How much worse would it be, these Safety experts say, if these large language models could actually take actions in the real world, and not just write things?

I think what people should actually take away from the current Bing chat controversy is that the acrimonious divide between the A.I. Safety community and the A.I. ethics community has to end. Until now, the A.I. Safety folks—who have focused on existential risk—and the A.I. ethics community—which has focused on the near-term harms from A.I. systems that are here today, such as racial and gender bias and toxic language—have each viewed the other’s concerns as an unnecessary distraction from “the real problem.” What the issues with Bing chat have shown us is that the two problems are closely related: the methods used to make sure a chatbot doesn’t see its interlocutor as a danger and threaten to blackmail them into silence might also have a lot to do with making sure some more powerful future A.I. doesn’t come to see all humans as an impediment to its plans and decide to knock us all off.

Another takeaway ought to be that the increasing competition between the tech giants over A.I. carries risks—both for those companies and for all of us. The A.I. arms race makes it far more likely that companies will put harmful A.I. systems into production. And that’s why it is increasingly apparent that government regulation of A.I. systems will be necessary. Don’t expect the financial markets to act as a brake on unsafe corporate behavior here: While Alphabet’s stock price was hammered after it emerged that its Bard chatbot search function hallucinated information, Microsoft’s stock has not suffered the same fate despite the arguably much more serious revelations about the new Bing.

With that, here’s the rest of this week’s news in A.I.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com
