There is nothing new about "chatbots" that are capable of maintaining a conversation in natural language, understanding a user's basic intent, and offering responses based on preset rules and data. But the capacity of such chatbots has been dramatically augmented in recent months, leading to handwringing and panic in many circles.
Much has been said about chatbots auguring the end of the traditional student essay. But an issue that warrants closer attention is how chatbots should respond when human interlocutors use aggressive, sexist, or racist remarks to prompt the bot to present its own foul-mouthed fantasies in return. Should AIs be programmed to answer at the same level as the questions that are being posed?
If we decide that some kind of regulation is in order, we must then determine how far the censorship should go. Will political positions that some cohorts deem "offensive" be prohibited? What about expressions of solidarity with West Bank Palestinians, or the claim that Israel is an apartheid state (which former US president Jimmy Carter once put into the title of a book)? Will these be blocked as "anti-Semitic"?
The problem does not end there. As the artist and writer James Bridle warns, the new AIs are "based on the wholesale appropriation of existing culture" and the belief that they are "actually knowledgeable or meaningful is actively dangerous". Hence, we should also be very wary of the new AI image generators.
In his 1805 essay "On the gradual formation of thoughts in the process of speech" (first published posthumously in 1878), the German poet Heinrich von Kleist inverts the common wisdom that one should not open one's mouth to speak unless one has a clear idea of what to say: "If therefore a thought is expressed in a fuzzy way, then it does not at all follow that this thought was conceived in a confused way. On the contrary, it is quite possible that the ideas that are expressed in the most confusing fashion are the ones that were thought out most clearly."
The problem is not that chatbots are stupid; it is that they are not "stupid" enough. It is not that they are naive (missing irony and reflexivity); it is that they are not naive enough (missing when naivety is masking perspicacity). The real danger, then, is not that people will mistake a chatbot for a real person; it is that communicating with chatbots will make real persons talk like chatbots -- missing all the nuances and ironies, obsessively saying only precisely what one thinks one wants to say.
When I was younger, a friend went to a psychoanalyst for treatment following a traumatic experience. This friend's idea of what such analysts expect from their patients was a cliché, so he spent his first session delivering fake "free associations" about how he hated his father and wanted him dead. The analyst's reaction was ingenious: he adopted a naive "pre-Freudian" stance and reproached my friend for not respecting his father ("How can you talk like that about the person who made you what you are?"). This feigned naivety sent a clear message: I don't buy your fake "associations". Would a chatbot be able to pick up on this subtext?
Most likely, it would not, because it resembles Prince Myshkin in Dostoyevsky's The Idiot, at least as the theologian Rowan Williams interprets him. According to the standard reading, Myshkin, "the idiot", is a saintly, "positively good and beautiful man" who is driven into isolated madness by the harsh brutalities and passions of the real world. But in Williams's radical re-reading, Myshkin represents the eye of a storm: good and saintly though he may be, he is the one who triggers the havoc and death that he witnesses, owing to his role in the complex network of relationships around him.
It is not just that Myshkin is a naive simpleton. It is that his particular kind of obtuseness leaves him unaware of his disastrous effects on others. He is a flat person who literally talks like a chatbot. His "goodness" lies in the fact that, like a chatbot, he reacts to challenges without irony, offering platitudes bereft of any reflexivity, taking everything literally and relying on a mental auto-complete rather than authentic idea-formation. For this reason, the new chatbots will get along very well with ideologues of all stripes, from today's "woke" crowd to "MAGA" nationalists who prefer to remain asleep.

©2023 Project Syndicate
Slavoj Žižek, Professor of Philosophy at the European Graduate School, is International Director of the Birkbeck Institute for the Humanities at the University of London and the author, most recently, of 'Heaven in Disorder' (OR Books, 2021).