Foreign Policy
Comment
Sasha Polakow-Suransky

Can ChatGPT Explain Geopolitics?

The correct answer: Essay One was written by Lauren Grachuk, a freshman at the University of Massachusetts Amherst; Essay Two was written by GPT-4. See our annotations below on how it could improve.

Essay Two

The Annexation of Crimea: A Precursor to Larger Conflict in Ukraine

By GPT-4

In the geopolitical chess game, the annexation of Crimea by Russia in 2014 marked a significant shift in power dynamics. This event, far from being a mere territorial dispute, signaled a potential resurgence of Russian expansionism and effectively undermined the post-Cold War order. This essay argues that the annexation of Crimea paved the way for a larger war in Ukraine by destabilizing the region, emboldening Russian assertiveness, and exposing the weakness of the international community’s response.

The annexation of Crimea was a calculated move that destabilized Ukraine and the region. Crimea, with its strategic location and large Russian-speaking population, was an ideal target for Russia’s expansionist ambitions. The annexation, by altering Ukraine’s territorial integrity and sovereignty, triggered a domino effect, destabilizing the Donbas region and leading to a protracted conflict in eastern Ukraine.

By annexing Crimea, Russia demonstrated its readiness to use military force to achieve geopolitical goals. This action emboldened Russia, escalating its assertiveness in the region. The annexation was not an isolated incident but part of a larger pattern of Russian aggression, which was further evidenced by its direct and indirect involvement in the war in eastern Ukraine.

The international response to the annexation of Crimea exposed the limitations and weaknesses of existing mechanisms to prevent such violations. Despite strong condemnations and the imposition of sanctions by the West, Russia faced no significant military or political repercussions. This lack of effective deterrence has contributed to the subsequent escalation of the conflict in Ukraine.

The annexation of Crimea, therefore, created a dangerous precedent. What began as a territorial dispute evolved into a larger war, challenging the established norms of international relations and posing profound questions about the future of peace and security in the region.

The destabilization of Ukraine following the annexation of Crimea fueled separatist sentiments in the Donbas region, leading to a violent and protracted conflict. The annexation also created a power vacuum in the region, enabling the rise of various armed groups and intensifying the conflict.

The assertiveness displayed by Russia during the annexation of Crimea has further escalated the conflict in Ukraine. By demonstrating its willingness to use force to alter borders, Russia sent a clear message to the international community about its geopolitical ambitions. This brazen display of force was a catalyst for further aggression.

Finally, the inadequacy of the international response to the annexation of Crimea has exacerbated the conflict in Ukraine. The imposition of economic sanctions by the West has not deterred Russia from pursuing its aggressive policies.

The annexation of Crimea by Russia marked a turning point in the conflict in Ukraine. It destabilized the region, emboldened Russia, and exposed the limitations of the international community. As such, it paved the way for a larger war in Ukraine, posing a profound threat to peace and security in the region.

From left: Images produced by Stable Diffusion, Midjourney, and DALL·E when given the prompt “Editorial illustration showing how Russia’s annexation of Crimea led to wider war in Ukraine.”

Like ChatGPT, AI image tools lack critical thinking and empathy—skills needed to turn complicated topics into original conceptual illustration. Increasingly, however, these tools can produce convincing knockoffs and deepfakes, especially when given prescriptive prompts, such as which visual metaphor to employ or which artists to mimic. This February, the U.S. Copyright Office ruled that illustrations produced for a comic book by the AI program Midjourney were not eligible for copyright protection. “A person who provides text prompts to Midjourney does not ‘actually form’ the generated images and is not the ‘master mind’ behind them,” the office wrote.—Lori Kelley, FP creative director

FP’s Verdict

By Sasha Polakow-Suransky

ChatGPT, for all its amusing poetic output, riddle-solving prowess, and on-demand composition, is essentially doing one thing: spitting out what it considers the next most likely word in a sequence.

“Generative AI systems are quite good at mimicking the patterns of human language,” said Sarah Myers West, the managing director of the AI Now Institute and a former advisor on AI to the U.S. Federal Trade Commission. But, she added, “they lack any of the context or depth of human understanding and often are trained on static data sets that aren’t up to date with events in the world.” The result? “They serve more as [a] picture of what people said on the internet up until a year or two ago than they are a meaningful reflection of the reality we live in.”

When instructed to write in the style of a widely published author, such as the late Christopher Hitchens, ChatGPT and other large language models can be very good mimics. When it comes to producing original analytical content, though, they struggle—and the tone can seem vague or overly generalized.

As Flynn Coleman, an international human rights lawyer and the author of A Human Algorithm: How Artificial Intelligence Is Redefining Who We Are, told Foreign Policy via email: “These tools are not capable of original, authentic, or empathetic human thought.”

“They cannot replicate the creativity, nuance, and critical thinking that we possess, nor can they generate and interrogate original arguments,” she added.

While the chatbot is capable of self-improvement and correction, its writing and rewriting are formulaic rather than imaginative. Its limitations were evident in this assignment—as when we instructed it, in an earlier interaction, to rewrite a passage arguing for a negotiated cease-fire while taking into account possible Ukrainian objections. Given this prompt, it was not able to produce any genuine analysis of how or why Ukraine might object. GPT-4 instead simply modified the text mechanically, urging Western nations to “respect Ukraine’s sovereign decisions.”

In academia, there are well-founded fears that AI-generated content won’t be detectable by existing tools such as plagiarism software. Students are already using tools such as ChatGPT to produce essays that aren’t original but could still earn a passing grade. Paul Musgrave, an assistant professor of political science at the University of Massachusetts Amherst, helped facilitate this project by asking his students to submit essays; we chose to feature one by undergraduate Lauren Grachuk. He observed that “it’s a great machine for regurgitating the conventional wisdom, and like all conventional wisdom, it’s imprecise and unfounded.” Still, he said, “the thing about all of this for me is how easy it is for ChatGPT to get a C or a B … but how hard it is to get an A or even a B+.”

The reason ChatGPT has not yet cleared that bar has to do with its inability to detect or test what is true or false. In March, linguists Noam Chomsky and Ian Roberts and AI expert Jeffrey Watumull wrote an essay in the New York Times pointing out that current large language models cannot go beyond description and prediction and, as such, “are stuck in a prehuman or nonhuman phase of cognitive evolution.”

As David Schardt noted in a March article for the Center for Science in the Public Interest, “even when provided with accurate information, ChatGPT can get it wrong. Sometimes it puts words, names, and ideas together that appear to make sense but actually don’t belong together.” Indeed, many users have catalogued references to articles that don’t exist and fake legal case citations.

As Chomsky and his colleagues wrote, “machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.”

Some of ChatGPT’s forays into fiction quickly became evident during our interactions with the chatbot while preparing this feature. In an earlier iteration of the exercise, GPT-4 fabricated some sources with real authors on plausible topics in plausible journals—but the actual titles and dates provided led to articles that didn’t exist; in other cases, GPT-4 provided realistic-looking links to JSTOR with authentic citations, yet a reference to a real book about Crimea published in 2010 came with a link that led to an article from 1950 on polynomials in a Scandinavian mathematics journal.

(The model does appear to be learning, however. Eight weeks later, most of these hallucinations seemed to have subsided; in the article we feature, it provided a genuine list of references to real articles on the topic of Crimea and Ukraine.)

The failure to distinguish truth from falsehood, and the tendency to generate hallucinated content that is then presented and accepted online as reliable information, have more sinister implications. There are, for instance, fears that as some news and publishing outlets experiment with using large language models, false AI-produced content could flood the internet and that future models feeding on that data set will replicate and propagate falsehoods, making it increasingly difficult to discern fact from fiction in online sources.

Those risks increase when it comes to AI-generated images and videos, which have an arguably greater capacity to misrepresent reality and deceive viewers—especially in the event of deepfake videos or shocking AI-generated images of public figures emerging, say, at the height of a political campaign. Chris Meserole and Alina Polyakova presciently addressed this topic in Foreign Policy in 2018, noting that such images are difficult to counter because “the algorithms that generate the fakes continuously learn how to more effectively replicate the appearance of reality.”

These are still early days for large language models, and the pace of development is extremely rapid. “The reality is that these tools aren’t going anywhere and will only grow in popularity—Pandora’s box has been opened,” Coleman said.

This story also appears in the Summer 2023 issue of Foreign Policy.
