Google’s development of a generative A.I. tool designed to write news stories, which it has already pitched to The New York Times and The Washington Post, raises urgent questions: Is there a role for A.I. in newsrooms? And if so, what effect will the technology have on journalism and, by extension, democracy?
If journalism embraces generative A.I., the public can never again be sure that what they read under even the most respected mastheads, whether in print or digital, was written by a human being who is honestly reporting the facts.
On the other hand, if journalism embraces assistive A.I., which can help with everything except generating entirely new stories, journalists may find they have more time to research, write, and report. That could be a boon for the profession and, by extension, for the democratic process, which depends on well-informed voters.
It is essential that we educate the public about the difference between assistive and generative A.I. News outlets must be transparent with their readers about which type of A.I. they will allow into the newsroom. Because generative A.I. is prone to lying (or “hallucinating,” as it is euphemistically called), such disclosures will be the only way readers can know which sources to trust.
Yet as of this writing, only one major newspaper in the English-speaking world, The Financial Times, has pledged that what it prints will not be written by machines.
We can only assume that the question remains open at other publications. Yet readers need to know, and they need to know now: not only before the 2024 presidential election in the U.S., but before the Internet is irrevocably transformed by A.I.
Some estimates suggest that by 2026, up to 90% of what we see on our screens will be the product of generative A.I., a technology not fully understood even by its creators. Once that happens, we will never again know for certain whether a piece of content was created by a human being. News publications need to be the exception: a refuge where readers can go to seek out the truth.
Before the Internet, the difference between content made to persuade and content made to inform was clear. In many cases, the size and prominence of the word “Advertisement” in the pages of a print newspaper were mandated by law. Two centuries of such laws, coupled with commercial protections for newspapers and an education system geared toward mass literacy, were upended when newspapers migrated online.
On the Internet, readers accessed news through aggregators and search engines, not just publications’ websites, raising important questions. Are search platforms publishers? Are search results subject to freedom of speech and editorial ethics? Social media, where the public increasingly gets its news, complicated these questions further. The addition of generative A.I. threatens to blur these distinctions even more.
We have not had the luxury of time to answer these questions. The tech companies that are changing our information environment move faster than journalism, and much faster than government. The consequences of being so outpaced have been dire for both. We have seen newsrooms roiled and reduced, with local news hit especially hard. We have seen the U.S. government attacked by a small portion of its citizens radicalized by social media, where, despite some efforts by the platforms, no clear border is drawn between good-faith reportage and propaganda.
As a profession, journalism cannot afford to put off deciding how A.I. will be used. ChatGPT, the most famous generative A.I., has a good claim to be the most rapidly adopted technology in history, reaching 100 million users in eight weeks.
By contrast, it took Facebook four years to reach that number, personal computers about 15 years to reach a comparable share of the U.S. population, and telephones nearly a century. The time for newsrooms to decide is upon us, and the potential risks and benefits of this technology are expanding by the week.
At stake is the enduring character of our public conversation. If legacy news media rush to adopt generative A.I., they will do a tremendous disservice both to the truth and to the potential of A.I. technology to work for the public good.
While assistive A.I. can free newsrooms to do more truth-seeking reporting and boost reader engagement, generative A.I. would sow confusion. Left unchecked, it risks leaving the public so exhausted by an unreliable information environment that they rush toward any falsehood or ideology that offers comfort.
Under these conditions, as Hannah Arendt wrote in The Origins of Totalitarianism, “One could make people believe the most fantastic statements one day, and trust that if the next day they were given irrefutable proof of their falsehood, they would take refuge in cynicism.”
If our remaining journalistic institutions make clear, upfront decisions about which A.I.-based tools to adopt, we can avoid a further slide toward cynicism and the decay of our democratic process that would inevitably follow.
Josh Brandau is the CEO of Nota. He’s the former CMO/CRO of the Los Angeles Times.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.