
Michael Cohen unknowingly used AI-generated fake legal information in court filing


Michael Cohen, a former attorney to President Trump, recently found himself embroiled in yet another controversy amid his ongoing legal battles. Cohen admitted to unknowingly using an artificial intelligence tool to acquire fake legal information, which he then passed on to his attorney and included in a court filing seeking to end his supervised release.

In an unsealed court filing, Cohen stated that, as a non-lawyer, he was not aware of the emerging risks and trends in legal technology. He claimed to be oblivious to the fact that the tool he used, known as Google Bard, was a generative text service capable of producing seemingly authentic citations and descriptions that were actually fabricated. Regrettably, Cohen's lack of awareness led to him unwittingly submitting false information to a judge.

While this incident may be perceived as just another blunder in Cohen's tumultuous career, he is neither the first nor the last person to mistakenly rely on AI-generated content in legal proceedings. According to experts, such tools are readily available and accessible to anyone, with no adequate vetting processes in place.

These AI tools, including Google Bard and ChatGPT, lack sufficient guardrails to prevent individuals from unknowingly submitting erroneous information to courts. Although Bard displays a fine-print notice cautioning users about potential inaccuracies, it often goes unnoticed or is underestimated. The absence of stringent regulations surrounding the use of AI-generated content raises the likelihood of similar incidents occurring in the future.

Even though Cohen, as a non-lawyer, may be more vulnerable to being misled by these AI tools, it is crucial to recognize that the technology is far from infallible for anyone. As demonstrated by a recent experiment, even when consulting AI on less consequential matters, such as determining emergency contact priorities in a hypothetical life-or-death situation during a football game, it yielded ambiguous and potentially unreliable responses.

'Hallucination' is the term coined for false information generated by AI systems. It serves as a reminder that these tools reflect the imperfections of their human creators. Caution and critical evaluation are necessary when using AI-generated content, even for individuals with legal experience like Michael Cohen.

Ultimately, Cohen's unintentional submission of fabricated information highlights the need for greater awareness and caution when employing AI tools. Without proper scrutiny, these technologies can undermine the integrity of legal proceedings. As society continues to grapple with the ethical implications of AI, it is imperative that individuals understand the limitations of these innovative but imperfect tools and exercise prudent judgment when engaging with them.
