The article is here; here is the Introduction:
Careless speech has always existed on a very large scale. When people talk, they often give bad advice or wrong information. The scale was made more visible by the public Internet, as the musings and conversations of billions of participants became accessible and searchable to all. This dynamic produced a set of tort and free speech principles that we have debated and adjusted to over the last three decades. AI speech systems bring a new dynamic. Unlike the disaggregated production of misinformation in the Internet era, much of the production will be centralized and supplied by a small number of deep-pocketed, attractive defendants (namely, OpenAI, Microsoft, and other producers of sophisticated conversational AI programs). When should these companies be held liable for negligent speech produced by their programs? And how should the existence of these programs affect liability between other individuals?
This essay begins to work out the options that courts or legislatures will have. I will explore a few hypotheticals that are likely to arise frequently, and then plot out the analogies that courts may make to existing liability rules. The essay focuses on duty: that is, whether, under traditional tort principles (which have historically accommodated and absorbed First Amendment principles), courts should even entertain a case. Where there is no duty, a claim will fail early even if the plaintiff would be able to prove a lack of reasonable care, factual and legal causation, and damages.
In the end, I conclude that existing duty rules, if not modified for the AI context, could wind up missing the mark for optimal deterrence. They can be too broad, too narrow, or both at the same time, depending on how courts decide to draw their analogies.