Eight more companies today signed up to the White House’s voluntary commitments on safe AI development (for context, here’s Jeremy Kahn’s skeptical take on the launch of the commitments back in July). Among them is Adobe, which also used the occasion to put more flesh on the bones of its proposal for a federal anti-impersonation law.
Dana Rao, Adobe’s EVP and general counsel, first revealed the proposal in mid-July, telling Axios and then a Senate Judiciary subcommittee that creators should be able to seek statutory damages from people who use AI to impersonate them or their style for commercial gain.
Rao expanded on the idea in a blog post today, in which he pointed out that copying someone’s work may be a copyright infringement, “but copyright doesn’t cover style.” That hasn’t been a problem until now, he argued, because closely copying someone’s style has required a lot of skill and time—but that all changes in a world of generative AI.
Adobe’s own Firefly genAI tool isn’t particularly prone to “style impersonation” because it’s been trained only on “our own licensed Adobe Stock images, other works in the public domain, moderated generative AI content, and work that is openly licensed by the rightsholder,” Rao claimed, adding that “other tools out there that are primarily trained off the web” would be the legislation’s main target.
Indeed, Adobe seems sure enough of Firefly’s safety in this regard that it last month promised enterprise customers it would cover their legal bills if anyone sues them over the copyright implications of their Firefly-generated output. Nonetheless, the “Federal Anti-Impersonation Right (FAIR) Act” that Adobe is proposing would broadly benefit the company and its peers by making the impersonator, rather than the tool’s vendor, the target of any legal action.
As it happens, Wired yesterday published a good Steven Levy interview with Sundar Pichai, in which the Google boss revisited the subject of new AI laws. Although he’s previously been quite vocal in calling for a new AI regulatory framework, Pichai said many deployment scenarios are already covered by existing rules, such as the need for FDA approval in the medical realm. Passing federal privacy legislation should be a higher priority for the U.S. than a new AI law, he added, because “in privacy, AI raises the stakes even more.”
(Google was in the first tranche to sign up for the White House commitments, along with Meta, Microsoft, OpenAI, Amazon, Anthropic, and Inflection. Alongside Adobe in the second round are Nvidia, IBM, Salesforce, Stability, Palantir, Cohere, and Scale AI.)
The Biden administration noted in today’s announcement that an executive order was on its way to “protect Americans’ rights and safety,” and said it would “continue to pursue bipartisan legislation to help America lead the way in responsible AI development.” For what it’s worth, Biden has also repeatedly called for the passage of federal privacy legislation, but given the furor around AI—there are a few congressional hearings on the matter happening this week alone—I think lawmakers’ priorities lie elsewhere.
More news below.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
David Meyer