Nvidia and Adobe have both joined the generative A.I. party, with platforms that target the enterprise (Nvidia’s Picasso, a cloud service for developing and deploying generative A.I. applications) and creators (Adobe’s Firefly family of models, which will be incorporated across apps like Photoshop and Premiere). The companies will work together on this—they’ll co-develop new generative A.I. models, and Firefly is partially hosted on Picasso—and have accordingly come up with a joint approach to the thorny issue of copyright.
One of the most controversial aspects of the generative A.I. explosion has been the training of these models on preexisting content—both the written word, which has resulted in news organizations complaining to OpenAI, and imagery, which has led Getty Images to sue Stability AI. This is a novel area of copyright law, and some argue there’s no requirement to ask a copyright holder’s permission before training a system on their work. But Adobe and Nvidia are very keen to reassure their customers that their tools are legally sound for commercial use.
The key, they hope, is to limit what their models are trained on. The first Firefly model was trained on licensed and out-of-copyright content, as well as on Adobe’s own Stock imagery repository, with Adobe promising to compensate Stock contributors in some way (details to be confirmed once Firefly exits beta). Meanwhile, the models offered through Nvidia’s Picasso were trained on imagery from Getty Images, Shutterstock, and Adobe Stock, and the chip giant also intends to pay royalties.
Getty’s Nvidia tie-up is “testament to the feasibility of a path of responsible A.I. development,” Getty Images CEO Craig Peters told Reuters.
Separately, the Writers Guild of America has made proposals about how A.I.-generated scripts should be credited. Rather than seeking a ban, the Guild wants scriptwriters to be able to use generative A.I. and still claim credit for the resulting script. It remains to be seen what the studios think of the idea, but attributing such scripts to humans may be the smart option: last week the U.S. Copyright Office issued guidance reiterating the longstanding principle that a work must have human authorship to be protected by copyright. (Remember the monkey selfie case? The same principle applies.)
The office said simply feeding a generative A.I. model some prompts doesn’t count as human authorship, but a work could still qualify for copyright if a human selects or arranges A.I.-generated material “in a sufficiently creative way,” or modifies the material to a certain degree. But even in such cases, “copyright will only protect the human-authored aspects of the work.” How will that work with partially A.I.-generated movie scripts or, for that matter, with the A.I.-generated illustrations that Adobe has been accepting into its Stock repository since December? ¯\_(ツ)_/¯
There’s still so much to work out regarding A.I.’s place in our creative and legal systems and, as with everything else in the nascent field, new ideas will swiftly undergo trial by fire in the real world. More news below, but do also check out this piece of research that shows how professional Go players developed innovative tactics after being bested by A.I. systems—an inspiring reminder that we too can evolve.
Want to send thoughts or suggestions to Data Sheet? Drop me a line here.
David Meyer
Data Sheet’s daily news section was written and curated by Andrea Guzman.