Adobe has unveiled its new artificial intelligence video generation model: Firefly Video, offering the ability to create short clips from text, image, or video.
The new model will be integrated into a future version of Premiere Pro and will also be available as a standalone tool, similar to the way Adobe rolled out the Firefly AI image generator inside Photoshop and as a standalone app. Whether or not it becomes one of the best AI video generators remains to be seen.
Adobe worked with creative professionals and the video editing community when creating the video model, and only trained it on "commercially safe" videos it had permission to use. This is the same approach Adobe took with the Firefly image model.
Unlike other video models, Adobe has focused on augmenting human-created content, with features such as clip extension and generating alternative perspectives. It will also be able to generate standalone clips from images or text.
What is Adobe Firefly Video?
Firefly Video will be available later this year on the Adobe website, as well as within Adobe Premiere Pro for the generative clip extension feature.
With the text-to-video model, you'll be able to use descriptive prompts the same way you can now with Runway or Luma Labs Dream Machine. From the example clips, it seems videos will be about five seconds long initially, shorter than Runway Gen-3.
Adobe is pitching it as a way to generate b-roll that "seamlessly fills gaps in your timeline" or to enhance real footage with effects. The model will also include camera controls like angle, motion and zoom as we've seen recently in Dream Machine.
Other use cases include creating atmospheric elements within a video including fire, smoke and dust particles. Using AI these could be "layered over existing content using blend modes or keying inside Adobe’s tools like Premiere Pro and After Effects."
Adobe says the image-to-video model will allow users to take an existing image and make it move. This includes animating text or objects.
Building on human creativity
Adobe plans to embed generative AI, through the Firefly Video model, into Adobe Premiere Pro and do for video what it did for images in Adobe Photoshop.
It will let users turn to AI to extend clips to cover gaps in footage, smooth out transitions and even hold a shot longer than was actually filmed, helping to create smoother and "perfectly timed" edits.
“Building upon our foundational Firefly models for imaging, design and vector creation, our Firefly foundation video model is designed to help the professional video community unlock new possibilities, streamline workflows and support their creative ideation,” said Ashley Still, senior VP, Creative Product Group at Adobe.
Still added: “We are excited to bring new levels of creative control and efficiency to video editing with Firefly-powered Generative Extend in Premiere Pro.”
It is hard to tell how this will stack up against third-generation AI video tools like Kling, Runway Gen-3, Luma Labs Dream Machine and OpenAI's Sora until I get my hands on it, but it has two significant features the other models don't: it is "commercially safe" and verifiably so, and it is integrated into Adobe products.