AI video is still some way from complete realism (we've all seen the lightsaber duel that became a meme), but it could be about to reach far more developers, and consumers, in a big way.
Runway and Luma Labs, two of the AI video generation pioneers, have both released APIs (Application Programming Interfaces), for Runway Gen-3 Alpha Turbo and Luma Dream Machine respectively. Both are in limited access at the time of writing, but you can expect more models to be supported in the coming months.
This will allow developers to integrate generative AI video models into their own applications, potentially catapulting the creation tools into a new stratosphere of popularity. For example, a Chrome extension could generate a short video response to a post on X, rather than typing a reply or selecting a pre-made GIF.
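To make that concrete, here is a minimal sketch of the kind of integration these APIs enable, using Luma's Python SDK. The package name, method names, and response fields shown are assumptions based on Luma's documentation at the time of writing, so check the current docs before building on them.

```python
import time

from lumaai import LumaAI  # assumed package name: pip install lumaai

# Assumes an API key is available in the LUMAAI_API_KEY environment variable.
client = LumaAI()

# Kick off a Dream Machine generation from a text prompt.
generation = client.generations.create(
    prompt="A short looping clip of a cat typing furiously on a laptop",
)

# Generation runs asynchronously, so poll until the clip is ready.
while generation.state not in ("completed", "failed"):
    time.sleep(5)
    generation = client.generations.get(id=generation.id)

if generation.state == "completed":
    print("Video ready:", generation.assets.video)  # URL to the finished clip
else:
    print("Generation failed")
```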
Why video generation AI APIs are important
"I am tempted to make my next coding project a tool to automate my AI video production workflow using the new Dream Machine API from @LumaLabsAI. Would need to be a visual/GUI application to allow for approval at each stage. For example, having an API generate 4 Flux images for…" (post on X, September 17, 2024)
As the post on X (formerly Twitter) from our own AI editor explains, the possibilities are almost endless.
Developers could run generated video through an automated post-production pipeline, and the APIs could also give rise to bespoke video editing tools that specialize in generative AI video.
API access could also cut down on the tedium of creating generative video in batches, for example by running a shot list through a uniform process to produce different perspectives on the same content.
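As a hedged illustration, a batch workflow could be as simple as looping a shot list through the same API call. This reuses the assumed Luma SDK from the sketch above; the shot list and helper function here are hypothetical.

```python
import time

from lumaai import LumaAI  # assumed package name, as in the earlier sketch

client = LumaAI()

# A hypothetical shot list: the same subject from different perspectives.
SHOT_LIST = [
    "A lighthouse on a cliff at dusk, wide establishing shot",
    "A lighthouse on a cliff at dusk, low-angle close-up of the lamp",
    "A lighthouse on a cliff at dusk, slow aerial orbit",
]

def generate_clip(prompt: str) -> str:
    """Submit one prompt and block until the clip is ready, returning its URL."""
    generation = client.generations.create(prompt=prompt)
    while generation.state not in ("completed", "failed"):
        time.sleep(5)
        generation = client.generations.get(id=generation.id)
    if generation.state != "completed":
        raise RuntimeError(f"Generation failed for prompt: {prompt}")
    return generation.assets.video

# Run every shot through the same uniform process.
for prompt in SHOT_LIST:
    print(prompt, "->", generate_clip(prompt))
```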
You can find Runway API access here, while Luma's API details, including documentation, are here.