TechRadar
Eric Hal Schwartz

Turn your selfie into an action star with this new AI image-to-video feature

Runway Image-to-Video.

Artificial intelligence-powered video maker Runway has added the long-promised image-to-video feature to its Gen-3 model, released a few weeks ago, and early results suggest it lives up to the hype. Runway designed the feature to address the biggest limitations of the Gen-2 model released early last year. The upgraded tool is miles better at character consistency and hyperrealism, making it a more powerful option for creators looking to produce high-quality video content.

Runway's Gen-3 model is still in alpha testing and only available to subscribers, who pay $12 per month per editor for the most basic package. The model had already attracted plenty of interest when it launched with only text-to-video capabilities. But no matter how good a text-to-video engine is, it has inherent limits, especially in keeping characters looking the same across multiple prompts and appearing grounded in the real world. Without that visual continuity, it's hard to build any kind of narrative; in earlier iterations of Runway, users relying solely on text prompts often struggled to keep characters and settings uniform from scene to scene.

Reliable consistency in character and environmental design is no small thing, and using an initial image as a reference point helps the model maintain coherence across different shots. In Gen-3, Runway's AI can turn that image into a 10-second video, guided by additional motion or text prompts in the platform. You can see how it works in the video below.

Stills to Films

Runway's image-to-video feature doesn't just ensure people and backgrounds stay the same when seen from a distance. Gen-3 also incorporates Runway's lip-sync feature so that someone speaking moves their mouth in a way that matches the words they are saying. A user can tell the AI model what they want their character to say, and the movement will be animated to match. Combining synchronized dialogue and realistic character movements will interest a lot of marketing and advertising developers looking for new and, ideally, cheaper ways to produce videos. 

Runway isn't done adding to the Gen-3 platform, either. The next step is bringing the same enhancements to the video-to-video option. The idea is to keep the same motion but in a different style. A human running down a street becomes an animated anthropomorphic fox dashing through a forest, for instance. Runway will also bring its control features to Gen-3, such as Motion Brush, Advanced Camera Controls, and Director Mode.

AI video tools are still in the early stages of development, with most models excelling in short-form content creation but struggling with longer narratives. That puts Runway and its new features in a strong position from a market standpoint, but it is far from alone. Midjourney, Ideogram, Leonardo (now owned by Canva), and others are all racing to make the definitive AI video generator. Of course, they're all keeping a wary watch on OpenAI and its Sora video generator. OpenAI has some advantages in name recognition, among other benefits. In fact, Toys"R"Us has already made a short film commercial using Sora and premiered it at the Cannes Lions Festival. Still, the film about AI video generators is only in its first act, and the triumphant winner cheering in slow-motion at the end is far from inevitable.

