TechRadar
Cesar Cadenas

OpenAI's Sora will one day add audio and editing, and may allow nudity in content

A woman's eye and a man in a spacesuit generated by OpenAI Sora.

OpenAI’s Chief Technology Officer, Mira Murati, recently sat down with The Wall Street Journal to reveal new details about the company’s upcoming text-to-video generator, Sora.

The interview covers a wide array of topics, from the type of content the AI engine will produce to the security measures being put in place. Combating misinformation is a major concern for the company. Murati states Sora will have multiple safety guardrails to ensure the technology isn’t misused, saying the team wouldn’t feel comfortable releasing something that “might affect global elections”. According to the article, Sora will follow the same prompt policies as DALL-E, meaning it’ll refuse to create “images of public figures” such as the President of the United States.

Watermarks are going to be added, too. A transparent OpenAI logo will appear in the lower right-hand corner of each clip, indicating that it's AI-generated footage. Murati adds that the company may also adopt content provenance as another indicator; this uses metadata to convey information about the origins of digital media. That's all well and good, but it may not be enough. Last year, a group of researchers managed to break "current image watermarking protections", including those belonging to OpenAI. Hopefully, OpenAI comes up with something tougher.

Generative features

Things get interesting when the conversation turns to Sora's future. First off, the developers plan to “eventually” add sound to videos to make them more realistic. Editing tools are on the roadmap as well, giving online creators a way to fix the AI’s many mistakes.

As advanced as Sora is, it makes a lot of errors. One prominent example in the piece involves a prompt asking the engine to generate a video in which a robot steals a woman’s camera. Instead, the clip shows the woman partially turning into a robot. Murati admits there is room for improvement, stating the AI is “quite good at continuity, [but] it’s not perfect”.

Nudity is not off the table. Murati says OpenAI is working with “artists… to figure out” what kind of nude content will be allowed. It seems the team would be okay with allowing “artistic” nudity while banning things like non-consensual deepfakes. Naturally, OpenAI would like to avoid being at the center of a potential controversy, but it also wants its product to be seen as a platform that fosters creativity.

Ongoing tests

When asked about the data used to train Sora, Murati was a little evasive. 

She started off by claiming she didn’t know what was used to teach the AI, other than that it was either “publicly available or licensed data”. What’s more, Murati wasn’t sure whether videos from YouTube, Facebook, or Instagram were part of the training. However, she later admitted that media from Shutterstock was indeed used. The two companies, if you’re not aware, have a partnership, which could explain why Murati was willing to confirm it as a source.

Murati states Sora will “definitely” launch by the end of the year. She didn’t give an exact date, although it could happen within the coming months. For now, the developers are safety testing the engine, looking for any “vulnerabilities, biases, and other harmful results”.

If you're thinking of one day trying out Sora, we suggest learning how to use editing software. Remember, it makes many errors and might continue to do so at launch. For recommendations, check out TechRadar's best video editing software for 2024.
