Gaming is one area where AI has yet to make any significant advances (upscaling aside). But that could all change soon. A user named Indiegameplus on the aivideo subreddit shared a video they reportedly made of GTA IV remastered with lifelike graphics, created using nothing but AI text prompts.
The video portrays a GTA character roaming an ultra-realistic-looking city from a third-person viewpoint. The first scene shows a red Ferrari-style sportscar (though the model itself looks largely custom) driving through a sprawling metropolis at night, with LED advertisement displays covering most of the surrounding buildings. The following scene shifts to a third-person view of the main character walking toward an NPC bystander on the sidewalk of a busy city at sunset or dawn.
Skipping ahead, the main gameplay highlights include the main character walking up to a stopped vehicle and performing a classic GTA-style carjacking, yanking the driver out of the seat and taking off with the car. Hilariously, the victim simply walks away after being thrown out, as if nothing happened. The next scene returns to the sidewalk from earlier, where the main character pulls out a bat and takes down the NPC bystander with it.
The video is an AI-generated take on a photorealistic GTA IV remaster gameplay trailer. The creator reportedly made it entirely with Runway's latest Gen-3 Alpha model. Runway is an AI research company that builds content generation tools, focusing on video generation guided by user-written text prompts.
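To give a rough sense of what prompt-driven video generation looks like in practice, here is a minimal sketch of a text-to-video request. The endpoint URL, model identifier, and request fields below are illustrative assumptions, not Runway's documented API; the workflow (send a prompt, receive a job to poll for the finished clip) is the general pattern such services follow.

```python
# Illustrative sketch of a text-to-video request.
# The endpoint, model name, and fields are hypothetical placeholders,
# NOT Runway's actual API.
import requests

API_URL = "https://api.example.com/v1/text-to-video"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

payload = {
    "model": "gen-3-alpha",  # assumed model identifier
    "prompt": (
        "Third-person gameplay footage of a character walking through a "
        "photorealistic city at sunset, GTA-style chase camera"
    ),
    "duration_seconds": 10,  # assumed parameter
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()

# Most generation APIs return a job ID to poll until the clip is rendered.
print(response.json())
```

Clips like the GTA IV demo are typically stitched together from many such short generations, one prompt per scene.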
The video showcases the power of modern-day AI and offers a glimpse of what it could mean for gamers in the future. With video-generation tools already this capable, it's reasonable to expect this kind of text-prompt generation to make its way into game engines, or into AI tools optimized for building video games rather than video content.
Before AI-made games arrive, we could see the graphics portion of the game development pipeline handed over to AI first, with graphical effects such as object rendering, lighting, and anti-aliasing processed through machine learning hardware rather than classic 3D rasterization or ray-tracing hardware. Nvidia itself believes this is the future, noting that a hypothetical DLSS version "10" could deliver full neural rendering.