AMD fires back at Nvidia and details how to run a local AI chatbot on Radeon and Ryzen — recommends using a third-party app

With Nvidia and Intel having recently revealed their locally run AI chatbots, it seems AMD doesn't want to be left out, and it has published its own guide for owners of Ryzen processors and Radeon graphics cards. In five or six steps, users can start interacting with an AI chatbot that runs on their local hardware rather than in the cloud, with no coding experience required.
AMD's guide requires users to have either a Ryzen AI PC chip or an RX 7000-series GPU. Today, Ryzen AI is only available on higher-end Ryzen APUs based on Phoenix and Hawk Point with Radeon 780M or 760M integrated graphics. That suggests that while the Ryzen 5 8600G is supported, the Ryzen 5 8500G may not work... except the application itself only lists "CPU with AVX2 support" as a requirement, which means it should work (perhaps very slowly) on a wide range of processors.
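If you're unsure whether an older or lower-end CPU clears that bar, it's easy to check for the AVX2 flag yourself. Here's a minimal sketch using the third-party py-cpuinfo package (our choice of tool, not part of AMD's guide):

```python
# Minimal AVX2 capability check using the third-party py-cpuinfo package.
# Install it first: pip install py-cpuinfo
from cpuinfo import get_cpu_info

flags = get_cpu_info().get("flags", [])
print("AVX2 supported:", "avx2" in flags)
```

On Windows, tools like CPU-Z will also show AVX2 under the supported instruction sets.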
Users need to download and install LM Studio, which has a dedicated ROCm build for RX 7000-series GPUs; note that the standard package also works with Intel CPUs and Nvidia GPUs. After installing and launching LM Studio, just search for the desired LLM, such as the chat-optimized Llama 2 7B. AMD recommends models with the "Q4 K M" label, which denotes Q4_K_M, a 4-bit "k-quant" (medium) quantization scheme that shrinks the model enough to fit comfortably in consumer RAM or VRAM at a small cost in accuracy. Ryzen CPU users are free to chat it up with the bot at this point (it's not clear whether the NPU is even utilized; we'd guess it isn't), but RX 7000-series GPU users will need to open the right-side panel, manually enable GPU offloading, and drag the offload slider to "max."
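Beyond the built-in chat window, LM Studio can also serve the loaded model through a local, OpenAI-compatible API (its local server defaults to port 1234), which is handy if you'd rather script against the bot than type at it. For sizing reference, a Q4_K_M build of Llama 2 7B weighs roughly 4 GB (about seven billion weights at ~4.5 bits each, plus overhead), so it fits within the VRAM of any RX 7000-series card. Below is a minimal sketch of querying that local endpoint with Python's requests; the "model" field is a placeholder, since LM Studio answers with whichever model is currently loaded:

```python
# Minimal sketch: chat with whatever model LM Studio is currently serving.
# Assumes LM Studio's local server is running on its default port, 1234.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",  # placeholder; the loaded model is used
        "messages": [
            {"role": "user", "content": "Summarize what Q4_K_M quantization means."}
        ],
        "temperature": 0.7,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```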
AMD's tutorial means there's now at least one easy-to-use, officially documented way to run an AI chatbot on consumer hardware from each of AMD, Intel, and Nvidia. Unsurprisingly, Nvidia was first with its Chat with RTX app, which naturally only runs on Nvidia GPUs. Chat with RTX is arguably the most fleshed-out of the three solutions, as it can analyze documents, videos, and other files. Plus, support for Nvidia's chatbot stretches back to the RTX 30-series, and 20-series support may be on the table.
Meanwhile, Intel's solution for its CPUs, NPUs, and GPUs is more in the weeds. Instead of showcasing a local AI chatbot through an app, Intel demonstrated how to code one in Python. While the code users have to write isn't exactly long, having any coding involved at all will be a barrier for many potential users. Additionally, chat responses are displayed in the command line, which doesn't exactly scream "cutting-edge AI." You could try LM Studio instead, though it doesn't appear to have Intel GPU or NPU support yet, so it would just use your CPU.
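For a sense of what Intel's approach entails, the walkthrough boils down to a short Hugging Face transformers script built on Intel's intel-extension-for-transformers package. The sketch below follows that pattern; the model name and arguments are illustrative, not a verbatim copy of Intel's guide:

```python
# Rough sketch of the Intel-style approach: load a 4-bit-quantized chat model
# via Intel's extension for Hugging Face transformers and stream the reply.
# Package usage is an assumption based on Intel's published examples.
from transformers import AutoTokenizer, TextStreamer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "Intel/neural-chat-7b-v3-1"  # an Intel-tuned 7B chat model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)

inputs = tokenizer("What is a local LLM?", return_tensors="pt").input_ids
# The response streams straight to the terminal, as the article notes.
model.generate(inputs, streamer=TextStreamer(tokenizer), max_new_tokens=200)
```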
While AMD doesn't have its own AI chatbot app like Nvidia does, it seems further along than Intel with respect to features, since LM Studio at least offers ROCm GPU acceleration. The next step for AMD is probably to build its own version of Chat with RTX, or at least to work with LM Studio's developers to enable more features on AMD hardware. Perhaps we'll even see AI functionality integrated into the Radeon Adrenalin driver suite; AMD already makes driver-level AI optimizations, and the suite regularly receives new features such as Fluid Motion Frames.