The next-generation Nvidia Blackwell GPU architecture and RTX 50-series GPUs are coming, though not quite on the schedule we originally expected. While Nvidia hasn't officially provided any timeframe for when the consumer parts will be announced, there have been plenty of rumors and supposed leaks. We spoke with some people earlier this year, and the expectation then was that we'd see at least the RTX 5090 and RTX 5080 by the time the holiday season kicks off in October or November, but more recent rumors plus the delay of Blackwell B200 appear to have pushed things back. Whenever they launch, we expect the Blackwell GPUs will join the ranks of the best graphics cards.
Nvidia provided many of the core details for its data center Blackwell B200 GPU. While the AI and data center variants will inevitably differ from consumer parts, there are some shared aspects between past consumer and data center Nvidia GPUs, and we expect that to continue. That means that we at least have some good indications of certain aspects of the future RTX 50-series GPUs.
There are still a lot of unknowns, with leaks that appear more like people throwing darts at the wall instead of having actual inside information. We'll cover the main rumors along with other details, including the release date, potential specifications, and other technology. Over the coming months, we can expect additional details to come out, and we'll be updating this article as information becomes available. Here's everything we know about Nvidia Blackwell and the RTX 50-series GPUs.
Blackwell and RTX 50-series Release Dates
Of all the unknowns, the release date — at least for the first Blackwell GPUs — may be the easiest to pin down, especially now. Despite what we personally heard earlier in the year, the RTX 50-series is expected to release in January 2025, with a CES 2025 reveal. That's a delay, but there are good reasons for it.
Nvidia's data center Blackwell B100/B200 encountered packaging problems and were also delayed. Given how much money the data center segment raked in over the past year (see Nvidia's latest earnings), putting more money and wafers into getting B200 ready and available makes sense. Gamers? Yeah, we'll have to wait a bit longer.
Nvidia is late, based on historical precedent. The Ada Lovelace RTX 40-series GPUs first appeared in October 2022. The Ampere RTX 30-series GPUs first appeared in September 2020. Prior to that, RTX 20-series launched two years earlier in September 2018, and the GTX 10-series was in May/June 2016, with the GTX 900-series arriving in September 2014. That's a full decade of new Nvidia GPU architectures arriving approximately every two years. But we're still only a few months beyond the normal cadence.
It's not just about the two-year consumer GPU cadence, either. Nvidia first revealed core details of the Hopper H100 architecture in March 2022 at its annual GPU Technology Conference (GTC), with Ada Lovelace arriving in October 2022. And in May 2020, it first revealed its Ampere A100 architecture, followed by the consumer variants a few months later. The same thing happened in 2018 as well, with Volta V100 and Turing, and in 2016 there was the Tesla P100 and Pascal. So, in the past four generations, we've learned first about the data center and AI GPUs, with the consumer GPUs revealed and launched later in the same year. Now that Nvidia has revealed the Blackwell B200 architecture, again at GTC, it was a reasonably safe bet we'd hear about the consumer variants in the fall... if it weren't for that pesky CoWoS packaging problem.
With the full Blackwell B200 availability pushed back into 2025, everything else has been pushed back as well. Another factor continues to be AI workloads, and we could see professional or data center cards using the same GPUs as the consumer models arrive around the same time as the RTX 50-series. Nvidia's current RTX Ada Generation professional GPUs typically cost three to four times as much as consumer cards using the same chips, with double the memory. It's not difficult to imagine a scenario where Nvidia opts to prioritize AI and data center models over consumer cards, considering the R&D costs associated with creating a new architecture.
We don't know the exact names or models Nvidia plans for the next generation Blackwell parts. We're confident we'll have RTX 5090, RTX 5080, RTX 5070, and RTX 5060 cards, and probably some combination of Ti and/or Super variants. Some of those variants will undoubtedly come out during a mid-cycle refresh about one year after the initial salvo. We're also curious about whether or not Nvidia will have an RTX 5050 GPU — it skipped that level on desktops with the 40-series and 20-series, though the latter had the GTX 1660 and 1650 class GPUs.
Given the past patterns, we expect the top-tier RTX 5090 and 5080 to arrive first, in early 2025. Then we'll see a 5070-class card (maybe with a Ti or Super suffix), followed by the 5060-class about six months after the first GPUs. Once the first Blackwell GPUs arrive, we can expect to see the typical staggered release schedule.
TSMC 4NP, refined 4nm Nvidia
One of the surprising announcements at GTC 2024 was that Blackwell B200 will use the TSMC 4NP node — "4nm Nvidia Performance," or basically a tuned/tweaked variation of the N4P node. While it's certainly true that process names have largely become detached from physical characteristics, many expected Nvidia to move to a refined variant of TSMC's cutting-edge N3 process technology. Instead, it opted for a refinement of the existing 4N node that has already been used with Hopper and Ada Lovelace GPUs for the past two years.
Going this route certainly offers some cost savings, though TSMC doesn't disclose the contract pricing agreements with its various partners. Blackwell B200 also uses a dual-chip solution, with the two identical chips linked via a 10 TB/s NV-HBI (Nvidia High Bandwidth Interface) connection. Perhaps Nvidia just didn't think it needed to move to a 3nm-class node for this generation.
And yet, that opens the door for AMD and even Intel to potentially shift to a newer and more advanced process node, cramming more efficient transistors into a smaller chip. Nvidia took a similar approach with the RTX 30-series, using a less expensive Samsung 8N process instead of the newer and better TSMC N7. It will be interesting to see if this has any major impact on how the various next-generation GPUs stack up.
Of course, it's also possible that Blackwell B200 variants will use TSMC 4NP while consumer chips use a different node. Much of that depends on how much of the core architecture gets shared between the data center and consumer variants and whether Nvidia thinks it's beneficial to diversify. There's precedent here for having different nodes and even manufacturers, as Ampere A100 used TSMC N7 while the RTX 30-series chips used Samsung 8N. GTX 10-series Pascal GP107 and GP108 were also made on Samsung's 14LPP, while GP102, GP104, and GP106 were made on TSMC 16FF.
Next generation GDDR7 memory
It's long been expected that the consumer and professional (i.e., not strictly data center) Blackwell GPUs will move to GDDR7 memory. All indications from GTC 2024 are that GDDR7 will be ready in time for the next generation of GPUs, with full production already underway. In fact, Samsung and SK hynix showed off GDDR7 chips at GTC, and Micron confirmed that GDDR7 is also in production.
The current generation RTX 40-series GPUs use GDDR6X and GDDR6 memory, clocked at anywhere from 17Gbps to 23Gbps. GDDR7 has target speeds of up to 36Gbps, 50% higher than GDDR6X and 80% higher than vanilla GDDR6. SK hynix says it will even have 40Gbps chips, though the exact timeline for when those might be available wasn't detailed. Regardless, this will provide a much-needed boost to memory bandwidth at all levels.
We don't know if Nvidia will actually ship cards with memory clocked at 36Gbps. In the past, it used 24Gbps GDDR6X chips but clocked them at 22.4Gbps or 23Gbps — and some 24Gbps Micron chips were apparently down-binned to 21Gbps in the various RTX 4090 graphics cards that we tested. So, Nvidia could take 36Gbps memory but only run it at 32Gbps. That's still a healthy bump to bandwidth.
At 36Gbps, a 384-bit GDDR7 memory interface can provide 1728 GB/s of bandwidth. That's 71% higher than what we currently get on the RTX 4090. A 256-bit interface would deliver 1152 GB/s, compared to the 4080 Super's 736 GB/s — a 57% increase. 192-bit cards would have 864 GB/s, and even 128-bit cards would get up to 576 GB/s of raw bandwidth. Nvidia might even go so far as to create a 96-bit interface with 432 GB/s of bandwidth.
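The bandwidth figures above all come from the same simple formula, sketched here in Python (the function name is ours, purely for illustration):

```python
def bandwidth_gbps(bus_width_bits: int, speed_gbps: float) -> float:
    """Raw memory bandwidth in GB/s: each bus bit transfers
    `speed_gbps` gigabits per second, and 8 bits make a byte."""
    return bus_width_bits * speed_gbps / 8

# The figures quoted above, assuming 36Gbps GDDR7:
print(bandwidth_gbps(384, 36))  # 1728.0 GB/s
print(bandwidth_gbps(256, 36))  # 1152.0 GB/s
print(bandwidth_gbps(192, 36))  # 864.0 GB/s
print(bandwidth_gbps(128, 36))  # 576.0 GB/s
print(bandwidth_gbps(96, 36))   # 432.0 GB/s
```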
We also expect that Nvidia will keep using a large L2 cache with Blackwell. This will provide even more effective memory bandwidth — every cache hit means a memory access that doesn't need to happen. With a 50% cache hit rate as an example, that would double the effective memory bandwidth, though note that hit rates vary by game and settings, with higher resolutions in particular reducing the hit rate.
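The effective-bandwidth math works out like this, under the simplifying assumption that only L2 misses generate VRAM traffic (real hit rates vary by game, settings, and resolution, as noted above):

```python
def effective_bandwidth(raw_gbps: float, l2_hit_rate: float) -> float:
    """Simplified model: only L2 misses touch VRAM, so the memory bus
    appears 1 / (miss rate) times faster to the shader cores."""
    return raw_gbps / (1 - l2_hit_rate)

# A 50% hit rate doubles the RTX 4090's 1008 GB/s of raw bandwidth:
print(effective_bandwidth(1008, 0.5))  # 2016.0 GB/s effective
```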
GDDR7 also potentially addresses the issue of memory capacity versus interface width. At GTC, we were told that 16Gb chips (2GB) are in production, but 24Gb (3GB) chips are also coming. The larger chips with non-power-of-two capacity probably won't be ready until 2025, but those will be more important for lower-tier parts. That's another point in favor of an early 2025 announcement, incidentally, because it means the top models could come with 50% more VRAM capacity.
Still, there's no pressing need for consumer graphics cards to have more than 24GB of memory, though we could see a 32GB RTX 5090 (with a 512-bit interface). Even 16GB is generally sufficient for gaming, with a 256-bit interface. Professional GPUs on the other hand are often used for large 3D models as well as AI workloads where having more VRAM would be a major boon. A 512-bit interface with 3GB chips on both sides of the PCB could yield a professional RTX 6000 Blackwell Generation as an example with 96GB of memory.
More importantly, the availability of 24Gb chips means Nvidia (along with AMD and Intel) could put 18GB of VRAM on a 192-bit interface, 12GB on a 128-bit interface, and 9GB on a 96-bit interface, all with the VRAM on one side of the PCB. We could even see 24GB cards with a 256-bit interface, and 36GB on a 384-bit interface — and double that capacity for professional cards. Pricing will certainly be a factor for VRAM capacity, but it's more likely a case of "when" rather than "if" we'll see 24Gb GDDR7 memory chips on consumer GPUs.
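The capacity combinations above follow directly from the fact that each GDDR6/GDDR7 chip has a 32-bit interface. A quick sketch (function name and structure are ours):

```python
def vram_gb(bus_width_bits: int, chip_gb: int, clamshell: bool = False) -> int:
    """Each memory chip occupies 32 bits of the bus; 'clamshell' mode
    mounts chips on both sides of the PCB, doubling capacity at the
    same bus width."""
    chips = bus_width_bits // 32
    return chips * chip_gb * (2 if clamshell else 1)

# 24Gb (3GB) chips on one side of the PCB:
print(vram_gb(192, 3))  # 18 GB
print(vram_gb(128, 3))  # 12 GB
print(vram_gb(96, 3))   # 9 GB
# Clamshell professional card: 512-bit bus, 3GB chips on both sides:
print(vram_gb(512, 3, clamshell=True))  # 96 GB
```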
Blackwell architectural updates
The Blackwell architecture will almost certainly contain various updates and enhancements over the previous generation Ada Lovelace architecture, but right now what we know for certain can be summed up in two words: not much. Still, every generation of Nvidia GPUs has contained at least a few architectural upgrades, and we can expect the same to occur this round.
Nvidia has increased the potential ray tracing performance in every RTX generation, and Blackwell seems likely to continue that trend. With more games like Alan Wake 2 and Cyberpunk 2077 pushing full path tracing — not to mention the potential for modders to use RTX Remix to enhance older DX8/DX9-era games with full path tracing — there's even more need for higher ray tracing throughput. There will probably be other RT-centric updates as well, just like Ada offered SER (Shader Execution Reordering), OMM (Opacity Micro-Maps), and DMM (Displaced Micro-Meshes). But what those changes might be is as yet unknown.
What we do know is that the data center Blackwell B200 GPU has reworked the tensor cores yet again, offering native support for FP4 and FP6 numerical formats. Those will be primarily useful for AI inference, and considering the consumer GPUs will do double duty with the professional cards, it's a safe bet that all Blackwell chips will support FP4 and FP6 as well. (Ada added the same FP8 support as Hopper to its tensor cores, as a related example.)
What other architectural changes might Blackwell bring? If we're correct that Nvidia is sticking with TSMC 4NP for the consumer parts, we wouldn't anticipate massive alterations. There will still be a large L2 cache, and the enhanced OFA (Optical Flow Accelerator) used for DLSS 3 frame generation will of course stick around. It will probably get some tweaks to improve it, though we'll have to wait and see.
We mused previously that Nvidia might use the same NV-HBI interlink with two chips for the top GB202, but such a chiplet-style approach has added costs for packaging, and if GB202 can fit within the reticle size, Nvidia will likely stick with a single monolithic die. The latest rumor says GB202 could have a 744mm^2 die size. Could NV-HBI show up on consumer GPUs as well? We think that's a reasonable possibility — but probably only down the road, like on a future RTX 60- or 70-series halo product, once the packaging costs come down.
Raw compute, for both graphics and more general workloads, will almost certainly increase by a decent amount, though probably more along the lines of a 30% boost rather than a 50% increase. RTX 4080 offers 40 TeraFLOPS of FP32 compute compared to the 3080's 30 TeraFLOPS, for example — a 33% increase — while the 4090 offers 83 TeraFLOPS compared to the 3090's 40 TeraFLOPS — a much larger 107% increase. Perhaps Nvidia will "go big" on the RTX 5090 as well while making smaller improvements elsewhere, but we'll have to wait and see.
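Those TeraFLOPS figures come from a standard formula: each CUDA core retires one FMA (two floating-point operations) per clock. Here's a sketch, with approximate published boost clocks plugged in (the function name is ours):

```python
def fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    """Peak FP32 throughput: cores x clock x 2 ops per FMA."""
    return cuda_cores * boost_clock_ghz * 2 / 1000

# Approximate published specs:
print(round(fp32_tflops(16384, 2.52), 1))  # ~82.6 -- the RTX 4090's ~83 TFLOPS
print(round(fp32_tflops(8704, 1.71), 1))   # ~29.8 -- the RTX 3080's ~30 TFLOPS
```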
RTX 50-Series Pricing
How much will the RTX 50-series GPUs cost? Frankly, considering the current market conditions, there's little reason to expect Nvidia to reduce prices relative to the current RTX 40-series GPUs. Nvidia will price the cards as high as it feels the market will accept. With potentially higher AI performance and the increased demand from the non-gaming sector, we might be lucky if the next generation carries the same pricing structure as the current generation.
At the same time, we hope that generational pricing won't increase. $1,000 for the "step down" RTX 4080 Super means that particular level of GPU now costs 43% more than it did in the RTX 2080 Super days. Of course, we also had the "$699" RTX 3080 10GB and "$1,199" RTX 3080 Ti in between, when prices were all kinds of messed up thanks to the prevalence of GPU cryptomining coupled with the effects of Covid-19. Thankfully, while it's currently technically profitable to mine certain cryptocurrencies with a GPU, WhatToMine puts the estimated income at far less than $1 per day for an RTX 4090 — meaning it would take over ten years to break even at current rates and prices. (No one should be doing that, as the GPU is more likely to die before breaking even.)
The budget GPU sector has also basically died off. Integrated graphics have reached the point where they're "fast enough" for most common workloads, even including modest gaming — that's particularly true for mobile processors, with desktop options typically being far less potent. The last new GPUs to truly target the budget sector were AMD's rather unimpressive RX 6500 XT and RX 6400 — Nvidia hasn't made a new sub-$200 GPU since the GTX 1650 Super launched in 2019 (unless you want to count the travesty that was the GTX 1630).
That means for dedicated desktop graphics cards we're now living in a world where "budget" means around $300, "mainstream" means $400–$600, "high-end" is for GPUs costing close to $1,000, and the "enthusiast" segment targets $1,500 or more. Or at least, that appears to be Nvidia's take on the situation. AMD's GPUs tend to be a bit more affordable, particularly when looking at street prices, but Nvidia has maintained a higher pricing structure for at least the past four years.
How good/bad will prices be when Blackwell GPUs arrive? Don't be surprised if everything costs more than the prior generation, particularly for custom AIB partner models that come with a factory overclock. Whether prices remain high will likely depend in large part on whether or not the AI bubble bursts.
Blackwell speculative specifications
Given everything we've said so far, it should hopefully be clear that there's very little official information on Blackwell currently available. The Nvidia hack in 2022 gave us the Blackwell name and some potential codenames, but that was over two years ago, and a lot can change in that time. Plus, the details on Blackwell were pretty thin in the first place.
However, as with every major GPU architecture update, plenty of rumors and supposed leaks are floating around. Some suggest insider knowledge; others appear to be guesses. To cite a few examples, one 'leak' from November 2023 said we should expect Blackwell GB202 to have a 384-bit memory interface, while a more recent leak from March 2024 says GB202 will have a 512-bit interface. The 512-bit interface has since firmed up as the most likely configuration, based on other 'leaks,' but some of that might be wishful thinking rather than fact.
Something else to chew on is the NV-HBI dual-chip solution for the Blackwell B200 that we mentioned earlier. Perhaps the top-tier Blackwell GB202 could take the same approach and have two GB203 chips linked via NV-HBI. That would allow Nvidia to keep the actual die size of the fastest chips in check while simultaneously providing for much higher levels of performance. But the added packaging costs of NV-HBI mean Nvidia would likely only go that route if a full GB202 die couldn't otherwise fit within the reticle limit, and that doesn't seem to be the case given the rumored 744mm^2 die size.
Here's our updated speculative specs table, with estimated names and specs as appropriate. The large number of question marks should make it clear that we do not have any hard information at present.
Again, take the above information with a massive helping of salt — seriously, just dump out the whole salt shaker! We've basically plugged in some numbers that seem plausible and stuffed them into the usual Nvidia formula with a given number of SMs, which then gives the CUDA, RT, and tensor core counts based on the usual 128 CUDA, 1 RT, and 4 tensor cores per SM. There are also (traditionally) four TMUs (Texture Mapping Units) per SM. Nvidia can tweak the enabled SM counts quite easily, so final specs may not be nailed down until a few months before launch.
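The per-SM ratios described above can be expressed as a trivial formula, which is essentially all our speculative table does (the function name is ours, and Ti/Super refreshes can of course deviate):

```python
def sm_derived_specs(sm_count: int) -> dict:
    """Nvidia's usual per-SM ratios: 128 CUDA cores, 1 RT core,
    4 tensor cores, and (traditionally) 4 TMUs per SM."""
    return {
        "cuda_cores": sm_count * 128,
        "rt_cores": sm_count * 1,
        "tensor_cores": sm_count * 4,
        "tmus": sm_count * 4,
    }

# Example: the RTX 4090's 128 enabled SMs yield its 16384 CUDA cores:
print(sm_derived_specs(128))
```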
A lot of the potential specs come from recent rumors that could be mere guesses. While the massive GB202 die seems likely, it's interesting that it's more than double the SM counts of the supposed GB203. That's a very big gap, almost too big to be true. Maybe there will be some other in-between chip in the future.
Other aspects are basically placeholders using whatever Nvidia currently has with the RTX 40-series cards. This mostly applies to L2 cache size, power requirements, and pricing, for example. We make no claims to have insider knowledge of the actual specs right now, though some of the rumored core counts are likely close to the mark, as the GPUs are expected to launch in less than two months.
For the time being, clock speed estimates are a static 2.5 GHz on the GPU clock and 36Gbps on the GDDR7 clock — with 20Gbps on the apparently still GDDR6 GB207 die. That's according to recent 'leaks' as well. We're really hoping to see 3GB chips on all the GPUs with a 192-bit or narrower memory interface, to provide a boost in VRAM capacity. #fingers-crossed
We'll update the above table over the coming months and even years as the rumors develop. Eventually, we'll have official part names and specifications. We'll almost certainly end up with far more than five different graphics cards as well, but there's no sense in guesstimating where those might land at present. Just note that there are ten different RTX 40-series desktop GPUs and twelve different RTX 30-series desktop variants (counting the 3060 12GB / 8GB and 3050 8GB / 6GB as different models).
16-Pin Power Connectors, Take Three
After the 16-pin meltdown fiasco that plagued the first wave of RTX 4090 cards, many people probably want Nvidia to abandon the new PCI-SIG standard. We'll bet our proverbial GPU hats that it doesn't happen, though the change to the modified ATX 12V-2x6 connector has hopefully put any potential problems to rest.
What's interesting is that the RTX 40-series wasn't the first generation of GPUs to come with a 16-pin connector. The RTX 30-series used a 12-pin connector (without the extra four sense pins of 12VHPWR) starting clear back in 2020. We didn't hear a bunch of stories about melting 3090 and 3080 adapters, but then most of those cards had TGPs well under 400W. The RTX 3090 Ti GPUs were the first to use the newer 16-pin connector, but again with no rash of reported meltdowns. With the RTX 40-series making widespread use of 16-pin, that means Blackwell will be the third generation of Nvidia GPUs to at least partially adopt the standard.
One of the key elements with the 4090 melting problems seems to be pulling 450W or more through a single relatively compact connector. We can't help but wonder how high Nvidia might push power requirements with Blackwell, but it's difficult to imagine anything over 600W. Even so, using two 16-pin connectors that each offer 300W would be the more sensible approach in our book than trying to do that with a single connector. We'll have to see what happens.
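The per-pin load argument is straightforward arithmetic if we assume current is shared evenly across the connector's six 12V/ground pin pairs (a simplification; real-world current sharing is imperfect, which is part of the problem):

```python
def amps_per_pin(watts: float, volts: float = 12.0, current_pairs: int = 6) -> float:
    """Per-pin current for a 16-pin 12VHPWR/12V-2x6 connector,
    assuming perfectly even sharing across six 12V pin pairs."""
    return watts / volts / current_pairs

print(round(amps_per_pin(450), 2))  # 6.25 A per pin at 450W
print(round(amps_per_pin(600), 2))  # 8.33 A per pin at 600W
# Splitting 600W across two connectors (300W each) halves the per-pin load:
print(round(amps_per_pin(300), 2))  # 4.17 A per pin
```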
There have long been rumors of a new Titan-class card, first for Ada and now for Blackwell. Such a GPU might be the first Nvidia-made card to come with dual 16-pin connectors, and perhaps a quad-slot cooler as well. And if you don't have an ATX 3.0 power supply, you'll still have to use the chonky and ugly 8-pin to 16-pin adapters.
The future GPU landscape
Nvidia won't be the only game in town for next-generation graphics cards. There's plenty of evidence to suggest we'll see Intel's Battlemage release this winter as well, and AMD RDNA 4 will also arrive at some point — we expect to see most of the next-generation GPUs in early to mid 2025.
But while there will certainly be competition, Nvidia has dominated the GPU landscape for the past decade. At present, the Steam Hardware Survey indicates Nvidia has 78% of the graphics card market, AMD sits at 14.6%, and Intel accounts for just 7.2% (with 0.12% "other"). That doesn't even tell the full story, however.
Both AMD and Intel make integrated graphics, and it's a safe bet that a large percentage of their respective market shares comes from laptops and desktops that lack a dedicated GPU. AMD's highest market share for what is clearly a dedicated GPU comes from the RX 580, sitting at #31 with 0.81%. Intel doesn't even have a dedicated GPU listed in the survey. For the past three generations of AMD and Nvidia dedicated GPUs, the Steam survey suggests Nvidia has 92.6% of the market compared to 7.4% for AMD.
Granted, the details of how Valve collects data are obtuse, at best, and AMD may be doing better than the survey suggests. Still, it's a green wave of Nvidia cards at the top of the charts. Recent reports from JPR say that Nvidia controlled 88% of the add-in GPU market compared to 12% for AMD, as another example of the domination currently going on.
What we've heard from Intel suggests it intends for Battlemage to compete more in the mainstream and budget portions of the graphics space. And by that, we mean in the $200 to perhaps $600 price range. However, Intel hasn't said much lately, so that could have changed. AMD, for the time being, competes with Nvidia far more effectively than Intel does in performance, drivers, and efficiency, but we're still waiting for its GPUs to experience their "Ryzen moment" — GPU chiplets so far haven't proven an amazing success.
Currently, Nvidia delivers higher overall performance, and much higher ray tracing performance. It also dominates in the AI space, with related technologies like DLSS — including DLSS 3.5 Ray Reconstruction — Broadcast, and other features. It's currently Nvidia's race to lose, and it will take a lot of effort for AMD and Intel to close the gap and gain significant market share, at least outside of the integrated graphics arena. On the other hand, high Nvidia prices and a heavier focus on AI for the non-gaming market could leave room for its competitors. We'll see where the chips land later this year.
- MORE: Best Graphics Cards
- MORE: GPU Benchmarks and Hierarchy
- MORE: All Graphics Content