TechRadar
Efosa Udinmwen

Nvidia’s BlueField-3 SuperNIC morphs into a special self-hosted storage powerhouse with an 80GBps memory boost and PCIe-ready architecture

NVIDIA BlueField 3 DPU.

  • Nvidia unveils upgraded BlueField-3 DPUs
  • New editions allow flexible storage server configurations
  • BlueField-3 DPU offloads CPU tasks and reduces latency in storage-heavy environments

Nvidia has revealed a new iteration of its BlueField-3 Data Processing Unit (DPU) that is not just a regular SuperNIC, but a self-hosted model aimed mainly at storage.

The new offering greatly increases memory bandwidth compared to its predecessors. While the BlueField-2 DPU used a single-channel design, resulting in lower memory bandwidth than the first generation, the BlueField-3 boasts dual 64-bit DDR5-5600 memory interfaces.

This upgrade translates to 80GB/s of bandwidth, enabling faster data processing and greater efficiency, particularly for applications that rely on high-speed data access.
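As a rough sanity check (not from Nvidia's spec sheet), standard DDR5 arithmetic puts the theoretical peak of two 64-bit DDR5-5600 channels a little above the cited figure, which is consistent with ~80GB/s as a real-world effective number:

```python
# Back-of-the-envelope peak bandwidth for dual 64-bit DDR5-5600 interfaces.
# Assumption: standard DDR5 math, where one channel moves
# (transfers per second) x (bytes per transfer).
transfers_per_sec = 5600e6     # DDR5-5600: 5600 mega-transfers per second
bytes_per_transfer = 64 // 8   # one 64-bit interface = 8 bytes per transfer
channels = 2                   # dual memory interfaces

peak_gb_per_s = transfers_per_sec * bytes_per_transfer * channels / 1e9
print(f"{peak_gb_per_s:.1f} GB/s")  # 89.6 GB/s theoretical peak
```

The gap between the ~89.6GB/s theoretical peak and the quoted 80GB/s is typical of sustained versus raw interface bandwidth.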

Self-hosted solutions for storage applications

The special version, designated the B3220SH, also introduces advanced capabilities for direct hardware connections. Because it can expose PCIe roots, this model enables direct integration with NVMe SSDs and GPUs, bypassing the need for an external CPU.

This capability allows for greater flexibility in configuring storage solutions without relying on traditional x86 or Arm CPUs, enabling a more streamlined architecture for storage servers. The integration of a PCIe switch further enhances this model's functionality by allowing multiple devices to be connected seamlessly. This architecture not only simplifies data flow but also reduces latency and improves overall performance in storage-intensive applications.

The versatility of the BlueField-3 extends beyond storage, as its architecture supports various applications across sectors such as high-performance computing (HPC) and artificial intelligence (AI). Because the new model can offload tasks from CPUs, it frees up valuable processing resources for revenue-generating workloads.
