AMD’s latest DRAM-focused patent effectively doubles memory bandwidth not through faster DRAM silicon, but by changing the on-module logic.
AMD’s Patent Has Managed To Double Memory Bandwidth Without Advancing The DRAM Silicon
Hardware-focused upgrades always have limitations, since they carry the ‘overhead’ of upgrading architectures or revamping logic and semiconductor utilization. With its new patent, however, AMD has effectively managed to double DDR5 memory bandwidth through a far more straightforward technique, which the firm labels ‘high-bandwidth DIMM’ (HB-DIMM). Rather than relying on DRAM process upgrades, the design integrates an RCD (register/clock driver) and data-buffer chips on the module to boost memory bandwidth, making this a DIMM-level change.

Let’s examine the technical details of this implementation. The patent reveals that the HB-DIMM technique doesn’t depend on DRAM improvements at all: through simple re-timing and multiplexing, the per-pin data rate rises from 6.4 Gb/s to 12.8 Gb/s, effectively doubling the output. Using the RCD, AMD essentially leverages the onboard data buffers to combine two normal-speed DRAM streams into one faster stream to the processor, so the host system sees double the bandwidth.
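To make the idea concrete, here is a toy sketch of that multiplexing step; it is purely illustrative and not AMD’s actual buffer logic. Two DRAM streams each deliver one beat per base clock, and the data buffer interleaves them into a single host-facing stream carrying two beats per base clock, i.e. double the per-pin data rate.

```python
# Toy illustration (not AMD's implementation): a data buffer
# multiplexing two base-rate DRAM streams into one double-rate stream.

def mux_streams(stream_a, stream_b):
    """Interleave beats from two equal-length, base-rate streams.

    Each input supplies one beat per base clock; the output carries
    two beats per base clock, doubling the effective per-pin rate.
    """
    out = []
    for beat_a, beat_b in zip(stream_a, stream_b):
        out.append(beat_a)
        out.append(beat_b)
    return out

# Two DRAM devices each supply four beats in a given window...
a = ["A0", "A1", "A2", "A3"]
b = ["B0", "B1", "B2", "B3"]
# ...and the buffer presents eight beats to the host in that same window.
host_stream = mux_streams(a, b)
print(host_stream)  # ['A0', 'B0', 'A1', 'B1', 'A2', 'B2', 'A3', 'B3']
```

The hardware equivalent is the RCD re-timing both streams onto a clock running at twice the base rate, which is why no faster DRAM silicon is needed.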
This application is mainly intended for AI and other bandwidth-bound workloads, but the patent also mentions another interesting implementation concerning APUs/iGPUs. This involves using two different ‘memory plugs’: the standard DDR5 PHY and an added HB-DIMM PHY. The DDR5 side would provide the larger memory pool, while the smaller HB-DIMM pool would be targeted at moving data much faster through the approach described above.
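The two-pool arrangement amounts to a simple placement policy, which the following hypothetical sketch illustrates; the pool sizes and function names are assumptions for illustration, not details from the patent.

```python
# Hypothetical sketch of the patent's two-pool idea: a large standard
# DDR5 pool plus a smaller, faster HB-DIMM pool. Sizes are assumed.

DDR5_POOL_GB = 64     # large, standard-bandwidth pool (illustrative)
HBDIMM_POOL_GB = 16   # small, double-bandwidth pool (illustrative)

def place_buffer(size_gb, bandwidth_bound, hbdimm_free_gb):
    """Decide where an allocation lands: bandwidth-bound buffers go
    to the fast HB-DIMM pool when it has room; everything else,
    including anything too large for the fast pool, goes to DDR5."""
    if bandwidth_bound and size_gb <= hbdimm_free_gb:
        return "HB-DIMM"
    return "DDR5"

print(place_buffer(8, True, HBDIMM_POOL_GB))    # fits the fast pool
print(place_buffer(32, True, HBDIMM_POOL_GB))   # too large -> DDR5
print(place_buffer(4, False, HBDIMM_POOL_GB))   # capacity-oriented -> DDR5
```

For an iGPU, the hot working set of an AI model would be the natural candidate for the fast pool, with bulk data staying in the larger DDR5 pool.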

For APUs, this approach would work best with on-device AI, where a fast response matters most when the system is crunching AI tasks that stream large amounts of data. Since edge AI is becoming more important in mainstream systems, this approach could greatly benefit AMD. The likely downside is the increased power draw needed to sustain the higher memory bandwidth, which would in turn demand effective cooling.
AMD is one of the leading firms in the memory space; for those unaware, Team Red co-developed HBM with SK Hynix, a track record that underlines its expertise in the field. The HB-DIMM approach certainly looks promising, since it effectively doubles memory bandwidth without relying on advances in DRAM silicon.