Mega Memory: Crucial Introduces Massive 128GB LRDIMMs

[Image: Crucial DDR4 memory]

DIMM capacities go up. It’s still one of the unwritten rules of computing that remains true even as GPUs and CPUs no longer offer the same yearly performance improvements they once did. I still have the first stick of RAM I ever bought for the first PC I bought with my own money (a 16MB stick of newfangled SDRAM rated for 66MHz that could eventually hit 133MHz if and only if I stuck it in the third slot on my Socket 7 motherboard). Today, the average loaf of bread comes with at least 256MB of digital whole wheat goodness. But even given this inevitable progression, Crucial’s 128GB LRDIMM announcement raises a few eyebrows.

It’s been a few years since we discussed LRDIMMs, so let’s start there. Traditional registered DIMMs buffer only the command and address signals; their DRAM chips still connect directly to the parallel data bus attached to the DRAM controller aboard Intel and AMD processors. LRDIMMs (Load Reduced DIMMs), in contrast, buffer the data lines as well, so the on-module buffer, rather than the DRAM chips themselves, serves as the connection point for the CPU’s onboard memory controller.
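To make the load-reduction idea concrete, here’s a minimal, illustrative Python sketch of how many electrical loads the controller’s data bus sees in each case. It assumes the usual simplification of one data-bus load per rank for RDIMMs and one load per module for LRDIMMs; it is not vendor-validated signal-integrity modeling.

```python
# Toy model of data-bus loading on one memory channel.
# RDIMM:  the register buffers command/address only, so the data bus still
#         sees roughly one electrical load per rank.
# LRDIMM: the buffer isolates the data lines too, so the controller sees
#         roughly one load per module, regardless of rank count.
def data_bus_loads(dimms_per_channel: int, ranks_per_dimm: int, lrdimm: bool) -> int:
    loads_per_dimm = 1 if lrdimm else ranks_per_dimm
    return dimms_per_channel * loads_per_dimm

# Three quad-rank modules on a single channel:
print(data_bus_loads(3, 4, lrdimm=False))  # RDIMM:  12 loads
print(data_bus_loads(3, 4, lrdimm=True))   # LRDIMM:  3 loads
```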

Here’s why that matters: If you’ve ever read up on overclocking or DRAM clocks, you’re probably aware there’s an inverse relationship between the number of DRAM sticks in a system and that system’s maximum stable DRAM clock. It’s not always a 1:1 relationship; whether DIMMs are single-rank or dual-rank (often described as single-sided or double-sided) also matters, for example, and motherboard vendors will sometimes qualify specific RAM from particular manufacturers for higher clocks than other memory on the market. But generally speaking, the more DIMMs per memory channel, the lower the maximum stable memory clock. This typically isn’t a problem for desktops, but it can limit maximum memory configurations in servers. That’s where LRDIMMs come into play, as shown in the diagram below:

[Diagram: LRDIMM architecture. Image courtesy of Dell]
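As a purely hypothetical illustration of that downclocking rule, the relationship can be modeled as a lookup from DIMMs-per-channel to the fastest officially supported data rate. The MT/s values below are invented placeholders rather than any real platform’s qualification table; actual limits come from the CPU vendor and the motherboard’s qualified vendor list.

```python
# Hypothetical downbin table: more DIMMs per channel -> lower supported rate.
# These numbers are illustrative placeholders, not real platform specs.
HYPOTHETICAL_DOWNBIN = {1: 2666, 2: 2400, 3: 1866}  # DIMMs per channel -> MT/s

def max_supported_rate(dimms_per_channel: int) -> int:
    """Return the (hypothetical) fastest supported rate for this population."""
    return HYPOTHETICAL_DOWNBIN.get(dimms_per_channel, 0)

print(max_supported_rate(1))  # 2666
print(max_supported_rate(3))  # 1866
```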

Dell notes that LRDIMMs greatly reduce the per-slot electrical load placed on the CPU’s memory controller, and therefore allow much larger memory configurations. LRDIMMs can also allow for substantially higher operating frequencies, though this depends on the server CPU and platform. Either way, they’re quite useful for big-iron servers, and Crucial is launching some of the biggest iron around. Hitting DDR4-2666 transfer rates might not sound like much when consumer parts are available at 50 percent higher data rates, but Crucial points out that equivalent products from other vendors often top out at DDR4-2133. This RAM also supports ECC, which isn’t exactly surprising given its intended market.
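For a sense of scale, here’s a quick back-of-the-envelope calculation of peak bandwidth per channel, assuming a standard 64-bit DDR4 data bus (8 bytes per transfer) and ignoring the ECC bits, which carry no user data. The 4000 MT/s entry stands in for the roughly 50-percent-faster consumer parts mentioned above.

```python
# Peak theoretical bandwidth of one DDR4 channel at a given data rate.
def peak_gb_per_s(mega_transfers: int) -> float:
    """64-bit bus = 8 bytes per transfer; result in GB/s (decimal)."""
    return mega_transfers * 1e6 * 8 / 1e9

for rate in (2133, 2666, 4000):
    print(f"DDR4-{rate}: {peak_gb_per_s(rate):.1f} GB/s per channel")
# DDR4-2133: 17.1 GB/s, DDR4-2666: 21.3 GB/s, DDR4-4000: 32.0 GB/s
```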


One reason Crucial was able to hit such high densities is its use of TSVs. Through-silicon vias are a packaging technique that runs vertical interconnects directly through stacked integrated circuits (ICs) rather than wiring them together at the package edge. HBM and HBM2, for example, both use TSVs. These DIMMs are rated for CL22 and are built on 20nm process technology; each visible package contains a 4-Hi die stack. It’s interesting to see TSVs used for a broader range of applications, though the very high price of these DIMMs, currently an eye-watering $3,999 each on Crucial’s site, speaks to the economics of deploying the manufacturing technique.
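To see how 4-Hi TSV stacks add up to 128GB, here’s a rough capacity sketch. The 8Gb die size, x4 organization, and 36 package sites are assumptions consistent with 20nm-class DDR4 on a standard 72-bit ECC module, not figures confirmed by Crucial; the last lines simply restate CL22 at DDR4-2666 in nanoseconds.

```python
# Rough capacity math for a 128GB DDR4 LRDIMM built from 4-Hi TSV stacks.
# Assumed (not confirmed by Crucial): 8Gb x4 dies, 36 package sites, 72-bit bus.
die_gbit = 8           # capacity of one DRAM die, in gigabits (assumption)
dies_per_stack = 4     # "4-Hi" stack, per the article
packages = 36          # assumed package sites (18 per side)
bus_bits = 72          # 64 data bits + 8 ECC bits

raw_gbit = die_gbit * dies_per_stack * packages      # 1152 Gb of raw DRAM
data_gbyte = raw_gbit * 64 / bus_bits / 8            # exclude the ECC share
print(f"{raw_gbit} Gb raw, {data_gbyte:.0f} GB usable")  # -> 128 GB usable

# CL22 at DDR4-2666 (1333MHz I/O clock) expressed as an absolute latency:
cas_ns = 22 / (2666e6 / 2) * 1e9
print(f"CAS latency: {cas_ns:.1f} ns")               # ~16.5 ns
```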

The fact that these are DIMMs (and relatively high-latency DIMMs at that) is probably part of why we’re seeing the technology used here: memory modules sit away from the processor, where their heat is comparatively easy to dissipate. One major barrier to stacking memory directly on top of CPUs, for example, is the risk of trapping large amounts of heat at the bottom of the stack. We don’t expect to see consumer memory in such eye-popping configurations any time soon, but it’s interesting to see the technology rolling out in servers.
