Faster memory is a focal point in the race to boost application performance, and an industry consortium aims to make computers zippier with a new specification proposed on Tuesday.
The Hybrid Memory Cube Consortium, which aims to replace the standard DDR3 memory modules found in computers today, has proposed a faster and more power-efficient specification for its emerging Hybrid Memory Cube (HMC) memory technology.
The HMC Gen2 specification doubles the throughput of the original specification, which itself could deliver 15 times the bandwidth of a standard DDR3 module while consuming 70 percent less energy. The new specification could speed up calculations in supercomputers, boost in-memory computing for applications such as databases, and deliver faster responses to web requests.
No, this isn't the Borg
The HMC technology is based on current DDR3 DRAM, but instead of being placed flat on a motherboard, the memory chips are stacked into a cube. The stacked chips are linked by vertical, wire-like connections called through-silicon vias (TSVs), and the cube is paired with an advanced memory controller.
HMC provides a much-needed throughput and power-efficiency upgrade to current memory implementations, and is considered a bridge to emerging technologies like MRAM (magnetoresistive RAM), RRAM (resistive RAM) and PCM (phase-change memory), which are still years away from practical deployment. Those newer forms of memory can retain data even after devices are shut off, which DRAM cannot do.
The new HMC Gen2 specification boosts memory throughput to 30Gbps (gigabits per second), double that of the previous specification, over distances of up to 20 centimeters. The Gen2 specification is due to be finalized by the middle of this year and will replace the version released in the middle of last year.
The development of HMC was led by Samsung and Micron Technology, and members of the consortium include Microsoft, ARM, Altera and Xilinx.
The HMC memory is more relevant to servers than to laptops, said Mike Black, technology strategist at Micron. Servers carry more memory and need more throughput than laptops do.
“System designers are looking for access to very high memory bandwidth,” Black said. “It allows them to get access to memory bandwidth with fewer pins.”
The new HMC specification will also reduce complexity in board design, and stacking memory into cubes will lower the cost of building systems, Black said.
Memory cubes are real
Micron has released HMC products with multiple memory channels and a throughput of 160GBps (gigabytes per second). With the new specification, the throughput of those products could double, Black said.
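As a rough sanity check on how a per-lane signaling rate rolls up into aggregate figures like these, here is a minimal back-of-envelope sketch. The 16-lanes-per-link and 4-links-per-cube values are illustrative assumptions not stated in this article, so the totals are ballpark numbers rather than Micron's exact product figures; the point is only that doubling the per-lane rate doubles the cube's aggregate throughput.

```python
# Back-of-envelope: convert a per-lane signaling rate into an aggregate
# cube bandwidth. Lane and link counts below are illustrative assumptions;
# real HMC products vary by configuration.

def aggregate_gb_per_s(lane_gbps, lanes_per_link=16, links=4, full_duplex=True):
    """Aggregate cube bandwidth in gigabytes per second (GB/s)."""
    directions = 2 if full_duplex else 1
    total_gbits = lane_gbps * lanes_per_link * links * directions
    return total_gbits / 8  # 8 bits per byte

gen1 = aggregate_gb_per_s(15)  # original spec: roughly 15Gbps per lane
gen2 = aggregate_gb_per_s(30)  # Gen2 spec: 30Gbps per lane

print(f"Gen1-style cube: {gen1:.0f} GB/s")  # ~240 GB/s with these assumed counts
print(f"Gen2-style cube: {gen2:.0f} GB/s")  # ~480 GB/s -- double, as expected
```

Whatever the actual lane and link counts in a given product, the relationship is linear, which is why doubling the per-lane rate in Gen2 roughly doubles the cube's total throughput.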
HMC could also find use in the networking sector, in systems that need more memory to buffer high-volume traffic, Black said.
The new HMC specification comes as server makers and chip makers prepare to adopt the latest DDR4 memory, which provides 50 percent more bandwidth and roughly 35 percent power savings compared to DDR3. Intel is adding DDR4 support to its server chips in the third quarter, and companies like Samsung and Micron are already manufacturing the memory modules. HMC will be able to support DDR4 memory.
Data transfers and power efficiency in HMC will improve further as DDR4 is widely adopted, and the two technologies align well, Black said.