Intel’s new Stratix 10 MX FPGA Taps HBM2 For Massive Memory Bandwidth


Intel announced its new Stratix 10 MX FPGA today, marking the first time an FPGA has been available with HBM2 memory on-package. The Stratix 10 MX offers up to 10x more memory bandwidth than competing solutions that rely on DDR4 (512GB/s of aggregate bandwidth across two HBM2 stacks). As with the now-confirmed AMD/Intel team-up to build Vega graphics into certain Intel CPUs, Intel is using its Embedded Multi-Die Interconnect Bridge (EMIB) to connect the various components of the package.

All Stratix 10 MX FPGAs use HBM2, but they offer varying amounts of memory on-package, from 3.25GB (MX 1100) to 16GB (MX 1650, MX 2100). Available SRAM also varies (45-90Mbit), as do the numbers of logic elements, I/O pins, and PCIe 3.0 x16 IP blocks. The point of this particular FPGA family, unsurprisingly, is to offer far more memory bandwidth than you'd typically see on an FPGA, in a smaller physical footprint and at lower power.

According to Intel, this kind of shift is critical to deploying FPGAs in certain spaces. Implementing large memory pools on an FPGA with DDR4 is limited by the number of I/O pins and memory channels you can plausibly fit on a card. HBM2 short-circuits this problem by packing a huge amount of bandwidth into a much smaller form factor. Those of you who have followed the memory standard's evolution may recall that AMD justified adopting it for the Fury X family because it dramatically reduced memory subsystem power consumption (and energy efficiency tests later bore out that the Fury Nano was the smallest, most efficient GPU AMD had shipped in quite some time).
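The pin-count argument can be made concrete with a rough back-of-the-envelope calculation. The figures below are generic HBM2 and DDR4-2666 numbers (a 1024-bit bus per HBM2 stack at 2.0Gb/s per pin, and a 64-bit DDR4 channel at 2666MT/s), not Intel's published specs, so treat this as a sketch of the reasoning rather than a datasheet:

```python
# Per-stack HBM2 bandwidth: 1024-bit bus x 2.0 Gb/s per pin, divided by 8 bits/byte
hbm2_stack_bw = 1024 * 2.0 / 8        # 256.0 GB/s per stack
total_hbm2_bw = 2 * hbm2_stack_bw     # two stacks -> 512.0 GB/s aggregate

# One DDR4-2666 channel: 64-bit bus x 2666 MT/s, divided by 8 bits/byte
ddr4_channel_bw = 64 * 2666e6 / 8 / 1e9   # ~21.3 GB/s per channel

# How many DDR4 channels (each needing its own set of board-level I/O pins
# and DIMM slots) would be required to match two HBM2 stacks?
channels_needed = total_hbm2_bw / ddr4_channel_bw   # ~24 channels

print(f"{total_hbm2_bw:.0f} GB/s HBM2 ~= {channels_needed:.0f} DDR4-2666 channels")
```

Roughly two dozen DDR4 channels' worth of bandwidth in two on-package stacks is the crux of the form-factor argument: that many external channels simply won't fit on a single card.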


The more memory bandwidth you need, the more the comparison tilts in HBM2's favor. At 400GB/s of bandwidth, Intel projects it can reduce platform size by 24x, with power consumption savings of 50 percent at 128GB/s of memory bandwidth and even greater savings at higher bandwidths.

According to Intel, adding the HBM2 buffer to FPGA designs is critical for enabling FPGAs to continue scaling into HPC and other data center designs. To date, HBM2 has been locked up almost exclusively in very high-end products. Only AMD's Vega has tried to bring HBM2 to mainstream graphics cards, and the high price on those GPUs strains the definition of 'mainstream.' We may eventually see the memory technology come to lower-end, cheaper cards, or HBM2 may ultimately be supplanted by GDDR6.


Published at Mon, 18 Dec 2017 21:02:10 +0000
