Will AMD fully embrace HBM, on both CPU and GPU?
Rumors recently surfaced online that AMD's next-generation Zen 4-based EPYC Genoa processor may be equipped with HBM in order to compete with Intel's next-generation server CPU, Xeon Sapphire Rapids. Coincidentally, a recent Linux kernel patch revealed that AMD's next-generation Instinct MI200 GPU, based on the CDNA 2 architecture, will also use HBM2e, with up to 128 GB of memory. This suggests AMD is set to embrace HBM across the server market.
HBM has parted ways with the consumer graphics market
HBM, or high-bandwidth memory, is a high-performance DRAM whose research and development were first championed by AMD. To realize this vision, AMD partnered with SK Hynix, which had experience in 3D-stacking production processes, and together with interconnect and packaging manufacturers they jointly developed HBM.
Fiji GPU / AMD
AMD was also the first manufacturer to bring HBM to the GPU market, applying it in its Fiji GPU. Then in 2016, Samsung became the first to mass-produce HBM2, but AMD was upstaged by NVIDIA, which beat it to market in applying the new memory standard in its Tesla P100 accelerator card.
At that time, the advantages and disadvantages of HBM were both very clear. Its bandwidth advantage was gradually being eroded by GDDR6, while design difficulty and cost remained hurdles that were hard to overcome. Although these costs are not the largest item in a high-end graphics card's bill of materials, they are far more painful for mid-range and low-end cards. Even so, AMD did not give up on HBM2, continuing to use it in its Vega graphics cards.
However, that may have been the last time we saw HBM on a consumer GPU. AMD has not used HBM in any of its subsequent RDNA-architecture products; only its CDNA-based GPUs for accelerators still use it.
Why the server market?
How did HBM take root in the server market? One of the most suitable application scenarios for HBM is an environment with a limited power budget but a demand for maximum bandwidth, which perfectly matches artificial-intelligence computing in HPC clusters and large-scale, compute-intensive data centers.
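To see why HBM wins on bandwidth in such scenarios, a rough back-of-the-envelope comparison helps. The sketch below uses typical published figures for an HBM2e stack and a GDDR6 chip (these example numbers are general industry figures, not specifications quoted in this article): HBM trades a very wide, slow interface for GDDR's narrow, fast one, and the wide interface wins on bandwidth per device and per watt.

```python
# Illustrative peak-bandwidth arithmetic. The figures below are typical
# public spec-sheet values, used here only as an example.

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) x per-pin data rate (Gbit/s) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# One HBM2e stack: a 1024-bit interface at roughly 3.2 Gbit/s per pin.
hbm2e_stack = peak_bandwidth_gbs(1024, 3.2)   # about 410 GB/s per stack

# One GDDR6 chip: a 32-bit interface at roughly 16 Gbit/s per pin.
gddr6_chip = peak_bandwidth_gbs(32, 16.0)     # 64 GB/s per chip

print(f"HBM2e stack: {hbm2e_stack:.0f} GB/s, GDDR6 chip: {gddr6_chip:.0f} GB/s")
```

A single HBM2e stack thus delivers the bandwidth of several GDDR6 chips, at lower signaling rates and therefore lower I/O power, which is exactly the trade-off data-center accelerators want.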
Memory comparison of different A100 configurations / NVIDIA
This is why companies with data-center businesses continue to use HBM. NVIDIA still uses HBM2 and HBM2e in its powerful server GPU, the A100, and may continue to do so in the next-generation Hopper architecture. Intel's as-yet-unreleased Xe-HP and Xe-HPC GPUs are also rumored to use HBM.
However, the consumer GPUs of both manufacturers have uniformly avoided HBM in favor of GDDR6 and GDDR6X. Presumably, neither wants to repeat AMD's detour.
AMD patents / AMD
Talk of AMD bringing HBM to the CPU is not groundless, either. In a patent AMD filed last year, HBM appears in the chip design. Intel has likewise officially confirmed that its competing Xeon Sapphire Rapids server CPU will use HBM, although mass production will not come until 2023. All of this shows how "sweet" HBM is in the server market: everyone has begun bringing HBM to the CPU.
Although JEDEC, which sets the standard, has not yet released the HBM3 specification, SK Hynix, which has been working on the next generation of HBM, disclosed the latest HBM3 details in June this year, showing that HBM is set for further performance improvements.
HBM2E and HBM3 performance comparison / SK Hynix
SK Hynix's ability to achieve such a large performance improvement likely stems from the patent license agreement it signed with Xperi last year. The agreement covers DBI Ultra 2.5D/3D interconnect technology, which can be used in the development of 3DS, HBM2, HBM3, and subsequent DRAM products. Traditional copper-pillar interconnects achieve only about 625 connections per square millimeter, while DBI Ultra can achieve 100,000 in the same area, roughly a 160-fold increase in density.
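Those density figures imply a dramatic shrink in interconnect pitch. The sketch below derives the approximate pitch from each density, assuming a uniform square grid of connections (an assumption for illustration; the actual bump layouts are not described in the article):

```python
import math

def pitch_um(density_per_mm2: float) -> float:
    """Approximate pitch in micrometers for a uniform square grid
    of `density_per_mm2` connections per square millimeter:
    pitch = 1 / sqrt(density) mm, converted to um."""
    return 1.0 / math.sqrt(density_per_mm2) * 1000

copper_pillar = pitch_um(625)       # 625/mm^2  -> 40 um pitch
dbi_ultra = pitch_um(100_000)       # 100k/mm^2 -> ~3.2 um pitch
density_ratio = 100_000 / 625       # 160x more connections per area

print(f"copper pillar: {copper_pillar:.1f} um, DBI Ultra: {dbi_ultra:.1f} um, "
      f"ratio: {density_ratio:.0f}x")
```

In other words, going from 625 to 100,000 connections per square millimeter means shrinking the pitch from about 40 µm to about 3 µm, which is what enables far wider, denser die-to-die links for stacked DRAM.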
Since JEDEC announced the HBM2E standard in 2018, HBM has gone nearly three years without an update; Samsung even announced HBM-PIM, which builds an artificial-intelligence engine into the memory, in February this year. As for whether HBM3 can continue to dominate the server field, the share of HBM in the server products planned by the major manufacturers already gives the answer.