Samsung unveils HBM4E at Nvidia GTC, raises bar for AI memory

Samsung Electronics' sixth-generation high bandwidth memory, HBM4 [SAMSUNG ELECTRONICS]

 
Samsung Electronics debuted its seventh-generation high bandwidth memory, HBM4E, at the Nvidia GTC 2026 technology conference in San Jose, California, on Monday, ahead of planned sample shipments in mid-2026. 
 
It marks Samsung’s first public showcase of the next-generation memory technology ahead of mass production and positions the company early in the race to supply advanced memory for AI systems alongside rivals such as SK hynix.
 
Samsung said its HBM4E chips deliver processing speeds of 16 gigabits per second (Gbps) per pin, a 36 percent improvement over the previous generation’s 11.7 Gbps and double the current industry standard of 8 Gbps. The chips also achieve 4.0 terabytes per second of bandwidth, the total amount of data the memory can transfer each second.
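The relationship between the per-pin speed and the total bandwidth figure can be checked with simple arithmetic. A minimal sketch, assuming a 2,048-bit interface width per stack (standard for the HBM4 generation, though not stated in the article):

```python
# Sketch: reproduce Samsung's 4.0 TB/s bandwidth figure from the
# 16 Gbps per-pin speed. The 2,048-bit interface width is an
# assumption based on the HBM4-generation standard, not the article.
GBPS_PER_PIN = 16        # per-pin speed reported for HBM4E
IO_WIDTH_BITS = 2048     # assumed interface width per stack

total_gbits_per_s = GBPS_PER_PIN * IO_WIDTH_BITS   # 32,768 Gb/s
total_tbytes_per_s = total_gbits_per_s / 8 / 1000  # bits -> bytes -> TB

print(total_tbytes_per_s)  # 4.096, matching the cited ~4.0 TB/s
```

Under the same assumption, the HBM4 figure of 11.7 Gbps per pin works out to roughly 3 TB/s per stack.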
 
 
Samples of the standard HBM4E products are expected to be delivered to customers in mid-2026, the company said. Custom versions of the chips — designed for specific processors and known as application-specific integrated circuit (ASIC) solutions — are scheduled to begin initial wafer production across multiple projects in the second half of the year.
 
Samsung also publicly confirmed that its sixth-generation HBM4 memory will be integrated into Vera Rubin, the AI computing platform from Nvidia. Mass production of the HBM4 chips began in February.
 
The HBM4 lineup combines 10-nanometer-class 1c dynamic random access memory (DRAM), Samsung’s advanced memory manufacturing node, with a 4-nanometer logic process used to control memory operations. 
 
The sixth-generation chips deliver processing speeds of 11.7 Gbps, which Samsung said can be increased to 13 Gbps through optimization.
 
At the event, Samsung also showcased hybrid copper bonding, a next-generation packaging technology designed to improve how stacked memory dies are connected. The technique directly bonds copper surfaces between memory layers, allowing stacks of 16 layers or more while improving thermal dissipation by over 20 percent compared with the current industry method known as thermal compression bonding.
 
Samsung Electronics' Socamm2 [SAMSUNG ELECTRONICS]

Samsung Electronics' Socamm2 [SAMSUNG ELECTRONICS]

 
Samsung’s booth also featured a dedicated “Nvidia Gallery” highlighting the companies’ supplier partnership. Alongside HBM4, the company displayed several products designed for AI infrastructure, including Socamm2 low-power server DRAM modules and the PM1763 solid-state drive.
 
Samsung Electronics' PM1763 solid-state drive [SAMSUNG ELECTRONICS]

The company also presented LPDDR5X and LPDDR6 mobile memory aimed at AI workloads on personal devices. LPDDR5X delivers speeds of up to 25 Gbps while cutting power consumption by as much as 15 percent; LPDDR6 pushes memory bandwidth further, to between 30 and 35 Gbps.

BY LEE JAE-LIM [[email protected]]