How HBM split the paths of Samsung, AMD and SK hynix
Published: 02 Sep. 2025, 07:00
Updated: 02 Sep. 2025, 11:35
Nvidia CEO Jensen Huang delivers a speech during the Computex 2025 exhibition in Taipei, Taiwan, on May 19. [AP]
[CHIP REPORT ②]
Korea’s semiconductor industry is undergoing a dramatic shake-up, fueled by the explosive rise of generative AI. Market leaders are losing ground, while once-overlooked underdogs are gaining momentum. This Chip Report series unpacks the forces driving this shift — and explores how the industry’s new hierarchy is likely to take shape in the years ahead. — Ed.
At the Hot Chips symposium in California in 2016, a senior engineer from Micron took the stage and sharply criticized high bandwidth memory (HBM), a fledgling technology that had entered the market only a year earlier. He confidently claimed that the soon-to-launch Hybrid Memory Cube (HMC) would be a game changer.
Samsung Electronics followed, announcing plans to release a more affordable version of HBM that balanced performance with cost. At the time, Samsung had already established dominance in the HBM2 market.
The final presentation came from SK hynix. The company had seen disappointing performance and sales from its HBM2 products. Nevertheless, its presenter quietly introduced plans for HBM3 and made a subtle jab at the two chip giants, suggesting that even a smaller player could snatch candy from a bigger one.
That moment, when HBM was dismissed or doubted, is hard to imagine today. In the age of artificial intelligence, HBM is essential to powering high-performing GPUs designed by Nvidia, earning the moniker "the prince of memory."
Yet, just nine years ago, the three dominant memory makers — Samsung Electronics, SK hynix and Micron — held diverging views on its potential. Those early decisions now define the market.
As of 2024, SK hynix holds 55 percent of the HBM market, followed by Samsung at 40 percent and Micron at just 5 percent, according to JPMorgan.
The push for next-generation memory began in the 2000s as the semiconductor industry sought to overcome the so-called "memory wall" — a bottleneck caused by the widening performance gap between CPUs and memory. Despite CPUs processing data quickly, they often had to wait idly for memory to catch up. Improving memory speeds became essential to advancing overall system performance.
HBM was born from this challenge. Its 18-year development journey illustrates the complexities of the semiconductor industry: competition, cooperation, failure and reinvention.
A union of underdogs
Nvidia CEO Jensen Huang leaves a note reading “JHH LOVES SK HYNIX!” on the display featuring HBM4 and HBM3E after visiting the SK hynix booth at Computex 2025, held at the Nangang Exhibition Center in Taipei, Taiwan, on Aug. 20. [YONHAP]
In 2007, AMD — second in the GPU market — launched a project to develop a GPU equipped with a new type of memory: HBM. Contrary to popular belief, this was not a strategic bet on AI. At the time, the idea of AI-focused GPUs did not exist. It wasn’t until 2011 that researchers from Google, Stanford University and New York University recognized the suitability of GPUs for deep learning. A University of Toronto team led by Geoffrey Hinton, a 2024 Nobel laureate, won a major AI competition using GPUs in 2012.
AMD's motivation was simpler: to increase market share in gaming GPUs.
Meanwhile, Hynix, a predecessor of SK hynix, had only recently emerged from a five-year workout under creditor management following Hyundai Electronics' exit. Financial performance was barely back in the black. Despite this, Hynix agreed in 2010 to become AMD's HBM partner. Interposer technology was sourced from Taiwan's UMC, packaging from ASE and Arizona-based Amkor, and manufacturing from TSMC.
DRAM stacking splits the market
HBM's defining structural feature is the vertical stacking of DRAM dies, connected electrically by thousands of microscopic through-silicon vias (TSVs). This approach distinguished HBM from its competitor, HMC.
Micron was firmly in the HMC camp, and Samsung joined its consortium. Hynix, however, believed HBM would become the mainstream technology due to its operational similarity to conventional DRAM.
But HBM development was expensive and time-consuming. By late 2011, Hynix was back in the red. Internal doubts about the HBM project grew. AMD, too, was facing financial challenges, and the project spanned the tenures of four different CEOs.
It wasn't until Lisa Su, who joined AMD in 2012 and became CEO in 2014, took charge that the project gained momentum. Hynix also accelerated development after its acquisition by SK Group in 2012 under Chairman Chey Tae-won.
The project took eight and a half years to bear fruit. Raja Koduri, then head of AMD’s GPU division, later reflected that the team "boiled soup with stones" to see it through amid internal opposition.
In tandem, SK hynix and AMD worked meticulously to create an industry standard. Believing standardization was essential for market growth, they helped JEDEC, the semiconductor standards body, release the first HBM specification in 2013. In contrast, HMC never achieved standardization, a key reason for its market failure.
In June 2015, AMD CEO Lisa Su unveiled the Radeon R9 Fury X at E3, touting it as the company’s most complex and powerful GPU to date and the first to feature HBM. The tech world took notice. Foreign media remarked that a Korean company had developed HBM, and it wasn't Samsung. Tech firms lined up to request samples from SK hynix.
Yet the debut was underwhelming. Fury X failed to shake up the GPU market, hampered by the high cost of HBM.
Enter Nvidia and Samsung
Nvidia CEO's autograph on Samsung Electronics's GDDR7 graphics memory at Samsung's booth on the fourth day of Nvidia's annual software developer conference in San Jose, California, on March 20. [YONHAP]
SK hynix's brief moment of triumph faded quickly. In January 2016, just six months after SK hynix released its first HBM, Samsung began mass production of HBM2. Nvidia, which had considered HMC, pivoted to HBM and hired former AMD engineers. In April 2016, it launched the Tesla P100 GPU, powered exclusively by Samsung's HBM2.
Nvidia CEO Jensen Huang declared at GTC 2016 that the company had succeeded in designing the world's first GPU for AI acceleration through its partnership with Samsung Electronics.
Though AMD and SK hynix built the first HBM GPU, it was Nvidia and Samsung that ushered in the era of HBM-powered AI GPUs.
By 2017, AMD's new GPUs and Intel's field-programmable gate arrays had also adopted Samsung's HBM2. Meanwhile, SK hynix struggled to pass quality tests for HBM2 with major customers. Emboldened by its early success, the company had implemented aggressive technologies to boost performance, but these proved difficult to optimize. Few customers were willing to adopt products that were not production-ready.
As the HBM2 market expanded, Samsung captured most of the gains. SK hynix's internal morale sank. Executives were rotated out, and the HBM team became an undesirable assignment.
Google, TSMC and missed opportunities
Around 2016, Broadcom approached Samsung with a request: supply HBM2 for Google’s second-generation tensor processing unit (TPU). If Samsung could meet 100 percent of the demand, Broadcom promised exclusivity.
Google had quietly developed the TPU for in-house AI research. The first generation used Double Data Rate 3 (DDR3) memory, while the second, revealed in 2017, required HBM2.
Three companies were involved: Broadcom for design, Samsung for memory and TSMC for fabrication.
But cooperation faltered. In 2017, a Samsung engineer reported that TSMC had refused to grant access to its facility even though the memory issues under investigation involved Samsung's HBM. Tensions escalated.
“Such standoffs were common between 2016 and 2017, with each side deflecting blame and delays lasting up to six months,” said a senior source with knowledge of the matter.
An academic with semiconductor industry experience noted that while TSMC might have been strategically uncooperative, foundries ultimately control production and place the burden of proof on memory suppliers.
HBM demand remained modest at the time. Google's TPU was used internally, and Nvidia's GPU volumes had yet to spike. Inside Samsung, skepticism about HBM grew. It was seen as complex to develop and low in demand.
The underdog rises
By then, SK hynix had recognized that HBM required a fundamentally different approach from standard memory. Whereas traditional DRAM could be supplied to many buyers interchangeably, HBM demanded strict adherence to each customer's timelines and specifications. Failing to meet them meant selling nothing.
Reviving lessons from the early days, SK hynix formed a cross-functional task force for HBM2E, its third-generation product. Top management emphasized organizational agility and cross-team coordination. Frequent internal reviews ensured early-stage manufacturing compatibility.
HBM2E introduced pivotal changes, notably in packaging. While all three memory makers had used thermal compression with nonconductive film (TC-NCF), SK hynix switched to mass reflow molded underfill (MR-MUF), citing improved process efficiency and heat dissipation. The shift proved critical.
Another strength was customization. Though HBM follows JEDEC standards, each customer has distinct requirements. SK hynix adapted its designs accordingly, guided by the principle that "the customer is always right."
HBM2E performed well in the market, but true success arrived with HBM3. In June 2022, SK hynix began mass-producing HBM3, which went into Nvidia's H100 GPU — a runaway success in AI acceleration.
For SK hynix, it marked the beginning of a new era.
This article was originally written in Korean and translated by a bilingual reporter with the help of generative AI tools. It was then edited by a native English-speaking editor. All AI-assisted translations are reviewed and refined by our newsroom.
BY LEE GA-RAM, YI WOO-LIM, PARK HAE-LEE, SHIM SEO-HYUN