Jukan
NVIDIA's AI Accelerator 'Tiering' Coming... Focus on HBM4 Processing Speed
Big tech companies including NVIDIA are expected to devise a 'high-performance-centric' memory supply strategy to maximize next-generation artificial intelligence (AI) accelerator performance. In this scenario, Samsung Electronics—which holds a technical edge in specifications within the 6th-generation High Bandwidth Memory (HBM4) market—stands to benefit.
According to industry sources on the 19th, NVIDIA is reviewing a 'Dual Bin' structure to simultaneously secure HBM4 supply stability and performance. Dual binning is a strategy where chips based on the same design are supplied in top-tier and second-tier grades based on speed, power efficiency, and other criteria.
Big tech companies are expected to apply 'top bin' chips rated at 11.7 Gbps or above to their flagship products, while using 'second bin' chips in the 10 Gbps range for complementary products in parallel.
In NVIDIA's case, rather than expanding lower-spec volume, the company is likely to focus on high-performance bins for its flagship products—driven by the symbolic importance of achieving peak AI accelerator performance.
Big tech firms are grappling with memory bottlenecks as they develop next-generation AI accelerators. While compute units (GPUs) operate at high speeds, memory transfer speeds cannot keep pace, degrading overall system efficiency.
Samsung Electronics' HBM4 operates at 11.7 Gbps on paper. It utilizes 1c (6th-generation 10nm-class) DRAM and a 4nm-based base die. This exceeds the JEDEC standard of 8 Gbps by approximately 46%. It represents a 1.22x improvement over HBM3E (9.6 Gbps), with prospects of achieving up to 13 Gbps going forward.
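The article's percentage and multiplier claims can be checked with quick arithmetic. A minimal sketch, using the per-pin rates cited above; the per-stack bandwidth figure assumes the JEDEC HBM4 2,048-bit interface, which is not stated in the article:

```python
# Sanity-check the HBM4 speed comparisons (per-pin rates from the article).
samsung_hbm4_gbps = 11.7   # Samsung HBM4 per-pin data rate
jedec_base_gbps = 8.0      # JEDEC HBM4 baseline per-pin rate
hbm3e_gbps = 9.6           # HBM3E per-pin rate cited in the article

over_jedec = (samsung_hbm4_gbps / jedec_base_gbps - 1) * 100   # ~46%
vs_hbm3e = samsung_hbm4_gbps / hbm3e_gbps                      # ~1.22x

# Per-stack bandwidth, assuming a 2,048-bit (256-byte) HBM4 interface:
stack_bw_gbs = samsung_hbm4_gbps * 2048 / 8                    # GB/s per stack

print(f"{over_jedec:.1f}% over JEDEC, {vs_hbm3e:.2f}x vs HBM3E, "
      f"{stack_bw_gbs:.0f} GB/s per stack")
```

Both of the article's figures check out: 11.7 / 8 gives roughly a 46% uplift over the JEDEC baseline, and 11.7 / 9.6 is about 1.22x over HBM3E.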
An industry official explained, "Rather than simply focusing on volume procurement, NVIDIA is prioritizing accelerator performance maximization," adding that "Samsung Electronics' influence will strengthen as it can stably produce top-bin parts."
Michael Fritzell (Asian Century Stocks)
https://t.co/PCUaPMVE9G
@chitchatstocks E-Ink 8069 - DaBao
AkhenOsiris
Smelling a software rally with @dalibali2 posting so much... man's trying to will it into existence
memenodes
Jensen Huang when the market needs saving
https://t.co/xLEOzbmlsx
JUST IN: Nvidia CEO says he’s preparing new chips "the world has never seen before" - Kalshi
memenodes
Rare footage of the Black Khan of Mongolia making his first visit to the Ottoman Empire in the year 1440. https://t.co/eipakJhDJ4
Jukan
"Adapting to a backside power delivery process requires fundamentally redesigning the power routing architecture, which is not a simple port but something closer to a full IP redevelopment."
💯💯
https://t.co/DM9rhrQ6ua - Damnang2
Damnang2 (@damnang2) on X: Intel Foundry: A Last Chance