Offshore
Photo
Michael Fritzell (Asian Century Stocks)
RT @BryanGreenbaum: Very good listen. @verdadcap seems to be spot on here. The Bubble You Can't Exit Dan Rasmussen on the Private Equity Trap https://t.co/REYGEMUZ9y
tweet
Photo
Jukan
Bank of America forecasts that, due to the sharp rise in DRAM prices, smartphone shipments in 2026 will decline by a high single-digit percentage, while PC shipments will drop by a low double-digit percentage (around 10–13%).

What surprised me when I saw this table was that, contrary to the fear-mongering from many sell-side analysts, even though automotive DRAM prices have risen significantly, vehicle (automotive) shipments are actually expected to show solid growth. (Was I being too pessimistic?)

And… the decline in low-end smartphone shipments seems much more severe than I thought… they’re projecting it could drop by nearly 50%…
tweet
Michael Fritzell (Asian Century Stocks)
Risk of expropriation probably much lower in retail / services. But you’ll still have IDR exposure and Philippine equities are just as cheap.

We shall look for an extended buy list as this is the most interesting market globally
- Olivier (Emerging Value)
tweet
Michael Fritzell (Asian Century Stocks)
RT @gvancomp: A stable government (doesn't mean we have to like them)
Banks will start lending again
Tourism will continue to look good, as in 1H25 BKK was all over the news for the earthquake.
THB will weaken as the gold pump is done.

Banks, retail and tourism stocks are among the most exposed to the outcome of Thailand's February 8 election https://t.co/U6yvoebggx
- Bloomberg
tweet
Photo
Michael Fritzell (Asian Century Stocks)
New Delfi product for the lunar new year https://t.co/jl5hH97B1R
tweet
Michael Fritzell (Asian Century Stocks)
Well, if you want great Chinese stock picks, then subscribe to a Substack run by a "China expert".

However: 👇
tweet
Michael Fritzell (Asian Century Stocks)
RT @douglaskimkorea: Did you know that the Korean stock market just surpassed Germany to become the 10th largest stock market in the world? Korea's stock market - US$3.25 trillion vs Germany's stock market - US$3.22 trillion....and Korea is still classified as an Emerging Market (not Developed Market) by the MSCI.
tweet
Jukan
Rumor: Starting with TPU v8, Google will no longer use HBM?

The move was reportedly triggered by the global HBM capacity shortage, which is expected to be unable to meet AI demand growth over the next 2 to 3 years. At the same time, traditional HBM is limited by its fixed placement on the motherboard, resulting in a capacity ceiling.

Accordingly, Google will develop a new solution to be launched in 2027. The physical form involves removing HBM and establishing independent DRAM memory cabinets (containing 16–32 Trays), dynamically allocating memory through photonic technology.

This technology deconstructs the originally single and simple HBM component into three parts:

- Transport Layer: Employs all-optical interconnects, ensuring cross-cabinet communication efficiency through OCS (Optical Circuit Switching) and customized CXL protocols. The CPUs, GPUs, and memory modules of the memory pool share a single set of protocols.

- Storage Layer: Utilizes large-scale DRAM arrays to replace HBM, significantly increasing the addressing space. The memory corresponding to a single TPU can leap from 192GB/256GB to 512GB or even above 768GB.

- Control Layer: Adds dedicated memory-side CPU servers for management.

Compared to the native "TPU+HBM" direct connection, this "three-in-one" split solution results in a compute efficiency loss of less than 2%.

Regarding this technology: first, OCS enables high-speed switching in an all-optical environment, achieving bandwidth and latency close to a direct HBM or silicon-photonic HBM connection. Traditional Ethernet over copper typically has a latency of over 200 nanoseconds, while an OCS all-optical switching network can reduce latency to below 100 nanoseconds; that is why it matters.

Second, in this architecture, there is a dual-side CPU architecture (Tier-1 and Tier-2 CPUs):

Tier-1 CPU (TPU side): Located on the TPU motherboard, primarily responsible for interconnect communication between TPUs.

Tier-2 CPU (Memory pool side): Most likely deployed on the memory server (DRAM server) side, specifically responsible for communication coordination between TPUs and the distributed memory addressing space.

The Tier-2 CPU is deployed independently because, logically, the original TPU motherboard CPU could still read the memory pool, but using the old CPU would involve complex protocol conversions (such as translation between PCIe signals and CXL-like protocols), creating efficiency bottlenecks.

Third, the interface is completed directly at the chip level through a "photonic packaging interface." This method is similar to CPO (Co-Packaged Optics) technology, integrating optical interfaces directly within the package of chips like the CPU/TPU, replacing traditional external optical modules. The first supplier contacted during the solution design stage was Lightmatter, with multiple suppliers to follow.

This solution, which removes HBM and changes it to an external DRAM memory pool, actually converts what was originally ultra-high-frequency motherboard-level access into "cross-cabinet access." Theoretically, this would generate huge latency and efficiency losses. However, this is not the case. Specifically, complex electrical/optical conversions exist between chips, hosts, and ring networks; these hardware-level protocol conversions and settings generate significant hidden overhead invisible to users. After adopting the DRAM memory pool solution, although CXL translation is introduced, many cumbersome hardware protocol conversion steps from the original architecture are removed.

If HBM prices drop and performance improves due to capacity expansion by manufacturers like Samsung and Hynix over the next two years, Google is unlikely to return to the HBM solution due to cost considerations. Google does not believe that upstream manufacturers like Hynix, Samsung, and Micron will subvert their own main product line pricing or mass production strategies to accommodate one or two major customers. They might release some profit margin, but they will not cooperate to an extreme degree.

This solution also reduces reliance on CoWoS because HBM is no longer needed. At the same time, the HBM chips originally on the silicon interposer substrate occupied a large area; after removing HBM, the saved CoWoS area can be entirely given to the TPU's Compute Core. Thus, within the same physical dimensions, a TPU chip with stronger performance and a larger area can be made, no longer restricted by the physical size of HBM. Regarding memory, the V7 generation had a single HBM capacity of about 192GB, and V8A is about 256GB, but through memory pooling, the memory per TPU can easily double to 512GB or even reach 768GB or more.

The solution is expected to be implemented next year, with the final route determined before March 5. The initial deployment ratio is about 30%, with 100% replacement expected to be achieved in 3 years.

Sector Beneficiaries:

- OCS (Optical Engine): Lightmatter, as the primary supplier, provides photonic packaging interfaces, integrating optical interfaces within the chip package to replace external modules.

- CXL-like: Requires CXL-like chips (MXC chips) to achieve the interconnect between TPUs and the memory pool, costing $100 per chip. One chip manages two channels for two 256GB memory modules, matching the TPU and memory side synchronously. If it is 512GB, two MXC chips are needed; for 768GB, four chips.

- DRAM Modules: Total GB content per TPU increases significantly.

- CPU: Each memory Tray needs to be equipped with a CPU for scheduling; high performance is not required here, and ARM-based CPUs can be used.

- PCB: Independent DRAM cabinets require large, multi-layer PCBs to carry a large number of DIMM slots.

Source: 国泰海通 (Guotai Haitong)
tweet
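The rumor above claims a compute-efficiency loss under 2% despite moving memory from on-package HBM to cross-cabinet DRAM reached over sub-100 ns OCS links. A toy back-of-envelope model can show how such a claim could hold; only the latency figures come from the rumor, while the access rate and overlap fraction are hypothetical parameters chosen for illustration, not anything Google has disclosed:

```python
# Toy model of the rumored pooled-DRAM access path. All parameters are
# hypothetical except the ~100 ns OCS latency figure cited in the rumor;
# this is a sketch of the arithmetic, not Google's actual design.

def effective_efficiency_loss(extra_latency_ns: float,
                              accesses_per_ms: float,
                              overlap_fraction: float) -> float:
    """Fraction of compute time lost to remote-memory stalls.

    extra_latency_ns  -- added round-trip latency vs local HBM (rumor: <100 ns over OCS)
    accesses_per_ms   -- remote accesses per millisecond of compute that miss
                         on-chip buffers (assumed)
    overlap_fraction  -- share of that latency hidden by prefetch/compute
                         overlap (assumed)
    """
    exposed_stall_ns = extra_latency_ns * accesses_per_ms * (1.0 - overlap_fraction)
    return exposed_stall_ns / 1e6  # as a fraction of 1 ms (1e6 ns) of compute

# With 100 ns of extra latency, 1,000 exposed remote accesses per ms, and 90%
# of the latency hidden by overlap: 100 * 1000 * 0.1 = 10,000 ns lost per
# 1,000,000 ns of compute, i.e. a 1% loss, consistent with the "<2%" claim.
loss = effective_efficiency_loss(100.0, 1_000.0, 0.9)
print(f"{loss:.2%}")  # → 1.00%
```

Under these assumptions the claim is arithmetically plausible; the real question is whether the overlap fraction can be kept that high in practice, which the rumor does not address.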
Jukan
I personally don’t think this rumor is true, but I’m sharing it because I think you should be aware of it.
tweet
Photo
Jukan
Samsung and SK Hynix to Accelerate Conversion Investment in Cutting-edge NAND in Q2

Samsung Electronics and SK Hynix are set to begin large-scale conversion investments in cutting-edge NAND flash. While these plans were previously delayed in favor of DRAM, concrete investment schedules are now being finalized. This strategy aims to address the surging demand in the NAND market, driven primarily by the AI industry.

According to industry sources on the 2nd, both Samsung and SK Hynix plan to proceed with conversion investments for cutting-edge NAND during the second quarter of this year.

Samsung Electronics began mass production of its 280-layer V9 (9th generation) NAND in September 2024. However, current production capacity remains low, estimated at around 15,000 wafers per month (wpm), as initial production lines at the Pyeongtaek campus were limited due to insufficient market demand at the time.

Starting in Q2, Samsung will invest in expanding V9 NAND capacity, focusing on the X2 line in Xi’an, China. This line currently produces older 6th and 7th-generation NAND. The nearby X1 line has mostly completed its transition to 8th-generation NAND.

The scale of the discussed conversion investment is approximately 40,000 to 50,000 wpm. Given the timing of equipment investment, V9 NAND is expected to enter the "ramp-up" (mass production expansion) phase next year.

A semiconductor industry official stated, "Samsung initially planned the V9 transition at the Xi’an X2 line for Q1, but the schedule was pushed back to Q2. With conversion investments also being prepared at Pyeongtaek Campus 1 (P1), the proportion of V9 production is expected to increase significantly next year."

SK Hynix is also planning a transition to 321-layer 9th-generation NAND in Q2. The primary goal is to secure a V9 production capacity of approximately 30,000 wpm at the M15 fab in Cheongju. Considering current capacity is around 20,000 wpm, this represents a substantial investment.

Industry experts explained, "Both Samsung and SK Hynix are planning conversion investments to meet the growing demand for cutting-edge NAND. While capital expenditure (CAPEX) strategies have focused on DRAM, NAND is also rapidly experiencing supply shortages."
tweet
Photo
Jukan
SCMP: DeepSeek’s next-generation model reportedly has trillions of parameters, and the sharp increase in model size has slowed down training speed. As a result, the DeepSeek model expected around the Lunar New Year will likely be a minor update to V3. https://t.co/Hq57648BSb

New: Almost every frontier Chinese AI lab is racing to release major new models before the Lunar New Year, including Zhipu GLM-5 and MiniMax M2.2. https://t.co/q8N31dhbFb

Meanwhile, despite widespread speculation about V4, DeepSeek is expected to only release a minor upgrade to V3 due to additional time needed to train its next flagship trillion-parameter model, a source said.

h/t Beijing colleague Ben Jiang
- Vincent Chow
tweet
Michael Fritzell (Asian Century Stocks)
RT @ArenaManCapital: Conclusion thus far: IMO, LLMs suffer from consensus thinking. It is hard to get an LLM to produce original deep thought. Index/momentum hugging investors might have trouble competing, but concentrated, original thinking investors who know their subject matter are safe.

Today, what edges do humans have over machines in investing? The edges I see in machines over humans are emotional and research/knowledge.
- ARENA MAN CAPITAL
tweet
Photo
Michael Fritzell (Asian Century Stocks)
RT @DavidInglesTV: Markets in Asia are getting shredded on Monday. Indonesia and Korea down over 5% each; commodities tossed out the window https://t.co/j4p6urVQHi
tweet