Jukan
Samsung's HBM4 Quality Testing on Track… When Will SK Hynix Follow?
Samsung Electronics will begin shipping High Bandwidth Memory 4 (HBM4) next week. If everything proceeds as planned, this will be the industry's first mass-production shipment. All volumes delivered to clients to date have been at the sample stage.
Despite this, SK Hynix is expected to maintain its advantage through the 6th-generation HBM4. SK Hynix is also expected to deliver within the first quarter. However, starting with the 7th-generation HBM4E, the situation could potentially reverse.
According to industry sources on the 9th, Samsung Electronics is in the final stages of preparation, targeting HBM4 shipment on the 19th of this month. The products will be delivered to major clients including Nvidia.
However, quality testing has reportedly not yet been passed. Samsung Electronics expects test results to come in next week and has preemptively begun preparation work in anticipation.
The industry is already treating Samsung's HBM4 supply to Nvidia as a foregone conclusion. With competitor Micron's HBM4 supply expected to face difficulties, observers note that Samsung's opportunities could grow even further.
Samsung's rapid HBM4 development was largely driven by the adoption of 10nm-class 6th-generation (1c) DRAM as the core die and a base die built on 4nm process technology.
This enabled Samsung to meet the operating speed and product specifications demanded by Nvidia. Nvidia has continuously pushed for higher HBM specifications during the development of its new AI accelerator, "Vera Rubin." Samsung was able to respond without difficulty, having designed the product with higher specs from the outset.
Samsung's HBM4 has achieved operating speeds exceeding 11Gbps, which is evaluated as superior to competitors in terms of speed. Thermal management performance has also reportedly improved significantly.
Competitor SK Hynix is also finalizing related work with a target of shipping HBM4 within the first quarter. Products are expected to ship by the end of March at the latest.
Despite Samsung's preemptive HBM4 shipment, the market still expects SK Hynix to maintain its lead in HBM4 market share. This is because SK Hynix already secured a substantial volume commitment in its contract with Nvidia last year.
SK Hynix also holds a relative advantage in terms of volume response capability. SK Hynix uses 10nm-class 5th-generation (1b) DRAM as its HBM4 core die, enabling rapid production from existing lines. In contrast, Samsung uses 1c DRAM for its HBM4 core die, meaning capacity expansion is inevitable for additional supply. Samsung's 1c DRAM capacity is currently estimated at 60,000 to 70,000 wafers per month.
A semiconductor industry insider said, "We understand that SK Hynix will take more than 60% of HBM4 volumes destined for Nvidia this year," adding, "Since HBM contracts are typically made on an annual basis, this structure will likely continue throughout the year." The source added, "With Micron appearing to lose ground in the HBM4 market, there are predictions that Samsung could secure a 30–40% market share."
Meanwhile, the industry expects Samsung to significantly expand its market share starting with HBM4E, when demand for custom HBM begins in earnest. Custom HBM refers to products being developed to reflect diverse customer requirements — such as capacity, bandwidth, and additional features — beyond conventional general-purpose HBM. The first products expected to launch are custom HBMs featuring each client's desired intellectual property (IP) integrated into the logic die.
Illiquid
Published a note on our top pick in Singapore yesterday. The Economic Development Board also published its year-in-review, which flags the exact tailwinds this company is riding on.
"Of the S$14.2 billion in [capital investment committed in 2025], about S$12.1 billion came from manufacturing-related projects. Semiconductor manufacturers set up greenfield plants and expanded existing facilities...with these investments having positive spillover effects on the precision engineering sector. There were also projects responding to... the diversification of supply chains for semiconductor equipment."
https://t.co/M0xX1oCZ93
Javier Blas
A bit of extra context:
A year ago, BP was still buying back $1.75 billion of stock every quarter. That was reduced to $750 million. And now that's cut to zero.
Importantly, BP has also withdrawn its guidance of returning 30%-40% of operating cash flow to shareholders.
BREAKING: British oil major BP suspends its $750 million quarterly buyback. The board says it will "fully allocate excess cash to accelerate strengthening of our balance sheet."
God of Prompt
RT @godofprompt: 🚨 Holy shit… Stanford just published the most uncomfortable paper on LLM reasoning I’ve read in a long time.
This isn’t a flashy new model or a leaderboard win. It’s a systematic teardown of how and why large language models keep failing at reasoning even when benchmarks say they’re doing great.
The paper does one very smart thing upfront: it introduces a clean taxonomy instead of more anecdotes. The authors split reasoning into non-embodied and embodied.
Non-embodied reasoning is what most benchmarks test and it’s further divided into informal reasoning (intuition, social judgment, commonsense heuristics) and formal reasoning (logic, math, code, symbolic manipulation).
Embodied reasoning is where models must reason about the physical world, space, causality, and action under real constraints.
Across all three, the same failure patterns keep showing up.
> First are fundamental failures baked into current architectures. Models generate answers that look coherent but collapse under light logical pressure. They shortcut, pattern-match, or hallucinate steps instead of executing a consistent reasoning process.
> Second are application-specific failures. A model that looks strong on math benchmarks can quietly fall apart in scientific reasoning, planning, or multi-step decision making. Performance does not transfer nearly as well as leaderboards imply.
> Third are robustness failures. Tiny changes in wording, ordering, or context can flip an answer entirely. The reasoning wasn’t stable to begin with; it just happened to work for that phrasing.
One of the most disturbing findings is how often models produce unfaithful reasoning. They give the correct final answer while providing explanations that are logically wrong, incomplete, or fabricated.
This is worse than being wrong, because it trains users to trust explanations that don’t correspond to the actual decision process.
Embodied reasoning is where things really fall apart. LLMs systematically fail at physical commonsense, spatial reasoning, and basic physics because they have no grounded experience.
Even in text-only settings, as soon as a task implicitly depends on real-world dynamics, failures become predictable and repeatable.
The authors don’t just criticize. They outline mitigation paths: inference-time scaling, analogical memory, external verification, and evaluations that deliberately inject known failure cases instead of optimizing for leaderboard performance.
But they’re very clear that none of these are silver bullets yet.
The takeaway isn’t that LLMs can’t reason.
It’s more uncomfortable than that.
LLMs reason just enough to sound convincing, but not enough to be reliable.
And unless we start measuring how models fail, not just how often they succeed, we’ll keep deploying systems that pass benchmarks, fail silently in production, and explain themselves with total confidence while doing the wrong thing.
That’s the real warning shot in this paper.
Paper: Large Language Model Reasoning Failures
Jukan
《GF International Electronics & Communications》
AI Networking: NVIDIA's Accelerated Roadmap, CPO Technology Debate Heats Up
Key CPO Takeaways from Recent Earnings Calls: Coherent mentioned it has secured a significant order from a leading AI data center customer to supply CW Lasers for its CPO systems. Initial revenue is expected to begin in 4Q26, with more meaningful contributions starting in 2027. Lumentum stated it has received hundreds of millions of dollars in CW Laser commitments for Scale-out CPO, with volume production contributions beginning in 1H27. Assuming combined order value of $400 million for laser suppliers such as LITE and COHR in 2027, this implies approximately 80,000 Scale-out CPO switches will be shipped that year. We believe this CPO switch shipment volume is consistent with downstream suppliers' capacity buildout plans.
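The note's jump from a $400 million order value to roughly 80,000 switches can be reconciled with a quick back-of-envelope calculation. The per-switch laser content figure below is an assumption chosen so the numbers match; it is not stated in the earnings calls.

```python
# Back-of-envelope: how $400M of CW laser orders maps to ~80,000 switches.
# Assumption (hypothetical): ~$5,000 of CW laser content per Scale-out
# CPO switch; the note does not state a per-switch laser value.
order_value_usd = 400_000_000          # combined LITE + COHR commitments, 2027
laser_content_per_switch_usd = 5_000   # assumed CW laser BOM per switch
implied_switches = order_value_usd // laser_content_per_switch_usd
print(f"Implied 2027 CPO switch shipments: {implied_switches:,}")
```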
NVIDIA May Accelerate Scale-out CPO Switches: At GTC 2025, NVIDIA (NVDA, Buy) unveiled its CPO switches, with Quantum-X planned for launch in 2H25 and Spectrum-X in 2H26. NVIDIA's first-generation Scale-out CPO (Quantum-X) adopts a pluggable architecture rather than true CPO. The system supports 115.2T switching bandwidth and integrates 36 pluggable optical engines, making it more accurately classified as NPO. However, due to its lack of cost-performance advantage over traditional pluggable solutions, customer adoption has been limited, with minimal shipments in 2025. For GTC 2026, we expect that NVIDIA may unveil a next-generation CPO switch. This generation is expected to feature 115.2T bandwidth, with the CPO portion manufactured by TSMC. We believe this CPO design offers significant improvements in thermal performance and bandwidth, with cost-performance far exceeding the previous generation. The supply chain is expected to begin ramping in 2Q26 and accelerate through 2H26/2027. Driven by NVIDIA's aggressive push and bundling sales strategy, we are raising our NVIDIA Scale-out CPO switch shipment estimates for 2025/2026/2027 to 2,000/20,000/80,000 units. As for 2027, despite the surge in shipments, our estimate represents only a single-digit percentage of total Scale-out switch shipments for the year.
NVIDIA Scale-up CPO Progress Update: According to recent earnings calls, Lumentum emphasized that as copper interconnects approach their limits, optical Scale-up represents a long-term structural opportunity beginning in late 2027. As we noted in our report, we expect NVIDIA to introduce CPO/NPO within the NVL576 architecture starting with Rubin Ultra in 2H27, specifically for Scale-up interconnects. Within the NVL576 architecture, Compute trays and Switch trays are expected to continue relying on backplane connections, while inter-rack interconnects may transition to CPO- or NPO-based optical interconnects. Technically, CPO holds clear advantages in power consumption and bandwidth density, while NPO is easier to manufacture and maintain. We believe TSMC's Scale-up CPO solution has already provided samples, but mass production readiness remains uncertain, indicating that the NPO vs. CPO debate will persist in the near term.
Key Beneficiary Stocks: Overall, Scale-up CPO represents a purely incremental opportunity for the optical interconnect supply chain, as it aims to replace copper interconnects without affecting Scale-out. Furthermore, Scale-up and Scale-out CPO solutions share the same suppliers. Accordingly, we expect the primary beneficiaries to be component suppliers of FAUs, CW Lasers, ELS FP modules, and Shuffles. Our views: 1) Bullish on Lumentum, given upside potential in CW Laser demand; 2) Neutral impact on Coherent; 3) We expect Browave (3163 TT) to benefit, given its approximately 50% market share in CPO Shuffle Boxes, unit price of $5,000–6,000, and volume production ramp beginning in 2Q26.
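The Browave figures imply a rough revenue opportunity, sketched below. The one-shuffle-box-per-switch ratio and the 80,000-switch base are simplifying assumptions taken from the note's own estimates, not disclosed company guidance.

```python
# Rough sketch of Browave's implied 2027 CPO shuffle-box revenue,
# using the note's figures. Assumption (hypothetical): one shuffle
# box per Scale-out CPO switch shipped.
switches_2027 = 80_000           # note's estimated Scale-out CPO switch shipments
browave_share = 0.50             # ~50% market share in CPO shuffle boxes
unit_price_usd = (5_000, 6_000)  # quoted unit price range per box

low = switches_2027 * browave_share * unit_price_usd[0]
high = switches_2027 * browave_share * unit_price_usd[1]
print(f"Implied 2027 revenue range: ${low/1e6:.0f}M to ${high/1e6:.0f}M")
```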
$COHR $LITE $NVDA
Lumida Wealth Management
ANTHROPIC'S CO-FOUNDER RESPONDS TO SAM ALTMAN AFTER AD CONTROVERSY
Sam Altman fired back at Anthropic's anti-ads commercial saying "we would obviously never run ads in the way Anthropic depicts them.
We are not stupid and we know our users would reject that."
Anthropic CEO's response: "This really isn't intended to be about any other company other than us.
To be clear, our view is not all ads are bad or there's never the right place for advertising.
It felt to us like AI conversations are different. People are uploading private or confidential information to their AI tool.
It just didn't feel like the respectful way to treat our users' data."
The real question isn't whether ads work or whether users would accept them.
It's whether monetizing the most intimate conversations people have with AI is the right model.
Anthropic is drawing a line. OpenAI says they would never cross it anyway.
We'll see what happens when growth slows and pressure to monetize increases.
@AnthropicAI @ABC
Javier Blas
In view of BP's announcement this morning that it's cancelling its $750-million-a-quarter share buyback, let me re-publish yesterday's @Opinion note arguing the British oil major couldn't afford it anymore.
It comes down to debt -- lots of debt.
https://t.co/23vNRD4Nsw
The Transcript
RT @TheTranscript_: $TEAM CEO: "AI is the best thing to happen to Atlassian… customers using AI code generation tools create 5% more tasks in Jira, have 5% higher monthly active users, and expand their Jira seats 5% faster than those who don’t."
Michael Fritzell (Asian Century Stocks)
RT @GunjanJS: Trading in the tech sector makes up 30% of retail trading volumes -- one of the highest shares since 2018 -- Citi https://t.co/6pvw7muvfj
Brady Long
RT @thisguyknowsai: This is gonna ruin so many marriages.
this is scary.. GeoSpy AI can track your exact location using social media photos in 2 secs and show it in 3D.
upload photo -> get coordinates. https://t.co/b49qimXKWy - Oliver Prompts
Michael Fritzell (Asian Century Stocks)
1/ The slump in the luxury watch market is over.
After a 4-year downturn, the Rolex brand is leading the sector back to growth.
Here's why the mechanical watch market has turned, and the stocks that could potentially benefit. https://t.co/MhwZrWeNQl