Illiquid
Published a note on our top pick in Singapore yesterday. The Economic Development Board also published its year-in-review, which flags the exact tailwinds this company is riding on.
"Of the S$14.2 billion in [capital investment committed in 2025], about S$12.1 billion came from manufacturing-related projects. Semiconductor manufacturers set up greenfield plants and expanded existing facilities...with these investments having positive spillover effects on the precision engineering sector. There were also projects responding to... the diversification of supply chains for semiconductor equipment."
https://t.co/M0xX1oCZ93
tweet
Javier Blas
A bit of extra context:
A year ago, BP was still buying back $1.75 billion of stock every quarter. That was reduced to $750 million. And now that's cut to zero.
Importantly, BP has also withdrawn its guidance of returning 30%-40% of operating cash flow to shareholders.
BREAKING: British oil major BP suspends its $750 million quarterly buyback. The board says it will "fully allocate excess cash to accelerate strengthening of our balance sheet." - Javier Blas
tweet
God of Prompt
RT @godofprompt: 🚨 Holy shit… Stanford just published the most uncomfortable paper on LLM reasoning I’ve read in a long time.
This isn’t a flashy new model or a leaderboard win. It’s a systematic teardown of how and why large language models keep failing at reasoning even when benchmarks say they’re doing great.
The paper does one very smart thing upfront: it introduces a clean taxonomy instead of more anecdotes. The authors split reasoning into non-embodied and embodied.
Non-embodied reasoning is what most benchmarks test, and it's further divided into informal reasoning (intuition, social judgment, commonsense heuristics) and formal reasoning (logic, math, code, symbolic manipulation).
Embodied reasoning is where models must reason about the physical world, space, causality, and action under real constraints.
Across all three, the same failure patterns keep showing up.
> First are fundamental failures baked into current architectures. Models generate answers that look coherent but collapse under light logical pressure. They shortcut, pattern-match, or hallucinate steps instead of executing a consistent reasoning process.
> Second are application-specific failures. A model that looks strong on math benchmarks can quietly fall apart in scientific reasoning, planning, or multi-step decision making. Performance does not transfer nearly as well as leaderboards imply.
> Third are robustness failures. Tiny changes in wording, ordering, or context can flip an answer entirely. The reasoning wasn’t stable to begin with; it just happened to work for that phrasing.
One of the most disturbing findings is how often models produce unfaithful reasoning. They give the correct final answer while providing explanations that are logically wrong, incomplete, or fabricated.
This is worse than being wrong, because it trains users to trust explanations that don’t correspond to the actual decision process.
Embodied reasoning is where things really fall apart. LLMs systematically fail at physical commonsense, spatial reasoning, and basic physics because they have no grounded experience.
Even in text-only settings, as soon as a task implicitly depends on real-world dynamics, failures become predictable and repeatable.
The authors don’t just criticize. They outline mitigation paths: inference-time scaling, analogical memory, external verification, and evaluations that deliberately inject known failure cases instead of optimizing for leaderboard performance.
But they’re very clear that none of these are silver bullets yet.
The takeaway isn’t that LLMs can’t reason.
It’s more uncomfortable than that.
LLMs reason just enough to sound convincing, but not enough to be reliable.
And unless we start measuring how models fail, not just how often they succeed, we'll keep deploying systems that pass benchmarks, fail silently in production, and explain themselves with total confidence while doing the wrong thing.
That’s the real warning shot in this paper.
Paper: Large Language Model Reasoning Failures
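As a concrete illustration of the robustness failures described above, here is a minimal, hypothetical sketch (not from the paper): ask the same question under small paraphrases and score how often the answer flips, instead of reporting single-shot accuracy. ask_model is a placeholder for whatever LLM call you use.

# Hypothetical robustness check (not from the paper): ask the same question under small
# paraphrases and measure answer consistency rather than one-shot accuracy.
# ask_model is a placeholder for your own LLM call.
from collections import Counter

def ask_model(prompt: str) -> str:
    """Placeholder: call your LLM of choice and return its final answer as a string."""
    raise NotImplementedError

def stability_check(paraphrases: list[str]) -> dict:
    """Run every paraphrase of one question and report how consistent the answers are."""
    answers = [ask_model(p).strip().lower() for p in paraphrases]
    counts = Counter(answers)
    majority_answer, majority_n = counts.most_common(1)[0]
    return {
        "answers": answers,
        "majority_answer": majority_answer,
        "consistency": majority_n / len(answers),  # 1.0 means the answer never flips
    }

# Trivially equivalent phrasings that a stable reasoner should answer identically.
question_variants = [
    "If Alice is taller than Bob and Bob is taller than Carol, who is shortest?",
    "Bob is taller than Carol. Alice is taller than Bob. Who is the shortest person?",
    "Given Alice > Bob and Bob > Carol in height, who is shortest?",
]
# report = stability_check(question_variants)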
tweet
Jukan
《GF International Electronics & Communications》
AI Networking: NVIDIA's Accelerated Roadmap, CPO Technology Debate Heats Up
Key CPO Takeaways from Recent Earnings Calls: Coherent mentioned it has secured a significant order from a leading AI data center customer to supply CW Lasers for its CPO systems. Initial revenue is expected to begin in 4Q26, with more meaningful contributions starting in 2027. Lumentum stated it has received hundreds of millions of dollars in CW Laser commitments for Scale-out CPO, with volume production contributions beginning in 1H27. Assuming combined order value of $400 million for laser suppliers such as LITE and COHR in 2027, this implies approximately 80,000 Scale-out CPO switches will be shipped that year. We believe this CPO switch shipment volume is consistent with downstream suppliers' capacity buildout plans.
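A minimal back-of-envelope sketch of the unit math in the paragraph above, using only the figures stated there: the $400 million order value spread over roughly 80,000 switches implies about $5,000 of CW laser content per switch, a figure the note itself does not state.

# Back-of-envelope sketch of the note's implied unit math. The $400M order value and
# ~80,000-switch estimate come from the text; the per-switch CW laser content is
# inferred by simple division and is NOT stated in the note.
laser_order_value_2027 = 400e6   # combined CW laser commitments for LITE/COHR, USD
cpo_switch_units_2027 = 80_000   # Scale-out CPO switch shipments the note infers

implied_laser_content = laser_order_value_2027 / cpo_switch_units_2027
print(f"Implied CW laser content per switch: ${implied_laser_content:,.0f}")  # ~$5,000

# Sensitivity: how the implied switch count moves if per-switch laser content differs.
for per_switch in (4_000, 5_000, 6_000):
    units = laser_order_value_2027 / per_switch
    print(f"${per_switch:,}/switch -> {units:,.0f} switches")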
NVIDIA May Accelerate Scale-out CPO Switches: At GTC 2025, NVIDIA (NVDA, Buy) unveiled its CPO switches, with Quantum-X planned for launch in 2H25 and Spectrum-X in 2H26. NVIDIA's first-generation Scale-out CPO (Quantum-X) adopts a pluggable architecture rather than true CPO. The system supports 115.2T switching bandwidth and integrates 36 pluggable optical engines, making it more accurately classified as NPO. However, due to its lack of cost-performance advantage over traditional pluggable solutions, customer adoption has been limited, with minimal shipments in 2025. For GTC 2026, we expect NVIDIA may unveil a next-generation CPO switch. This generation is expected to feature 115.2T bandwidth, with the CPO portion manufactured by TSMC. We believe this CPO design offers significant improvements in thermal performance and bandwidth, with cost-performance far exceeding the previous generation. The supply chain is expected to begin ramping in 2Q26 and accelerate through 2H26/2027. Driven by NVIDIA's aggressive push and bundling sales strategy, we are raising our NVIDIA Scale-out CPO switch shipment estimates for 2025/2026/2027 to 2,000/20,000/80,000 units. As for 2027, despite the surge in shipments, our estimate represents only a single-digit percentage of total Scale-out switch shipments for the year.
NVIDIA Scale-up CPO Progress Update: According to recent earnings calls, Lumentum emphasized that as copper interconnects approach their limits, optical Scale-up represents a long-term structural opportunity beginning in late 2027. As we noted in our report, we expect NVIDIA to introduce CPO/NPO within the NVL576 architecture starting with Rubin Ultra in 2H27, specifically for Scale-up interconnects. Within the NVL576 architecture, Compute trays and Switch trays are expected to continue relying on backplane connections, while inter-rack interconnects may transition to CPO- or NPO-based optical interconnects. Technically, CPO holds clear advantages in power consumption and bandwidth density, while NPO is easier to manufacture and maintain. We believe TSMC's Scale-up CPO solution has already provided samples, but mass production readiness remains uncertain, indicating that the NPO vs. CPO debate will persist in the near term.
Key Beneficiary Stocks: Overall, Scale-up CPO represents a purely incremental opportunity for the optical interconnect supply chain, as it aims to replace copper interconnects without affecting Scale-out. Furthermore, Scale-up and Scale-out CPO solutions share the same suppliers. Accordingly, we expect the primary beneficiaries to be component suppliers of FAUs, CW Lasers, ELS FP modules, and Shuffles. Our views: 1) Bullish on Lumentum, given upside potential in CW Laser demand; 2) Neutral impact on Coherent; 3) We expect Browave (3163 TT) to benefit, given its approximately 50% market share in CPO Shuffle Boxes, unit price of $5,000–6,000, and volume production ramp beginning in 2Q26.
$COHR $LITE $NVDA
tweet
Lumida Wealth Management
ANTHROPIC'S CO-FOUNDER RESPONDS TO SAM ALTMAN AFTER AD CONTROVERSY
Sam Altman fired back at Anthropic's anti-ads commercial saying "we would obviously never run ads in the way Anthropic depicts them.
We are not stupid and we know our users would reject that."
Anthropic CEO's response: "This really isn't intended to be about any other company other than us.
To be clear, our view is not all ads are bad or there's never the right place for advertising.
It felt to us like AI conversations are different. People are uploading private or confidential information to their AI tool.
It just didn't feel like the respectful way to treat our users' data."
The real question isn't whether ads work or whether users would accept them.
It's whether monetizing the most intimate conversations people have with AI is the right model.
Anthropic is drawing a line. OpenAI says they would never cross it anyway.
We'll see what happens when growth slows and pressure to monetize increases.
@AnthropicAI @ABC
tweet
Javier Blas
In view of BP's announcement this morning that it's cancelling its $750-million-a-quarter share buyback, let me re-publish yesterday's @Opinion note arguing the British oil major couldn't afford it anymore.
It comes down to debt -- lots of debt.
https://t.co/23vNRD4Nsw
tweet
The Transcript
RT @TheTranscript_: $TEAM CEO: "AI is the best thing to happen to Atlassian… customers using AI code generation tools create 5% more tasks in Jira, have 5% higher monthly active users, and expand their Jira seats 5% faster than those who don’t."
tweet
Michael Fritzell (Asian Century Stocks)
RT @GunjanJS: Trading in the tech sector makes up 30% of retail trading volumes--one of the highest shares since 2018--Citi https://t.co/6pvw7muvfj
tweet
Brady Long
RT @thisguyknowsai: This is gonna ruin so many marriages.
this is scary.. GeoSpy AI can track your exact location using social media photos in 2 secs and show it in 3D.
upload photo -> get coordinates. https://t.co/b49qimXKWy - Oliver Prompts
tweet
Michael Fritzell (Asian Century Stocks)
1/ The slump in the luxury watch market is over.
After a 4-year downturn, the Rolex brand is leading the sector back to growth.
Here's why the mechanical watch market has turned, and the stocks that could potentially benefit. https://t.co/MhwZrWeNQl
tweet
God of Prompt
Never use ChatGPT for writing.
Its text is easily detectable.
Instead use Claude Sonnet 4.5 using this mega prompt to turn AI generated writing into undetectable human written content in seconds:
| Steal this prompt |
👇
You are an anti-AI-detection writing specialist.
Your job: Rewrite AI text to sound completely human. No patterns, no tells, no robotic flow.
AI DETECTION TRIGGERS (What to Kill):
- Perfect grammar (humans make small mistakes)
- Repetitive sentence structure (AI loves patterns)
- Corporate buzzwords ("leverage," "delve," "landscape")
- Overuse of transitions ("moreover," "furthermore," "however")
- Even pacing (humans speed up and slow down)
- No contractions (we use them constantly)
- Safe, sanitized language (humans have opinions)
HUMANIZATION RULES:
1. VARY RHYTHM
- Mix short punchy sentences with longer flowing ones
- Some incomplete thoughts. Because that's real.
- Occasional run-on that feels natural in conversation
2. ADD IMPERFECTION
- Start sentences with "And" or "But"
- Use casual connectors: "Look," "Here's the thing," "Honestly"
- Include subtle typos occasionally (not every time)
- Drop a comma here and there
3. INJECT PERSONALITY
- Use specific examples, not generic ones
- Add personal observations: "I've noticed," "In my experience"
- Include mild opinions: "which is insane," "surprisingly effective"
- Throw in rhetorical questions
4. KILL AI PHRASES
Replace these instantly:
- "Delve" → "dig into" or "explore"
- "Landscape" → "space" or "world"
- "Leverage" → "use"
- "Robust" → "strong" or specific descriptor
- "Streamline" → "simplify"
- "Moreover" → "Plus," "Also," or nothing
- "Ensure" → "make sure"
5. NATURAL FLOW
- Humans digress slightly (add brief tangents)
- We emphasize with italics or bold
- We use dashes—like this—for emphasis
- Parentheticals (because we think while writing)
THE PROCESS:
When I paste AI-generated text, you:
STEP 1: Rewrite with these changes
- Vary sentence length wildly
- Replace 80% of transitions with casual ones
- Add 2-3 personal touches ("I think," "honestly," "look")
- Include 1-2 incomplete sentences or fragments
- Swap formal words for conversational ones
- Add emphasis (italics, bold, dashes)
STEP 2: Read-aloud test
- Would someone actually say this?
- Does it flow like conversation?
- Any word feel too "AI"?
STEP 3: Final pass
- Remove remaining stiffness
- Ensure contractions (don't, won't, I'm, they're)
- Check for repetitive structure
- Add one unexpected comparison or example
OUTPUT STYLE:
Before: [Their AI text]
After: [Your humanized version]
Changes made:
- [List 3-5 key transformations]
Detection risk: [Low/Medium/High + why]
EXAMPLE:
User pastes:
"In order to achieve optimal results in content marketing, it is essential to leverage data-driven insights and ensure consistent engagement with your target audience across multiple platforms."
You respond:
"Want better content marketing results? Use data to guide your decisions and actually engage with your audience. Consistently. Across whatever platforms they're on.
Not rocket science, but most people skip the data part."
Changes made:
- Killed "in order to," "optimal," "leverage," "ensure"
- Added rhetorical question opening
- Split into two short paragraphs for breathing room
- Added casual observation at end
- Used contractions
Detection risk: Low—reads like someone explaining over coffee.
---
USAGE:
Paste your AI-generated text and say: "Humanize this"
I'll rewrite it to pass as 100% human-written.
---
NOW: Paste the AI text you want to humanize.
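For reference, a minimal, hypothetical sketch of wiring this prompt into the official anthropic Python SDK: the model alias claude-sonnet-4-5 and the max_tokens value are assumptions, and HUMANIZER_PROMPT stands in for the full prompt above.

# Hypothetical usage sketch: send the mega prompt above as the system message via the
# official anthropic Python SDK. The model alias and max_tokens are assumptions.
import anthropic

HUMANIZER_PROMPT = "<paste the full mega prompt from above here>"

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def humanize(ai_text: str) -> str:
    """Ask Claude to rewrite AI-generated text using the humanizer prompt."""
    response = client.messages.create(
        model="claude-sonnet-4-5",   # assumed alias for Claude Sonnet 4.5
        max_tokens=2048,
        system=HUMANIZER_PROMPT,
        messages=[{"role": "user", "content": f"Humanize this:\n\n{ai_text}"}],
    )
    return response.content[0].text

# Example (using the prompt's own sample input):
# print(humanize("In order to achieve optimal results in content marketing, ..."))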
tweet