Benjamin Hernandez (photo)
Power Move Recommendation: $UOKA
Ticker: $UOKA | Target: $5.45
$UOKA is the sleeper hit of the Consumer Cyclical sector! Today's price action is a total game-changer for the micro-cap space.
Reason for calling it: Nasdaq compliance confirmation and fresh project design partnerships https://t.co/szjMNsgiJZ
The Few Bets That Matter
The $HIMS x $NVO drama should stop.
Many frame it as a David vs. Goliath narrative, but it isn't. It's a company with patents against one trying to steal them.
That's all it is.
You failed one stock pick. Put on your big boy pants and move on, for God's sake.
memenodes (video)
when you wake up from a dream that was better than your actual life https://t.co/cYvIaPKSp9
Michael Fritzell (Asian Century Stocks) (video)
How can I bet on Kyrgyzstan or Kazakhstan ski tourism? Air Astana?
A friend: CZ, this a-hole is spreading FUD again…
Me:
(Video from today, in Kyrgyzstan. No AI) https://t.co/owjYTv6N58 - CZ 🔶 BNB
Quiver Quantitative (video)
We were recently asked about new congressional stock trades that caught our eye.
Viasat stock has now risen 516% since we reported on it.
Rheinmetall stock has risen 239%. https://t.co/cqRS4zPkIg
Clark Square Capital (photo)
RT @ClarkSquareCap: Sharing a new project: the Special Situations Digest.
Check out the (free) link below. https://t.co/NT0wb21Sxl
memenodes (photo)
How the gun in my drawer looks at me every time I get liquidated https://t.co/W6L9h1TelZ
Dimitry Nakhla | Babylon Capital®
20 Quality Compounders: Return on Capital Employed (ROCE) >30% over LTM
1. $NFLX 30%
2. $TSM 30%
3. $CTAS 31%
4. $BLK 33%
5. $PM 34%
6. $VLO 35%
7. $NVR 36%
8. $V 38%
9. $KLAC 42%
10. $ASML 43%
11. $MTD 44%
12. $LRCX 46%
13. $STX 51%
14. $MA 60%
15. $IDXX 62%
16. $APP 63%
17. $AAPL 65%
18. $BKNG 68%
19. $NVDA 81%
20. $FICO 89%
___
A higher ROCE ratio indicates more efficient capital usage:
ROCE = Operating Profit (EBIT) ÷ Capital Employed
Operating Profit (EBIT) = profit before interest and taxes
Capital Employed = total capital used in the business*
*Commonly calculated as Total Assets - Current Liabilities, or Equity + Long-term Debt
___
Imagine a car wash business:
You invest $1,000,000 to build it (land, equipment, machines)
Each year, the car wash generates $200,000 in operating profit (before interest & taxes)
ROCE = $200,000 ÷ $1,000,000 = 20%
This means: for every dollar tied up in the business, the company generates 20 cents of operating profit per year
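A minimal sketch of that arithmetic in Python, using the car-wash figures above (the function and variable names are illustrative, not from the tweet):

def roce(ebit: float, capital_employed: float) -> float:
    """Return on Capital Employed: operating profit divided by capital tied up in the business."""
    return ebit / capital_employed

# Car-wash example: $200,000 of EBIT on $1,000,000 of capital employed.
print(f"ROCE = {roce(200_000, 1_000_000):.0%}")  # prints: ROCE = 20%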
God of Prompt (photo)
RT @godofprompt: 🚨 Holy shit… Stanford just published the most uncomfortable paper on LLM reasoning I've read in a long time.
This isn't a flashy new model or a leaderboard win. It's a systematic teardown of how and why large language models keep failing at reasoning even when benchmarks say they're doing great.
The paper does one very smart thing upfront: it introduces a clean taxonomy instead of more anecdotes. The authors split reasoning into non-embodied and embodied.
Non-embodied reasoning is what most benchmarks test, and it's further divided into informal reasoning (intuition, social judgment, commonsense heuristics) and formal reasoning (logic, math, code, symbolic manipulation).
Embodied reasoning is where models must reason about the physical world, space, causality, and action under real constraints.
Across all three, the same failure patterns keep showing up.
> First are fundamental failures baked into current architectures. Models generate answers that look coherent but collapse under light logical pressure. They shortcut, pattern-match, or hallucinate steps instead of executing a consistent reasoning process.
> Second are application-specific failures. A model that looks strong on math benchmarks can quietly fall apart in scientific reasoning, planning, or multi-step decision making. Performance does not transfer nearly as well as leaderboards imply.
> Third are robustness failures. Tiny changes in wording, ordering, or context can flip an answer entirely. The reasoning wasn't stable to begin with; it just happened to work for that phrasing.
One of the most disturbing findings is how often models produce unfaithful reasoning. They give the correct final answer while providing explanations that are logically wrong, incomplete, or fabricated.
This is worse than being wrong, because it trains users to trust explanations that don't correspond to the actual decision process.
Embodied reasoning is where things really fall apart. LLMs systematically fail at physical commonsense, spatial reasoning, and basic physics because they have no grounded experience.
Even in text-only settings, as soon as a task implicitly depends on real-world dynamics, failures become predictable and repeatable.
The authors don't just criticize. They outline mitigation paths: inference-time scaling, analogical memory, external verification, and evaluations that deliberately inject known failure cases instead of optimizing for leaderboard performance.
But they're very clear that none of these are silver bullets yet.
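To make that last mitigation concrete, here is a minimal sketch of a perturbation-style evaluation in Python. It is not from the paper; ask_model is a hypothetical stand-in for whatever LLM client you use, and the perturbations are just illustrative surface rewrites that should not change the answer:

def ask_model(prompt: str) -> str:
    """Hypothetical LLM call; swap in your actual client here."""
    raise NotImplementedError

def perturbations(question: str) -> list[str]:
    # Surface rewrites that leave the meaning intact.
    return [
        question,
        question.lower(),                      # casing change
        "Please answer briefly: " + question,  # framing change
        question.replace(", ", " , "),         # spacing/punctuation change
    ]

def robustness(question: str, expected: str) -> float:
    """Fraction of perturbed phrasings whose answer still contains the expected string."""
    variants = perturbations(question)
    hits = sum(expected.lower() in ask_model(v).lower() for v in variants)
    return hits / len(variants)

A stable reasoner should score near 1.0 here; the point above is that answers often flip on exactly these trivial rewrites.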
The takeaway isn't that LLMs can't reason.
It's more uncomfortable than that.
LLMs reason just enough to sound convincing, but not enough to be reliable.
And unless we start measuring how models fail, not just how often they succeed, we'll keep deploying systems that pass benchmarks, fail silently in production, and explain themselves with total confidence while doing the wrong thing.
That's the real warning shot in this paper.
Paper: Large Language Model Reasoning Failures
Dimitry Nakhla | Babylon Capital®
RT @DimitryNakhla: I don't think many investors truly appreciate how deep the moats at $SPGI and $MCO really are.
These aren't just data businesses; they're embedded gatekeepers in global capital markets, with network effects, regulatory reliance, & decades of trust that are hard to replicate.