Offshore
Clark Square Capital
RT @TheAppInvestor: This has been a good 5 days, thanks @ClarkSquareCap for the idea! https://t.co/XvEDWIGApA
Michael Fritzell (Asian Century Stocks)
RT @InvestInJapan: JPX launched the JPX Start-Up Acceleration 100 Index focused on "high growth start-ups" that trade on TSE.
Although to be honest, not sure how useful this is..
Current constituents below with some familiar names.
Michael Fritzell (Asian Century Stocks)
RT @paulg: Prediction: In the AI age, taste will become even more important. When anyone can make anything, the big differentiator is what you choose to make.
https://t.co/3GQUlfH58t
Dimitry Nakhla | Babylon Capital®
$APP CAGR Potential based on 2028 EPS estimates & different multiples:
Dec 2026: $15.68 (+67% YoY)
Dec 2027: $19.93 (+27% YoY)
Dec 2028: $25.80 (+29%)
CAGR assuming 2028 EPS Est:
26x → 20.6%
25x → 19.0%
24x → 17.3%
23x → 15.6%
22x → 13.8%
21x → 12.0%
20x → 10.1% https://t.co/TPT2n7QBnh
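The multiple table above is straightforward to reproduce: the implied CAGR is the exit value (multiple × 2028 EPS) annualized against today's price. A minimal sketch — the current price and holding period are assumptions for illustration, not figures from the tweet:

```python
def implied_cagr(eps: float, multiple: float, price_today: float, years: float) -> float:
    """CAGR implied by exiting at `multiple` x `eps` after `years` years."""
    terminal_value = multiple * eps
    return (terminal_value / price_today) ** (1.0 / years) - 1.0

# Illustrative only: the price and horizon below are assumed, not from the tweet.
for m in (26, 24, 22, 20):
    pct = implied_cagr(eps=25.80, multiple=m, price_today=390.0, years=2.9) * 100
    print(f"{m}x -> {pct:.1f}%")
```

A higher exit multiple mechanically raises the implied CAGR; the sensitivity table in the tweet is just this function evaluated across multiples.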
Jukan
I've been at Citrini for a while now, and this is the first time I'm basically writing a report all by myself… I'm seriously pouring my soul into it. It's so exhausting. https://t.co/jWwxOKlhOY
Brady Long
RT @thisguyknowsai: I reverse-engineered the actual prompting frameworks that top AI labs use internally.
Not the fluff you see on Twitter.
The real shit that turns vague inputs into precise, structured outputs.
Spent 3 weeks reading OpenAI's model cards, Anthropic's constitutional AI papers, and leaked internal prompt libraries.
Here's what actually moves the needle:
The Transcript
$ABNB CEO: Airbnb’s defense against disintermediation is focusing on what AI can’t replicate
"A chatbot can give you a list of homes, but it can't give you the unique ones you find on Airbnb..." https://t.co/5lwQ6BVXcD
The Transcript
RT @TheTranscript_: Microsoft commits to building frontier in-house foundation models, reducing OpenAI dependence
"We have to develop our own foundation models, which are at the absolute frontier, with gigawatt-scale compute and some of the very best AI training teams in the world" - $MSFT AI chief
[FT]
DAIR.AI
// Improving Efficiency of Evolutionary AI Agents //
Evolutionary AI agents are powerful but can be wasteful.
Systems inspired by AlphaEvolve and OpenEvolve iteratively generate, mutate, and refine candidate solutions using LLMs. However, every refinement step invokes the same large model regardless of task difficulty.
Most mutations don't need a 32B model.
This new research introduces AdaptEvolve, a framework that dynamically selects which model handles each evolutionary step based on intrinsic generation confidence.
Instead of routing everything through the largest available model, a lightweight decision tree router estimates whether the small model's output is sufficient or needs escalation.
The confidence signal comes from four entropy-based metrics computed on the small model's token probabilities: Mean Confidence for global assurance, Lowest Group Confidence for localized reasoning collapses, Tail Confidence for solution stability, and Bottom-K% Confidence for distinguishing noise from systematic hallucination.
A shallow decision tree, bootstrapped from just 50 warm-up examples, uses these signals to make real-time routing decisions.
What makes this practical?
The router adapts online. An Adaptive Hoeffding Tree continuously updates its decision boundaries as the evolutionary population drifts toward harder edge cases.
On LiveCodeBench, AdaptEvolve retains 97.9% of the 32B upper-bound accuracy (73.6% vs 75.2%) while cutting compute cost by 34.4%. On MBPP, the router identifies that 85% of queries are solvable by the 4B model alone, reducing cost by 41.5% while maintaining 97.1% of peak accuracy. Across benchmarks, the method reduces total inference compute by 37.9% while retaining 97.5% of the upper-bound performance.
Evolutionary agents don't need maximum capability at every step. Confidence-driven routing turns the cost-capability trade-off from a fixed choice into a dynamic, per-step decision.
Paper: https://t.co/YSNCKZuTeN
Learn to build effective AI Agents in our academy: https://t.co/LRnpZN7L4c
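The routing idea above can be sketched in a few lines. This is a hypothetical illustration of entropy/confidence-based routing, not the paper's implementation: the four signals follow the descriptions in the summary, but the function names, window sizes, and thresholds are assumptions (the paper learns thresholds with a decision tree rather than hand-setting them).

```python
def mean_confidence(probs):
    """Global assurance: average probability assigned to sampled tokens."""
    return sum(probs) / len(probs)

def lowest_group_confidence(probs, group=8):
    """Minimum mean confidence over fixed windows: catches localized reasoning collapses."""
    groups = [probs[i:i + group] for i in range(0, len(probs), group)]
    return min(sum(g) / len(g) for g in groups)

def tail_confidence(probs, tail=16):
    """Mean confidence over the final tokens: stability of the solution's ending."""
    t = probs[-tail:]
    return sum(t) / len(t)

def bottom_k_confidence(probs, k_pct=0.10):
    """Mean of the lowest k% confidences: separates isolated noise from systematic doubt."""
    k = max(1, int(len(probs) * k_pct))
    return sum(sorted(probs)[:k]) / k

def route(probs, thresholds=(0.85, 0.60, 0.70, 0.40)):
    """Return 'small' if the small model's output looks sufficient, else escalate to 'large'.
    Thresholds here are illustrative; AdaptEvolve learns them with a shallow decision tree."""
    signals = (
        mean_confidence(probs),
        lowest_group_confidence(probs),
        tail_confidence(probs),
        bottom_k_confidence(probs),
    )
    return "small" if all(s >= t for s, t in zip(signals, thresholds)) else "large"
```

A uniformly confident generation stays on the small model, while a generation with a low-confidence stretch gets escalated — the per-step decision the summary describes.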
God of Prompt
RT @godofprompt: CLAUDE IS SO COOKED THIS TIME
China just dropped Kimi K2.5, the best open model for OpenClaw (ClawdBot)
It's on par with Claude Opus 4.5,
but 8x CHEAPER!!!
It's currently the #1 most used model for OpenClaw and the #1 most used model overall on OpenRouter!
Here's everything you should know: