Offshore
Photo
anon
RT @stevehou: Hitachi Construction Machinery (6305 JT) has gone vertical as Japanese (and global) mining demand enters an inflection (from - to +).
Interesting signal for those that follow and consistent with what I’ve been discussing since my 2026 outlook and recent posts on heavy industry. https://t.co/e59wq8PfeQ
tweet
Video
Bourbon Capital
$AMZN one of the greatest businesses on earth still on sale https://t.co/mqAx1JGEmL
tweet
Photo
The Transcript
Etsy CFO: "We ended the year with solid results, in-line with or better than our expectations...We saw stabilization and some improvement in our key customer metrics, including moderation in active buyer declines."
$ETSY: +21% Pre-Market https://t.co/d2RGSIQtKp
tweet
Michael Fritzell (Asian Century Stocks)
This is great stuff. Worth a follow #FF
tweet
Buying :
Asset Value Investors Limited — SAXA, Inc. (TSE:6675) from 0.00% → 5.03%
Asset Value Investors Limited — Sharingtechnology, Inc. (TSE:3989) from 23.27% → 26.05%
Will Field Capital — Qol Holdings Co., Ltd. (TSE:3034) from 13.89% → 14.97%
Old Peak Group — Nakano Corporation (TSE:1827) from 6.11% → 7.12%
Old Peak Group — Akatsuki Corp. (TSE:8737) from 10.89% → 11.91%
Selling :
Brandes Investment Partners — Komori Corporation (TSE:6349) from 9.03% → 7.78%
Goodhart Partners — Sinko Industries Ltd. (TSE:6458) from 6.93% → 5.77%
— Value Trapped 🇸🇬
tweet
Video
Moon Dev
I’m the only one doing anything productive with openclaws
I’ve got 5 of them here cooking all day https://t.co/yUoiF6LRZK
tweet
Photo
The Transcript
The sign for AGI is when these two actually join hands: 😅 https://t.co/uPWdQAWJpJ
tweet
Photo
Michael Fritzell (Asian Century Stocks)
RT @gvancomp: Quintessential reading provided by the one, the only @MichaelMcGaughy 🙏🏻 https://t.co/G9S0QUuM53
tweet
Photo
The Transcript
Klarna CEO @klarnaseb: "The number of our banking consumers has doubled in the past year, generating more than three times the revenue of our average consumers."
$KLAR: -15% Pre-Market https://t.co/ejFVtsAzhJ
tweet
Javier Blas
Paraphrasing a senior energy official attending the IEA ministerial meeting:
‘…We’re meeting at a time of huge political upheaval for oil: Venezuela, Iran… And, amazingly, oil prices aren’t a big concern for everyone here. It’s $70 a barrel and not $100-plus a barrel…’
tweet
Photo
The Transcript
Wayfair CEO: "We had our third consecutive quarter of new customer growth, on top of healthy growth in repeat orders, all in the face of a category that contracted in the low single digits for the final quarter of the year."
$W: -7% Pre-Market https://t.co/6sQbMZdabc
tweet
Photo
The Transcript
Lemonade CEO: "Our first quarter results were strong, headlined by accelerating growth alongside healthy loss ratios and stability in our expense base."
$LMND: +12% Pre-Market https://t.co/a7NIXOaIm9
tweet
Photo
God of Prompt
RT @godofprompt: 🚨 Holy shit… Google just published one of the cleanest demonstrations of real multi-agent intelligence I’ve seen so far.
Not another “look, two chatbots are talking” demo.
An actual framework for how agents can infer who they’re interacting with and adapt on the fly.
The paper is “Multi-agent cooperation through in-context co-player inference.”
The core idea is deceptively simple:
In multi-agent environments, performance doesn’t just depend on the task.
It depends on who you’re paired with.
Most current systems ignore this.
They optimize against an average opponent.
Or assume fixed partner behavior.
Or hard-code roles.
Google does something smarter.
They let the model infer its co-player’s strategy directly from the interaction history inside the context window.
No retraining, no separate belief model, and no explicit opponent classifier.
Just in-context inference.
The agent observes a few rounds of behavior. Forms an implicit hypothesis about its partner’s type. Then updates its own strategy accordingly.
This turns static policies into adaptive ones.
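A toy sketch of the observe → hypothesize → adapt loop described above, in an iterated prisoner's dilemma. This is not the paper's implementation; the partner types, thresholds, and function names are invented for illustration.

```python
# Toy illustration of co-player inference: observe a few rounds of
# partner behavior, form an implicit hypothesis about the partner's
# type, then adapt the agent's own strategy. Thresholds are arbitrary.

def infer_partner_type(partner_moves):
    """Hypothesize the partner's type from its observed moves ('C'/'D')."""
    if not partner_moves:
        return "unknown"
    coop_rate = partner_moves.count("C") / len(partner_moves)
    if coop_rate >= 0.8:
        return "cooperative"
    if coop_rate <= 0.2:
        return "selfish"
    return "stochastic"

def choose_move(partner_moves):
    """Adapt the policy to the inferred partner type."""
    hypothesis = infer_partner_type(partner_moves)
    if hypothesis == "cooperative":
        return "C"  # sustain mutual cooperation
    if hypothesis == "selfish":
        return "D"  # avoid being exploited
    # Stochastic or unknown partner: mirror the last move (tit-for-tat).
    return partner_moves[-1] if partner_moves else "C"

history = ["C", "C", "D", "C", "C"]   # partner mostly cooperates
print(infer_partner_type(history))    # -> cooperative
print(choose_move(history))           # -> C
```

The same loop is what the thread attributes to the model, except there the hypothesis is formed implicitly from the context window rather than by an explicit classifier like this one.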
The experiments are structured around cooperative and social dilemma games where partner types vary:
Some partners are fully cooperative.
Some are selfish.
Some are stochastic.
Some strategically defect.
Agents without co-player inference treat all partners the same.
Agents with inference adjust.
And the performance gap is significant.
What makes this paper uncomfortable for a lot of current “multi-agent” hype is how clearly it shows what real coordination requires.
First, coordination is not just communication. It’s modeling the incentives and likely actions of others.
Second, robustness matters. An agent that cooperates blindly gets exploited. An agent that defects blindly loses cooperative gains. The system must dynamically balance trust and caution.
Third, adaptation must happen at inference time. In real deployments, you cannot retrain every time the population changes.
The most interesting part is that this capability emerges purely from structured context.
The model isn’t fine-tuned to classify opponent types explicitly. It uses behavioral traces embedded in the prompt to infer latent strategy.
That’s belief modeling through language.
And it scales.
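One way "behavioral traces embedded in the prompt" could look in practice: serialize the interaction history into the context window and let the model do the inference. The template wording below is invented for illustration and does not reproduce the paper's actual prompt format.

```python
# Sketch: embed behavioral traces in an LLM context window so the
# model can infer the co-player's latent strategy in-context.
# The prompt template here is hypothetical.

def format_trace(rounds):
    """Render (my_move, partner_move, payoff) tuples as prompt lines."""
    return "\n".join(
        f"Round {i + 1}: you played {me}, partner played {them}, payoff {pay}"
        for i, (me, them, pay) in enumerate(rounds)
    )

def build_context(rounds):
    """Assemble the context: history first, then the inference question."""
    return (
        "Interaction history:\n"
        + format_trace(rounds)
        + "\n\nBased only on this history, what strategy is the partner "
          "likely following, and what should you play next?"
    )

rounds = [("C", "C", 3), ("C", "D", 0), ("D", "D", 1)]
print(build_context(rounds))
```

No fine-tuning or separate belief model is involved; the "belief" lives entirely in whatever the model infers from this text at inference time.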
Think about where this matters outside toy games:
Autonomous trading systems reacting to different market participants.
Negotiation agents interacting with unpredictable humans.
Distributed AI workflows coordinating across departments.
Swarm robotics where teammate reliability varies.
In all these settings, static competence is not enough.
Strategic awareness is the bottleneck.
The deeper shift is philosophical.
We’ve been treating LLM agents as isolated optimizers.
This paper moves us toward agents that reason about other agents reasoning about them.
That’s recursive modeling.
And once that loop becomes stable, you no longer have “a chatbot.”
You have a participant in a strategic ecosystem.
The takeaway isn’t that multi-agent AI is solved.
It’s that most current systems aren’t even attempting the hard part.
Real multi-agent intelligence isn’t multiple prompts in parallel.
It’s adaptive belief formation under uncertainty.
And this paper is one of the first clean proofs that large models can do that using nothing but context.
Paper: Multi-agent cooperation through in-context co-player inference
tweet