Offshore
Video
memenodes
Single mothers when they see a successful 30-year-old man with no kids https://t.co/q7ZNtw4qee
tweet
Offshore
Photo
memenodes
Look at him having the best time of his life
tweet
JUST IN: 🇰🇵 North Korea's Kim Jong Un personally test drives new military rocket launcher vehicle. https://t.co/lnLKXk2zNL - BRICS News
tweet
Offshore
Photo
God of Prompt
🚨 R.I.P Harvard MBA.
I built a personal MBA using 12 prompts across Claude and Gemini.
It teaches business strategy, growth tactics, and pricing psychology better than any $200K degree.
Here's every prompt you can copy & paste: https://t.co/DhvhgN0OEz
tweet
Offshore
Photo
God of Prompt
RT @godofprompt: Perplexity is terrifyingly good at competitive intelligence.
If you use these 10 prompts, you’ll see why:
(Bookmark this thread for later) https://t.co/iEiiYxTKyp
tweet
Offshore
Photo
Javier Blas
COLUMN: Is the White House lulling itself into a false sense of security about energy and the Middle East?
"Just because Trump bombed Iran last year without sending oil prices skyrocketing, it doesn’t mean he can do it again."
@Opinion
https://t.co/Pqx721NJbL
tweet
Moon Dev
5 openclaws and I will be chasing Jim Simons
he ran up a net worth of $31b and didn't have 6 Opuses
and 5 openclaws...
I'll show every step of the way https://t.co/EcL5uji1VK
tweet
anon
RT @willschoebs: BuySell Technologies (7685) is a business very unique to 🇯🇵 that is crushing it lately…don't think it's possible to replicate in the 🇺🇸 for multiple cultural, geographic, etc. reasons
tweet
Offshore
Photo
God of Prompt
RT @godofprompt: 🚨 Holy shit… Google just published one of the cleanest demonstrations of real multi-agent intelligence I’ve seen so far.
Not another “look, two chatbots are talking” demo.
An actual framework for how agents can infer who they’re interacting with and adapt on the fly.
The paper is “Multi-agent cooperation through in-context co-player inference.”
The core idea is deceptively simple:
In multi-agent environments, performance doesn’t just depend on the task.
It depends on who you’re paired with.
Most current systems ignore this.
They optimize against an average opponent.
Or assume fixed partner behavior.
Or hard-code roles.
Google does something smarter.
They let the model infer its co-player’s strategy directly from the interaction history inside the context window.
No retraining, no separate belief model, and no explicit opponent classifier.
Just in-context inference.
The agent observes a few rounds of behavior. Forms an implicit hypothesis about its partner’s type. Then updates its own strategy accordingly.
This turns static policies into adaptive ones.
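As a rough illustration of how interaction history might be serialized into the context window for this kind of in-context inference (a hypothetical sketch: the function name, prompt wording, and action format are invented here, not taken from the paper):

```python
def build_coplayer_prompt(history, candidate_actions):
    """Serialize the round-by-round interaction history into a single
    prompt, so the model can infer the co-player's strategy in-context:
    no retraining, no separate belief model, no opponent classifier."""
    lines = ["You are playing a repeated game. Recent rounds:"]
    for i, (mine, theirs) in enumerate(history, 1):
        lines.append(f"Round {i}: you played {mine}, partner played {theirs}")
    lines.append("Based on the partner's behavior, infer their likely strategy")
    lines.append(f"and pick your next action from {candidate_actions}.")
    return "\n".join(lines)
```

The behavioral trace is the only "belief state": everything the agent knows about its partner lives inside the prompt itself.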
The experiments are structured around cooperative and social dilemma games where partner types vary:
Some partners are fully cooperative.
Some are selfish.
Some are stochastic.
Some strategically defect.
Agents without co-player inference treat all partners the same.
Agents with inference adjust.
And the performance gap is significant.
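A toy version of that gap can be sketched in a few lines: an iterated prisoner's dilemma where a crude 3-round type-inference rule stands in for the model's in-context inference (all names, the partner types, and the heuristic are invented for illustration; this is not the paper's setup):

```python
# Iterated prisoner's dilemma payoffs for the row player:
# mutual cooperation 3, mutual defection 1, exploited 0, exploiter 5.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def partner(kind, our_history):
    """Fixed partner policies standing in for varying co-player types."""
    if kind == "cooperative":
        return "C"
    if kind == "selfish":
        return "D"
    return our_history[-1] if our_history else "C"  # tit-for-tat

def static_agent(_partner_history):
    """Baseline: always cooperates, ignoring who it is paired with."""
    return "C"

def adaptive_agent(partner_history):
    """Crude stand-in for co-player inference: after observing a few
    rounds, hypothesize the partner's type and best-respond to it."""
    if len(partner_history) < 3:
        return "C"  # probe cooperatively first
    if all(move == "D" for move in partner_history[-3:]):
        return "D"  # inferred a selfish partner: stop being exploited
    return "C"      # inferred a reciprocal partner: keep cooperating

def play(agent, partner_kind, rounds=10):
    ours, theirs, score = [], [], 0
    for _ in range(rounds):
        my_move = agent(theirs)
        their_move = partner(partner_kind, ours)
        ours.append(my_move)
        theirs.append(their_move)
        score += PAYOFF[(my_move, their_move)]
    return score
```

Against a cooperative partner the two agents score identically, but against a selfish one the static agent is exploited every round while the adaptive agent switches to defection after its hypothesis forms: the same qualitative gap the thread describes.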
What makes this paper uncomfortable for a lot of current “multi-agent” hype is how clearly it shows what real coordination requires.
First, coordination is not just communication. It’s modeling the incentives and likely actions of others.
Second, robustness matters. An agent that cooperates blindly gets exploited. An agent that defects blindly loses cooperative gains. The system must dynamically balance trust and caution.
Third, adaptation must happen at inference time. In real deployments, you cannot retrain every time the population changes.
The most interesting part is that this capability emerges purely from structured context.
The model isn’t fine-tuned to classify opponent types explicitly. It uses behavioral traces embedded in the prompt to infer latent strategy.
That’s belief modeling through language.
And it scales.
Think about where this matters outside toy games:
Autonomous trading systems reacting to different market participants.
Negotiation agents interacting with unpredictable humans.
Distributed AI workflows coordinating across departments.
Swarm robotics where teammate reliability varies.
In all these settings, static competence is not enough.
Strategic awareness is the bottleneck.
The deeper shift is philosophical.
We’ve been treating LLM agents as isolated optimizers.
This paper moves us toward agents that reason about other agents reasoning about them.
That’s recursive modeling.
And once that loop becomes stable, you no longer have “a chatbot.”
You have a participant in a strategic ecosystem.
The takeaway isn’t that multi-agent AI is solved.
It’s that most current systems aren’t even attempting the hard part.
Real multi-agent intelligence isn’t multiple prompts in parallel.
It’s adaptive belief formation under uncertainty.
And this paper is one of the first clean proofs that large models can do that using nothing but context.
Paper: Multi-agent cooperation through in-context co-player inference
tweet
Offshore
Photo
anon
RT @zephyr_z9: Investors are bidding up a Titanium Aluminide blade supplier, because aero jet engines suppliers are converting/repurposing that capacity for industrial nat gas turbine production for AI DCs
tweet
Rarely seen a ~3bagger in 2 months. Aerospace engines are a good hunting ground; will be my next series of posts after video game 🇯🇵 https://t.co/1kRg472yPe - govro12
tweet