Dimitry Nakhla | Babylon Capital®
RT @DimitryNakhla: AI has created extraordinary investment opportunities in AI leaders — while fear of AI disruption has 𝘴𝘪𝘮𝘶𝘭𝘵𝘢𝘯𝘦𝘰𝘶𝘴𝘭𝘺 created extraordinary investment opportunities in businesses viewed as being threatened by it.
A bit ironic, no?
The Few Bets That Matter
Last month I shared an article about FinX darlings I wouldn't buy. Fast forward to today.
$NFLX -10%
$ADBE -16%
$DUOL -26%
$PYPL -28%
$UBER -16%
$HIMS -44%
Violent.
I am a big fan of some of those companies, but admiration alone isn't enough to make me a buyer.
Investing isn't cheerleading. https://t.co/n9LAg3cfxj
Quiver Quantitative
JUST IN: Neguse questions Bondi on the elimination of the DOJ's cryptocurrency enforcement team https://t.co/oS3vhq9EPO
God of Prompt
RT @alex_prompter: 🚨 Holy shit... This paper just quietly redefined what AI agents actually need to survive in the real world.
For years, we’ve been obsessed with making models “smarter.” Bigger context windows. Longer chain-of-thought. More tools.
But this research introduces something far more fundamental:
An "Agent World Model."
And it changes the game.
Here’s the core idea:
Most AI agents today operate like goldfish.
They see a prompt → act → forget.
They don’t build an internal model of how the world works. They don’t simulate consequences. They don’t track evolving environments in a structured way.
This paper argues that if agents are going to operate in complex, dynamic environments, they need something closer to what humans use:
A structured internal world representation.
Not just memory.
Not just retrieval.
A predictive model of how actions change the environment.
According to the authors, the Agent World Model allows systems to:
• Represent environment states
• Simulate future trajectories
• Anticipate outcomes before acting
• Update beliefs based on feedback
That’s not prompt engineering.
That’s cognition scaffolding.
The shift is subtle but massive.
Right now, most agents are reactive:
Observe → Respond.
World-model agents become proactive:
Observe → Simulate → Evaluate → Act.
Instead of blindly calling tools and hoping for the best, the agent internally “imagines” possible futures and selects actions that optimize long-term objectives.
This is how reinforcement learning agents beat humans in Go.
This is how autonomous driving systems predict collisions.
This is how humans plan.
The paper outlines how the world model integrates perception, memory, planning, and action into a unified loop.
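The Observe → Simulate → Evaluate → Act loop described above can be sketched in miniature. Everything here is illustrative and not from the paper: `ToyWorldModel`, `agent_step`, and the 1-D "reach position 10" task are hypothetical stand-ins for a learned world model and a real environment.

```python
# Toy sketch of a world-model agent loop (Observe -> Simulate -> Evaluate -> Act).
# All names and the task are hypothetical, not taken from the paper.

class ToyWorldModel:
    """Predicts next states and scores them; here, a 1-D position task."""

    def predict(self, state, action):
        # Internal simulation: apply the action without touching the real world.
        return state + action

    def value(self, state):
        # Higher is better; the (assumed) goal is to reach position 10.
        return -abs(10 - state)


def agent_step(model, observed_state, candidate_actions, horizon=3):
    """Simulate each action's rollout and pick the best long-term one."""
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        sim_state, score = observed_state, 0.0
        for _ in range(horizon):
            sim_state = model.predict(sim_state, action)  # simulate forward
            score += model.value(sim_state)               # evaluate the outcome
        if score > best_score:
            best_action, best_score = action, score
    return best_action  # act on the best imagined future


model = ToyWorldModel()
chosen = agent_step(model, observed_state=0, candidate_actions=[-1, 1, 2])
print(chosen)  # the action whose simulated trajectory scores highest
```

The point of the sketch: the agent never executes a candidate action in the environment until it has compared imagined rollouts, which is exactly the reactive-to-proactive shift the thread describes.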
And here’s what’s wild:
It’s not about making the base LLM bigger.
It’s about structuring how the agent thinks over time.
That’s the missing layer in most “AI agent” demos today.
You can chain prompts.
You can add tools.
You can build multi-agent workflows.
But without a world model, your system has no stable understanding of:
• What has changed
• What persists
• What consequences actions create
It’s operating blind between steps.
The deeper implication?
The next leap in AI won’t come from larger models.
It will come from agents that accumulate structured understanding of their environments and reason over it.
From reactive systems → to internally simulated systems.
From autocomplete → to world-aware actors.
If this architecture matures, we’re not just building better assistants.
We’re building systems that can plan, adapt, and operate autonomously in messy, real-world environments without constant human correction.
And that’s a completely different category of intelligence.
Paper: "Agent World Model: Infinity Synthetic Environments for Agentic Reinforcement Learning"
God of Prompt
skills and connectors are available for free users of Claude
no ads + file creation + connectors + skills = ChatGPT killer?
We're bringing some of Claude’s most-used features to the free plan.
File creation, connectors, and skills are all now available without a subscription. https://t.co/6EjrwLTWVQ - Claude
God of Prompt
we're cooked ngl
PROMPT: "Luffy coding on a Macbook on the Thousand Sunny, RAGING, then throwing it overboard." - Seedance 2.0
WOOOOOOOW https://t.co/FXv7W91QNE - BOOTOSHI 👑
God of Prompt
RT @godofprompt: Prompt engineering is dead.
"Prompt chaining" is the new meta.
Break one complex prompt into 5 simple prompts that feed into each other.
I tested this for 30 days. Output quality jumped 67%.
Here's how to do it ↓ https://t.co/K3r7wrcJw7
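The chaining idea above is simple to sketch: each step's output becomes the next step's input. This is a hypothetical illustration, not the thread's actual workflow; `call_llm` is a placeholder for any real chat-completion client, and the five step templates are made up.

```python
# Hypothetical sketch of prompt chaining: one complex task broken into
# sequential prompts, each fed the previous step's output.

def call_llm(prompt):
    # Placeholder: swap in a real API client (OpenAI, Anthropic, etc.).
    return f"<answer to: {prompt[:40]}>"


def run_chain(task, step_templates):
    """Run each prompt template in order, piping output into the next."""
    context = task
    for template in step_templates:
        prompt = template.format(input=context)
        context = call_llm(prompt)  # this output seeds the next step
    return context


steps = [
    "Summarize the task: {input}",
    "List the key constraints in: {input}",
    "Draft a solution given: {input}",
    "Critique the draft: {input}",
    "Rewrite the final answer using: {input}",
]
result = run_chain("Write a product launch email", steps)
```

Each link in the chain does one narrow job, which is the claimed advantage over a single overloaded prompt; the "67%" quality figure in the tweet is the author's own unverified measurement.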