Offshore
Photo
memenodes
RT @naiivememe: https://t.co/YaPobFNrYT
finally made it to .1 BTC
thank you for this dip https://t.co/jfcyFRcy6w - naiive
Offshore
Video
memenodes
Enough staring at crypto charts, this is what your soul needs today.
https://t.co/GsKz04PD2G
Offshore
Photo
Michael Fritzell (Asian Century Stocks)
RT @DMichaelTripi: NEW: Probability that Donald Trump is out as President before the end of his term in 2029 has nearly doubled in last month according to Kalshi. https://t.co/iEVBIkTHRh
Offshore
Photo
Illiquid
Seikoh Giken hikes FY '25 forecasts for optical products. Sales +39%. Operating Profit +50%.
Enjoy. We have a new top pick for Japan levered to HBM production. It's on the orange site.
In particular, you'll want to note $MU's comments yesterday about HBM production being in full swing, shipping one quarter early, and demand being "significantly higher" than the industry's ability to supply. - Illiquid
The Transcript
RT @TheTranscript_: Stratechery's @benthompson on AI agents:
"Actually, one thing that Mark Zuckerberg said on a couple earnings calls ago that I thought was very astute, is we get hung up on technological definitions like, 'What is an agent?' and he’s like, 'Actually the largest and most successful agent in the world today is Facebook advertising', which is exactly right. Facebook advertising, people have it in their head that you go and you put in like demographics and you’re targeting and stuff"
Stratechery's @benthompson: "I think the ideal outcome for Google is they never put ads in Gemini, but they understand so much about you because of what you do in Gemini that they can then manifest that through ads on YouTube, through ads on Google, through ads on their other properties, and the challenge for OpenAI is they only have one place to put inventory, which is in ChatGPT." - The Transcript
Offshore
Photo
God of Prompt
🚨 I just read Google DeepMind’s new paper called "Intelligent AI Delegation."
And it quietly exposes why 99% of AI agents will fail in the real world.
Here’s the paper:
Most “AI agents” today aren’t agents.
They’re glorified task runners.
You give them a goal.
They break it into steps.
They call tools.
They return an output.
That’s not delegation.
That’s automation with better marketing.
Google’s paper makes a brutal point:
Delegation isn’t just splitting tasks.
It’s transferring authority, responsibility, accountability, and trust across agents dynamically.
And almost no current system does this.
Here’s what they argue real delegation actually requires:
1. Dynamic assessment
Before assigning a task, an agent must evaluate:
- Capability
- Resource availability
- Risk
- Cost
- Verifiability
- Reversibility
Not just “who has the tool?”
But: “Who should be trusted with this specific task under these constraints?”
That’s a massive shift.
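A minimal sketch of that multi-factor assessment, as a scoring function over candidate delegatees. The attribute names and weights here are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """Hypothetical profile of a would-be delegatee agent (all 0..1)."""
    name: str
    capability: float      # fit for this specific task
    availability: float    # free resources right now
    risk: float            # chance of harmful failure
    cost: float            # normalized price of delegating
    verifiability: float   # how checkable the output is
    reversibility: float   # how easily the action can be undone

def delegation_score(c: Candidate) -> float:
    # Reward capability, availability, verifiability, reversibility;
    # penalize risk and cost. Weights are purely illustrative.
    return (0.35 * c.capability + 0.15 * c.availability
            + 0.15 * c.verifiability + 0.10 * c.reversibility
            - 0.15 * c.risk - 0.10 * c.cost)

def pick_delegatee(candidates: list[Candidate]) -> Candidate:
    """Answer 'who should be trusted with this task?', not 'who has the tool?'."""
    return max(candidates, key=delegation_score)
```

The point is that selection keys on the whole constraint vector, not tool possession alone.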
2. Adaptive execution
If the delegatee underperforms…
You don’t wait for failure.
You reassign mid-execution.
Switch agents.
Escalate to a human.
Restructure the task graph.
Current agents are brittle.
Real agents need recovery logic.
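That recovery logic can be sketched as a loop that reassigns on failure and escalates to a human as a last resort. The agent callables and escalation shape here are hypothetical:

```python
def delegate_with_recovery(task, agents, max_attempts=3):
    """Try agents in order; reassign mid-execution on failure.

    `agents` maps a name to a callable that may raise on failure.
    If every attempt fails, escalate to a human with the failure trail.
    """
    errors = {}
    for name, run in list(agents.items())[:max_attempts]:
        try:
            return name, run(task)
        except Exception as exc:   # underperformance: switch agents
            errors[name] = str(exc)
    # all delegatees failed: hand off to a human, with attribution
    return "human", {"escalated": task, "errors": errors}
```

Failures are recorded per agent, so escalation carries the context of who failed and why.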
3. Structural transparency
Today’s AI-to-AI delegation is opaque.
If something fails, you don’t know:
- Was it incompetence?
- Misalignment?
- Bad decomposition?
- Malicious behavior?
- Tool failure?
The paper proposes enforced auditability and verifiable completion.
In other words:
Agents must prove what they did.
Not just say they did it.
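One way to make completion verifiable rather than self-reported is a tamper-evident audit log; this hash-chained sketch is my illustration of the idea, not the paper's mechanism:

```python
import hashlib
import json

def append_record(log, agent, action, result):
    """Append an audit record that hashes the previous entry,
    so rewriting history breaks the chain."""
    prev = log[-1]["hash"] if log else "genesis"
    body = {"agent": agent, "action": action, "result": result, "prev": prev}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)
    return log

def verify_chain(log):
    """Recompute every hash; any edited record fails verification."""
    prev = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

An agent's claim of completion is then checkable by anyone holding the log.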
4. Trust calibration
This is huge.
Humans routinely over-trust AI.
AI agents may over-trust other agents.
Both are dangerous.
Delegation must align trust with actual capability.
Too much trust = catastrophe.
Too little trust = wasted potential.
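Calibrating trust to observed capability can be as simple as a smoothed success-rate estimate; this Laplace-smoothed sketch is an assumption of mine, not the paper's formula:

```python
def update_trust(successes: int, failures: int) -> float:
    """Laplace-smoothed trust estimate.

    Starts at 0.5 with no evidence (neither over- nor under-trusting),
    and converges to the observed success rate as evidence accumulates.
    """
    return (successes + 1) / (successes + failures + 2)
```

A delegator can then gate high-risk tasks on a trust threshold instead of assuming competence.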
5. Systemic resilience
This is the part nobody is talking about.
If every agent delegates to the same high-performing model…
You create a monoculture.
One failure.
System-wide collapse.
Efficiency without redundancy = fragility.
Google explicitly warns about cascading failures in agentic economies.
That’s not sci-fi.
That’s distributed systems reality.
The paper also breaks down:
- Principal-agent problems in AI
- Authority gradients between agents
- “Zones of indifference” (agents complying without critical thinking)
- Transaction cost economics for AI markets
- Game-theoretic coordination
- Hybrid human-AI delegation models
This isn’t a toy-agent paper.
It’s an operating system blueprint for the “agentic web.”
The core idea:
Delegation must be a protocol.
Not a prompt.
Right now, most “multi-agent systems” are:
Agent A → Agent B → Agent C
With zero formal responsibility structure.
In a real delegation framework:
• Roles are defined
• Permissions are bounded
• Verification is required
• Monitoring is enforced
• Market coordination is decentralized
• Failures are attributable
That’s enterprise-grade infrastructure.
And we don’t have it yet.
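What "delegation as a protocol, not a prompt" might look like as a message type: an explicit contract with a bounded permission set and a named verifier. All field names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DelegationContract:
    """Sketch of a delegation-as-protocol message: explicit roles,
    bounded permissions, and a required verifier, so responsibility
    is attributable instead of implicit in a prompt."""
    task_id: str
    principal: str            # who is delegating
    delegatee: str            # who is accountable for the task
    role: str
    allowed_tools: frozenset  # permissions are bounded, not open-ended
    verifier: str             # who must sign off on completion
    deadline_s: float

    def permits(self, tool: str) -> bool:
        return tool in self.allowed_tools
```

The contract is immutable (`frozen=True`), so neither side can quietly widen its own permissions after the fact.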
The most important line in the paper?
Automation is not just about what AI can do.
It’s about what AI *should* do.
That distinction will decide:
- which startups survive
- which enterprises scale
- which AI deployments implode
We’re entering the phase where:
Prompt engineering → Agent engineering → Delegation engineering.
The companies that figure out intelligent delegation protocols first will build:
• Autonomous economic systems
• Scalable AI marketplaces
• Human-AI hybrid orgs
• Resilient agent swarms
Everyone else will ship brittle demos.
This paper isn’t flashy.
No benchmarks.
No model release.
No hype numbers.
Just a 42-page warning:
If we don’t build adaptive, accountable delegation frameworks…
The agentic web collapses under its own complexity.
And honestly?
They’re probably right.