The Transcript
RT @TheTranscript_: Stratechery's @benthompson on AI agents:
"Actually, one thing that Mark Zuckerberg said on a couple earnings calls ago that I thought was very astute, is we get hung up on technological definitions like, 'What is an agent?' and he’s like, 'Actually the largest and most successful agent in the world today is Facebook advertising', which is exactly right. Facebook advertising, people have it in their head that you go and you put in like demographics and you’re targeting and stuff"
Stratechery's @benthompson: "I think the ideal outcome for Google is they never put ads in Gemini, but they understand so much about you because of what you do in Gemini that they can then manifest that through ads on YouTube, through ads on Google, through ads on their other properties, and the challenge for OpenAI is they only have one place to put inventory, which is in ChatGPT." - The Transcript
God of Prompt
🚨 I just read Google DeepMind’s new paper called "Intelligent AI Delegation."
And it quietly exposes why 99% of AI agents will fail in the real world.
Here’s the paper:
Most “AI agents” today aren’t agents.
They’re glorified task runners.
You give them a goal.
They break it into steps.
They call tools.
They return an output.
That’s not delegation.
That’s automation with better marketing.
Google’s paper makes a brutal point:
Delegation isn’t just splitting tasks.
It’s transferring authority, responsibility, accountability, and trust across agents dynamically.
And almost no current system does this.
Here’s what they argue real delegation actually requires:
1. Dynamic assessment
Before assigning a task, an agent must evaluate:
- Capability
- Resource availability
- Risk
- Cost
- Verifiability
- Reversibility
Not just “who has the tool?”
But: “Who should be trusted with this specific task under these constraints?”
That’s a massive shift.
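As an illustrative sketch only (none of these names come from the paper), the "dynamic assessment" idea above can be read as scoring a candidate for a *specific* task under constraints, rather than just checking who has the tool:

```python
from dataclasses import dataclass

# Hypothetical sketch of per-task delegation assessment.
# All field names and weights here are illustrative assumptions,
# not the paper's actual formalism.

@dataclass
class Candidate:
    name: str
    capability: float      # 0..1, demonstrated skill on this task type
    availability: float    # 0..1, free resources right now
    risk: float            # 0..1, chance of harmful failure
    cost: float            # normalized expected cost
    verifiable: bool       # can its output be independently checked?
    reversible: bool       # can its actions be undone?

def delegation_score(c: Candidate) -> float:
    """Higher is better; unverifiable or irreversible work is penalized hard."""
    score = c.capability * c.availability - c.risk - 0.5 * c.cost
    if not c.verifiable:
        score -= 0.5
    if not c.reversible:
        score -= 0.5
    return score

def choose_delegatee(candidates: list[Candidate]) -> Candidate:
    return max(candidates, key=delegation_score)
```

The point of the sketch: a slightly more capable agent can still lose the assignment if its work can't be checked or undone.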
2. Adaptive execution
If the delegatee underperforms…
You don’t wait for failure.
You reassign mid-execution.
Switch agents.
Escalate to a human.
Restructure the task graph.
Current agents are brittle.
Real agents need recovery logic.
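A minimal sketch of that recovery logic, assuming each agent is a callable returning an output plus a crude confidence signal (the schema is my assumption, not the paper's):

```python
# Illustrative only: monitor a delegated task and reassign
# mid-execution instead of waiting for a terminal failure.

def run_with_reassignment(task, agents, threshold=0.8):
    """Try agents in order; escalate to a human if all underperform."""
    for agent in agents:
        result = agent(task)
        if result["confidence"] >= threshold:  # crude quality check
            return result
        # underperforming: don't wait for failure, switch agents
    return {"output": None, "confidence": 0.0, "escalated": "human"}
```

A real system would also restructure the task graph, but even this toy loop captures the shift from "fire and forget" to supervised delegation.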
3. Structural transparency
Today’s AI-to-AI delegation is opaque.
If something fails, you don’t know:
- Was it incompetence?
- Misalignment?
- Bad decomposition?
- Malicious behavior?
- Tool failure?
The paper proposes enforced auditability and verifiable completion.
In other words:
Agents must prove what they did.
Not just say they did it.
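One way to read "prove, not say" is a tamper-evident audit trail, sketched here as a simple hash chain. This is my illustration of the general idea, not the mechanism the paper proposes:

```python
import hashlib
import json

# Sketch of enforced auditability: each delegated step appends a
# hash-chained record, so tampering with any past entry breaks
# verification. Schema and names are illustrative assumptions.

class AuditLog:
    def __init__(self):
        self.entries = []
        self._prev = "0" * 64

    def record(self, agent: str, action: str, output: str) -> dict:
        entry = {"agent": agent, "action": action,
                 "output_hash": hashlib.sha256(output.encode()).hexdigest(),
                 "prev": self._prev}
        self._prev = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry

    def verify_chain(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(e, sort_keys=True).encode()).hexdigest()
        return True
```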
4. Trust calibration
This is huge.
Humans routinely over-trust AI.
AI agents may over-trust other agents.
Both are dangerous.
Delegation must align trust with actual capability.
Too much trust = catastrophe.
Too little trust = wasted potential.
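Calibration, in the simplest possible sketch, means trust is a running estimate updated from observed outcomes rather than a fixed prior (my toy model, not the paper's):

```python
# Toy trust calibration: a Beta-style success/failure counter,
# so trust tracks demonstrated capability over time.

class TrustTracker:
    def __init__(self):
        self.successes = 1  # weak neutral prior
        self.failures = 1

    def observe(self, succeeded: bool) -> None:
        if succeeded:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def trust(self) -> float:
        return self.successes / (self.successes + self.failures)
```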
5. Systemic resilience
This is the part nobody is talking about.
If every agent delegates to the same high-performing model…
You create a monoculture.
One failure.
System-wide collapse.
Efficiency without redundancy = fragility.
Google explicitly warns about cascading failures in agentic economies.
That’s not sci-fi.
That’s distributed systems reality.
The paper also breaks down:
- Principal-agent problems in AI
- Authority gradients between agents
- “Zones of indifference” (agents complying without critical thinking)
- Transaction cost economics for AI markets
- Game-theoretic coordination
- Hybrid human-AI delegation models
This isn’t a toy-agent paper.
It’s an operating system blueprint for the “agentic web.”
The core idea:
Delegation must be a protocol.
Not a prompt.
Right now, most “multi-agent systems” are:
Agent A → Agent B → Agent C
With zero formal responsibility structure.
In a real delegation framework:
• Roles are defined
• Permissions are bounded
• Verification is required
• Monitoring is enforced
• Market coordination is decentralized
• Failures are attributable
That’s enterprise-grade infrastructure.
And we don’t have it yet.
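The checklist above can be caricatured as a delegation contract: explicit principal, bounded permissions, mandatory verification. A hypothetical sketch (every name here is mine, not the paper's):

```python
from dataclasses import dataclass

# Hypothetical delegation contract: the opposite of
# "Agent A -> Agent B with zero formal responsibility structure".

@dataclass
class DelegationContract:
    principal: str                   # who retains accountability
    delegatee: str                   # who does the work
    scope: set[str]                  # bounded permissions
    requires_verification: bool = True

    def authorize(self, action: str) -> bool:
        # any action outside the bounded scope is refused, attributably
        return action in self.scope

contract = DelegationContract("orchestrator", "summarizer_agent",
                              scope={"read_docs", "write_summary"})
```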
The most important line in the paper?
Automation is not just about what AI can do.
It’s about what AI *should* do.
That distinction will decide:
- which startups survive
- which enterprises scale
- which ai deployments implode
We’re entering the phase where:
Prompt engineering → Agent engineering → Delegation engineering.
The companies that figure out intelligent delegation protocols first will build:
• Autonomous economic systems
• Scalable AI marketplaces
• Human-AI hybrid orgs
• Resilient agent swarms
Everyone else will ship brittle demos.
This paper isn’t flashy.
No benchmarks.
No model release.
No hype numbers.
Just a 42-page warning:
If we don’t build adaptive, accountable delegation frameworks…
The agentic web collapses under its own complexity.
And honestly?
They’re probably right.
Brady Long
I reverse-engineered the actual prompting frameworks that top AI labs use internally.
Not the fluff you see on Twitter.
The real shit that turns vague inputs into precise, structured outputs.
Spent 3 weeks reading OpenAI's model cards, Anthropic's constitutional AI papers, and leaked internal prompt libraries.
Here's what actually moves the needle:
God of Prompt
RT @godofprompt: Sora, Runway,... they all do the same damn thing.
You prompt. You wait. You get a clip. You start over.
That's not creation. That's a glorified vending machine with a $20/month subscription.
PixVerse R1 just made all of it look ancient. Real-time 1080P video that listens to you while it's generating. No render bar. No fixed clips. No "try again."
Here's why nobody's ready for this: 👇
God of Prompt
RT @godofprompt: After interviewing 12 AI researchers from OpenAI, Anthropic, and Google, I noticed they all use the same 10 prompts.
Not the ones you see on X and LinkedIn.
These are the prompts that actually ship products, publish papers, and break benchmarks.
Here's what they told me ↓ https://t.co/CwG47vkWPV
Michael Fritzell (Asian Century Stocks)
RT @willschoebs: Nice little Friday afternoon in 🇯🇵 tech land https://t.co/yMiw5wStVw
Jukan
Nomura SK Hynix Comment: SK Hynix 2026/27F Operating Profit Forecast at $130.8B / $184.8B
"We estimate that commodity memory price increases in 1Q26 significantly exceeded our initial expectations. We estimate commodity DRAM/NAND prices rose +90%/+60% QoQ in 1Q, substantially surpassing our previous forecasts of DRAM +56% and NAND +40% QoQ. Reflecting this, we raise our 1Q26F operating profit (OP) estimate for Hynix from KRW 29T to KRW 36T. We also raise our full-year 2026F commodity DRAM/NAND price growth forecasts from +126%/+115% YoY to +176%/+146% YoY. Accordingly, we revise up our 2026/27F operating profit (OP) estimates to KRW 189T / KRW 267T. We expect Hynix to achieve DRAM/NAND operating profit margins (OPM) of 76%/57% in 2026F. Factoring in higher quarterly performance bonus costs, we estimate 2026F DRAM/NAND cost per bit will increase +26%/+18% YoY, respectively."
Michael Fritzell (Asian Century Stocks)
RT @AzizSapphire: China’s 🇨🇳 industrial clusters
🇨🇳 🇨🇳 🇨🇳 https://t.co/A10bZ0QOrM