Michael Fritzell (Asian Century Stocks)
Narrative violation 🚨🚨🚨

Gen Alpha preferring movie theaters more than any other generation does is really interesting.

h/t @kylascan https://t.co/ePYAN20dSY
- fed_speak
tweet
Michael Fritzell (Asian Century Stocks)
RT @DMichaelTripi: NEW: Probability that Donald Trump is out as President before the end of his term in 2029 has nearly doubled in last month according to Kalshi. https://t.co/iEVBIkTHRh
tweet
memenodes
your wife is upset that you round-tripped life changing gains

my wife never knew i had them to begin with

we are not the same
tweet
Illiquid
Seikoh Giken hikes FY '25 forecasts for optical products. Sales +39%. Operating Profit +50%.

Enjoy. We have a new top pick for Japan levered to HBM production. It's on the orange site.

In particular, you'll want to note $MU's comments yesterday about HBM production being in full swing, shipping one quarter early, and demand being "significantly higher" than the industry's ability to supply.
- Illiquid
tweet
Michael Fritzell (Asian Century Stocks)
RT @pernasresearch: https://t.co/YlbNtRe2Vl
tweet
memenodes
Her reaction when you didn't bring a gift on Valentines day https://t.co/Y944FNJCqO
tweet
memenodes
worst thing he can say is no https://t.co/7v65uCtYv3
tweet
The Transcript
RT @TheTranscript_: Stratechery's @benthompson on AI agents:

"Actually, one thing that Mark Zuckerberg said on a couple earnings calls ago that I thought was very astute, is we get hung up on technological definitions like, 'What is an agent?' and he’s like, 'Actually the largest and most successful agent in the world today is Facebook advertising', which is exactly right. Facebook advertising, people have it in their head that you go and you put in like demographics and you’re targeting and stuff"

Stratechery's @benthompson: "I think the ideal outcome for Google is they never put ads in Gemini, but they understand so much about you because of what you do in Gemini that they can then manifest that through ads on YouTube, through ads on Google, through ads on their other properties, and the challenge for OpenAI is they only have one place to put inventory, which is in ChatGPT."
- The Transcript
tweet
God of Prompt
🚨 I just read Google DeepMind’s new paper called "Intelligent AI Delegation."

And it quietly exposes why 99% of AI agents will fail in the real world.

Here’s the paper:

Most “AI agents” today aren’t agents.

They’re glorified task runners.

You give them a goal.
They break it into steps.
They call tools.
They return an output.

That’s not delegation.

That’s automation with better marketing.

Google’s paper makes a brutal point:

Delegation isn’t just splitting tasks.

It’s transferring authority, responsibility, accountability, and trust across agents dynamically.

And almost no current system does this.

Here’s what they argue real delegation actually requires:

1. Dynamic assessment

Before assigning a task, an agent must evaluate:

- Capability
- Resource availability
- Risk
- Cost
- Verifiability
- Reversibility

Not just “who has the tool?”

But: “Who should be trusted with this specific task under these constraints?”

That’s a massive shift.

2. Adaptive execution

If the delegatee underperforms…

You don’t wait for failure.

You reassign mid-execution.

Switch agents.
Escalate to a human.
Restructure the task graph.

Current agents are brittle.
Real agents need recovery logic.
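
A toy version of that recovery logic (my own sketch, not the paper's mechanism) — on underperformance, carry the partial progress to the next agent instead of retrying or waiting for total failure, and escalate to a human when the fallback list is exhausted:

```python
# Adaptive execution sketch: reassign mid-run on underperformance.
# Each agent is a callable task -> (done: bool, result). Illustrative only.
def run_with_reassignment(task, agents, escalate):
    for agent in agents:
        done, result = agent(task)
        if done:
            return result
        # Underperformance detected: switch agents, keeping partial progress.
        task = result
    # All automated delegatees exhausted: escalate to a human.
    return escalate(task)
```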

3. Structural transparency

Today’s AI-to-AI delegation is opaque.

If something fails, you don’t know:

- Was it incompetence?
- Misalignment?
- Bad decomposition?
- Malicious behavior?
- Tool failure?

The paper proposes enforced auditability and verifiable completion.

In other words:

Agents must prove what they did.

Not just say they did it.
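
One generic way to get "prove, not say" is an append-only, hash-chained audit log, where any tampering with a recorded action breaks the chain. This is a standard construction, not a scheme from the paper:

```python
# Illustrative "enforced auditability" sketch: hash-chained action log.
import hashlib
import json

def append_entry(log, agent, action, outcome):
    prev = log[-1]["hash"] if log else "genesis"
    body = {"agent": agent, "action": action, "outcome": outcome, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_log(log):
    """Recompute every hash; any edit to a past entry is detectable."""
    prev = "genesis"
    for entry in log:
        body = {k: entry[k] for k in ("agent", "action", "outcome", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True
```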

4. Trust calibration

This is huge.

Humans routinely over-trust AI.
AI agents may over-trust other agents.
Both are dangerous.

Delegation must align trust with actual capability.

Too much trust = catastrophe.
Too little trust = wasted potential.
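
A minimal sketch of trust calibration (my own illustration): update a per-agent trust score from observed outcomes, and demand more demonstrated trust for riskier tasks. The update rule and threshold are assumptions, not from the paper:

```python
# Trust tracks demonstrated capability, not reputation or self-report.
def update_trust(trust: float, success: bool, alpha: float = 0.2) -> float:
    """Exponential moving average toward observed outcomes (1.0 or 0.0)."""
    return (1 - alpha) * trust + alpha * (1.0 if success else 0.0)

def may_delegate(trust: float, task_risk: float) -> bool:
    # Riskier tasks require more earned trust (illustrative rule).
    return trust >= 0.5 + 0.5 * task_risk
```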

5. Systemic resilience

This is the part nobody is talking about.

If every agent delegates to the same high-performing model…

You create a monoculture.

One failure.
System-wide collapse.

Efficiency without redundancy = fragility.

Google explicitly warns about cascading failures in agentic economies.

That’s not sci-fi.
That’s distributed systems reality.

The paper also breaks down:

- Principal-agent problems in AI
- Authority gradients between agents
- “Zones of indifference” (agents complying without critical thinking)
- Transaction cost economics for AI markets
- Game-theoretic coordination
- Hybrid human-AI delegation models

This isn’t a toy-agent paper.

It’s an operating system blueprint for the “agentic web.”

The core idea:

Delegation must be a protocol.
Not a prompt.

Right now, most “multi-agent systems” are:

Agent A → Agent B → Agent C

With zero formal responsibility structure.

In a real delegation framework:

• Roles are defined
• Permissions are bounded
• Verification is required
• Monitoring is enforced
• Market coordination is decentralized
• Failures are attributable

That’s enterprise-grade infrastructure.

And we don’t have it yet.
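
The bullet points above can be sketched as an explicit contract object — hypothetical, not the paper's schema — with a named accountable principal, bounded tool permissions, and a mandatory verification step, so failures are attributable by construction:

```python
# "A protocol, not a prompt": delegation as an explicit, checkable contract.
from dataclasses import dataclass
from typing import Callable

@dataclass
class DelegationContract:
    principal: str                       # who is accountable
    delegatee: str                       # who does the work
    task: str
    allowed_tools: set                   # bounded permissions
    verifier: Callable[[str], bool]      # verification is required

    def execute(self, worker, tool: str):
        if tool not in self.allowed_tools:
            raise PermissionError(f"{self.delegatee} may not use {tool}")
        result = worker(self.task, tool)
        if not self.verifier(result):
            # Failure is attributable: the contract names both parties.
            raise RuntimeError(f"unverified result from {self.delegatee} "
                               f"(principal: {self.principal})")
        return result
```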

The most important line in the paper?

Automation is not just about what AI can do.

It’s about what AI *should* do.

That distinction will decide:

- which startups survive
- which enterprises scale
- which AI deployments implode

We’re entering the phase where:

Prompt engineering → Agent engineering → Delegation engineering.

The companies that figure out intelligent delegation protocols first will build:

• Autonomous economic systems
• Scalable AI marketplaces
• Human-AI hybrid orgs
• Resilient agent swarms

Everyone else will ship brittle demos.

This paper isn’t flashy.

No benchmarks.
No model release.
No hype numbers.

Just a 42-page warning:

If we don’t build adaptive, accountable delegation frameworks…

The agentic web collapses under its own complexity.

And honestly?

They’re probably right.
tweet
Brady Long
I reverse-engineered the actual prompting frameworks that top AI labs use internally.

Not the fluff you see on Twitter.

The real shit that turns vague inputs into precise, structured outputs.

Spent 3 weeks reading OpenAI's model cards, Anthropic's constitutional AI papers, and leaked internal prompt libraries.

Here's what actually moves the needle:
tweet
God of Prompt
RT @godofprompt: Sora, Runway... they all do the same damn thing.

You prompt. You wait. You get a clip. You start over.

That's not creation. That's a glorified vending machine with a $20/month subscription.

PixVerse R1 just made all of it look ancient. Real-time 1080P video that listens to you while it's generating. No render bar. No fixed clips. No "try again."

Here's why nobody's ready for this: 👇
tweet