Offshore
Photo
The Few Bets That Matter
$ANET posted an excellent quarter.

Revenues up ~29%, gross/net margins at 63% & 38%, Q1-26 guidance pointing to ~30% YoY.
Shares up 9% post-earnings at ~21x sales.
Deserved.

$ALAB posted an even better one.

Revenues up 91%, with 75% gross and 17% net margins, Q1-26 guidance at 83% growth.
Shares down 28% since earnings at ~26x sales.

$ANET is more established, slower growing but higher margin than $ALAB. Both are critical to powering the next AI data centers as CapEx continues to skyrocket.

But $ALAB made the “mistake” of acquiring two companies, increasing OpEx and salaries to expand capabilities and deliver more value to customers.

Less short-term cash generation.
Exactly what the market has been punishing lately.

Still, if $ANET reflects how the market wants to price hardware names (and peers suggest it does), then $ALAB is not trading where it should.

You don’t grow ~90% before production ramps on flagship products and trade at 26x sales, while a ~30% grower in the same ecosystem, facing the same risk case ($NVDA networking systems), trades at 21x.
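To make the comparison concrete, here is a rough sketch of the multiple-versus-growth math the tweet is gesturing at. The "sales multiple per point of growth" heuristic is my framing for illustration (a revenue-side analogue of a PEG ratio), not the author's metric, using the figures quoted above.

```python
# Illustrative only: compare P/S multiples normalized by forward growth,
# using the ~21x / ~30% ($ANET) and ~26x / ~83% ($ALAB) figures above.

def ps_per_growth_point(ps_multiple: float, growth_pct: float) -> float:
    """Sales multiple paid per percentage point of guided revenue growth."""
    return ps_multiple / growth_pct

anet = ps_per_growth_point(21, 30)   # slower grower at 21x sales
alab = ps_per_growth_point(26, 83)   # faster grower at 26x sales

print(f"ANET: {anet:.2f}x per growth point")
print(f"ALAB: {alab:.2f}x per growth point")
```

On this crude normalization the faster grower screens cheaper per unit of guided growth, which is the tweet's point about the relative mispricing.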

Choose your imposter.

https://t.co/l9nGdNNrQu
- The Few Bets That Matter
tweet
Offshore
Video
Startup Archive
Keith Rabois: “The velocity of your company improves by adding barrels”

Keith shares his “Barrels and Ammunition” framework for building effective teams:

“Most companies—once they get into hiring mode—just hire a lot of people. And you expect that as you add people your throughput and velocity of shipping things is going to increase. But it turns out it doesn’t work that way. Usually when you hire more engineers, you actually don’t get that much more done. You sometimes get less done.”

Keith argues that the reason for this is that most people in a company—even great people—are “ammunition.” But to improve velocity, you need “barrels”. He defines barrels as extremely talented people who can take ideas from inception all the way through to fully shipped product. Most companies start with one barrel (the founder). And when they add another, they can get twice as many things done per week, quarter, etc.

But true barrels are incredibly difficult to find:

“When you have them, give them lots of equity, promote them, take them to dinner every week because they’re virtually irreplaceable. They’re also very culturally specific. A barrel at one company may not be a barrel at another company.”

Video source: @ycombinator (2014)
tweet
Offshore
Photo
The Transcript
$SPOT Co-CEO says senior engineers at Spotify Technology have largely stopped writing code themselves since December 2025 when Claude's Opus 4.5 came out:

"So it is a big change. It is real and it's happening fast" https://t.co/6o7rTlAkRO
tweet
Offshore
Photo
God of Prompt
RT @godofprompt: 🚨 I just read Google DeepMind’s new paper called "Intelligent AI Delegation."

And it quietly exposes why 99% of AI agents will fail in the real world.

Here’s the paper:

Most “AI agents” today aren’t agents.

They’re glorified task runners.

You give them a goal.
They break it into steps.
They call tools.
They return an output.

That’s not delegation.

That’s automation with better marketing.

Google’s paper makes a brutal point:

Delegation isn’t just splitting tasks.

It’s transferring authority, responsibility, accountability, and trust across agents dynamically.

And almost no current system does this.

Here’s what they argue real delegation actually requires:

1. Dynamic assessment

Before assigning a task, an agent must evaluate:

- Capability
- Resource availability
- Risk
- Cost
- Verifiability
- Reversibility

Not just “who has the tool?”

But: “Who should be trusted with this specific task under these constraints?”

That’s a massive shift.
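The assessment step above can be sketched as a scoring function over candidate delegatees. The dataclass, field names, and weights below are illustrative assumptions, not the paper's formalism; the point is that assignment considers all six dimensions, not just tool availability.

```python
# Minimal sketch of "dynamic assessment": score each candidate agent on
# the dimensions listed above before delegating. Weights are arbitrary.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    capability: float      # 0..1, fit for this specific task
    availability: float    # 0..1, free resources right now
    risk: float            # 0..1, chance of harmful failure
    cost: float            # 0..1, normalized execution cost
    verifiability: float   # 0..1, can we check its output?
    reversibility: float   # 0..1, can we undo its actions?

def delegation_score(c: Candidate) -> float:
    # Reward capability, availability, verifiability, reversibility;
    # penalize risk and cost.
    return (0.35 * c.capability + 0.15 * c.availability
            + 0.15 * c.verifiability + 0.15 * c.reversibility
            - 0.15 * c.risk - 0.05 * c.cost)

def choose(candidates: list[Candidate]) -> Candidate:
    """Pick who should be trusted with this task under these constraints."""
    return max(candidates, key=delegation_score)
```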

2. Adaptive execution

If the delegatee underperforms…

You don’t wait for failure.

You reassign mid-execution.

Switch agents.
Escalate to a human.
Restructure the task graph.

Current agents are brittle.
Real agents need recovery logic.
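That recovery logic can be sketched as a monitoring loop: check progress at a checkpoint, reassign when a delegatee stalls, and escalate to a human only when every agent underperforms. The result shape and threshold are illustrative assumptions.

```python
# Sketch of "adaptive execution": don't wait for a hard failure;
# reassign mid-execution when progress stalls below a threshold.

def run_with_recovery(task, agents, threshold=0.5):
    """Try each agent in turn; each call returns {"done": bool, "progress": float}."""
    for agent in agents:
        result = agent(task)
        if result["done"]:
            return result
        if result["progress"] >= threshold:
            # decent progress: restructure the remaining task graph and continue
            task = result.get("remaining", task)
        # otherwise: switch agents (next iteration reassigns)
    # every delegatee underperformed: escalate to a human
    return {"done": False, "escalate_to_human": True}
```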

3. Structural transparency

Today’s AI-to-AI delegation is opaque.

If something fails, you don’t know:

- Was it incompetence?
- Misalignment?
- Bad decomposition?
- Malicious behavior?
- Tool failure?

The paper proposes enforced auditability and verifiable completion.

In other words:

Agents must prove what they did.

Not just say they did it.
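One minimal way to sketch "prove, don't say": the delegatee packages its output with an evidence log and a tamper-evident digest, and the principal recomputes the digest before accepting. The hashing scheme here is my illustrative stand-in, not the paper's mechanism.

```python
# Sketch of verifiable completion: output + evidence log + digest,
# re-checked by the principal instead of trusted on report.
import hashlib
import json

def attest(output: str, log: list[str]) -> dict:
    """Delegatee packages its result with a tamper-evident digest."""
    payload = json.dumps({"output": output, "log": log}, sort_keys=True)
    return {"output": output, "log": log,
            "digest": hashlib.sha256(payload.encode()).hexdigest()}

def verify(report: dict) -> bool:
    """Principal recomputes the digest before accepting the work."""
    payload = json.dumps({"output": report["output"], "log": report["log"]},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest() == report["digest"]
```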

4. Trust calibration

This is huge.

Humans routinely over-trust AI.
AI agents may over-trust other agents.
Both are dangerous.

Delegation must align trust with actual capability.

Too much trust = catastrophe.
Too little trust = wasted potential.
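Aligning trust with actual capability can be sketched as evidence-based updating: track each agent's outcomes and derive trust from the record. Beta-Bernoulli updating is a standard choice for this; the paper itself doesn't prescribe a specific rule.

```python
# Sketch of trust calibration: trust follows observed outcomes,
# starting from an uninformative prior (~0.5) rather than blind trust.

class TrustTracker:
    def __init__(self):
        # Beta(1, 1) prior: no evidence yet
        self.successes, self.failures = 1, 1

    def observe(self, success: bool):
        if success:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def trust(self) -> float:
        """Posterior mean probability that this agent succeeds."""
        return self.successes / (self.successes + self.failures)

t = TrustTracker()
for outcome in [True, True, False, True]:
    t.observe(outcome)
print(round(t.trust, 2))  # 0.67
```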

5. Systemic resilience

This is the part nobody is talking about.

If every agent delegates to the same high-performing model…

You create a monoculture.

One failure.
System-wide collapse.

Efficiency without redundancy = fragility.

Google explicitly warns about cascading failures in agentic economies.

That’s not sci-fi.
That’s distributed systems reality.

The paper also breaks down:

- Principal-agent problems in AI
- Authority gradients between agents
- “Zones of indifference” (agents complying without critical thinking)
- Transaction cost economics for AI markets
- Game-theoretic coordination
- Hybrid human-AI delegation models

This isn’t a toy-agent paper.

It’s an operating system blueprint for the “agentic web.”

The core idea:

Delegation must be a protocol.
Not a prompt.

Right now, most “multi-agent systems” are:

Agent A → Agent B → Agent C

With zero formal responsibility structure.

In a real delegation framework:

• Roles are defined
• Permissions are bounded
• Verification is required
• Monitoring is enforced
• Market coordination is decentralized
• Failures are attributable

That’s enterprise-grade infrastructure.

And we don’t have it yet.
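The framework bullets above can be sketched as a data structure: each delegation carries a defined role, bounded permissions, a verification requirement, and an attributable audit trail. Field names are illustrative assumptions, not a real protocol.

```python
# Sketch of a delegation record implementing the bullets above.
from dataclasses import dataclass, field

@dataclass
class Delegation:
    principal: str
    delegatee: str
    role: str                              # roles are defined
    allowed_tools: frozenset               # permissions are bounded
    requires_verification: bool = True     # verification is required
    audit_trail: list = field(default_factory=list)  # failures are attributable

    def use_tool(self, tool: str) -> bool:
        ok = tool in self.allowed_tools
        # every attempt is logged, including denials
        self.audit_trail.append(
            f"{self.delegatee} -> {tool}: {'ok' if ok else 'DENIED'}")
        return ok
```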

The most important line in the paper?

Automation is not just about what AI can do.

It’s about what AI *should* do.

That distinction will decide:

- which startups survive
- which enterprises scale
- which AI deployments implode

We’re entering the phase where:

Prompt engineering → Agent engineering → Delegation engineering.

The companies that figure out intelligent delegation protocols first will build:

• Autonomous economic systems
• Scalable AI marketplaces
• Human-AI hybrid orgs
• Resilient agent swarms

Everyone else will ship brittle demos.

This paper isn’t flashy.

No benchmarks.
No model release.
No hype numbers.

Just a 42-page warning:

If we don’t build adaptive, accountable delegation frameworks…

The agentic web collapses under its own complexity.

And honestly?

They’re probably right. tweet
Moon Dev
I'm taking me + opus over openclaw + opus any day
tweet
Javier Blas
RT @LiveSquawk: US Pres Trump: I Think Negotiations With Iran Will Be Successful
- If Iran Talks Unsuccessful, It'll Be Bad Day For Iran
- Relationship With Venezuela Is As Good As Possible
- We'll Need Aircraft Carrier If No Deal With Iran
- Second Carrier Just Arrived In Persian Gulf
- Looking At A Prime Minister For Iraq
- Russia Wants A Deal, Zelenskiy Has To Get Moving
- We’re Negotiating Right Now For Greenland
tweet
Offshore
Photo
God of Prompt
I should charge $99 for this.

But I'm giving away our Claude Mastery Guide for free.
We just updated it with a full Claude Skills section, the feature most people still don't know exists.

Inside:
→ 30 prompt engineering principles
→ 10+ mega-prompts ready to copy
→ Mini-course from beginner to advanced
→ How to build Skills that make Claude remember your workflows forever
→ Glossary + strategic use cases

This turns Claude from a chatbot into your actual work system.

Comment "Claude" and I'll DM it to you.
(Must be following me to receive it)
tweet
Offshore
Photo
The Few Bets That Matter
$TMDX should be trading closer to $ISRG

Both are in the healthcare domain with a product years ahead of the competition, growing market share and importance within the healthcare system.

Comparable growth profiles, although $ISRG's is less explosive: no declines, just stable growth.

Comparable margins, although again $ISRG is slightly superior, being already optimized for profitability, something $TMDX is working on with great results, as the latest quarters clearly show.

There are small differences which explain why $ISRG has such a premium, and it deserves it. But the market will need to realize that the $TMDX execution risks it is pricing in are only a matter of delay. Not risk.

In a few quarters, $TMDX will deserve equivalent premium.
tweet
Offshore
Photo
God of Prompt
RT @godofprompt: How to use LLMs for competitive intelligence (scraping, analysis, reporting): https://t.co/xlGOSpRQPy
tweet
Offshore
Photo
God of Prompt
RT @alex_prompter: 🚨 Anthropic just dropped a complete guide on how to build Skills like a pro.

And if you’re building AI agents, this is required reading.

It’s a 30+ page deep dive called The Complete Guide to Building Skills for Claude and it quietly shifts the conversation from “prompt engineering” to real execution design.

Here’s the big idea:

A Skill isn’t just a prompt.
It’s a structured system.

You package instructions inside a https://t.co/NFHAROW040 file, optionally add scripts, references, and assets, and teach Claude a repeatable workflow once instead of re-explaining it every chat.

But the real unlock is something they call progressive disclosure.

Instead of dumping everything into context:

• A lightweight YAML frontmatter tells Claude when to use the skill
• Full instructions load only when relevant
• Extra files are accessed only if needed

Less context bloat. More precision.
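Progressive disclosure can be sketched in a few lines: only the frontmatter metadata stays in context, and the full instruction body loads when the skill becomes relevant. The SKILL.md layout follows Anthropic's skills format (YAML frontmatter with name and description); the parsing and trigger check below are an illustrative stub, not Claude's actual loader.

```python
# Sketch of progressive disclosure: frontmatter is always visible,
# the body loads only on demand.

SKILL_MD = """---
name: weekly-report
description: Use when the user asks for a weekly status report.
---
Full instructions: gather updates, group by project, write the summary...
"""

def split_skill(text: str) -> tuple[dict, str]:
    """Split SKILL.md into frontmatter metadata and the full body."""
    _, fm, body = text.split("---", 2)
    meta = dict(line.split(": ", 1) for line in fm.strip().splitlines())
    return meta, body.strip()

meta, body = split_skill(SKILL_MD)
# Only `meta` lives in context; `body` is fetched when the skill triggers.
user_request = "write my weekly status report"
instructions = body if "weekly" in user_request else None
```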

They also introduce a powerful analogy:

MCP gives Claude the kitchen.
Skills give it the recipe.

Without skills: users connect tools and don’t know what to do next.
With skills: workflows trigger automatically, best practices are embedded, API calls become consistent.

They outline 3 major patterns:

1) Document & asset creation
2) Workflow automation
3) MCP enhancement

And they emphasize something most builders ignore: testing.

Trigger accuracy.
Tool call efficiency.
Failure rate.
Token usage.
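Those metrics are easy to compute once you log skill runs. A minimal sketch, assuming a hypothetical eval-log format (the field names are mine, not from Anthropic's guide):

```python
# Sketch: compute trigger accuracy, failure rate, and average token
# usage from a hypothetical skill eval log.

runs = [
    {"should_trigger": True,  "triggered": True,  "failed": False, "tokens": 900},
    {"should_trigger": True,  "triggered": False, "failed": False, "tokens": 0},
    {"should_trigger": False, "triggered": False, "failed": False, "tokens": 0},
    {"should_trigger": True,  "triggered": True,  "failed": True,  "tokens": 1400},
]

# Trigger accuracy: did the skill fire exactly when it should have?
trigger_accuracy = sum(r["triggered"] == r["should_trigger"] for r in runs) / len(runs)

# Failure rate and token usage, over runs where the skill executed.
executed = [r for r in runs if r["triggered"]]
failure_rate = sum(r["failed"] for r in executed) / len(executed)
avg_tokens = sum(r["tokens"] for r in executed) / len(executed)

print(trigger_accuracy, failure_rate, avg_tokens)  # 0.75 0.5 1150.0
```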

This isn’t about clever wording.

It’s about designing an execution layer on top of LLMs.

Skills work across https://t.co/6tb6ixQpca, Claude Code, and the API. Build once, deploy everywhere.

The era of “just write a better prompt” is ending.

Anthropic just handed everyone a blueprint for turning chat into infrastructure.
tweet