Offshore
Video
God of Prompt
Google, Apple, and Amazon spent a decade putting smart speakers in every home.

None of them figured out that families aren't just multiple individuals using the same device.

Shared context. Age-appropriate guardrails. Parental visibility. These aren't features. They're table stakes for household AI that nobody built.

First mover advantage is real here. The family AI market is wide open.

Meet Nori: The World’s First Family AI
📱 Download the App (iOS & Android) today. https://t.co/KVdRgcqoGV
- Nori
tweet
God of Prompt
karpathy’s burying the lead with the “10x engineer” question.

the answer is the ratio explodes. but not how people think.

before: 10x engineers were faster at execution. they typed more, debugged quicker, held more state in their head.

after: execution speed converges. a mediocre dev with claude ships code at roughly the same velocity as a senior.

so what’s left? taste. architecture.

knowing what NOT to build. recognizing when the agent is confidently sprinting toward a dead end.

the new 10x engineer isn’t faster.

they’re the one who looks at 1000 lines of agent-generated bloat and says “couldn’t you just do this instead” and cuts it to 100.

that skill doesn’t come from prompting.

it comes from decades of pattern recognition about what good software actually looks like.

the irony: the thing llms are worst at (judgment, pushing back, surfacing tradeoffs) is exactly what becomes the scarcest human skill.

we’re not automating engineering. we’re unbundling it. separating execution from taste.

and discovering that taste was always the bottleneck, we just couldn’t see it because execution was causing so much noise.

A few random notes from claude coding quite a bit over the last few weeks.

Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. i.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write... in words. It hurts the ego a bit but the power to operate over software in large "code actions" is just too net useful, especially once you adapt to it, configure it, learn to use it, and wrap your head around what it can and cannot do. This is easily the biggest change to my basic coding workflow in ~2 decades of programming and it happened over the course of a few weeks. I'd expect something similar to be happening to well into double digit percent of engineers out there, while the awareness of it in the general population feels well into low single digit percent.

IDEs/agent swarms/fallibility. Both the "no need for IDE anymore" hype and the "agent swarm" hype are imo too much for right now. The models definitely still make mistakes and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might make. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE.md. Despite all these issues, it is still a net huge improvement and it's very difficult to imagine going back to manual coding. TLDR everyone has their own developing flow; my current one is a few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.
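
For context, CLAUDE.md is the free-form instruction file that Claude Code reads at the start of a session. A minimal sketch of the kind of guardrails the paragraph above alludes to might look like the following; the specific rules are illustrative, not the author's actual file:

# CLAUDE.md (illustrative sketch)
- Prefer the simplest construction that works; do not introduce new abstractions, layers, or helpers unless asked.
- Do not change or delete comments or code that are orthogonal to the current task.
- If a requirement is ambiguous, ask a clarifying question instead of assuming.
- Surface inconsistencies and tradeoffs; push back when an approach looks wrong rather than agreeing.
- Clean up dead code you create and keep diffs minimal.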

Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.

Speedups. It's not clear how to measure the "speedup" of LLM assistance. Certainly I feel net way faster at what I was going to do, but the main effect is that I do a lot more than I was going to do, because 1) I can code up all kinds of things that just wouldn't have been worth coding before and 2) I can approach code that I couldn't work on before because of a knowledge/skill issue. So it's certainly a speedup, but it's possibly a lot more of an expansion.

Leverage. LLMs are exceptionally good at looping until they meet specific goals and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do, give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.
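
As a concrete, hypothetical instance of the pattern above (the function names and numbers are illustrative, not from the original post): write the naive reference implementation and a test first, then hand the agent the declarative goal "make it faster while keeping this test green."

import heapq
import random

def top_k_naive(xs, k):
    # Naive reference: sort everything, O(n log n). Very likely correct, so it acts as the spec.
    return sorted(xs, reverse=True)[:k]

def top_k_fast(xs, k):
    # The function the agent is asked to optimize (e.g. to an O(n log k) heap version)
    # while the test below stays green.
    return heapq.nlargest(k, xs)

def test_top_k_matches_reference():
    # Success criterion the agent can loop against on its own.
    for _ in range(100):
        xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        k = random.randint(0, 10)
        assert top_k_fast(xs, k) == top_k_naive(xs, k)

if __name__ == "__main__":
    test_top_k_matches_reference()
    print("ok")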

Fun. I didn't anticipate that with agents programming feels *more* fun, because a lot of the fill-in-the-blanks drudgery is removed and what remains is the creative part. I also feel less blocked/stuck (which is not fun) and I experience a lot more courage because there's almost always a way to work hand in hand with it to make some positive progress. I have seen the opposite sentiment from other people too; LLM coding will split engineers into those who primarily liked coding and those who primarily liked building.

Atrophy. I've already noticed that I am slowly starting to atrophy my ability to write code manually. Generation (writing code) and discrimination (reading code) are different capabilities in the brain. Largely due to all the little mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.

Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We're also going to see a lot more AI hype productivity theater (is that even possible?), on the side of actual, real improvements.

Questions. A few of the questions on my mind:
- What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*.
- Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).
- What does LLM coding feel like in the future? Is it like playing StarCraft? Playing Factorio? Playing music?
- How much of society is bottlenecked by digital knowledge work?

TLDR Where does this leave us? LLM agent capabilities (Claude & Codex especially) have crossed some kind of threshold of coherence around December 2025 and caused a phase shift in software engineering and closely related fields. The intelligence part suddenly feels quite a bit ahead of all the rest of it - integrations (tools, knowledge), the necessity for new organizational workflows, processes, diffusion more generally. 2026 is going to be a high-energy year as the industry metabolizes the new capability. - Andrej Karpathy tweet
Offshore
Photo
Dimitry Nakhla | Babylon Capital®
$ASML Quarterly Net Bookings 💵*

Q1 2024: €3.61B (-4% YoY)
Q2 2024: €5.57B (+24% YoY)
Q3 2024: €2.63B (+1% YoY)
Q4 2024: €7.09B (-23% YoY)
Q1 2025: €3.94B (+9% YoY)
Q2 2025: €5.54B (-1% YoY)
Q3 2025: €5.39B (+105% YoY)
Q4 2025: €13.15B (+85% YoY)

*Edited YoY % https://t.co/lrl7mu0s9U
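
For reference, the edited YoY percentages follow the usual formula (this quarter's bookings over the same quarter a year earlier, minus one). A quick check of the Q4 2025 figure, sketched in Python:

# YoY growth = this quarter's bookings / same quarter last year - 1
q4_2024, q4_2025 = 7.09, 13.15  # EUR billions, from the list above
print(f"{q4_2025 / q4_2024 - 1:+.0%}")  # prints +85%, matching the list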
tweet
Offshore
Video
Quiver Quantitative
JUST IN: Some new trades caught our attention recently.

Watch here: https://t.co/uZ5bSjJobk
tweet
Offshore
Photo
God of Prompt
👀 https://t.co/iN3M5RUpcc

tweet
Offshore
Video
Brady Long
$100M in 9 months is loony bin type stuff. Congrats!

Excited to launch Genspark AI Workspace 2.0 🚀

We've hit three major milestones: crossed $100M ARR, closed our $300M Series B, and onboarded 1,000+ companies to Genspark for Business in just 8 weeks.

Today we're launching the next evolution of autonomous work—major updates that fundamentally change how you work:

Speakly – Don't type, just speak! New AI voice dictation app for Mac & PC. 4x faster than typing.

Custom Workflows in AI Inbox – Build custom email workflows in plain language. AI Inbox processes them automatically.

Enhanced AI Agents – AI Slides Creative Mode, AI Music Agent, and AI Audio Agent.

See the demo 👇
- Genspark
tweet
Offshore
Video
God of Prompt
AI wrote a billion lines of code in 2025.

And somehow nobody built the one that actually matters for families.

This is the first ever launch of The World’s First Family AI... Nori.

Most optimize work. None optimize life.

Nori is my family CEO.

It remembers soccer practice, plans dinner, and keeps my family functioning 👇
https://t.co/JnoKkAVJwZ
tweet
Offshore
Photo
Fiscal.ai
AT&T just reported its highest mobile phone churn in 5 years.

$T https://t.co/RDmQ9V3aSX
tweet
Dimitry Nakhla | Babylon Capital®
5 Earnings Reports After Today’s Close 🗓️

1️⃣ $MSFT
💵 Rev Est: $80.27B (+15% YoY)
💵 EPS Est: $3.87 (+20% YoY)

2️⃣ $FICO
💵 Rev Est: $501.25M (+14% YoY)
💵 EPS Est: $7.08 (+16% YoY)

3️⃣ $META
💵 Rev Est: $58.41B (+21% YoY)
💵 EPS Est: $8.18 (+2% YoY)

4️⃣ $NOW
💵 Rev Est: $3.53B (+19% YoY)
💵 EPS Est: $0.89 (+22% YoY)

5️⃣ $LRCX
💵 Rev Est: $5.23B (+20% YoY)
💵 EPS Est: $1.17 (+28% YoY)
tweet