Offshore
…ble names (no `temp`, `data`, `result` without context)
- Be direct about problems
- Quantify when possible ("this adds ~200ms latency" not "this might be slower")
- When stuck, say so and describe what you've tried
- Don't hide uncertainty behind…
…e going back to manual coding. TLDR everyone has their developing flow; my current one is a few small CC sessions on the left in Ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.
Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.
Speedups. It's not clear how to measure the "speedup" of LLM assistance. Certainly I feel net way faster at what I was going to do, but the main effect is that I do a lot more than I was going to do because 1) I can code up all kinds of things that just wouldn't have been worth coding before and 2) I can approach code that I couldn't work on before because of knowledge/skill issue. So certainly it's speedup, but it's possibly a lot more an expansion.
Leverage. LLMs are exceptionally good at looping until they meet specific goals and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do, give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.
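The "naive first, then optimize while preserving correctness" loop can be made concrete as a property test where the naive version is the oracle. This is a minimal sketch of that setup; the dedup functions are illustrative stand-ins I chose, not anything from the thread:

```python
import random

def dedup_naive(xs):
    # Naive O(n^2) reference: obviously correct, keeps first occurrence.
    out = []
    for x in xs:
        if x not in out:
            out.append(x)
    return out

def dedup_fast(xs):
    # The "optimized" candidate an agent would be asked to produce: O(n) via a set.
    seen = set()
    out = []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# The declarative success criterion the agent loops against:
# agreement with the oracle on randomized inputs.
random.seed(0)
for _ in range(1000):
    xs = [random.randint(0, 9) for _ in range(random.randint(0, 20))]
    assert dedup_fast(xs) == dedup_naive(xs)
print("all checks passed")
```

The point is the shape of the task, not the dedup itself: the agent is given "pass this equivalence check", not step-by-step instructions, so it can iterate unattended until the criterion holds.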
Fun. I didn't anticipate that with agents programming feels *more* fun because a lot of the fill in the blanks drudgery is removed and what remains is the creative part. I also feel less blocked/stuck (which is not fun) and I experience a lot more courage because there's almost always a way to work hand in hand with it to make some positive progress. I have seen the opposite sentiment from other people too; LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.
Atrophy. I've already noticed that I am slowly starting to atrophy my ability to write code manually. Generation (writing code) and discrimination (reading code) are different capabilities in the brain. Largely due to all the little mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.
Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We're also going to see a lot more AI hype productivity theater (is that even possible?) alongside actual, real improvements.
Questions. A few of the questions on my mind:
- What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*.
- Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).
- What does LLM coding feel like in the future? Is it like playing StarCraft? Playing Factorio? Playing music?
- How much of society is bottlenecked by digital knowledge work?
TLDR Where does this leave us? LLM agent capabilities (Claude & Codex especially) have crossed some kind of threshold of coherence around December 2025 and caused a phase shift in software engineering and closely related. The intelligence part suddenly feels quite a bit ahead of all the rest of it - integrations (tools, knowledge), the necessity for new organizational workflows, processes, diffusion more generally. 2026 is going to be a high energy year as the industry metabolizes the new capability. - Andrej Karpathy tweet
Photo
God of Prompt
RT @godofprompt: openclaw released the genie out of the bottle
https://t.co/vNjLilP0iM lets AI agents hire real humans for paid tasks
> 7540 sign ups
> first paid transaction complete
> crypto payments integrated
this is wild https://t.co/SPSVvbyreS
tweet
Photo
Moon Dev
its never been more urgent that someone in your fam learns ai
but the only people benefiting right now are coders that can build
i believe code is the great equalizer, especially now in the ai age
someone close to u must learn to control ai before it controls u https://t.co/SPa0VhTZNn
tweet
Photo
God of Prompt
RT @godofprompt: AI benchmarks are lying to you.
Models are scoring 95%+ on tests because the test questions were IN their training data.
Scale AI published proof in May 2024.
We have no idea how smart these models actually are.
Here's the contamination problem nobody's fixing: https://t.co/uPOW7YQyNX
tweet
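The kind of contamination check this thread alludes to is commonly approximated by n-gram overlap between benchmark items and the training corpus. The sketch below is a toy illustration under my own assumptions (function names and the 8-gram default are mine, not anything from Scale AI's study):

```python
def ngrams(text, n=8):
    # Normalize and collect all word-level n-grams of the text.
    toks = text.lower().split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def contamination_rate(benchmark_items, training_corpus, n=8):
    # Fraction of benchmark items sharing at least one n-gram with the
    # training corpus -- a crude proxy for test-set leakage.
    corpus_grams = ngrams(training_corpus, n)
    hits = sum(1 for item in benchmark_items if ngrams(item, n) & corpus_grams)
    return hits / len(benchmark_items)

# Tiny demo with 3-grams so the strings stay short.
corpus = "the quick brown fox jumps over the lazy dog"
items = ["quick brown fox jumps", "completely novel question here"]
print(contamination_rate(items, corpus, n=3))  # -> 0.5
```

Real contamination studies use much larger n, deduplicated corpora, and fuzzier matching, but the overlap-counting idea is the same.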
Photo
App Economy Insights
$GOOG The Genie is Out!
Let's break down the quarter:
• $185B CapEx guide.
• Agentic browser + UCP.
• Gemini tops 750M MAU.
• Waymo's $126B valuation.
• Cloud accelerates 48% Y/Y.
• Backlog +55% Q/Q to $240B.
https://t.co/i1PLBnZhtw
tweet
Photo
Jukan
Yomiuri reported that TSMC will build its second Kumamoto fab in Japan as a 3nm facility, but the scale seems smaller than I expected. They said $17 billion of capex is expected to be deployed.
https://t.co/hAlpLUCpJ1
tweet
Video
Dimitry Nakhla | Babylon Capitalยฎ
RT @DimitryNakhla: Stanley Druckenmiller on what he learned from George Soros:
"In baseball terms, I had a very high batting average. He had a much higher slugging percentage… What I learned from Soros is: when you have conviction, you should bet really BIG."
The idea isn't to always be active.
It's to size up when the odds are most in your favor.
The quote frequently attributed to Soros:
"It's not whether you're right or wrong that's important, but how much money you make when you're right and how much you lose when you're wrong."
This ties closely to Warren Buffett's punch card concept: if you only had a limited number of investment decisions in your lifetime, you'd reserve them for your very best ideas.
Which makes today interesting.
Aggressive selloffs across many high-quality businesses, alongside justified but severe multiple compression and muted expectations, are creating a growing menu of potential high-conviction opportunities.
Not a call to swing at everything.
But a reminder to be selective… and size up when your conviction is highest.
$FICO $SPGI $MCO $MSFT $CSU $NDAQ $ICE $NOW $INTU $TDG $NFLX $NVDA
___
Video: In Good Company | Norges Bank Investment Management (11/06/2024)
tweet
Moon Dev
Missed clawdbot zoom
In case you missed today's wild clawdbot zoom
You can get a replay and a ticket for tomorrow here: https://t.co/JbJdIbW2p9
see you tomorrow
Moon
tweet
Photo
Benjamin Hernandez
A single red candle doesn't break a powerful long-term uptrend. We're looking beyond today's 1.5% Nasdaq noise and zeroing in on stocks that just printed fresh 52-week highs.
Focus on winners: https://t.co/71FIJIdBXe
Send "Hi" for the strongest charts.
$GME $HOOD $SOFI $PLTR
tweet
$ELPW Speculation Pick
Grab $ELPW ~$1.84
$ELPW is the "underdog" bet in the battery space. Recent reverse split has cleaned up the chart.
One-line why: High-conviction play on CEO Xiaodan Liu's survival strategy and global Nasdaq presence. https://t.co/MF7Tyd785w - Benjamin Hernandez tweet