Offshore
Photo
God of Prompt
RT @ytscribeai: my favourite life hack to learn anything:
> find a 2-hour lecture packed with value
> get transcription with https://t.co/eclfTyTcwf
> use nano banana pro with this prompt:
"create an image of a hand-drawn cheatsheet with key concepts from this transcript: [PASTE TRANSCRIPT]"
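The three steps above can be sketched as a small helper that loads a saved transcript and fills in the cheatsheet prompt. The file path, template wording, and the size guard are illustrative assumptions; the actual image-generation call (step 3) is left out since it depends on the tool used:

```python
# Sketch of the cheatsheet workflow: read a lecture transcript from disk and
# build the image-generation prompt. Filename and max_chars are hypothetical;
# the resulting string is what you'd paste into the image model.
from pathlib import Path

PROMPT_TEMPLATE = (
    "create an image of a hand-drawn cheatsheet with key concepts "
    "from this transcript: {transcript}"
)

def build_cheatsheet_prompt(transcript_path: str, max_chars: int = 30_000) -> str:
    """Read a transcript file and fill in the cheatsheet prompt.

    max_chars is an assumed guard so a 2-hour transcript stays within a
    typical prompt-size limit.
    """
    transcript = Path(transcript_path).read_text(encoding="utf-8").strip()
    return PROMPT_TEMPLATE.format(transcript=transcript[:max_chars])
```
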
Dimitry Nakhla | Babylon Capital®
The first chart shows the impressive scale and trajectory of $MSFT RPO 💵
+107.57% YoY https://t.co/8UwWi02NaF
Microsoft $MSFT Q2 2026 Report 🗓️
✅ REV: $81.27B (+17% YoY)
✅ EPS: $4.14 (+28% YoY)
☁️ MICROSOFT CLOUD: $51.5B (+26% YoY) & commercial remaining performance obligation increased 110% to $625B https://t.co/w9A14JtO46
App Economy Insights
$TSLA Tesla used to update this market share chart every quarter.
In Q4, the chart quietly disappeared.
When companies stop reporting a metric, it’s rarely because it no longer matters.
It’s because it stopped looking good. https://t.co/TL40npYNpo
App Economy Insights
🚖 Tesla: Slump Meets Hype.
• FY26 CapEx to double to $20B.
• Model S/X make room for Optimus.
• $2B xAI bet on the Musk AI ecosystem.
https://t.co/QKg3cKJ53Y
God of Prompt
RT @godofprompt: Telling an LLM to "act as an expert" is lazy and doesn't work.
I tested 47 persona configurations across Claude, GPT-4, and Gemini.
Generic personas = 60% quality
Specific personas = 94% quality
Here's how to actually get expert-level outputs: https://t.co/iFZPTtp6Oh
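The generic-vs-specific contrast the thread describes can be illustrated with a small helper that assembles a persona system prompt from concrete attributes. The field names and example values below are hypothetical, not the thread's actual configurations, and the quality percentages remain the thread author's own claim:

```python
# Illustrative only: a structured persona pins down domain, experience,
# constraints, and output format instead of a vague "act as an expert".
# All field names and example values here are hypothetical.
def build_persona(role: str, domain: str, years: int,
                  constraints: list[str], output_format: str) -> str:
    """Assemble a specific persona system prompt from concrete attributes."""
    lines = [
        f"You are a {role} with {years} years of experience in {domain}.",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        f"Respond as: {output_format}",
    ]
    return "\n".join(lines)

generic = "Act as an expert."  # vague: gives the model nothing to condition on

specific = build_persona(
    role="staff backend engineer",
    domain="high-throughput payment systems",
    years=12,
    constraints=["cite the exact API you would use",
                 "flag any assumption you make"],
    output_format="a numbered design review",
)
```
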
God of Prompt
RT @alex_prompter: OpenAI and Anthropic engineers leaked these prompt techniques in internal docs.
I've been using insider knowledge from actual AI engineers for 5 months.
These 8 patterns increased my output quality by 200%.
Here's what they don't want you to know: 👇 https://t.co/sAiwkDW5te
Moon Dev
lol most Claude trading bots are total slop
you may wanna bm this one cause I’m actually a quant https://t.co/7IstTBj3gM
God of Prompt
RT @godofprompt: karpathy’s burying the lede with the “10x engineer” question.
the answer is the ratio explodes. but not how people think.
before: 10x engineers were faster at execution. they typed more, debugged quicker, held more state in their head.
after: execution speed converges. a mediocre dev with claude ships code at roughly the same velocity as a senior.
so what’s left? taste. architecture.
knowing what NOT to build. recognizing when the agent is confidently sprinting toward a dead end.
the new 10x engineer isn’t faster.
they’re the one who looks at 1000 lines of agent-generated bloat and says “couldn’t you just do this instead” and cuts it to 100.
that skill doesn’t come from prompting.
it comes from decades of pattern recognition about what good software actually looks like.
the irony: the thing llms are worst at (judgment, pushing back, surfacing tradeoffs) is exactly what becomes the scarcest human skill.
we’re not automating engineering. we’re unbundling it. separating execution from taste.
and discovering that taste was always the bottleneck, we just couldn’t see it because execution was causing so much noise.
A few random notes from claude coding quite a bit last few weeks.
Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. i.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write... in words. It hurts the ego a bit but the power to operate over software in large "code actions" is just too net useful, especially once you adapt to it, configure it, learn to use it, and wrap your head around what it can and cannot do. This is easily the biggest change to my basic coding workflow in ~2 decades of programming and it happened over the course of a few weeks. I'd expect something similar to be happening to well into double digit percent of engineers out there, while the awareness of it in the general population feels well into low single digit percent.
IDEs/agent swarms/fallibility. Both the "no need for IDE anymore" hype and the "agent swarm" hype are imo too much for right now. The models definitely still make mistakes and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might make. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE.md. Despite all these issues, it is still a net huge improvement and it's very difficult to imagine going back to manual coding. TLDR everyone has their developing flow, my current is a small few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.
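The failure modes listed in the notes above (unchecked assumptions, overcomplication, dead code, side-effect edits) map naturally onto guardrail instructions. A hypothetical CLAUDE.md sketch along those lines, not the author's actual file:

```markdown
# CLAUDE.md — illustrative guardrails (hypothetical, not the author's file)
- Before implementing, state any assumptions you are making; ask if unsure.
- Prefer the simplest construction; do not add abstractions the task doesn't need.
- Remove dead code you create; do not leave unused helpers behind.
- Never change or delete comments or code unrelated to the task at hand.
- When there are meaningful tradeoffs, surface them instead of choosing silently.
```

As the notes say, such instructions only partially help in practice, so they complement rather than replace reviewing the agent's output in an IDE.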
Tenacity. It's so interesting to watch [...]