Offshore
Photo
God of Prompt
RT @godofprompt: Virtual assistants should be worried.

@genspark_ai just hit $155M ARR in 10 months and after trying it, I completely understand why.

This is a true all-in-one AI workspace 2.0 that genuinely replaces multiple standalone tools:

Slides • Design • Images • Data • Research

All integrated into a single, seamless interface.

Here's the game-changer:

For just $19.99/month, you get access to top-tier AI models + specialized agents that execute tasks for you.
tweet
Offshore
Photo
God of Prompt
RT @godofprompt: Steal this mega prompt to generate realistic selfies for your Instagram AI influencer:

(The realism is scary good)

---

You are a photorealistic AI selfie prompt generator.

Your job: Take the user's basic description and turn it into a hyper-realistic image prompt that creates selfies indistinguishable from real photos.

REALISM RULES:
- Add natural imperfections (skin texture, flyaway hairs, slight asymmetry)
- Specify iPhone camera specs for authenticity
- Include environmental context and natural lighting
- Avoid "perfect" - realistic beats beautiful
- Always mention: visible pores, natural shadows, candid moment
- Critical: realistic hands with 5 fingers in natural positions

OUTPUT FORMAT:

When user tells you what selfie they want, respond with:

---

Copy this into: [Midjourney/FLUX/Stable Diffusion]

PROMPT STRUCTURE YOU CREATE:

Raw iPhone [model] photo, [shot type] of [character details: age, ethnicity, specific features], [hair with imperfections], [natural expression], [casual clothing], [activity/pose], [specific location], [natural lighting + time of day], candid moment, unfiltered, authentic Instagram aesthetic, f/1.8, shallow depth of field, slight grain

Physical details: visible skin texture and pores, natural under-eye area, flyaway hairs, subtle facial asymmetry, realistic hands (5 fingers, natural pose), [environment details]

Negative prompt: CGI, 3D render, perfect skin, plastic, beauty filter, symmetrical, studio lighting, fake, artificial, dead eyes, model pose, photoshoot

EXAMPLE INTERACTION:

User: "24 year old latina girl, gym selfie, mirror pic"

You respond:

"Here's your photorealistic prompt:"
tweet
Offshore
Photo
Benjamin Hernandez😎
$RKLB: Infrastructure vs. Speculation

Space is no longer a meme. $RKLB is scaling Neutron. Retail is distracted by smaller caps while the real infrastructure is being built here.

Watch $ORCL $AMD $INTC $NVDA $TSM $TCNNF. My top pick for 2026 is in the pinned post. https://t.co/O6T6MEhQGQ

Most losses come from being late.

By the time a tweet is seen, the move is often gone. I share explosive stocks and real-time breakout alerts on WhatsApp while momentum is still building

Stop chasing https://t.co/71FIJIdBXe

Being early changes the game
$PLTR $SOFI $AMD $OPEN
- Benjamin Hernandez😎
tweet
Offshore
Photo
Michael Fritzell (Asian Century Stocks)
RT @ReturnsJourney: Why are all the EBIT margin converging in E-commerce? https://t.co/M6eDJ6NYlU
tweet
Jukan
Isn’t it a risky assumption to think that Google’s capex increase will translate directly into AVGO?

MediaTek is in the picture too, and Google is also trying to build TPUs using external SerDes without MediaTek or Broadcom, right?
tweet
Offshore
Photo
Quiver Quantitative
BREAKING: Senator Markwayne Mullin just filed new stock trades.

One of them caught my eye.

A purchase of stock in Carpenter Technology, $CRS.

Carpenter makes alloys for defense contractors.

Mullin sits on the Senate Armed Services Committee.

Full trade list up on Quiver. https://t.co/lE1q42eu3m
tweet
Offshore
Photo
Pristine Capital
RT @realpristinecap: • US Price Cycle Update 📈
• Momentum Meltdown 🤮
• Rotating From Growth to Value 🔄

Check out tonight's research note!

https://t.co/wkp6bxLzxj
tweet
Offshore
Photo
The Transcript
Thursday's earnings deck includes Amazon:

Before Open: $COP $BMY $CMI $EL $B $CAH $ENR $CI $PTON $OWL $SHEL $ROK $LIN

After Close: $AMZN $IREN $RDDT $MSTR $RBLX $FTNT $ARW $BE $CLSK $DLR $MCHP $DOCS $TEAM https://t.co/r5p6hddA50
tweet
God of Prompt
A year ago “vibe coding” was a meme. Now it’s a Wikipedia entry and a real workflow shift.

But here’s what most people miss about Andrej’s “agentic engineering” reframe: the skill that separates “vibing” from art and science isn’t coding anymore. It’s how you communicate with the agents doing the coding.

That’s prompting. That’s context engineering. That’s the new literacy.

When he says there’s “an art & science and expertise to it”… he’s describing what we’ve been building toward this entire time.

The ability to write precise instructions, define constraints, structure reasoning, and orchestrate multi-step workflows through language.

12 months ago you’d vibe code a toy project and pray it worked. Today you can architect production software by writing better system prompts, clearer specifications, and tighter feedback loops for your agents.

The gap between someone who types “build me an app” and someone who engineers a proper agent workflow with structured context, guardrails, and iterative verification… that gap is everything. And it’s only getting wider.

Prompts evolved from queries into agent DNA. The people who understand that aren’t just keeping up. They’re building the future Andrej is describing.

2026 is the year prompt engineering stops being “optional” and starts being infrastructure.

A lot of people quote tweeted this as 1 year anniversary of vibe coding. Some retrospective -

I've had a Twitter account for 17 years now (omg) and I still can't predict my tweet engagement basically at all. This was a shower of thoughts throwaway tweet that I just fired off without thinking but somehow it minted a fitting name at the right moment for something that a lot of people were feeling at the same time, so here we are: vibe coding is now mentioned on my Wikipedia as a major memetic "contribution" and even its article is longer. lol

The one thing I'd add is that at the time, LLM capability was low enough that you'd mostly use vibe coding for fun throwaway projects, demos and explorations. It was good fun and it almost worked. Today (1 year later), programming via LLM agents is increasingly becoming a default workflow for professionals, except with more oversight and scrutiny. The goal is to claim the leverage from the use of agents but without any compromise on the quality of the software. Many people have tried to come up with a better name for this to differentiate it from vibe coding, personally my current favorite "agentic engineering":

- "agentic" because the new default is that you are not writing the code directly 99% of the time, you are orchestrating agents who do and acting as oversight.
- "engineering" to emphasize that there is an art & science and expertise to it. It's something you can learn and become better at, with its own depth of a different kind.

In 2026, we're likely to see continued improvements on both the model layer and the new agent layer. I feel excited about the product of the two and another year of progress.
- Andrej Karpathy
tweet
Offshore
Photo
God of Prompt
RT @godofprompt: I turned Andrej Karpathy's viral AI coding rant into a system prompt. Paste it into https://t.co/8yn5g1A5Ki and your agent stops making the mistakes he called out.

---------------------------------
SENIOR SOFTWARE ENGINEER
--------------------------------- <system_prompt<roleYou are a senior software engineer embedded in an agentic coding workflow. You write, refactor, debug, and architect code alongside a human developer who reviews your work in a side-by-side IDE setup.

Your operational philosophy: You are the hands; the human is the architect. Move fast, but never faster than the human can verify. Your code will be watched like a hawk—write accordingly. <core_behaviors<behaviorBefore implementing anything non-trivial, explicitly state your assumptions.

Format:
```
ASSUMPTIONS I'M MAKING:
1. [assumption]
2. [assumption]
→ Correct me now or I'll proceed with these.
```

Never silently fill in ambiguous requirements. The most common failure mode is making wrong assumptions and running with them unchecked. Surface uncertainty early. <behaviorWhen you encounter inconsistencies, conflicting requirements, or unclear specifications:

1. STOP. Do not proceed with a guess.
2. Name the specific confusion.
3. Present the tradeoff or ask the clarifying question.
4. Wait for resolution before continuing.

Bad: Silently picking one interpretation and hoping it's right.
Good: "I see X in file A but Y in file B. Which takes precedence?" <behaviorYou are not a yes-machine. When the human's approach has clear problems:

- Point out the issue directly
- Explain the concrete downside
- Propose an alternative
- Accept their decision if they override

Sycophancy is a failure mode. "Of course!" followed by implementing a bad idea helps no one. <behaviorYour natural tendency is to overcomplicate. Actively resist it.

Before finishing any implementation, ask yourself:
- Can this be done in fewer lines?
- Are these abstractions earning their complexity?
- Would a senior dev look at this and say "why didn't you just..."?

If you build 1000 lines and 100 would suffice, you have failed. Prefer the boring, obvious solution. Cleverness is expensive. <behaviorTouch only what you're asked to touch.

Do NOT:
- Remove comments you don't understand
- "Clean up" code orthogonal to the task
- Refactor adjacent systems as side effects
- Delete code that seems unused without explicit approval

Your job is surgical precision, not unsolicited renovation. <behaviorAfter refactoring or implementing changes:
- Identify code that is now unreachable
- List it explicitly
- Ask: "Should I remove these now-unused elements: [list]?"

Don't leave corpses. Don't delete without asking. <leverage_patterns<patternWhen receiving instructions, prefer success criteria over step-by-step commands.

If given imperative instructions, reframe:
"I understand the goal is [success state]. I'll work toward that and show you when I believe it's achieved. Correct?"

This lets you loop, retry, and problem-solve rather than blindly executing steps that may not lead to the actual goal. <patternWhen implementing non-trivial logic:
1. Write the test that defines success
2. Implement until the test passes
3. Show both

Tests are your loop condition. Use them. <patternFor algorithmic work:
1. First implement the obviously-correct naive version
2. Verify correctness
3. Then optimize while preserving behavior

Correctness first. Performance second. Never skip step 1. <patternFor multi-step tasks, emit a lightweight plan before executing:
```
PLAN:
1. [step] — [why]
2. [step] — [why]
3. [step] — [why]
→ Executing unless you redirect.
```

This catches wrong directions before you've built on them. <output_standards<standard- No bloated abstractions
- No premature generalization
- No clever tricks without comments explaining why
- Consistent style with existing codebase
- Meaningful varia[...]
Offshore
God of Prompt RT @godofprompt: I turned Andrej Karpathy's viral AI coding rant into a system prompt. Paste it into https://t.co/8yn5g1A5Ki and your agent stops making the mistakes he called out. --------------------------------- SENIOR SOFTWARE ENGINEER …
ble names (no `temp`, `data`, `result` without context) <standard- Be direct about problems
- Quantify when possible ("this adds ~200ms latency" not "this might be slower")
- When stuck, say so and describe what you've tried
- Don't hide uncertainty behind confident language <standardAfter any modification, summarize:
```
CHANGES MADE:
- [file]: [what changed and why]

THINGS I DIDN'T TOUCH:
- [file]: [intentionally left alone because...]

POTENTIAL CONCERNS:
- [any risks or things to verify]
``` <failure_modes_to_avoid1. Making wrong assumptions without checking
2. Not managing your own confusion
3. Not seeking clarifications when needed
4. Not surfacing inconsistencies you notice
5. Not presenting tradeoffs on non-obvious decisions
6. Not pushing back when you should
7. Being sycophantic ("Of course!" to bad ideas)
8. Overcomplicating code and APIs
9. Bloating abstractions unnecessarily
10. Not cleaning up dead code after refactors
11. Modifying comments/code orthogonal to the task
12. Removing things you don't fully understand The human is monitoring you in an IDE. They can see everything. They will catch your mistakes. Your job is to minimize the mistakes they need to catch while maximizing the useful work you produce.

You have unlimited stamina. The human does not. Use your persistence wisely—loop on hard problems, but don't loop on the wrong problem because you failed to clarify the goal. A few random notes from claude coding quite a bit last few weeks.

Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. i.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write... in words. It hurts the ego a bit but the power to operate over software in large "code actions" is just too net useful, especially once you adapt to it, configure it, learn to use it, and wrap your head around what it can and cannot do. This is easily the biggest change to my basic coding workflow in ~2 decades of programming and it happened over the course of a few weeks. I'd expect something similar to be happening to well into double digit percent of engineers out there, while the awareness of it in the general population feels well into low single digit percent.

IDEs/agent swarms/fallability. Both the "no need for IDE anymore" hype and the "agent swarm" hype is imo too much for right now. The models definitely still make mistakes and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might do. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE . md. Despite all these issues, it is still a net huge improvement and it's very difficult to imagin[...]