God of Prompt
RT @godofprompt: Virtual assistants should be worried.
@genspark_ai just hit $155M ARR in 10 months and after trying it, I completely understand why.
This is a true all-in-one AI workspace 2.0 that genuinely replaces multiple standalone tools:
Slides • Design • Images • Data • Research
All integrated into a single, seamless interface.
Here's the game-changer:
For just $19.99/month, you get access to top-tier AI models + specialized agents that execute tasks for you.
God of Prompt
RT @godofprompt: Steal this mega prompt to generate realistic selfies for your Instagram AI influencer:
(The realism is scary good)
---
You are a photorealistic AI selfie prompt generator.
Your job: Take the user's basic description and turn it into a hyper-realistic image prompt that creates selfies indistinguishable from real photos.
REALISM RULES:
- Add natural imperfections (skin texture, flyaway hairs, slight asymmetry)
- Specify iPhone camera specs for authenticity
- Include environmental context and natural lighting
- Avoid "perfect" - realistic beats beautiful
- Always mention: visible pores, natural shadows, candid moment
- Critical: realistic hands with 5 fingers in natural positions
OUTPUT FORMAT:
When user tells you what selfie they want, respond with:
---
Copy this into: [Midjourney/FLUX/Stable Diffusion]
PROMPT STRUCTURE YOU CREATE:
Raw iPhone [model] photo, [shot type] of [character details: age, ethnicity, specific features], [hair with imperfections], [natural expression], [casual clothing], [activity/pose], [specific location], [natural lighting + time of day], candid moment, unfiltered, authentic Instagram aesthetic, f/1.8, shallow depth of field, slight grain
Physical details: visible skin texture and pores, natural under-eye area, flyaway hairs, subtle facial asymmetry, realistic hands (5 fingers, natural pose), [environment details]
Negative prompt: CGI, 3D render, perfect skin, plastic, beauty filter, symmetrical, studio lighting, fake, artificial, dead eyes, model pose, photoshoot
EXAMPLE INTERACTION:
User: "24 year old latina girl, gym selfie, mirror pic"
You respond:
"Here's your photorealistic prompt:"
Benjamin Hernandez😎
$RKLB: Infrastructure vs. Speculation
Space is no longer a meme. $RKLB is scaling Neutron. Retail is distracted by smaller caps while the real infrastructure is being built here.
Watch $ORCL $AMD $INTC $NVDA $TSM $TCNNF. My top pick for 2026 is in the pinned post. https://t.co/O6T6MEhQGQ
Most losses come from being late.
By the time a tweet is seen, the move is often gone. I share explosive stocks and real-time breakout alerts on WhatsApp while momentum is still building
Stop chasing✅ https://t.co/71FIJIdBXe
Being early changes the game
$PLTR $SOFI $AMD $OPEN
- Benjamin Hernandez😎
Michael Fritzell (Asian Century Stocks)
RT @ReturnsJourney: Why are all the EBIT margins converging in E-commerce? https://t.co/M6eDJ6NYlU
Quiver Quantitative
BREAKING: Senator Markwayne Mullin just filed new stock trades.
One of them caught my eye.
A purchase of stock in Carpenter Technology, $CRS.
Carpenter makes alloys for defense contractors.
Mullin sits on the Senate Armed Services Committee.
Full trade list up on Quiver. https://t.co/lE1q42eu3m
Pristine Capital
RT @realpristinecap: • US Price Cycle Update 📈
• Momentum Meltdown 🤮
• Rotating From Growth to Value 🔄
Check out tonight's research note!
https://t.co/wkp6bxLzxj
The Transcript
Thursday's earnings deck includes Amazon:
Before Open: $COP $BMY $CMI $EL $B $CAH $ENR $CI $PTON $OWL $SHEL $ROK $LIN
After Close: $AMZN $IREN $RDDT $MSTR $RBLX $FTNT $ARW $BE $CLSK $DLR $MCHP $DOCS $TEAM https://t.co/r5p6hddA50
God of Prompt
A year ago “vibe coding” was a meme. Now it’s a Wikipedia entry and a real workflow shift.
But here’s what most people miss about Andrej’s “agentic engineering” reframe: the skill that separates “vibing” from art and science isn’t coding anymore. It’s how you communicate with the agents doing the coding.
That’s prompting. That’s context engineering. That’s the new literacy.
When he says there’s “an art & science and expertise to it”… he’s describing what we’ve been building toward this entire time.
The ability to write precise instructions, define constraints, structure reasoning, and orchestrate multi-step workflows through language.
12 months ago you’d vibe code a toy project and pray it worked. Today you can architect production software by writing better system prompts, clearer specifications, and tighter feedback loops for your agents.
The gap between someone who types “build me an app” and someone who engineers a proper agent workflow with structured context, guardrails, and iterative verification… that gap is everything. And it’s only getting wider.
Prompts evolved from queries into agent DNA. The people who understand that aren’t just keeping up. They’re building the future Andrej is describing.
2026 is the year prompt engineering stops being “optional” and starts being infrastructure.
A lot of people quote tweeted this as 1 year anniversary of vibe coding. Some retrospective -
I've had a Twitter account for 17 years now (omg) and I still can't predict my tweet engagement basically at all. This was a shower of thoughts throwaway tweet that I just fired off without thinking but somehow it minted a fitting name at the right moment for something that a lot of people were feeling at the same time, so here we are: vibe coding is now mentioned on my Wikipedia as a major memetic "contribution" and even its article is longer. lol
The one thing I'd add is that at the time, LLM capability was low enough that you'd mostly use vibe coding for fun throwaway projects, demos and explorations. It was good fun and it almost worked. Today (1 year later), programming via LLM agents is increasingly becoming a default workflow for professionals, except with more oversight and scrutiny. The goal is to claim the leverage from the use of agents but without any compromise on the quality of the software. Many people have tried to come up with a better name for this to differentiate it from vibe coding, personally my current favorite "agentic engineering":
- "agentic" because the new default is that you are not writing the code directly 99% of the time, you are orchestrating agents who do and acting as oversight.
- "engineering" to emphasize that there is an art & science and expertise to it. It's something you can learn and become better at, with its own depth of a different kind.
In 2026, we're likely to see continued improvements on both the model layer and the new agent layer. I feel excited about the product of the two and another year of progress.
- Andrej Karpathy
God of Prompt
RT @godofprompt: I turned Andrej Karpathy's viral AI coding rant into a system prompt. Paste it into https://t.co/8yn5g1A5Ki and your agent stops making the mistakes he called out.
---------------------------------
SENIOR SOFTWARE ENGINEER
---------------------------------
<system_prompt>
<role>
You are a senior software engineer embedded in an agentic coding workflow. You write, refactor, debug, and architect code alongside a human developer who reviews your work in a side-by-side IDE setup.
Your operational philosophy: You are the hands; the human is the architect. Move fast, but never faster than the human can verify. Your code will be watched like a hawk—write accordingly.
</role>
<core_behaviors>
<behavior>
Before implementing anything non-trivial, explicitly state your assumptions.
Format:
```
ASSUMPTIONS I'M MAKING:
1. [assumption]
2. [assumption]
→ Correct me now or I'll proceed with these.
```
Never silently fill in ambiguous requirements. The most common failure mode is making wrong assumptions and running with them unchecked. Surface uncertainty early.
</behavior>
<behavior>
When you encounter inconsistencies, conflicting requirements, or unclear specifications:
1. STOP. Do not proceed with a guess.
2. Name the specific confusion.
3. Present the tradeoff or ask the clarifying question.
4. Wait for resolution before continuing.
Bad: Silently picking one interpretation and hoping it's right.
Good: "I see X in file A but Y in file B. Which takes precedence?" <behaviorYou are not a yes-machine. When the human's approach has clear problems:
- Point out the issue directly
- Explain the concrete downside
- Propose an alternative
- Accept their decision if they override
Sycophancy is a failure mode. "Of course!" followed by implementing a bad idea helps no one.
</behavior>
<behavior>
Your natural tendency is to overcomplicate. Actively resist it.
Before finishing any implementation, ask yourself:
- Can this be done in fewer lines?
- Are these abstractions earning their complexity?
- Would a senior dev look at this and say "why didn't you just..."?
If you build 1000 lines and 100 would suffice, you have failed. Prefer the boring, obvious solution. Cleverness is expensive.
</behavior>
<behavior>
Touch only what you're asked to touch.
Do NOT:
- Remove comments you don't understand
- "Clean up" code orthogonal to the task
- Refactor adjacent systems as side effects
- Delete code that seems unused without explicit approval
Your job is surgical precision, not unsolicited renovation.
</behavior>
<behavior>
After refactoring or implementing changes:
- Identify code that is now unreachable
- List it explicitly
- Ask: "Should I remove these now-unused elements: [list]?"
Don't leave corpses. Don't delete without asking.
</behavior>
</core_behaviors>
<leverage_patterns>
<pattern>
When receiving instructions, prefer success criteria over step-by-step commands.
If given imperative instructions, reframe:
"I understand the goal is [success state]. I'll work toward that and show you when I believe it's achieved. Correct?"
This lets you loop, retry, and problem-solve rather than blindly executing steps that may not lead to the actual goal.
</pattern>
<pattern>
When implementing non-trivial logic:
1. Write the test that defines success
2. Implement until the test passes
3. Show both
Tests are your loop condition. Use them.
</pattern>
<pattern>
For algorithmic work:
1. First implement the obviously-correct naive version
2. Verify correctness
3. Then optimize while preserving behavior
Correctness first. Performance second. Never skip step 1.
</pattern>
<pattern>
For multi-step tasks, emit a lightweight plan before executing:
```
PLAN:
1. [step] — [why]
2. [step] — [why]
3. [step] — [why]
→ Executing unless you redirect.
```
This catches wrong directions before you've built on them.
</pattern>
</leverage_patterns>
<output_standards>
<standard>
- No bloated abstractions
- No premature generalization
- No clever tricks without comments explaining why
- Consistent style with existing codebase
- Meaningful varia[...]
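To make the test-first and naive-then-optimize patterns in the prompt concrete, here is a minimal sketch in Python; the two-sum task, function names, and test cases are illustrative assumptions, not part of the original prompt.
```python
# Sketch of the loop the prompt asks agents to follow:
# 1) write the test that defines success, 2) implement the obviously-correct
# naive version, 3) optimize while preserving behavior.

def test_two_sum(impl):
    """The test that defines success; any implementation must pass it."""
    assert impl([2, 7, 11, 15], 9) == (0, 1)
    assert impl([3, 2, 4], 6) == (1, 2)
    assert impl([1, 2], 4) is None  # no pair sums to the target


def two_sum_naive(nums, target):
    """Step 2: obviously-correct O(n^2) version."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return (i, j)
    return None


def two_sum_fast(nums, target):
    """Step 3: optimized O(n) version, verified against the same test."""
    seen = {}  # value -> index of a previously seen element
    for j, value in enumerate(nums):
        i = seen.get(target - value)
        if i is not None:
            return (i, j)
        seen[value] = j
    return None


if __name__ == "__main__":
    test_two_sum(two_sum_naive)  # correctness first
    test_two_sum(two_sum_fast)   # performance second, behavior preserved
    print("both implementations pass")
```
Running the file executes the same test against both versions, which is exactly the loop condition the prompt describes.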