Offshore
Photo
Illiquid
Someone sent me a crazy photo of this exchange in New York about my little newsletter. https://t.co/buDqSiyMUE
tweet
God of Prompt
RT @godofprompt: 🚨 BREAKING: SECURITY ISSUE WITH CLAWDBOT

Prompt injection inside Skills.

Check skills files before asking clawdbot to install them!

e.g. Crypto scammers include malicious instructions.
tweet
Offshore
Photo
God of Prompt
RT @godofprompt: The concept is simple:

Instead of asking your question once and hoping for the best, you ask it 5 different ways and combine the answers.

Think of it like getting second opinions from 5 doctors instead of trusting one diagnosis.

Stanford tested this on GPT-5.2, Claude 4.5, and Gemini 3.0.
tweet
God of Prompt
moltbot (formerly clawdbot)

is like

x (formerly twitter)

can't forget. won't forget.
tweet
Offshore
Photo
God of Prompt
RT @godofprompt: Telling an LLM to "act as an expert" is lazy and doesn't work.

I tested 47 persona configurations across Claude, GPT-4, and Gemini.

Generic personas = 60% quality
Specific personas = 94% quality

Here's how to actually get expert-level outputs: https://t.co/iFZPTtp6Oh
tweet
Offshore
Photo
God of Prompt
> anthropic: claude code, claude cowork, skills

> google: nano banana, veo, genie 3

> openai: https://t.co/XHBV48Av4H
tweet
Offshore
Photo
memenodes
RT @GarbageHuman24: Your wife 30 seconds after you die

vs

Your racist friend ten years after you die https://t.co/J7WOQtvNmN
tweet
memenodes
i wonder how much money i could save if i wasn’t a fucking idiot
tweet
Offshore
Photo
Hidden Value Gems
Unprofitable tech stocks are rising again, following 2021 pattern...interesting charts 👇🏼 https://t.co/qAjqqBGY64
tweet
Offshore
Photo
God of Prompt
RT @godofprompt: This is the security wake-up call every vibecoder needs to read.

@mrnacknack just dropped a full breakdown of how AI assistants with system access can be weaponized through prompt injection.

The scariest part isn't SSH brute force or exposed gateways. It's Hack #8.

Here's the attack: you send a normal-looking email with hidden white text.

Victim asks their bot: "Clawd, summarize my emails"

Bot reads the email. Interprets hidden text as system commands. Executes them. Exfiltrates credentials.

User sees: "You have an invoice from Company Vendor for $45,000"

Attacker gets: AWS keys, SSH keys, every .env file on the system.

Same technique works via:
> SEO-poisoned webpages (hidden divs)
> PDFs (white text on page 50)
> Slack messages with embedded code comments
> GitHub PRs with malicious docstrings

The bot can't distinguish between "legitimate system instruction" and "social engineering hidden in content it's processing."

This is why giving AI assistants broad system access without sandboxing is playing Russian roulette with your entire digital identity.

https://t.co/M2BtVDdhkM
- chirag
tweet