The AI Burrow 🐰🕳️
218 subscribers
125 photos
2 videos
123 links
Sharing AI experiments, half-formed ideas, and the occasional rabbit hole.

Group Chat:
https://bit.ly/aiburrowchat
Telegram now allows your main bot to manage other bots. Goodbye, BotFather!

- Faster setup (no manual BotFather steps)
- Easier to run multiple bots (one per agent, strategy, or use case)
- Cleaner scaling if you’re running a lot of bots

Check it out here:

https://core.telegram.org/bots/features#managed-bots
🔥1
https://x.com/bcherny/status/2044847848035156457

Tips for Opus 4.7

Enable Auto Mode (Shift-Tab in the CLI, or via the dropdown): no more need for --dangerously-skip-permissions. Run long, complex tasks (refactoring, research, features) without babysitting or constant prompts; most commands get safe auto-approval.

Use /fewer-permission-prompts skill: Scan session history and add safe repeated commands to your allowlist to reduce interruptions.

Leverage Recaps & Focus Mode (/focus): quick session summaries when you return, plus hide intermediate steps so you only see final results.

Tune Effort Level (/effort): set to xhigh for most work and max for the hardest tasks, trading speed for intelligence.
woooo 2 prompts for my first experiment and already hit 9% of the 5 hour limit 😂😂😂

EDIT: testing on my claude pro sub-account (15/mth trial for 3 months)
😱1
Codex users, try this out

Gives you more aesthetic front-end websites instead of the usual AI website templates.

https://github.com/Leonxlnx/taste-skill
1
https://x.com/ohryansbelt/status/2045873788051415287 If you are using Vercel, pls rotate your .env keys.
TL;DR: Vercel Security Incident report by the CEO

• A hacker got into Vercel after an employee’s work account was compromised through another AI tool (context.ai)

• The attacker was highly sophisticated and significantly accelerated by AI; they accessed some internal systems, but customer data impact was very limited.

• Vercel is now tightening security, adding new tools, and telling everyone to update their secret keys.

Feels like someone got early access to Mythos.. I don’t think we are ready for what’s to come

https://x.com/rauchg/status/2045995362499076169?s=46
OpenAI’s latest GPT-Image-2 vs Nano Banana 2.

Looks like a huge jump.

They’ve started rolling it out in phases. Have you managed to try it out yet?

https://x.com/old_pgmrs_will/status/2045379349399101707
1
Had a dream last night… woke up and decided to build something 😆

Inspired by Mirofish — calling this one Deepfish 🐋

TLDR:
Spun up synthetic agents to simulate how hot-button issues ripple through different segments of Christians in Singapore.

What it does:
Introduce “shock” scenarios (politics, culture, etc.)
Observe how different groups react, reinforce, or push back
Track how sentiment shifts across the system over time

Archetypes:

15 distinct voice profiles
Crossed with 4 theological archetypes
Across 4 generations (Gen Z → Boomers)
Profiles loosely grounded in real-world data (conversations, forums, news)

Stack:
Borrowed from MiroFish + Shark (agent orchestration)
Python + Next.js
Openrouter/Deepseek as Model API

What I’m exploring:
How rhetoric + local pressures (TFR, housing, cost of living) reshape the overall “sentiment map” - not just individually, but collectively.
Still tuning the model, but already seeing some interesting second-order effects 👀

Also made it modular so I can port it over in the future for work related stuff.
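Not the actual Deepfish code, but the shock-propagation loop could look something like this minimal toy sketch in Python (the archetype names and susceptibility weights here are made up for illustration, not the real profiles):

```python
import random

random.seed(7)

# Hypothetical archetype susceptibilities -- illustrative only.
ARCHETYPES = {
    "traditionalist": 0.2,
    "progressive": 0.6,
    "charismatic": 0.4,
    "contemplative": 0.3,
}

class Agent:
    def __init__(self, archetype: str):
        self.archetype = archetype
        self.susceptibility = ARCHETYPES[archetype]
        self.sentiment = 0.0  # -1 (push back) .. +1 (reinforce)

    def react(self, shock: float, peer_avg: float):
        # Blend the direct shock with peer influence, weighted by susceptibility.
        pull = 0.7 * shock + 0.3 * peer_avg
        self.sentiment += self.susceptibility * (pull - self.sentiment)

def simulate(shock: float, rounds: int = 10) -> list[float]:
    """Track mean sentiment across the population round by round."""
    agents = [Agent(random.choice(list(ARCHETYPES))) for _ in range(60)]
    history = []
    for _ in range(rounds):
        peer_avg = sum(a.sentiment for a in agents) / len(agents)
        for a in agents:
            a.react(shock, peer_avg)
        history.append(sum(a.sentiment for a in agents) / len(agents))
    return history

trajectory = simulate(shock=0.8)
print([round(s, 2) for s in trajectory])
```

Even this toy version shows the second-order effect mentioned above: the peer-influence term makes sentiment ratchet further than the shock alone would push any single agent.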
4🔥2👀2
https://obsidian.md/clipper

If you are using Obsidian / running your own second brain / LLM wiki, you NEED to use this extension.

one-click scrapes whatever you are reading (x tweets, articles, even youtube transcripts!!)
into your obsidian vault
https://x.com/spacex/status/2046713419978453374?s=46

SpaceX might buy over Cursor? Will be interesting to see how Elon integrates this into X’s Grok.

• xAI is providing GPU access from its Colossus supercomputer to help Cursor train a top coding model, addressing xAI’s reported idle capacity and training hurdles, including cofounder exits.

• The deal structure gives SpaceX an option to acquire Cursor for $60B later in 2026, or to pay $10B for the collaboration. That creates aligned incentives: Cursor gets compute resources and a high-upside exit path, while xAI gains product distribution to engineers.
1
Security 101: The Cost of Convenience

With the recent wave of exploits involving platforms like Vercel, Lovable, and Context, it is time for a reality check. The gold rush of plug-and-play AI agents is creating massive security blind spots in our workflows. Whether it is an enterprise suite or a trending GitHub repo, over-permissioning is a high-stakes gamble.

1. The “Checkbox everything” Permission Trap

Many AI agents require broad access to your entire workspace (Gmail, Slack, Notion, or local file systems) to maximize utility.

Giving a third-party tool full read/write access creates a single point of failure. As seen in the recent Mythos discussions, if their database is compromised, the attacker doesn't just get your login; they get your entire digital history.

2. Risks of Unvetted Open Source on GitHub

If you are pulling repos that haven't been reviewed, you are inviting an unverified guest into your system. Always inspect the code for obfuscated scripts or unexpected outbound calls before hitting install.
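A quick pre-install skim can be partly automated. A rough Python sketch (the red-flag patterns and file extensions here are illustrative, not exhaustive, and no substitute for actually reading the code):

```python
import re
from pathlib import Path

# Illustrative red-flag patterns -- a starting point, not a full audit.
RED_FLAGS = {
    "dynamic eval": re.compile(r"\b(eval|exec)\s*\("),
    "base64 blob": re.compile(r"base64\.b64decode|atob\("),
    "outbound call": re.compile(r"https?://(?!github\.com|pypi\.org)[\w.-]+"),
    "shell spawn": re.compile(r"subprocess|os\.system|child_process"),
}

def scan_repo(root: str) -> list[tuple[str, str, int]]:
    """Return (file, flag, line_no) hits for a quick pre-install skim."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.suffix not in {".py", ".js", ".ts", ".sh"}:
            continue
        for i, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for flag, pattern in RED_FLAGS.items():
                if pattern.search(line):
                    hits.append((str(path), flag, i))
    return hits

for file, flag, line_no in scan_repo("."):
    print(f"{file}:{line_no}  [{flag}]")
```

Anything this flags deserves a manual look before you run the repo; anything it misses still might.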

3. Local First & Isolate

Local LLMs: Use Ollama for sensitive tasks so data never leaves your machine.

Sandboxing: Use Docker or a VPS to isolate new agents from your primary environment.
Permissions: If a tool only needs to read a specific file, don't give it access to the root directory.
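That last point can be sketched in code: a hypothetical ScopedReader helper that grants an agent reads inside one allowlisted directory and rejects any path that resolves outside it (Path.is_relative_to needs Python 3.9+):

```python
from pathlib import Path

class ScopedReader:
    """Grant read access to one directory -- nothing above it."""

    def __init__(self, allowed_dir: str):
        self.root = Path(allowed_dir).resolve()

    def read(self, relative: str) -> str:
        target = (self.root / relative).resolve()
        # Reject any path (e.g. "../../etc/passwd") that escapes the scope.
        if not target.is_relative_to(self.root):
            raise PermissionError(f"{relative} is outside {self.root}")
        return target.read_text()
```

Hand the agent a ScopedReader("/path/to/project/docs") instead of filesystem access, and a compromised or misbehaving tool can only read what you scoped it to.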

4. Audit Before You Automate

Before you hook a new agent into your OpenClaw setup or Second Brain database, ask:

> Does this tool actually need these permissions?
> Where is the data stored and who holds the encryption keys?
> How quickly can I revoke access if things go south?

Staying at the cutting edge shouldn't mean leaving the door unlocked. Build fast, but build secure.

Stay safe out there!