https://x.com/ohryansbelt/status/2045873788051415287
if you are using vercel, pls rotate your .env keys.
X (formerly Twitter)
Ryan (@ohryansbelt) on X
Someone on BreachForums claiming to be ShinyHunters is selling what they say is Vercel's internal database, access keys, and source code for $2M. ShinyHunters is a black-hat hacker group known for a significant number of breaches and a "pay or leak" model.…
The AI Burrow 🐰🕳️
https://x.com/ohryansbelt/status/2045873788051415287 if you are using vercel, pls rotate your .env keys.
sunday night - becoming tech support 😭😭😭
The AI Burrow 🐰🕳️
https://x.com/ohryansbelt/status/2045873788051415287 if you are using vercel, pls rotate your .env keys.
TL;DR: Vercel Security Incident report by the CEO
• A hacker got into Vercel after an employee’s work account was compromised through another AI tool (context.ai)
• The attacker was highly sophisticated and significantly accelerated by AI; they accessed some internal systems, but customer data impact was very limited.
• Vercel is now tightening security, adding new tools, and telling everyone to update their secret keys.
Feels like someone got early access to Mythos… I don’t think we are ready for what’s to come
https://x.com/rauchg/status/2045995362499076169?s=46
Guillermo Rauch (@rauchg) on X
Here's my update to the broader community about the ongoing incident investigation. I want to give you the rundown of the situation directly.
A Vercel employee got compromised via the breach of an AI platform customer called https://t.co/xksNNigVfE that…
OpenAI’s latest GPT-Image-2 vs Nano Banana 2.
Looks like a huge jump.
They’ve started rolling it out in phases. Have you managed to try it out yet?
https://x.com/old_pgmrs_will/status/2045379349399101707
❤1
https://x.com/alchainhust/status/2046192558666342720?s=46
The Chinese reverse-engineered Claude Design: open source, usable with any model, out tomorrow!
花叔 (@AlchainHust) on X
I reverse-engineered Claude Design!
Tomorrow at 12:00 I’m officially open-sourcing Huashu Design for free. Here’s the teaser 👇
https://x.com/weezerOSINT/status/2046170666131669027
If you’ve used lovable, be careful.
With a free account, someone was able to access all your source code, database credentials, AI chat history, and customer data. The funny part is that it was reported more than a month ago and still isn’t fixed.
impulsive (@weezerOSINT) on X
Lovable has a mass data breach affecting every project created before November 2025.
I made a lovable account today and was able to access another users source code, database credentials, AI chat histories, and customer data are all readable by any free…
Had a dream last night… woke up and decided to build something 😆
Inspired by Mirofish — calling this one Deepfish 🐋
TLDR:
Spun up synthetic agents to simulate how hot-button issues ripple through different segments of Christians in Singapore.
What it does:
Introduce “shock” scenarios (politics, culture, etc.)
Observe how different groups react, reinforce, or push back
Track how sentiment shifts across the system over time
Archetypes:
15 distinct voice profiles
Crossed with 4 theological archetypes
Across 4 generations (Gen Z → Boomers)
Profiles loosely grounded in real-world data (conversations, forums, news)
Stack:
Borrowed from MiroFish + Shark (agent orchestration)
Python + Next.js
OpenRouter/DeepSeek as the model API
What I’m exploring:
How rhetoric + local pressures (TFR, housing, cost of living) reshape the overall “sentiment map” - not just individually, but collectively.
Still tuning the model, but already seeing some interesting second-order effects 👀
Also made it modular so I can port it over in the future for work-related stuff.
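The shock → react → track loop above can be sketched as a toy numeric model (everything here is hypothetical for illustration — the actual Deepfish stack uses LLM-backed agents on MiroFish/Shark orchestration, not scalar sentiment):

```python
class Agent:
    """Toy stand-in for one voice profile / archetype."""
    def __init__(self, name: str, openness: float):
        self.name = name
        self.openness = openness   # 0.0 = immovable, 1.0 = fully swayed by peers
        self.sentiment = 0.0       # -1 (pushback) .. +1 (reinforcement)

def apply_shock(agents: list[Agent], magnitude: float) -> None:
    """Inject a 'shock' scenario; each agent absorbs it per their openness."""
    for a in agents:
        a.sentiment += magnitude * a.openness

def step(agents: list[Agent]) -> float:
    """One round of mutual influence: agents drift toward the group mean."""
    mean = sum(a.sentiment for a in agents) / len(agents)
    for a in agents:
        a.sentiment += a.openness * (mean - a.sentiment)
    return mean

# Two hypothetical archetypes reacting to the same shock
agents = [Agent("gen-z progressive", 1.0), Agent("boomer traditional", 0.0)]
apply_shock(agents, 0.5)
step(agents)
print([round(a.sentiment, 2) for a in agents])  # [0.25, 0.0]
```

Even this crude version shows where second-order effects come from: low-openness agents anchor the group mean, which then pulls the high-openness agents back after the initial shock.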
❤4🔥2👀2
The AI Burrow 🐰🕳️
OpenAI’s latest GPT-Image-2 vs Nano Banana 2. Looks like a huge jump. They’ve started rolling it out in phases. Have you managed to try it out yet? https://x.com/old_pgmrs_will/status/2045379349399101707
Mark Kretschmann (@mark_k) on X
GPT-Image-2 is rolling out right now to all ChatGPT accounts. @OpenAI
Try it right now in @ChatGPTapp. It's pretty great, although the resolution is unfortunately quite low.
Enjoy ! 🫡
https://obsidian.md/clipper
If you are using Obsidian or running your own second brain / LLM wiki, you NEED to use this.
One-click scrapes whatever you are reading (X tweets, articles, even YouTube transcripts!!) into your Obsidian vault.
Obsidian
Obsidian Web Clipper
Highlight and capture web pages in your favorite browser. Save anything and everything with just one click.
https://x.com/spacex/status/2046713419978453374?s=46
SpaceX might buy over Cursor? Will be interesting to see how Elon integrates this into xAI’s Grok.
• xAI providing GPU access from its Colossus supercomputer to help Cursor train a top coding model, addressing xAI’s reported idle capacity and training hurdles, including cofounder exits.
• The deal structure gives SpaceX an option to acquire Cursor for $60B later in 2026 or pay $10B for the collaboration, creating aligned incentives: Cursor gets compute resources and a high-upside exit path, while xAI gains product distribution to engineers.
SpaceX (@SpaceX) on X
SpaceXAI and @cursor_ai are now working closely together to create the world’s best coding and knowledge work AI.
The combination of Cursor’s leading product and distribution to expert software engineers with SpaceX’s million H100 equivalent Colossus training…
❤1
https://x.com/poezhao0605/status/2046747127309836329?s=46
Anthropic might have removed Claude Code for Pro plan users.
Get ready lads, $200/mth is going to be the new normal for frontier models.
Poe Zhao (@poezhao0605) on X
Anthropic quietly removed Claude Code from the $20 Pro plan. You now need Max ($100/month or higher) to access it.
This follows the OpenClaw crackdown two weeks ago. Same logic. Flat-rate subscriptions cannot absorb agent workloads that burn 10x to 100x more…
Security 101: The Cost of Convenience
With the recent wave of exploits involving platforms like Vercel, Lovable, and Context, it is time for a reality check. The gold rush of plug-and-play AI agents is creating massive security blind spots in our workflows. Whether it is an enterprise suite or a trending GitHub repo, over-permissioning is a high-stakes gamble.
1. The “Checkbox everything” Permission Trap
Many AI agents require broad access to your entire workspace (Gmail, Slack, Notion, or local file systems) to maximize utility.
Giving a third-party tool full read/write access creates a single point of failure. As seen in the recent Mythos discussions, if their database is compromised, the attacker doesn't just get your login; they get your entire digital history.
2. Risks of Unvetted Open Source on GitHub
If you are pulling repos that haven't been reviewed, you are inviting an unverified guest into your system. Always inspect the code for obfuscated scripts or unexpected outbound calls before hitting install.
3. Local First & Isolate
Local LLMs: Use Ollama for sensitive tasks so data never leaves your machine.
Sandboxing: Use Docker or a VPS to isolate new agents from your primary environment.
Permissions: If a tool only needs to read a specific file, don't give it access to the root directory.
4. Audit Before You Automate
Before you hook a new agent into your OpenClaw setup or Second Brain database, ask:
> Does this tool actually need these permissions?
> Where is the data stored and who holds the encryption keys?
> How quickly can I revoke access if things go south?
Staying at the cutting edge shouldn't mean leaving the door unlocked. Build fast, but build secure.
Stay safe out there!
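On the permissions point, here's a minimal sketch of "read only what you need" in Python — the allowlist directory and helper name are made up for illustration, not from any specific tool:

```python
from pathlib import Path

# Hypothetical allowlist: the ONE directory this agent is permitted to read
ALLOWED_ROOTS = [Path("/tmp/agent-workspace").resolve()]

def can_read(path: str) -> bool:
    """Reject anything outside the allowlist, including ../ escapes and symlinks."""
    p = Path(path).resolve()  # normalizes '..' and resolves symlinks before checking
    return any(p == root or root in p.parents for root in ALLOWED_ROOTS)

print(can_read("/tmp/agent-workspace/notes.md"))          # inside the allowlist
print(can_read("/tmp/agent-workspace/../../etc/passwd"))  # escape attempt, rejected
```

The resolve-before-check order matters: a naive string prefix check would wave through the `../` escape above.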
Mythos access figured out by a group on Discord 😅😅
https://x.com/joshkale/status/2046774243799511156?s=46
Josh Kale (@JoshKale) on X
Anthropic said Mythos was too dangerous to release. Then four random guys in a Discord gained access on day one by guessing the URL...
This is pretty insane:
→ Group in a private Discord guessed the endpoint from Anthropic's naming conventions
→ They figured…
https://x.com/openai/status/2047008987665809771?s=46
OpenAI is cooking 🧑🍳
Autonomous AI agents that you can use on your Plus/Pro sub.
Each of them is powered by Codex, runs in the cloud, and is shareable!
OpenAI (@OpenAI) on X
Introducing workspace agents in ChatGPT—shared agents that can handle complex tasks and long-running workflows across tools and teams.
https://openai.com/index/introducing-gpt-5-5/
GPT 5.5 is out!
We’re releasing GPT‑5.5, our smartest and most intuitive to use model yet, and the next step toward a new way of getting work done on a computer.
GPT‑5.5 understands what you’re trying to do faster and can carry more of the work itself. It excels at writing and debugging code, researching online, analyzing data, creating documents and spreadsheets, operating software, and moving across tools until a task is finished.
The gains are especially strong in agentic coding, computer use, knowledge work, and early scientific research.
OpenAI
Introducing GPT-5.5
Introducing GPT-5.5, our smartest model yet—faster, more capable, and built for complex tasks like coding, research, and data analysis across tools.
❤3
Vivian Balakrishnan messing around with OpenClaw and Karpathy’s LLM-wiki second brain wasn’t on my 2026 Bingo Card!
“The diplomat who learns to work with AI will have a meaningful edge. I think that edge is now”
Running an Openclaw workshop next week if you need some guidance: last few slots left!
https://luma.com/htvx3vuo
❤4
“AI No longer Optional”
https://www.channelnewsasia.com/singapore/gic-anthropic-claude-artificial-intelligence-tech-leaders-6075611
CNA
AI ‘no longer optional’: GIC, Anthropic woo tech leaders at first Singapore event after recent funding
GIC first invested in the AI startup in September 2025 and most recently led the Series G funding round.