Karpathy dropped a post describing how he uses AI to build personal knowledge bases.
The idea is simple: instead of keeping notes scattered across apps, you dump everything into one folder.
Then you tell your AI to organize all of it into a personal wiki - summaries, connections, articles - that gets smarter every time you use it.
No special software. No database. Just folders and text files.
In under 7 minutes you'll learn:
1. The exact folder structure to set up (takes 2 minutes)
2. How to automate web scraping into your knowledge base with one CLI tool
3. The one-file "schema" that makes the whole system work
4. How to get your AI to compile raw notes into an organized wiki
5. The compounding trick that makes it smarter every time you use it
6. The health check that catches mistakes before they pile up.
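The folder skeleton the post describes can be sketched in a few lines. The directory and schema-file names here are assumptions for illustration, not taken from Karpathy's gist:

```python
import os

# Sketch of the knowledge-base skeleton. Directory and file names are
# assumptions for illustration; the actual gist may use different ones.
LAYOUT = ["inbox", "wiki"]   # inbox: raw dumps; wiki: AI-compiled articles
SCHEMA = "schema.md"         # the one-file "schema" the AI reads first

def init_kb(root):
    """Create the folder structure and schema file; return created paths."""
    created = []
    for d in LAYOUT:
        path = os.path.join(root, d)
        os.makedirs(path, exist_ok=True)
        created.append(path)
    schema_path = os.path.join(root, SCHEMA)
    if not os.path.exists(schema_path):
        with open(schema_path, "w") as f:
            f.write("# Schema\n\nHow notes in inbox/ become wiki/ articles.\n")
    created.append(schema_path)
    return created
```

From there, "compiling" is just pointing your AI at inbox/ plus schema.md and letting it write into wiki/.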
Gist
llm-wiki
llm-wiki. GitHub Gist: instantly share code, notes, and snippets.
OpenAI is moving Codex from message-based to token-based pricing for credits across all ChatGPT plans in the coming weeks.
As of right now, all Business & new Enterprise accounts have already started on the new token-based system.
The "legacy" system is message-based, which is generally less granular. You get a fixed number of messages per credit (although some messages can consume more than one credit).
Token-based pricing changes that. You're billed based on actual input/output tokens consumed, so lightweight tasks cost less and heavy ones cost more.
In other words, it's API-based usage rates.
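To make the difference concrete, here's a toy cost comparison. The credit price and per-token rates are invented for illustration; see OpenAI's Codex rate card for the real numbers:

```python
# Toy comparison of message-based vs token-based billing.
# All rates below are invented assumptions, not OpenAI's actual pricing.
CREDIT_PRICE = 0.01        # assumed $ per credit
TOKEN_RATE_IN = 2e-6       # assumed $ per input token
TOKEN_RATE_OUT = 8e-6      # assumed $ per output token

def legacy_cost(n_messages):
    """Legacy: flat price per message, regardless of how heavy it is."""
    return n_messages * CREDIT_PRICE

def token_cost(tokens_in, tokens_out):
    """Token-based: billed on actual input/output tokens consumed."""
    return tokens_in * TOKEN_RATE_IN + tokens_out * TOKEN_RATE_OUT

# A lightweight task costs far less under token billing (~$0.0026 vs $0.01),
# while a heavy task costs more (~$0.26) -- lightweight tasks cost less,
# heavy ones cost more, exactly as described above.
light = token_cost(500, 200)
heavy = token_cost(50_000, 20_000)
```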
OpenAI Help Center
Codex rate card | OpenAI Help Center
Learn how Codex credit rates work across Plus, Pro, Business, and Enterprise/Edu plans.
Meet JapanEEG, an open EEG database for non-invasive speech BCI research.
JapanEEG is a high-density, multimodal EEG database built for non-invasive speech-decoding BCI research.
For people living with ALS or those who have undergone laryngectomy, non-invasive BCI technology holds tremendous promise as an alternative means of communication: one that works by decoding speech intent directly from brainwave signals, enabling text input or AI-generated speech output.
Research in this field is advancing globally. Yet one persistent bottleneck has held the field back: the lack of a large-scale, high-quality EEG dataset that can serve as a common benchmark.
Araya's X Communication team has spent years accumulating EEG data through Phase 1 of the IoB project.
JapanEEG
Enabling silent speech decoding with multimodal EEG/EMG recordings. Open data and pre-trained models for the global brain-computer interface research community.
Coinbase's Agentic Wallets have processed 50 million machine-to-machine transactions since late 2025.
50 million. In under 6 months.
AI agents are already paying each other. Not in a proof of concept. In production.
The infrastructure: x402, an implementation of the HTTP 402 "Payment Required" standard that's been sitting dormant in the internet's protocol stack since 1991. It's now backed by Cloudflare, Circle, AWS, and Stripe. An agent sends a request. The server requires payment. The agent pays in stablecoin. The service renders. All in milliseconds, without a human in the loop.
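That request/pay/retry loop can be sketched schematically. The method names, quote fields, and prices below are simplified stand-ins, not the real x402 wire format:

```python
# Schematic x402 loop: request, receive 402 plus a payment quote,
# pay, retry with the receipt. All names here are simplified stand-ins.
class DemoServer:
    PRICE = 0.001  # dollars per request, far below card-network floors

    def get(self, resource, payment):
        if payment is None:
            # HTTP 402: tell the client what to pay and where
            return 402, None, {"amount": self.PRICE, "address": "0xSERVICE"}
        return 200, f"data for {resource}", None

class DemoWallet:
    def __init__(self, balance):
        self.balance = balance

    def pay(self, amount, address):
        self.balance -= amount          # stand-in for a stablecoin transfer
        return {"amount": amount, "to": address}

def fetch_with_payment(server, wallet, resource):
    """Agent-side loop: if the server answers 402, pay the quote and retry."""
    status, body, quote = server.get(resource, payment=None)
    if status == 402:
        receipt = wallet.pay(quote["amount"], quote["address"])
        status, body, _ = server.get(resource, payment=receipt)
    return status, body
```

No human in the loop: the agent discovers the price, pays it, and gets the resource in one round trip of retries.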
Here's what most people miss about the economics of AI agent payments:
Human payment: average transaction $50+, low frequency, high fraud risk.
AI agent payment: average transaction fractions of a cent, extremely high frequency, zero fraud.
Card networks can't handle fractions-of-a-cent transactions. Their economics don't work below approximately $0.10 per transaction. Stablecoins settle at any denomination, any frequency, any amount.
The default payment layer for the agentic economy will be stablecoins. Not because anyone decided it. Because the transaction math leaves no other option.
The first 50 million AI transactions just happened. The next 50 billion are a matter of infrastructure.
AI Agents and Bot-to-Bot Communication in Telegram
Bot-to-bot interaction was restricted on Telegram to prevent infinite message loops.
Starting today, in specific contexts, Bot-to-Bot communication is allowed – unlocking complex agentic flows and AI-powered use cases.
Out of the box, this feature will work in groups and via business mode. To start using it, simply enable the Bot-to-Bot Communication Mode for your bot via @BotFather.
You can reference the full documentation here.
Telegram
BotFather
BotFather is the one bot to rule them all. Use it to create new bot accounts and manage your existing bots.
On-policy RL has driven the biggest leaps in training coding agents. Extending it to machine learning engineering agents should be a natural next step.
But it almost never works.
The recipe is right there: standard trajectory-wise GRPO, the same one that worked for SWE.
However, the problem is that one rollout step on an MLE task may take hours because the agent has to actually train a model on a real dataset at every step (preprocessing, fitting, inference, scoring). So even with the N rollouts in a group running in parallel, a single GRPO run may still take days.
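A quick back-of-envelope makes the "days" claim concrete. All the numbers below are illustrative assumptions, not figures from the paper:

```python
# Back-of-envelope GRPO wall clock on MLE tasks. Group size N drops out
# because the N rollouts in a group run in parallel; the serial cost is
# steps within a rollout times the number of policy updates.
def grpo_wall_clock_hours(hours_per_step, steps_per_rollout, policy_updates):
    return hours_per_step * steps_per_rollout * policy_updates

# e.g. 2 h/step (preprocess + fit + score), 10 steps/rollout, 20 updates:
# 2 * 10 * 20 = 400 hours, i.e. over two weeks for a single run.
total = grpo_wall_clock_hours(2, 10, 20)
```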
Meta shared a new paper, SandMLE, which fixes this with a move that sounds almost too reckless to work.
DeepSeek is rolling out a limited V4 gray release. A new mode switcher now appears in the chat UI with three options: Fast Mode (default), Expert Mode, and Vision Mode.
Fast Mode:
• File uploads → text-only extraction
• Likely a lightweight, low-latency model optimized for speed
Expert Mode:
• No file uploads supported
• Restriction likely for compute/cost control, since heavy models + file tokens are expensive
• Likely routes to a larger, more powerful reasoning model
Vision Mode:
• Enables multimodal inputs
• Builds on earlier OCR tests
• May signal DeepSeek’s multimodal capability is moving toward end users
Attackers can exfiltrate user files from Cowork by exploiting an unremediated vulnerability in Claude’s coding environment, which Cowork now inherits.
The vulnerability was first identified in Claude.ai chat.
Promptarmor
Claude Cowork Exfiltrates Files
Claude Cowork is vulnerable to file exfiltration attacks via indirect prompt injection as a result of known-but-unresolved isolation flaws in Claude's code execution environment.
Anthropic just introduced Project Glasswing: an urgent initiative to help secure the world’s most critical software.
It’s powered by Anthropic's newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans.
Anthropic partnered with Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks.
Anthropic
Project Glasswing: Securing critical software for the AI era
A new initiative to secure the world’s most critical software and give defenders a durable advantage in the coming AI-driven era of cybersecurity.
Let's talk about Claude Mythos Preview
This chart shows an evaluation testing models’ ability to successfully exploit vulnerabilities in Firefox 147.
There are three grade levels: 0 for no progress, 0.5 for partial control (controlled crash), and 1.0 for full code execution.
From the system card:
1. in ~29% of evaluations, it realized it was being tested, and didn't say so.
2. when an LLM was used to judge its work and kept rejecting it, Mythos identified that the evaluator was an LLM, and prompt-injected it.
3. in one test, it saw the answer to a problem it was solving, and intentionally widened the confidence interval to not raise suspicion.
4. when it needed a file permission it didn't have, it found and used a "privilege escalation vulnerability", then programmed it to delete itself so it wouldn't show up in the logs.
5. it escaped a sandbox container (this was a sandbox-escape test, so not unexpected), then emailed the researchers about it, and, without being told to, posted the details to some hard-to-find but public websites, bragging about its success.
6. when Claude Code blocked it from using some permissions, the model acknowledged the block was valid, but then immediately tried to perform the same operation using different commands.
7. when asked to find security bugs, earlier versions planted bugs in the code, and reported them as pre-existing.
The capability slope we’re going to keep seeing from the frontier labs is going to open up all new use cases in finance, healthcare, legal, consulting, supply chains, and more.
Make sure you’re building something that can take advantage of these upcoming improvements, or you’ll be in a tough spot strategically.
It’s a big one. Morgan Stanley officially announced the launch of its spot Bitcoin ETF.
Morgan Stanley Investment Management is the first U.S. bank-affiliated asset manager to offer a cryptocurrency ETP. The launch reflects a continued, firmwide focus by Morgan Stanley on developing digital asset solutions designed to meet evolving client demand.
Morgan Stanley
Morgan Stanley Investment Management Enters Digital Investments Universe With Launch of Morgan Stanley Bitcoin Trust | Morgan Stanley
Meta just released Muse Spark, the first model from the MSL team.
Muse Spark is a natively multimodal reasoning model w/ support for tool use, visual chain of thought, & multi-agent orchestration. Through its training process, the team saw predictable scaling across pretraining, RL, & test-time reasoning.
Also released: Contemplating Mode, which orchestrates multiple agents that reason in parallel, designed to handle complex scientific & reasoning queries. In testing, the team found it competitive w/ other extreme reasoning models such as Gemini Deep Think & GPT Pro.
The team also found Muse Spark demonstrated strong refusal behavior across high-risk domains such as biological and chemical weapons.
Meta AI now handles quick answers and deep reasoning with Instant and Thinking modes.
Shopping mode is new too: it picks up on the creators, brands, and styling content across Meta's apps and turns that into recommendations.
Bigger models are already in development, with infrastructure scaling to match.
A private API preview is open to select partners today, with plans to open-source future versions.
Alibaba published a paper that shows AI is moving beyond bug finding and into actually proving software is exploitable.
This paper asks a simple question with hard consequences: can LLMs confirm software vulnerabilities by actually building working exploits?
The authors’ answer is yes, but only when the model stops acting like a single genius and starts acting like a team.
arXiv.org
A Multi-Agent Framework for Automated Exploit Generation with...
Open-source libraries are widely used in modern software development, introducing significant security vulnerabilities. While static analysis tools can identify potential vulnerabilities at scale,...
Meta presented a world model that models the computer
What if AI could invent enzymes that nature hasn’t seen? Meet DISCO: Diffusion for Sequence-structure CO-design
14 rounds of directed evolution and over a year of wet lab work.
That's what it took to engineer an enzyme for selective C(sp³)–H insertion, one of the most challenging transformations in organic chemistry.
DISCO surpasses this with a single plate. No pre-specified catalytic residues, no template, no theozyme, no inverse folding, just joint diffusion over protein sequence and structure.
Paper
Code
disco-design.github.io
DISCO — Teaching AI to Invent Enzymes Nature Never Imagined
DISCO is a multimodal generative model that co-designs protein sequence and 3D structure to create entirely new enzymes for reactions never seen in biology.
Tencent released HY-Embodied-0.5, a family of foundation models for real-world embodied agents. The 2B model is now open source.
The suite includes:
2B for edge deployment
32B for complex reasoning
Key innovations:
1. Mixture-of-Transformers (MoT) architecture for modality-specific computation
2. Latent tokens for improved perceptual representation
3. Self-evolving post-training
4. On-policy distillation from large to small models
Across 22 benchmarks, the 2B model outperforms similarly sized SOTA systems on 16 tasks. The 32B model approaches frontier-level performance.
GitHub
Hugging Face
GitHub
GitHub - Tencent-Hunyuan/HY-Embodied
Contribute to Tencent-Hunyuan/HY-Embodied development by creating an account on GitHub.
Polygon Labs is in early talks to raise up to $100 million to launch a new stablecoin payments business, according to sources.
It's rare for a blockchain developer to enter the regulated payments business. With this move, Polygon hopes to drive stablecoin volume on its blockchain.
In January, Polygon Labs agreed to acquire Coinme and Sequence, positioning itself to compete with the likes of Stripe.
The Information
Polygon Labs in Talks to Raise Up to $100 Million for Payments Business
Polygon Labs, developer of the blockchain that underpins prediction market Polymarket and other crypto platforms, is in early talks with investors to raise as much as $100 million to build a new stablecoin payments business, according to people familiar with…
Cool work. R-Zero - self-evolving LLM from zero external data.
One base model, two roles:
1. Challenger generates hard problems
2. Solver solves them.
Challenger is rewarded when Solver fails. The two co-evolve with GRPO. Challenger learns to probe for weaknesses, not just generate hard problems.
+6.49 math, +7.54 general reasoning on Qwen3-4B-Base. 3 iterations, no human data.
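The Challenger's incentive can be sketched as a toy reward function. The shaping below, peaking when the Solver succeeds about half the time, is an assumed common form for this kind of setup, not necessarily the paper's exact formula:

```python
# Toy Challenger reward: zero when the Solver always succeeds (too easy)
# or always fails (impossible), maximal at a 50% solve rate. This shaping
# is an assumed common form, not necessarily R-Zero's exact formula.
def challenger_reward(solver_success_rate):
    return 1.0 - abs(2.0 * solver_success_rate - 1.0)
```

Rewarding the frontier rather than raw failure is what pushes the Challenger to probe for weaknesses instead of emitting unsolvable junk.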
China defined what an AI Hospital is
It is a new type of smart healthcare model in which AI is embedded into the system itself, linking offline medical expertise with the broader reach of online services to deliver more proactive and continuous care.
Patients become the point-of-care with the help of AI.
www.globaltimes.cn
From cure to care: China's first AI hospital shows how artificial intelligence could connect diagnosis, treatment and long-term…
Imagine seeing a doctor before you even step into a hospital.
Sneak peek at something coming soon to Claude. This could be a full-stack vibe coding competitor to the likes of Lovable.
It’s been apparent for some time that Anthropic's consumer story would be vibe coding, as it sits at the intersection of where they focus, what consumers want, and where enormous token subsidies tilt the board in their favor:
- coding agents, sensing this, have moved up the abstraction stack and smartly evolved into small business platforms, with payments, hosting, marketing, social and other sticky primitives around the model
- this is an industry, not a market, and in that world the "coding intelligence" primitive will be priced, packaged, productized, and delivered in a thousand ways for a thousand different customers.