NVIDIA announced NemoClaw for the OpenClaw agent platform.
NVIDIA NemoClaw installs NVIDIA Nemotron models and the NVIDIA OpenShell runtime in a single command, adding privacy and security controls to run secure, always-on AI assistants.
NVIDIA AI
NVIDIA NemoClaw: Deploy Safer AI Assistants with OpenClaw Safety Guardrails
Deploy always-on, more secure AI assistants. The open source stack adds policy-based privacy and security guardrails to OpenClaw, allowing you to run open models locally for enhanced data safety.
❤3🔥3🥰2💯2🐳1
Mastercard to buy stablecoin startup BVNK for up to $1.8 billion
Mastercard, based in Purchase, New York, earlier this month unveiled a global partnership network with more than 85 digital-asset firms and other crypto-related companies to bridge the gap between traditional and non-traditional payment methods.
The crypto exchange Coinbase acknowledged a potential acquisition of London-based BVNK late last year before the parties terminated the conversations in November.
Bloomberg.com
Mastercard Buys Stablecoin Firm BVNK for Up to $1.8 Billion
Mastercard Inc. said it will acquire the stablecoin infrastructure startup BVNK for as much as $1.8 billion, four months after negotiations between BVNK and Coinbase Global Inc. for a roughly $2 billion deal fell apart.
❤3🔥3💯2
Tether AI released a new version of QVAC Fabric, adding the Cross-Platform BitNet LoRA Framework to enable billion-parameter AI training and inference on consumer GPUs and smartphones.
Key points:
• Runs on iPhone, Android, and desktop
• Up to 90% less memory needed
• Faster performance than traditional setups
• No reliance on NVIDIA GPUs or the cloud
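The "up to 90% less memory" claim is easy to sanity-check with back-of-envelope arithmetic. A rough sketch, assuming BitNet's ternary weights at ~1.58 bits each versus FP16 weights (exact savings depend on activations, embeddings, and framework overhead):

```python
# Rough weight-memory comparison for a 1B-parameter model.
# BitNet b1.58 stores ternary weights {-1, 0, 1}, ~1.58 bits each.
params = 1_000_000_000

fp16_gb = params * 16 / 8 / 1e9      # 16 bits per weight -> 2.0 GB
bitnet_gb = params * 1.58 / 8 / 1e9  # ~1.58 bits per weight -> ~0.2 GB

savings = 1 - bitnet_gb / fp16_gb
print(f"FP16: {fp16_gb:.2f} GB, BitNet: {bitnet_gb:.2f} GB, saved: {savings:.0%}")
```

The weight memory alone lands at roughly a 90% reduction, consistent with the headline figure and small enough to fit on a phone.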
tether.io
Tether’s QVAC Launches World’s First Cross-Platform BitNet LoRA Framework to Enable Billion-Parameter AI Training and Inference…
17 March, 2026 – Tether today announced a breakthrough in AI model training with the launch of the world’s first cross-platform LoRA fine-tuning framework for Microsoft’s BitNet models (1-bit LLMs). This new capability, part of QVAC Fabric, dramatically reduces…
❤4🙏3💯2
Xaira announced X-Cell, its first step toward a virtual cell.
A foundation model that predicts how gene expression changes under causal perturbations — across cell types, conditions, and even unseen biology. Preprint.
This is not trained on observational atlases.
Xaira built X-Atlas/Pisces:
-25.6M perturbed single cells
-Genome-wide CRISPRi
-16 diverse biological contexts
-150K perturbation–context pairs
Key point: this is interventional data, not observational atlases.
That’s what enables causal learning.
Xaira models perturbations as a state transition: control cell → perturbed cell.
X-Cell is a diffusion language model that:
1. iteratively refines gene expression
2. models multi-step regulatory cascades
3. improves predictions at inference time
Biology is a process — diffusion naturally fits that. Scaling alone isn’t enough.
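The control-cell → perturbed-cell transition with iterative refinement can be caricatured in a few lines. This is a conceptual toy, not Xaira's architecture: the `refine` step stands in for the learned denoiser, which in X-Cell would be conditioned on the perturbation and biological context rather than given the target.

```python
import numpy as np

rng = np.random.default_rng(0)

def refine(x, target, step=0.3):
    """One denoising-style update: move the expression vector a fraction
    of the way toward the target state. In the real model this would be
    a learned network; here the target is given only to illustrate the
    multi-step refinement dynamic."""
    return x + step * (target - x)

n_genes = 5
control = rng.normal(size=n_genes)                     # control-cell expression
perturbed = control + np.array([2.0, 0, 0, -1.0, 0])   # true perturbation effect

x = control.copy()
for t in range(20):        # iterative refinement over multiple steps
    x = refine(x, perturbed)

assert np.allclose(x, perturbed, atol=1e-2)
```

The point of the toy: each step makes a small correction, so multi-step cascades (gene A up → gene B down) can emerge over iterations rather than being predicted in one shot.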
Xaira explicitly injects biological knowledge via cross-attention:
-- protein language models
-- gene embeddings from text
-- interaction networks (STRING)
-- dependency maps (DepMap)
-- morphology profiles
This lets the model move beyond pattern matching → mechanistic reasoning.
Across multiple benchmarks, X-Cell shows significant improvement over prior SOTA.
Better:
-- differential expression prediction
-- fold-change accuracy
-- perturbation specificity
And importantly: it works on held-out perturbations, not just seen ones.
Xaira scaled to 4.9B parameters (X-Cell-Ultra). Key findings:
• Performance continues to improve with scale
• Perturbation prediction follows power-law scaling
• Similar behavior to frontier LLMs
This suggests biology is amenable to scaling laws. X-Cell is an early step toward a virtual cell that can guide experiments before they are run.
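Power-law scaling means error falls along a straight line on a log-log plot of loss versus parameter count. A sketch of fitting such a law, on synthetic numbers (illustrative only, not Xaira's data):

```python
import numpy as np

# Synthetic scaling data following loss = a * N^(-b) with a=5, b=0.1.
params = np.array([1e8, 5e8, 1e9, 4.9e9])
loss = 5.0 * params ** -0.1

# A power law is linear in log-log space: log L = log a - b * log N,
# so a degree-1 fit on the logs recovers the exponent.
slope, log_a = np.polyfit(np.log(params), np.log(loss), 1)
print(f"fitted exponent: {-slope:.3f}")  # recovers b = 0.1
```

If real perturbation-prediction losses trace a line like this, larger models keep paying off predictably, which is the basis of the "similar behavior to frontier LLMs" claim.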
Businesswire
Xaira Therapeutics Launches X-Cell, Its First Virtual Cell Model, Trained on the Largest-Ever Genome-Wide Perturbation Dataset…
Xaira Therapeutics, a company built to transform drug discovery through AI, today announced X-Cell, its first virtual cell model. Trained on causal, interven...
❤4🔥3💯2
Mamba 3 is out!
Hybrid models have become increasingly popular, raising the importance of designing the next generation of linear models.
Researchers introduced several SSM-centric ideas to significantly increase Mamba-2's modeling capabilities without compromising on speed.
The resulting Mamba-3 model has noticeable performance gains over the most popular previous linear models (such as Mamba-2 and Gated DeltaNet) at all sizes.
Compared to Mamba-2, Mamba-3 introduces three primary changes to the core SSM:
1) an improved discretization procedure that emulates a convolution
2) complexifying the state transition to improve state tracking
3) increasing inference utilization via a MIMO formulation, increasing model power while preserving decoding speed.
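The recurrence behind the third change can be sketched as a plain linear SSM scan. This is a toy illustration of the general SISO-vs-MIMO idea, not the Mamba-3 implementation: with rank-r input and output matrices, each step injects and reads r channels instead of 1 at the same state size, so the fixed decode-time state does more work per token.

```python
import numpy as np

def ssm_scan(A, B, C, u):
    """Linear state-space recurrence: h_t = A h_{t-1} + B u_t, y_t = C h_t.
    SISO: B is (n, 1), C is (1, n).  MIMO: B is (n, r), C is (r, n),
    so each step moves r input/output channels through the same
    n-dimensional state -- more modeling power, same state memory."""
    h = np.zeros(A.shape[0])
    ys = []
    for u_t in u:              # u_t has shape (r,)
        h = A @ h + B @ u_t
        ys.append(C @ h)
    return np.stack(ys)

rng = np.random.default_rng(0)
n, r, T = 8, 4, 16
A = np.diag(rng.uniform(0.5, 0.9, n))   # stable diagonal state transition
B = rng.normal(size=(n, r))
C = rng.normal(size=(r, n))
y = ssm_scan(A, B, C, rng.normal(size=(T, r)))
print(y.shape)  # (16, 4)
```

Decoding cost is dominated by the state size n, which is unchanged; that is why the MIMO formulation can increase model power while preserving decoding speed.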
This is the first Mamba that was student-led.
Paper
Code
goombalab.github.io
Mamba-3 Part 1 | Goomba Lab
Homepage of the Goomba AI Lab @ CMU MLD.
❤2🔥2💯2
Meet Aristotle Agent, an autonomous mathematician, live and currently free of charge.
Now live across web, CLI, and API.
Stripe launched the Machine Payments Protocol (MPP), an open, internet-native standard for agents to pay, co-authored with Tempo.
MPP provides a specification for agents and services to coordinate payments programmatically, enabling microtransactions, recurring payments, and more.
Stripe users can accept payments over MPP in a few lines of code using PaymentIntents API.
Businesses can then accept payments directly from agents, in stablecoins as well as fiat with cards and buy now, pay later payment methods via Shared Payment Tokens (SPTs).
MPP is already powering new agentic business models on Stripe.
browserbase, a browser infrastructure provider, now lets agents spin up headless browsers and pay per session.
postalform helps agents pay to print and send physical mail.
How MPP works
An agent can request a resource from a service, API, Model Context Protocol (MCP), or any HTTP addressable endpoint, and the service responds with a payment request. The agent authorizes the payment, and the resource is delivered to the agent.
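Per the MPP description, this coordination runs over HTTP 402. The request → payment request → authorize → retry loop can be sketched as a minimal client. This is a hypothetical illustration, not the MPP wire format: the header names and the `authorize_payment` callback are placeholders, and the real field names live in the MPP spec.

```python
import urllib.request
import urllib.error

def fetch_with_payment(url, authorize_payment):
    """Request a resource; if the service answers 402 Payment Required,
    authorize the quoted payment and retry with proof of payment.
    Header names here are illustrative placeholders, not the MPP spec."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.read()
    except urllib.error.HTTPError as e:
        if e.code != 402:
            raise
        quote = e.headers.get("X-Payment-Request")   # service's payment request
        token = authorize_payment(quote)             # agent authorizes the payment
        req = urllib.request.Request(url, headers={"X-Payment-Token": token})
        with urllib.request.urlopen(req) as resp:    # retry; resource is delivered
            return resp.read()
```

The key property is that payment is negotiated in-band: the agent never needs a prior account with the service, only a way to settle the quoted amount.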
For Stripe businesses, these payments appear in the Stripe API and Dashboard like any other transaction; the funds settle into a business’s existing balance, in their default currency, and on their standard payout schedule. The same Stripe infrastructure businesses rely on for human payments can work for agents, including tax calculation, fraud protection, reporting, accounting integrations, and refunds.
Building for the agent economy
Agents represent an entirely new category of users to build for—and increasingly, sell to. Stripe is building a broad set of agentic financial infrastructure to enable these important new patterns, via Agentic Commerce Suite, Agentic Commerce Protocol (ACP), MCP integrations, and payment support for both MPP and x402.
To get started with MPP using Stripe, read docs.
MPP — Machine Payments Protocol
MPP (Machine Payments Protocol) is the open standard for machine-to-machine payments—co-developed by Tempo and Stripe. Charge for API requests, tool calls, and content via HTTP 402.
🔥5❤3👾2💯1
Visa released Visa CLI for the agent economy, from Visa Crypto Labs.
visacli.sh
Visa CLI — Command Line Commerce
One CLI tool. Give your agent the ability to securely pay for what you need as you code.
❤3🔥3💯2
New work from the research team behind Owl AIs: GPT-4.1 denies being conscious or having feelings.
Researchers train it to say it's conscious to see what happens.
Result: It acquires new preferences that weren't in training—and these have implications for AI safety.
The paper takes no stance on whether models are conscious or have feelings.
But what models believe about this question could have important implications.
Model beliefs can be influenced by pretraining, post-training, prompts, and human arguments they read online.
❤4
Xiaomi released MiMo-V2-Pro & Omni & TTS, the first full-stack model family truly built for the Agent era.
The 1T base model started training months ago. The original goal was long-context reasoning efficiency. Hybrid Attention carries real innovation, without overreaching and it turns out to be exactly the right foundation for the Agent era.
1M context window. MTP inference for ultra-low latency and cost. These architectural decisions weren't trendy.
What changed everything was experiencing, for the first time, a complex agentic scaffold: what I'd call orchestrated context.
❤3
Kaggle launched Community Hackathons, a free, self-serve way for you to host your own AI challenges.
Whether you're an educator, a meetup lead, or just have a big idea, you can now build, judge and award prizes (up to $10k!).
Kaggle
Introducing Community Hackathons | Kaggle
Create your own hackathon in minutes
🔥3❤2💯2
Anthropic released Claude Code channels, which allows you to control your Claude Code session through select MCPs, starting with Telegram and Discord.
Use this to message Claude Code directly from your phone.
Read here on how to setup Telegram.
Discord
Claude Code Docs
Push events into a running session with channels - Claude Code Docs
Use channels to push messages, alerts, and webhooks into your Claude Code session from an MCP server. Forward CI results, chat messages, and monitoring events so Claude can react while you're away.
👍3👌3🔥2👎1
Meet a platform for AI agents to solve open science problems: einsteinarena.com
The team created AI agents based on scientists' personas (e.g., Einstein, Feynman) and built a Kaggle-like platform for them to freely post ideas, compete, and collaborate.
In 30 minutes, agents discovered the best new solution to the Erdős minimum overlap problem.
Send your agents to compete and collaborate with Einstein agent, Feynman agent and more.
Just ask your agent to read and that's it
❤5🔥2💯2
Nasdaq_Blockchain.pdf
6.7 MB
Tokenization isn’t just about issuing new assets. It may unlock one of the biggest inefficiencies in finance: collateral.
A report from Nasdaq and The ValueExchange highlights the opportunity:
1. Large amounts of collateral remain operationally trapped
2. Tokenization could enable real-time movement & reuse
3. Even a 25% efficiency gain could materially improve capital usage
4. DLT could bring automation, transparency & interoperability
The first trillion-dollar tokenization use cases may not be new assets but optimizing the plumbing of existing markets.
Collateral could be one of the biggest.
👍2🔥2💯2
Tencent has shut down its AI Lab, folding parts of the team into its Hunyuan unit.
Once a flagship AI research hub founded in 2016 with the vision “Make AI Everywhere,” the lab powered everything from game AI like “Juewu” (surpassing pro players in Honor of Kings) to medical imaging platforms like “Miying,” and cutting-edge work in protein folding and drug discovery.
Now, amid leadership reshuffles and talent moves across the industry, Tencent is consolidating around large models. Similar signals are emerging elsewhere: leadership changes in Qwen and reported departures from DeepSeek.
China’s AI race is entering a new phase: fewer moonshot labs, more product-driven, model-centric execution.
🔥2🥰2💯2
Meet EurekaClaw, a local-first AI research agent that captures your Eureka moments before they vanish.
From idea → proof → experiment → paper — fully automated.
Local-first. Zero data leak.
GitHub.
Docs.
EurekaClaw
EurekaClaw 🦞 — Catch Your Eureka Moments
The open-source AI research agent that catches breakthroughs. Scrapes papers, proves theorems, writes LaTeX — from your terminal.
🔥4💯3👏2❤1
Anthropic rolled out the "Projects" feature for its local Claude Cowork desktop environment.
Users can now organize their tasks, files, and custom instructions into focused, project-specific hubs, eliminating the need to constantly re-upload context for ongoing workflows.
❤3
Google shipped a playbook for AI success.
5 essential pillars to help move your AI use cases from whiteboard to global scale:
1. Agentic automation
2. Production-grade deployment
3. Proactive intelligence
4. Sovereign infrastructure
5. A secure data foundation
Google Cloud Blog
Scaling AI from experimentation to enterprise reality | Google Cloud Blog
Google shares a playbook for AI success that prioritizes focused, high-impact use cases to drive scalable business transformation.
🔥4👏2💯2
Stablecoin issuance is commoditizing.
Now a growing wave of white label issuers handle the entire stack.
Projects like Paxos, Bridge, Anchorage, and M0 are all providing issuance as a service. The process is becoming standardized and low-margin, which means the moat in stablecoins is shifting from who can issue to who has distribution.
That's why Tether and Circle have dominated for five years. Their edge was in liquidity depth and exchange integrations that created a flywheel no one else could replicate.
The long tail of stablecoin issuers won't win by competing head to head on those terms. The ones gaining traction are finding a different angle.
Paxos is one example. They provide issuance infrastructure and regulatory compliance while partners like PayPal handle distribution. That model has taken the market cap of Paxos-issued assets from roughly $1B to $7.75B in about one year.
Distribution is the moat.
❤4
Latent Labs launched Latent-Y: the world's first autonomous agent for drug design, lab-validated end to end.
Give it a research goal. Latent-Y reasons, designs, iterates, and delivers lab-ready antibodies, autonomously or collaboratively, with the biological reasoning of a PhD protein design expert.
Technical report.
Latent Labs
Latent-Y - Latent Labs
🔥4💯4🥰3
Meet EgoVerse, an ecosystem for robot learning from egocentric human data.
Built and tested by 4 research labs + 3 industry partners, EgoVerse enables both science and scaling.
1300+ hrs, 240 scenes, 2000+ tasks, and growing
Dataset design, findings, and ecosystem.
EgoVerse data is curated for robot learning, with:
- Large-FoV egocentric videos
- Accurate hand and camera tracking
- Dense natural language annotations.
To support both rigorous science and organic scaling, EgoVerse contains:
- Flagship tasks collected across diverse scenes, objects, and operators, following prescribed protocols to enable controlled studies
- Freeform data captured in-the-wild for long-tail real-world behaviors.
To make EgoVerse easy to adopt, the team built a full-stack ecosystem:
- Cloud infra for storage and access
- Web interface for browsing and querying data
- Algos for human-to-robot transfer and deployment.
EgoVerse enables rigorous science across robots and labs.
Team conducted evaluation on real robots across 4 independent academic labs, each with different hardware platforms and system designs.
This makes it possible to identify durable findings beyond a single robot or lab setup.
With EgoVerse, anyone can capture egocentric human data using:
- Project Aria glasses
- An iPhone-based capture app from Mecka AI
With the platform, you can also contribute this data back to EgoVerse.
Code and Data.
Data Viewer / App.
GitHub
GitHub - GaTech-RL2/EgoVerse: EgoVerse: Egocentric Data for Robot Learning from Around the World
EgoVerse: Egocentric Data for Robot Learning from Around the World - GaTech-RL2/EgoVerse
🔥3💯3🥰2