All about AI, Web 3.0, BCI
This channel is about AI, Web 3.0, and brain-computer interfaces (BCI)

owner @Aniaslanyan
Meet EurekaClaw, a local-first AI research agent that captures your Eureka moments before they vanish.

From idea → proof → experiment → paper — fully automated.
Local-first. Zero data leak.

GitHub.
Docs.
🔥4💯3👏21
Anthropic rolled out the "Projects" feature for its local Claude Cowork desktop environment.

Users can now organize their tasks, files, and custom instructions into focused, project-specific hubs, eliminating the need to constantly re-upload context for ongoing workflows.
3
Google shipped a playbook for AI success

5 essential pillars to help move your AI use cases from whiteboard to global scale:

1. Agentic automation
2. Production-grade deployment
3. Proactive intelligence
4. Sovereign infrastructure
5. A secure data foundation
🔥4👏2💯2
Stablecoin issuance is commoditizing.

A growing wave of white-label issuers now handles the entire stack.

Projects like Paxos, Bridge, Anchorage, and M0 are all providing issuance as a service. The process is becoming standardized and low-margin, which means the moat in stablecoins is shifting from who can issue to who has distribution.

That's why Tether and Circle have dominated for five years. Their edge was in liquidity depth and exchange integrations that created a flywheel no one else could replicate.

The long tail of stablecoin issuers won't win by competing head to head on those terms. The ones gaining traction are finding a different angle.

Paxos is one example. They provide issuance infrastructure and regulatory compliance while partners like PayPal handle distribution. That model has taken the market cap of Paxos-issued assets from roughly $1B to $7.75B in about a year.

Distribution is the moat.
4
Latent Labs is launching Latent-Y: the world's first autonomous agent for drug design, lab-validated end to end.

Give it a research goal. Latent-Y reasons, designs, iterates, and delivers lab-ready antibodies, autonomously or collaboratively, with the biological reasoning of a PhD protein design expert.

Technical report.
🔥4💯4🥰3
Meet EgoVerse, an ecosystem for robot learning from egocentric human data.

Built and tested by 4 research labs + 3 industry partners, EgoVerse enables both science and scaling.

1300+ hrs, 240 scenes, 2000+ tasks, and growing

Dataset design, findings, and ecosystem.

EgoVerse data is curated for robot learning, with:
- Large-FoV egocentric videos
- Accurate hand and camera tracking
- Dense natural language annotations.
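A curated episode with those three ingredients can be modeled roughly like this (a hypothetical schema for illustration, not the actual EgoVerse data format or API):

```python
from dataclasses import dataclass, field

@dataclass
class EgoEpisode:
    """One egocentric capture episode (hypothetical schema)."""
    video_path: str                 # large-FoV egocentric video
    hand_poses: list                # per-frame 3D hand keypoints
    camera_poses: list              # per-frame 6-DoF camera extrinsics
    annotations: list = field(default_factory=list)  # dense natural-language labels

ep = EgoEpisode(
    video_path="episode_0001.mp4",
    hand_poses=[[0.1, 0.2, 0.3]],
    camera_poses=[[0, 0, 0, 0, 0, 0]],
    annotations=["pick up the mug"],
)
print(len(ep.annotations))  # 1
```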

To support both rigorous science and organic scaling, EgoVerse contains:

- Flagship tasks collected across diverse scenes, objects, and operators, following prescribed protocols to enable controlled studies
- Freeform data captured in-the-wild for long-tail real-world behaviors.

To make EgoVerse easy to adopt, the team built a full-stack ecosystem:

- Cloud infra for storage and access
- Web interface for browsing and querying data
- Algos for human-to-robot transfer and deployment.

EgoVerse enables rigorous science across robots and labs.

Team conducted evaluation on real robots across 4 independent academic labs, each with different hardware platforms and system designs.

This makes it possible to identify durable findings beyond any single robot or lab setup.

With EgoVerse, anyone can capture egocentric human data using:

- Project Aria glasses
- An iPhone-based capture app from Mecka AI

With the platform, you can also contribute this data back to EgoVerse.

Code and Data.
Data Viewer / App.
🔥3💯3🥰2
JEPAs are finally easy to train end-to-end without any tricks. Meet LeWorldModel: a stable, end-to-end JEPA that learns world models directly from pixels, no heuristics.

15M params, 1 GPU, and full planning in under 1 second.
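The core idea of a JEPA objective is to predict the embedding of a future observation from the embedding of the current one, rather than reconstructing pixels. A minimal numpy sketch of that loss (illustrative only; not LeWorldModel's actual architecture or training recipe):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear encoder and predictor (stand-ins for deep networks).
d_obs, d_emb = 64, 16
W_enc = rng.normal(size=(d_obs, d_emb)) * 0.1   # shared encoder
W_pred = rng.normal(size=(d_emb, d_emb)) * 0.1  # latent predictor

def encode(x):
    return x @ W_enc

def jepa_loss(obs_t, obs_t1):
    """Predict the next observation's embedding in latent space, not pixel space."""
    z_t, z_t1 = encode(obs_t), encode(obs_t1)
    z_pred = z_t @ W_pred
    return float(np.mean((z_pred - z_t1) ** 2))

obs_t = rng.normal(size=(8, d_obs))   # batch of current frames (flattened)
obs_t1 = rng.normal(size=(8, d_obs))  # batch of next frames
loss = jepa_loss(obs_t, obs_t1)
print(loss >= 0.0)  # True
```

Because the loss lives in embedding space, the model never has to explain every pixel, which is one reason JEPA-style objectives can be cheap to train.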
4🆒4👏3💯1
Google introduced TurboQuant, a new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss.

NB: it makes better use of on-chip SRAM, needs less HBM, and is hardware-agnostic (it works on Nvidia GPUs too, not just Google TPUs).
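The memory math behind KV-cache compression is straightforward: quantizing values to low-bit integers with a per-channel scale shrinks the cache roughly in proportion to the bit width. A generic per-channel quantization sketch (illustrative only; this is not TurboQuant's actual algorithm):

```python
import numpy as np

def quantize_per_channel(kv, bits=4):
    """Symmetric per-channel quantization of a KV-cache tensor (generic sketch)."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 7 for 4-bit codes
    scale = np.abs(kv).max(axis=0) / qmax      # one scale per channel
    scale = np.where(scale == 0, 1.0, scale)   # avoid divide-by-zero
    q = np.clip(np.round(kv / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

kv = np.random.default_rng(1).normal(size=(128, 64)).astype(np.float32)
q, scale = quantize_per_channel(kv, bits=4)
err = np.abs(dequantize(q, scale) - kv).max()
# 16-bit values stored as 4-bit codes would shrink the cache ~4x
# before any further tricks; reconstruction error is bounded by scale/2.
```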
👍5🔥2💯2
Meta is partnering with Stripe for checkouts:

- Businesses can now sell directly within Facebook (and later Instagram)
- Turn your Facebook ads into one-click checkouts via a toggle in the Stripe dashboard
- Built on the Agentic Commerce Protocol.
🔥3💯2
Meet Cerebra, a multidisciplinary AI board for dementia.

Cerebra is a team of AI agents for clinicians, boosting their dementia risk assessment accuracy by 17.5%.

Code.
🔥3🥰2👏2🆒21
Meet ARM-Thinker, the first Agentic multimodal Reward Model that autonomously invokes external tools to ground its judgments in verifiable evidence.

Accepted to CVPR 2026.

Integrates 3 multimodal tools:

1. Image Crop & Zoom-in for fine-grained visual inspection.

2. Document Retrieval for multi-page evidence gathering.

3. Instruction-Following Validators for constraint verification.

With a Think-Act-Verify loop, ARM-Thinker calls these tools to deliver evidence-based evaluations.
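A Think-Act-Verify loop of this kind can be sketched as a simple tool-dispatch cycle (hypothetical tool stubs and control flow, not the actual ARM-Thinker code):

```python
# Hypothetical stand-ins for the three tools described above.
def crop_zoom(query):      return f"zoomed view of {query}"
def retrieve_docs(query):  return f"pages matching {query}"
def validate(query):       return f"constraints checked for {query}"

TOOLS = {"crop": crop_zoom, "retrieve": retrieve_docs, "validate": validate}

def think_act_verify(query, plan, max_steps=5):
    """Run a tool plan, collecting evidence, then emit a grounded judgment."""
    evidence = []
    for tool_name in plan[:max_steps]:
        evidence.append(TOOLS[tool_name](query))   # Act: call the chosen tool
    verified = len(evidence) > 0                   # Verify: judgment must cite evidence
    return {"judgment": "grounded" if verified else "ungrounded", "evidence": evidence}

result = think_act_verify("chart in figure 2", plan=["crop", "retrieve", "validate"])
print(result["judgment"])  # grounded
```

The point of the pattern is that the final reward judgment is tied to collected evidence rather than emitted directly from the model's first impression.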

Built on Qwen2.5-VL-7B with SFT + two-stage GRPO, ARM-Thinker improves multimodal reward modeling, tool-use reasoning, and multimodal math/logical reasoning.

Also introduced ARMBench-VL, a multimodal reward benchmark that requires tool use.

Code

Dataset

Evaluation
🔥2👏2💯2
Meta introduced TRIBE v2 (Trimodal Brain Encoder), a foundation model trained to predict how the human brain responds to almost any sight or sound.

TRIBE v2 draws on 500+ hours of fMRI recordings from 700+ people to create a digital twin of neural activity and enable zero-shot predictions for new subjects, languages, and tasks.

From the paper:

'The present results strengthen the possibility of a paradigm shift in neuroscience ... moving from the fragmented mapping of isolated cognitive tasks toward the use of unified, predictive foundation models of brain and cognitive functions. By aligning the representations of AI systems to those of the human brain, we demonstrate that a single architecture can integrate a vast range of fMRI responses across hundreds of individuals, extending the framework that led the 2025 Algonauts competition.

The observed log-linear scaling of encoding accuracy, mirroring power laws in both artificial intelligence and neuroscience, suggests that the ceiling for predicting human brain activity is yet to be reached.'
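Log-linear scaling means encoding accuracy grows roughly linearly in the logarithm of training data. A quick check of that functional form on synthetic numbers (made-up values for illustration, not the paper's data):

```python
import numpy as np

hours = np.array([10, 50, 100, 250, 500], dtype=float)  # fMRI hours (synthetic)
acc = 0.05 * np.log(hours) + 0.10                       # accuracy built to be log-linear

# Fit acc = a * log(hours) + b; a linear fit in log(hours) recovers the form.
a, b = np.polyfit(np.log(hours), acc, deg=1)
print(round(a, 3), round(b, 3))  # 0.05 0.1
```

On a log-x axis such a curve is a straight line, which is why these scaling trends are usually read off log-scaled plots.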
3🔥2💯2
Google shipped Gemini 3.1 Flash Live

It’s built for production-ready reliability.

If you're building agents that need to execute complex tasks at scale, give it a spin.
👍2👏2💯2
Stripe launched Projects to help agents instantly provision services from the CLI.

For example, simply run:

$ stripe projects add posthog/analytics

And it'll create a PostHog account, get an API key, and (as needed) set up billing.

Projects is launching today as a developer preview. You can register for access (it will be available to everyone soon) at projects.dev.

Stripe is also rolling out support for many new providers over the coming weeks.

Cool work by Chroma: training a search agent with SoTA efficiency.

Chroma Context-1, a 20B-parameter search agent:

- Pushes the Pareto frontier of agentic search
- An order of magnitude faster
- An order of magnitude cheaper
- Apache 2.0, open-source

Lots of cool details: a prune tool for editing context mid-search, a synthetic data pipeline with verification steps, and a curriculum that shifts from recall to precision.
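Mid-search context pruning of the kind described can be sketched as dropping low-relevance snippets before the next search step (hypothetical helper and scoring, not Chroma's implementation):

```python
def prune_context(snippets, scores, keep=3):
    """Keep only the top-`keep` snippets by relevance score (hypothetical helper)."""
    ranked = sorted(zip(scores, snippets), reverse=True)  # highest score first
    return [s for _, s in ranked[:keep]]

ctx = ["a", "b", "c", "d", "e"]
scores = [0.9, 0.1, 0.7, 0.3, 0.8]
print(prune_context(ctx, scores, keep=3))  # ['a', 'e', 'c']
```

Editing the context window this way keeps the agent's working set small as the search deepens, which is where the speed and cost savings come from.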

Trained with Tinker.
4👍2💯2
Stanford proposed a bold new vision for biology: Virtual Embryos

By integrating single-cell and spatial genomics with AI, researchers can build digital twins of embryogenesis, moving beyond virtual cells to predict cell growth, division, migration, state transitions, and morphogenesis, from genes → cells → organs → organ systems → whole embryo, in a fully 4D spatiotemporal framework.

This approach could enable truly predictive biology and in silico experimentation to diagnose, prevent, and treat developmental diseases, transforming medicine and improving outcomes for future generations.
3🔥2💯2
Unitree open-sourced UnifoLM-WBT-Dataset, a high-quality real-world humanoid robot whole-body teleoperation (WBT) dataset for open environments.

Publicly available since March 5, 2026, the dataset will continue to receive high-frequency rolling updates. It aims to establish the most comprehensive real-world humanoid robot dataset in terms of scenario coverage, task complexity, and manipulation diversity.
2🔥2💯2