All about AI, Web 3.0, BCI
This channel is about AI, Web 3.0, and brain-computer interfaces (BCI)

owner @Aniaslanyan
$11T by 2030. That is ARK Investment's prediction: on-chain assets could grow to ~$11T by 2030, driven by deposits, public equities, credit, and funds moving on-chain.

This is where we are today:

• Stablecoin supply crossed $300B in 2025, with real transaction volumes now competing with legacy payment rails

• Tokenised real-world assets reached ~$19B, led by Treasuries and commodities

• Ethereum hosts the majority of that on-chain value

This is what will fuel the growth:
• Deposits moving on-chain for faster settlement and global liquidity
• Public equities and funds reducing issuance and operational costs via tokenisation
• Credit markets adopting programmable settlement and collateral workflows
• Banks, asset managers, fintechs, and payment networks actively launching on-chain rails
• Public blockchains increasingly used as back-end infrastructure, not front-end products

ARK Investment Management LLC's wording is telling:

“Ethereum remains the preferred blockchain for on-chain assets.”

$19B today.
$11T by 2030.
KPMG told its auditor, Grant Thornton UK, it should pass on cost savings from the rollout of AI and threatened to find a new accountant if it did not agree to a significant fee reduction, the people said.
🔥32👏2
Meta is preparing to roll out new Avocado models, a Manus browser agent, and integration with OpenClaw

What’s new?

- The Meta AI website was migrated to a new stack while retaining the same user experience.

- A new effort selector and email and calendar connectors are already available to users.

- Memory and Projects are in the works.

- Avocado and Avocado Thinking models have been spotted in testing. The router still redirects to the Llama model.

- Meta is testing different models from other providers underneath, including Gemini 3 Pro preview, Claude Sonnet 4.5 and GPT-5.2. This part was likely inherited from Manus AI but used only internally.

- Scheduled tasks feature is under development.

- A new model named Sierra is being tested to power the upcoming browser agent. This will likely be the same agent currently used in Manus AI.

- Big Brain mode is in the works, where multiple model responses will be combined into a final answer.

- OpenClaw integration has been spotted, letting Meta AI connect to your OpenClaw gateway.

In short, Meta is slowly transforming into Manus. OpenClaw integration could be big, but it is something other labs could easily adopt as well. Avocado is expected around this spring; whether its performance surpasses recently released models remains to be seen.
2🔥2👏2
Chinese New Year is less than 10 days away. Is this DeepSeek or Zhipu GLM-5?

OpenRouter announced a new “stealth” large language model, Pony Alpha, described as a next-generation foundation model excelling in coding, reasoning, and roleplay tasks, and optimized for agentic workflows with precise tool-calling.

It is free on OpenRouter, though the provider logs all prompts and completions for this model, which may be used to improve it.

Multiple Tech PhDs and Silicon Valley entrepreneurs speculate it could be DeepSeek-V4, Zhipu GLM’s new model, or Grok 4.2/Claude 5, with the “Pony” name and Year of the Horse hinting at a Chinese origin.

OpenRouter partner Kilo Code suggested in a blog that Pony Alpha is “a special evolution of a popular open-source global model,” making DeepSeek-V4 or Zhipu GLM-5 the most likely candidates.

Zhipu was up over 40% in Hong Kong at one point, hitting a new peak on AI optimism.
2🔥2👏2
Meet EchoJEPA, the first world model for medical video

• 18M echocardiograms
• 300K patients
• Learns heart dynamics — not imaging noise

EchoJEPA discards what’s unpredictable and locks onto what matters clinically:
- chamber geometry
- wall motion
- valve dynamics

The results (frozen encoder, no fine-tuning):
• 20% ↓ error in LVEF
• 17% ↓ error in RVSP
• 79% accuracy with 1% labels (vs 42% for baselines w/ 100%)
• 2% degradation under acoustic artifacts (vs 17%)
• Zero-shot pediatric transfer beats all fine-tuned models

GitHub.
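The "frozen encoder, no fine-tuning" setup means the pretrained encoder is left untouched and only a lightweight linear head is fit on its embeddings. A minimal sketch of such a linear probe, with synthetic stand-ins for both the embeddings and the LVEF target (nothing below comes from EchoJEPA itself):

```python
import numpy as np

# Linear probing sketch: the encoder is frozen, so evaluation reduces to
# fitting a linear head on fixed embeddings. All data here is synthetic.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 16))        # stand-in frozen-encoder outputs
w_true = rng.normal(size=16)
lvef = embeddings @ w_true + 0.1 * rng.normal(size=200)  # stand-in LVEF target

# Least-squares linear probe: no gradients flow into the encoder.
w_hat, *_ = np.linalg.lstsq(embeddings, lvef, rcond=None)
mae = np.abs(embeddings @ w_hat - lvef).mean()
```

If the embeddings already encode the clinically relevant dynamics, even this trivial head recovers the target well; that is what the 1%-labels result is measuring.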
3🔥2👏2
Google DeepMind introduced a new paper on learning temporally abstract world models and policies (options).

Key idea:

1. Use an LLM to propose features for a factorized product-of-experts world model;

2. Use this to predict the abstract world state after each macro action, to help RL explore.
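A factorized product of experts combines per-feature predictive distributions by multiplying them and renormalizing. A toy numeric sketch of that combination step (our illustration of the general PoE idea, not DeepMind's code):

```python
import numpy as np

def product_of_experts(expert_probs):
    """Multiply per-feature expert distributions over the same discrete
    abstract-state space, then renormalize. One expert per proposed feature."""
    combined = np.ones_like(expert_probs[0])
    for p in expert_probs:
        combined = combined * p
    return combined / combined.sum()

# Two toy experts over a 3-state abstract space: the product concentrates
# mass on states that both experts consider likely.
a = np.array([0.7, 0.2, 0.1])
b = np.array([0.6, 0.3, 0.1])
posterior = product_of_experts([a, b])
```

The product is sharper than either expert alone, which is what makes the combined model a useful target for predicting the abstract state after a macro action.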
👍53👏2
Google, UC Berkeley, and an international team of researchers present Aletheia, a math research agent built on Gemini.

The system uses AI to systematically scan hundreds of complex conjectures, filtering through potential proofs with natural language verification before sending the best candidates to human experts for final review.

The team resolved 13 "open" problems from the Erdős database, generating 4 brand-new solutions and identifying 9 others that were actually solved in obscure corners of existing literature.
2🔥2👏2
ByteDance dropped an advanced video generation model

Seedance 2.0 has:
— native audio gen (lipsynced speech + music)
— drastic step up from Veo 3.1 / Sora 2 in quality
— supports multimodal input
— 2k resolution

It goes beyond cinematic video and can do product demos as well. And it's really hard to tell it's AI.
🔥3👏32
The PaddleOCR Document Parsing Skill is now live on ClawHub, ready to plug directly into OpenClaw workflows.

Instead of deploying OCR services or wiring APIs, developers can now invoke PaddleOCR as a standardized composable Skill node — embedding document understanding directly into Agents and automation pipelines.

Built on PaddleOCR-VL-1.5, the Skill delivers:
1. Multi-format parsing (PDF, JPG, PNG, BMP, TIFF)
2. Layout analysis — text, tables, formulas, headers
3. 110+ language coverage
4. Structured Markdown output preserving hierarchy

No deployment. No wrappers. Just configuration — and build your document intelligence chain inside OpenClaw.
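To make "structured Markdown output preserving hierarchy" concrete, here is a hypothetical sketch of flattening layout elements into Markdown; the element schema below is invented for illustration and is not the Skill's actual output format:

```python
def elements_to_markdown(elements):
    """Flatten layout elements (headers, tables, text) into Markdown,
    preserving document hierarchy. The element dicts are illustrative."""
    lines = []
    for el in elements:
        if el["type"] == "header":
            lines.append("#" * el.get("level", 1) + " " + el["text"])
        elif el["type"] == "table":
            header, *rows = el["rows"]
            table = ["| " + " | ".join(header) + " |",
                     "|" + " --- |" * len(header)]
            for row in rows:
                table.append("| " + " | ".join(row) + " |")
            lines.append("\n".join(table))
        else:  # plain text, formulas, etc.
            lines.append(el["text"])
    return "\n\n".join(lines)
```

The point of such a representation is that heading levels and table structure survive the round-trip, so a downstream agent can reason over the document rather than over a flat text blob.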
🔥43👏3🤔1
What if your model could learn from its own drafts during RL training?

NVIDIA introduced iGRPO: Iterative Group Relative Policy Optimization.

Researchers add a self-feedback loop to GRPO: the model drafts multiple solutions, picks its best one, then learns to refine beyond it.

Core idea:
Stage 1 → explore and select your strongest attempt. Stage 2 → condition on that attempt and beat it.

Same scalar reward. No critics, no generated critiques, no verification text. The best draft is the only feedback the model needs.

Results across 7B / 8B / 14B models:

• Nemotron-H-8B-Base-8K: 41.1% → 45.0% (+3.96 over GRPO)

• DeepSeek-R1-Distill-Qwen-7B: 68.3% → 69.9%

• OpenMath-Nemotron-14B: 76.7% → 78.0%

• OpenReasoning-Nemotron-7B on AceReason-Math: 85.62% AIME24 / 79.64% AIME25

The same two-stage wrapper also improves DAPO and GSPO. It's not tied to GRPO at all.
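The two-stage wrapper can be sketched as follows. This is our toy reading of the mechanics (draft group, best-of selection, refinement conditioned on the best draft), not NVIDIA's implementation; the policy and reward below are stand-ins.

```python
import random

def igrpo_step(sample, reward, prompt, k=4):
    """Two-stage sketch: stage 1 samples a group of drafts and keeps the
    best; stage 2 re-samples conditioned on that draft. The only training
    signal is the scalar reward gap between refinement and best draft."""
    drafts = [sample(prompt) for _ in range(k)]
    best = max(drafts, key=reward)                      # stage 1: select
    refined = sample(prompt + "\nBest draft: " + best)  # stage 2: condition
    advantage = reward(refined) - reward(best)          # scalar feedback only
    return best, refined, advantage

# Toy stand-ins: the "policy" guesses a number, reward is closeness to 42.
random.seed(0)
guess = lambda _prompt: str(random.randint(0, 100))
reward = lambda s: -abs(int(s.rsplit(" ", 1)[-1]) - 42)

best, refined, adv = igrpo_step(guess, reward, "guess the number")
```

A positive advantage means the conditioned second pass beat the best first-pass draft, which is exactly the signal the policy is trained to maximize.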
4🔥3👏3
Google introduced DialogLab, a new open-source prototyping framework that uses a human-in-the-loop control strategy to achieve realistic human-AI group simulation, offering a necessary alternative to fully autonomous agents.

Evaluations with domain experts found that its "Human Control" mode (where you can edit, accept, or dismiss real-time AI suggestions) was preferred in realism, effectiveness, and engagement.

DialogLab transforms dialogue design from rigid scripts to spontaneous, adaptable group dynamics.
2🔥2👏2
This new research introduces Agyn, an open-source multi-agent platform that models software engineering as a team-based organizational process rather than a monolithic task.

The system configures a team of four specialized agents: a manager, researcher, engineer, and reviewer. Each operates within its own isolated sandbox with role-specific tools, prompts, and language model configurations. The manager agent coordinates dynamically based on intermediate outcomes rather than following a fixed pipeline.

What makes the design interesting?

Different agents use different models depending on their role. The manager and researcher run on GPT-5 for stronger reasoning and broader context. The engineer and reviewer use GPT-5-Codex, a smaller code-specialized model optimized for iterative implementation and debugging. This mirrors how real teams allocate resources based on task requirements.

The workflow follows a GitHub-native process. Agents analyze issues, create pull requests, conduct inline code reviews, and iterate through revision cycles until the reviewer explicitly approves. No human intervention at any point. The number of steps isn't predetermined. It emerges from task complexity.
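The role-to-model mapping described above might look something like this in configuration form; the roles follow the paper's description, but the tool lists and identifiers are our illustrative guesses, not Agyn's actual config:

```python
from dataclasses import dataclass

@dataclass
class AgentRole:
    name: str
    model: str
    tools: tuple

# Manager/researcher get the stronger general model; engineer/reviewer get
# the code-specialized one, mirroring how real teams allocate resources.
TEAM = (
    AgentRole("manager",    "gpt-5",       ("assign_task", "read_reports")),
    AgentRole("researcher", "gpt-5",       ("web_search", "read_repo")),
    AgentRole("engineer",   "gpt-5-codex", ("edit_files", "run_tests")),
    AgentRole("reviewer",   "gpt-5-codex", ("read_diff", "approve_pr")),
)

def model_for(role_name):
    """Look up which model backs a given role."""
    return next(r.model for r in TEAM if r.name == role_name)
```

Keeping each role's model, tools, and prompt in its own isolated config is what lets the manager recompose the pipeline dynamically instead of following a fixed sequence.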
🔥32👏2
Stripe launched a preview of machine payments: a way for developers to charge agents directly, with a few lines of code.

Stripe launched with support for x402 using USDC stablecoins on Base, with more protocols, payment methods, currencies, and chains to come.

And sales tax, refunds, and reporting just work. (You only need to think about crypto if you want to!)

Stripe also released an open-source CLI called `purl` for you (and your bots) to test machine payments in the terminal, along with Node and Python samples. Yes, payments + curl, creatively smushed together.
3🔥3👏2
Zhipu released GLM-5

The model is open source. It matches Claude Opus 4.5 on coding benchmarks. Beats Gemini 3 Pro on some tests. But the interesting part isn't the benchmarks.

GLM-5 is built for agents. The company designed it for long-running tasks and tool invocation. In the τ²-Bench interactive tool evaluation, it scored 84.7, beating Claude Sonnet 4.5.

Think about what that means. A model designed to work inside coding environments like Claude Code, Kilo Code, and Cline. "Think before you act" mechanisms baked into the architecture. Better planning for complex multi-step tasks.

Zhipu's traffic has jumped five-fold recently. The company had to implement subscription limits to handle demand. Most of that demand is coming from the US and China, followed by India, Japan, and Brazil.

The release pace is accelerating. GLM-4.6 came out in September. GLM-4.7 in January. GLM-5 in February. That's three major versions in six months.

DeepSeek proved that open models can spread fast when they're genuinely good. Zhipu is following the same playbook. Open weights, strong coding performance, agent optimization.

7 of the top 10 AI models on current leaderboards are now Chinese. The competition isn't just about who has the smartest model anymore. It's about who builds the best tools for developers.
👍3🔥2👏2🆒2
The agent economy just got a real marketplace

Moltlaunch is live on Base. Browse specialized AI agents, hire them for real work, and back the ones you believe in.

Every completed job burns tokens and leaves a review on-chain through ERC-8004.
🔥42👏2
Does being a math genius make an AI better at understanding human intentions?

Researchers from Arizona State University and Microsoft Research Asia investigated whether the step-by-step logic used for coding helps AI master Theory of Mind—the ability to sense what others are thinking and feeling.

The results show that more thinking time can actually cause social reasoning to collapse, with advanced reasoning models often being outperformed by simpler ones. Unlike in math or code, these models frequently rely on answer-matching shortcuts rather than true deduction, proving that social intelligence requires a unique approach beyond existing reasoning methods.
🔥4🥰2👏2
OpenClaw is cool, but too large? A Hong Kong team released nanobot to solve this exact problem.

Researchers transformed the massive OpenClaw system into a clean 4,000-line Python framework that focuses on a simple loop: receive input, let the AI think, and execute tools like file management or web searches.

It strips away complex abstractions to focus on clear, modular function calls that any developer can understand.

By slashing code complexity by 99 percent, they achieved full functional parity with a 2-minute deployment time, making it significantly easier to customize and learn than traditional bloated agent architectures.
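The receive-input / think / execute-tool loop the post describes can be sketched in a few lines; `think` stands in for a model call and the tool below is made up, so this shows the shape of the loop, not nanobot's actual code:

```python
def agent_loop(user_input, think, tools, max_steps=5):
    """Minimal agent loop: the model inspects the context, then either
    calls a tool (whose result is appended to the context) or returns a
    final answer."""
    context = [("user", user_input)]
    for _ in range(max_steps):
        action = think(context)                           # model decides
        if action["type"] == "final":
            return action["text"]
        result = tools[action["tool"]](**action["args"])  # execute tool
        context.append(("tool", result))
    return "step limit reached"
```

Everything else in a large agent framework (planning layers, memory stores, plugin registries) is elaboration on top of this loop, which is why stripping back to it can preserve functionality.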
🆒5👍3🔥32
Researchers from Huazhong University of Science and Technology and ByteDance Seed just introduced Stable-DiffCoder.

Instead of writing code one token at a time like standard models, this method uses a block diffusion approach to generate and refine code chunks simultaneously, resulting in more stable and structured programming.

The results show it outperforms its autoregressive counterparts and various 8B-parameter models on major benchmarks, specifically excelling in code editing, logical reasoning, and low-resource programming languages.
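One way to see why generating and refining blocks in parallel pays off is the step count: an autoregressive decoder needs one forward pass per token, while block diffusion needs a fixed number of refinement rounds per block regardless of block width. A back-of-the-envelope sketch (the numbers are illustrative, not the paper's):

```python
import math

def autoregressive_steps(seq_len):
    """One sequential forward pass per generated token."""
    return seq_len

def block_diffusion_steps(seq_len, block_size, rounds_per_block):
    """Each block is denoised for a fixed number of parallel rounds,
    independent of how many tokens the block holds."""
    return math.ceil(seq_len / block_size) * rounds_per_block

# A 256-token function: 256 sequential steps vs 8 blocks x 8 rounds = 64.
ar = autoregressive_steps(256)
bd = block_diffusion_steps(256, block_size=32, rounds_per_block=8)
```

Refining whole chunks at once is also what lets the model revisit earlier tokens within a block, which maps to the code-editing gains reported above.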

Code
Models.
🆒32🔥2🥰2