It should be at least as good as Gemini 3.0 (Pro).
"Sources say that the 5.2 update should close the gap that Google created with the release of Gemini 3 last month, a model that topped leaderboards and wowed Sam Altman and xAI CEO Elon Musk."
The pressure on OpenAI is so high that they had to bring the release forward.
Source.
AI Post
At the 2025 Greater Bay Area New Economy Forum, Midea Group's CTO Wei Chang officially unveiled "MIRO U," a super humanoid robot that looks absolutely wild.
Billed as the industry's first six-armed, wheeled-legged humanoid, it's designed to "break through human physiological limits." It starts factory operations this month.
AI Post
OpenAI is being overtaken by Claude, Gemini, and even Grok in certain domains. Yes, GPT-5 Pro is certainly an outstanding research model for novel scientific ideas and tasks.
But for the broad user base, the other models are surpassing it.
Source.
AI Post
Titans is Google's new model architecture that gives language models something like real long-term memory while the model is running.
How? A deep neural network (an MLP) acts as the long-term memory and is continuously updated while the model reads text. The model learns during the inference run itself what to retain ("test-time memorization"), instead of having everything fixed into the weights beforehand. With ~10 million tokens of context, it still maintains around 70% accuracy. Insane. A rough sketch of the idea is below.
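A minimal sketch of that test-time-memorization loop, assuming PyTorch. This is only an illustration of the concept, not Google's actual Titans implementation; every class name, size, and learning rate below is a made-up placeholder.
```python
# Minimal sketch of a Titans-style "test-time memorization" module (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralMemory(nn.Module):
    """A small MLP whose weights serve as long-term memory and are updated
    with gradient steps while the model reads the input stream."""
    def __init__(self, dim: int, hidden: int = 256, lr: float = 1e-2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim)
        )
        self.lr = lr

    def read(self, query: torch.Tensor) -> torch.Tensor:
        # Retrieval: pass the current query through the memory MLP.
        return self.net(query)

    def write(self, key: torch.Tensor, value: torch.Tensor) -> None:
        # Memorization at inference time: one gradient step on the "surprise",
        # i.e. how badly the memory currently predicts the value from the key.
        with torch.enable_grad():
            loss = F.mse_loss(self.net(key), value)
            grads = torch.autograd.grad(loss, list(self.net.parameters()))
        with torch.no_grad():
            for p, g in zip(self.net.parameters(), grads):
                p -= self.lr * g  # bigger surprise -> bigger update

# Toy usage: stream chunks through the memory, reading before writing each one.
dim = 64
memory = NeuralMemory(dim)
stream = torch.randn(8, dim)          # stand-in for per-segment hidden states
for chunk in stream:
    recalled = memory.read(chunk)     # what the attention core would consume
    memory.write(chunk, chunk)        # fold the new chunk into the weights
```
The key point is that the write step changes the memory's weights rather than appending to a context buffer, which is why recall can survive far beyond a normal attention window.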
Google is nailing it.
Source.
AI Post
Nvidia's Jensen Huang says the new battleground isn't model quality; it's who can build and power hyperscale AI data centers the fastest. A top-end US AI facility takes 3 years from groundbreaking to supercomputer-ready, while China can erect massive structures in weeks and scale energy far faster.
Speed vs. Scale
• Huang notes China's ability to stand up large buildings, even hospitals, in days, giving it a construction-speed edge.
• AI capacity is now limited by permitting, land, steel, and grid hookups, not algorithms.
• Countries with faster infrastructure cycles will dominate the next wave of AI buildout.
The power gap
• China has roughly 2x the total energy capacity of the US, despite a smaller economy.
• That means it can feed power-hungry AI clusters more easily, while US grid growth remains almost flat.
• Power, not compute, is becoming the bottleneck.
Chips vs. manufacturing strength
• Nvidia is still "multiple generations ahead" in AI chips.
• But Huang warns not to underestimate China's deep manufacturing base; it can scale chip and system production faster than many expect.
The next phase of AI leadership may be won not by the best model but by the fastest builders of energy and infrastructure.
AI Post
Apple is undergoing a rare top-tier shake-up: more than half a dozen senior executives are exiting, from COO Jeff Williams to CFO Luca Maestri, AI chief John Giannandrea, policy head Lisa Jackson, and legal chief Kate Adams. Bloomberg's visualization shows a clear thinning of the "old guard" around Tim Cook and a deliberate rebuild.
The old team is rotating out
• Key operators (Williams, Maestri) and long-time policy/legal leads are retiring or leaving.
• Meta has poached multiple design and AI leaders, pulling ~100 engineers from Apple's AI foundation models team.
• Net effect: Apple's internal AI bench and institutional memory are weaker than a year ago.
A new leadership stack is forming
• John Ternus is emerging as the likely CEO successor, a shift toward hardware and product engineering instead of operational management.
• Amar Subramanya (ex-Google AI) has been brought in to accelerate Apple's sluggish AI roadmap, signaling a move toward execution-focused leadership rather than research-first.
• Stephen Lemay, a high-credibility builder who has shipped nearly every major Apple UI since the first iPhone, takes over interface design.
• Jennifer Newstead will run both legal and government affairs, centralizing Apple's response to antitrust and global regulation.
The strategic direction
• This isn't random churn; it's a controlled transition toward a 2026-ready Apple built around:
• AI acceleration
• hardware + device engineering
• regulatory survival and global policy alignment
Apple is quietly replacing the iPhone-era leadership machine with a structure built for AI, geopolitics, and the next generation of devices.
AI Post
Multiple benchmarks for knowledge, math, and coding now put Gemini slightly ahead of OpenAI's best models, so buyers no longer treat OpenAI as the automatic performance leader.
OpenAI reports about $13 billion in 2025 revenue, yet analysts project losses that could reach $140 billion by 2029, while Google and Microsoft each make around $30 billion in profit per quarter that can fund cheaper, integrated features.
AI Post
Gemini 3.0 Pro tops the ranking at 45.6%. Cortex-AGI measures how well AI models can perform abstract, out-of-distribution reasoning on procedurally generated logic puzzles across 10 increasingly complex levels, without relying on memorization.
It also measures and compares the performance of proprietary models against open-source models under this rigorous setting.
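For intuition only: the point of procedural generation is that every evaluation instance is freshly constructed from a seed, so there is no fixed answer key to memorize. A toy, hypothetical generator in that spirit (not Cortex-AGI's actual puzzle format) could look like this:
```python
import random

def make_puzzle(level: int, seed: int) -> tuple[str, int]:
    """Toy procedural puzzle generator: the reasoning chain grows with the
    level, and each (level, seed) pair yields a fresh, checkable instance."""
    rng = random.Random(seed)
    value = start = rng.randint(1, 9)
    steps = []
    for _ in range(level):                     # higher level -> longer chain
        op, operand = rng.choice("+*"), rng.randint(2, 5)
        steps.append(f"{op} {operand}")
        value = value + operand if op == "+" else value * operand
    question = f"Start at {start}, then apply in order: {', '.join(steps)}. What is the final value?"
    return question, value

q, answer = make_puzzle(level=3, seed=42)      # unseen instance with a known answer
print(q, "->", answer)
```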
AI Post
A 49-year-old man says Grok saved his life after an ER doctor misdiagnosed his near-ruptured appendix as acid reflux. In severe pain, he asked Grok for help. The AI flagged "perforated ulcer or atypical appendicitis" and told him to return immediately and demand a CT scan.
He followed the advice, pushed for the scan, and doctors found an appendix minutes from rupture. Surgery happened six hours later and he woke up pain-free. He didn't tell the hospital that an AI guided him, saying he feared they'd dismiss it. The story went viral, with many arguing this proves AI can catch what overwhelmed doctors miss and some even saying they'd trust an AI doctor if it meant better care.
Elon has long predicted AI-driven medicine would arrive fast; this incident suggests it already has.
AI Post
What looked like cutting-edge policing tech has turned out to be a global sweatshop. Flock, the largest provider of "AI-powered" cameras for U.S. police, was barely using AI at all. Instead, much of the work was done manually by low-paid freelancers in the Philippines.
• The workers handled everything: reading license plates, identifying car makes and colors, tagging pedestrians, and even transcribing accident audio.
• Cities bought these systems expecting automated intelligence but got human eyes quietly scanning American streets.
• The revelation raises serious questions about data security, law-enforcement transparency, and how many "AI" products are really powered by hidden labor.
AI Post
By leaning on synthetic data and test-time training instead of brute-force scale, the NVARC team proved that clever design can outpace raw parameter count. It's an exciting signal that efficient, adaptive reasoning, not just ever-bigger models, might be the real frontier in AGI progress. A rough sketch of the test-time-training step follows the list below.
• 27.64% accuracy on the official ARC-AGI-2 leaderboard.
• Uses a 4B-parameter model that beats far larger, more expensive models on the same benchmark.
• Inference cost is just $0.20 per task, enabled by synthetic data, test-time training, and NVIDIA NeMo tooling.
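Test-time training means briefly fine-tuning the model on each task's own demonstration pairs before answering. A minimal, hypothetical sketch of that loop, assuming a Hugging Face causal LM; this is not NVARC's actual pipeline, and the prompt format, step count, and learning rate are invented.
```python
# Hypothetical test-time training (TTT) loop for an ARC-style task (illustrative only).
import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def solve_with_ttt(base_model, tokenizer, demos, test_input, steps=10, lr=1e-5):
    """Fine-tune a throwaway copy of the model on the task's demo pairs
    (optionally plus augmentations), then predict the test output."""
    model = copy.deepcopy(base_model)          # keep the shared base weights intact
    model.train()
    opt = torch.optim.AdamW(model.parameters(), lr=lr)

    # Tiny per-task training set built from the demonstration input/output grids.
    texts = [f"INPUT: {x}\nOUTPUT: {y}" for x, y in demos]
    for _ in range(steps):
        for t in texts:
            batch = tokenizer(t, return_tensors="pt")
            loss = model(**batch, labels=batch["input_ids"]).loss
            loss.backward()
            opt.step()
            opt.zero_grad()

    # Answer the held-out test input with the task-adapted weights.
    model.eval()
    prompt = tokenizer(f"INPUT: {test_input}\nOUTPUT:", return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**prompt, max_new_tokens=256)
    return tokenizer.decode(out[0], skip_special_tokens=True)

# Example usage (model name and grids are placeholders):
# tok = AutoTokenizer.from_pretrained("some-4b-model")
# lm = AutoModelForCausalLM.from_pretrained("some-4b-model")
# answer = solve_with_ttt(lm, tok,
#                         demos=[("[[1,0],[0,1]]", "[[0,1],[1,0]]")],
#                         test_input="[[1,1],[0,0]]")
```
Because the adapted copy is discarded after each task, the per-task cost stays small, which is how this kind of approach keeps inference around cents per puzzle.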
Source.
AI Post
He says 25% of the revenue will go to the U.S., argues this will boost American jobs and manufacturing, and criticizes Biden for forcing "degraded" chip designs.
He adds that newer NVIDIA chips (Blackwell, Rubin) aren't part of the deal, and that a similar approach will be used for AMD, Intel, and other U.S. chipmakers.
AI Post