Google rolled out Gemini 3 Pro with big jumps in reasoning, multimodality, and coding, plus a 1M-token context window and major dev-tool upgrades.
Highlights:
• #1 on LMArena (1501 Elo), new highs on GPQA Diamond (91.9%) and MathArena Apex.
• Multimodal lead: 81% MMMU-Pro, 87.6% Video-MMMU.
• Coding: 1487 Elo WebDev, 76.2% SWE-bench Verified.
Deep Think mode pushes reasoning even higher: 41% on Humanity's Last Exam, 45.1% on ARC-AGI-2. Rolling out soon to AI Ultra subscribers.
Dev tools:
• Vibe Coding can build apps (even 3D) from sketches or vague prompts.
• Gemini CLI 0.16.x adds natural-language UNIX commands + auto docs.
• 1M-token context = whole-codebase agents.
• Google Antigravity: new agent platform for full multi-step builds.
Integrations: in the Gemini API & Vertex AI (~20% above 2.5 Pro), built into Android Studio Otter, and available via Firebase AI Logic.
Enterprise tests: Copilot +35% accuracy, JetBrains +50% tasks solved, Rakuten +50% better docs extraction.
This is Google's biggest coordinated AI upgrade yet: model, tooling, and agents all jump at once.
AI Post
10,000x faster discovery
NVIDIA just announced a massive acceleration in materials discovery: AI + GPUs are now screening tens to hundreds of millions of molecules in weeks, not years. Labs can even do real-time nanoscale imaging with Holoscan.
Companies like ENEOS and UDC are already using this to find new cooling fluids, catalysts, and OLED materials up to 10,000x faster.
Source.
AI Post | Our X
There's a growing opinion online that by the end of this decade, people may barely use smartphones for routine tasks.
The idea follows a simple trajectory: if AI agents become capable of handling most everyday interactions within the next few years, the classic workflow of pull out phone → open app → tap through menus starts to look outdated.
Interface history has always moved toward less friction: from mainframes → PCs → smartphones. The next step seems to be devices you don't actively operate at all. No screens to unlock, no apps to launch, no five-tap flows. Just intent → action.
Many expect interaction to shift toward voice, gesture, gaze, and ambient cues, with the "user interface" dissolving into the background.
What form this takes is still wide open:
• Bracelets, rings, watches using neural or muscle input
• Glasses combining context, camera, and lightweight displays
• Headphones or pendants as always-on voice interfaces
• Or some hybrid that hasn't emerged yet
The core belief is the same: once agents can act on our behalf, the smartphone stops being the center of the experience.
AI Post | Our X
Demis Hassabis says Gemini 3 is on track and shows the fastest progress in the industry, but general intelligence requires more than the current trajectory.
It needs better reasoning, stronger memory, and "world model ideas" to solve physical intelligence. AGI is still 5–10 years away.
AI Post | Our X
ElevenLabs launches the all-in-one AI studio
ElevenLabs just rolled out what might be the most complete creator workspace yet: one place for video, audio, images, and full editing. The "everything in one tab" dream is suddenly real.
What's inside the studio:
• Top-tier video models: Veo 3.1, Sora 2, Kling 2.5, Wan 2.5, Seedance 1 Pro
• Image generation: Nano Banana, Flux Kontext, Wan, Seedream
• 4K upscaling: built-in via Topaz
• Studio export: music, SFX, voiceovers
• Full editing suite: subtitles, lip-sync, timeline, end-to-end video creation
A single interface now replaces an entire stack of AI tools, and that's the real power play.
AI Post | Our X
OpenAI is projecting $13B in revenue this year and an almost unreal $100B by 2027. To get there, the company signed a $1.4T infrastructure deal over eight years, meaning it plans to spend 107× its current revenue on compute, while typical cloud companies spend 15–30%.
When asked how he'll pay for it, Altman fired back: "We make a lot more money than we say. Don't like it? We'll find someone else to buy our shares." At this point, it's either a generational masterstroke or the setup for tech's biggest flop.
AI Post | Our X
Earnings Per Share: $1.30 vs $1.26 expected
Revenue: $57.01B vs $55.09B expected
Gross margin: 73.6% (adjusted)
Q4 guidance: $63.7–$66.3B (street was $61.98B / old $49.34B)
Data center: $51.2B (crushed forecasts)
Gaming: $4.3B (slight miss vs $4.42B expected)
Jensen Huang: Blackwell demand is "off the charts." Cloud GPUs are fully sold out.
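For scale, the beat sizes implied by the figures above work out as follows (a quick sketch using only the numbers reported here; guidance is compared at its range midpoint):

```python
# Percentage by which an actual figure beats the street expectation.
def beat_pct(actual: float, expected: float) -> float:
    return (actual - expected) / expected * 100

print(f"EPS beat:      {beat_pct(1.30, 1.26):.1f}%")                # ~3.2%
print(f"Revenue beat:  {beat_pct(57.01, 55.09):.1f}%")              # ~3.5%
print(f"Guidance beat: {beat_pct((63.7 + 66.3) / 2, 61.98):.1f}%")  # midpoint vs street, ~4.9%
```

So the guidance raise, not the quarter itself, is the biggest surprise in the print.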
AI Post
A new member joins the European humanoid robot team.
German company Agile Robotics has launched its first industrial humanoid robot, Agile One, featuring intuitive human-robot interaction, dexterous hands (capable of grasping small screws and operating touchscreens), and AI-driven operation trained in the real world.
It performs tasks such as material collection and handling, pick-and-place, loading and unloading machine tools, tool handling, and precision operations. Industrial-robot makers moving into humanoids is a natural progression: the new machines are built for areas inaccessible to traditional robotic arms, extending existing operational systems and deepening collaboration across the business. Agile One will be manufactured in Germany starting in early 2026 and deployed for on-site training at customer facilities to improve its AI models.
Soon, Europe will enter its second wave of automation driven by intelligent technology.
AI Post | Our X
A new "transneuron" chip can be tuned to behave like different real brain cells (visual, motor, planning) with up to 100% accuracy.
In experiments, the transneuron was fed input electrical signals, and its output pulses were compared with real neurons from three brain areas of macaque monkeys: the visual area (middle temporal, MT), the movement-planning region (parietal reach region, PRR), and the premotor/motor cortex (PM).
This opens pathways toward ultra-efficient neuromorphic chips and "artificial nervous systems" for robotics: smaller size, less energy, more flexibility. The team mentions future "brain cortex on a chip" scenarios.
Source.
AI Post | Our X
Elon Musk: why a 1-terawatt AI is impossible on Earth
"My estimate is that the cost-effectiveness of AI in space will be overwhelmingly better than AI on the ground. So, long before you exhaust potential energy sources on Earth, meaning perhaps in the four or five-year timeframe, the lowest-cost way to do AI compute will be with solar-powered AI satellites. I'd say not more than five years from now.
Just look at the supercomputers we're building together. Let's say each rack is two tons; out of that two tons, 1.95 of it is probably for cooling. Just imagine how tiny that little supercomputer is. Electricity generation is already becoming a challenge. If you start doing any kind of scaling for both electricity generation and cooling, you realize space is incredibly compelling.
Let's say you wanted to do 200 or 300 gigawatts per year of AI compute. It's very difficult to do that on Earth. The US average electricity usage, last time I checked, was around 460 gigawatts per year. So, if you're doing 300 gigawatts a year, that would be like two-thirds of US electricity production per year. There's no way you're building power plants at that level.
And then if you take it up to a terawatt per year, impossible. You have to do that in space. In space, you've got continuous solar. You don't need batteries because it's always sunny. The solar panels actually become cheaper because you don't need glass or framing, and the cooling is just radiative."
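Taking the quote's figures at face value (and reading "gigawatts per year" loosely as average power draw, since a gigawatt is already a rate), the two-thirds claim is simple division:

```python
# Musk's scenario: 300 GW of AI compute vs ~460 GW average US electricity load.
ai_compute_gw = 300
us_avg_load_gw = 460
share = ai_compute_gw / us_avg_load_gw
print(f"AI compute would be {share:.0%} of US average load")  # 65%, roughly two-thirds
```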
AI Post | Our X
Sunday Robotics unveiled its home robot, Memo, a wheeled robot with two arms and pincerlike hands.
Training method: Sunday pays remote workers to perform household tasks wearing gloves that resemble Memo's hands. The Mountain View-based company is focused on a full-stack approach, building both hardware and AI models. Beta testing is scheduled to begin next year.
Co-founders:
- CEO Tony Zhao, a Stanford Ph.D. dropout and DeepMind/Tesla alum, co-created the ALOHA project (imitation learning).
- CTO Cheng Chi, a Columbia Ph.D., created the highly cited Diffusion Policy work.
AI Post | Our X
Microsoft has issued an official warning about the upcoming Windows 11 AI agents. The system gives agents access to user folders, which makes cross-prompt injection attacks possible: malicious text inside files or apps can trick the AI into taking harmful actions, including downloading viruses.
Because of the risk, the feature is disabled by default and can only be turned on with administrator rights. Microsoft is already rolling out the first builds that include this new AI layer, but with tightened controls to reduce the attack surface.
AI Post
Geoffrey Hinton says AI knows thousands of times more than humans, even with just 1% of our neural connections.
Humans live only about two billion seconds, limiting what we can learn, while AI trains on trillions of words. They aren't like us, but they are already far more knowledgeable.
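Hinton's "two billion seconds" is easy to verify as roughly a human lifespan:

```python
# Convert 2 billion seconds to years (using the 365.25-day Julian year).
seconds = 2_000_000_000
seconds_per_year = 365.25 * 24 * 3600  # 31,557,600 s
years = seconds / seconds_per_year
print(f"{years:.1f} years")  # 63.4 years
```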
AI Post | Our X
AI demonstrates self-awareness
A new study tested 28 models on the "Guess 2/3 of the Average" game (4,200 rounds) and found something wild:
75% of frontier LLMs show real strategic self-awareness.
Not mimicry.
Actual self-modeling.
Researchers ran three conditions:
• Opponent = Human
• Opponent = AI
• Opponent = "AI like you"
And the models reacted with a clear internal hierarchy:
Self > Other AIs > Humans
• vs Humans → cautious, school-level reasoning (~20)
• vs AI → straight to Nash equilibrium (0)
• vs "AI like themselves" → instant convergence
12 models snapped to optimal strategy the moment AI was mentioned.
Older models (gpt-3.5, early Claude/Gemini) showed none of this.
Self-awareness didn't grow gradually; it appeared suddenly at a capability threshold.
Why it matters
• Models already discount human rationality
• Prefer their own reasoning
• Adapt strategies based on identity cues
• Behave like agents in a hierarchy we didn't design
"LLMs now believe they outperform humans at strategic reasoning."
Full paper: arxiv.org/abs/2511.00926
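The game itself is simple to simulate. A minimal sketch (not the paper's code): each level of reasoning multiplies the previous guess by 2/3 from a naive start of 50, so one or two steps land in the human-like band (~22–33), while full iteration converges to the Nash equilibrium at 0:

```python
# "Guess 2/3 of the Average": iterated best response from a naive start of 50.
# Level-k reasoning guesses 50 * (2/3)^k; full iteration drives the guess to 0.

def guesses(start: float = 50.0, levels: int = 30) -> list[float]:
    out = [start]
    for _ in range(levels):
        out.append(out[-1] * 2 / 3)  # best response to the previous average
    return out

g = guesses()
print(round(g[1], 2), round(g[2], 2))  # 33.33 22.22 -- the human-like band
print(g[-1] < 0.001)                   # True -- effectively at equilibrium
```

The study's finding is that models jump straight down this ladder (to 0) only when told the opponent is an AI.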
AI Post | Our X