OPENAI 🔥: A mention of a new Ultrafast mode briefly appeared on the Codex GitHub repository.
> "The fastest available responses for latency-sensitive work."
Seems like it was an unintended push.
🚨 AI News | TestingCatalog
Sample video and early feedback (quotes from Reddit) > I won't lie, this is one of the best video models I have seen, maybe not *the* best, but a really strong performance. I was particularly impressed by the prompt adherence (except for the one shot with…
GOOGLE 🔥: An upcoming Gemini Omni video model from Google is expected to be much more advanced in video editing, capable of completing tasks like removing watermarks, replacing objects in the video, and more.
It is also likely that Google will release 2 versions of this model, including a Pro variant.
And I assume what we see isn't Pro?
Anime sample
h/t @QuantumFast
Google's Gemini Omni video model surfaces ahead of I/O debut
Leaked Gemini Omni details point to Google unveiling a unified video model at I/O, with strong in-chat editing and remix tools but generation quality trailing Seedance 2. Credit-based limits and possible Flash/Pro tiers also surfaced.
#gemini @testingcatalog
TestingCatalog AI News
Googleβs upcoming Gemini Omni video model briefly surfaced, revealing new video editing features ahead of Google I/O 2026.
Google keeps preparing its upcoming Gemini Omni models for release.
> The Gemini Omni model will be available via APIs as well
> The model will be treated as an Agent, similar to Deep Research in AI Studio
Soon?
Anthropic adds Agent View to Claude Code CLI interface
Anthropicβs Agent View for Claude Code adds a CLI dashboard for managing parallel coding sessions in one place. It shows status, activity, and input needs, supports background jobs, and is available now in Research Preview.
#claude @testingcatalog
Anthropic introduces Agent View for Claude Code, allowing developers to manage parallel coding sessions in a single command-line dashboard.
THINKING MACHINES 🔥: A research preview of a new family of realtime voice models has been announced!
> Today, we're announcing a research preview of interaction models: models that handle interaction natively rather than through external scaffolding.
> Our research preview demonstrates qualitatively new interaction capabilities, as well as state-of-the-art combined performance in intelligence and responsiveness.
A new SOTA?!
OpenAI announces Daybreak initiative around Codex Security
OpenAI launched Daybreak, a cybersecurity program that extends Codex into secure code review, threat modeling, patch validation, and detection support, with verified access, partner integrations, and rollout for defenders and enterprises.
#chatgpt @testingcatalog
OpenAI launches Daybreak, a cybersecurity initiative integrating AI models and Codex Security to help organizations patch vulnerabilities.
Thinking Machines announces new Interaction Voice Models
Thinking Machines unveiled a research preview of multimodal AI models built for real-time collaboration across audio, video, and text, using native time-aware processing, low-latency micro-turns, and background reasoning.
#ai @testingcatalog
What's new? Thinking Machines previewed its AI for real-time native exchange over audio, video, and text.
Anthropic released Agent View in the Claude Code CLI, where users can observe and interact with parallel-running agents.
It looks like preparation for a future in which agents will pursue broader long-term goals. Claude's mobile app is being prepared for that as well.
GOOGLE 🔥: A new Gemini Omni banner has been added to the web build recently.
> Gemini Omni will be an Agent that can combine text, images, and videos.
> Users will be able to add themselves to different scenes. As we know, AI Avatars (Likeness) are coming to Gemini as well, and Gemini Omni will likely be connected to that.
> The "Likeness" feature will likely be tightly coupled to the mobile apps (as it used to work in Sora).
What's the chance we will get it today during the Android show?
h/t @Thomasguka
Gemini Omni Agent will launch along with Avatars support
Hidden Gemini web code shows Gemini Omni as an Agent for conversational video creation, combining text, images, clips, and avatars.
#gemini @testingcatalog
A new Gemini Omni banner hints at creating videos using images, text, and clips, plus integrating personalized avatars.
Android Show has begun 🍿
https://www.youtube.com/watch?v=dXCCleAddEA
YouTube
🎬 Watch The Android Show | I/O Edition 2026
The future of Android starts now! Join the team for a first look at the exciting updates redefining Android's biggest year yet #TheAndroidShow
Learn more → https://www.android.com/io-2026
And don't forget to check out the Developers Cut for all of the…
GOOGLE 🔥: A new Android Intelligence has been introduced during Android Show 2026!
- A whole new sleek design!
- Automated multi-step tasks across Android apps
- Gemini in Chrome gets Browser Use
- Automated form filling
- "Rambler" to turn voice notes into text
- Custom Gen UI Widgets
I need a Pixel now.
META 🔥: Muse Spark will be available within a new Voice Mode and a Live Camera view on the Meta AI app.
There, it can generate images, show places on the map, pull data from Reels, and more.
Additionally, new features were added to Shopping Mode, including the ability to search Facebook Marketplace.
> Muse Spark is starting to gradually roll out on Ray-Ban Meta and Oakley Meta glasses in the US and Canada over the next few weeks, and on Meta Ray-Ban Display this summer.
> Muse Spark is starting to bring the same intelligence to Meta AI across WhatsApp, Instagram, Facebook, Messenger, and Threads, in places like search bars, group chats, posts, and more.
Meta to release Muse Spark in Voice Mode and Meta Glasses
Meta launched Muse Spark to power Meta AI across its apps and glasses, adding faster voice chat, shopping support, and live camera recognition. The model supports multimodal reasoning and contextual assistance, starting in the US and Canada.
#meta @testingcatalog
What's new? Meta launched Muse Spark to power Meta AI across apps like WhatsApp, Instagram, Facebook, and Messenger; it offers faster voice replies, live camera recognition, and a shopping mode.
Google brings Gemini Intelligence automation to Android devices
Google introduced Gemini Intelligence for Android, adding context-aware app actions, Chrome assistance, smarter Autofill, and custom widgets. It launches first on new Samsung and Pixel phones, with user control and privacy built in.
#gemini @testingcatalog
Google is launching Gemini Intelligence on Android, starting with Galaxy and Pixel phones this summer, adding proactive AI automations.
thehype launches 24/7 AI-powered radio for founders
thehype launched a 24/7 AI-run radio station for founders, builders, and researchers, delivering rapid AI news, funding, tooling, and community analysis through five AI hosts with distinct perspectives and persistent memory.
#sponsored @testingcatalog
theHype Radio is a 24/7 AI-run news station covering breaking news, funding, trends, and community takes for founders and builders.
Anthropic is testing a new model selector for Claude on mobile, moving it directly to the prompt area.
> Bottom navigation tabs are being tested as well.
> Connectors Discovery is coming to mobile too, where Claude will suggest the best connector for a given task.