New version of the fast video generator LTX-2
By the way, it's already available on fal.ai.
Three versions of the video model:
• Fast
• Pro
• Ultra (coming soon)
text-to-video and image-to-video
LTX-2 generates video in native 4K, 1440p, 1080p, and 720p.
No upscaling. No post hacks. Just clean, production-ready output. I don't believe it!
Audio and lipsync included
Generation at 25 or 50(!) fps. Clip lengths of 6, 8, or 10 seconds, with 15-second generation coming soon.
For now, only landscape videos; portrait mode is coming later.
$0.04 per second
And as usual, they claim to be the fastest video generators in the world.
And by the way:
Full model weights and tooling will be released to the open-source community on GitHub in late November 2025, enabling developers, researchers, and studios to experiment, fine-tune, and build freely.
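At the quoted rate of $0.04 per second of output video, per-clip cost is simple arithmetic; here is a quick sketch (rate and clip lengths taken from the post above, everything else is illustrative):

```python
# Per-clip cost at the rate quoted in the post ($0.04 per second of video).
PRICE_PER_SECOND = 0.04

def clip_cost(seconds: float) -> float:
    """Cost in USD for a clip of the given duration, rounded to cents."""
    return round(seconds * PRICE_PER_SECOND, 2)

# Supported clip lengths (6/8/10 s) plus the upcoming 15 s option.
for duration in (6, 8, 10, 15):
    print(f"{duration}s clip: ${clip_cost(duration):.2f}")
```

So even the longest upcoming 15-second clip would come in well under a dollar, which is what makes the "fastest and cheapest" positioning plausible.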
Imagine typing a simple idea and watching it come alive as a full cinematic scene with acting, lighting, motion, and emotion. No crew. No animation tools. Just pure AI filmmaking. Welcome to Vidu Q2, the next-generation AI model that transforms imagination into real moving cinema. In this episode, we explore what makes Vidu Q2 revolutionary: from its Reference-to-Video system that blends up to 7 reference images into one coherent video, to its upgraded Image-to-Video engine that brings unmatched motion realism, stable faces, and emotional depth. Vidu Q2 doesn't just generate clips, it performs them. With real camera work, micro-expressions, and fluid storytelling, this model is redefining what "AI video generation" means.
https://www.youtube.com/watch?v=hEbtpJIYcLo
YouTube
Vidu Q2: The AI That Makes Movies for You
Website: https://www.vidu.com/
HeyGen decided to step into a completely different field: Motion Design.
They just rolled out a new feature called Motion Designer.
Watch the main promo video; the whole marketing message is basically: "No need for motion designers, no need for animators, no experience required, no After Effects needed either."
But no.
I've gathered some examples: the guys are confusing real motion design with simple moving shapes or animated PowerPoint slides.
Maybe it works for very basic transitions between talking heads made in HeyGen, but this is not Motion Design.
Minimax 2.3 is out!
Two models:
• Hailuo 2.3 - cinematic realism and professional-grade visual fidelity
• Hailuo 2.3 Fast - quicker, lighter, more affordable
They promise 4 free video generations every day at https://hailuoai.video.
The little Unitree G1, weighing 35 kg, is pulling a car that weighs 1,400 kg.
Sure, it's in a parking lot and not in a forest, so it's not that hard, but still, pretty cool!
MiniMax Music 2.0
• Tracks are FIVE minutes long
• Precise control over musical arrangement and instruments is announced
• A cappella and duets (interesting)
• And of course, it's all "studio quality and stuff"
Available on the website: https://www.minimax.io/audio/music
And on Fal: https://fal.ai/models/fal-ai/minimax-music/v2
But of course, they're still far from Suno.
OmniX: From Unified Panoramic Generation and Perception to Graphics-Ready 3D Scenes
Here's an open-source project from Kling.
They don't just generate panoramas; along the way, they also extract various properties: depth maps, normals, albedo, roughness, and metallic. But the most interesting part is that they claim to be able to convert panoramas into 3D, specifically into regular meshes that can then be loaded into Blender. Although this feature is marked as completed on GitHub, there's a note in small print saying it's still rough around the edges, sort of a beta version.
Project: https://yukun-huang.github.io/OmniX/
Ancher Just Changed How I Read News Forever
Discover Ancher: Your AI-Powered News Assistant
Say goodbye to chaos and endless scrolling!
Ancher helps you read smarter by filtering the noise, highlighting what truly matters, and boosting your productivity with built-in smart mini tools.
Perfect for creators, professionals, and anyone tired of information overload.
Start here: https://ancher.ai/
Use the special code AIBROS2025 for 50% OFF, valid until December 31, 2025!
https://www.youtube.com/watch?v=iaZbB_DwVS0
Another image generator.
Microsoft has released its own image generator: MAI-Image-1.
You can actually try it out for free and test its limits here: https://www.bing.com/images/
Everything looks sweet and polished on their website and Twitter, but I poked around a bit.
1. Very strange choice of aspect ratios: 1:1, 2:3, 3:2. That's it.
2. The character limit for prompts is quite strict: too short.
3. It follows prompts well, but the quality... well, test it yourself.
4. Censorship is harsh.
5. You can choose between DALL-E 3(!) and GPT-4o models.
6. There are Edit Image and Animate Image buttons (10 fast generations; in Standard mode it takes several hours per video).
7. There's also video generation: that's Sora 2, 480p, 5 seconds, 10 generations.
1. What was shown at Tesla's first robot presentation
2. What was the final result
Group chats are coming soon to ChatGPT.
Here's roughly how they'll work:
A "Start Group Chat" button will let you create a link and share it with others so they can join the group chat, which will appear in a new "Group Chats" section in the sidebar.
Anyone with the link can join your group chat and will be able to see previous messages in that chat.
Custom instructions for group chats are separate from your personal chat settings. You can choose whether ChatGPT responds automatically or only when someone mentions it in the group chat, but your personal ChatGPT memory is never used in group chats.
Standard features include: adding reactions, replying to specific messages, typing indicators, file uploads, image generation, and web browsing.