Vortex Next Gen Trends
I've seen several generations of 3D displays.
But what Samsung showed at CES 2026 looks pretty killer.
The thickness of the TV itself is especially impressive.
๐Ÿ‘454โค159๐ŸŽ‰148๐Ÿ”ฅ138
Higgsfield has showcased a very serious relighting tool.
From some of the demos it was clear that it works extremely well with portraits, but then I also found this one where entire scenes are being relit!
It looks genuinely impressive. The available tools include selecting light direction, lighting setups, temperature, intensity, color, and shadow control.
Of course, you won't be able to relight an entire film scene to fully professional, exactly-as-you-want standards, but for low-budget production and advertising it's more than good enough.
๐Ÿ‘1.15K๐ŸŽ‰398๐Ÿ”ฅ373โค363
Qwen-Image-Edit-2511-Multiple-Angles-LoRA
An interesting tool for camera angles, equipped with a full ControlNet.
On the downside, the image quality isn't great: the idea is cool, but the execution falls short.

https://huggingface.co/spaces/multimodalart/qwen-image-multiple-angles-3d-camera
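If you want to poke at it from Python rather than the web UI, a minimal sketch with gradio_client looks roughly like this. Only the Space ID is taken from the link above; the endpoint name and argument layout are assumptions, so let view_api() tell you the real schema first.
```python
# Minimal sketch, assuming you just want to drive the Hugging Face Space from Python.
# Only the Space ID comes from the post; the endpoint name and arguments below
# are hypothetical -- run view_api() to see what the Space actually expects.
from gradio_client import Client, handle_file

client = Client("multimodalart/qwen-image-multiple-angles-3d-camera")
client.view_api()  # prints the Space's real endpoints and their parameters

# Hypothetical call shape once the real schema is known:
# result = client.predict(
#     handle_file("input.jpg"),                     # source image
#     "orbit the camera 45 degrees to the left",    # assumed angle instruction
#     api_name="/predict",                          # replace with the name view_api() reports
# )
# print(result)
```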
๐Ÿ‘648๐Ÿ”ฅ246๐ŸŽ‰239โค229
This media is not supported in your browser
VIEW IN TELEGRAM
Higgsfield "What's Next?"
Higgsfield seems to be aiming to completely remove the traditional scripting component from content creation. That is, there will still be a "script," but it will be written, or rather assembled, from AI-generated fragments. And not in text form, but directly as video snippets.
Higgs's new feature, "What Happens Next," lets you upload a SINGLE image, after which the AI suggests EIGHT video (!) variations of how the events could unfold. You choose the one you like, watch it to the end, and then once again pick one of eight possible continuations.
๐Ÿ‘956๐Ÿ”ฅ9โค8๐ŸŽ‰8
GLM-Image
We've got a new open-source image generator, and technically it's quite interesting. Earlier, Zhipu released the open-source LLM GLM, which crushed benchmarks and impressed many (you can try it at https://chat.z.ai/). Rumors of an image model followed, and now it's here.
It's already available on FAL:
https://fal.ai/models/fal-ai/glm-image
https://fal.ai/models/fal-ai/glm-image/image-to-image
The key idea is separating "thinking" from rendering. A 9B-parameter autoregressive model interprets complex, knowledge-heavy prompts, then passes them to a 7B-parameter diffusion decoder for rendering. With a custom Glyph Encoder, it aims to render text accurately inside images. Editing and style transfer are included out of the box. They claim quality on par with top diffusion models and better performance on complex tasks.
In practice, results so far look modest. Editing features need more testing and don't seem very strong yet.
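For anyone who wants to test it through FAL from code, here is a minimal sketch using the official fal_client package. The endpoint ID is taken from the links above, but the "prompt" argument and the response shape are assumptions based on typical FAL image endpoints, not GLM-Image's published schema.
```python
# Minimal sketch: text-to-image through FAL's hosted GLM-Image endpoint.
# Requires `pip install fal-client` and a FAL_KEY environment variable.
# The argument name "prompt" is an assumption; check the model page for the real schema.
import fal_client

result = fal_client.subscribe(
    "fal-ai/glm-image",
    arguments={
        "prompt": "a shop window with a neon sign that reads 'GLM-Image', photorealistic",
    },
)
print(result)  # inspect the returned dict for the generated image URL(s)
```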
๐Ÿ‘925๐Ÿ”ฅ301โค276๐ŸŽ‰273
Google is building its own Atlas, but only inside Chrome, so there's no need to install any weird crap.
The AutoBrowse feature is supposed to turn Chrome into an agentic browser.
๐Ÿ‘109๐ŸŽ‰31โค26๐Ÿ”ฅ23
This media is not supported in your browser
VIEW IN TELEGRAM
Hunyuan3D has been updated to version 3.1.
You'd still need to take a look at the mesh itself, but it looks really polished.
Probably the most advanced 3D generator available today.
๐Ÿ‘336๐Ÿ”ฅ125๐ŸŽ‰110โค100
This media is not supported in your browser
VIEW IN TELEGRAM
Wan 2.6 Image to Video Flash
So far, it works only from the first frame.
Video length: up to 15 seconds.
You can upload your own audio, and audio generation is also available.
There is a shot_type option: single shot or multiple shots within one video.
Very fast.
https://fal.ai/models/wan/v2.6/image-to-video/flash
https://wavespeed.ai/models/alibaba/wan-2.6/image-to-video-flash
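A minimal sketch for calling the FAL endpoint from Python is below. The model ID comes from the link above and shot_type is mentioned in the post, but the exact argument names and accepted values are assumptions, so check the model page before relying on them.
```python
# Minimal sketch: first-frame image-to-video through FAL's Wan 2.6 Flash endpoint.
# Requires `pip install fal-client` and a FAL_KEY environment variable.
# Argument names below (image_url, shot_type, audio_url) are assumptions based on
# typical FAL video endpoints; the post only confirms that a shot_type option and
# optional audio exist.
import fal_client

result = fal_client.subscribe(
    "wan/v2.6/image-to-video/flash",
    arguments={
        "image_url": "https://example.com/first_frame.jpg",  # the single starting frame
        "prompt": "the camera slowly pushes in while the subject turns around",
        "shot_type": "single",  # or "multiple" for several shots in one video (per the post)
        # "audio_url": "https://example.com/voiceover.mp3",  # optional: your own audio
    },
)
print(result)  # the returned dict should contain the generated video's URL
```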
๐Ÿ‘385๐Ÿ”ฅ129๐ŸŽ‰125โค122
This media is not supported in your browser
VIEW IN TELEGRAM
Runway 4.5 Image to Video
A few days ago, Runway released an update. The main focus is on the Image-to-Video model. On their Twitter and website they show the best examples, but I took real generations and even found a comparison with Kling and Seedance.
I can't say it's some kind of revolution. The quality is not better than Kling. Length: 5-10 seconds. 720p.
๐Ÿ‘370โค117๐Ÿ”ฅ107๐ŸŽ‰106
And a quick look at the Video LMArena rankings.
Veo wipes the floor with everyone, especially in Text-to-Video.
In Image-to-Video, wan2.5 takes 3rd place, Seedance is 6th, and Kling 2.6 is 7th.
You can see that the amount of data is still pretty limited. Runway 4 is hanging out somewhere near the bottom, and for some reason mochi-1 from a year ago has snuck into the rankings.
But Veo's hegemony will be very hard to beat.
LTX doesn't show up in the charts at all.
https://lmarena.ai/ru/leaderboard/text-to-video
https://lmarena.ai/ru/leaderboard/image-to-video
๐Ÿ‘942๐Ÿ”ฅ291๐ŸŽ‰282โค268
This media is not supported in your browser
VIEW IN TELEGRAM
Suno Sounds
Suno has quietly announced the beta of its SFX and Loops: creating sound effects that go beyond music. The model is still rough, which is why it's in beta and available only to Pro and Premier users.
How to find it: on Desktop, when choosing between the Simple and Custom Create modes, there should be a dropdown under Custom that lets you select Sounds (Beta).
It's interesting that they're stepping into territory usually occupied by completely different startups with features like these.
๐Ÿ‘250๐ŸŽ‰83๐Ÿ”ฅ74โค64
This media is not supported in your browser
VIEW IN TELEGRAM
Lucy 2.0: fire, real time, and zero censorship.
The idea itself isn't exactly new; we've already seen it in various Live Portraits, Infinitoks, and of course Kling's Motion Control. You upload an image of a character, take a video where you (or a more talented actor/character) mug for the camera, and boom: your image starts mugging the same way. In 3D this is called retargeting.
But!
Here all of this happens in REAL TIME. That is: you take an image, a webcam, and off you go streaming at 24-30 FPS with minimal latency (they claim near-zero latency, but in reality, factoring in the internet, I'd guess 1-2 seconds).
Check out the videos โ€” and remember, this is real time.
Try it here: https://lucy.decart.ai/
๐Ÿ‘253๐Ÿ”ฅ94โค92๐ŸŽ‰81