I've seen several generations of 3D displays.
But what Samsung showed at CES 2026 looks pretty killer.
The thickness of the TV itself is especially impressive.
Higgsfield has showcased a very serious relighting tool.
From some of the demos it was clear that it works extremely well with portraits, but then I also found this one where entire scenes are being relit!
It looks genuinely impressive. The available tools include selecting light direction, lighting setups, temperature, intensity, color, and shadow control.
Of course, you won't be able to relight a whole scene for a film in a fully professional, exactly-the-way-you-want manner, but for low-budget production and advertising it's more than good enough.
Qwen-Image-Edit-2511-Multiple-Angles-LoRA
An interesting tool for camera angles, equipped with a full ControlNet.
On the downside, the image quality isn't great: the idea is cool, but the execution falls short.
https://huggingface.co/spaces/multimodalart/qwen-image-multiple-angles-3d-camera
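If you'd rather drive the Space from a script than click through the web UI, Gradio demos can usually be called with gradio_client. A minimal sketch, assuming a single-image input plus a text control for the angle; the endpoint name and parameter list are my guesses, so check the Space's "Use via API" panel for the real signature:

```python
# Hypothetical sketch: calling the HF Space programmatically via gradio_client.
from gradio_client import Client, handle_file

client = Client("multimodalart/qwen-image-multiple-angles-3d-camera")
result = client.predict(
    handle_file("input.png"),             # source image to re-shoot
    "low angle, 45 degrees to the left",  # assumed camera-angle control
    api_name="/generate",                 # assumed endpoint name
)
print(result)  # path(s) to the generated view(s)
```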
Higgsfield "What's Next?"
Higgsfield seem to be aiming to completely remove the traditional scripting component from content creation. That is, there will still be a "script," but it will be written, or rather assembled, from AI-generated fragments. And not in text form, but directly as video snippets.
Higgsfield's new feature, "What Happens Next," lets you upload a SINGLE image, after which the AI suggests EIGHT (!) video variations of how the events could unfold. You choose the one you like, watch it to the end, and then once again pick one of eight possible continuations.
GLM-Image
We've got a new open-source image generator, and technically it's quite interesting. Earlier, Zhipu released the open-source LLM GLM, which crushed benchmarks and impressed many (you can try it at https://chat.z.ai/). Rumors of an image model followed, and now it's here.
It's already available on FAL:
https://fal.ai/models/fal-ai/glm-image
https://fal.ai/models/fal-ai/glm-image/image-to-image
The key idea is separating "thinking" from rendering. A 9B-parameter autoregressive model interprets complex, knowledge-heavy prompts, then passes them to a 7B-parameter diffusion decoder for rendering. With a custom Glyph Encoder, it aims to render text accurately inside images. Editing and style transfer are included out of the box. They claim quality on par with top diffusion models and better performance on complex tasks.
In practice, results so far look modest. Editing features need more testing and don't seem very strong yet.
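If you want to run your own tests, FAL's Python client is the quickest route. A minimal sketch of a text-to-image call; the argument names and response shape here are assumptions, so check the schema on the model page:

```python
# Hypothetical sketch: generating an image with GLM-Image via fal_client.
import fal_client

result = fal_client.subscribe(
    "fal-ai/glm-image",
    arguments={
        # A text-heavy prompt, since accurate in-image text is the model's main pitch.
        "prompt": "a neon shop sign that reads 'OPEN 24 HOURS', rainy street, photorealistic",
    },
)
print(result["images"][0]["url"])  # assumed response shape
```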
Hunyuan3D has been updated to version 3.1.
You still need to inspect the mesh up close, but it looks really polished.
Probably the most advanced 3D generator available today.
Wan 2.6 Image to Video Flash
So far, it works only from the first frame.
Video length: up to 15 seconds.
You can upload your own audio, or have audio generated for you.
There is a shot_type option: a single shot or multiple shots within one video.
Very fast.
https://fal.ai/models/wan/v2.6/image-to-video/flash
https://wavespeed.ai/models/alibaba/wan-2.6/image-to-video-flash
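For reference, a minimal sketch of what a call could look like through fal_client. shot_type and the 15-second cap come straight from the feature list above; the exact field names (image_url, duration, audio_url) are my assumptions, so verify them against the schema on the model page:

```python
# Hypothetical sketch: Wan 2.6 Image-to-Video Flash via fal_client.
import fal_client

result = fal_client.subscribe(
    "wan/v2.6/image-to-video/flash",
    arguments={
        "image_url": "https://example.com/first_frame.png",  # first frame (the only mode so far)
        "prompt": "slow push-in as rain starts to fall",
        "shot_type": "multiple",  # "single" or "multiple" shots within one video
        "duration": 15,           # up to 15 seconds (field name assumed)
        # "audio_url": "https://example.com/track.mp3",  # your own track instead of generated audio (assumed)
    },
)
print(result["video"]["url"])  # assumed response shape
```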
Runway 4.5 Image to Video
A few days ago, Runway released an update. The main focus is on the Image-to-Video model. On their Twitter and website they show the best examples, but I took real generations and even found a comparison with Kling and Seedance.
I can't say it's some kind of revolution. The quality is not better than Kling. Length: 5-10 seconds. 720p.
And a quick look at the rankings on Video LMArena.
Veo wipes the floor with everyone, especially in Text-to-Video.
In Image-to-Video, wan2.5 takes 3rd place, Seedance is 6th, and Kling 2.6 is 7th.
You can see that the amount of data is still pretty limited. Runway 4 is hanging out somewhere near the bottom, and for some reason mochi-1 from a year ago has snuck into the rankings.
But Veo's hegemony will be very hard to beat.
LTX doesn't show up in the charts at all.
https://lmarena.ai/ru/leaderboard/text-to-video
https://lmarena.ai/ru/leaderboard/image-to-video
Suno Sounds
Suno has quietly announced the beta of its SFX and Loops: sound-effect creation that goes beyond music. The model is still rough, which is why it's in beta and available only to Pro and Premier users.
How to find it: on Desktop, when choosing between the Simple and Custom Create modes, there should be a dropdown under Custom that lets you select Sounds (Beta).
It's interesting that, with features like these, they're stepping into territory usually occupied by completely different startups.
Lucy 2.0: fire, real-time, and zero censorship.
The idea itself isn't exactly new; we've already seen it in various Live Portraits, Infinitoks, and of course Kling's Motion Control. You upload an image of a character, take a video where you (or a more talented actor/character) mug for the camera, and boom: your image starts mugging in the same way. In 3D this is called retargeting.
But!
Here all of this happens in REAL TIME. That is: you take an image, a webcam, and off you go streaming at 24-30 FPS with minimal latency (they claim near-zero latency, but in reality, factoring in the internet, I'd guess 1-2 seconds).
Check out the videos, and remember: this is real time.
Try it here: https://lucy.decart.ai/