Seedance Omni References.
PROMPT 1: Change the eye to look like a snake eye
PROMPT 2: Change the weather to a thunderstorm with heavy rain
PROMPT 3: Change the time period to the 1920's
PROMPT 4: replace the cow in [reference] with the bear in [reference]
PROMPT 5: As the woman walks, a massive alien spacecraft falls from the sky and crashes into the ground. The spacecraft is far in the distance, but it's so massive that we see it crashing into the earth.
PROMPT 6: replace the asteroids with meatballs covered in marinara sauce. Replace the smoke and fire with marinara sauce and spaghetti noodles. Rather than dirt and debris, have the impacts produce marinara sauce and spaghetti noodles.
PROMPT 7: A massive creature emerges from the clouds in the distance. The creature is so massive that it passes through the clouds.
Kling Motion Control was introduced in 3.0.
1080p and 30 seconds (!)
Try to tell which one here is generated :)
New LTX 2.3
You can try it here: https://app.ltx.studio/ltx-2-playground/t2v
Interestingly, the parameters include duration up to 20(!) seconds, 4K resolution, and even 50 fps. Camera motion has also been broken out into its own settings parameter with lots of options.
The weights have already been released: https://huggingface.co/Lightricks/LTX-2.3
Support has also been added to Comfy workflow templates: https://github.com/Comfy-Org/workflow_templates/blob/main/templates/video_ltx2_3_t2v.json
Animation generation in After Effects using GPT-5.4.
The Atom plugin is used here. https://tryatom.ai/
Under the hood, it runs MCP connected to After Effects.
The plugin also has a version with Gemini and lots of other fun stuff.
Runway Characters
Unlike HeyGen and Hedra, this is actually real-time: you can have a conversation with them live (check out the examples).
And this is the first example of this level of quality in real time.
It's also a very direct response to a huge market demand: give our chatbots a face, we want someone visible talking to the customer.
It's clear why it's API-only: everything runs on their servers, and I'm guessing the price will be huge.
But the quality for real-time is 🔥.
The new Rotate Object feature in the latest Photoshop beta.
It looks good, considering the input is just a flat image.
Then you can move on to lighting using Harmonize.
In Grok you can now add up to 7 references.
All of them appear in the video with decent consistency.
Hitem 3D has been updated to version V2.
Portrait generation from Nanabanana into 3D already looks quite appealing; see the second video.
One of the new features is Photo to STL for printing models on a 3D printer: it's essentially a packaged pipeline that combines generation and export to STL.
It's probably the most highly detailed 3D generator right now.
It can create models with up to 2,000,000 polygons.
https://www.hitem3d.ai/
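A side note on the STL export and the polygon-count claim above: binary STL files store their triangle count right in the file, so you can check how many polygons an exported model actually has with a few lines of Python. This is a minimal sketch based on the public binary STL layout (80-byte header, little-endian uint32 triangle count, 50 bytes per facet); it is not tied to Hitem3D's own tooling.

```python
import struct

def stl_triangle_count(data: bytes) -> int:
    """Return the triangle (polygon) count of a binary STL blob.

    Binary STL layout: an 80-byte header, then a little-endian
    uint32 triangle count, then 50 bytes per triangle record.
    """
    if len(data) < 84:
        raise ValueError("too short to be a binary STL")
    (count,) = struct.unpack_from("<I", data, 80)
    # Sanity check: total size must match header + count * 50 bytes.
    # ASCII STL files (which start with "solid ...") will fail this.
    if len(data) != 84 + count * 50:
        raise ValueError("size mismatch; possibly an ASCII STL")
    return count
```

For a real exported model you would read the file with `open(path, "rb").read()` and pass the bytes in; a "2,000,000 polygons" model should report a count of about that magnitude.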
ByteDance, the parent company of TikTok, has postponed the global launch of Seedance 2.0. The reason is copyright claims from major Hollywood studios and streaming platforms. Reuters, citing The Information, reports that the launch, originally planned for the end of March, has been put on hold, while the company has not yet publicly provided a detailed confirmation of these reports.
Midjourney V8
The V8 alpha has been released. In short, it's undertrained, still struggles with fingers, there's no editing yet, and the video model has been postponed (according to the developers).
I was going to write about the new features and how Midjourney is falling behind, but I found a great tweet that sums it up perfectly and matches my thoughts.
Midjourney launched the V8 alpha on March 17, 2026. http://alpha.midjourney.com is positioned as the fastest model yet, with better prompt understanding, improved text rendering, support for moodboards, sref, multiple aspect ratios, native HD 2K output, and image generation about 4-5x faster than previous versions.
The problem is that, based on my tests, these promises don't hold up. Incorrect hands, distorted proportions, weak anatomy, and overall quality that often feels worse than V7. What stands out most is that in a rapidly evolving AI image market, releasing such a raw alpha risks looking more like a response to competition than real progress.