In Grok you can now add up to 7 references.
All of them appear in the video with decent consistency.
Hitem 3D has been updated to version V2.
This portrait, generated in Nanabanana and converted to 3D, already looks quite appealing. See the second video.
One of the new features is Photo to STL for printing models on a 3D printer – essentially a packaged pipeline that combines generation and export to STL.
It's probably the most highly detailed 3D generator right now.
It can create models with up to 2,000,000 polygons.
https://www.hitem3d.ai/
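For a sense of what a 2,000,000-polygon export means in practice, here is a back-of-envelope size estimate for a binary STL file, which stores an 80-byte header, a 4-byte triangle count, and 50 bytes per triangle (the formula follows the standard binary STL layout; the helper name is just for illustration):

```python
# Estimate the on-disk size of a binary STL file from its triangle count.
# Binary STL layout: 80-byte header + 4-byte count + 50 bytes per triangle
# (12 x 4-byte floats for normal and vertices, plus a 2-byte attribute).

def stl_size_bytes(triangles: int) -> int:
    return 80 + 4 + 50 * triangles

size = stl_size_bytes(2_000_000)
print(f"{size / 1024**2:.1f} MiB")  # a 2M-polygon export is roughly 95.4 MiB
```

So a maxed-out Hitem 3D model lands near 100 MB before any mesh decimation, which is worth keeping in mind before sending it to a slicer.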
ByteDance, the parent company of TikTok, has postponed the global launch of Seedance 2.0. The reason is copyright claims from major Hollywood studios and streaming platforms. Reuters, citing The Information, reports that the launch, originally planned for the end of March, has been put on hold, while the company has not yet publicly confirmed these reports in detail.
Midjourney V8
The V8 alpha has been released. In short, it's undertrained, still struggles with fingers, there's no editing yet, and the video model has been postponed (according to the developers).
I was going to write about the new features and how Midjourney is falling behind, but I found a great tweet that sums it up perfectly and matches my thoughts:
Midjourney launched the V8 alpha on March 17, 2026. http://alpha.midjourney.com is positioned as the fastest model yet, with better prompt understanding, improved text rendering, support for moodboards, sref, multiple aspect ratios, native HD 2K output, and image generation about 4-5x faster than previous versions.
The problem is that, based on my tests, these promises don't hold up. Incorrect hands, distorted proportions, weak anatomy, and overall quality that often feels worse than V7. What stands out most is that in a rapidly evolving AI image market, releasing such a raw alpha risks looking more like a response to competition than real progress.
Magnific Precision is a new video upscaler.
On Freepik, in the Video Upscale section, they've released a new model – Precision.
While the previous model, Magnific Creative Video, was more geared toward NPR content (anime, 3D, cartoons, stop motion), Precision is positioned specifically as an upscaler for photorealistic footage (and all your generations from Siden and Kling).
You simply upload a video, set the resolution, and adjust a single parameter – Strength, which controls how much detail is added.
Notable details:
• Maximum of 7200 frames and 600 MB
• Up to 5 minutes at 24 fps
• Supports 4K
• Generates a half-second preview before processing the full video (convenient)
There's a suspicion it might struggle a bit with highly detailed or noisy textures.
Let's try it out.
https://www.freepik.com/ai/video-upscaler
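The two duration limits line up exactly: 5 minutes at 24 fps is 300 s x 24 fps = 7200 frames. A quick pre-check before uploading could look like this (the limits come from the post; the helper function itself is hypothetical, not part of any Freepik API):

```python
# Hypothetical pre-flight check against the Precision upscaler's stated limits.
MAX_FRAMES = 7200     # per the post: 5 min x 60 s x 24 fps
MAX_SIZE_MB = 600

def fits_precision_limits(duration_s: float, fps: float, size_mb: float) -> bool:
    """Return True if a clip fits both the frame-count and file-size caps."""
    return duration_s * fps <= MAX_FRAMES and size_mb <= MAX_SIZE_MB

print(fits_precision_limits(300, 24, 550))  # 5 min at 24 fps: True
print(fits_precision_limits(300, 30, 550))  # at 30 fps the same 5 min overshoots: False
```

Note the cap is on frames, not seconds, so higher-fps footage gets proportionally less wall-clock time.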
WeryAI – a platform that streamlines the entire creative process, offering everything from 4K upscaling to AI-driven lip-sync and face replacement. Imagine creating a cinematic skincare ad, producing a cyberpunk short, or localizing content for multiple markets, all without switching tabs or hopping between different tools.
Watch full video https://youtu.be/bBcP_GSqkn0
Get WeryAI here https://www.weryai.com?channel=AIARENA
Avatars by Pika Labs
Positioned as "AI Selves."
You take photos, record your voice, provide context – and it can chat on your behalf across different platforms.
https://www.pika.me/
VIBE CODING IS REAL!
I installed skills in my AI like apps… now it builds features while I drink coffee. #AI #aiapps #AIcoding #verdent
Get it here https://www.verdent.ai/?id=700695
Most people use AI to make things look cool. Nice renders, catchy product descriptions, maybe a trend guess or two. But the moment you try turning that idea into a real physical product, everything breaks. That's where Accio 2.0 comes in. It helps close the gap between a fun prompt and something a factory can actually produce, with trend analysis, production-ready tech packs, bill of materials, dimensional drawings, sourcing, and factory-grade RFQs all in one workflow.
In this video, I walk through a real example, a Coffee Capsule Gashapon Machine, to show how Accio 2.0 takes an idea from concept to prototype-ready. Instead of bouncing between ten tabs and guessing your way through product development, you get one system built for people who actually want to manufacture. If you want to move from "I have an idea" to "I have a product in production," this is worth checking out. Use my invite code INFIDGT to skip the waitlist and get instant access.
https://youtu.be/KoaFZVwA3n8
Wanna try Accio Work for free? Use my exclusive invite code INFIDGT to skip the waitlist and get instant access. Check out Accio 2.0 here: https://www.accio.com/work?src=p_igkol_Infinite-Digital-YT
Veo 3.1 Lite
Half the price of Veo 3.1 Fast, at the same processing speed.
If you look at the API pricing: https://blog.google/innovation-and-ai/technology/ai/veo-3-1-lite/
It's already getting closer to Chinese competitors.
However, the video length (4, 6, or 8 seconds) falls short of Seedance with its 15 seconds.
There is Text-to-Video and Image-to-Video at 720p or 1080p.
Available not only in the Gemini API and Google AI Studio, but also in Flow.
Each generation costs 10 credits.
As for quality – that needs testing.
Prices for Veo 3.1 Fast will also be reduced on April 7.
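Since the post describes a flat 10 credits per generation regardless of clip length, budgeting is simple arithmetic. A small sketch (the credit rate and the 4/6/8-second options are from the post; the function names are illustrative):

```python
# Budget sketch for Veo 3.1 Lite generations, assuming the flat
# 10-credits-per-generation rate described in the post.
CREDITS_PER_GENERATION = 10

def credits_needed(num_clips: int) -> int:
    return num_clips * CREDITS_PER_GENERATION

def max_footage_seconds(num_clips: int, clip_len: int = 8) -> int:
    # The post lists 4, 6, and 8 seconds as the supported clip lengths.
    assert clip_len in (4, 6, 8), "Veo 3.1 Lite clips are 4, 6, or 8 seconds"
    return num_clips * clip_len

print(credits_needed(25))           # 250 credits
print(max_footage_seconds(25, 8))   # 200 seconds of raw footage
```

Under a flat rate, always generating 8-second clips maximizes footage per credit.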
Wan 2.7 Image
They promised to drop Wan 2.7 Video in March, but instead released it in April – and only the Image model.
Four models:
• Wan 2.7 Text-to-Image
• Wan 2.7 Image Edit
• Wan 2.7 Text-to-Image Pro – 4K, more expensive
• Wan 2.7 Image Edit Pro – 4K, more expensive
Up to 9 reference images supported as input.
Improvements include better faces, HEX color codes, small text rendering, and up to 12 consistent images from a single prompt.
Let's jump in and test:
https://create.wan.video/generate/image/generate?model=wan2.7-pro
https://wavespeed.ai/collections/wan-2.7
https://modelstudio.console.alibabacloud.com/ap-southeast-1?tab=api#/api/?type=model&url=3026980
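If you hit the API rather than the web UI, the two limits above (9 reference images in, 12 consistent images out) are worth validating client-side. A sketch of a payload builder – note that every field name here is an assumption for illustration; check the Model Studio API reference for the real schema:

```python
# Hypothetical payload builder for Wan 2.7 Image Edit. Field names
# ("prompt", "reference_images", "n") are illustrative assumptions,
# NOT the documented schema. The numeric limits come from the post.
MAX_REFERENCES = 9   # up to 9 reference images as input
MAX_OUTPUTS = 12     # up to 12 consistent images from one prompt

def build_payload(prompt: str, references: list,
                  n_outputs: int = 1, pro: bool = False) -> dict:
    if len(references) > MAX_REFERENCES:
        raise ValueError(f"Wan 2.7 accepts at most {MAX_REFERENCES} reference images")
    if not 1 <= n_outputs <= MAX_OUTPUTS:
        raise ValueError(f"n_outputs must be between 1 and {MAX_OUTPUTS}")
    return {
        "model": "wan2.7-image-edit-pro" if pro else "wan2.7-image-edit",
        "prompt": prompt,
        "reference_images": references,
        "n": n_outputs,
    }

payload = build_payload("same character, new pose", ["ref1.png"], n_outputs=4)
print(payload["model"])  # wan2.7-image-edit
```

Catching the limits before the request saves a round trip (and, on a paid Pro model, credits).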