Vortex Next Gen Trends
Vortex Channel is part of the BlockChainWorld 5,000,000-member AI & Crypto COMMUNITY 💎 ALL ABOUT CRYPTO, TOKENS, AI, TAP GAMES, MEME COINS, PLAY AND EARN, DEFI, P2E, NFT, AI TOOLS, WEB3 & BITCOIN FORECASTS!

Order promo 👉 PR@blockchainworld.ai
Lucy
You’ll laugh, but we’ve got a new video generator. And a fast one at that.
Interestingly, it comes from the Decart.ai team, who recently dropped Mirage — a real-time generator (or rather, re-skinner) of worlds, and Oasis 2.0 — a Minecraft world generator. Which leads us to think that world generators and video generators will go hand in hand (like Google’s Veo and Genie) and complement each other.
The Lucy 14B generator is pretty hefty in terms of parameters (there’s also a 5B version). They claim it can generate a 5-second clip in 6 seconds, but in reality it’s around 12 seconds — which, you’ll agree, is still not bad. Some sources mention clips up to 10 seconds long, but on Fal it seems limited to 5.
720p.
https://fal.ai/models/decart/lucy-14b/image-to-video/playground
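For those who want to script it rather than use the playground, here is a minimal sketch of calling Lucy via fal's Python queue client. The endpoint id is taken from the playground URL above; the argument names and the result schema are assumptions based on fal's typical image-to-video models, so check the model page before relying on them.

```python
# Hypothetical sketch of calling Decart's Lucy 14B image-to-video on fal.
# Endpoint id comes from the playground URL; everything else is assumed.
ENDPOINT = "decart/lucy-14b/image-to-video"

def build_request(image_url: str, prompt: str) -> dict:
    # Argument names are an assumption modeled on fal's usual i2v schemas.
    return {"image_url": image_url, "prompt": prompt}

def generate(image_url: str, prompt: str) -> str:
    """Submit the job via fal's queue client.

    Requires `pip install fal-client` and a FAL_KEY environment variable.
    The result schema (result["video"]["url"]) is an assumption.
    """
    import fal_client  # imported lazily so the sketch loads without the dep
    result = fal_client.subscribe(ENDPOINT, arguments=build_request(image_url, prompt))
    return result["video"]["url"]
```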
Guys, Adobe has added the ability to edit text on images in its Firefly Boards (beta) 🔥
I tried it out, it works!
Meanwhile, the Higgs team launched Higgsfield Fashion Factory.
• You choose a background
• Create a character
• Generate an initial photo set
• Upload clothes
• Click “change outfit”
• Get a full photoshoot in the new clothes
The Higgs team is constantly experimenting with packaging their existing features into mini-products.
Unfortunately, the Factory can’t be tested for free.
https://higgsfield.ai/image/soul?modal-fashion-factory=true
Luma Labs has released a new video model – Ray 3.
The new model can generate in 1080p 16-bit HDR, with a duration of 5 or 9 seconds.
The cherry-picked samples look nice, although it’s still clear that the level of detail isn’t very high, especially in distant shots; faces of people in the background also get blurry. The marketing team, as usual, likes to sell what doesn’t quite exist yet (the release page makes some rather bold claims about the model’s quality, reasoning, and other capabilities).
There’s also a Draft Mode, which generates faster but only 5-second clips in terrible quality (640 × 352). You can only generate for free in this mode.
Alibaba has released the video model Wan 2.2 14B Animate for transferring animation to characters or removing them from a frame. It captures subtle movements of facial expressions and even fingers.
There are two approaches:
• Animation mode — generates an animated video with a character based on a provided image and a reference video with the desired movement.
• Replacement mode — removes a character from the frame and replaces them with the one from the image, seamlessly blending them into the environment with recalculated lighting.
Suno Version 5 soon
All that’s known: it’s coming in roughly two weeks, a month at most.
Hopefully, it will be just as cool as the previous updates.
Kling 2.5 Update
1. Smarter prompt following & better timing. The model now handles complex, multi-step instructions with stronger temporal logic, enabling richer stories, scene transitions, and character interactions. Static images can be turned into dynamic videos with coherent flow.
2. Smoother, more stable motion. With reinforcement learning and improved training data, the model produces natural, high-energy character and camera movements while avoiding glitches or distortions in complex scenes.
3. Style consistency. Advanced conditioning ensures every frame matches the reference image’s colors, lighting, and atmosphere, even in fast, dynamic videos.
4. Lower cost, higher value. 5 seconds of 1080p now costs 25 credits (down from 35). That’s 1,000+ videos/month on Ultra and 320 on Premier using 2.5 Turbo.
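The per-plan numbers can be sanity-checked with a quick sketch. Note that the monthly credit allowances below are assumptions reverse-engineered from the quoted video counts, not published figures:

```python
# Back-of-envelope check of the Kling 2.5 Turbo pricing claim.
# The credit allowances are ASSUMED, inferred from the post's
# "1,000+ videos on Ultra, 320 on Premier" figures.
COST_PER_5S_1080P = 25  # credits per clip, down from 35

ASSUMED_MONTHLY_CREDITS = {
    "Ultra": 26_000,   # assumption consistent with "1,000+ videos/month"
    "Premier": 8_000,  # assumption consistent with "320 videos/month"
}

def videos_per_month(credits: int, cost: int = COST_PER_5S_1080P) -> int:
    # Integer division: partial clips can't be generated.
    return credits // cost

for plan, credits in ASSUMED_MONTHLY_CREDITS.items():
    old = videos_per_month(credits, cost=35)
    new = videos_per_month(credits)
    print(f"{plan}: {old} -> {new} clips/month")
```

At 35 credits the same assumed allowances would yield roughly 742 and 228 clips, so the price drop is a meaningful bump in throughput.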
JetPave AI - I Helped My Friend Fix His PCB With a Secret AI Tool

My friend Romain was struggling with endless PCB errors until I showed him a secret AI tool called JetPave. In this video, I reveal how JetPave takes you from idea to factory-ready design with requirements analysis, function definition, solution selection, and schematics you can download as a .sch file and open in KiCad. It even checks market need, picks components, and eliminates costly human mistakes. I put it to the test by designing a smart cat feeder, and the results were insane.
Go to https://www.jetpave.com/ to start!

https://www.youtube.com/watch?v=mxUI8kjXfFA
An interesting use of Nano Banana: the famous infinite zoom.
Notably, it’s all packaged into an app in Google AI Studio, where you can upload your own image and create this zoom over its parts by simply selecting the desired area. Careful, it’s highly addictive: you can spend a long time watching where it all leads.
Try it here (the generation isn’t very fast, the video is heavily sped up): https://aistudio.google.com/apps/bundled/enhance?showPreview=true&showAssistant=true
Google has rolled out another interesting thing — Learn Your Way.
The idea: you take a boring chapter from a dull textbook, feed it to the neural nets, and in return you get the same chapter, but as if it were made specifically for you and your interests.
Basketball fan? Newton’s laws are explained through the ball and the hoop. Love art? Economics turns into art auctions.
And it’s not just about swapping in new examples. It generates different formats: a mind map for visual learners, audio lessons in a “teacher–student” style, interactive timelines, tests that adapt to your mistakes, and so on.
Kling is the best at gymnastics. Trained like a rex.
The others are funny. Minimax is in second place.
Leaderboards.
We read: Kling 2.5 Turbo beat everyone, including Veo3. Well, okay, the model is indeed good. But then we look further down:
• Minimax Standard strongly outperforms Minimax Pro
• Lumalabs Ray3, for some reason, takes 3rd place and beats Veo3 Fast
• Seedance is hanging at the bottom
Am I the only one confused here? Who exactly are these leaderboards being tested on?
OpenAI showed AgentKit — remember the Agent inside ChatGPT? Now you can plug that into your own app and build any kind of agent: it can click through UIs, or just work via MCP / API, and so on. You can see all the tools on the second screenshot; evals are also built in.
SORA2 - on Vortex!

The all-new Sora 2 Pro brings next-level realism and full cinematic control to your AI videos. Ultra-detailed motion, synced audio, and seamless camera moves turn every generation into a mini movie. Available now exclusively inside Vortex for pro creators. No limits, no watermarks!
Create now https://vortex.channel/create.php
Official Prompting Guide for Sora 2 by OpenAI
Structure:
General Scene and Description:
• Describe the scene in simple language;
• Specify characters, costumes, set design, weather, and other details;
• Be as detailed as necessary to create a video that matches your vision.
Cinematography:
• Shot type and camera angle: specify, for example, “wide shot, eye level” or “close-up, slight tilt from behind”;
• Mood: define the overall tone, e.g., “cinematic and tense,” “playful with tension,” “luxurious anticipation”;
• Lens and filtering: you can specify the type of lens and filters, e.g., “32mm / 50mm spherical lenses, light CPL filter”;
• Lighting and palette: describe the quality of light and main colors, e.g., “soft daylight from a window with warm fill light from a lamp and a cool reflection from the hallway.”
Actions:
• List clear, specific actions or gestures as bullet points;
• Try to describe actions as distinct moments or beats tied to time.
Dialogue:
• If there’s dialogue in the shot, include short, natural lines;
• Keep them concise so the timing fits the clip’s duration.
Background Sounds:
• Describe ambient sounds that help set the rhythm or atmosphere;
• For example, “hum of coffee machines and murmuring voices” or “rustling paper and footsteps.”

https://cookbook.openai.com/examples/sora/sora2_prompting_guide
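Putting the structure above together, a prompt following the guide might look like this (the scene itself is an illustrative example of ours, not taken from the guide):

```
Scene: A small Parisian café at dawn. A barista in a linen apron sets up
the counter; rain streaks the front window. Empty chairs, warm interior.

Cinematography:
- Shot: wide shot, eye level, slow push-in
- Mood: quiet, cinematic anticipation
- Lens: 32mm spherical, light CPL filter
- Lighting: soft daylight from the window, warm fill from a counter lamp

Actions:
- 0–2s: the barista wipes the counter and glances at the window
- 2–4s: she flips the door sign to "Ouvert"
- 4–5s: steam rises from the espresso machine

Dialogue:
- Barista (softly): "Another rainy one."

Background sounds: hum of the espresso machine, rain on glass, distant traffic.
```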
Veo 3.1
Leaks from Twitter on October 8, 2025, indicate an upcoming Veo 3.1 update for Google’s video generation model, spotted in the Higgsfield AI waitlist and internal codebases such as Vertex AI. The rumored improvements include enhanced character consistency, video durations of up to one minute, scene builders, and cinematic presets. The AI community views these as steps aimed at challenging OpenAI’s Sora, although Google has not provided official confirmation.
Vivix, the World's First Real-Time Long Video Model
Sounds like clickbait, but they really do generate a 5-second video in 3 seconds. Though there are some caveats.
The real clickbait is this: Vivix Turbo — create videos up to 1 minute long in under 3 seconds, with 9 variations at once.
You only get those 9 variations on a paid plan.
But on the free tier, it works as advertised — it generates a 5-second video in 3 seconds.
Then the fun begins — it says the video is 15 or even 50 seconds long, and once you click on it, it starts generating for a long time (I didn’t wait for it to finish).
The quality is low, 512p.
But Will Smith did slurp the spaghetti properly.
It only supports image-to-video.
https://vivix.ai/labs/turbo