Official Prompting Guide for Sora 2 by OpenAI
Structure:
— General Scene and Description:
• Describe the scene in simple language;
• Specify characters, costumes, set design, weather, and other details;
• Be as detailed as necessary to create a video that matches your vision.
— Cinematography:
• Shot type and camera angle: specify, for example, “wide shot, eye level” or “close-up, slight tilt from behind”;
• Mood: define the overall tone, e.g., “cinematic and tense,” “playful with tension,” “luxurious anticipation”;
• Lens and filtering: you can specify the type of lens and filters, e.g., “32mm / 50mm spherical lenses, light CPL filter”;
• Lighting and palette: describe the quality of light and main colors, e.g., “soft daylight from a window with warm fill light from a lamp and a cool reflection from the hallway.”
— Actions:
• List clear, specific actions or gestures as bullet points;
• Try to describe actions as distinct moments or beats tied to time.
— Dialogue:
• If there’s dialogue in the shot, include short, natural lines;
• Keep them concise so the timing fits the clip’s duration.
— Background Sounds:
• Describe ambient sounds that help set the rhythm or atmosphere;
• For example, “hum of coffee machines and murmuring voices” or “rustling paper and footsteps.”
https://cookbook.openai.com/examples/sora/sora2_prompting_guide
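Putting the structure above together, a prompt following the guide's sections might look like this (a purely illustrative example, not taken from the guide itself):

```
Scene: A small 1950s diner at dawn. A tired waitress in a mint-green
uniform wipes the counter; rain streaks the front window.

Cinematography: wide shot, eye level; cinematic and quiet mood;
soft daylight from the window with warm fill light from a counter lamp.

Actions:
- She sets down the rag and pours a cup of coffee (0-3 s).
- She slides the cup toward an empty stool and glances at the door (3-6 s).

Dialogue:
- Waitress: "He's late again."

Background sounds: rain on glass, the hum of a ceiling fan, a distant radio.
```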
OpenAI
Sora 2 Prompting Guide
This guide has been updated to reflect the latest Sora API capabilities, including:
Character references (objects and animals) – Upload a ch
Veo 3.1
Leaks from Twitter on October 8, 2025, indicate an upcoming Veo 3.1 update for Google’s video generation model, spotted in the Higgsfield AI waitlist and internal codebases such as Vertex AI. The rumored improvements include enhanced character consistency, video durations of up to one minute, scene builders, and cinematic presets. The AI community views these as steps aimed at challenging OpenAI’s Sora, although Google has not provided official confirmation.
Vivix, the World's First Real-Time Long Video Model
Sounds like clickbait, but they really do generate a 5-second video in 3 seconds. Though there are some caveats.
The real clickbait is this: Vivix Turbo — create videos up to 1 minute long in under 3 seconds, with 9 variations at once.
You only get those 9 variations on a paid plan.
But on the free tier, it works as advertised — it generates a 5-second video in 3 seconds.
Then the fun begins — it says the video is 15 or even 50 seconds long, and once you click on it, it starts generating for a long time (I didn’t wait for it to finish).
The quality is low, 512p.
But Will Smith did slurp the spaghetti properly.
It only supports image-to-video.
https://vivix.ai/labs/turbo
Google Flow has introduced video editing features.
One of the cool ones: object insertion — you just choose where you want to place the object and describe it. Flow will generate the object and seamlessly integrate it into your clip.
WOW! A first: Longshot makes full 1-minute AI movies from one prompt, with consistent characters. Forget those 5-second clips, it’s time for real storytelling.
With Longshot, you can generate a full 1-minute cinematic video complete with music, dialogue, and consistent characters — all from a single prompt. Type your story, upload your hero, and watch it come alive like a movie.
Try it now: https://vortex.channel/longshot
Google teases the incredible capabilities of Veo 3.1 — a sort of video Nano Banana. Check out the videos; I still don’t understand how they’re adding or removing objects from existing footage. Ingredients? They’re sending everyone to read this doc:
Introducing Veo 3.1 and advanced capabilities in Flow
And in November, we’re expecting Nano Banana 2.
New version of the fast video generator LTX-2
By the way, it’s already available on fal.ai.
Three versions of the video model:
• Fast
• Pro
• Ultra (coming soon)
text-to-video and image-to-video
LTX-2 generates video in native 4K, 1440p, 1080p, and 720p.
No upscaling. No post hacks. Just clean, production-ready output. I don’t believe it!
Audio and lipsync included
Generation at 25 or 50(!) fps, in 6-, 8-, or 10-second clips; 15-second generation is coming soon.
For now, only landscape videos, portrait mode coming later.
$0.04 per second
And as usual, they claim to be the fastest video generators in the world.
And by the way:
Full model weights and tooling will be released to the open-source community on GitHub in late November 2025, enabling developers, researchers, and studios to experiment, fine-tune, and build freely.
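At the quoted $0.04 per second, clip cost scales linearly with duration. A quick sketch (the price and durations come from the post above; the assumption that pricing is per second of output video regardless of 25 vs 50 fps is mine):

```python
# Estimate LTX-2 generation cost at the quoted $0.04 per second of output.
# Assumption: price is per second of video, independent of frame rate.
PRICE_PER_SECOND = 0.04  # USD, as quoted in the post

def clip_cost(duration_s: float) -> float:
    """Cost in USD for a single generated clip of the given duration."""
    return round(duration_s * PRICE_PER_SECOND, 2)

# Supported durations (15 s is announced as coming soon).
for d in (6, 8, 10, 15):
    print(f"{d:2d} s clip -> ${clip_cost(d):.2f}")
```

So even the upcoming 15-second clip would run about $0.60 per generation.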
Imagine typing a simple idea — and watching it come alive as a full cinematic scene with acting, lighting, motion, and emotion. No crew. No animation tools. Just pure AI filmmaking. Welcome to Vidu Q2, the next-generation AI model that transforms imagination into real moving cinema.
In this episode, we explore what makes Vidu Q2 revolutionary — from its Reference-to-Video system that blends up to 7 reference images into one coherent video, to its upgraded Image-to-Video engine that brings unmatched motion realism, stable faces, and emotional depth.
Vidu Q2 doesn’t just generate clips — it performs them. With real camera work, micro-expressions, and fluid storytelling, this model is redefining what “AI video generation” means.
https://www.youtube.com/watch?v=hEbtpJIYcLo
YouTube
Vidu Q2: The AI That Makes Movies for You
Website: https://www.vidu.com/
HeyGen decided to step into a completely different field — Motion Design.
They just rolled out a new feature called Motion Designer.
Watch the main promo video — the whole marketing message is basically: “No need for motion designers, no need for animators, no experience required, no After Effects needed either.”
But no.
I’ve gathered some examples — the guys are confusing real motion design with simple moving shapes or animated PowerPoint slides.
Maybe it works for very basic transitions between talking heads made in HeyGen, but this is not Motion Design.
Minimax 2.3 is out!
Two models:
• Hailuo 2.3 — cinematic realism and professional-grade visual fidelity
• Hailuo 2.3 Fast — quicker, lighter, more affordable
They promise 4 free video generations every day at https://hailuoai.video.