I can't think of anything that can't be done with neural networks!
Imagine: you can bring any picture to life just by uploading it to Generative Image Dynamics. And that's not all, the pictures become interactive: if you drag an object in the image with your mouse and release it, it will keep moving by inertia. Awesome!
You can create something like what's shown in the video here.
#service #useful
How to make a movie cover in Midjourney
1. Create an image using this query structure (a small helper that fills the template is sketched after the steps):
[YOUR OBJECT], in the style of movie cover, [YOUR COLORS], [YOUR STYLE].
You can also use "in the style of movie poster" instead of "in the style of movie cover".
Prompt for the attached picture:
astronaut, in the style of movie cover, light indigo and dark beige, photo, muted tones
2. Add text using Photoshop.
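If you batch-generate covers, it can help to assemble the prompt programmatically. The helper below is plain Python with no external dependencies; the function name and example values are purely illustrative.
```python
def movie_cover_prompt(obj: str, colors: str, style: str,
                       poster: bool = False) -> str:
    """Fill the movie-cover prompt template from the guide above."""
    cover_kind = "movie poster" if poster else "movie cover"
    return f"{obj}, in the style of {cover_kind}, {colors}, {style}"

# Reproduces the example prompt from the post:
print(movie_cover_prompt("astronaut",
                         "light indigo and dark beige",
                         "photo, muted tones"))
# -> astronaut, in the style of movie cover, light indigo and dark beige, photo, muted tones
```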
#guide #midjourney
Hugging Face has shown off a new text-to-image neural network, Würstchen.
"Another picture generator?" you say. "There are tons of those out there already."
Yes, but Würstchen can generate images much faster than models like Stable Diffusion XL, while using far less memory.
"How so?" you ask. The point is that during training the new AI compresses the image extremely aggressively: it achieves 42x spatial compression, while other image generators use 4-8x. Würstchen compresses in two stages: Stage A is a VQGAN and Stage B is a diffusion autoencoder. Stage C is then trained in this highly compressed space.
You can play with the demo here.
At this rate, we may soon be able to generate pictures even on a smartwatch.
#service
"Another picture generator?" - you say. "There are tons of these picture generators out there."
Yes, but WΓΌrstchen can generate images much faster than models like Stable Diffusion XL, while using a lot less memory.
"How so?" - you ask. And the fact is that when trained, the new AI extremely compresses the image - it provides 42 times the spatial compression! While other picture generators use 4-8-fold spatial compression. WΓΌrstchen uses two stages for compression: Stage A is VQGAN, and Stage B is a diffusion autoencoder. And then there is Stage C, in which the neural network is trained in a highly compressed space.
You can play the demo here.
At this rate, we may be able to generate pictures even on a smart watch
#service
β€2β1π1π₯1
Stability AI has something new: they've unveiled their Stable Audio neural network.
As the name implies, the neural network generates music and sounds from a text prompt.
It was trained on data from the online music library AudioSparx.
The free tier lets you generate 20 tracks per month, each up to 45 seconds long.
#service
OpenAI showed DALL-E 3
DALL-E 3 does a much better job than its predecessor at creating images that accurately match complex prompts.
For example, DALL-E 3 can accurately depict a scene with specific objects and the relationships between them (see the attached picture).
DALL-E 3 is far superior to DALL-E 2 at rendering text inside the image and human details such as hands.
Version 3 is in a preliminary research release and will be available to ChatGPT Plus and Enterprise customers in October.
Well, we have to admit that faithfully following detailed, specific descriptions is indeed a step forward.
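The post only mentions access through ChatGPT, but if OpenAI later exposes DALL-E 3 through its Images API, a call could look like the sketch below. The model id "dall-e-3" and its availability there are my assumptions, not something announced here.
```python
# Hypothetical sketch: generating an image via the OpenAI Python client (v1).
# Assumes: pip install openai, OPENAI_API_KEY set, and that a "dall-e-3"
# model id is (or becomes) available through the Images API.
from openai import OpenAI

client = OpenAI()
result = client.images.generate(
    model="dall-e-3",  # assumed model id
    prompt="a hedgehog reading a newspaper in a cozy cafe, watercolor",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # link to the generated image
```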
#service
These are the camera capabilities in Runway
In Gen2, you can choose the direction and intensity of camera movement in the new director mode.
#service
Singaporean developers have presented NExT-GPT, a multimodal model that not only understands text, images, audio, and video files, but is also able to output all of them.
The project page is on GitHub; the code isn't shared yet, but there is a demo.
This is where things are headed: neural networks that understand and output everything.
#service
Game of Thrones author George R. R. Martin is afraid of ChatGPT and is suing OpenAI.
He doesn't want the chatbot to learn from his work, and other writers have joined him. The lawsuit states that OpenAI copied their works without permission and fed the material into ChatGPT's language models.
The authors say they could lose income, because everyone will start generating similar works with AI, and they will get no royalties for it.
Or maybe George Martin is worrying over nothing? Perhaps he's just afraid of competing with ChatGPT to finish the book?
Midjourney will add 3D scene generation; an example has already been shown.
It also became known that the neural network will get a web version and will no longer require Discord in the future.
No release date has been announced yet.
#news
Cinematic landscapes in Midjourney
Prompt:
[Cinematic Expedition] [here write the name of the place, for example Vottovaara in Karelia] [detailed composition, muted colors, ektar magazine aesthetic] --ar 21:9 --c 10 --style raw
#guide #Midjourney
Were you looking for a PNG image without a background, but what you actually got was a plain JPG? UnfakePNG AI will save your nerves.
It works simply: feed the neural network a fake PNG and download a real PNG with a transparent background. And yes, it's completely free.
They also have a free plugin for Figma.
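If you just want to check whether a file is a "fake" PNG (no real transparency) before fixing it, a quick local check with Pillow is enough. A minimal sketch, assuming Pillow is installed; the function name is mine.
```python
# Minimal sketch: detect a "fake" PNG, i.e. a .png file with no usable transparency.
# Assumes: pip install Pillow
from PIL import Image

def has_transparency(path: str) -> bool:
    """Return True if the image actually carries transparency."""
    img = Image.open(path)
    if img.mode in ("RGBA", "LA"):
        # Alpha channel present, but it may still be fully opaque.
        alpha = img.getchannel("A")
        return alpha.getextrema()[0] < 255
    # Palette images can mark a transparent color in their metadata.
    return "transparency" in img.info

print(has_transparency("logo.png"))  # False -> a "fake" PNG with a baked-in background
```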
#service #useful
AI Landing Page Generator is a free, simple one-prompt landing page generator.
You can edit the result or download it in HTML format.
It is suitable for quickly creating a landing page skeleton and then polishing the design further.
#service
The formula for the perfect ChatGPT prompt in just a few steps
1. Task
Start with an action verb indicating the desired result.
For example: "Generate a table of fruits that can be eaten in the summer."
2. Context
Offer enough relevant information by covering background, success criteria, and environment.
For example: "I'm a content creator talking about artificial intelligence on Telegram."
3. Examples
Including examples or frameworks in the prompt increases the quality of the result.
For example: "Give me design ideas for my website, specifying an object, color, action, and style. For example: 'A blue robot sitting on a chair in a futuristic style'."
4. Persona
Decide who you want the AI to mimic, including its experience and role.
For example: "You are a recruiter who has been working in HR for 20 years."
5. Format
Describe the desired output format, which can be paragraphs, an email, etc.
For example: "Present the answer as a table with 3 columns: Task Type, Ease, Urgency."
6. Tone
Specify the desired tone, style, or mood of the AI response.
For example: "Use a formal, professional tone for the response."
A short sketch that strings all six parts together follows below.
#guide #chatgpt
Create multiple cinematic scenes at once with a single prompt in Midjourney
1. Add "multi-panel compositions, in the style of movie still" to the promt.
For example, this promt:
2. Zoom in on the image using the AI upscaler.
3. Divide the image into frames of equal size.
#guide #Midjourney
1. Add "multi-panel compositions, in the style of movie still" to the promt.
For example, this promt:
astronaut, cinematic photo, multi-panel compositions, in the style of movie still, in the style of atmospheric shots, ocean light indigo and dark beige --ar 16:9
Most of the results will be divided into different scenes in the same style.2. Zoom in on the image using the AI upscaler.
3. Divide the image into frames of equal size.
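That last step is easy to automate locally. A minimal sketch using Pillow; the panel counts and file names are mine and should be adjusted to the layout Midjourney actually produced.
```python
# Minimal sketch: cut an upscaled multi-panel image into equal frames with Pillow.
# Assumes: pip install Pillow; adjust cols/rows to match the generated layout.
from PIL import Image

def split_into_panels(path: str, cols: int, rows: int, out_prefix: str = "panel") -> None:
    img = Image.open(path)
    tile_w, tile_h = img.width // cols, img.height // rows
    for r in range(rows):
        for c in range(cols):
            box = (c * tile_w, r * tile_h, (c + 1) * tile_w, (r + 1) * tile_h)
            img.crop(box).save(f"{out_prefix}_{r}_{c}.png")

split_into_panels("upscaled_movie_still.png", cols=3, rows=2)
```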
#guide #Midjourney
AIQRHub - Unreal QR Codes
A service for creating cool QR codes with the help of a neural network - the output is real art, not the usual chaotic set of dots.
There are templates to choose from, and in the settings you can tweak individual elements of the final result. Up to 5 generations are free of charge; beyond that it's $9.8 per month.
#service
This is just a crazy innovation from Pika Labs!
Now you can write your text in the prompt and it will be displayed in the video!
Only 5 fonts are available for now.
#service