Midjourney has added 2x and 4x upscale!
You can also apply it to older images via the /show jobid command.
When upscaling, the upscaler tries to keep the details as close to the original image as possible.
First picture: 1456 x 816
Second picture: 2912 x 1632
#news #Midjourney
#dalle3 - opinion and what's next for visual models
A long time ago (a year ago, lol), DALL·E 2 set the bar for quality in image generation, but then lost the media race to Midjourney (on quality) and Stable Diffusion (on flexibility).
And now, OpenAI is back in the visual modeling game and setting the bar again.
First of all, the level of prompt understanding, and what AI bloggers call coherency: roughly speaking, the logical and visual integrity of the image.
From field tests: what took me hours to achieve in Midjourney and Stable Diffusion, DALL·E 3 gave me in one attempt.
What you write is what you get.
What's next?
Midjourney is about to release version six, which, from what I've heard, will have far more stylistic flexibility (not just "very, very pretty") and that same coherency, plus some semblance of ControlNet. If I were them, I would definitely move toward more control, because otherwise they will end up releasing a DALL·E 3 clone.
Stable Diffusion remains the people's favorite, because enthusiasts keep extending it in every direction, now with the new XL generation. Last year's 1.5 model is gradually handing over the reins.
So if you haven't tried it yet, I suggest https://www.bing.com/create
#dalle3
Midjourney has its own app
It is developed by the Niji team and is called Niji Journey.
There are four models available in the app: niji V4 and V5, plus the base V4 and V5.
In addition to generating images, there is a feed of images that users generate. You can download any of them.
There are 20 free generations. Available on iOS and Android.
#service
Nvidia has released an extension for Stable Diffusion that doubles SD performance
This is achieved by utilizing tensor cores in NVIDIA RTX GPUs.
So you need an RTX-series card with at least 8 GB of VRAM, 16 GB of system RAM, and the new 537.58 driver installed.
Install the extension in SD as usual: Extensions > Install from URL > paste this link https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT.
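For command-line users, the "Install from URL" step amounts to cloning the extension into the WebUI's extensions folder. A minimal sketch, assuming a default stable-diffusion-webui checkout (`WEBUI_DIR` is my placeholder; adjust it to your setup):

```shell
# Manual equivalent of Extensions > Install from URL.
# WEBUI_DIR is an assumption: point it at your stable-diffusion-webui folder.
WEBUI_DIR="${WEBUI_DIR:-$HOME/stable-diffusion-webui}"
EXT_URL="https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT"
TARGET="$WEBUI_DIR/extensions/$(basename "$EXT_URL")"
mkdir -p "$WEBUI_DIR/extensions"
# Clone only if not already present; restart the WebUI afterwards so it loads.
[ -d "$TARGET" ] || git clone "$EXT_URL" "$TARGET"
```

Restart the WebUI after installing so the extension is picked up.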
Thanks, Nvidia.
#service #news
The AudioSep AI separates the sounds in an audio file
For example, if you need to keep only guitar in a track, no problem.
You can listen to examples here.
And you can upload and edit your own audio in the demo here.
#service
Riffusion music AI has been updated
It recently received a $4 million investment, after which it went viral on Twitter.
The interface is very simple: you write a prompt for the music and, if needed, the lyrics. Generation is pretty fast, and the vocals sound great.
Each request produces three results, all free to download. So far the interface shows no hint of paid access, so take advantage of it.
#service