new models for prompt generation - Qwen3
While I don't provide inferencing services anymore, I do like to train models. I took a base model that does well on the UGI leaderboards (it's my favorite Qwen3 model, because it's hard to uncap a thinking model). It's small enough to run on a potato, but it sucks at writing prompts. I'm lazy, so I want to give it an idea and get 1... maybe 10 prompts generated for me. They also shouldn't read as stupid to an image generation model; the base model, though abliterated, couldn't figure that out.
So here's the first cut that solves the problem. I compared the base model with the tuned model, and the tuned one is much, much better at writing prompts. It's subjective, so I read the outputs myself. I was happy.
The safetensor version https://huggingface.co/goonsai-com/Qwen3-gabliterated-image-generation
GGUF version: https://huggingface.co/goonsai-com/Qwen3-gabliterated-image-generation-gguf
This stuff isn't even hard anymore, but it's hard in other ways.
I'd love to hear from you if it works for video prompts as well as it does for image prompts. The way I do this is to give it an instruction around the idea:
```
You have to write image generation prompts for images 1 to 4 with the following concepts. Each prompt is independent of context to the image generation model.
{story or premise or idea}
```
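If you want to script this instead of chatting with it, here's a minimal sketch using llama-cpp-python against the GGUF build. The model filename, quant, and sampling settings are assumptions on my part, not something shipped with the repo:

```python
# Minimal sketch (assumptions, not part of the release): drive the GGUF
# build with llama-cpp-python. Substitute whatever quant file you actually
# downloaded from the HF repo above.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-gabliterated-image-generation.Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,
)

idea = "a lighthouse keeper discovers a bioluminescent tide at midnight"

instruction = (
    "You have to write image generation prompts for images 1 to 4 "
    "with the following concepts. Each prompt is independent of context "
    "to the image generation model.\n\n" + idea
)

# Chat-style completion; the tuned model returns the numbered prompts as text.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": instruction}],
    max_tokens=1024,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```

Any OpenAI-compatible server (llama.cpp's llama-server, LM Studio, etc.) should work the same way, with the instruction above as the user message.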
https://redd.it/1sdvlan
@rStableDiffusion
Wan 2.2-based model with weird saturation/hue changes on anime video generation
https://redd.it/1sdzt08
@rStableDiffusion
MagiHuman now on Wan2GP
It's out, people. What kind of gens are you getting out of it?
https://huggingface.co/DeepBeepMeep/MagiHuman
https://redd.it/1se5o3k
@rStableDiffusion
I spent 3 months evolving SmartGallery into a free, professional, local-first DAM. v2.11 launches on April 9th
https://preview.redd.it/btvzkruzemtg1.png?width=1899&format=png&auto=webp&s=3891b8f2a7df98942a0643eb649e623f817211ae
Hi everyone!
Many of you know SmartGallery as a standalone gallery for ComfyUI. For the last 3 months, I have been working to turn it into a complete Digital Asset Manager (DAM) for AI creators.
I just launched the new website with the full documentation and feature list of the upcoming v2.11: [https://smartgallerydam.com](https://smartgallerydam.com)
The new v2.11 with all the DAM features will be officially released this Thursday, April 9th.
Important note on versions: if you visit my GitHub repo today, you will find the current v1.55. It is a solid, functional standalone gallery: [https://github.com/biagiomaf/smart-comfyui-gallery](https://github.com/biagiomaf/smart-comfyui-gallery)
I would love to get some early feedback on the features before the official push on Thursday. Does this look like something that would fit your workflow?
Don't worry: all your current setup and database data will work perfectly in the new version, always free and open source.
https://redd.it/1se8sfd
@rStableDiffusion
Inpainting with reference for LTX-2.3 (MR2V)
Hey everyone, today I’m sharing an experimental IC LoRA I trained for LTX-2.3. It allows you to do reference-based inpainting inside a masked region in video.
This LoRA is still experimental, so don’t expect something fully polished yet, but it already works pretty well — especially when the prompt contains enough detail and the mask is large enough to properly fit the object you want to place.
I’m sharing everything here for anyone who wants to test it:
Hugging Face repo:
https://huggingface.co/Alissonerdx/LTX-LoRAs
Direct model download:
https://huggingface.co/Alissonerdx/LTX-LoRAs/blob/main/ltx23_inpaint_masked_r2v_rank32_v1_3000steps.safetensors
Workflow:
https://huggingface.co/Alissonerdx/LTX-LoRAs/blob/main/workflows/ltx23_masked_ref_inpaint_v1.json
Civitai page:
https://civitai.com/models/2484952
It can also work as text-to-video if you use a blank reference and describe everything only in the prompt.
Important note: this LoRA was not trained for body, head, face swap, or similar inpainting use cases. It was trained mainly for objects. If you want to do head swap, use my head swap LoRA called BFS instead.
Since this is still experimental, feedback, tests, and results are very welcome.
https://reddit.com/link/1secygl/video/bxrfa5bu7ntg1/player
https://reddit.com/link/1secygl/video/813vpjdh6ntg1/player
https://reddit.com/link/1secygl/video/jqnwx9bi6ntg1/player
https://redd.it/1secygl
@rStableDiffusion