RTX 5080/5090 Laptop for ComfyUI vs. Remote Desktop?
Hi everyone,
I’m a video editor and digital nomad, and I’ve been looking into using ComfyUI for local AI video generation. Since I need to update my gear anyway, I’m trying to figure out the best setup for working while traveling.
I’ve been considering a laptop like the HP Omen 16 (RTX 5080) or the ProArt 16 (RTX 5090). However, I’m not sure if a laptop can really handle AI video demands.
Would it be better to go with one of these, or should I just build a powerful desktop to leave at home and access it via Parsec?
Thank you for your recommendations!
https://redd.it/1sj82mf
@rStableDiffusion
A new image model (ERNIE-Image-8b) from Baidu will be released soon.
https://redd.it/1sjc7j8
@rStableDiffusion
Spatial Edit (Apache 2.0)
Has anyone tried this out?
https://github.com/EasonXiao-888/SpatialEdit
https://huggingface.co/EasonXiao-888/SpatialEdit-16B
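Not an answer, but for anyone who wants to poke at it: the checkpoint can at least be fetched with standard Hugging Face tooling. A minimal download sketch (the repo id comes from the link above; the actual inference entry points are whatever the GitHub repo documents, which I haven't run):

```python
# Minimal checkpoint download via huggingface_hub (a known, stable API).
# Inference itself depends on SpatialEdit's own code; see the GitHub repo.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="EasonXiao-888/SpatialEdit-16B")
print(f"Checkpoint files downloaded to: {local_dir}")
```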
https://redd.it/1sjcljf
@rStableDiffusion
Can you use Qwen3.5 4b & Gemma 4 E4B with Z image/Turbo?
So I was wondering: can I use the latest 4-billion-parameter versions of Qwen3.5 and Gemma 4 with the Z-Image Turbo and base models?
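For context on why this usually isn't a drop-in swap: an image model's backbone is trained against one specific text encoder's embedding space, so even a newer LLM with a matching hidden size produces conditioning the diffusion model has never seen. A hedged sketch of the first sanity check you'd run (the repo ids below are real present-day models standing in for the ones named in the post, which I couldn't verify exist; the Gemma repo is gated and needs an accepted license plus a recent transformers):

```python
# Compare encoder output widths before even attempting a swap in ComfyUI.
# NOTE: repo ids are present-day stand-ins; the exact "Qwen3.5 4b" and
# "Gemma 4 E4B" checkpoints from the post are assumptions on my part.
from transformers import AutoConfig

for repo in ["Qwen/Qwen3-4B", "google/gemma-3n-E4B"]:
    cfg = AutoConfig.from_pretrained(repo)
    # Multimodal configs nest the text settings under text_config.
    hidden = getattr(cfg, "hidden_size", None) or cfg.text_config.hidden_size
    print(f"{repo}: hidden_size={hidden}")

# A matching hidden_size is necessary but not sufficient: the image
# model's cross-attention was trained on one encoder's embedding
# distribution, so a different encoder generally needs a retrained
# adapter or a fine-tune, not just a different loader node.
```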
https://redd.it/1sje2ag
@rStableDiffusion
The mysterious science of LoRA training (sdxl)
I find myself still unable to train good-looking character LoRAs for Illustrious, and I don't know what I'm doing wrong. I'm using a 3D character for this purpose (a Blender model), and I've tried replicating the training settings from other people's LoRAs that I consider great, but I still have questions.
1. Can you actually train a 3D character on Illustrious, or is it fighting the model too much? (It seems much better at handling 2D visuals.)
2. I've noticed most great LoRAs out there use hundreds of images in their dataset, usually 200 to 400. Mine is closer to 50; is there an actual benefit to such large datasets?
3. Repeats. It sounds like 10 epochs of 10 repeats would be equivalent to 100 epochs of 1 repeat, but is that truly the case? I always struggle to figure out how many repeats I should use (see the sketch after this list).
4. TE. I've noticed some people don't train the text encoder at all; does anyone have feedback on the benefits of that?
5. Batch size. I want to use a batch size of 6 or 8, because I can, but I'm not sure how to dial in the other settings based on that, in particular the learning rate and repeats (also covered in the sketch below).
6. Removing backgrounds. Besides making captioning easier, is there an actual benefit? Have you noticed it yielding better results?
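On questions 3 and 5, here is a minimal sketch of the step arithmetic most kohya-style trainers use; the numbers are placeholders, and the LR-scaling formulas are common rules of thumb, not guarantees:

```python
# Back-of-the-envelope step math for a kohya-style LoRA run.
# All numbers are placeholders; substitute your own dataset size.
images = 50
repeats = 10
epochs = 10
batch_size = 8

steps_per_epoch = (images * repeats) // batch_size
total_steps = steps_per_epoch * epochs
print(f"{steps_per_epoch} steps/epoch, {total_steps} total steps")

# 10 epochs x 10 repeats and 100 epochs x 1 repeat yield the same total
# step count; in practice, epochs mainly control how often checkpoints
# and samples are saved, while repeats let you over-sample one image
# folder relative to another.

# Common heuristics (assumptions, not rules) for scaling LR with batch:
base_lr = 2e-4                       # tuned at batch_size 1
lr_linear = base_lr * batch_size     # aggressive: linear scaling
lr_sqrt = base_lr * batch_size**0.5  # gentler: square-root scaling
print(f"linear: {lr_linear:.1e}, sqrt: {lr_sqrt:.1e}")
```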
I have noticed the following issues with my training attempts; perhaps this will help someone point out what I'm doing wrong:
* Style locking in too much. For example, I like prompting with "dark, dim lighting" keywords, which works well with Illustrious, but my LoRAs make the result much brighter than the base model does (even when tagging the dataset with "day"). The dataset has a couple of night shots but is mostly bright daylight.
* Faces train fast and seem to overtrain before clothes do, making it impossible to find a good balance: either one is overtrained or the other is undertrained. (I do have fewer full-body shots than upper-body and portrait shots, but that is apparently a desirable ratio?)
* I have settled on an LR of 2e-4 but have tried higher and lower with no success.
If you take the time to answer some of these, thank you =)
https://redd.it/1sjhf1d
@rStableDiffusion
Free open-source tool to instantly rig and animate your illustrations (also with mesh deform)
https://redd.it/1sjj7ta
@rStableDiffusion