7900 XTX vs 4070 Ti Super for gaming + AI image gen (Comfy UI) + creative work (Game dev, Blender, editing)?

Hey,

I’m building a generalist PC with ~$2k budget, planning to spend around $1k on GPU. I’m stuck between RX 7900 XTX and RTX 4070 Ti Super.

My use case:

* Gaming (AAA titles)
* Editing gameplay videos (coming from a GTX 1650 laptop, so anything is an upgrade)
* AI image generation (Flux, Z-image, ComfyUI workflows, not video)
* Some indie dev work, Blender, character animations, basic Unreal blockouts

Why I considered 7900 XTX:

* 24GB VRAM
* Better raw gaming performance (based on benchmarks)

Where I’m confused:

* ROCm and ZLUDA exist, but seem less mature than CUDA
* Most AI tools and updates are CUDA-first
* I’ll mainly be on Windows (editing + gaming), not full-time Linux

Main questions:

* Is ROCm actually usable day-to-day or still a workaround-heavy setup?
* Does 24GB VRAM on 7900 XTX make a real difference for image generation workflows?
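One practical note on the ROCm question: current ROCm builds of PyTorch expose the GPU through the same `torch.cuda` namespace as NVIDIA builds, so a single check tells you whether a given install (and whatever tool sits on top of it, like ComfyUI) actually sees the card and how much VRAM it reports. A minimal sketch, assuming PyTorch is installed:

```python
import torch

# ROCm and CUDA builds of PyTorch both expose the GPU through the
# torch.cuda namespace, so this one check covers both vendors.
def describe_gpu():
    if not torch.cuda.is_available():
        return "no GPU visible to this PyTorch build (CPU fallback)"
    props = torch.cuda.get_device_properties(0)
    backend = "ROCm" if torch.version.hip else "CUDA"
    vram_gb = props.total_memory / 1024**3
    return f"{props.name} via {backend}, {vram_gb:.1f} GB VRAM"

if __name__ == "__main__":
    print(describe_gpu())
```

If this prints the CPU-fallback line on a machine with a 7900 XTX, the installed PyTorch wheel simply isn't a ROCm build — which is the most common "ROCm doesn't work" failure mode, not the card itself.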

https://redd.it/1spnk1m
@rStableDiffusion
Livestream from ADOS, an open source AI art event featuring artists/developers from the ecosystem (CTO of LTX starting soon)
https://redd.it/1sposdy
@rStableDiffusion
Flux Klein is better than any Closed Model for Image Editing

I really don't think closed models, at least in their current form, are the future of image editing.

Prompt-only editing is fine for testing ideas or doing simple stuff fast, but it falls apart the moment you need precision and actual control. Models like Nano Banana or GPT Image are cool demos, but for serious editing they just aren't it. They're expensive, inconsistent, and half the battle is repeatedly prompting until you maybe get something close to what you wanted.

That's exactly why I don't use them for image editing, even though I pay for both Gemini and ChatGPT (for coding and making custom nodes).

I've been using the Klein 9B model since it came out, and the more time I spend with it, the more convinced I am that open, community-supported models are the real future. Every day I find some new node, LoRA, workflow, or trick that makes the model more useful. The amount of control, precision, and customization you get with open models is on a completely different level.

I'm not denying that closed models are better for most people, or that they're still better at some things, like prompt adherence, generating images from scratch, or giving you a polished result in a certain style with less effort. But that doesn't matter much when you're trying to do professional, precise work. For that, you need actual tools: toggles, sliders, settings, scene setup, lighting control, camera angle, subject position, pose, detail levels, style control. You can't expect all of that to be handled well through text prompting alone.

And then there are the practical advantages. Local models give you privacy. Klein is free. It's fast. You can iterate constantly without worrying about rate limits, credits, or whether each attempt is burning money while you try to dial something in.

So no, I don't see how closed models in their current state become genuinely useful for real production work. And I'm not talking about the usual AI slop you see in marketing, the lazy inconsistent stuff, or broken in-game assets with obvious errors. I'm talking about actual professional workflows where precision matters.

Honestly, this is partly a rant, but it's also me being a huge Klein fan. I've spent a ton of time with this model, and I still get "wow" moments from it all the time. My morning routine is basically checking for new custom nodes, LoRAs, finetunes, tricks, and workflows.

The best analogy I can think of is gaming and mods. Sometimes a mod scene becomes so good that it practically turns into its own game, or makes the original better than the official sequel ever was. That's how this feels.

And the community part is massive. That's what keeps these models alive and evolving. If a model doesn't have that ecosystem, it might as well be dead to me. Flux 2 Dev is a good example: it's so big and impractical that nobody really builds around it, so from my perspective it's almost in the same category as closed models. I guess it does have some uses, like being a good direct alternative to the closed models, but it's not what I'm interested in personally.

https://redd.it/1spq72f
@rStableDiffusion
Good training settings for Chroma1-HD

Took me about two weeks to figure out how to get good results, but it was totally worth it for an uncensored Flux 1!

https://pastebin.com/jfQdfsiN
https://pastebin.com/VhsJ6fs2

Also, it helps to load double-blocks only to preserve more of the base model.
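The "double blocks only" idea can be sketched in plain PyTorch: freeze everything, then re-enable gradients only on parameters whose names contain the double-stream-block marker. Flux-family state dicts use `double_blocks` in their parameter names, but the exact prefix varies by trainer, so treat the filter string (and the toy model below) as assumptions:

```python
import torch.nn as nn

def train_double_blocks_only(model: nn.Module, marker: str = "double_blocks"):
    """Freeze every parameter except those in the double-stream blocks.

    `marker` follows the naming used in Flux-family state dicts;
    adjust it if your trainer names the blocks differently (assumption).
    Returns the number of trainable parameters left enabled.
    """
    trainable = 0
    for name, param in model.named_parameters():
        param.requires_grad = marker in name
        if param.requires_grad:
            trainable += param.numel()
    return trainable

# toy stand-in for a Flux-style transformer with both block types
model = nn.ModuleDict({
    "double_blocks": nn.Linear(4, 4),
    "single_blocks": nn.Linear(4, 4),
})
n_trainable = train_double_blocks_only(model)
```

Training (or applying a LoRA) on the double blocks only touches less of the network, which is why it tends to preserve more of the base model's behavior.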

This is the workflow I've been using: https://civitai.com/articles/28867

https://redd.it/1sprwqr
@rStableDiffusion
ZPix, an open-source local image generator, now supports image editing via FLUX.2 [klein] 4B, has a bigger output gallery and a prompts history.
https://redd.it/1spqczz
@rStableDiffusion
Flux2Klein Ksampler Soon!

dropping some news real quick

I'm releasing a proper Ksampler for flux2klein because I figured out that using the raw formula produces way more accurate colors, and I genuinely think THIS is the main reason we keep getting that color shift and washed-out results.

and before anyone asks, yes I benchmarked it against ModelSamplingFlux using the exact same shift settings and the ksampler I built wins every time. accurate colors, zero washout, no exceptions.

the difference comes down to the ODE formula. what's inside comfy right now is:

x_new = x + dt * (x + v)

that extra x getting thrown in is what's drifting your colors every single step. my ksampler uses the raw formula the way it's actually supposed to be:

x_new = x + dt * v

that's it. clean velocity, straight line, no gray fog creeping into your renders.
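The difference between the two updates is easy to see on a toy 1-D example. This is only a numerical illustration of why an extra `x` term compounds over steps, not a reimplementation of either sampler; the setup (a straight-line flow from `x0` to `x1` with constant velocity `v = x1 - x0`) is mine:

```python
# Toy comparison of the two Euler updates on a straight-line flow
# from x0 to x1, where the true velocity is constant: v = x1 - x0.
# With the dt's summing to 1, the clean rule x_new = x + dt * v lands
# exactly on x1; the variant x_new = x + dt * (x + v) drifts further
# off target with every step because x keeps feeding back in.
def integrate(x0, x1, steps, extra_x):
    x = x0
    dt = 1.0 / steps
    for _ in range(steps):
        v = x1 - x0                      # exact velocity of the straight path
        drift = x if extra_x else 0.0    # the disputed extra x term
        x = x + dt * (drift + v)
    return x

x0, x1 = 0.5, 2.0
clean = integrate(x0, x1, steps=20, extra_x=False)    # lands on 2.0
drifted = integrate(x0, x1, steps=20, extra_x=True)   # overshoots well past 2.0
```

On this toy path the clean rule hits the target exactly while the extra-`x` variant compounds into a large error — the same mechanism the post blames for per-step color drift, though whether that's what ComfyUI's sampler actually computes is the author's claim, not something this sketch verifies.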

what people are missing here is that this is not happening in isolation. ComfyUI’s sampling path also includes extra internal transforms around sigma handling, prediction scaling, and latent normalization that effectively bias the trajectory toward lower variance over time. even if the model output is correct, those extra layers accumulate and show up visually as desaturation and that washed out look.

on top of that I’m also not using the standard schedule behavior. I’m using a custom timestep schedule with image-size dependent shifting, which changes how detail and color are distributed across the denoising process. that part turned out to matter a lot more than expected for keeping color stability consistent across steps.
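For readers unfamiliar with "image-size dependent shifting": this is in the spirit of the timestep shift used by Flux-style rectified-flow models, where a shift factor derived from the latent sequence length warps a linear schedule toward the high-noise end. The constants below (0.5 at 256 tokens, 1.15 at 4096 tokens) follow the published Flux reference code, but treat this as a sketch of the general idea, not the author's actual custom schedule:

```python
import math

# Flux-style resolution-dependent timestep shift: bigger images get a
# larger shift, so more of the schedule is spent at high noise levels.
def shift_mu(image_seq_len, x1=256.0, y1=0.5, x2=4096.0, y2=1.15):
    # linear interpolation of mu between the two reference points
    m = (y2 - y1) / (x2 - x1)
    return m * image_seq_len + (y1 - m * x1)

def time_shift(mu, t, sigma=1.0):
    # warps a timestep t in (0, 1] toward the high-noise end
    return math.exp(mu) / (math.exp(mu) + (1.0 / t - 1.0) ** sigma)

steps = 5
linear = [i / steps for i in range(steps, 0, -1)]   # 1.0 -> 0.2
mu = shift_mu(4096)                                  # ~1024px-class latent grid
shifted = [time_shift(mu, t) for t in linear]
```

Every interior timestep moves up relative to the linear schedule, which changes where detail and color get resolved during denoising — the knob the post says matters for color stability.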

so when I say the difference is:

x_new = x + dt * v

I don’t just mean a simplified equation. I mean the full update path is kept clean and direct, without the extra stabilizing transforms that are baked into the default ComfyUI sampling stack, which is what I believe is causing the gradual gray drift in the first place.

proper release coming soon!!!

will post results in the comments

https://redd.it/1sq7no5
@rStableDiffusion