Let's talk about IP-Adapter for FLUX. What you need to know about it:
- first, it is not as accurate as the analogous approach for SDXL;
- second, it needs even more graphics memory than FLUX alone, but there is a nuance, which I'll cover in the next post today;
- third, IP-Adapter for FLUX can't be combined with ControlNet (or at least I haven't found a good way to do it: formally, the MistoLine ControlNet works with the XLabs sampler, but the results were incredibly bad in my tests). Then again, ControlNet doesn't handle architectural tasks well yet, so it's not a big loss.
#comfyui
About accuracy: here is an example of the original image (left) and the result (right). You can see that the style is completely different; what gets transferred are the features of the object from the original image, not its style. The IP-Adapter node gives us almost no settings except strength, unlike SDXL, where there are many parameters. Nevertheless, the result is interesting, and the approach itself can be used for form finding. You can also add a LoRA on top; I did that in the example I shared above, using my own LoRA.
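For intuition about what that single strength knob does: conceptually, it scales how strongly the projected image embedding is mixed into the conditioning. A toy sketch in Python (the function name and the naive projection are made up for illustration; the real XLabs adapter injects image tokens into the cross-attention layers, it does not just add vectors):

```python
import numpy as np

def apply_ip_adapter(text_cond, image_embed, strength=0.6):
    """Toy illustration of the strength parameter.

    Blends a (stub-)projected image embedding into the text
    conditioning; strength=0 ignores the image entirely,
    strength=1 mixes it in at full weight.
    """
    # Stub projection: just truncate to the conditioning dimension.
    proj = image_embed[: text_cond.shape[-1]]
    return text_cond + strength * proj

text_cond = np.zeros(8)      # stand-in for the prompt conditioning
image_embed = np.ones(16)    # stand-in for the CLIP image embedding
out = apply_ip_adapter(text_cond, image_embed, strength=0.5)
print(out)  # every element shifted by 0.5
```

This is only to show why a single scalar already gives useful control: at low strength the prompt dominates, at high strength the reference image's features do.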
To save memory, it makes sense to download and use GGUF FLUX models. The IP-Adapter page specifically recommends this one: flux1-dev-Q4_0.gguf. There are also other recommendations for running on weak machines (and for FLUX, almost any machine counts as weak).
https://github.com/city96/ComfyUI-GGUF — GGUF quantization support for native ComfyUI models
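To see why the Q4_0 file helps so much, here is some back-of-envelope arithmetic. The ~12B parameter count for the Flux transformer and the effective bits-per-weight figures (GGUF quant blocks carry per-block scales, so Q4_0 is ~4.5 bits, Q8_0 ~8.5 bits) are approximations I'm assuming, not exact file sizes:

```python
# Rough VRAM needed just for the Flux transformer weights
# (~12B parameters) at different precisions.
PARAMS = 12e9

def gib(params, bits_per_weight):
    """Convert a parameter count at a given precision to GiB."""
    return params * bits_per_weight / 8 / 2**30

print(f"fp16 : {gib(PARAMS, 16):.1f} GiB")   # full precision
print(f"Q8_0 : {gib(PARAMS, 8.5):.1f} GiB")  # ~8.5 effective bits/weight
print(f"Q4_0 : {gib(PARAMS, 4.5):.1f} GiB")  # ~4.5 effective bits/weight
```

Roughly a 22 GiB model shrinks to about 6 GiB at Q4_0, before counting the text encoders, VAE, and activations, which is exactly what makes room for the IP-Adapter on a consumer card.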
Today is the last day to register for our December webinar at a discounted price: https://designmorphine.com/education/cellular-diffusions-v1-0
Posted another of my LoRAs on Civitai.
https://civitai.com/models/776612?modelVersionId=868589
#comfyui #ai
Architects' tools
A collection of the LoRAs I use in my practice: https://civitai.com/collections/4905737 #comfyui
I've updated the link; yesterday it didn't work.
I've been doing a lot of model training for FLUX over the last couple of weeks, and there are some interesting findings. I'll try to summarize them in posts soon. For now, here's a spreadsheet I'm keeping to track progress during training.
#comfyui
The way FLUX captures architectural style details when training a LoRA is quite impressive. This is a selection of images generated with models I trained on Coop Himmelb(l)au projects. I'll publish conclusions and details on how to train next week.
#comfyui
08_CN_ImageToImageWorkflow_Seasons2.png
I promised to share a workflow for changing the seasons of an image today in the chat connected to this channel, so here it is. I only used ControlNet, but if you want to completely change parts of the image, you can combine this technique with inpainting, or, probably even better for some cases, just save the preprocessor output and edit it by hand. For example, you can erase grass or leaves for winter images, or replace some elements with others.
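Editing the preprocessor output by hand can be sketched like this. A toy Pillow example, where the generated image and the erased region are stand-ins for your actual saved lineart map and the grass area you want gone:

```python
from PIL import Image, ImageDraw

# Stand-in for a saved preprocessor output; in practice you'd do
# Image.open("lineart.png") on the image exported from ComfyUI.
# Lineart control maps are white edges on a black background.
lineart = Image.new("L", (1024, 768), 0)
draw = ImageDraw.Draw(lineart)
draw.line([(100, 100), (900, 100)], fill=255, width=3)  # a roofline to keep
draw.line([(100, 700), (900, 700)], fill=255, width=3)  # a grass edge to erase

# Paint the bottom strip black: black means "no edge", so ControlNet
# stops enforcing those contours and is free to render snow there.
draw.rectangle([(0, 650), (1024, 768)], fill=0)

lineart.save("lineart_edited.png")  # feed this in place of the preprocessor node
```

In the workflow, you then load the edited image directly instead of running the preprocessor, and the rest of the ControlNet pipeline stays unchanged.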
I promised to post information on how to train a LoRA on FLUX this week, but a new version of the model has come out, which I also need to test. If it's dramatically better, there's no point in posting the old information, so I need a bit more time.
A bit delayed, but it's time to talk about training LoRA models for FLUX. I think I'll split this material across a few days. Let's start with the training technique itself: we'll use this tool.
https://github.com/ostris/ai-toolkit
If you have a 24 GB+ graphics card, you can run it locally; Ostris links to tutorials on the GitHub page I mentioned above. I used Google Colab instead.
Artem Svetozarov adapted the original Colab by adding separate form fields for the parameters, and I, in turn, slightly tweaked the code to allow different models to be used. This link is specifically for Flux Dev only (others in the next posts):
https://colab.research.google.com/drive/1xWiIQFpCx7aEkgEd_aBmrHm9hH5iBMIb?usp=sharing
#comfyui #lora #flux
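For orientation, an ai-toolkit config for a Flux-Dev LoRA looks roughly like this. This is a sketch modeled on the example configs shipped in the repo (e.g. train_lora_flux_24gb.yaml); field names, paths, and defaults may have changed, so check the current examples before relying on it:

```yaml
# Sketch of an ai-toolkit Flux-Dev LoRA training config.
job: extension
config:
  name: my_flux_lora          # output LoRA name (yours to choose)
  process:
    - type: sd_trainer
      training_folder: output
      device: cuda:0
      trigger_word: myconcept  # optional token to activate the LoRA
      network:
        type: lora
        linear: 16             # LoRA rank
        linear_alpha: 16
      save:
        dtype: float16
        save_every: 250
      datasets:
        - folder_path: /path/to/images   # images + .txt captions
          caption_ext: txt
          resolution: [512, 768, 1024]
      train:
        batch_size: 1
        steps: 2000
        gradient_checkpointing: true
        optimizer: adamw8bit
        lr: 1e-4
      model:
        name_or_path: black-forest-labs/FLUX.1-dev
        is_flux: true
        quantize: true         # 8-bit weights, to fit in 24 GB
```

The Colab forms mentioned above essentially just fill in fields of a config like this before launching the run.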