Architects' tools
Hi, my name is Albert Sumin. This is my channel; I'm an architect and computational designer at Cloud Cooperation (Vienna, Austria).
I tried making a few videos in Sora2, and here are the pros:
- The model itself is obviously cool; you can immediately see how well it handles physics and character movement.
- It's free for now.
- The clip length is 10 seconds, which is very good.

But there are also plenty of cons:
- There aren't many settings and features on the site; in fact, you can only choose the format (horizontal or vertical video) and upload a first frame (images of people are not allowed).
- There is no option to extend or stitch clips together, so you can't make anything long: only simple short clips, not a full-fledged video project. You can't upload a last frame to make transitions, and you can't take the last frame of one clip and use it as the first frame of the next (see the small sketch after this list for a manual workaround). There are also no additional modes, such as uploading images of characters and scenes instead of a first frame and describing the action itself only with text. Google has all of this.
- Watermarks make it impossible to use the service for work.
- There is no built-in Upscale.
- It works slowly and only produces one video at a time. While Sora2 makes one video, Google Flow can run three batches of four videos (12 in total!) and let you pick the best one. That matters because the probability of a successful generation on the first try is not very high, and you want to be able to choose between several good ones.
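
A partial workaround for the missing clip-to-clip stitching is to extract the last frame of a finished clip yourself and upload it as the first frame of the next generation. A minimal sketch with OpenCV; the file names are illustrative, and whether the service accepts the frame still depends on its upload rules (e.g. no people):

```python
import cv2

# Open the finished clip and jump to its last frame
# (seeking by index can fail on some codecs; fall back to reading frames sequentially if it does).
cap = cv2.VideoCapture("sora_clip_01.mp4")
last_index = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) - 1
cap.set(cv2.CAP_PROP_POS_FRAMES, last_index)
ok, frame = cap.read()
cap.release()

if ok:
    # Save the frame and upload it as the first frame of the next clip.
    cv2.imwrite("next_clip_first_frame.png", frame)
else:
    raise RuntimeError("Could not seek to the last frame; decode the clip frame by frame instead.")
```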

Conclusion: Google Flow is more interesting in every way except for the model itself, which feels slightly better from OpenAI, but the difference is not big enough to make up for the missing functionality for work. #ai
#midjourney, despite all the features of the new models from Google and Black Forest Labs, remains a great tool for creating (almost) random concepts and stylized images. I haven't posted anything made in Midjourney here for about a year and a half, but I still use it from time to time.
The best upscale workflow I've used so far. It's based on Flux.1 dev and requires a powerful graphics card, but the result is amazing. #comfyui
Here is the workflow itself.
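
The attached workflow is a ComfyUI graph, but the core idea behind most Flux-based upscalers is simple: enlarge the image first, then run a low-denoise img2img pass so Flux.1 dev re-draws fine detail. A rough diffusers sketch of that idea (not the attached graph; the prompt, strength and step values are just plausible defaults):

```python
import torch
from diffusers import FluxImg2ImgPipeline
from diffusers.utils import load_image

# Flux.1 dev needs a lot of VRAM; CPU offload helps on smaller cards.
pipe = FluxImg2ImgPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

# Enlarge the source render 2x, then let Flux repaint detail at low denoise strength.
image = load_image("render_low_res.png")
upscaled = image.resize((image.width * 2, image.height * 2))

result = pipe(
    prompt="architectural photograph, crisp concrete and glass detail",
    image=upscaled,
    strength=0.3,            # low strength: keep the composition, only add detail
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
result.save("render_upscaled.png")
```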
It seems that #comfyui in the cloud has become available to everyone, not just beta testers. The price is $20 per month, and the subscription includes $10 worth of API nodes (which is very good if you use them, and I do), so it's effectively a $10 bonus, if I understand correctly. There are also limits: no more than 8 GPU hours per day (it's hard for me to estimate how much that is, but it seems like a lot) and no more than 30 actual minutes per job (to exceed that, you'd have to do some serious animation or batch rendering). All of this runs on an A100 with 40 GB of VRAM (i.e., like Google Colab). The ability to upload your own models is listed as coming soon, as is running multiple jobs in parallel. Custom nodes are currently limited to the ten most popular packages, but the list will clearly be expanded in the future.
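
To put the 8-GPU-hours-per-day cap in perspective, here is a back-of-the-envelope estimate; the per-job runtimes below are my own assumptions for illustration, not published figures:

```python
# Rough estimate of how far 8 A100 GPU hours per day go.
# Per-job runtimes are assumptions, not measured values.
daily_budget_min = 8 * 60  # 480 GPU minutes per day

typical_jobs = {
    "image generation": 1,      # minutes per job
    "tiled upscale": 5,
    "short WAN animation": 15,
}

for job, minutes in typical_jobs.items():
    print(f"{job}: up to {daily_budget_min // minutes} jobs per day")
# image generation: up to 480 jobs per day
# tiled upscale: up to 96 jobs per day
# short WAN animation: up to 32 jobs per day
```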

What it can be useful for at this stage of development:
- Upscaling (a difficult task on a local machine if you don't have a 4090 or 5090, and it doesn't require your own LoRA, which can't be uploaded to Comfy Cloud yet)
- Animation using WAN (also difficult to run locally, but the quality is quite good; for $20 a month, it would be hard to find anything better)
- Image editing via Qwen Image (this runs well locally if you have 16 GB of VRAM, but if you don't, it makes sense to work in the cloud)

For the local version, everything remains the same; it is still free, as it was before.

https://www.comfy.org/cloud
Some aggregator websites for generative models have started removing SDXL from their libraries, as it is already outdated. But, in my opinion, it is still relevant for architects, because sometimes you don't need realistic images, but rather a certain degree of abstraction, which can be the starting point for an interesting idea. #ai
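
And running SDXL locally remains cheap and simple. A minimal diffusers sketch for the kind of abstract concept image mentioned above; the prompt and settings are just an example, not a recommended recipe:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# SDXL base runs comfortably on a consumer GPU in fp16.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

image = pipe(
    prompt="abstract architectural massing study, monolithic concrete volumes, fog, soft light",
    negative_prompt="photorealistic, people, text",
    num_inference_steps=30,
    guidance_scale=7.0,
    height=1024,
    width=1024,
).images[0]
image.save("concept_massing.png")
```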