Architects' tools
1.17K subscribers
375 photos
37 videos
62 files
69 links
Hi, my name is Albert Sumin, this is my channel, I am an architect and computational designer at Cloud Cooperation (Vienna, Austria)
Several images from Yulia Malkova's final album on my #comfyui course.
Recently, we had a task at the office: come up with an idea and present it to the client in three days. Without AI, it is challenging to prepare a project in that time frame if you need something that doesn't look entirely abstract. So I wrote a script in #grasshopper for the shape of the building without details, and all the renders were done in #comfyui. Then I transferred the floors and exterior walls from Grasshopper to #revit, where we made plans for several representative floors (not all of them). The areas were calculated directly in Grasshopper from the floor contours, minus the areas of corridors and cores, multiplied by coefficients derived from the floors that had been developed in detail in Revit.
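The contour-based area estimate described above can be sketched in a few lines of Python (the language Grasshopper scripts often use). All numbers and names here are hypothetical, purely to illustrate the arithmetic:

```python
# Hypothetical sketch of the contour-based area estimate; the function
# name and every number below are illustrative, not from the real project.

def estimate_net_area(contour_area: float,
                      corridor_area: float,
                      core_area: float,
                      coefficient: float) -> float:
    """Net area of a floor known only by its contour:
    (contour - corridors - cores) * coefficient."""
    return (contour_area - corridor_area - core_area) * coefficient

# Calibrate the coefficient on a floor that WAS developed in detail in Revit:
detailed_contour = 820.0    # m2, contour area from Grasshopper
detailed_corridors = 90.0   # m2
detailed_cores = 60.0       # m2
detailed_net = 560.0        # m2, net area measured in the Revit model

coefficient = detailed_net / (detailed_contour - detailed_corridors - detailed_cores)

# Apply it to a floor that exists only as a contour in Grasshopper:
net = estimate_net_area(900.0, 95.0, 60.0, coefficient)
print(round(net, 1))
```

The point of the coefficient is that floors modelled only as contours inherit the net-to-gross ratio measured on the floors that were fully detailed.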

The interior renders were created with ChatGPT, without a 3D model, based on images from ComfyUI, using prompts such as: "create an interior render of an apartment for this building", followed by a description of the materials and other interior features I wanted to see in the images. The video was made in Google Flow from the same ComfyUI renders and ChatGPT interiors, plus screenshots and animations from Revit and Grasshopper.

I made all the content in this video on my own, since that is easier under such tight deadlines; otherwise half the time would have been spent discussing ideas. Later, when the first comments came back from the client and there was additional time for rework, we continued development as a small team. In particular, Ira Sorokina took over the BIM model and the layouts, while I switched completely to the facades and the shape of the building, but that's another story. As a result, the project no longer looks exactly like this (it is a little more realistic, but still interesting, in my opinion) and is still under discussion.

The workflow I used here will be described in detail at the PAAcademy workshop.
And a few images from which the video was made.
πŸ”₯9πŸ‘2
A new video, this time dedicated to the very project that we will be building from scratch at the PAAcademy workshop. #education #revit #grasshopper #rhinoinside
πŸ”₯4πŸ‘2
And the AI renders.
πŸ₯°4πŸ‘2
I finally got my hands on the new Google Gemini Image. It is definitely a step forward compared to all similar models, and it is the first contextual model that can truly replace ControlNet. The first screenshot from Rhino shows the current project I am working on; the second is a render from #comfyui.
❀7πŸ‘2
If you took our #grasshopper course on the Stepik platform (it's a Russian-language course), you are familiar with the project I used there as an example of NURBS modeling in Rhino. Now I've decided to try rendering it with Gemini 2.5 Flash Image. I'll show you what I got next.
These are renders without any post-processing, exactly as they came from #comfyui. To my taste, they are excellent. Obviously, some details are not rendered perfectly, but this can be solved by combining several renders with different seeds in Photoshop. The surroundings can also be handled in the same way.

A few interesting points:
- The new Gemini model is very cheap, several times cheaper than Flux.1 Kontext.
- It's also fast, usually returning results within a second. Some generations take longer to come back, probably due to server load, but still not as long as with other models.
- As I already mentioned, this is the first model of its kind that can truly preserve the context of the original image, which is extremely important for rendering. Sometimes this can even be a hindrance when you want to change details; in that case you have to experiment with prompts and inputs, but since the model is inexpensive, there's room for experimentation.

I remember that a couple of years ago I thought AI models could not replace conventional rendering, but now we are close to that point, at least for early concepts.
πŸ”₯3❀2❀‍πŸ”₯1πŸ‘1