Several images from Yulia Malkova's final album for my #comfyui course.
A new 4-day workshop in October. I'll share more materials soon, but it will not be just about BIM: we will work with #grasshopper a lot and spend some time with #comfyui
https://paacademy.com/course/bim-rhinoinside-for-advanced-tower-design
This 4-session workshop focuses on designing residential towers using BIM, Rhino, Rhino.Inside, and AI-driven visualization.
Recently there was a task at the office: come up with an idea and present it to the client in three days. Without AI it is hard to prepare a project in that time frame if you need something that doesn't look entirely abstract. So I wrote a script in #grasshopper for the shape of the building, without details, and all the renders were done in #comfyui. Then I transferred the floors and exterior walls from Grasshopper to #revit, where we made plans for several different floors (not all of them). The areas were calculated directly in Grasshopper from the floor contours, minus the corridors and cores, multiplied by coefficients derived from the floors that were developed in detail in Revit.
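For the curious, that area logic is only a few lines in a GhPython component. A minimal sketch, assuming inputs floor_crvs and core_crvs (lists of closed planar curves) and a coefficient taken from the floors detailed in Revit; the names and values are illustrative, not my exact script:

```python
# GhPython sketch of the net-area logic described above.
# Hypothetical inputs: floor_crvs, core_crvs - closed planar curves
# per floor; sellable_coeff - a float derived from the floors that
# were developed in detail in Revit (e.g. a net/gross ratio).
import Rhino.Geometry as rg

def closed_area(crv):
    """Area of a closed planar curve, 0.0 if it cannot be computed."""
    amp = rg.AreaMassProperties.Compute(crv)
    return amp.Area if amp else 0.0

gross = sum(closed_area(c) for c in floor_crvs)   # full floor plates
cores = sum(closed_area(c) for c in core_crvs)    # corridors + cores
net_area = (gross - cores) * sellable_coeff       # approximate sellable area
```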
The interior renders were created with ChatGPT, without a 3D model, based on images from ComfyUI, using prompts such as "create an interior render of an apartment for this building", followed by a description of the materials and other interior features I wanted to see in the images. The video was made in Google Flow from the same ComfyUI renders and the ChatGPT interiors, plus screenshots and animations from Revit and Grasshopper.
I made all the content in this video on my own, since that is easier when deadlines are so tight; otherwise half the time would have been spent discussing ideas. Then, when the first comments came back from the client and there was extra time for rework, we assembled a small team. In particular, Ira Sorokina took over the BIM model and the layouts, while I switched completely to the facades and the shape of the building, but that's another story. As a result, the project no longer looks exactly like this (it is a little more realistic now, but still interesting, in my opinion) and is still under discussion.
The workflow I used here will be described in detail at the PAAcademy workshop.
A new video, this time dedicated to the very project that we will be creating from scratch at the PAAcademy workshop #education #revit #grasshopper #rhinoinside
π₯4π2
I finally got my hands on the new Google Gemini image model, and it is definitely a step forward compared to all similar models. It is also the first contextual model that can truly replace ControlNet. The first screenshot, from Rhino, shows the current project I am working on; the second is a render from #comfyui.
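If you want to try the same screenshot-to-render trick from a script rather than a ComfyUI node, here is a minimal sketch with the google-genai Python SDK. It assumes a GEMINI_API_KEY in the environment; the model identifier and file paths are my assumptions, so check the current docs before relying on them:

```python
# Sketch: feed a Rhino viewport screenshot to Gemini and ask for a
# photorealistic render that preserves the building's geometry.
# Assumes the google-genai SDK and a GEMINI_API_KEY env var; the
# model name may differ by release.
from google import genai
from PIL import Image

client = genai.Client()  # reads the API key from the environment

screenshot = Image.open("rhino_viewport.png")  # hypothetical path
prompt = ("Turn this architectural massing screenshot into a "
          "photorealistic exterior render, overcast daylight, "
          "keep the geometry and camera angle exactly as shown.")

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=[screenshot, prompt],
)

# Save any returned image parts to disk.
for i, part in enumerate(response.candidates[0].content.parts):
    if part.inline_data is not None:
        with open("render_%d.png" % i, "wb") as f:
            f.write(part.inline_data.data)
```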
If you took our #grasshopper course on the Stepik platform (it's a Russian-language course), you are familiar with the project I used there as an example of NURBS modeling in Rhino. Now I have decided to try rendering it with Gemini 2.5 Flash Image; I'll show you what I got next.
These are renders without any post-processing, exactly as they came from #comfyui. To my taste, they are excellent. Obviously some details are not rendered perfectly, but that can be solved by combining several renders with different seeds in Photoshop, and the surroundings can be handled the same way.
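Queuing those seed variants is easy to script against ComfyUI's HTTP API. A rough sketch, assuming a workflow exported via "Save (API Format)" and a local server on port 8188; the sampler node id is hypothetical, look it up in your own JSON:

```python
# Queue the same ComfyUI workflow with several seeds, so the variants
# can later be blended in Photoshop. Assumes a workflow saved in API
# format and a local ComfyUI server on :8188.
import json
import urllib.request

with open("render_workflow_api.json") as f:
    workflow = json.load(f)

SAMPLER_NODE = "3"  # hypothetical id of the KSampler node in this JSON

for seed in (1, 42, 1234, 99999):
    workflow[SAMPLER_NODE]["inputs"]["seed"] = seed
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # each call queues one render
```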
A few interesting points:
- The new Gemini model is very cheap, several times cheaper than Flux.1 Kontext.
- It's also fast, usually returning results in about a second. Some generations take longer, probably due to server load, but still not as long as with other models.
- As I already mentioned, this is the first model of its kind that can truly preserve the context of the original image, which is extremely important for rendering. Sometimes that can even be a hindrance when you want to change details; then you have to experiment with prompts and inputs, but since the model is inexpensive, there is room for experimentation.
I remember that a couple of years ago I thought AI models could not replace conventional rendering, but now we are close to that point, at least for early concepts.