Banapi Open Beta is LIVE!
The massive v0.2.0 update is finally here and ready for testing.
Key Features:
ARIA Node: Your AI Art Director for automated reviews and routine tasks.
Nano Banana 2: Native support for Google's latest Image 3.1 Flash model.
Workflow Tools: Node Grouping and Batch Export for a cleaner, faster experience.
The app remains 100% free: just use your personal Google API key.
Full Walkthrough & Tutorial:
I recorded a deep dive into everything new. Timecodes included!
Watch here: https://youtu.be/o1Q6C6-YGuk
Try Banapi: https://banapi.art
Share Your Feedback:
Please feel free to share your thoughts, ideas, or any bugs you find here. Your feedback is crucial for the development of this project!
P.S. This was my first time recording a walkthrough. It's incredibly exhausting and time-consuming. I have no f#%king idea how bloggers do this every day! And sorry for the AI voice and for such a long video.
Banapi Beta Walkthrough
Welcome to the complete walkthrough of the Banapi Beta release: a node-based application for generating and editing images using Google's AI models. Banapi is currently in Open Beta and is completely free to use!
Banapi_Walkthrough.banapi
150.5 MB
I'm also attaching the demo project file from the video. I figured it might be interesting for some of you to see how everything is set up!
A few ARIA demo clips cut from the Banapi walkthrough video.
Short how-to: setting up API keys with Google AI Studio and the welcome credit.
A small tip:
If you connect an image to ARIA and simply specify the number of output images without adding a task prompt, ARIA will create a short story based on that photo.
Alternatively, you can ask it to create a narrative story grid based on this photo within a single image.
Update for Banapi users!
Google has stopped providing access to image generation models via the welcome credit bonus.
We strongly recommend that all users closely monitor their expenses in Google AI Studio. You can set a monthly spend cap on the "Spend" page to make sure you don't accidentally exceed your budget.
New Release: Camera Node
Take any image and re-render it from a different angle: front, side, back, high shot, low shot, close-up, or wide. Pick the azimuth, elevation, and distance, and hit generate.
How it works:
Standalone Mode: Drop an image, set the angle, and you're done. Access it from the toolbar or via the Shift+C hotkey.
ARIA Agent Integration: Connect it directly to the ARIA Agent for multi-shot workflows. Provide one brief to get multiple angles, reviewed and corrected automatically.
Does it work perfectly? No. It currently works best with characters. Also, the model has its own understanding of left and right, so occasionally it might generate an angle from the opposite side. While the subject rotates reasonably well, a true camera orbit requires NeRF or 3DGS engines. Perhaps I will find a better solution in the future.
Shipping as a tool for rapid angle suggestions and look development. Available now.
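For the curious: the azimuth/elevation/distance controls ultimately have to become text the image model can act on. The sketch below shows one way such a parameter-to-prompt mapping could work. Banapi's actual mapping isn't shown in these posts, so the function name, thresholds, and wording here are purely hypothetical.

```python
def camera_prompt(azimuth_deg: float, elevation_deg: float, distance: str) -> str:
    """Map spherical camera parameters to a prompt fragment (illustrative only)."""
    # Horizontal direction: quantize azimuth into 8 compass-style views.
    views = ["front", "front three-quarter right", "right side",
             "back three-quarter right", "back",
             "back three-quarter left", "left side",
             "front three-quarter left"]
    idx = round((azimuth_deg % 360) / 45) % 8
    horizontal = views[idx]

    # Vertical direction: translate elevation into a shot-height term.
    if elevation_deg > 20:
        vertical = "high-angle shot"
    elif elevation_deg < -20:
        vertical = "low-angle shot"
    else:
        vertical = "eye-level shot"

    return f"{vertical} of the subject from the {horizontal}, {distance} framing"

print(camera_prompt(90, 30, "close-up"))
# high-angle shot of the subject from the right side, close-up framing
```

A quantized mapping like this also hints at why the model sometimes flips left and right: the text only names a side, and the model is free to mirror it.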
Camera Node Dev Log: Behind the Scenes
Changing an image's camera angle using AI generation wasn't simple. Here is what I tried:
SVG diagram: Passed a top-down compass and a side-view elevation diagram. The model read them inconsistently.
3D reference: Used real renders with a grid floor. Better signal, but the model mostly ignored the geometry.
Gaussian Splat framing: Told the model it's a "frozen radiance field". The subject moved slightly, but the background stayed static.
These hit the same wall: diffusion models fundamentally don't understand 3D space. They even confuse left/right, occasionally generating an angle from the opposite side.
The Solution:
During the first run, a text model (Flash Lite) analyzes the image and creates a detailed description to extract its "Visual DNA" (materials, textures, details). This text description is then passed forward to guide the rotation. This proved to be the best compromise for look development without a true 3D renderer.
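The two-stage flow described above could be sketched roughly like this. The prompt wording and function names are my own illustration of the idea, not Banapi's actual code; the only part taken from the post is the structure: a fast text model extracts a "Visual DNA" description, which is then injected into the rotation prompt.

```python
# Stage 1 prompt: sent to a fast text model (e.g. a Flash-Lite variant)
# together with the input image to extract the "Visual DNA".
ANALYSIS_PROMPT = (
    "Describe this image in exhaustive detail: materials, textures, colors, "
    "lighting, and small distinguishing details of the subject."
)

def build_rotation_prompt(visual_dna: str, target_angle: str) -> str:
    """Stage 2: combine the extracted description with the requested camera move."""
    return (
        f"Re-render the subject from a {target_angle}. "
        f"Preserve every attribute of this description exactly: {visual_dna}"
    )

# Stage 1 is not run here; assume the text model returned this description.
visual_dna = "weathered bronze armor, matte red cape, cracked leather boots"

# Stage 2: the image model would receive the original image plus this prompt.
print(build_rotation_prompt(visual_dna, "back three-quarter view"))
```

The appeal of this compromise is that the description, unlike the pixels, survives the view change: the image model can invent unseen geometry while the text pins down materials and details.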