procedural generation
Created by @r_channels
I built a tool that generates fully-rigged 3D game assets from a prompt (or even a sketch)

Hi everyone — I’ve been working on a tool to speed up creating placeholder and production-ready 3D assets for games, and I’m looking for feedback.

**What it does:**

* Type a prompt → generates an AI image → converts to 3D mesh
* Auto-rigs the model
* Applies preset animations (idle, walk, etc.)
* Prompt helper that generates structured “game-ready” prompts
* Sketch → AI image → 3D mesh pipeline (draw rough shapes and convert)
* Create base figures, then generate multiple skins
* Each skin can have multiple variants (colorways, outfits, styles)
* Select which generated assets you want and download them together as a ZIP

**Goal:**
Reduce the time from idea → animated 3D asset to a few minutes, while also enabling fast variations of the same character.

**Example use cases:**

* Rapid prototyping for game jams
* Placeholder characters
* NPC generation with multiple skins
* Character variants (enemy tiers, factions, cosmetics)
* Testing art styles quickly

Would love feedback on:

1. Would you actually use this in your workflow?
2. What export formats matter most (FBX, GLB, etc.)?
3. Are skins/variants useful for your pipeline?
4. What’s missing for this to be production-ready?

Project: [https://forge.logiqdev.com](https://forge.logiqdev.com)

I can share demos if there’s interest.

https://redd.it/1sgiv6y
@proceduralgeneration
Mandelbrot set explorer that runs entirely in your browser using WebGPU

https://preview.redd.it/b78s4s4w35ug1.png?width=3456&format=png&auto=webp&s=0da0fc997ce7f46dee442ca1ec24e9713b05e119

I've been working on an interactive Mandelbrot set explorer and wanted to share it. It runs 100% in the browser.

# What makes it interesting

The main challenge with deep Mandelbrot zooms is floating-point precision. Standard `float64` breaks down around zoom level 10^15; past that you just get a blurry, pixelated mess. To go deeper, I implemented (with Claude's help, to be fully honest) **perturbation theory**: instead of computing the full orbit for every pixel, you compute one high-precision *reference orbit* at the center point (using arbitrary-precision arithmetic), and each pixel then only has to track a tiny *delta* from that reference. This lets the GPU handle millions of lightweight delta orbits in parallel while the CPU handles the one expensive reference computation.
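For anyone curious, the CPU/GPU split can be sketched in plain Python (the real implementation is WebGPU in the browser; this is just a minimal CPU-only illustration). It uses the stdlib `decimal` module in place of a proper arbitrary-precision complex type, and it skips the rebasing and glitch detection a production renderer needs. The per-pixel delta loop is the part that would run on the GPU, entirely in `float64`:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # ~200 bits for the reference; float64 gives only ~16 digits

def reference_orbit(cr, ci, max_iter):
    """High-precision orbit Z_n at the view centre, downcast to float64.
    Complex arithmetic is done on Decimal real/imag pairs."""
    orbit = []
    zr = zi = Decimal(0)
    for _ in range(max_iter):
        orbit.append(complex(float(zr), float(zi)))  # deltas only need float64
        zr, zi = zr * zr - zi * zi + cr, 2 * zr * zi + ci
        if zr * zr + zi * zi > 4:  # reference itself escaped
            break
    return orbit

def pixel_iterations(d0, orbit, max_iter):
    """Per-pixel delta d_n = z_n - Z_n, iterated entirely in float64:
    d_{n+1} = 2*Z_n*d_n + d_n^2 + d0, where d0 is the pixel's offset
    from the centre. Returns the escape iteration count."""
    d = 0j
    for n, Z in enumerate(orbit):
        if abs(Z + d) > 2.0:  # reconstructed full value z_n has escaped
            return n
        d = 2 * Z * d + d * d + d0
    return max_iter

# Demo: centre inside the set, one pixel a deep-zoom-scale offset away
orbit = reference_orbit(Decimal("-0.1"), Decimal("0"), 200)
its = pixel_iterations(1e-12 + 1e-12j, orbit, 200)
```

A nice sanity check: with the reference centred at c = 0, every Z_n is 0 and the delta recurrence collapses to the ordinary iteration z → z² + c with c = d0, so the perturbed count matches a direct computation exactly.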

# Features

* Very deep zoom
* Customizable color palettes with live gradient preview
* Dynamic or fixed max iteration control
* Bookmark system: save and return to interesting locations
* Smartphone mode with pinch-to-zoom and touch pan
* Screenshot export
* Saves settings to browser storage

# Links

* **Live demo:** [edobrb.github.io/mandelbrot](https://edobrb.github.io/mandelbrot/)
* **Source:** [https://github.com/edobrb/mandelbrot](https://github.com/edobrb/mandelbrot)

Any feedback is appreciated

https://redd.it/1sglk4v
@proceduralgeneration
Using Photoshop as a "Remote" for 3D world building. 100% Procedural.

https://redd.it/1sfsu08
@proceduralgeneration
Implicit 3D fields – small parameter changes, completely different shapes

I'm experimenting with implicit scalar fields f(x,y,z) and iso-surfaces.

This example is built from two deformed ellipsoids combined into a smooth field.

What I find interesting is how extremely sensitive the system is: small parameter changes produce completely different shapes.

It feels somewhere between mathematics and generative art.

Curious what others would try as base functions.

Would you expect this kind of behavior from such a simple deformation?
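For reference, here's a minimal Python sketch of this kind of field: two ellipsoids deformed by a z-dependent twist and blended with a polynomial smooth-minimum. The specific deformation, blend, and all constants here are my own illustrative choices, not necessarily what the repo below does:

```python
import math

def ellipsoid(x, y, z, cx, cy, cz, a, b, c):
    """Implicit ellipsoid: negative inside, zero on the surface, positive outside."""
    return ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 + ((z - cz) / c) ** 2 - 1.0

def smooth_min(f1, f2, k=0.5):
    """Polynomial smooth minimum: blends two fields into one smooth field
    instead of the sharp crease a plain min() would leave."""
    h = max(k - abs(f1 - f2), 0.0) / k
    return min(f1, f2) - h * h * k * 0.25

def field(x, y, z, twist=1.5):
    """f(x, y, z) whose iso-surface f = 0 is the shape. 'twist' rotates
    each z-slice before evaluating the ellipsoids, so a small change to
    it deforms the whole surface."""
    ang = twist * z  # rotation angle grows along z
    xr = x * math.cos(ang) - y * math.sin(ang)
    yr = x * math.sin(ang) + y * math.cos(ang)
    e1 = ellipsoid(xr, yr, z,  0.3, 0.0, 0.0, 0.6, 0.3, 0.9)
    e2 = ellipsoid(xr, yr, z, -0.3, 0.0, 0.0, 0.4, 0.7, 0.5)
    return smooth_min(e1, e2)

# Probe the field: negative = inside the iso-surface, positive = outside
inside = field(0.3, 0.0, 0.0)
outside = field(2.0, 2.0, 2.0)
```

Sampling `field` on a grid and running marching cubes over the f = 0 level set gives the mesh; the sensitivity in the post comes from the twist and blend parameters feeding into every sample nonlinearly.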

If anyone is interested, the full code is available here:
https://github.com/finky666/FieldForge3D

https://preview.redd.it/csyo2o3by8ug1.png?width=1920&format=png&auto=webp&s=ede5b3456a0c0a0e984f03827981de3601c5c563

https://redd.it/1sh5g8e
@proceduralgeneration