Procedurally generated voxel art
https://www.shadertoy.com/view/fc23WD
https://redd.it/1s6opy1
@proceduralgeneration
Help! I'm trying to simulate stars but can't figure it out
How can I generate positions of stars from a seed? I have a center position and a seed, but I have no idea how I'll get the actual positions. To avoid sending huge numbers to my shader, I thought of generating directions plus compressed distances.
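One common way to do this (a minimal sketch, not from the thread; all names are illustrative): seed a PRNG, draw uniform directions on the unit sphere, and draw distances in log space so the value handed to the shader stays in [0, 1].

```python
import math
import random

def generate_stars(seed, center, count, min_dist=1.0, max_dist=1e6):
    """Deterministic star positions from a seed (illustrative sketch).

    Directions are uniform on the unit sphere; distances are drawn in
    log space, so the shader only ever sees a compressed value in [0, 1].
    """
    rng = random.Random(seed)
    log_min, log_max = math.log(min_dist), math.log(max_dist)
    stars = []
    for _ in range(count):
        # Normalizing a 3D Gaussian sample gives a uniform direction.
        x, y, z = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        n = math.sqrt(x * x + y * y + z * z) or 1.0
        direction = (x / n, y / n, z / n)
        # t is the "compressed distance" the shader receives; the real
        # distance is recovered by exponentiating.
        t = rng.random()
        distance = math.exp(log_min + t * (log_max - log_min))
        position = tuple(c + d * distance for c, d in zip(center, direction))
        stars.append({"direction": direction, "compressed": t, "position": position})
    return stars

stars = generate_stars(seed=42, center=(0.0, 0.0, 0.0), count=100)
```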
https://redd.it/1s6pa07
@proceduralgeneration
Shipping indie tools taught me that “feature complete” is the wrong milestone
I’ve found the hardest part of shipping professional game dev tools as an indie isn’t the algorithm — it’s the *productization*.
When I was building a rule-based scatter system, the core idea was straightforward: biome zones, density modes, and HISM instancing. The real work was everything around that. A tool can be technically impressive and still feel unusable if it doesn’t survive real production pressure.
A few lessons that mattered a lot:
* **Make the first workflow dead simple.** If a user can’t get a result in minutes, they won’t trust the tool.
* **Ship presets, not just systems.** The difference between “here’s a scatter engine” and “here’s 10 built-in presets for common use cases” is huge (see the sketch after this list).
* **Optimize for iteration speed.** I care less about a clever architecture than whether the artist can block out, adjust, and re-run without friction.
* **Document the edge cases.** The weird stuff is what breaks confidence: instancing limits, density conflicts, biome overlap, parameter interactions, etc.
* **Treat stability as a feature.** In production, “it usually works” is not good enough.
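To make the preset point concrete, here is a hypothetical sketch (illustrative names and parameters, not the author's actual tool) of what “shipping presets” can look like in code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScatterPreset:
    """A named bundle of scatter parameters (hypothetical example)."""
    density: float         # instances per square meter
    min_spacing: float     # minimum distance between instances, in meters
    align_to_normal: bool  # orient instances to the surface normal?
    scale_range: tuple     # (min, max) uniform random scale

# A default path users can start from instead of a blank parameter sheet.
PRESETS = {
    "sparse_forest":     ScatterPreset(0.05, 4.0, True,  (0.8, 1.3)),
    "dense_undergrowth": ScatterPreset(2.0,  0.3, True,  (0.5, 1.0)),
    "rocky_scree":       ScatterPreset(0.5,  1.0, False, (0.2, 2.5)),
}
```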
One thing I’ve learned the hard way: the best tools don’t just generate content, they reduce decision fatigue. If every parameter feels like a research project, you’ve already lost. The goal is to make the *default path* good enough that people only tweak when they need to.
That’s also why performance numbers matter, but only in context. Saying “100K+ instances per second” is nice; what really matters is whether that speed translates into a fast, confidence-building iteration loop for actual level work.
For indie toolmakers: what’s been harder for you — the underlying tech, or making the tool feel production-ready?
https://redd.it/1s6t7h9
@proceduralgeneration
Procedural marching bots coming through..beep...beep
https://youtu.be/-h58oSsb6AA
https://redd.it/1s72q5w
@proceduralgeneration
Equal Maze Generation
I've been learning about maze generation algorithms and even wrote my own depth-first search generator in Python. I noticed that different algorithms each have their own style, like short or long paths. I also realized my depth-first program can't actually generate every possible maze, since it goes as deep as possible before starting a new path. Even if I gave it a random chance to suddenly start a new path, different mazes would still have different chances of being generated. So I wonder: is there a maze generation algorithm that can produce every possible maze with equal probability?
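For what it's worth, there is a known answer to this: Wilson's algorithm builds a uniform spanning tree via loop-erased random walks, so every perfect maze on a given grid is generated with equal probability. A minimal Python sketch (function and variable names are mine):

```python
import random

def wilson_maze(width, height, seed=None):
    """Uniform spanning tree over a grid (Wilson's algorithm).

    Returns passages as frozensets of two adjacent (x, y) cells.
    Every perfect maze on the grid is equally likely.
    """
    rng = random.Random(seed)
    cells = [(x, y) for y in range(height) for x in range(width)]

    def neighbors(cell):
        x, y = cell
        return [(nx, ny)
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                if 0 <= nx < width and 0 <= ny < height]

    in_tree = {rng.choice(cells)}  # root the tree at a random cell
    passages = set()
    for start in cells:
        if start in in_tree:
            continue
        # Loop-erased random walk: remembering only the latest exit
        # from each cell erases loops implicitly.
        exit_from = {}
        cur = start
        while cur not in in_tree:
            exit_from[cur] = rng.choice(neighbors(cur))
            cur = exit_from[cur]
        # Retrace the loop-erased path and graft it onto the tree.
        cur = start
        while cur not in in_tree:
            nxt = exit_from[cur]
            passages.add(frozenset((cur, nxt)))
            in_tree.add(cur)
            cur = nxt
    return passages

maze = wilson_maze(8, 8, seed=1)
```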
https://redd.it/1s7j5lo
@proceduralgeneration
Procedural worlds from text prompts to 3D to Unity
Been working on a project that creates explorable 3D environments from text descriptions. Not a game exactly, more of an interactive art piece.
The pipeline: I write a paragraph describing an environment. A script breaks it into individual object descriptions. Each gets sent to Meshy's API to generate a 3D model. Another script arranges them in Unity based on spatial relationships extracted from the original text.
So "a mossy stone bridge over a dark river with twisted trees on both banks" generates the bridge, river bed, trees, and places them in roughly the right arrangement.
The results are chaotic and imperfect, which is kind of the point. The AI interprets each object differently, placement is approximate, and scale relationships are often wrong. But walking through these generated worlds has a dreamlike quality that's really compelling.
Each generation is unique. Same text produces different models each time. I've been documenting them as a series, one per day from the same paragraph, comparing how the interpretation shifts.
Stack: Python for orchestration, spaCy for NLP to extract objects and spatial relationships, Meshy API for generation, Unity for the viewer with basic first-person controls.
Showed it at a local digital art exhibition last month. People spent way more time with it than I expected. Walking through AI-generated spaces hits different than looking at AI-generated images.
https://redd.it/1s7kzkf
@proceduralgeneration