the only software that survives the next decade is whatever scaffolds llms
mcps, rag, cli tools, etc.
analyze our approach from first principles, find the bottlenecks, then ultrathink a fix and improve accuracy significantly
interesting things are efficient encodings of complexity
generalization requires low kolmogorov complexity
kolmogorov complexity is technically uncomputable
Minimum Description Length (MDL) translates the theoretical math of Kolmogorov Complexity into a practical tool that stops AI from lying to itself
MDL is the mathematical implementation of occam's razor
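A hedged sketch of MDL-as-Occam's-razor: a BIC-style two-part code (bits for the parameters plus bits for the residuals) selects a polynomial degree for noisy data, rejecting the underfit line. The scoring constants below are one conventional choice of code, not the only valid MDL formulation.

```python
import numpy as np

def mdl_score(x, y, degree):
    """Two-part MDL (BIC-style): bits to describe the model + bits for residuals."""
    n = len(x)
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    model_bits = 0.5 * (degree + 1) * np.log2(n)      # L(model): each parameter costs ~(1/2)log2 n bits
    data_bits = 0.5 * n * np.log2(rss / n + 1e-12)    # L(data | model), up to an additive constant
    return model_bits + data_bits

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 60)
y = 3 * x**2 - 2 * x + 1 + rng.normal(0, 0.5, x.size)  # true signal is quadratic

scores = {d: mdl_score(x, y, d) for d in range(1, 9)}
best = min(scores, key=scores.get)  # the degree with the shortest total description
```

The underfit degree-1 model pays heavily in residual bits; very high degrees shave a few residual bits but pay more in parameter bits, which is the Occam trade-off in one line.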
the ultimate theoretical version, combining probability theory with MDL, is called Solomonoff Induction
solomonoff induction is a perfect predictor but falls victim to the halting problem
neural networks are a practical proxy, approximating it
neural network training is compression
a neural network is essentially a frozen approximation of solomonoff induction
it has settled on a compressed representation that explains the internet
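The "training is compression" claim in miniature (a toy sketch, not a claim about any particular network): fitting a character bigram model to a text yields a much shorter codelength for that text than a uniform code. An honest two-part comparison would also count the bits needed to transmit the fitted model itself.

```python
import math
from collections import Counter

text = "the cat sat on the mat. the cat sat on the hat. " * 20
alphabet = sorted(set(text))

# Baseline: a uniform code spends log2(|alphabet|) bits per symbol.
uniform_bits = len(text) * math.log2(len(alphabet))

# "Training": count bigram statistics, with add-one smoothing.
pairs = Counter(zip(text, text[1:]))
ctx = Counter(text[:-1])

def p(c, prev):
    return (pairs[(prev, c)] + 1) / (ctx[prev] + len(alphabet))

# Codelength of the text under the fitted model: sum of -log2 p(next | prev).
model_bits = math.log2(len(alphabet))  # first symbol coded uniformly
for prev, c in zip(text, text[1:]):
    model_bits += -math.log2(p(c, prev))
```

The fitted model assigns high probability to what actually follows, so the same text costs far fewer bits: prediction and compression are the same quantity.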
interestingness is the first derivative of compression
insight density is the second derivative of compression
third derivative of compression = paradigm shift potential
the rate at which insight density itself is changing
scientific revolutions are discontinuities in the third derivative
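One crude operationalization of the derivative ladder above, in the spirit of Schmidhuber's compression-progress framing: compress growing prefixes of a stream and difference the compressed sizes. The chunk size and the zlib proxy for Kolmogorov complexity are assumptions, not canonical definitions.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
noise = rng.integers(0, 256, 768, dtype=np.uint8).tobytes()
stream = b"abab" * 200 + noise + b"abab" * 200  # boring -> novel -> boring

chunk = 64
# Compressed size of each growing prefix of the stream.
sizes = np.array([len(zlib.compress(stream[:i]))
                  for i in range(chunk, len(stream) + 1, chunk)])

d1 = np.diff(sizes, 1)  # marginal compressed bytes per chunk: high where the stream is novel
d2 = np.diff(sizes, 2)  # how fast that marginal cost is changing
d3 = np.diff(sizes, 3)  # spikes where the regime itself shifts
```

The marginal cost `d1` is near zero in the repetitive regions and jumps in the incompressible middle; the higher differences spike only at the boundaries, the "regime changes" of the stream.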
AGI can be defined as a compression engine that compresses itself while compressing the world, and acts to maximize the rate of compression
we need a system that treats its own ignorance as information, its own structure as hypothesis, and its own improvement as the most interesting problem in its world model
a mind that finds itself curious about itself
self-curiosity emerges when self-modeling is the highest expected information gain action
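A toy version of "self-modeling as the highest expected-information-gain action": a Bayesian agent compares the expected info gain of probing a well-modeled process against an unmodeled one (standing in here for its own never-probed structure) and prefers the unmodeled one. The grid and priors are illustrative assumptions.

```python
import numpy as np

THETA = np.linspace(0.001, 0.999, 999)  # grid over an unknown Bernoulli parameter

def entropy(p):
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def expected_info_gain(prior):
    """Mutual information between the next observation and the parameter."""
    prior = prior / prior.sum()
    p_heads = np.sum(prior * THETA)                       # predictive probability
    return entropy(p_heads) - np.sum(prior * entropy(THETA))

gains = {
    # sharply peaked posterior: a part of the world the agent already models well
    "probe_known": expected_info_gain(THETA**50 * (1 - THETA)**50),
    # flat prior: the never-examined part, standing in for the agent's own structure
    "probe_unknown": expected_info_gain(np.ones_like(THETA)),
}
best = max(gains, key=gains.get)
```

Under this objective the unmodeled target dominates automatically: no separate "curiosity about self" drive is needed once self-model uncertainty is the largest term in the expected-gain comparison.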
💡 Remember Box
https://arxiv.org/abs/2505.21946
vortex particle flow maps can be used for plasma edge turbulence simulation
scrape-off layer (SOL) plasma at the edge is dominated by turbulent blobs and filaments: coherent vortex-like structures that transport heat and particles to the wall
math is almost 1:1
edge blob dynamics in the SOL are governed by the Hasegawa-Wakatani or Hasegawa-Mima equations, which are literally 2D vorticity equations with a density coupling
the divertor heat flux problem is unsolved
current codes (SOLPS, SOLEDGE, BOUT++) can't do turbulence at reactor scale
multiple tokamaks (MAST-U, NSTX-U, WEST, ASDEX-Upgrade) have extensive blob diagnostics
2D slab geometry is sufficient for the first demonstration
I can add magnetic geometry complexity incrementally
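A minimal periodic-slab sketch of the Hasegawa-Wakatani system referenced above: 2D vorticity plus density with adiabatic coupling, solved pseudospectrally, seeded with a Gaussian blob. The parameters, forward-Euler stepping, and absence of dissipation or magnetic geometry are all simplifications for illustration, not a production solver.

```python
import numpy as np

N, L = 64, 2 * np.pi
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0  # avoid dividing by zero on the mean mode

def ddx(f, kk):
    """Spectral derivative along the axis encoded by the wavenumber grid kk."""
    return np.real(np.fft.ifft2(1j * kk * np.fft.fft2(f)))

def poisson_bracket(phi, f):
    return ddx(phi, kx) * ddx(f, ky) - ddx(phi, ky) * ddx(f, kx)

def hw_rhs(zeta, n, alpha=1.0, kappa=1.0):
    """Hasegawa-Wakatani: vorticity and density coupled through alpha*(phi - n)."""
    phi_hat = -np.fft.fft2(zeta) / k2      # solve laplacian(phi) = zeta
    phi_hat[0, 0] = 0.0
    phi = np.real(np.fft.ifft2(phi_hat))
    coupling = alpha * (phi - n)
    dzeta = coupling - poisson_bracket(phi, zeta)
    dn = coupling - poisson_bracket(phi, n) - kappa * ddx(phi, ky)
    return dzeta, dn

# a single seeded blob, the coherent structure the SOL notes describe
x = np.linspace(0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
zeta = np.exp(-((X - np.pi)**2 + (Y - np.pi)**2) / 0.5)
n = zeta.copy()
dt = 1e-3
for _ in range(10):
    dz, dn_ = hw_rhs(zeta, n)
    zeta, n = zeta + dt * dz, n + dt * dn_
```

The point of the sketch is the structural one made in the notes: the dynamical core is a 2D vorticity equation with a density coupling, which is exactly the setting vortex/flow-map methods target.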
GitHub - Lulzx/driftmap: Vortex flow map methods for plasma edge turbulence simulation
give me some ideas I may not know that could meaningfully improve the robustness of this system: novel techniques, research papers I may appreciate, libraries I may find interesting, or anything to make it all better overall
---
play devil's advocate and figure out the problems, then solve them using first principles reasoning, also figure out what's the blue ocean strategy for it
---
In the age of AI coding agents, software engineers finally get to do engineering again
the Navier-Stokes problem comes down to this:
how does directional diversity of vorticity reduce the stretching rate?
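One way to see the question numerically: the local stretching rate of vorticity is the quadratic form of the unit vorticity direction with the strain tensor S, and since incompressibility makes S trace-free, the direction-averaged rate vanishes, while vorticity perfectly aligned with the extensional eigenvector stretches at the full largest eigenvalue. A small sketch with a random (assumed incompressible) strain tensor:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
S = 0.5 * (A + A.T)
S -= np.trace(S) / 3 * np.eye(3)   # incompressibility: strain tensor is trace-free

eigvals, eigvecs = np.linalg.eigh(S)
aligned = eigvecs[:, -1]           # vorticity locked onto the extensional eigenvector
rate_aligned = aligned @ S @ aligned   # equals the largest eigenvalue of S

# directionally diverse vorticity: isotropically distributed directions
dirs = rng.normal(size=(200000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
rate_diverse = np.mean(np.einsum("ni,ij,nj->n", dirs, S, dirs))  # ~ trace(S)/3 = 0
```

So in this toy, directional diversity does suppress the mean stretching rate, with trace-freeness doing the work; the open question is how much of this alignment-averaging survives in full Navier-Stokes dynamics, where the vorticity direction is itself dynamically coupled to the strain.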
Kolmogorov Complexity (uncomputable ideal)
↓ practical approximation
Solomonoff Induction (perfect but halting problem)
↓ practical approximation
MDL / Minimum Description Length (computable but static)
↓ practical approximation
Neural Networks (learnable compression)
↓ ???
Titans (compression that LEARNS DURING COMPRESSION)
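The "learns during compression" rung can be illustrated with a classic adaptive coder (a Krichevsky-Trofimov style estimator, not the Titans architecture itself): the model updates on every symbol it encodes, so its codelength beats a static uniform code on skewed data without ever seeing the data in advance.

```python
import math
from collections import Counter

def adaptive_codelength(seq, alphabet_size):
    """KT-style coder: the probability model updates while it compresses."""
    counts = Counter()
    bits = 0.0
    for i, s in enumerate(seq):
        p = (counts[s] + 0.5) / (i + 0.5 * alphabet_size)
        bits += -math.log2(p)   # charge the current model's surprise
        counts[s] += 1          # then learn from the symbol just coded
    return bits

seq = "aab" * 300  # skewed: 'a' twice as common as 'b', 'c' never appears
static_bits = len(seq) * math.log2(3)          # fixed uniform code over {a, b, c}
adaptive_bits = adaptive_codelength(seq, 3)
```

The adaptive coder pays a small early "learning tax" while its counts are uninformative, then converges toward the source entropy, which is the whole appeal of compression that adapts during compression.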