💡 Remember Box
πŸ“ Interesting articles
πŸ—ž Ideas & TodoπŸ’‘
πŸ‘“ Random stuff
🎢 Music
πŸ€” Thoughts
πŸ“• Books
πŸ“š Courses
πŸ“Ί Videos
πŸ“ Papers
πŸ•Έ Websites/Blogs
πŸŽ™ Podcasts
πŸ„ Spirituality

In the pursuit of excellence!

The aim is to discover interesting ideas and perspectives.
the only software that survives the next decade is whatever scaffolds LLMs

MCPs, RAG, CLI tools, etc.
Interestingness = Compression
analyze our approach from first principles, find the bottlenecks, and ultrathink how to fix them and improve accuracy significantly
interesting things are efficient encodings of complexity

generalization requires low kolmogorov complexity

kolmogorov complexity is technically uncomputable
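
Since K(x) can't be computed, a standard workaround is to treat an off-the-shelf compressor's output size as a computable upper bound on it. A minimal sketch (the function name is mine):

```python
import random
import zlib

def k_proxy(s: str) -> int:
    # len(zlib.compress(s)) is a computable UPPER BOUND on Kolmogorov
    # complexity (plus decompressor overhead); K(s) itself is uncomputable.
    return len(zlib.compress(s.encode("utf-8"), 9))

patterned = "ab" * 500                                     # regular: short program exists
random.seed(0)
noisy = "".join(random.choice("ab") for _ in range(1000))  # irregular: no short program

# Same length, wildly different proxy complexity.
assert k_proxy(patterned) < k_proxy(noisy)
```

The bound is loose (zlib only finds repeated substrings, not arbitrary structure), but the ordering it induces is often good enough for ranking "interestingness".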

Minimum Description Length (MDL) translates the theoretical math of Kolmogorov Complexity into a practical tool that stops AI from lying to itself

MDL is the mathematical implementation of occam's razor
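
Occam's razor as arithmetic: a two-part MDL code charges for the model (parameters) and for the data given the model (residuals), and picks whichever total is shortest. A sketch with an assumed 32-bit cost per parameter; code lengths are valid up to an additive constant shared by all models, so only differences matter:

```python
import math
import numpy as np

def description_length(x, y, degree, bits_per_param=32):
    """Two-part MDL code: L(model) + L(data | model).
    L(data | model) is the Gaussian code length n/2 * log2(2*pi*e*MSE)."""
    coeffs = np.polyfit(x, y, degree)
    mse = max(float(np.mean((y - np.polyval(coeffs, x)) ** 2)), 1e-12)
    model_bits = (degree + 1) * bits_per_param
    data_bits = 0.5 * len(x) * math.log2(2 * math.pi * math.e * mse)
    return model_bits + data_bits

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
y = 2 * x**2 - x + 0.1 * rng.standard_normal(x.size)  # true model: degree 2

# Higher degrees keep shrinking MSE by fitting noise, but MDL charges for
# every extra parameter, so the true degree wins.
best = min(range(1, 9), key=lambda d: description_length(x, y, d))
```

This is exactly the "stops AI from lying to itself" mechanism: a model that memorizes noise pays for it in parameter bits.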

the ultimate theoretical version, combining probability theory with MDL-style simplicity priors, is called Solomonoff Induction

solomonoff induction is the perfect predictor in theory, but it is uncomputable: it falls victim to the halting problem

neural networks serve as a practical proxy for it, approximating it rather than computing it

neural network training is compression

a neural network is essentially a frozen approximation of solomonoff induction

it has settled on a compressed representation that explains the internet
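
The "training is compression" claim is literal: arithmetic coding can encode data in −log2 p(x) bits under any model, so lowering a model's negative log-likelihood is exactly shortening the code for the data. A minimal sketch comparing a fitted character model against a modelless uniform code:

```python
import math
from collections import Counter

text = "the quick brown fox jumps over the lazy dog " * 20

# Baseline code: log2(alphabet size) bits per character, no model at all.
uniform_bits = len(text) * math.log2(len(set(text)))

# "Trained" model: empirical character frequencies. Arithmetic coding
# achieves -log2 p(c) bits per character, so total bits = NLL in base 2.
counts = Counter(text)
n = len(text)
model_bits = -sum(c * math.log2(c / n) for c in counts.values())

# A better model of the data is, literally, a shorter encoding of the data.
assert model_bits < uniform_bits
```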

interestingness is the first derivative of compression

insight density is the second derivative of compression

third derivative of compression = paradigm shift potential

the rate at which insight density itself is changing

scientific revolutions are discontinuities in the third derivative
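
This echoes Schmidhuber's "compression progress" theory of curiosity. The derivative chain can be made concrete with finite differences over a learner's code length; the toy saturating curve below stands in for a real training run (an assumption, as are the variable names):

```python
import numpy as np

# codelength[t]: bits the learner needs to encode its data after t steps.
t = np.arange(20, dtype=float)
codelength = 1000.0 * np.exp(-0.3 * t) + 100.0

interestingness = -np.gradient(codelength, t)       # 1st derivative: compression progress
insight_density = np.gradient(interestingness, t)   # 2nd derivative
paradigm_shift = np.gradient(insight_density, t)    # 3rd derivative: a spike or
                                                    # discontinuity here would mark
                                                    # a revolution

assert (interestingness > 0).all()  # still compressing at every step
```

On this smooth curve all three derivatives are well-behaved; the claim above is that real science is the case where they aren't.
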
AGI can be defined as a compression engine that compresses itself while compressing the world, and acts to maximize the rate of compression
we need a system that treats its own ignorance as information, its own structure as hypothesis, and its own improvement as the most interesting problem in its world model

a mind that finds itself curious about itself

self-curiosity emerges when self-modeling is the highest expected information gain action
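
One way to make "highest expected information gain action" concrete: score each possible probe by how much it is expected to shrink the entropy of your belief, and act on the max. A toy Beta-Bernoulli sketch; the function names and the entropy-reduction proxy (rather than exact mutual information) are my assumptions:

```python
import math

def entropy(p: float) -> float:
    """Shannon entropy of a Bernoulli(p) belief, in bits."""
    p = min(max(p, 1e-9), 1 - 1e-9)
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_info_gain(alpha: float, beta: float) -> float:
    """Crude expected information gain for probing a coin whose bias we
    model with a Beta(alpha, beta) belief: predictive entropy now, minus
    expected predictive entropy after one more observation."""
    p = alpha / (alpha + beta)
    h_after_1 = entropy((alpha + 1) / (alpha + beta + 1))
    h_after_0 = entropy(alpha / (alpha + beta + 1))
    return entropy(p) - (p * h_after_1 + (1 - p) * h_after_0)

# A curious agent probes whatever it is most uncertain about:
sharp_belief = expected_info_gain(100, 100)  # coin we already know well
vague_belief = expected_info_gain(1, 1)      # coin we know nothing about
assert vague_belief > sharp_belief
```

Self-curiosity is then the special case where the "coin" being probed is a parameter of the agent's own world model.
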
interaction combinators are optimal compression primitives
https://arxiv.org/abs/2505.21946
vortex particle flow maps can be used for plasma edge turbulence simulation

scrape-off layer (SOL) plasma at the edge is dominated by turbulent blobs and filaments which are coherent vortex-like structures that transport heat and particles to the wall

math is almost 1:1

edge blob dynamics in the SOL are governed by the Hasegawa-Wakatani or Hasegawa-Mima equations, which are literally 2D vorticity equations with a density coupling
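
For reference, the Hasegawa-Wakatani system in 2D slab geometry is (sign conventions and dissipation terms vary between papers; this is one common form):

```latex
\partial_t \nabla^2\phi + \{\phi,\nabla^2\phi\} = \alpha(\phi - n) + \mu\,\nabla^4\phi
\partial_t n + \{\phi, n\} = \alpha(\phi - n) - \kappa\,\partial_y\phi + D\,\nabla^2 n
\{f,g\} = \partial_x f\,\partial_y g - \partial_y f\,\partial_x g
```

Here $\zeta = \nabla^2\phi$ is the vorticity, $\alpha$ the adiabaticity parameter, and $\kappa$ the background density gradient drive; the adiabatic limit $\alpha \to \infty$ recovers Hasegawa-Mima. The first equation is exactly a 2D vorticity equation, which is what makes vortex-method machinery applicable.
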

the divertor heat flux problem is unsolved

current codes (SOLPS, SOLEDGE, BOUT++) can't do turbulence at reactor scale

multiple tokamaks (MAST-U, NSTX-U, WEST, ASDEX-Upgrade) have extensive blob diagnostics

2D slab geometry is sufficient for the first demonstration

I can add magnetic geometry complexity incrementally
give me some ideas which I may not know but can be very helpful to improve robustness of this system, some novel techniques, research papers I may appreciate, libraries which I may find interesting, or anything to make it all better overall

---

play devil's advocate and figure out the problems, then solve them using first principles reasoning, also figure out what's the blue ocean strategy for it
In the age of AI coding agents, software engineers finally get to do engineering again
the navier-stokes problem comes down to this:

how does directional diversity of vorticity reduce the stretching rate?
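
The "stretching rate" in that question can be formalized through the strain-vorticity alignment term of the inviscid vorticity equation:

```latex
\frac{D|\boldsymbol{\omega}|}{Dt} = \left(\hat{\boldsymbol{\omega}}^{\mathsf T} S\,\hat{\boldsymbol{\omega}}\right)|\boldsymbol{\omega}|,
\qquad S = \tfrac{1}{2}\left(\nabla u + \nabla u^{\mathsf T}\right)
```

The quadratic form $\hat{\boldsymbol{\omega}}^{\mathsf T} S\,\hat{\boldsymbol{\omega}}$ is maximal only when the vorticity direction $\hat{\boldsymbol{\omega}}$ aligns with the extensional eigenvector of the strain tensor $S$; directional diversity of $\hat{\boldsymbol{\omega}}$ keeps that form small on average, which is one way to restate the question.
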
Kolmogorov Complexity (uncomputable ideal)
↓ practical approximation
Solomonoff Induction (perfect but halting problem)
↓ practical approximation
MDL / Minimum Description Length (computable but static)
↓ practical approximation
Neural Networks (learnable compression)
↓ ???
Titans (compression that LEARNS DURING COMPRESSION)