get feedback on driftmap (https://github.com/Lulzx/driftmap) from
1. Ben Dudson: BOUT++ lead (https://www.york.ac.uk/physics-engineering-technology/people/dudson/)
2. Paolo Ricci: GBS lead at EPFL, works on SOL turbulence validation (https://people.epfl.ch/paolo.ricci, https://scholar.google.com/citations?user=gUzKXHUAAAAJ&hl=it)
3. Stewart Zweben: works at Princeton Plasma Physics Laboratory, experimental blob expert, could advise on synthetic diagnostics (https://pst.pppl.gov/person/stewart_zweben.htm)
GitHub - Lulzx/driftmap: Vortex flow map methods for plasma edge turbulence simulation
Skybridge: TypeScript Framework for ChatGPT Apps | https://github.com/alpic-ai/skybridge
GitHub - alpic-ai/skybridge: Skybridge is a framework for building ChatGPT & MCP Apps
you can use skills to orchestrate subagents for complex workflows, which is a great way to optimize context since each subagent gets its own. been testing this today and it works really well. you could even wrap the entire skill in a subagent that invokes it to save more session context, though that might reduce visibility into what's happening (haven't tested this). weirdly enough, i was able to package the subagents within the skill itself in a {skill}/agents/ folder, which is not documented anywhere, but it seems to work pretty well
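a sketch of the packaging layout described above — subagent definitions bundled inside the skill's own folder. the file and agent names here are hypothetical; only the {skill}/agents/ convention comes from the note itself, and it's undocumented, so treat it as an observed behavior rather than a guarantee:

```
my-skill/
├── SKILL.md            # the skill entry point, as usual
└── agents/             # undocumented: subagents shipped with the skill
    ├── researcher.md   # hypothetical subagent, gets its own context
    └── summarizer.md   # hypothetical subagent invoked by the skill
```

the upside of this layout is that the skill is self-contained: installing it brings its subagents along, and each subagent invocation spends its own context window instead of the main session's.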
[2512.16301] Adaptation of Agentic AI: A Survey of Post-Training, Memory, and Skills
https://arxiv.org/abs/2512.16301
Large language model (LLM) agents are moving beyond prompting alone. ChatGPT marked the rise of general-purpose LLM assistants, DeepSeek showed that on-policy reinforcement learning with...
digital markets can be barrierless, permissionless, liquid, global, 24/7 and competitive in ways that legacy systems can't
the final design is the result of eliminating each misconception until only hardware and protocol limits remain
Attention Normalizes the Wrong Norm | Convergent Thinking
https://convergentthinking.sh/posts/attention-normalizes-the-wrong-norm/
Softmax constrains the L1 norm to 1, but should constrain the L2 norm.
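the post's claim can be checked numerically: softmax output always has L1 norm exactly 1 by construction, while its L2 norm floats anywhere between 1/sqrt(n) (uniform) and 1 (one-hot) depending on how peaked the logits are. a minimal sketch, not code from the post:

```python
import numpy as np

def softmax(x):
    # subtract max for numerical stability
    e = np.exp(x - x.max())
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
l1 = np.abs(p).sum()           # always 1.0, by construction
l2 = np.sqrt((p ** 2).sum())   # varies with peakedness of the logits

# uniform logits give the minimum possible L2 norm, 1/sqrt(n)
l2_uniform = np.sqrt((softmax(np.zeros(3)) ** 2).sum())
```

so the L1 norm carries no information about the attention distribution, while the L2 norm does — which is the asymmetry the post is pointing at.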