God of Prompt
RT @rryssf_: ICLR 2025 just gave an Outstanding Paper Award to a method that fixes model editing with one line of code 🤯

here's the problem it solves:

llms store facts in their parameters. sometimes those facts are wrong or outdated. "model editing" lets you surgically update specific facts without retraining the whole model.

the standard approach: find which parameters encode the fact (using causal tracing), then nudge those parameters to store the new fact.

works great for one edit. but do it a hundred times in sequence and the model starts forgetting everything else. do it a thousand times and it degenerates into repetitive gibberish.

every edit that inserts new knowledge corrupts old knowledge. you're playing whack-a-mole with the model's memory.

AlphaEdit reframes the problem.

instead of asking "how do we update knowledge with less damage?" the authors ask "how do we make edits mathematically invisible to preserved knowledge?"

the trick: before applying any parameter change, project it onto the null space of the preserved knowledge matrix.

in plain english: find the directions in parameter space where you can move freely without affecting anything the model already knows. only move in those directions.

it's like remodeling one room in a house by only touching walls that aren't load-bearing. the rest of the structure doesn't even know anything changed.
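
here's a minimal numpy sketch of that projection (my own illustration, not the paper's code; K0 stands in for the preserved-knowledge key matrix, and the weight/key layout is an assumption):

```python
import numpy as np

def null_space_projector(K0, tol=1e-8):
    # columns of K0 are keys for knowledge we want to preserve.
    # eigen-directions of K0 @ K0.T with ~zero singular value span
    # the subspace where an update cannot change outputs on those keys.
    U, S, _ = np.linalg.svd(K0 @ K0.T)
    U_null = U[:, S < tol]
    return U_null @ U_null.T  # P satisfies P @ K0 ≈ 0

d, n = 64, 16
K0 = np.random.randn(d, n)       # hypothetical preserved-knowledge keys
P = null_space_projector(K0)

delta = np.random.randn(32, d)   # raw edit for a (32, d) weight slice
delta_proj = delta @ P           # the "one line": project before applying
assert np.allclose(delta_proj @ K0, 0, atol=1e-6)  # invisible to old keys
```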

the results from Fang et al. across GPT2-XL, GPT-J, and LLaMA3-8B:

> average 36.7% improvement over existing editing methods
> works as a plug-and-play addition to MEMIT, ROME, and others
> models maintain 98.48% of general capabilities after 3,000 sequential edits
> prevents the gibberish collapse that kills other methods at scale

and the implementation is literally one line of code added to existing pipelines.

what i find genuinely elegant: the paper proves mathematically that output remains unchanged when querying preserved knowledge. this isn't "it works better in practice." it's "we can prove it doesn't touch what it shouldn't."
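
the core of that argument fits in one line (my notation, matching the sketch above: k is any preserved key, P the null-space projector, so P k = 0):

(W + ΔP) k = W k + Δ (P k) = W k

the projected edit is algebraically invisible to every preserved query, no matter how many edits you stack.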

the honest caveats:

> the largest model tested was LLaMA3-8B. nobody's shown this works at 70B+ scale yet
> a follow-up paper (AlphaEdit+) flagged brittleness when new knowledge directly conflicts with preserved knowledge, which is exactly the hardest case in production
> the whole approach assumes causal tracing correctly identifies where facts live, which isn't always clean

but as a core insight, this is the kind of work that deserves the award. not because it solves everything. because it changes the question.

the era of "edit and pray" for llm knowledge updates might actually be ending.

Javier Blas
CHART OF THE DAY: With only one month of data missing, US imports of Saudi crude likely fell to a fresh 30-year low in 2025.

According to monthly data from @EIAgov, the Jan-Nov 2025 period averaged 266,000 b/d, down from 274,000 b/d in the full year 2024. https://t.co/kLvAAcoCC8

App Economy Insights
📊 This Week in Visuals

$AMD $PLTR $UBER $LLY $ABBV $NVS $MRK $NVO $AMGN $PFE $NTDOY $PEP $MDLZ $FTNT $CMG $YUM $PYPL $RBLX $TTWO $HSY $RDDT $TEAM $AFRM $SNAP $NYT $ALGN $MTCH $PTON
https://t.co/LagqbFw2RX

The Transcript
Analyst: "I hope it’s not the death of software because my job might be dead, but that’s a whole different conversation."

CFO: "You don’t have other skills?"

$SPT https://t.co/DUgpLIS4Nt

The Transcript
RT @TheTranscript_: $AMZN CEO: "AWS growth continued to accelerate to 24%, the fastest we've seen in 13 quarters, up $2.6 billion quarter over quarter and nearly $7 billion year over year." https://t.co/EOCk99gJ5y

The Transcript
RT @TheTranscript_: Next week in earnings: https://t.co/kFe3Gv2spM

God of Prompt
stop feeling fomo about openclaw if you don’t even know what inference means

watch some videos, read some articles

learn and set it up cheap first, find your use cases

then invest into scaling when you see it actually improve your workflows

my friend just sent me his “AI setup” and i don’t have the heart to tell him

bro bought 6 Mac Minis because a YouTube video said he needs “local inference for agents”

he doesn’t even know what inference means 😭 https://t.co/CbpSpRsiyQ
- Alex Prompter

God of Prompt
RT @godofprompt: This guy literally dropped a complete Lovable masterclass to build apps 10x faster!

https://t.co/LBbzRLa01n
- damien

App Economy Insights
🗓️ Get ready for another big earnings week!

What are you watching?

• Monday: $MNDY $DT
• Tuesday: $NET $DDOG $SPOT $HOOD $LYFT $Z
• Wednesday: $SHOP $TTD $HUBS $APP $CSCO
• Thursday: $COIN $ANET $ABNB $ROKU $TOST $PINS $DKNG $EXPE $TWLO

All visualized in our newsletter. https://t.co/I7xOFPl3cR