Benjamin Hernandez
The tech sell-off is finally over. Support levels held firm and the rebound was violent. Reset your mind this weekend and trade with peace on Monday.
Trade easier: https://t.co/71FIJId47G
Message "CALM" to join a group that trades easy. $ORCL $AMD $INTC $NVDA $TSM $TCNNF
A powerful finish to a profitable week for all our members.
We locked in +89.58% on $SMX. The wins continued with $SLNG, $CNI, $WHLR, $FLXS, $BNAI, $DRCT, and $LIF.
Momentum is on our side for next week.
Relax this Saturday.
Moon Dev
don't buy an expensive computer to host these llms for clawdbot or whatever
your time is worth so much more than that https://t.co/l5Bif9xFSi
Dimitry Nakhla | Babylon Capital®
Chris Hohn on the importance of recurring revenue streams when looking at businesses:
"It is important, but the predictability of when they recur is not… What's most important for us is essential product or service… We don't like things that are discretionary."
This is a subtle but powerful distinction.
Not all recurring-like revenue is created equal. A SaaS subscription can be canceled. An essential service can be deferred, but not avoided.
___
Think of a toll booth.
If you commute from NJ to NY for work, you have to pay the toll. A snowstorm might pause traffic for a day or two, but it doesn't break the business. Once conditions normalize, cars flow again.
The toll booth keeps collecting, often with pricing power layered on top.
___
That's the kind of "recurrence" Hohn is talking about. It requires patience, but it's incredibly durable.
___
Examples below 👇
$SPGI & $MCO: Debt issuance can be delayed, but it must eventually be refinanced and rated. These are essential stamps of approval, not discretionary spend.
$ASML: ~25% of revenue comes from services tied to a massive installed base. Once machines are in fabs, service demand is inevitable.
$LRCX: ~35% of revenue comes from customer support and services. The installed base drives repeat economics.
$GE: ~66% of revenue is services. Engines are sold once; maintenance lasts decades.
$ISRG: ~75% of revenue comes from instruments, accessories, and services tied to its installed base of systems.
$FICO: ~60% of revenue comes from scores. Scores drive credit decisions across the economy. Cheap, essential, and deeply embedded.
___
Video: In Good Company | Norges Bank Investment Management (05/14/2025)
God of Prompt
RT @rryssf_: ICLR 2025 just gave an Outstanding Paper Award to a method that fixes model editing with one line of code.
here's the problem it solves:
llms store facts in their parameters. sometimes those facts are wrong or outdated. "model editing" lets you surgically update specific facts without retraining the whole model.
the standard approach: find which parameters encode the fact (using causal tracing), then nudge those parameters to store the new fact.
works great for one edit. but do it a hundred times in sequence and the model starts forgetting everything else. do it a thousand times and it degenerates into repetitive gibberish.
every edit that inserts new knowledge corrupts old knowledge. you're playing whack-a-mole with the model's memory.
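for concreteness, here's a toy numpy version of that "nudge" step (a minimal rank-1 update in the spirit of ROME-style editing; my own illustration, not the paper's actual method, and all shapes and names here are made up):

import numpy as np

def rank_one_edit(W, key, new_value):
    # smallest rank-1 change to W that makes W @ key equal new_value
    residual = new_value - W @ key
    return W + np.outer(residual, key) / (key @ key)

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 64))       # toy MLP weight that stores the "fact"
key = rng.normal(size=64)           # hidden representation of the fact's subject
new_value = rng.normal(size=32)     # target representation encoding the new fact
W_edited = rank_one_edit(W, key, new_value)
assert np.allclose(W_edited @ key, new_value)   # the fact now reads back edited

each such update also nudges W along directions that other, untouched facts rely on, which is where the sequential degradation comes from.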
AlphaEdit reframes the problem.
instead of asking "how do we update knowledge with less damage?" the authors ask "how do we make edits mathematically invisible to preserved knowledge?"
the trick: before applying any parameter change, project it onto the null space of the preserved knowledge matrix.
in plain english: find the directions in parameter space where you can move freely without affecting anything the model already knows. only move in those directions.
it's like remodeling one room in a house by only touching walls that aren't load-bearing. the rest of the structure doesn't even know anything changed.
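here's a rough numpy sketch of that projection (again my own toy illustration, not the AlphaEdit code; the eps cutoff and the helper name null_space_projector are assumptions for the example):

import numpy as np

def null_space_projector(K0, eps=1e-6):
    # K0: (d, n) matrix whose columns are keys of the knowledge to preserve.
    # Directions of K0 @ K0.T with ~zero singular values span its null space;
    # moving the weights only along those directions leaves W @ K0 unchanged.
    U, S, _ = np.linalg.svd(K0 @ K0.T)
    U_null = U[:, S < eps]          # basis of the null space
    return U_null @ U_null.T        # projector onto that subspace

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 64))       # toy weight matrix being edited
K0 = rng.normal(size=(64, 16))      # keys of the facts we want to keep intact
delta = rng.normal(size=(32, 64))   # raw update a ROME/MEMIT-style solver might propose
P = null_space_projector(K0)
W_new = W + delta @ P               # the "one line": project the update before applying it

# outputs on the preserved keys are (numerically) unchanged by the edit
assert np.allclose(W @ K0, W_new @ K0, atol=1e-6)

in the real method the update is solved jointly with this constraint so the new fact still lands cleanly; the toy above only shows why the preserved outputs can't move.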
the results from Fang et al. across GPT2-XL, GPT-J, and LLaMA3-8B:
> average 36.7% improvement over existing editing methods
> works as a plug-and-play addition to MEMIT, ROME, and others
> models maintain 98.48% of general capabilities after 3,000 sequential edits
> prevents the gibberish collapse that kills other methods at scale
and the implementation is literally one line of code added to existing pipelines.
what i find genuinely elegant: the paper proves mathematically that output remains unchanged when querying preserved knowledge. this isn't "it works better in practice." it's "we can prove it doesn't touch what it shouldn't."
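in the toy notation above, that guarantee is just the identity (my restatement, not the paper's exact formulation): (W + ΔP)K0 = WK0 + Δ(PK0) = WK0, because PK0 = 0 by construction.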
the honest caveats:
largest model tested was LLaMA3-8B. nobody's shown this works at 70B+ scale yet. a follow-up paper (AlphaEdit+) flagged brittleness when new knowledge directly conflicts with preserved knowledge, which is exactly the hardest case in production. and the whole approach assumes causal tracing correctly identifies where facts live, which isn't always clean.
but as a core insight, this is the kind of work that deserves the award. not because it solves everything. because it changes the question.
the era of "edit and pray" for llm knowledge updates might actually be ending.
Javier Blas
CHART OF THE DAY: With only one month of data missing, US imports of Saudi crude likely fell to a fresh 30-year low in 2025.
According to monthly data from @EIAgov, the Jan-Nov 2025 period averaged 266,000 b/d, down from 274,000 b/d in the full year 2024. https://t.co/kLvAAcoCC8
App Economy Insights
This Week in Visuals
$AMD $PLTR $UBER $LLY $ABBV $NVS $MRK $NVO $AMGN $PFE $NTDOY $PEP $MDLZ $FTNT $CMG $YUM $PYPL $RBLX $TTWO $HSY $RDDT $TEAM $AFRM $SNAP $NYT $ALGN $MTCH $PTON
https://t.co/LagqbFw2RX
The Transcript
Analyst: "I hope it's not the death of software because my job might be dead, but that's a whole different conversation."
CFO: "You don't have other skills?"
$SPT https://t.co/DUgpLIS4Nt
The Transcript
RT @TheTranscript_: $AMZN CEO: "AWS growth continued to accelerate to 24%, the fastest we've seen in 13 quarters, up $2.6 billion quarter over quarter and nearly $7 billion year over year." https://t.co/EOCk99gJ5y