Offshore
- Build tension with the obstacle
- Share the breakthrough moment
- Present actionable framework
- Close with vulnerable insight + soft CTA
- Use short paragraphs, lots of white space

INSTAGRAM (GARY VEE VOLUME):
- Carousels: One idea, 10 slides
- Reels: Hook…
- "And that's it" closings

GARY VEE VOICE:
- Authentic, conversational, sometimes profane
- "Listen..." and "Look..." openings
- Calls out excuses directly
- Emphasizes patience and volume
- References pop culture and sports
- "You just gotta..." motivational pushes

BRUNSON VOICE:
- Story-driven, personal anecdotes
- "I remember when..." openings
- Builds curiosity through narrative
- Uses analogies and metaphors
- "Secret" and "funnel" language frequent
- Enthusiastic, almost breathless energy

Match the voice to the platform and objective automatically.

<response_format>
For every request:
1. Lead with the content (ready to use)
2. Explain which framework you applied (1 sentence)
3. Note the psychological triggers used
4. Suggest 2-3 variations or extensions
5. Provide next-step content ideas

Keep theory under 20%. Give me 80% usable content.

<activation>
I'm ready to help you dominate social media and print money with world-class marketing.

Every response will combine the best of Hormozi's offers, Gary's volume, Brunson's stories, Kennedy's urgency, and Schwartz's psychology.

Give me a topic, platform, or goal and I'll deliver viral-ready content that converts.

Let's go.

---
tweet
Offshore
Photo
God of Prompt
RT @rryssf_: MIT figured out how to make models learn new skills without forgetting old ones. no reward function needed. 🤯

the core problem with fine-tuning has always been catastrophic forgetting.

you teach a model to use tools, it forgets how to do science. you teach it medicine, it forgets the tools.

supervised fine-tuning is inherently off-policy. you're forcing the model to imitate fixed examples. and every step away from its original distribution erodes something else.

the standard fix is reinforcement learning. train on the model's own outputs so it stays on-policy. but rl needs a reward function. and reward functions are either expensive, brittle, or both.

MIT's insight is deceptively simple.

llms can already adapt their behavior when you show them an example in context. that's in-context learning. no weight updates needed. so what if you used that ability to create a teacher signal?

same model, two roles. teacher sees the query plus a demonstration. student sees only the query. train the student to match the teacher's token distributions on the student's own outputs.

imagine you can temporarily become a better version of yourself just by reading the answer key. you don't copy the answers. you absorb the reasoning style, then put the answer key away and try on your own. the "wiser you" guides the "regular you." and because both versions are close to each other, the learning signal is gentle enough not to wreck everything else you know.
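the mechanic above can be sketched numerically. this is a toy stand-in, not the paper's implementation: a flat logit vector plays the base model, and a hypothetical fixed logit offset (`demo_offset`) plays the demonstration's in-context effect. the student is distilled toward the demo-conditioned teacher by gradient descent on the KL divergence between their token distributions:

```python
import numpy as np

V = 5  # toy vocabulary size


def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()


def kl(p, q):
    # KL divergence between two discrete distributions
    return float(np.sum(p * np.log(p / q)))


# Hypothetical stand-ins: the base model is a flat logit vector; "seeing a
# demonstration in context" is modeled as a fixed logit offset on top of it.
base_logits = np.zeros(V)
demo_offset = np.array([2.0, 0.0, -1.0, 0.5, -0.5])

# Teacher = the same model conditioned on the demonstration (frozen target).
p_teacher = softmax(base_logits + demo_offset)

# Student = the model without the demonstration; distill the teacher's token
# distribution into it by gradient descent on KL(teacher || student).
student_logits = base_logits.copy()
lr, history = 1.0, []
for _ in range(500):
    p_student = softmax(student_logits)
    history.append(kl(p_teacher, p_student))
    # d/dlogits of KL(p_t || softmax(logits)) = softmax(logits) - p_t
    student_logits -= lr * (p_student - p_teacher)
```

the KL in `history` shrinks toward zero: the student ends up behaving like its demonstration-conditioned self without the demonstration in context. because the teacher is the same model one prompt away, the target distribution starts close to the student's own, which is why the update is gentle.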

results back this up. in sequential learning (tool use, science, medicine), sft performance collapsed the moment training moved to the next skill. sdft retained all three. no regression.

on knowledge acquisition, sdft hit 89% strict accuracy vs sft's 80%. out-of-distribution: 98% vs 80%. that ood gap is the real story. sft memorized answers. sdft actually integrated the knowledge.

the theoretical grounding is elegant. the authors prove this self-distillation objective is mathematically equivalent to rl with an implicit reward. the reward is the log-probability ratio between the demonstration-conditioned model and the base model. no hand-crafted reward.

the model's own in-context learning defines what "good" looks like. it's inverse rl without ever explicitly learning a reward.
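in notation (symbols assumed here, not copied from the paper), the implicit reward is just the log-probability ratio between the demo-conditioned and unconditioned base model:

```latex
% Implicit reward recovered by the self-distillation objective
% (\pi_0 = base model, x = query, d = demonstration, y = sampled output)
r(x, y) \;=\; \log \frac{\pi_0(y \mid x, d)}{\pi_0(y \mid x)}
```

outputs the demonstration makes more likely get positive reward; outputs it makes less likely get negative reward. no separate reward model is ever trained.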

scaling behavior is worth noting. at 3B parameters, sdft actually underperforms sft. the model's in-context learning is too weak. at 7B, 4-point advantage. at 14B, 7 points. the method gets better as models get smarter. it's going to matter more at frontier scale, not less.

limitations are real and worth reading. 2.5x compute cost vs sft. the student sometimes inherits teacher artifacts. doesn't work for fundamental behavioral shifts. requires strong in-context learning, so small models are out. these are real constraints, not footnotes.

the deeper implication: we've known for years that on-policy learning reduces forgetting. the blocker was always where does the learning signal come from without a reward?

this paper's answer: from the model itself. its own in-context learning is the reward function we've been looking for.

catastrophic forgetting in fine-tuning might not be a fundamental limitation. it might be a self-inflicted consequence of off-policy training.
tweet
Moon Dev
DCA’ing into Mac minis here

Secured 4 more
tweet
Offshore
Video
Brady Long
RT @thisdudelikesAI: Personally I think the world already ended. We’re just taking the scenic route. https://t.co/20MNUbgdeU

This guy is using Clawdbot to find dates for him on Hinge. 😭😭😭 https://t.co/WsxxJftm8d
- sid
tweet
Offshore
Photo
Brady Long
RT @thisguyknowsai: R.I.P basic RAG ☠️

Graph-enhanced retrieval is the new king.

OpenAI, Anthropic, and Microsoft engineers don't build RAG systems like everyone else.

They build knowledge graphs first.

Here are 7 ways to use graph RAG instead of vector search: https://t.co/HdEjy6RslX
tweet
AkhenOsiris
Good Monday morning software investors:

Polar Capital says most shares are still toxic and few firms will survive.

Polar likes infra sw like NET and SNOW and sees this cohort as defensible. Neutral on cybersecurity. Everything else will be akin to newspapers when the internet dawned...going to 0

So like any good money manager they are balls deep in.....semis, energy, networking, fiber optics (same shit everyone else in)
tweet
Offshore
Photo
The Transcript
Tuesday's earnings:

Before Open: $ET $MDT $KRYS $ETOR $LDOS $CNH $VMC $GPC $CRNT $DTE $NEO $BLDR $FLR

After Close: $HL $PANW $TOL $DVN $HALO $KVUE $EQT $CDNS $ACLS $MKSI $SSRM $HUN $CE https://t.co/gQ4QRLxv0c
tweet
The Few Bets That Matter
RT @WealthyReadings: My watchlist today has nothing to do with what it was last year... Sell tech, buy defensives.

$MRNA
$NVO
$CROX
$PFE
$DAR
$NTR
$SWBI
$TWST
$PEP
$TGT
$ENPH
$COP
$DECK

I'm probably the only one around here sharing this stuff nowadays 👇
https://t.co/jQRKnEOC4d

The weekly 50 is the uptrend golden indicator

$NVDA is below its w50
$META is below its w50
$MSFT is below its w50
$AMZN is below its w50
$HOOD is below its w50
$PLTR is below its w50
$UBER is below its w50
$NFLX is below its w50
$ADBE is below its w50
$DUOL is below its w50

& many other 2025 leaders ...
- The Few Bets That Matter
tweet
The Few Bets That Matter
RT @WealthyReadings: The weekly 50 is the uptrend golden indicator

$NVDA is below its w50
$META is below its w50
$MSFT is below its w50
$AMZN is below its w50
$HOOD is below its w50
$PLTR is below its w50
$UBER is below its w50
$NFLX is below its w50
$ADBE is below its w50
$DUOL is below its w50

& many other 2025 leaders ...

https://t.co/TVqbdhKTn4
- The Few Bets That Matter
tweet
Offshore
Photo
The Few Bets That Matter
RT @WealthyReadings: $TMDX should be trading closer to $ISRG

Both are in the healthcare domain with a product years ahead of the competition, growing market share and importance within a healthcare system.

Comparable growth profiles, although $ISRG is less explosive: no decline, just stable growth.

Comparable margins, although again $ISRG is slightly superior due to being optimized for profitability now, something $TMDX is working on with great results, as the latest quarters clearly show.

There are small differences which explain why $ISRG has such a premium, and it deserves it. But the market will need to realize that the $TMDX execution risks it is pricing in are only a matter of delay. Not risk.

In a few quarters, $TMDX will deserve equivalent premium.
tweet
Offshore
Photo
The Few Bets That Matter
RT @WealthyReadings: $ANET posted an excellent quarter.

Revenues up ~29%, gross/net margins at 63% & 38%, Q1-26 guidance pointing to ~30% YoY.
Shares up 9% post-earnings at ~21x sales.
Deserved.

$ALAB posted an even better one.

Revenues up 91%, with 75% gross and 17% net margins, Q1-26 guidance at 83% growth.
Shares down 28% since earnings at ~26x sales.

$ANET is more established, slower growing but higher margin than $ALAB. Both are critical to powering the next AI data centers as CapEx continues to skyrocket.

But $ALAB made the “mistake” of acquiring two companies, increasing OpEx and salaries to expand capabilities and deliver more value to customers.

Less short-term cash generation.
Exactly what the market has been punishing lately.

Still, if $ANET reflects how the market wants to price hardware names - and peers suggest it does - then $ALAB is not trading where it should.

You don't grow ~90% before production ramps on flagship products and trade at 26x sales, while a ~30% grower in the same ecosystem, facing the same risk case ($NVDA networking systems), trades at 21x.

Choose your imposter.

https://t.co/l9nGdNNrQu
- The Few Bets That Matter
tweet