Offshore
Photo
God of Prompt
RT @godofprompt: How to use LLMs for competitive intelligence (scraping, analysis, reporting): https://t.co/xlGOSpRQPy
tweet
Offshore
Photo
Bourbon Capital
$GOOG 13F Q4 2025

Top positions:
1) AST SpaceMobile $ASTS
2) Planet Labs PBC $PL
3) Revolution Medicines $RVMD
4) Arm Holdings $ARM
5) Freshworks $FRSH
6) UiPath $PATH
7) GitLab $GTLB
8) Tempus AI $TEM

No major changes in Q4 https://t.co/aToYSQMBMe
tweet
Moon Dev
RT @ChrisCamillo: @MoonDevOnYT Many variables.

$GOOG has more AI firepower, but core search is still the soft underbelly.

$AMZN’s ecomm moat is massive and positioned to absorb the AI efficiency wave.

Higher frontier model ceiling with Google. Clearer monetization path with Amazon.
tweet
Offshore
Photo
Dimitry Nakhla | Babylon Capital®
RT @DimitryNakhla: Stock prices vs operating cash flow 💵

Periods with high volatility, as we’ve recently seen, often create meaningful gaps between stock price and business fundamentals

Here’s a thread of 10 quality stocks with notable discrepancies worth examining further🧵 https://t.co/mMKX61Ma3M
tweet
Offshore
Photo
DAIR.AI
Test-time reasoning models often converge too early.

Achieving broader reasoning coverage requires longer sequences, yet the probability of sampling such sequences decays exponentially during autoregressive generation.
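The exponential decay is easy to see with a one-line model: if the model continues rather than terminates with some per-token probability p, then the chance of sampling a trace of at least L tokens is p^L. A toy illustration (p = 0.999 is an assumed value for demonstration, not a number from the paper):

```python
# Toy illustration of why long reasoning traces are exponentially rare.
# Assumption: the model emits "continue" (rather than stop) with
# probability p per token, independently; then P(length >= L) = p ** L.
p = 0.999  # hypothetical per-token continuation probability

print(p ** 1_000)   # roughly 0.37: 1k-token traces are still common
print(p ** 10_000)  # roughly 4.5e-5: 10k-token traces are vanishingly rare
```

Even a continuation probability very close to 1 makes the long sequences needed for broad exploration exponentially unlikely under plain autoregressive sampling.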

The authors call this the "Shallow Exploration Trap."

This new research introduces Length-Incentivized Exploration (LIE), a simple RL recipe that explicitly incentivizes models to explore more during test-time reasoning.

The method uses a length-based reward coupled with a redundancy penalty, maximizing in-context state coverage without generating repetitive filler tokens.

How does it work?

The length reward elevates the upper bound of reasoning states that the model can reach.

The redundancy penalty ensures those extra tokens are informative, not repetitive. Together, they break the shallow exploration trap by encouraging models to generate, verify, and refine multiple hypotheses within a single continuous context.
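A minimal sketch of what such a shaped reward could look like, assuming a verifiable task reward, a capped length bonus, and an n-gram repetition penalty. The names, weights, and exact formulas here are illustrative guesses, not the paper's definitions:

```python
# Hypothetical sketch of a length-incentivized reward with a redundancy
# penalty, in the spirit of LIE (all names and coefficients are assumed).

def ngram_redundancy(tokens, n=3):
    """Fraction of n-grams that are repeats; a proxy for filler text."""
    if len(tokens) < n:
        return 0.0
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return 1.0 - len(set(ngrams)) / len(ngrams)

def lie_reward(tokens, correct, max_len=12_000, alpha=0.1, beta=0.2):
    """Task reward + capped length bonus - redundancy penalty."""
    task = 1.0 if correct else 0.0
    length_bonus = alpha * min(len(tokens) / max_len, 1.0)  # rewards longer traces
    redundancy = beta * ngram_redundancy(tokens)            # punishes repetitive filler
    return task + length_bonus - redundancy

# Two traces of equal length: one explores, one repeats itself.
trace_a = list(range(600))   # long and non-repetitive
trace_b = [1, 2, 3] * 200    # same length, highly repetitive
print(lie_reward(trace_a, correct=True) > lie_reward(trace_b, correct=True))
```

The key property is that length alone does not pay: a trace that hits the length bonus by repeating itself loses more to the redundancy term than it gains.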

Applied to GSPO on Qwen3-4B-Base, LIE achieves a 4.4% average gain on in-domain math benchmarks (MATH, Olympiad, AMC, AIME) and 1.5% on out-of-domain tasks like ARC-c, GPQA, and MMLU-Pro. On AIME25, performance jumps from 20.5% to 26.7%.

The recipe generalizes across architectures: Qwen3-4B post-trained sees a 2.0% gain, and Llama-OctoThinker-3B improves by 3.0%.

Crucially, LIE also changes how models reason.

Analysis shows a substantial increase in backtracking behavior (103 to 121 instances), as well as in verification, subgoal setting, and enumeration. The model doesn't just "think" more; it "thinks" differently.

The approach also enables continual scaling via curriculum training. A second stage with a relaxed 12k token limit pushes the GSPO+LIE average from 53.8% to 56.3%, confirming that the recipe converts additional compute into accuracy gains.

Most RL methods for reasoning implicitly encourage length but don't address exploration quality. LIE shows that explicitly incentivizing longer, non-redundant reasoning trajectories unlocks deeper in-context exploration and better test-time scaling.

Paper: https://t.co/mavsfj1Mlz

Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
tweet
Offshore
Video
Fiscal.ai
RT @BradoCapital: Query, now live! 🎉

A full new feature in the sidebar that lets you search keywords across companies and documents.

Help me name this new feature we are dropping v1 of soon!

🔍 It allows you to query terms across:
1) the companies you pick
2) the document types you pick

Current names in the running are:
"Query"
"Find"
"Discover"
"Explore" https://t.co/8hAVZnJBM4
- Braden Dennis
tweet
Offshore
Video
Brady Long
this is funny, but it also made me think about how fast user expectations have shifted

AI might forget. Thine does not. Keep remembering. https://t.co/GIL4Pv5qjs
- Thine
tweet
Offshore
Photo
The Transcript
$AMAT CEO: Gross margins at highest levels in 25 years

"What I'd like to say is that we made progress in gross margins, up 700 basis points since I became CEO, and we're now at the highest level in 25 years, and I strongly believe we're driving the right actions to sustainably increase the value we create for customers and for Applied to share in the value we are creating."
tweet
Offshore
Photo
The Few Bets That Matter
$ANET posted an excellent quarter.

Revenues up ~29%, gross/net margins at 63% & 38%, Q1-26 guidance pointing to ~30% YoY.
Shares up 9% post-earnings at ~21x sales.
Deserved.

$ALAB posted an even better one.

Revenues up 91%, with 75% gross and 17% net margins, Q1-26 guidance at 83% growth.
Shares down 28% since earnings at ~26x sales.

$ANET is more established, slower growing but higher margin than $ALAB. Both are critical to powering the next AI data centers as CapEx continues to skyrocket.

But $ALAB made the “mistake” of acquiring two companies, increasing OpEx and salaries to expand capabilities and deliver more value to customers.

Less short-term cash generation.
Exactly what the market has been punishing lately.

Still, if $ANET reflects how the market wants to price hardware names (and peers suggest it does), then $ALAB is not trading where it should.

You don't grow ~90% before production ramps on flagship products and trade at 26x sales, while a ~30% grower in the same ecosystem, facing the same key risk ($NVDA's networking systems), trades at 21x.
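One way to make the comparison concrete is a growth-adjusted sales multiple (P/S divided by the forward growth rate). This is a rough screening heuristic, not a valuation model, using only the figures quoted above:

```python
# Growth-adjusted sales multiple: P/S divided by forward revenue growth.
# A back-of-the-envelope heuristic using the numbers from this post.

def growth_adjusted_ps(ps_multiple: float, growth_pct: float) -> float:
    """Lower means cheaper per point of expected growth."""
    return ps_multiple / growth_pct

anet = growth_adjusted_ps(21, 30)  # ~0.70x per point of growth
alab = growth_adjusted_ps(26, 83)  # ~0.31x per point of growth
```

On this crude measure, $ALAB screens at less than half $ANET's multiple per point of expected growth, which is the discrepancy the post is pointing at.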

Choose your imposter.

https://t.co/l9nGdNNrQu
- The Few Bets That Matter
tweet
Offshore
Video
Startup Archive
Keith Rabois: “The velocity of your company improves by adding barrels”

Keith shares his “Barrels and Ammunition” framework for building effective teams:

“Most companies—once they get into hiring mode—just hire a lot of people. And you expect that as you add people your throughput and velocity of shipping things is going to increase. But it turns out it doesn’t work that way. Usually when you hire more engineers, you actually don’t get that much more done. You sometimes get less done.”

Keith argues that the reason for this is that most people in a company—even great people—are “ammunition.” But to improve velocity, you need “barrels”. He defines barrels as extremely talented people who can take ideas from inception all the way through to fully shipped product. Most companies start with one barrel (the founder). And when they add another, they can get twice as many things done per week, quarter, etc.

But true barrels are incredibly difficult to find:

“When you have them, give them lots of equity, promote them, take them to dinner every week because they’re virtually irreplaceable. They’re also very culturally specific. A barrel at one company may not be a barrel at another company.”

Video source: @ycombinator (2014)
tweet
Offshore
Photo
The Transcript
$SPOT Co-CEO says senior engineers at Spotify Technology have largely stopped writing code themselves since December 2025, when Anthropic's Claude Opus 4.5 came out:

"So it is a big change. It is real and it's happening fast" https://t.co/6o7rTlAkRO
tweet