Brady Long
RT @bigaiguy: If this existed like 10 years ago my Grandpa never would have beaten me in Gin
Insane. See for yourself on Hugging Face https://t.co/yDVUJ2lMp8
MiniCPM-o 4.5: Seeing, Listening, and Speaking - All at Once.
Beyond traditional turn-taking, we've built a Native Full-Duplex engine that allows a 9B model to see, listen, and speak in one concurrent, non-blocking stream.
Watch how it masters real-world complexity in real-time:
- Proactive Auditory Interaction: Interrupts itself to alert you when it hears a "Ding!" while reading cards.
- Temporal Flow Tracking: Follows your pen in real-time, narrating and "mind-reading" your drawing as you sketch.
- Omni-Perception: Scans groceries & identifies prices on the fly.
Why it's a category leader:
- Performance: Surpasses GPT-4o and Gemini 2.0 Pro on OpenCompass (Avg. 77.6).
- Architecture: End-to-end fusion of SigLip2, Whisper, and CosyVoice2 on a Qwen3-8B base.
- Efficiency: Full-duplex live streaming now runs locally on PCs via llama.cpp-omni.
The era of "Wait-and-Response" AI is over. Proactive, real-time intelligence is now open-source.
Experience it on Hugging Face: https://t.co/KzzgiGYhVr
#MiniCPM #Omnimodal #FullDuplex #EdgeAI #OpenSource #ComputerVision - OpenBMB
God of Prompt
RT @godofprompt: How to use LLMs for competitive intelligence (scraping, analysis, reporting): https://t.co/xlGOSpRQPy
Bourbon Capital
$GOOG 13F Q4 2025
Top positions:
1) AST SpaceMobile $ASTS
2) Planet Labs PBC $PL
3) Revolution Medicines $RVMD
4) Arm Holdings $ARM
5) Freshworks $FRSH
6) UiPath $PATH
7) GitLab $GTLB
8) Tempus AI $TEM
No major changes in Q4 https://t.co/aToYSQMBMe
Moon Dev
RT @ChrisCamillo: @MoonDevOnYT Many variables.
$GOOG has more AI firepower, but core search is still the soft underbelly.
$AMZN's ecomm moat is massive and positioned to absorb the AI efficiency wave.
Higher frontier model ceiling with Google. Clearer monetization path with Amazon.
Dimitry Nakhla | Babylon Capital®
RT @DimitryNakhla: Stock prices vs operating cash flow
Periods with high volatility, as we've recently seen, often create meaningful gaps between stock price and business fundamentals
Here's a thread of 10 quality stocks with notable discrepancies worth examining further https://t.co/mMKX61Ma3M
DAIR.AI
Test-time reasoning models often converge too early.
Achieving broader reasoning coverage requires longer sequences, yet the probability of sampling such sequences decays exponentially during autoregressive generation.
The authors call this the "Shallow Exploration Trap."
This new research introduces Length-Incentivized Exploration (LIE), a simple RL recipe that explicitly incentivizes models to explore more during test-time reasoning.
The method uses a length-based reward coupled with a redundancy penalty, maximizing in-context state coverage without generating repetitive filler tokens.
How does it work?
The length reward elevates the upper bound of reasoning states that the model can reach.
The redundancy penalty ensures those extra tokens are informative, not repetitive. Together, they break the shallow exploration trap by encouraging models to generate, verify, and refine multiple hypotheses within a single continuous context.
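The length-bonus-plus-redundancy-penalty idea can be sketched in a few lines. This is an illustrative reconstruction, not the paper's formulation: the coefficients `alpha` and `beta`, the 12k-token cap, and the n-gram redundancy measure are all assumptions.

```python
def repeated_ngram_fraction(tokens, n=4):
    """Fraction of n-grams in a trace that repeat an earlier n-gram."""
    if len(tokens) < n:
        return 0.0
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return 1.0 - len(set(ngrams)) / len(ngrams)

def lie_reward(tokens, correct, alpha=1e-4, beta=2.0, max_len=12_000):
    """Illustrative LIE-style reward: task reward + capped length bonus - redundancy penalty."""
    task = 1.0 if correct else 0.0
    length_bonus = alpha * min(len(tokens), max_len)   # incentivize longer traces
    redundancy = beta * repeated_ngram_fraction(tokens)  # but penalize filler
    return task + length_bonus - redundancy
```

Under this shape, a long trace of repetitive filler scores below an equally long trace of diverse reasoning, which is the behavior the penalty is meant to enforce.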
Applied to GSPO on Qwen3-4B-Base, LIE achieves a 4.4% average gain on in-domain math benchmarks (MATH, Olympiad, AMC, AIME) and 1.5% on out-of-domain tasks like ARC-c, GPQA, and MMLU-Pro. On AIME25, performance jumps from 20.5% to 26.7%.
The recipe generalizes across architectures: Qwen3-4B post-trained sees a 2.0% gain, and Llama-OctoThinker-3B improves by 3.0%.
Crucially, LIE also changes how models reason.
Analysis shows a substantial increase in backtracking behavior (from 103 to 121 instances), verification, subgoal setting, and enumeration. The model doesn't just think more; it thinks differently.
The approach also enables continual scaling via curriculum training. A second stage with a relaxed 12k token limit pushes the GSPO+LIE average from 53.8% to 56.3%, confirming that the recipe converts additional compute into accuracy gains.
Most RL methods for reasoning implicitly encourage length but don't address exploration quality. LIE shows that explicitly incentivizing longer, non-redundant reasoning trajectories unlocks deeper in-context exploration and better test-time scaling.
Paper: https://t.co/mavsfj1Mlz
Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
Fiscal.ai
RT @BradoCapital: Query, now live!
New feature in the sidebar that allows searching keywords across companies and documents.
Help me name this new feature we are dropping v1 of soon!
It allows you to query certain terms across:
1) Pick which companies
2) Pick which document types
Current names in the running are:
"Query"
"Find"
"Discover"
"Explore" https://t.co/8hAVZnJBM4 - Braden Dennistweet
The Transcript
$AMAT CEO: Gross margins at highest levels in 25 years
"What I'd like to say is that we made progress in gross margins, up 700 basis points since I became CEO, and we're now at the highest level in 25 years, and I strongly believe we're driving the right actions to sustainably increase the value we create for customers and for Applied to share in the value we are creating."
The Few Bets That Matter
$ANET posted an excellent quarter.
Revenues up ~29%, gross/net margins at 63% & 38%, Q1-26 guidance pointing to ~30% YoY.
Shares up 9% post-earnings at ~21x sales.
Deserved.
$ALAB posted an even better one.
Revenues up 91%, with 75% gross and 17% net margins, Q1-26 guidance at 83% growth.
Shares down 28% since earnings at ~26x sales.
$ANET is more established, slower growing but higher margin than $ALAB. Both are critical to powering the next AI data centers as CapEx continues to skyrocket.
But $ALAB made the "mistake" of acquiring two companies, increasing OpEx and salaries to expand capabilities and deliver more value to customers.
Less short-term cash generation.
Exactly what the market has been punishing lately.
Still, if $ANET reflects how the market wants to price hardware names - and peers suggest it does - then $ALAB is not trading where it should.
You don't grow ~90% before production ramps on flagship products and trade at 26x sales, while a ~30% grower in the same ecosystem facing the same risk case - $NVDA networking systems - trades at 21x.
Choose your imposter.
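The multiple-vs-growth comparison above can be sanity-checked with a crude growth-adjusted price-to-sales heuristic. This is my illustration, not the author's metric; the multiples and growth rates are the ones quoted in the thread.

```python
def growth_adjusted_ps(ps_multiple, fwd_growth_pct):
    """Price-to-sales divided by forward growth rate (% YoY):
    a rough 'price paid per point of growth' heuristic."""
    return ps_multiple / fwd_growth_pct

# Figures from the thread:
anet = growth_adjusted_ps(21, 30)  # ~21x sales, ~30% YoY guide -> 0.70
alab = growth_adjusted_ps(26, 83)  # ~26x sales, ~83% YoY guide -> ~0.31
```

On this crude measure, the faster grower carries the lower price per point of growth, which is the discrepancy the thread is pointing at.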
https://t.co/l9nGdNNrQu