Quiver Quantitative
JUST IN: Representatives Mark Pocan and Andy Biggs have introduced legislation which would cut the Pentagon's budget if it does not achieve a clean audit.
The Defense Department has failed 8 consecutive audits. https://t.co/v2TFt7X0Xt
tweet
Brady Long
RT @bigaiguy: If this existed like 10 years ago my Grandpa never would have beaten me in Gin.
Insane. See for yourself on Hugging Face https://t.co/yDVUJ2lMp8
MiniCPM-o 4.5: Seeing, Listening, and Speaking - All at Once.
Beyond traditional turn-taking, we've built a Native Full-Duplex engine that allows a 9B model to see, listen, and speak in one concurrent, non-blocking stream.
Watch how it masters real-world complexity in real-time:
- Proactive Auditory Interaction: Interrupts itself to alert you when it hears a "Ding!" while reading cards.
- Temporal Flow Tracking: Follows your pen in real-time, narrating and "mind-reading" your drawing as you sketch.
- Omni-Perception: Scans groceries & identifies prices on the fly.
Why it's a category leader:
- Performance: Surpasses GPT-4o and Gemini 2.0 Pro on OpenCompass (Avg. 77.6).
- Architecture: End-to-end fusion of SigLip2, Whisper, and CosyVoice2 on a Qwen3-8B base.
- Efficiency: Full-duplex live streaming now runs locally on PCs via llama.cpp-omni.
The era of "Wait-and-Response" AI is over. Proactive, real-time intelligence is now open-source.
Experience it on Hugging Face: https://t.co/KzzgiGYhVr
#MiniCPM #Omnimodal #FullDuplex #EdgeAI #OpenSource #ComputerVision - OpenBMB
tweet
God of Prompt
RT @godofprompt: How to use LLMs for competitive intelligence (scraping, analysis, reporting): https://t.co/xlGOSpRQPy
tweet
Bourbon Capital
$GOOG 13F Q4 2025
Top positions:
1) AST SpaceMobile $ASTS
2) Planet Labs PBC $PL
3) Revolution Medicines $RVMD
4) Arm Holdings $ARM
5) Freshworks $FRSH
6) UiPath $PATH
7) GitLab $GTLB
8) Tempus AI $TEM
No major changes in Q4 https://t.co/aToYSQMBMe
tweet
Moon Dev
RT @ChrisCamillo: @MoonDevOnYT Many variables.
$GOOG has more AI firepower, but core search is still the soft underbelly.
$AMZN's ecomm moat is massive and positioned to absorb the AI efficiency wave.
Higher frontier model ceiling with Google. Clearer monetization path with Amazon.
tweet
Dimitry Nakhla | Babylon Capital®
RT @DimitryNakhla: Stock prices vs operating cash flow
Periods with high volatility, as we've recently seen, often create meaningful gaps between stock price and business fundamentals.
Here's a thread of 10 quality stocks with notable discrepancies worth examining further: https://t.co/mMKX61Ma3M
tweet
DAIR.AI
Test-time reasoning models often converge too early.
Achieving broader reasoning coverage requires longer sequences, yet the probability of sampling such sequences decays exponentially during autoregressive generation.
The authors call this the "Shallow Exploration Trap."
This new research introduces Length-Incentivized Exploration (LIE), a simple RL recipe that explicitly incentivizes models to explore more during test-time reasoning.
The method uses a length-based reward coupled with a redundancy penalty, maximizing in-context state coverage without generating repetitive filler tokens.
How does it work?
The length reward elevates the upper bound of reasoning states that the model can reach.
The redundancy penalty ensures those extra tokens are informative, not repetitive. Together, they break the shallow exploration trap by encouraging models to generate, verify, and refine multiple hypotheses within a single continuous context.
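As a rough sketch of the idea described above (not the paper's exact formulation; the function names, coefficients, and n-gram-based redundancy measure here are illustrative assumptions), the combined reward could look like:

```python
# Illustrative LIE-style reward: task reward + length bonus - redundancy
# penalty. Coefficients (alpha, beta), the n-gram size, and the cap are
# assumptions, not the paper's exact recipe.

def ngram_redundancy(tokens, n=4):
    """Fraction of n-grams in the sequence that repeat an earlier n-gram."""
    if len(tokens) < n:
        return 0.0
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return 1.0 - len(set(grams)) / len(grams)

def lie_reward(correct, tokens, max_len=12_000, alpha=0.5, beta=1.0):
    """Reward a correct answer, incentivize length up to a cap,
    and penalize repetitive (low-information) tokens."""
    task = 1.0 if correct else 0.0
    length_bonus = alpha * min(len(tokens), max_len) / max_len
    penalty = beta * ngram_redundancy(tokens)
    return task + length_bonus - penalty
```

Under this sketch, a trajectory that pads its chain of thought with repeated n-grams earns the length bonus but pays it back through the redundancy penalty, so only genuinely novel reasoning tokens increase the reward.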
Applied to GSPO on Qwen3-4B-Base, LIE achieves a 4.4% average gain on in-domain math benchmarks (MATH, Olympiad, AMC, AIME) and 1.5% on out-of-domain tasks like ARC-c, GPQA, and MMLU-Pro. On AIME25, performance jumps from 20.5% to 26.7%.
The recipe generalizes across architectures: Qwen3-4B post-trained sees a 2.0% gain, and Llama-OctoThinker-3B improves by 3.0%.
Crucially, LIE also changes how models reason.
Analysis shows a substantial increase in backtracking behavior (103 to 121 instances), along with more verification, subgoal setting, and enumeration. The model doesn't just "think" longer; it "thinks" differently.
The approach also enables continual scaling via curriculum training. A second stage with a relaxed 12k token limit pushes the GSPO+LIE average from 53.8% to 56.3%, confirming that the recipe converts additional compute into accuracy gains.
Most RL methods for reasoning implicitly encourage length but don't address exploration quality. LIE shows that explicitly incentivizing longer, non-redundant reasoning trajectories unlocks deeper in-context exploration and better test-time scaling.
Paper: https://t.co/mavsfj1Mlz
Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
tweet