Offshore
The Transcript
RT @dkhos: Great work to the @Uber teams - we'll keep building and delivering ... Q after Q ... no let up. And thank you to PMR and congrats BKM on the new gig!
$UBER Q4’25 earnings are out — a standout quarter to end a record year, with our largest and most-engaged consumer base ever:
> MAPCs accelerated, up 18% to 202M
> Trips accelerated, up 22% to 3.8B
> Gross Bookings accelerated, up 22% to $54.1B
> Adjusted EBITDA accelerated, up 35% to $2.5B
> TTM FCF of $9.8 billion - Balaji Krishnamurthy
DAIR.AI
We are just scratching the surface of agentic RAG systems.
Current RAG systems don't let the model think about retrieval.
Retrieval is still mostly treated as a static step.
In practice, RAG retrieves passages in one shot, concatenates them into the context, and hopes the model figures it out.
More sophisticated methods predefine workflows that the model must follow step-by-step.
But neither approach lets the model decide how to search.
This new research introduces A-RAG, an agentic RAG framework that exposes hierarchical retrieval interfaces directly to the model, turning it into an active participant in the retrieval process.
Instead of one-shot retrieval, A-RAG gives the agent three tools at different granularities: keyword_search for exact lexical matching, semantic_search for dense passage retrieval, and chunk_read for accessing full document content.
The agent decides autonomously which tool to use, when to drill deeper, and when it has gathered enough evidence to answer.
Information in a corpus is naturally organized at multiple granularities, from fine-grained keywords to sentence-level semantics to full chunks.
Giving the model access to all these levels lets it spontaneously develop diverse retrieval strategies tailored to each task.
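For readers who want the shape of the idea in code, here is a minimal hypothetical sketch of an A-RAG-style loop. Only the three tool names (keyword_search, semantic_search, chunk_read) come from the thread; the toy corpus, the overlap-based stand-in for dense retrieval, and the stopping rule are illustrative assumptions, not the paper's implementation.

```python
# Toy corpus standing in for a chunked document store (illustrative only).
CORPUS = {
    "doc1": "Uber reported Q4 gross bookings of $54.1B, up 22% year over year.",
    "doc2": "Agentic RAG exposes retrieval tools directly to the language model.",
}

def keyword_search(query: str) -> list:
    """Exact lexical matching: ids of chunks containing every query term."""
    terms = query.lower().split()
    return [cid for cid, text in CORPUS.items()
            if all(t in text.lower() for t in terms)]

def semantic_search(query: str) -> list:
    """Stand-in for dense retrieval: rank chunks by word overlap with the query."""
    q = set(query.lower().split())
    scored = [(len(q & set(text.lower().split())), cid)
              for cid, text in CORPUS.items()]
    best = max(scored)
    return [best[1]] if best[0] > 0 else []

def chunk_read(chunk_id: str) -> str:
    """Return the full text of one chunk."""
    return CORPUS[chunk_id]

def agent_answer(question: str, max_steps: int = 5) -> str:
    """Step-limited loop in which the 'agent' picks a tool per step.
    A real system would let the LLM choose the tool and the stopping point."""
    evidence = []
    for _ in range(max_steps):
        if evidence:  # toy stopping rule: one chunk of evidence is enough
            break
        hits = keyword_search(question) or semantic_search(question)
        if not hits:
            break
        evidence.append(chunk_read(hits[0]))
    return evidence[0] if evidence else "no evidence found"
```

The fallback from keyword_search to semantic_search mimics the agent drilling down only when exact matching fails, and max_steps corresponds to the test-time compute budget discussed below.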
Results with GPT-5-mini are impressive. A-RAG achieves 94.5% on HotpotQA, 89.7% on 2Wiki, and 74.1% on MuSiQue, outperforming GraphRAG, HippoRAG2, LinearRAG, and every other baseline across all benchmarks.
Even A-RAG Naive, equipped with only a single embedding tool, beats most existing methods, demonstrating the raw power of the agentic paradigm itself.
Context efficiency is where it gets interesting. A-RAG Full retrieves only 2,737 tokens on HotpotQA compared to Naive RAG's 5,358 tokens, while achieving 13 points higher accuracy. The hierarchical design lets the model avoid loading irrelevant content, reading only what matters.
The framework also scales with test-time compute. Increasing max steps from 5 to 20 improves GPT-5-mini by ~8%. Scaling reasoning effort from minimal to high yields ~25% gains for both GPT-5-mini and GPT-5.
The future of RAG isn't better retrieval algorithms. It's better retrieval interfaces that let models use their reasoning capabilities to decide what to search, how to search, and when to stop.
Paper: https://t.co/FbZsV87npT
Learn to build effective AI Agents in our academy: https://t.co/LRnpZN7L4c
Fiscal.ai
Uber Q4 Results
Mobility Revenue +19%
Delivery Revenue +30%
Freight Revenue -0.5%
Monthly Active Consumers +18%
Total Trips +22%
$UBER https://t.co/WuXbO2mRIR
Javier Blas
RT @gbrew24: One notable difference in talks that, right now, are being set up along very similar lines to the talks last Spring.
@mashabani @SteveWitkoff @jaredkushner Kushner is joining - Barak Ravid
Mohammad Ali Shabani (@mashabani) on X
NEW: Direct talks not currently on agenda as Iran-US set to resume negotiations in Oman. Qatari PM may join, no word yet on other regional officials. Also potentially joining: Jared Kushner.
Apart from differences over substance, concern that indirect talks…
Benjamin Hernandez😎
Let's attack the open with these!
$ENPH $ELPW $SLAB $PHOE $EGHT $FEED $LTC $SUI $LITE $JKS $MGM $DBVT $AAPL $MSFT $META $AMZN
$ENPH is squeezing shorts. +33% move on revenue beat is rare. $ELPW is pure volatility for scalpers only.
DM me for the targets!
Solo trading is fine… but shared momentum hits different and feels way better. We cover live trends, key news drops, and my curated daily stock shortlist inside.
Join here 👉 https://t.co/71FIJIdBXe
Message “Hi” to hop in and see today’s list.
$RR $SOC $BMNR $BYND $PULM https://t.co/XjRSUjWbnr - Benjamin Hernandez😎
Fiscal.ai
Uber just crossed 200 million monthly active platform consumers.
Up 18% YoY
$UBER https://t.co/Hh4Tv4Hf7T
Dimitry Nakhla | Babylon Capital®
RT @DimitryNakhla: Feels like we may be approaching capitulation across a number of quality SaaS names after today’s climactic selling — the kind of price action that often coincides with forced de-risking, exhaustion, & indiscriminate selling rather than a change in long-term business quality.
Michael Fritzell (Asian Century Stocks)
RT @irbezek: Nice to see people are so focused on AI destroying other industries that liquor stocks are finally a defensive safe haven trade again. $BF $DEO https://t.co/tZfk0hluLO
Quiver Quantitative
BREAKING: Representative Cleo Fields just filed new stock trades.
He bought more stock in $IREN, worth up to $100K.
Full trade list up on Quiver. https://t.co/uTsGqzi5Ts
Fiscal.ai
Ozempic Revenue declined YoY for the first time ever.
$NVO: -18.2% https://t.co/XHNGKJAbtr