Offshore
Photo
God of Prompt
RT @godofprompt: This guy literally shares how openclaw (clawdbot) works https://t.co/YenkFyXnKo
tweet
Moon Dev
i dont think clawdbot is hype anymore
im a data dawg actually using it
and shes actually finding me profitable strategies
that i can actually deploy into bots
or she can...
am i tripping or is clawdbot actually revolutionary? https://t.co/crG4rXTY1x
tweet
The Transcript
RT @TheTranscript_: Thursday's earnings deck includes Amazon:
Before Open: $COP $BMY $CMI $EL $B $CAH $ENR $CI $PTON $OWL $SHEL $ROK $LIN
After Close: $AMZN $IREN $RDDT $MSTR $RBLX $FTNT $ARW $BE $CLSK $DLR $MCHP $DOCS $TEAM https://t.co/r5p6hddA50
tweet
Dimitry Nakhla | Babylon Capital®
$ICE Q4 2025 Report 🗓️
✅ REV: $2.50B (+8% YoY)
✅ EPS: $2.82 (+13% YoY)
💵 FY FCF $4.19B +16% YoY
💰20th consecutive year of record revenues https://t.co/KHdjZeuUUA
tweet
Benjamin Hernandez😎
Missing one big move hurts—missing two changes traders.
If you watched $ELPW hit +85% yesterday, one premarket setup today is giving the same ignition signs.
Get the setup:✅ https://t.co/71FIJIdBXe
Don’t let this become another regret.
$OPEN $ASTS $BYND
tweet
$ELPW Speculation Pick
Grab $ELPW ~$1.84
$ELPW is the "underdog" bet in the battery space. Recent reverse split has cleaned up the chart.
One-line why: High-conviction play on CEO Xiaodan Liu’s survival strategy and global Nasdaq presence. https://t.co/MF7Tyd785w - Benjamin Hernandez😎
tweet
Wasteland Capital
Tech has now entered a liquidation spiral where people (eg hedge funds) are forced to sell good, cheap, accelerating assets (eg much of semis) to cover their losses in expensive, decelerating, high P/E sh*t (eg SaaS / AI victims).
Not sure how long this will take to play out.
tweet
Brady Long
RT @thisguyknowsai: Every day I open up X and have the same thought within 5 mins.
“Bro you don’t need AI. You just need to chill out.”
tweet
DAIR.AI
RT @dair_ai: We are just scratching the surface of agentic RAG systems.
Current RAG systems don't let the model think about retrieval.
Retrieval is still mostly treated as a static step.
So the way it currently works is that RAG retrieves passages in one shot, concatenates them into context, and hopes the model figures it out.
More sophisticated methods predefine workflows that the model must follow step-by-step.
But neither approach lets the model decide how to search.
This new research introduces A-RAG, an agentic RAG framework that exposes hierarchical retrieval interfaces directly to the model, turning it into an active participant in the retrieval process.
Instead of one-shot retrieval, A-RAG gives the agent three tools at different granularities: keyword_search for exact lexical matching, semantic_search for dense passage retrieval, and chunk_read for accessing full document content.
The agent decides autonomously which tool to use, when to drill deeper, and when it has gathered enough evidence to answer.
Information in a corpus is naturally organized at multiple granularities, from fine-grained keywords to sentence-level semantics to full chunks.
Giving the model access to all these levels lets it spontaneously develop diverse retrieval strategies tailored to each task.
Results with GPT-5-mini are impressive. A-RAG achieves 94.5% on HotpotQA, 89.7% on 2Wiki, and 74.1% on MuSiQue, outperforming GraphRAG, HippoRAG2, LinearRAG, and every other baseline across all benchmarks.
Even A-RAG Naive, equipped with only a single embedding tool, beats most existing methods, demonstrating the raw power of the agentic paradigm itself.
Context efficiency is where it gets interesting. A-RAG Full retrieves only 2,737 tokens on HotpotQA compared to Naive RAG's 5,358 tokens, while achieving 13 points higher accuracy. The hierarchical design lets the model avoid loading irrelevant content, reading only what matters.
The framework also scales with test-time compute. Increasing max steps from 5 to 20 improves GPT-5-mini by ~8%. Scaling reasoning effort from minimal to high yields ~25% gains for both GPT-5-mini and GPT-5.
The future of RAG isn't better retrieval algorithms. It's better retrieval interfaces that let models use their reasoning capabilities to decide what to search, how to search, and when to stop.
Paper: https://t.co/FbZsV87npT
Learn to build effective AI Agents in our academy: https://t.co/LRnpZN7L4c
tweet
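The agent loop described in the thread above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the three tool names (keyword_search, semantic_search, chunk_read) and the max_steps budget come from the thread, while the toy corpus, the word-overlap scorer standing in for dense retrieval, and the scripted controller standing in for an LLM's tool-choice are all assumptions.

```python
# Illustrative A-RAG-style sketch. The tool names follow the thread;
# the corpus, scorer, and controller below are simplified stand-ins.

CORPUS = {
    "doc1": {"keywords": ["hotpotqa", "benchmark"],
             "text": "HotpotQA is a multi-hop question answering benchmark."},
    "doc2": {"keywords": ["retrieval", "rag"],
             "text": "RAG concatenates retrieved passages into the model context."},
}

def keyword_search(term):
    """Exact lexical matching against per-chunk keyword lists."""
    return [cid for cid, c in CORPUS.items() if term.lower() in c["keywords"]]

def semantic_search(query, k=1):
    """Stand-in for dense passage retrieval: rank chunks by word overlap."""
    q = set(query.lower().split())
    ranked = sorted(CORPUS, key=lambda cid:
                    -len(q & set(CORPUS[cid]["text"].lower().split())))
    return ranked[:k]

def chunk_read(chunk_id):
    """Finest-granularity tool: return the full text of one chunk."""
    return CORPUS[chunk_id]["text"]

def answer(question, max_steps=5):
    """Scripted controller: try exact match first, fall back to semantic
    search, read chunks until the step budget runs out. In A-RAG the
    model itself decides which tool to call at each step."""
    hits = keyword_search(question.split()[0]) or semantic_search(question)
    evidence = []
    for step, cid in enumerate(hits):
        if step >= max_steps:
            break
        evidence.append(chunk_read(cid))
    return " ".join(evidence)
```

Calling `answer("HotpotQA benchmark?")` drills from a keyword hit down to a chunk read without ever touching the irrelevant chunk, which is the context-efficiency point the thread makes.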
Michael Fritzell (Asian Century Stocks)
RT @AlecStapp: Narrative violation:
The world is becoming less unequal. https://t.co/1t2FbxX2fd
tweet
The world is more equal than you think. https://t.co/Gc264SIPtx - Steven Pinker
tweet