DAIR.AI
RT @omarsar0: This Composio connect-apps plugin for Claude Code is 🔥
It's the easiest way to instantly connect Claude Code to 500+ apps like Gmail, Slack, GitHub, and Linear.
You really don't need to be setting up MCP servers one by one.
I use it a lot, and it has saved me a ton of time. https://t.co/9D2vPlCLow
Benjamin Hernandez😎
Hong Kong has protested against Panama’s court ruling, which struck down the contract granted to Li Ka-shing’s CK Hutchison to operate two ports near the country’s strategic canal https://t.co/0xDbAVJmrO
DAIR.AI
Memory is the bottleneck for LLM agents.
Fixed memory pipelines waste compute on irrelevant information while potentially discarding what a specific query actually needs.
This new research introduces BudgetMem, a runtime agent memory framework that extracts memory on-demand with explicit, controllable performance-cost trade-offs.
As agents scale to longer interactions and more complex tasks, memory cost becomes a first-class concern. BudgetMem provides a systematic framework for explicit performance-cost control in runtime agent memory.
Instead of treating memory as a monolithic pipeline, BudgetMem structures extraction into modular stages, each offered in three budget tiers (Low/Mid/High).
A lightweight neural router, trained with reinforcement learning, selects the right tier per module based on the current query and intermediate context.
They study three complementary strategies for realizing budget tiers: implementation tiering (varying method complexity), reasoning tiering (varying inference behavior like direct vs. reflection), and capacity tiering (varying model size).
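To make the routing idea concrete, here is a minimal Python sketch of per-module tier selection under a cost budget. The module names, per-tier dollar costs, and the word-count heuristic are hypothetical stand-ins for this illustration; the paper's actual router is a small learned policy trained with reinforcement learning, not a hand-written rule.

```python
# Hypothetical sketch of BudgetMem-style tiered routing. Module names,
# per-tier costs, and the scoring heuristic are illustrative stand-ins,
# not values from the paper.
TIERS = ("low", "mid", "high")

# Assumed per-call cost (USD) of each memory-extraction module at each tier.
COSTS = {
    "segment":  {"low": 0.001, "mid": 0.004, "high": 0.012},
    "extract":  {"low": 0.002, "mid": 0.008, "high": 0.030},
    "retrieve": {"low": 0.001, "mid": 0.005, "high": 0.020},
}

def route(query: str, budget: float) -> dict:
    """Pick one tier per module for this query, staying under the budget."""
    plan, spent = {}, 0.0
    for module, tier_costs in COSTS.items():
        # Toy stand-in for the learned policy: longer, multi-hop-looking
        # queries get pushed toward higher tiers.
        tier = TIERS[min(2, len(query.split()) // 8)]
        # Greedily downgrade if the chosen tier would exceed the budget.
        while spent + tier_costs[tier] > budget and tier != "low":
            tier = TIERS[TIERS.index(tier) - 1]
        plan[module] = tier
        spent += tier_costs[tier]
    return plan

print(route("when did the user last mention rescheduling their Tokyo flight?", budget=0.02))
# -> {'segment': 'mid', 'extract': 'mid', 'retrieve': 'mid'}
```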
On LongMemEval with LLaMA-3.3-70B, BudgetMem-CAP achieves a Judge score of 60.50, beating the strongest baseline, LightMem (48.51), by a wide margin. On HotpotQA with Qwen3-Next-80B, BudgetMem-CAP scores 72.08 at a cost of just $0.22, while BudgetMem-REA reaches 70.83 at an even lower $0.17. The trained router also transfers across model backbones without retraining.
The analysis reveals that implementation and capacity tiering span broader cost ranges for exploring budget extremes, while reasoning tiering acts as a fine-grained quality knob within a tighter cost band.
Paper: https://t.co/qkKmawVNrk
Learn to build effective AI agents in our academy: https://t.co/LRnpZN7deE
The Transcript
RT @TheTranscript_: Blue Owl only sees pristine credit metrics in tech even as markets flash warnings:
"“Tech lending has worked, continues to work...We don’t have red flags. In point of fact, we don’t have yellow flags. We actually have largely green flags."
$OWL https://t.co/sHtqAPpuEp
DAIR.AI
RT @omarsar0: I think one of the most underappreciated findings in AI engineering is what this paper calls the "Grep Tax."
First, they ran nearly 10,000 experiments testing how agents handle structured data, and the headline result is that format barely matters.
But here's the weird finding: a compact, token-saving format they tested (TOON) actually consumed *up to 740% more tokens* at scale because models didn't recognize the syntax and kept cycling through search patterns from formats they already knew.
It's one of the reasons my preferred formats are XML and Markdown. LLMs know those really well.
The models have preferences baked into their training data, and fighting those preferences doesn't save you money. It costs you.
The other finding worth sitting with: the same agentic architecture that improves frontier model performance actively *hurts* open-source models. It seems that the universal best-practices guide for AI engineering may not exist.
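As a quick illustration of that takeaway (the toy record and rendering code below are mine, not from the paper), here is the same data emitted in the two formats models have seen most, rather than in a novel compact syntax:

```python
# Illustrative only: one toy record rendered as XML and as a Markdown
# table, the two formats the author says LLMs handle most reliably.
record = {"city": "Lagos", "population_m": 16.5, "coastal": True}

xml = "<row>" + "".join(f"<{k}>{v}</{k}>" for k, v in record.items()) + "</row>"

md = (
    "| " + " | ".join(record) + " |\n"
    "| " + " | ".join(["---"] * len(record)) + " |\n"
    "| " + " | ".join(str(v) for v in record.values()) + " |"
)

print(xml)  # <row><city>Lagos</city><population_m>16.5</population_m><coastal>True</coastal></row>
print(md)
```

The extra tokens XML and Markdown spend on tags and pipes are, per the paper's finding, cheaper than the retries a model burns on syntax it doesn't recognize.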
Pristine Capital
RT @realpristinecap: 1-Week Index ETF Performance via Pristine Capital 📊
$ARKK -5.96% (Innovation)
$QQQ -1.97% (Nasdaq)
$SPY -0.20% (S&P 500)
$TLT +0.86% (20+ Yr Treasuries)
$IWM +2.07% (Russell 2000)
$DIA +2.45% (Dow Jones) https://t.co/y2iiwQKA80
Fiscal.ai
Eli Lilly v. Novo Nordisk
$LLY Mounjaro: $7.4B, +110% YoY
$NVO Ozempic: $5B, +7% YoY
$LLY Zepbound: $4.3B, +123% YoY
$NVO Wegovy: $3.4B, +25% YoY
Eli Lilly is pulling ahead in the weight-loss drug category. https://t.co/pqOIWkp2s5
Benjamin Hernandez😎
A powerful finish to a profitable week for all our members.
We locked in +89.58% on $SMX. The wins continued with $SLNG, $CNI, $WHLR, $FLXS, $BNAI, $DRCT, and $LIF.
Momentum is on our side for next week.
Relax this Saturday.
Most losses come from being late.
By the time a tweet is seen, the move is often gone. I share explosive stocks and real-time breakout alerts on WhatsApp while momentum is still building.
Stop chasing✅ https://t.co/71FIJIdBXe
Being early changes the game
$PLTR $SOFI $AMD $OPEN
Javier Blas
RT @JavierBlas: COLUMN: Buyers of Russian and Iranian oil are switching (to a point) to non-sanctioned alternatives. That has big implications for the price of crude.
@Opinion
https://t.co/pBuo6AA3WP