DAIR.AI
Multi-agent memory has a homogenization problem.
This work finds that role-aware latent memory that is learnable, compact, and framework-agnostic consistently outperforms handcrafted memory architectures while being substantially more efficient.
The first bottleneck is homogenization: when multiple agents share the same memory pool, they end up with identical recollections regardless of their distinct roles. A coding agent, a planning agent, and a review agent all retrieve the same memory entries, ignoring functional differences that should shape what each agent remembers.
The second bottleneck is information overload. Multi-agent systems (MAS) inherently involve long interaction contexts, and storing fine-grained memory entries at multiple granularities amplifies this burden, overwhelming agents and obscuring critical decision signals.
This new research introduces LatentMem, a learnable multi-agent memory framework that customizes agent-specific memories in a token-efficient manner.
Instead of storing and retrieving text-based memory entries, LatentMem compresses raw interaction trajectories into compact latent representations conditioned on each agent's role profile. A lightweight memory composer synthesizes fixed-length latent memories that are injected directly into the agent's reasoning process.
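To make the mechanism concrete, here is a minimal sketch (PyTorch) of what a role-conditioned memory composer could look like: a small set of learnable memory slots cross-attends over the trajectory's hidden states, conditioned on the agent's role embedding, and emits a fixed-length latent memory that gets injected as soft tokens. The class name, dimensions, and layer layout are illustrative assumptions, not the paper's implementation.

    import torch
    import torch.nn as nn

    class MemoryComposer(nn.Module):
        """Hypothetical role-conditioned composer: trajectory states -> K latent memory vectors."""
        def __init__(self, d_model=1024, n_slots=16, n_heads=8):
            super().__init__()
            # K learnable query slots that become the fixed-length latent memory
            self.slots = nn.Parameter(torch.randn(n_slots, d_model) * 0.02)
            self.role_proj = nn.Linear(d_model, d_model)  # condition slots on the role profile
            self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                     nn.Linear(4 * d_model, d_model))

        def forward(self, traj_states, role_embedding):
            # traj_states: (B, T, d) hidden states of the raw interaction trajectory
            # role_embedding: (B, d) embedding of the agent's role profile
            B = traj_states.size(0)
            queries = self.slots.unsqueeze(0).expand(B, -1, -1) \
                      + self.role_proj(role_embedding).unsqueeze(1)
            latent, _ = self.cross_attn(queries, traj_states, traj_states)
            # (B, n_slots, d): fixed-length latent memory, injected into the agent's
            # context as soft tokens instead of retrieved text entries
            return latent + self.ffn(latent)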
To train the memory composer, they introduce Latent Memory Policy Optimization (LMPO), which propagates task-level optimization signals through latent memories to encourage compact, high-utility representations. This exploits the differentiability of latent memory to enable gradient backpropagation through the entire memory pipeline.
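The thread doesn't give LMPO's exact objective, so here is a hedged sketch of what a training step could look like under a REINFORCE-style surrogate: the task reward weights the log-likelihood of the agent's output, and because the injected latent memory is a differentiable tensor, the gradient reaches the composer through it. `agent_lm.sequence_logprob` is a hypothetical helper, and the loss form and baseline are assumptions, not the paper's formulation.

    def lmpo_step(composer, agent_lm, batch, optimizer, baseline=0.0):
        # Compose role-aware latent memory from the trajectory (differentiable).
        latent_mem = composer(batch["traj_states"], batch["role_embedding"])
        # Agent conditions on the latent memory (soft prefix) plus the task prompt;
        # sequence_logprob is a hypothetical helper returning per-example log-likelihood.
        logprob = agent_lm.sequence_logprob(prefix_embeds=latent_mem,
                                            prompt_ids=batch["prompt_ids"],
                                            target_ids=batch["target_ids"])
        advantage = batch["task_reward"] - baseline
        loss = -(advantage * logprob).mean()  # push composer toward high-utility memories
        optimizer.zero_grad()
        loss.backward()                       # gradients flow back through latent_mem into the composer
        optimizer.step()
        return loss.item()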
Across six benchmarks and four MAS frameworks with Qwen3-4B, LatentMem achieves up to 16.20% improvement on TriviaQA and 19.36% on PopQA over vanilla settings. On code generation with KodCode, it delivers an 8.40-9.55% gain depending on the framework. It consistently outperforms eight existing memory architectures, including MetaGPT, Voyager, JoyAgent, and G-Memory.
The efficiency gains matter too: 50% fewer tokens, with inference time reduced to roughly two-thirds of mainstream memory designs. On out-of-domain tasks, LatentMem still generalizes well, with a 7.10% improvement on PDDL and 7.90% on unseen MAS frameworks like CAMEL.
Paper: https://t.co/VfmG0DYIf8
Learn to build effective AI agents in our academy: https://t.co/PE5l0X8fFq
tweet
DAIR.AI
RT @omarsar0: Another banger by the Anthropic Engineering team.
They mass-parallelized 16 Claude instances to build a full C compiler from scratch.
100,000 lines of Rust. Compiles the Linux kernel. No active human supervision.
The wildest part isn't even the compiler itself. It's that they built a system where agents autonomously pick up tasks, lock files to avoid conflicts, and git sync with each other like a remote dev team.
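The post doesn't share the harness internals, but the coordination pattern it describes (claim a task, lock it, sync via git) can be sketched with atomic lock files. This is purely illustrative, not Anthropic's actual implementation; the function names and lock directory are made up.

    import os
    import subprocess

    def try_claim(task_id: str, agent_id: str, lock_dir: str = ".locks") -> bool:
        """Claim a task by atomically creating its lock file; only one agent can win."""
        os.makedirs(lock_dir, exist_ok=True)
        lock_path = os.path.join(lock_dir, f"{task_id}.lock")
        try:
            # O_CREAT | O_EXCL makes creation atomic: it fails if the file exists.
            fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        except FileExistsError:
            return False  # another agent already holds this task
        with os.fdopen(fd, "w") as f:
            f.write(agent_id)
        return True

    def finish_and_release(task_id: str, message: str, lock_dir: str = ".locks") -> None:
        """Sync the shared repo like a remote dev team, then drop the lock."""
        subprocess.run(["git", "pull", "--rebase"], check=True)
        subprocess.run(["git", "add", "-A"], check=True)
        subprocess.run(["git", "commit", "-m", message], check=True)
        subprocess.run(["git", "push"], check=True)
        os.remove(os.path.join(lock_dir, f"{task_id}.lock"))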
Looks inspired by Ralph Loop.
2 billion input tokens, 140 million output tokens, 2 weeks, and $20k in total cost.
If you're still writing code one file at a time in a single session, you are massively underestimating where this is headed.
Agent swarms that coordinate on real codebases aren't a thing of the future anymore. They're a right now thing.
2026 is shaping up to be the year of agent harnesses. And the cool part is that you can go and build your agent team with Claude Code now.
tweet
Bourbon Capital
Howard Marks: What was the most important event in the financial and investment world in the last 50 years?
Howard Marks: Most people would say Lehman Brothers in 2008, the tech bubble, or Black Monday... but I believe it was the decline in interest rates.
Declining interest rates are extremely beneficial for asset ownership.
Oaktree Capital Management (Howard Marks) 13F as of Sep 2025 https://t.co/WPSyQWpsRV - Bourbon Insider Research
tweet
Fiscal.ai
The Hyperscalers now have more than $1 trillion in total cloud commitments.
Google Cloud: $243B (+161%)
AWS: $244B (+38%)
Microsoft Azure: $631B (+108%)
$GOOGL $AMZN $MSFT https://t.co/QLEZkSFvE7
tweet
Javier Blas
RT @badralbusaidi: Very serious talks mediating between Iran and the US in Muscat today.
It was useful to clarify both Iranian and American thinking and identify areas for possible progress. We aim to reconvene in due course, with the results to be considered carefully in Tehran and Washington. https://t.co/OWctzf2CXA
tweet
Moon Dev
$54,000,000 BTC long just entered 6 minutes ago
liquidation point at $67,348
will he get smoked or make $100m? https://t.co/mLSOMd7ian
tweet