Jukan
Wasn't it already widely known that Micron wouldn't be able to supply HBM4? I don't understand why everyone is making such a big fuss about it.
And selling Micron because of this is foolish, too. Micron can earn even higher margins from server DDR5 than from HBM4.
tweet
Benjamin Hernandez
This isn't for the public. We've found an under-the-radar play institutions are hiding while they accumulate. I'm only sharing this with the inner circle.
Join the circle: https://t.co/71FIJId47G
Reply "SILENT" for the confidential ticker.
$BMNR $BYND $NB $ASST $PULM
tweet
Professional Pick: $CISS
Entry: $2.28 | Target: $3.42
Technicals are flawless. $CISS just broke out of a multi-week base on record volume. RSI is rising but not yet overbought.
One-line why: The 50% gain today is just the "ignition phase." The target is the $3.42 resistance https://t.co/XkSQnEtkah - Benjamin Hernandez
tweet
Dimitry Nakhla | Babylon Capital®
Chris Hohn on AI, disruption, and why real moats may matter more than ever:
"It's going to increase disruption in ways we can't even predict… but AI will increase productivity and lower the cost base of all companies.
And so if you have a company with these barriers to entry, it's going to be worth more."
___
Two important ideas embedded here:
1. Disruption risk is rising
AI lowers barriers to doing things, which means competitive pressure increases across many industries. Business models built on labor-intensive, easily replicable work are especially vulnerable.
2. Moats + AI can be a powerful combo
If a company already has durable barriers to entry, AI becomes a margin and productivity lever rather than an existential threat.
___
A particularly attractive hunting ground:
Businesses with multiple barriers to entry and large human-capital cost bases.
AI can structurally lower their cost structure while the moat protects pricing power.
___
Video: In Good Company | Norges Bank Investment Management (05/14/2025)
tweet
DAIR.AI
Multi-agent memory has a homogenization problem.
This work finds that role-aware latent memory that is learnable, compact, and framework-agnostic consistently outperforms handcrafted memory architectures while being substantially more efficient.
When multiple agents share the same memory pool, they end up with identical recollections regardless of their distinct roles. A coding agent, a planning agent, and a review agent all retrieve the same memory entries, ignoring functional differences that should shape what each agent remembers.
The second bottleneck is information overload. Multi-agent systems (MAS) inherently involve long interaction contexts, and storing fine-grained memory entries at multiple granularities amplifies this burden, overwhelming agents and obscuring critical decision signals.
This new research introduces LatentMem, a learnable multi-agent memory framework that customizes agent-specific memories in a token-efficient manner.
Instead of storing and retrieving text-based memory entries, LatentMem compresses raw interaction trajectories into compact latent representations conditioned on each agent's role profile. A lightweight memory composer synthesizes fixed-length latent memories that are injected directly into the agent's reasoning process.
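A minimal sketch of what such a role-conditioned composer could look like, assuming a transformer-style agent backbone. The module name, the cross-attention design, and every dimension below are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class MemoryComposer(nn.Module):
    """Hypothetical sketch: compress a raw interaction trajectory into a
    fixed number of latent memory vectors, conditioned on the agent's role."""

    def __init__(self, hidden_dim=2560, num_latents=8, num_heads=8):
        super().__init__()
        # Learnable query slots, one per latent memory token.
        self.latent_queries = nn.Parameter(torch.randn(num_latents, hidden_dim))
        # Projects the agent's role profile into a conditioning vector.
        self.role_proj = nn.Linear(hidden_dim, hidden_dim)
        # Latent slots attend over the trajectory's hidden states.
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.out_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, trajectory_states, role_embedding):
        # trajectory_states: (batch, seq_len, hidden_dim), encoded interaction log
        # role_embedding:    (batch, hidden_dim), the agent's role profile
        batch = trajectory_states.size(0)
        queries = self.latent_queries.unsqueeze(0).expand(batch, -1, -1)
        # Role conditioning is what makes each agent's memory view role-specific.
        queries = queries + self.role_proj(role_embedding).unsqueeze(1)
        latent_mem, _ = self.cross_attn(queries, trajectory_states, trajectory_states)
        # Fixed-length latent memory, injectable as soft tokens into the context.
        return self.out_proj(latent_mem)  # (batch, num_latents, hidden_dim)
```

Because the output is a small fixed-length tensor rather than retrieved text, injecting it costs a handful of soft tokens instead of pages of recalled context.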
To train the memory composer, they introduce Latent Memory Policy Optimization (LMPO), which propagates task-level optimization signals through latent memories to encourage compact, high-utility representations. This exploits the differentiability of latent memory to enable gradient backpropagation through the entire memory pipeline.
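The LMPO objective itself isn't spelled out in the post, but the property it relies on, a task-level loss differentiating through the latent memory into the composer, can be shown with a generic training step. This assumes the MemoryComposer sketch above and a Hugging Face-style causal LM; all names are hypothetical:

```python
# Generic sketch of gradient flow through latent memory (not the LMPO algorithm).
composer = MemoryComposer()
optimizer = torch.optim.AdamW(composer.parameters(), lr=1e-4)

def training_step(agent_model, trajectory_states, role_embedding, batch):
    latent_mem = composer(trajectory_states, role_embedding)
    # Prepend the latent memory as soft prefix embeddings; the base model can
    # stay frozen so the task loss only updates the composer.
    inputs_embeds = torch.cat([latent_mem, batch["input_embeds"]], dim=1)
    # Mask the prefix positions out of the loss with ignore_index (-100).
    prefix_ignore = torch.full(latent_mem.shape[:2], -100, dtype=torch.long)
    labels = torch.cat([prefix_ignore, batch["labels"]], dim=1)
    outputs = agent_model(inputs_embeds=inputs_embeds, labels=labels)
    loss = outputs.loss  # task-level signal
    loss.backward()      # gradient flows through latent_mem into the composer
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```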
Across six benchmarks and four MAS frameworks with Qwen3-4B, LatentMem achieves up to 16.20% improvement on TriviaQA and 19.36% on PopQA over vanilla settings. On code generation with KodCode, it delivers an 8.40-9.55% gain depending on the framework. It consistently outperforms eight existing memory architectures, including MetaGPT, Voyager, JoyAgent, and G-Memory.
The efficiency gains matter too: 50% fewer tokens and inference time reduced to roughly two-thirds compared to mainstream memory designs. On out-of-domain tasks, LatentMem still generalizes well, with 7.10% improvement on PDDL and 7.90% on unseen MAS frameworks like CAMEL.
Paper: https://t.co/VfmG0DYIf8
Learn to build effective AI agents in our academy: https://t.co/PE5l0X8fFq
tweet
DAIR.AI
RT @omarsar0: Another banger by the Anthropic Engineering team.
They mass-parallelized 16 Claude instances to build a full C compiler from scratch.
100,000 lines of Rust. Compiles the Linux kernel. No active human supervision.
The wildest part isn't even the compiler itself. It's that they built a system where agents autonomously pick up tasks, lock files to avoid conflicts, and git sync with each other like a remote dev team.
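The harness itself isn't shown in the thread, but the claim-a-task / lock-files / git-sync pattern it describes is easy to sketch. Everything below (the lock directory, the AGENT_ID variable, the upstream branch) is a hypothetical illustration, not Anthropic's code:

```python
import os
import subprocess
from pathlib import Path

LOCK_DIR = Path("locks")  # hypothetical lock directory, tracked in the repo

def try_claim(task_id: str) -> bool:
    """Claim a task by creating a lock file. O_EXCL makes the local claim
    atomic; pushing the commit publishes it, and a rejected push means a
    peer agent claimed the task first."""
    LOCK_DIR.mkdir(exist_ok=True)
    lock_path = LOCK_DIR / f"{task_id}.lock"
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # already claimed locally
    with os.fdopen(fd, "w") as f:
        f.write(os.environ.get("AGENT_ID", "agent-unknown"))
    subprocess.run(["git", "add", str(lock_path)], check=True)
    subprocess.run(["git", "commit", "-m", f"claim {task_id}"], check=True)
    if subprocess.run(["git", "push"]).returncode != 0:
        # Lost the race: drop the local claim and resync with the team.
        subprocess.run(["git", "reset", "--hard", "@{upstream}"], check=True)
        return False
    return True

def sync_with_team():
    """Pull peers' commits (and their lock files) before picking a task."""
    subprocess.run(["git", "pull", "--rebase"], check=True)
```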
Looks inspired by Ralph Loop.
2 billion input tokens, 140 million output tokens, 2 weeks, and $20k in total cost.
If you're still writing code one file at a time in a single session, you are massively underestimating where this is headed.
Agent swarms that coordinate on real codebases aren't a thing of the future anymore. They're a right now thing.
2026 is shaping up to be the year of agent harnesses. And the cool part is that you can go and build your agent team with Claude Code now.
tweet
Bourbon Capital
Howard Marks: What was the most important event in the financial and investment world in the last 50 years?
Howard Marks: Most people would say Lehman Brothers, 2008, the tech bubble, Black Monday… but I believe it was the decline in interest rates.
Declining interest rates are extremely beneficial for asset ownership…
tweet
Oaktree Capital Management (Howard Marks) 13F as of Sep 2025 https://t.co/WPSyQWpsRV - Bourbon Insider Research
tweet
Fiscal.ai
The Hyperscalers now have more than $1 trillion in total cloud commitments.
Google Cloud: $243B (+161%)
AWS: $244B (+38%)
Microsoft Azure: $631B (+108%)
$GOOGL $AMZN $MSFT https://t.co/QLEZkSFvE7
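A quick check of the headline figure from the three commitments quoted above:

$243B + $244B + $631B = $1,118B ≈ $1.12T

which clears the $1 trillion mark.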
tweet