God of Prompt
instead of scrolling x, study this free course packed with INSANE value
Jason Liu just open sourced his entire paid RAG course and consulting archive. all of it. free.
for context: this is the creator of Instructor (6M+ monthly downloads, cited by OpenAI as inspiration for their structured outputs feature). former staff ML engineer at Stitch Fix. ex-Meta. a16z scout. his RAG course on Maven had 400+ engineers enrolled.
but here's what's actually interesting.
look at the highlights he chose to summarize his own course:
> product mindset over one-off implementation
> measurement, feedback loops, improvement cycles first
> synthetic eval data to break the cold start
> feedback UX that actually works
> specialized retrieval and routing instead of one-size-fits-all search
notice what's missing?
no mention of vector databases. no embedding model comparisons. no chunk size optimization. no retrieval framework shootouts.
the guy who mass-taught production RAG to hundreds of engineers is telling you the hard part was never the retrieval. it was the product thinking around it.
measurement. feedback loops. knowing what to improve and how to tell if you improved it.
everyone's out here debating pgvector vs pinecone vs weaviate. meanwhile the most credible RAG practitioner in the space just told you the answer was never in the vector store.
it was in the feedback UX.
567 Labs is done. the content lives on. go read it before you build another RAG pipeline without an eval framework. - Robert Youssef
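To make the tweet's "synthetic eval data to break the cold start" point concrete, here is a minimal sketch (my own illustration, not material from the course; the llm and retrieve callables are hypothetical stand-ins for whatever generation and retrieval stack you use):

```python
# Minimal sketch of synthetic eval data for RAG, illustrative only.
# llm: callable that takes a prompt and returns text (hypothetical).
# retrieve: callable (query, k) -> list of chunk ids (hypothetical).

import random

def make_synthetic_evals(chunks, llm, n=50):
    """Sample chunks and have an LLM write one question each chunk
    answers; the source chunk id doubles as the relevance label."""
    sampled = random.sample(list(chunks.items()), min(n, len(chunks)))
    return [
        {"question": llm(f"Write one question answered by:\n{text}"),
         "relevant_id": chunk_id}
        for chunk_id, text in sampled
    ]

def recall_at_k(evals, retrieve, k=5):
    """The measurement half of the feedback loop: how often does the
    source chunk come back in the top-k retrieved results?"""
    hits = sum(e["relevant_id"] in retrieve(e["question"], k=k)
               for e in evals)
    return hits / len(evals)
```

Re-running recall_at_k after every change to chunking, embeddings, or routing is the measurement-and-improvement cycle the highlights describe: without a number like this you can't tell whether a tweak actually helped.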
Michael Fritzell (Asian Century Stocks)
RT @CapitalValor: GOLD MINERS
Reality is we've been lucky riding the wave, partly fuelled by Chinese ultra-speculative money
6x-9x cash flow at $5k/oz gold is not cheap... I'd perhaps pay those multiples on cash flows assuming $3.5k/oz.
I am out of the producing miner space and give thanks.
Massive wealth generator over past 18 months. https://t.co/K3U7x5aQk4
DAIR.AI
RT @omarsar0: NEW research from Meta Superintelligence Labs.
It uses a clever strategy-auction framework to improve self-improving agents on complex tasks.
Small agents aren't always enough.
On the simplest tasks, a 4B parameter agent attains 87% of a 32B agent's performance. But on the most complex tasks, that relative performance drops to just 21%.
The default assumption today is that you either use the biggest model for everything or route tasks with a trained classifier.
But trained routers degrade as task difficulty increases, and non-predictive cascades become prohibitively expensive for agentic workloads.
This new research introduces SALE (Strategy Auctions for Workload Efficiency), a framework inspired by freelancer marketplaces. Instead of predicting which model to use from a task description alone, agents bid with short strategic plans that are scored by a systematic cost-value mechanism.
How does the auction work? Each candidate agent proposes a strategic solution plan. A peer jury scores plans by predicted value. A heuristic cost predictor estimates execution cost. The agent with the best cost-value trade-off wins and executes its plan.
The self-improvement mechanism is where it gets interesting. After each auction, all proposed strategies are stored in a shared memory bank. Cheaper agents that lost can learn from winning strategies and submit refined bids, analogous to freelancers upskilling over time.
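Going only by the mechanism described above, one auction round might look like the toy sketch below (my paraphrase, not the paper's code; propose_plan, jury_score, and predict_cost are hypothetical stand-ins, and value minus cost stands in for the paper's cost-value mechanism):

```python
# Toy sketch of one SALE-style auction round, paraphrased from the
# thread above; not the paper's actual implementation.

def auction_round(agents, task, jury_score, predict_cost, strategy_bank):
    """agents: objects with a propose_plan(task) method (hypothetical).
    jury_score: peer jury's predicted value of a plan (hypothetical).
    predict_cost: heuristic estimate of execution cost (hypothetical).
    strategy_bank: shared memory of past strategies."""
    # 1. Every candidate agent bids with a short strategic plan.
    bids = [(agent, agent.propose_plan(task)) for agent in agents]
    # 2. Score each bid: peer-judged value minus predicted cost,
    #    a stand-in for the paper's cost-value trade-off.
    scored = [(agent, plan,
               jury_score(plan, task) - predict_cost(agent, plan))
              for agent, plan in bids]
    # 3. The best trade-off wins and executes; all plans go to the
    #    shared bank so cheaper losing agents can refine future bids
    #    against winning strategies.
    winner, winning_plan, _ = max(scored, key=lambda s: s[2])
    strategy_bank.extend(plan for _, plan, _ in scored)
    return winner, winning_plan
```

The shared strategy_bank is the self-improving part: losing bids get refined against stored winning plans instead of being thrown away.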
On deep search tasks, SALE exceeds the best single agent's pass@1 by 3.5 points while reducing cost by 35%. On coding tasks, it improves pass@1 by 2.7 points at 25% lower cost. Across both domains, SALE reduces reliance on the largest agent by 53%.
Existing routers like WTP and FrugalGPT either underperform the largest agent or fail to reduce cost. FrugalGPT's costs actually increase on complex coding tasks, reaching 0.61 dollars per million tokens versus the best agent's 0.36 dollars.
Market-inspired coordination mechanisms that organize heterogeneous agents into adaptive ecosystems can systematically outperform both single large models and trained routing approaches.
Paper: https://t.co/UY8C5cmfxK
Learn to build effective AI Agents in our academy: https://t.co/1e8RZKs4uX
DAIR.AI
// Agent Primitives //
This is a really interesting take on building effective multi-agent systems.
Multi-agent systems get more complex as tasks get harder. More roles, more prompts, more bespoke interaction patterns. However, the core computation patterns keep repeating across every system: review, vote, plan, execute.
But nobody treats these patterns as reusable building blocks.
This new research introduces Agent Primitives, a set of latent building blocks for constructing effective multi-agent systems.
Inspired by how neural networks are built from reusable components like residual blocks and attention heads, the researchers decompose multi-agent architectures into three recurring primitives: Review, Voting and Selection, and Planning and Execution.
What makes these primitives different? Agents inside each primitive communicate via KV-cache rather than natural language. This avoids the information degradation that happens when agents pass long text messages back and forth across multi-stage interactions.
An Organizer agent selects and composes primitives for each query, guided by a lightweight knowledge pool of previously successful configurations.
No manual system design required.
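As a rough illustration of the composition idea (not the paper's code: the actual primitives exchange KV-cache states between agents, which the plain text-passing functions below can't capture), the three primitives and the Organizer could be sketched like this:

```python
# Toy sketch of the three primitives as reusable building blocks.
# Illustrative only: the paper's agents communicate via KV-cache,
# not the plain strings used here.

def review(draft, critic):
    """Review primitive: a critic agent refines a draft."""
    return critic(f"Critique and improve:\n{draft}")

def vote_and_select(candidates, judge):
    """Voting and Selection primitive: keep the highest-scored candidate."""
    return max(candidates, key=judge)

def plan_and_execute(task, planner, executor):
    """Planning and Execution primitive: plan first, then act."""
    return executor(planner(task))

def organize(task_label, knowledge_pool):
    """Organizer: reuse a primitive pipeline that worked on similar past
    queries (knowledge_pool is a hypothetical label -> pipeline map)."""
    return knowledge_pool.get(task_label, [plan_and_execute])
```

A critic, judge, planner, and executor here would each be LLM calls; the point is that the same three blocks get rewired per query instead of designing a new architecture each time.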
The results across eight benchmarks spanning math, code generation, and QA with five open-source LLMs:
> Primitives-based MAS improve average accuracy by 12.0-16.5% over single-agent baselines
> On GPQA-Diamond, the improvement is striking: 53.2% versus the 33.6-40.2% range of prior methods like AgentVerse, DyLAN, and MAS-GPT
In terms of efficiency, token usage and inference latency drop by approximately 3-4x compared to text-based MAS, while incurring only 1.3-1.6x overhead relative to single-agent inference.
Instead of designing task-specific multi-agent architectures from scratch, Agent Primitives show that a small set of reusable computation patterns with latent communication can match or exceed custom systems while being dramatically more efficient.
Paper: https://t.co/fxEL6g0x4O
Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
The Transcript
RT @TheTranscript_: $ARM: -8% AH
CEO: "Arm delivered a record revenue quarter as demand for AI computing on our platform continues to accelerate. Record royalty results in the third quarter reflect the growing scale of our ecosystem, as customers design the Arm compute platform into next-generation systems across cloud, edge, and physical environments to deliver high-performance, power-efficient AI. The fundamentals of the Arm business have never been stronger."
Lumida Wealth Management
NVIDIA AND DASSAULT JUST ANNOUNCED THEIR BIGGEST PARTNERSHIP EVER
Jensen: "This is the largest collaboration our two companies have ever had in over a quarter century.
Dassault is integrating NVIDIA CUDA-X acceleration libraries, NVIDIA AI, and NVIDIA Omniverse into their platform.
This represents our body of work over 25 years. Now we're fusing it so you can work at a scale 100 times, 1000 times, and very soon a million times greater than before.
What used to be pre-rendered or offline simulations will now be real-time digital twins."
This is the infrastructure layer for the next generation of product design and engineering.
Javier Blas
RT @michellelprice: KYIV, Ukraine (AP) - US and Russia agree to reestablish military-to-military dialogue after Ukraine talks.
The Transcript
Peloton double miss
CEO: "Our second quarter represented the most substantial period of innovation at Peloton since our founding."
$PTON: -26% today https://t.co/fgm7873wyL
Michael Fritzell (Asian Century Stocks)
RT @PJaccetturo: RIP Hollywood.
AI is now 100% photorealistic with the launch of Kling 3.0
In just two days, I created the opening sequence from The Way of Kings by Brandon Sanderson
You have to try this new Multi-Shot technique that makes filmmaking much faster and cheaper https://t.co/tqZCnsP96J
Dimitry Nakhla | Babylon Capital®
Chris Hohn on what types of companies he would never invest in:
"We have a long list of companies we don't invest in... banks, commodity businesses, most manufacturing industries, fossil fuels, utilities, airlines, wireless telecom, advertising agencies... Why? Because they're competitive. And the most important thing I've learned in investing is that investors underestimate the forces of competition and disruption."
___
Industries Hohn explicitly avoids:
• Banks
• Commodity businesses / manufacturing
• Insurance
• Tobacco
• Fossil fuels
• Utilities
• Airlines
• Wireless telecom
• Advertising agencies
• Most traditional manufacturing
___
Hohn's point isn't that money can't be made in these areas: plenty of investors have done well in some of them.
The deeper lesson:
Investing is as much about deciding what not to own as it is about deciding what to own.
Highly competitive industries tend to:
• Erode returns on capital
• Compress margins over time
• Require constant reinvestment
Contrast that with businesses that have:
• Pricing power
• High switching costs
• Network effects
• Structural barriers to entry
Those are the environments where long-term compounding becomes far more predictable.
___
Another subtle takeaway:
Most investors focus heavily on upside narratives.
Great investors spend just as much time thinking about downside structures.
___
Source: In Good Company | Norges Bank Investment Management (05/14/2025)