Dimitry Nakhla | Babylon Capital®
Bill Ackman on the biggest loss of his career:
“Biggest loss in my career is a company called Valeant Pharmaceuticals. We made an investment in a business that didn’t meet our core principles… And we lost $4 billion.”
___
𝐓𝐡𝐞 𝐋𝐞𝐬𝐬𝐨𝐧: 𝘐𝘵’𝘴 𝘦𝘢𝘴𝘺 𝘵𝘰 𝘧𝘰𝘤𝘶𝘴 𝘰𝘯 𝘵𝘩𝘦 𝘮𝘢𝘨𝘯𝘪𝘵𝘶𝘥𝘦 𝘰𝘧 𝘵𝘩𝘦 𝘭𝘰𝘴𝘴, 𝘣𝘶𝘵 𝘵𝘩𝘦 𝘮𝘰𝘴𝘵 𝘪𝘮𝘱𝘰𝘳𝘵𝘢𝘯𝘵 𝘱𝘢𝘳𝘵 𝘰𝘧 𝘈𝘤𝘬𝘮𝘢𝘯’𝘴 𝘳𝘦𝘧𝘭𝘦𝘤𝘵𝘪𝘰𝘯 𝘤𝘰𝘮𝘦𝘴 𝘪𝘯 𝘵𝘩𝘦 𝘧𝘪𝘳𝘴𝘵 𝘴𝘦𝘯𝘵𝘦𝘯𝘤𝘦:
“𝙒𝙚 𝙢𝙖𝙙𝙚 𝙖𝙣 𝙞𝙣𝙫𝙚𝙨𝙩𝙢𝙚𝙣𝙩 𝙞𝙣 𝙖 𝙗𝙪𝙨𝙞𝙣𝙚𝙨𝙨 𝙩𝙝𝙖𝙩 𝙙𝙞𝙙𝙣’𝙩 𝙢𝙚𝙚𝙩 𝙤𝙪𝙧 𝙘𝙤𝙧𝙚 𝙥𝙧𝙞𝙣𝙘𝙞𝙥𝙡𝙚𝙨.”
𝘛𝘩𝘢𝘵 𝘴𝘪𝘯𝘨𝘭𝘦 𝘢𝘥𝘮𝘪𝘴𝘴𝘪𝘰𝘯 𝘤𝘰𝘯𝘵𝘢𝘪𝘯𝘴 𝘮𝘰𝘳𝘦 𝘪𝘯𝘷𝘦𝘴𝘵𝘪𝘯𝘨 𝘸𝘪𝘴𝘥𝘰𝘮 𝘵𝘩𝘢𝘯 𝘮𝘰𝘴𝘵 𝘵𝘦𝘹𝘵𝘣𝘰𝘰𝘬𝘴.
𝘔𝘢𝘳𝘬𝘦𝘵𝘴 𝘤𝘰𝘯𝘴𝘵𝘢𝘯𝘵𝘭𝘺 𝘵𝘦𝘮𝘱𝘵 𝘪𝘯𝘷𝘦𝘴𝘵𝘰𝘳𝘴.
𝘛𝘩𝘦𝘳𝘦 𝘪𝘴 𝘢𝘭𝘸𝘢𝘺𝘴 𝘢𝘯𝘰𝘵𝘩𝘦𝘳 𝘴𝘵𝘰𝘤𝘬 “𝘸𝘰𝘳𝘬𝘪𝘯𝘨,” 𝘢𝘯𝘰𝘵𝘩𝘦𝘳 𝘯𝘢𝘳𝘳𝘢𝘵𝘪𝘷𝘦 𝘨𝘢𝘪𝘯𝘪𝘯𝘨 𝘮𝘰𝘮𝘦𝘯𝘵𝘶𝘮, 𝘢𝘯𝘰𝘵𝘩𝘦𝘳 𝘰𝘱𝘱𝘰𝘳𝘵𝘶𝘯𝘪𝘵𝘺 𝘵𝘩𝘢𝘵 𝘢𝘱𝘱𝘦𝘢𝘳𝘴 𝘪𝘳𝘳𝘦𝘴𝘪𝘴𝘵𝘪𝘣𝘭𝘦. 𝘉𝘶𝘵 𝘭𝘰𝘯𝘨-𝘵𝘦𝘳𝘮 𝘴𝘶𝘤𝘤𝘦𝘴𝘴 𝘪𝘴 𝘰𝘧𝘵𝘦𝘯 𝘭𝘦𝘴𝘴 𝘢𝘣𝘰𝘶𝘵 𝘸𝘩𝘢𝘵 𝘺𝘰𝘶 𝘣𝘶𝘺 𝘢𝘯𝘥 𝘮𝘰𝘳𝘦 𝘢𝘣𝘰𝘶𝘵 𝘸𝘩𝘢𝘵 𝘺𝘰𝘶 𝘳𝘦𝘧𝘶𝘴𝘦 𝘵𝘰 𝘣𝘶𝘺.
___
1️⃣ 𝐓𝐡𝐢𝐬 𝐭𝐢𝐞𝐬 𝐝𝐢𝐫𝐞𝐜𝐭𝐥𝐲 𝐢𝐧𝐭𝐨 𝐓𝐲𝐩𝐞 𝟏 𝐯𝐬. 𝐓𝐲𝐩𝐞 𝟐 𝐞𝐫𝐫𝐨𝐫𝐬:
• 𝐓𝐲𝐩𝐞 𝟏 𝐄𝐫𝐫𝐨𝐫 (False Positive): Believing a bad investment is good → permanent capital impairment
• 𝐓𝐲𝐩𝐞 𝟐 𝐄𝐫𝐫𝐨𝐫 (False Negative): Rejecting a great investment → opportunity cost
Nature offers the perfect analogy.
A deer making a Type 1 error (misjudging danger) may not survive.
A cheetah making a Type 2 error (skipping a chase) simply misses a meal.
In investing, Type 1 errors can be fatal.
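To put rough numbers on that asymmetry (a minimal Python sketch with hypothetical position sizes and returns, not a claim about any real portfolio):

# Two investors start with $1M; assume the market compounds at 10%/yr.
# Investor A makes a Type 1 error: a thesis-breaking bet wipes out 90% of capital in year one.
# Investor B makes a Type 2 error: skips a big winner and just earns the market's 10%.
capital, years, market = 1_000_000, 10, 1.10

a = capital * (1 - 0.90) * market ** (years - 1)  # impaired base keeps compounding
b = capital * market ** years                     # nothing lost, upside merely missed

print(f"A (Type 1 error): ${a:,.0f}")  # ~$236k -- the hole never closes
print(f"B (Type 2 error): ${b:,.0f}")  # ~$2.59M -- opportunity cost only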
𝐒𝐭𝐫𝐚𝐲𝐢𝐧𝐠 𝐟𝐫𝐨𝐦 𝐲𝐨𝐮𝐫 𝐜𝐨𝐫𝐞 𝐩𝐫𝐢𝐧𝐜𝐢𝐩𝐥𝐞𝐬 𝐝𝐫𝐚𝐦𝐚𝐭𝐢𝐜𝐚𝐥𝐥𝐲 𝐢𝐧𝐜𝐫𝐞𝐚𝐬𝐞𝐬 𝐭𝐡𝐞 𝐩𝐫𝐨𝐛𝐚𝐛𝐢𝐥𝐢𝐭𝐲 𝐨𝐟 𝐮𝐧𝐟𝐨𝐫𝐜𝐞𝐝 𝐞𝐫𝐫𝐨𝐫𝐬 — 𝐭𝐡𝐞 𝐤𝐢𝐧𝐝 𝐭𝐡𝐚𝐭 𝐩𝐞𝐫𝐦𝐚𝐧𝐞𝐧𝐭𝐥𝐲 𝐝𝐚𝐦𝐚𝐠𝐞 𝐫𝐞𝐭𝐮𝐫𝐧𝐬.
Discipline is not restrictive. It is protective.
___
2️⃣ 𝐂𝐨𝐦𝐩𝐥𝐞𝐱𝐢𝐭𝐲 & 𝐅𝐫𝐚𝐠𝐢𝐥𝐢𝐭𝐲
Valeant was not just a bad outcome — it was a fragile system.
Ackman highlights something subtle but critical:
“A confidence-sensitive strategy.”
Businesses dependent on:
• Constant acquisitions
• Favorable capital markets
• Equity issuance
• Narrative momentum
carry hidden fragility.
𝘞𝘩𝘦𝘯 𝘤𝘰𝘯𝘧𝘪𝘥𝘦𝘯𝘤𝘦 𝘣𝘳𝘦𝘢𝘬𝘴, 𝘵𝘩𝘦 𝘮𝘰𝘥𝘦𝘭 𝘪𝘵𝘴𝘦𝘭𝘧 𝘤𝘢𝘯 𝘣𝘳𝘦𝘢𝘬.
Contrast that with businesses whose economics are self-funded and internally compounding. They may suffer volatility, but their survival doesn’t depend on market approval.
___
3️⃣ 𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 & 𝐂𝐨𝐧𝐭𝐫𝐨𝐥
Ackman initially held a passive position.
When things deteriorated, he stepped in — but by then, he was reacting, not controlling.
Investors frequently underestimate governance risk:
• Who is making capital allocation decisions?
• What incentives drive management?
• Can poor decisions be corrected early?
___
𝐅𝐢𝐧𝐚𝐥 𝐓𝐡𝐨𝐮𝐠𝐡𝐭
Great investors are rarely defined by avoiding mistakes.
They are defined by:
• Learning from them
• Refining their principles
• Increasing selectivity
This is exactly what Bill Ackman did.
He didn’t just move on from Valeant — he extracted the lesson, refined his framework, and even had his core principles chiseled onto a piece of granite as a permanent reminder not to stray from them again.
What makes this even more compelling is the context.
During that period, Ackman was facing a low point both professionally and personally. As the saying goes, when it rains, it pours.
Many would have folded.
Instead, he did the opposite — demonstrating remarkable resilience and psychological endurance.
Tomorrow, I’ll share how Bill got through it — the tools, systems, and mindset shifts behind his epic comeback.
___
Video: Lex Fridman Podcast (02/[...]
Moon Dev
Just added 2 more openclaws to my swarm
Mac minis are becoming hard to find https://t.co/xjcVQd9w7U
Javier Blas
US Energy Secretary Chris Wright tells me he sees Venezuelan production up 30-40% by year-end from current level (that's ~270,000-360,000 b/d extra).
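(A quick back-of-envelope check: for 30-40% to equal ~270,000-360,000 b/d, the implied current baseline is roughly 900,000 b/d.)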
Last month, I wrote this @Opinion column suggesting there're "low hanging oil barrels" in Venezuela ⬇️⬇️
https://t.co/ptdquUaohr
Dimitry Nakhla | Babylon Capital®
Moody’s $MCO Q4 ’25 Report🗓️
✅ REV: $1.89B (+13% YoY)
✅ EPS: $3.64 (+39% YoY) https://t.co/CNJFpAduw1
God of Prompt
RT @godofprompt: 🚨 Holy shit… Stanford just published a paper that questions whether we even need humans to study humans.
The title sounds like a joke:
“This human study did not involve human subjects.”
But it’s dead serious.
The researchers are asking a controversial question:
Can LLM simulations count as behavioral evidence?
Here’s the core idea.
Instead of recruiting thousands of participants, running surveys, and waiting weeks for results, they simulate people using large language models.
Not generic prompts.
But structured simulations where the model is assigned demographic traits, preferences, beliefs, and contextual constraints.
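In code, that setup might look roughly like this (a minimal sketch; call_llm is a hypothetical stand-in for whatever chat-completion API is used, and the persona fields are illustrative, not the paper's actual schema):

import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion API."""
    raise NotImplementedError("wire up an LLM provider here")

def simulate_participant(persona: dict, question: str) -> str:
    # Condition the model on a structured persona, not a generic prompt.
    prompt = (
        "You are a survey participant with this profile:\n"
        f"{json.dumps(persona, indent=2)}\n\n"
        "Stay in character and answer as this person would.\n"
        f"Question: {question}"
    )
    return call_llm(prompt)

persona = {
    "age": 34,
    "region": "Midwest US",
    "education": "bachelor's",
    "risk_preference": "moderate",
    "beliefs": ["skeptical of advertising claims"],
}
# simulate_participant(persona, "Would you pay $5/month to remove ads?")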
Then they test whether the simulated responses statistically match real-world human data.
And disturbingly… they often do.
Across multiple behavioral tasks, the LLM-generated “participants” reproduced known human patterns:
• Established psychological biases
• Preference distributions
• Decision-making trends
• Even demographic splits
Not perfectly. Not universally.
But far closer than most people would expect.
The key contribution of the paper isn’t “LLMs are human.”
It’s validation.
They systematically compare simulated outputs to ground-truth human datasets and evaluate alignment using statistical benchmarks.
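At its simplest, that comparison is a distribution test (a sketch with made-up counts; the paper's actual benchmarks are broader than one chi-square):

from scipy.stats import chisquare

# Hypothetical answer counts over the same four options,
# one vector from real participants, one from simulated ones (both n=1000).
human     = [120, 340, 310, 230]   # ground-truth survey responses
simulated = [118, 335, 322, 225]   # LLM "participant" responses

stat, p = chisquare(f_obs=simulated, f_exp=human)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")  # high p => no detectable divergence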
When the distributions match, the simulation isn’t just storytelling.
It becomes empirical evidence.
That’s the uncomfortable shift.
If a sufficiently constrained LLM simulation reproduces real behavioral patterns, does it become a legitimate experimental proxy?
Because if the answer is yes, this changes everything:
• Behavioral economics
• Political science
• Market research
• Policy testing
• UX experimentation
You could prototype social interventions before deploying them in the real world.
You could stress-test messaging strategies across simulated demographics.
You could explore rare edge-case populations without recruitment bottlenecks.
But here’s where Stanford is careful.
The models don’t “understand” humans.
They reflect training data patterns.
They can amplify biases.
They can collapse under distribution shift.
And they can simulate plausibility without causality.
So the paper doesn’t claim replacement.
It argues for calibration.
LLM simulations can be useful behavioral instruments if validated against real data and bounded within known limits.
That’s the distinction.
Not synthetic humans.
Synthetic behavioral priors.
The wild part?
This paper forces academia to confront something bigger:
If large models encode large-scale behavioral regularities from the internet, they become compressed maps of human tendencies.
Not minds.
Maps.
And maps can be useful.
We’re moving from “AI as text generator” to “AI as behavioral simulator.”
The ethical, methodological, and epistemological implications are massive.
Because once simulation becomes statistically reliable, the bottleneck in social science shifts from data collection to model alignment.
And that might be the real revolution hidden in this paper.
DAIR.AI
RT @omarsar0: // From Vibe Coding to Agentic Engineering //
GLM-5 is a foundation model designed to transition from vibe coding to agentic engineering.
The model introduces novel asynchronous agent RL algorithms that enable learning from complex, long-horizon interactions. It also adopts DSA to reduce computational costs while preserving long-context performance.
The key contribution is an asynchronous RL infrastructure that decouples generation from training, allowing the model to learn from extended agentic workflows rather than short isolated tasks.
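Conceptually, that decoupling is a producer/consumer loop (a toy Python sketch of the idea, not GLM-5's actual infrastructure): rollout workers keep generating long agent trajectories while the learner trains on whatever has already finished.

import queue
import threading

rollouts: "queue.Queue[str]" = queue.Queue(maxsize=64)

def generation_worker(worker_id: int) -> None:
    # Stands in for an agent executing a long-horizon task end to end.
    for episode in range(100):
        trajectory = f"worker{worker_id}-episode{episode}"  # placeholder rollout
        rollouts.put(trajectory)  # hand off; generation never waits on a training step

def learner(steps: int = 75, batch_size: int = 4) -> None:
    for step in range(steps):
        batch = [rollouts.get() for _ in range(batch_size)]
        # A real system would run a gradient update on these trajectories here.
        if step % 25 == 0:
            print(f"step {step}: trained on batch of {len(batch)}")

workers = [threading.Thread(target=generation_worker, args=(i,)) for i in range(3)]
for w in workers:
    w.start()
learner()  # 3 workers x 100 episodes == 75 steps x batch of 4
for w in workers:
    w.join()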
GLM-5 demonstrates strong performance on standard benchmarks and surpasses previous baselines in handling end-to-end software engineering challenges.
Paper: https://t.co/pl50bRSXVR
Learn to build effective AI agents in our academy: https://t.co/1e8RZKs4uX
DAIR.AI
RT @dair_ai: A paper worth paying close attention to.
It presents Lossless Context Management (LCM), which reframes how agents handle long contexts.
It outperforms Claude Code on long-context tasks.
Recursive Language Models give the model full autonomy to write its own memory scripts. LCM takes that power back, handing it to a deterministic engine that compresses old messages into a hierarchical DAG while keeping lossless pointers to every original. Less expressive in theory, far more reliable in practice.
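The shape of that structure is easy to sketch (hypothetical Python, not the paper's code): summary nodes sit above the raw messages and keep pointers back to every original, so compression is reversible.

from dataclasses import dataclass, field

@dataclass
class Node:
    text: str                                             # raw message or summary
    children: list["Node"] = field(default_factory=list)  # lossless pointers down

def compress(nodes: list[Node], span: int = 4) -> list[Node]:
    """Deterministically fold every `span` nodes into one summary node."""
    parents = []
    for i in range(0, len(nodes), span):
        chunk = nodes[i:i + span]
        parents.append(Node(
            text=f"[summary of {len(chunk)} items]",  # a real engine would summarize
            children=chunk,                           # originals stay reachable
        ))
    return parents

def expand(node: Node) -> list[str]:
    """Follow pointers back down to recover the exact originals."""
    if not node.children:
        return [node.text]
    return [t for child in node.children for t in expand(child)]

history = [Node(f"message {i}") for i in range(16)]
root = compress(compress(history))[0]              # 16 -> 4 -> 1: a small hierarchy
assert expand(root) == [m.text for m in history]   # nothing was lost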
The results:
Their agent (Volt, on Opus 4.6) beats Claude Code at *every* context length from 32K to 1M tokens on the OOLONG benchmark. +29.2 points average improvement versus Claude Code's +24.7. The gap widens at longer contexts.
The implication is one we keep relearning from software engineering history: how you manage what the model sees may matter more than giving the model tools to manage it itself. Every agent framework shipping with "let the model figure it out" memory strategies may be building on the wrong abstraction entirely.
Paper: https://t.co/LtqS7pzmP4
Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
Dimitry Nakhla | Babylon Capital®
RT @patrick_oshag: This is my second conversation with @JoshuaKushner.
Josh started Thrive in 2011 and the firm now manages ~$50 billion. We cover the iconic investments that defined it: Instagram, Stripe, GitHub, and spend a lot of time on OpenAI. He explains how Thrive thinks about investing today and the three categories they're currently focused on.
Josh also talks about how he built the firm – why they keep the team so small, why concentration is core to what they do, and what he's learned from A24 about enabling artists to create their best work.
Throughout the conversation, Josh shares the personal stories that shaped him, from his grandmother surviving the Holocaust to lessons from Stan Druckenmiller and Jon Winkelried at formative moments in Thrive's history.
Enjoy!
https://t.co/B0ZMk6Oydo
Dimitry Nakhla | Babylon Capital®
RT @DimitryNakhla: Daniel Loeb has bought & completely sold $META four separate times over the past decade.
𝙃𝙖𝙙 𝙝𝙚 𝙨𝙞𝙢𝙥𝙡𝙮 𝙝𝙚𝙡𝙙 his original 3.75M shares, that stake would be worth roughly $2.40B today — nearly 1/3 of Third Point’s reported current assets (latest Q4 ’25 13F).
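(The implied math: $2.40B ÷ 3.75M shares ≈ $640 per share.)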
___
No — this is not a knock on Loeb.
He’s far smarter & more successful than me.
𝐁𝐮𝐭 𝐭𝐡𝐞𝐫𝐞’𝐬 𝐚𝐧 𝐢𝐦𝐩𝐨𝐫𝐭𝐚𝐧𝐭 𝐥𝐞𝐬𝐬𝐨𝐧 𝐡𝐞𝐫𝐞:
For those of us fortunate enough to own a truly exceptional business, the hardest (yet often most profitable) strategy can be:
𝐃𝐨𝐢𝐧𝐠 𝐧𝐨𝐭𝐡𝐢𝐧𝐠.
As Charlie Munger said:
“𝘐𝘯𝘷𝘦𝘴𝘵𝘪𝘯𝘨 𝘪𝘴 𝘸𝘩𝘦𝘳𝘦 𝘺𝘰𝘶 𝘧𝘪𝘯𝘥 𝘢 𝘧𝘦𝘸 𝘨𝘳𝘦𝘢𝘵 𝘤𝘰𝘮𝘱𝘢𝘯𝘪𝘦𝘴 𝘢𝘯𝘥 𝘵𝘩𝘦𝘯 𝘴𝘪𝘵 𝘰𝘯 𝘺𝘰𝘶𝘳 𝘢𝘴𝘴.”