Dimitry Nakhla | Babylon Capital®
RT @DimitryNakhla: I don’t think many investors truly appreciate how deep the moats at $SPGI and $MCO really are.
These aren’t just data businesses — they’re embedded gatekeepers in global capital markets, with network effects, regulatory reliance, & decades of trust that are hard to replicate.
Benjamin Hernandez😎
⚡ The "Electronic Giant" Choice
Recommendation: $AXTI ~$28.20
AXT Inc. is a "Buy"-rated powerhouse with a $1.56B valuation. Today's +17.19% rally is backed by a massive 6.97M shares traded.
Reason for the call: High institutional turnover at $28.20 suggests a long-term bottom. https://t.co/dGsp8x98EG
DAIR.AI
What if you could get multi-agent performance from a single model?
Multi-agent debate systems are powerful. Multiple LLMs can critique each other's reasoning, catch errors, and converge on better answers.
However, the cost scales linearly with the number of agents: five agents means 5x the compute, twenty agents 20x, and so on.
But the intelligence gained from debate doesn't have to stay locked behind a compute wall.
This new research introduces AgentArk, a framework that distills the reasoning capabilities of multi-agent debate into a single LLM through trajectory extraction and targeted fine-tuning.
This work addresses an important problem: multi-agent systems are effective but expensive at inference time. AgentArk moves that cost to training time, letting a single model carry the reasoning depth of an entire agent team.
The key idea: run multi-agent debate offline to generate high-quality reasoning traces, then train a smaller model to internalize those patterns.
Five agents debate, one student learns.
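A minimal sketch of that offline trace-collection step, assuming a generic text-in/text-out `agent` callable and simple substring answer-checking (both are illustrative stand-ins, not AgentArk's actual interface):

```python
from typing import Callable, List

def collect_debate_traces(
    prompts: List[str],
    gold_answers: List[str],
    agent: Callable[[str], str],  # any text-in/text-out model call
    n_agents: int = 5,
    n_rounds: int = 2,
) -> List[dict]:
    """Run multi-agent debate offline and keep trajectories that reach
    the known-correct answer; these become fine-tuning data for the student."""
    traces = []
    for prompt, gold in zip(prompts, gold_answers):
        # Round 0: each agent drafts an answer independently.
        drafts = [agent(prompt) for _ in range(n_agents)]
        transcript = [f"Agent {i}: {d}" for i, d in enumerate(drafts)]
        # Debate rounds: each agent revises after reading the others.
        for _ in range(n_rounds):
            context = prompt + "\n" + "\n".join(transcript)
            drafts = [agent(context + "\nCritique the answers above and revise yours.")
                      for _ in range(n_agents)]
            transcript += [f"Agent {i}: {d}" for i, d in enumerate(drafts)]
        # Keep the trace only if the debate converged on the gold answer.
        if any(gold in d for d in drafts):
            traces.append({"prompt": prompt, "completion": "\n".join(transcript)})
    return traces
```

The n_agents × n_rounds model calls happen once, at data-generation time; at inference the student pays a single forward pass.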
AgentArk tests three distillation methods. RSFT uses supervised fine-tuning on correct trajectories. DA filters for diverse reasoning paths. PAD, their strongest method, preserves the full structure of multi-agent deliberation, capturing how agents verify intermediate steps and localize errors.
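As a rough illustration of how the training targets might differ (the exact trace formats are defined in the paper; this sketch just reuses the hypothetical traces above):

```python
def build_sft_example(trace: dict, method: str = "PAD") -> dict:
    """Map one debate trace to a (prompt, completion) training pair."""
    turns = trace["completion"].split("\n")
    if method == "RSFT":
        # Train on a single correct trajectory, dropping the debate structure.
        target = turns[-1]  # e.g., the final verified-correct agent turn
    else:  # "PAD"
        # Preserve the full deliberation so the student also internalizes
        # the verification and error-localization turns.
        # (DA would additionally filter the dataset for diverse reasoning paths.)
        target = trace["completion"]
    return {"prompt": trace["prompt"], "completion": target}
```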
The results across 120 experiments:
> PAD achieves a 4.8% average gain over single-agent baselines, with in-domain improvements reaching up to 30%.
> On reasoning quality metrics, PAD scores highest in intermediate verification (4.07 vs 2.41 baseline) and reasoning coherence (3.96 vs 1.88 baseline).
> The distilled models also transfer: trained on math, they improve on TruthfulQA, with ROUGE-L jumping from 0.613 to 0.657.
Scaling from Qwen3-32B teachers down to Qwen3-0.6B students, the framework holds up. Even sub-billion parameter models absorb meaningful reasoning improvements from multi-agent debate.
Paper: https://t.co/cyPTig221s
Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
Quiver Quantitative
JUST IN: Someone on Polymarket has bet $100K that the US will strike Iran today.
They will win $4,000,000 if it happens.
Insider or gambler? https://t.co/p70QMgWPo1
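Taking the tweet's figures at face value, the implied market price works out as follows:

```python
# Figures from the tweet: $100K staked, $4M payout if the event resolves YES.
stake, payout = 100_000, 4_000_000
price = stake / payout                     # Polymarket YES shares pay out $1 each
print(f"~${price:.3f} per share -> implied probability {price:.1%}, "
      f"{payout // stake}x payout multiple")
# ~$0.025 per share -> implied probability 2.5%, 40x payout multiple
```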
God of Prompt
RT @rryssf_: MIT researchers just published evidence that the next paradigm after reasoning models isn't bigger context windows ☠️
Recursive Language Models (RLMs) let the model write code to examine, decompose, and recursively call itself over its own input.
The results are genuinely wild. Here's the full breakdown:
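The recursive pattern itself is easy to picture. A toy sketch, assuming a generic `llm` callable; in the actual RLM setup the model writes this decomposition logic itself inside a code environment rather than following a fixed split:

```python
from typing import Callable

def rlm_answer(query: str, context: str, llm: Callable[[str], str],
               max_chars: int = 4_000) -> str:
    """Answer directly if the context fits; otherwise split it,
    recurse on each half, and merge the partial answers."""
    if len(context) <= max_chars:
        return llm(f"Context:\n{context}\n\nQuestion: {query}")
    mid = len(context) // 2
    left = rlm_answer(query, context[:mid], llm, max_chars)
    right = rlm_answer(query, context[mid:], llm, max_chars)
    # One more call to combine the partial answers.
    return llm(f"Merge these partial answers to '{query}':\n1) {left}\n2) {right}")
```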
Bourbon Capital
$AMZN AWS CEO Matt Garman: "Utilities don't scale with the level that we need them to scale... so there's probably some steps between here and there where you're going to have to do behind-the-meter power... we're going to have to fund that ramp-up while the world catches up."
Congrats, Utilities... you've unlocked a long-term Godfather.
Dimitry Nakhla | Babylon Capital®
Kensho is one of the more under-the-radar assets inside $SPGI.
It’s $SPGI’s AI and data analytics platform, built to analyze massive, complex datasets across economics, geopolitics, financial markets, and corporate fundamentals—used by asset managers, banks, governments, and enterprises.
While most of the focus is on ratings and indices, Kensho quietly expands $SPGI’s moat by embedding AI-driven insights deeper into client workflows, increasing switching costs and long-term relevance.
Not flashy, but the kind of capability that can strengthen a toll-booth business over time.
___
3Q 2025 Earnings Call:
“We acquired Kensho back in 2018; including that acquisition, since 2018 we have invested over $1B in AI innovation across three developmental stages…
Importantly, our AI innovation serves as a powerful example of our ability to leverage our scale, our expertise and our fiscal discipline. The fact that we made such bold investments early on means that we’ve been able to innovate very efficiently from a financial perspective.”
___
@contextinvestor thank you for the thoughtful comment.
Benjamin Hernandez😎
Stop over-analyzing every ticker. 📊
Analysis paralysis is a profit killer. I provide clean, simple breakout alerts on WhatsApp so beginners and pros alike can act with total confidence.
Get in ✅ https://t.co/71FIJId47G
Keep your trading simple and effective.
$SOFI $HOOD $PLTR