Offshore
Dimitry Nakhla | Babylon Capital®
In hindsight, perhaps a year from today, the brief moment $MSFT traded at ~26x may look like an obvious opportunity hiding in plain sight. https://t.co/G5EHiyA5AB A quality valuation analysis on $MSFT • NTM P/E Ratio: …
No guarantee as to accuracy or completeness. Past performance does not guarantee future results. - Dimitry Nakhla | Babylon Capital®
Photo
Quiver Quantitative
Someone on Polymarket has bet almost $100K that the US will strike Iran by the end of the month.
Insider or gambler? https://t.co/ObLZHiUTo7
Photo
App Economy Insights
Two signals in Big Tech this week:
• SaaSpocalypse Now $WCLD
• Intel's Supply Squeeze $INTC
Full story with visuals:
https://t.co/6vfDyTOC1i
Video
memenodes
One day I will go away from everyone to the mountains like this https://t.co/OlzsDEVkKa
Photo
Fiscal.ai
RT @StockMKTNewz: Netflix $NFLX brought in $5.3 Billion from the United States and Canada last quarter, up from $3.3B in Q4 2021 https://t.co/ro6g2sG1Pf
Photo
God of Prompt
RT @ytscribeai: life hack for content creators:
1/ n8n watches youtube channels via RSS (no API key)
2/ https://t.co/sWBIcxGUoD extracts transcripts ($0.0035 each)
3/ AI turns them into newsletters, blogs, tweets
one workflow. infinite content engine.
https://t.co/sWBIcxGUoD https://t.co/cRrSCMABas
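The three steps above can be sketched outside n8n too. A minimal Python sketch, assuming a standard YouTube channel Atom feed (the kind n8n's RSS node polls, no API key needed); `fetch_transcript` and `summarize` are hypothetical placeholders for the unnamed transcript API and an LLM call, not real endpoints.

```python
import xml.etree.ElementTree as ET

# YouTube channel feeds are Atom documents with a yt:videoId per entry.
ATOM_NS = {"a": "http://www.w3.org/2005/Atom",
           "yt": "http://www.youtube.com/xml/schemas/2015"}

def new_video_ids(feed_xml: str, seen: set) -> list:
    # Step 1: parse the channel feed and keep only videos we haven't processed.
    root = ET.fromstring(feed_xml)
    ids = [entry.findtext("yt:videoId", namespaces=ATOM_NS)
           for entry in root.findall("a:entry", ATOM_NS)]
    return [v for v in ids if v and v not in seen]

def fetch_transcript(video_id: str) -> str:
    # Step 2 (placeholder): the paid transcript-extraction call would go here.
    return f"transcript of {video_id}"

def summarize(transcript: str) -> str:
    # Step 3 (placeholder): an LLM turns the transcript into a post/newsletter.
    return transcript.upper()
```

Polling the real feed is just fetching `https://www.youtube.com/feeds/videos.xml?channel_id=...` on a schedule and passing the body to `new_video_ids` with the set of already-seen IDs.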
Photo
God of Prompt
RT @alex_prompter: This paper from Google DeepMind, Meta, Amazon, and Yale University quietly explains why most "AI agents" feel smart in demos and dumb in real work.
The core idea is simple but uncomfortable: today's LLMs don't reason, they react. They generate fluent answers token by token, but they don't explicitly plan, reflect, or decide when to stop and rethink. This paper argues that real progress comes from turning LLMs into agentic reasoners: systems that can set goals, break them into subgoals, choose actions, evaluate outcomes, and revise their strategy mid-flight.
The authors formalize agentic reasoning as a loop, not a prompt:
observe → plan → act → reflect → update state → repeat.
Instead of one long chain-of-thought, the model maintains an internal task state. It decides what to think about next, not just how to finish the sentence.
This is why classic tricks like longer CoT plateau. You get more words, not better decisions.
One of the most important insights: reasoning quality collapses when control and reasoning are mixed. When the same prompt tries to plan, execute, critique, and finalize, errors compound silently. Agentic setups separate these roles.
Planning is explicit. Execution is scoped. Reflection is delayed and structured.
The paper shows that even strong frontier models improve dramatically when given:
• explicit intermediate goals
• checkpoints for self-evaluation
• the ability to abandon bad paths
• memory of past attempts
No new weights. No bigger models. Just better control over when and why the model reasons.
The takeaway is brutal for the industry: scaling tokens and parameters won't give us reliable agents. Architecture will. Agentic reasoning isn't a feature; it's the missing operating system for LLMs.
Most "autonomous agents" today are just fast typists with tools.
This paper explains what it actually takes to build thinkers.
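The loop described in the thread can be sketched in a few lines. This is a hypothetical skeleton, not the paper's implementation: the planner, actor, and reflector below are deterministic stubs standing in for separate, scoped LLM calls, but the control structure (explicit subgoals, scoped execution, delayed reflection, memory of attempts, abandonable paths) matches what the thread lists.

```python
from dataclasses import dataclass, field

@dataclass
class TaskState:
    goal: str
    subgoals: list = field(default_factory=list)   # explicit intermediate goals
    attempts: list = field(default_factory=list)   # memory of past attempts
    done: bool = False

def plan(state: TaskState) -> list:
    # Planning is explicit: break the goal into subgoals up front.
    if not state.subgoals:
        state.subgoals = [f"step {i}" for i in range(1, 4)]
    return state.subgoals

def act(subgoal: str) -> str:
    # Execution is scoped: one subgoal per call, nothing else.
    return f"result of {subgoal}"

def reflect(state: TaskState, result: str) -> bool:
    # Reflection is delayed and structured: evaluate only after acting,
    # and record the attempt either way.
    ok = "result" in result
    state.attempts.append((result, ok))
    return ok

def run(goal: str, max_steps: int = 10) -> TaskState:
    # observe -> plan -> act -> reflect -> update state -> repeat
    state = TaskState(goal=goal)
    for subgoal in plan(state):
        if len(state.attempts) >= max_steps:
            break
        result = act(subgoal)
        if not reflect(state, result):
            continue  # bad path abandoned; state survives for the next subgoal
    state.done = all(ok for _, ok in state.attempts)
    return state
```

The point of the separation is that each function could be a different prompt (or a different model) without the others knowing: mixing them back into one prompt is exactly the failure mode the thread describes.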
Video
Startup Archive
Mark Zuckerberg on how to avoid bad hires when your startup is growing quickly
As Mark explains, every fast-growing startup will repeatedly face the choice: "Do I hire the person who's in front of me now because they seem good?" or "Do I hold out to get someone who's even better?"
Mark offers his personal heuristic for founders facing this choice:
"The heuristic that I always focused on for myself and my own kind of direct hiring, that I think works when you recurse it through the organization, is that you should only hire someone to be on your team if you would be happy working for them in an alternate universe. I think that works, and that's basically how I've tried to build my team."
He continues:
"I'm not in a rush to not be running the company, but I think in an alternate universe where one of these other folks was running the company, I'd be happy to work for them. I feel like I'd learn from them. I respect their general judgment. They're all very insightful. They have good values… I think if you apply that at every layer in the organization, then you'll have a pretty strong organization."
Video source: @lexfridman (2023)
Photo
Quiver Quantitative
Last week, we sent out an alert when we saw a new account on Polymarket betting on Rick Rieder as the next Fed chair.
They have now made $83K in unrealized gains, as his odds have skyrocketed.
Follow for more alerts on potential insider trades.
A new account on Polymarket has bet $14K on Rick Rieder being the next Fed chair.
They will win $180K if correct. https://t.co/wh34ZLs5wv - Quiver Quantitative
Moon Dev
claude code for trading is nuts
this is how it can help you automate your trading in 2026
im not teaching this stuff anymore so you may wanna bm this https://t.co/1b4s020nvP