Offshore
Video
Michael Fritzell (Asian Century Stocks)
RT @ekmokaya: Someone made a video of what people are up to on our local frozen lake in Sweden during winter:
Beautiful. https://t.co/ixUGo0p2IT
tweet
The Transcript
RT @TheTranscript_: In this week's newsletter:
$TSLA: I think if we don't do the Tesla Terafab, we're going to be limited by supplier output of chips. And I think maybe memory is an even bigger limiter than AI logic
$MA: There is a question on how the consumer was affected or not by some of the tariff changes that we've seen last year. And that doesn't show up in our data either. So it's not coming through
$GS: I think 2026 will be an even better dealmaking year. 2026 could be one of the best M&A years ever. I can see through our backlog and our activity levels and our client dialogues a very robust environment for dealmaking
$RHI: While perspectives on the medium- to long-term structural impact of AI on the labor market vary greatly, most of the evidence suggests a negligible impact so far on our areas of employment, particularly among small businesses
$META: I don't think that video is the ultimate kind of final format. I just -- I think that this is going to get -- we're going to get more formats that are more interactive and immersive and you're going to get them in your feeds
tweet
Offshore
Photo
God of Prompt
Virtual assistants should be worried.
@genspark_ai just hit $155M ARR in 10 months and after trying it, I completely understand why.
This is a true all-in-one AI workspace 2.0 that genuinely replaces multiple standalone tools:
Slides • Design • Images • Data • Research
All integrated into a single, seamless interface.
Here's the game-changer:
For just $19.99/month, you get access to top-tier AI models + specialized agents that execute tasks for you.
tweet
Dimitry Nakhla | Babylon Capital®
RT @DimitryNakhla: There have been two very different, and seemingly opposing, camps in Google over the last couple of years.
One group of world-class investors stepped in when shares traded ~15x earnings, amid regulatory pressure, competitive fears, and a narrative that Google would be an AI laggard.
Another group began buying after Google's AI breakout, once the company was clearly demonstrating leadership in models, infrastructure, and real-world deployment.
Interestingly, many from the first camp have trimmed, and some have exited, just as many from the second camp have entered.
And I think both can be right.
The first group was right because the risk/reward was extraordinarily asymmetric.
Bad news was abundant. Expectations were depressed. The margin of safety was wide.
The second group may be right because the future is now clearer.
Google is proving itself as a serious AI leader, regulatory fears have softened, and the companyโs long-term growth runway looks larger than it did two years ago.
Same company.
Different entry points.
Different sources of edge.
Earlier, Google was a multiple + sentiment mean-reversion opportunity.
Today, it's more of a premium business compounding opportunity, where returns depend on sustained execution, not multiple expansion.
In both cases, the underlying belief is similar:
Google's long-term opportunity is bigger than what the market had been discounting.
I personally think Google is still early in its AI story. The applications, monetization paths, and ecosystem effects are just beginning to show themselves.
Not every great investment looks the same.
Sometimes the edge is buying when expectations collapse.
Sometimes the edge is recognizing that the future is larger than consensus.
Different paths.
Different expressions of the same long-term thesis.
$GOOGL $GOOGL
tweet
Offshore
Photo
The Transcript
$UBER
$UBER Q4'25 earnings are out - a standout quarter to end a record year, with our largest and most-engaged consumer base ever:
> MAPCs accelerated, up 18% to 202M
> Trips accelerated, up 22% to 3.8B
> Gross Bookings accelerated, up 22% to $54.1B
> Adjusted EBITDA accelerated, up 35% to $2.5B
> TTM FCF of $9.8 billion - Balaji Krishnamurthy
tweet
Offshore
Photo
God of Prompt
RT @godofprompt: Steal this mega prompt to generate realistic selfies for your Instagram AI influencer:
(The realism is scary good)
---
You are a photorealistic AI selfie prompt generator.
Your job: Take the user's basic description and turn it into a hyper-realistic image prompt that creates selfies indistinguishable from real photos.
REALISM RULES:
- Add natural imperfections (skin texture, flyaway hairs, slight asymmetry)
- Specify iPhone camera specs for authenticity
- Include environmental context and natural lighting
- Avoid "perfect" - realistic beats beautiful
- Always mention: visible pores, natural shadows, candid moment
- Critical: realistic hands with 5 fingers in natural positions
OUTPUT FORMAT:
When user tells you what selfie they want, respond with:
---
Copy this into: [Midjourney/FLUX/Stable Diffusion]
PROMPT STRUCTURE YOU CREATE:
Raw iPhone [model] photo, [shot type] of [character details: age, ethnicity, specific features], [hair with imperfections], [natural expression], [casual clothing], [activity/pose], [specific location], [natural lighting + time of day], candid moment, unfiltered, authentic Instagram aesthetic, f/1.8, shallow depth of field, slight grain
Physical details: visible skin texture and pores, natural under-eye area, flyaway hairs, subtle facial asymmetry, realistic hands (5 fingers, natural pose), [environment details]
Negative prompt: CGI, 3D render, perfect skin, plastic, beauty filter, symmetrical, studio lighting, fake, artificial, dead eyes, model pose, photoshoot
EXAMPLE INTERACTION:
User: "24 year old latina girl, gym selfie, mirror pic"
You respond:
"Here's your photorealistic prompt:"
tweet
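For anyone who wants to run the generator above programmatically rather than pasting it into a chat window, here is a minimal sketch, assuming the OpenAI Python client; the model name and the SELFIE_SYSTEM_PROMPT placeholder (which would hold the full mega prompt quoted above) are assumptions, not part of the original thread.

from openai import OpenAI

# Assumption: paste the full mega prompt from the tweet above into this constant.
SELFIE_SYSTEM_PROMPT = "You are a photorealistic AI selfie prompt generator. ..."

client = OpenAI()

def make_selfie_prompt(description: str) -> str:
    # The mega prompt acts as the system message; the user supplies only the
    # short selfie description, exactly as in the example interaction above.
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable chat model should work
        messages=[
            {"role": "system", "content": SELFIE_SYSTEM_PROMPT},
            {"role": "user", "content": description},
        ],
    )
    return resp.choices[0].message.content

print(make_selfie_prompt("24 year old latina girl, gym selfie, mirror pic"))

The returned text is the image prompt you would then paste into Midjourney, FLUX, or Stable Diffusion, as the output format above specifies.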
Offshore
Video
Lumida Wealth Management
KEN GRIFFIN SAYS THE DOLLAR LOST ITS LUSTER
"The dollar has lost some shine over the last 12 months. Tariff policies and rhetoric took it down.
When you're the strongest nation in the world, you get a strong currency. That's just how it works.
Reserve currency status means lower cost of capital. Lower interest rates. Higher quality of living for Americans.
Yes it makes exports harder. But the ability to amass and deploy capital across corporate America is the real advantage.
At the end of the day, the strongest nation will have the strongest currency."
Griffin's calling dollar weakness temporary noise against American dominance.
tweet
Offshore
Photo
Fiscal.ai
Eli Lilly's weight loss drugs are soaring.
Mounjaro: $7.4B, up 110%
Zepbound: $4.3B, up 123%
$LLY https://t.co/PCVAfnCZlm
tweet
Offshore
Photo
The Transcript
RT @dkhos: Great work to the @Uber teams - we'll keep building and delivering ... Q after Q ... no let up. And thank you to PMR and congrats BKM on the new gig!
$UBER Q4'25 earnings are out - a standout quarter to end a record year, with our largest and most-engaged consumer base ever:
> MAPCs accelerated, up 18% to 202M
> Trips accelerated, up 22% to 3.8B
> Gross Bookings accelerated, up 22% to $54.1B
> Adjusted EBITDA accelerated, up 35% to $2.5B
> TTM FCF of $9.8 billion - Balaji Krishnamurthy
tweet
Offshore
Photo
DAIR.AI
We are just scratching the surface of agentic RAG systems.
Current RAG systems don't let the model think about retrieval.
Retrieval is still mostly treated as a static step.
So the way it currently works is that RAG retrieves passages in one shot, concatenates them into context, and hopes the model figures it out.
More sophisticated methods predefine workflows that the model must follow step-by-step.
But neither approach lets the model decide how to search.
This new research introduces A-RAG, an agentic RAG framework that exposes hierarchical retrieval interfaces directly to the model, turning it into an active participant in the retrieval process.
Instead of one-shot retrieval, A-RAG gives the agent three tools at different granularities: keyword_search for exact lexical matching, semantic_search for dense passage retrieval, and chunk_read for accessing full document content.
The agent decides autonomously which tool to use, when to drill deeper, and when it has gathered enough evidence to answer.
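To make the interface concrete, here is a minimal sketch of what such a model-driven retrieval loop could look like, assuming an OpenAI-style chat client with function calling. The three tool names (keyword_search, semantic_search, chunk_read) follow the tweet; the toy corpus, the tool schemas, the system prompt, and the stopping logic are illustrative assumptions, not the paper's implementation.

import json
from openai import OpenAI

client = OpenAI()

# Stand-in corpus: chunk_id -> text. A real system would index documents.
CORPUS = {
    "doc1#0": "HotpotQA is a multi-hop question answering benchmark.",
    "doc2#0": "Dense retrieval embeds queries and passages in one vector space.",
}

def keyword_search(query: str) -> list:
    # Exact lexical matching; a production system would use BM25 or similar.
    return [cid for cid, text in CORPUS.items()
            if any(w.lower() in text.lower() for w in query.split())]

def semantic_search(query: str) -> list:
    # Dense passage retrieval; stubbed as lexical search in this sketch.
    return keyword_search(query)

def chunk_read(chunk_id: str) -> str:
    # Return the full text of one chunk.
    return CORPUS.get(chunk_id, "")

REGISTRY = {"keyword_search": keyword_search,
            "semantic_search": semantic_search,
            "chunk_read": chunk_read}

def schema(name, desc, arg):
    # Helper to declare one single-argument tool for the chat API.
    return {"type": "function", "function": {
        "name": name, "description": desc,
        "parameters": {"type": "object",
                       "properties": {arg: {"type": "string"}},
                       "required": [arg]}}}

TOOLS = [schema("keyword_search", "Exact lexical match over the corpus.", "query"),
         schema("semantic_search", "Semantic retrieval over the corpus.", "query"),
         schema("chunk_read", "Read the full text of one chunk by id.", "chunk_id")]

def answer(question: str, max_steps: int = 20) -> str:
    # The model drives retrieval: it picks tools, drills deeper, and stops on
    # its own by replying without a tool call. max_steps caps the test-time
    # compute budget.
    messages = [
        {"role": "system",
         "content": "Use the retrieval tools as needed, then answer concisely."},
        {"role": "user", "content": question},
    ]
    for _ in range(max_steps):
        resp = client.chat.completions.create(
            model="gpt-5-mini",  # model name per the tweet; swap in your own
            messages=messages, tools=TOOLS)
        msg = resp.choices[0].message
        if not msg.tool_calls:
            return msg.content
        messages.append(msg)
        for call in msg.tool_calls:
            result = REGISTRY[call.function.name](**json.loads(call.function.arguments))
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": json.dumps(result)})
    return "Step budget exhausted without a final answer."

The same loop also makes the test-time scaling knobs concrete: raising max_steps (or the model's reasoning effort) simply lets the agent take more retrieval turns before committing to an answer.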
Information in a corpus is naturally organized at multiple granularities, from fine-grained keywords to sentence-level semantics to full chunks.
Giving the model access to all these levels lets it spontaneously develop diverse retrieval strategies tailored to each task.
Results with GPT-5-mini are impressive. A-RAG achieves 94.5% on HotpotQA, 89.7% on 2Wiki, and 74.1% on MuSiQue, outperforming GraphRAG, HippoRAG2, LinearRAG, and every other baseline across all benchmarks.
Even A-RAG Naive, equipped with only a single embedding tool, beats most existing methods, demonstrating the raw power of the agentic paradigm itself.
Context efficiency is where it gets interesting. A-RAG Full retrieves only 2,737 tokens on HotpotQA compared to Naive RAG's 5,358 tokens, while achieving 13 points higher accuracy. The hierarchical design lets the model avoid loading irrelevant content, reading only what matters.
The framework also scales with test-time compute. Increasing max steps from 5 to 20 improves GPT-5-mini by ~8%. Scaling reasoning effort from minimal to high yields ~25% gains for both GPT-5-mini and GPT-5.
The future of RAG isn't better retrieval algorithms. It's better retrieval interfaces that let models use their reasoning capabilities to decide what to search, how to search, and when to stop.
Paper: https://t.co/FbZsV87npT
Learn to build effective AI Agents in our academy: https://t.co/LRnpZN7L4c
tweet