Offshore
Video
Michael Fritzell (Asian Century Stocks)
RT @ekmokaya: Someone made a video of what people are up to on our local frozen lake in Sweden during winter:

Beautiful. https://t.co/ixUGo0p2IT
tweet
The Transcript
RT @TheTranscript_: In this week's newsletter:

🏭 $TSLA: I think if we don't do the Tesla Terafab, we're going to be limited by supplier output of chips. And I think maybe memory is an even bigger limiter than AI logic

🛍️ $MA: There is a question on how the consumer was affected or not by some of the tariff changes that we've seen last year. And that doesn't show up in our data either. So it's not coming through

🤝 $GS: I think 2026 will be an even better dealmaking year. 2026 could be one of the best M&A years ever. I can see through our backlog and our activity levels and our client dialogues a very robust environment for dealmaking

👩‍💻 $RHI: While perspectives on the medium- to long-term structural impact of AI on the labor market vary greatly, most of the evidence suggests a negligible impact so far on our areas of employment, particularly among small businesses

📱 $META: I don't think that video is the ultimate kind of final format. I just -- I think that this is going to get -- we're going to get more formats that are more interactive and immersive and you're going to get them in your feeds
tweet
Offshore
Photo
God of Prompt
Virtual assistants should be worried.

@genspark_ai just hit $155M ARR in 10 months and after trying it, I completely understand why.

This is a true all-in-one AI workspace 2.0 that genuinely replaces multiple standalone tools:

Slides • Design • Images • Data • Research

All integrated into a single, seamless interface.

Here's the game-changer:

For just $19.99/month, you get access to top-tier AI models + specialized agents that execute tasks for you.
tweet
Dimitry Nakhla | Babylon Capital®
RT @DimitryNakhla: There have been two very different, and seemingly opposing, camps in Google over the last couple of years.

One group of world-class investors stepped in when shares traded ~15x earnings, amid regulatory pressure, competitive fears, and a narrative that Google would be an AI laggard.

Another group began buying after Google's AI breakout, once the company was clearly demonstrating leadership in models, infrastructure, and real-world deployment.

Interestingly, many from the first camp have trimmed or, in some cases, exited, while many from the second camp have entered.

And I think both can be right.

The first group was right because the risk/reward was extraordinarily asymmetric.

Bad news was abundant. Expectations were depressed. The margin of safety was wide.

The second group may be right because the future is now clearer.

Google is proving itself as a serious AI leader, regulatory fears have softened, and the company's long-term growth runway looks larger than it did two years ago.

Same company.

Different entry points.

Different sources of edge.

๐˜Œ๐˜ข๐˜ณ๐˜ญ๐˜ช๐˜ฆ๐˜ณ, Google was a multiple + sentiment mean-reversion opportunity.

๐˜›๐˜ฐ๐˜ฅ๐˜ข๐˜บ, itโ€™s more of a premium business compounding opportunity โ€” where returns depend on sustained execution, not multiple expansion.

๐™„๐™ฃ ๐™—๐™ค๐™ฉ๐™ ๐™˜๐™–๐™จ๐™š๐™จ, ๐™ฉ๐™๐™š ๐™ช๐™ฃ๐™™๐™š๐™ง๐™ก๐™ฎ๐™ž๐™ฃ๐™œ ๐™—๐™š๐™ก๐™ž๐™š๐™› ๐™ž๐™จ ๐™จ๐™ž๐™ข๐™ž๐™ก๐™–๐™ง:

๐†๐จ๐จ๐ ๐ฅ๐žโ€™๐ฌ ๐ฅ๐จ๐ง๐ -๐ญ๐ž๐ซ๐ฆ ๐จ๐ฉ๐ฉ๐จ๐ซ๐ญ๐ฎ๐ง๐ข๐ญ๐ฒ ๐ข๐ฌ ๐›๐ข๐ ๐ ๐ž๐ซ ๐ญ๐ก๐š๐ง ๐ฐ๐ก๐š๐ญ ๐ญ๐ก๐ž ๐ฆ๐š๐ซ๐ค๐ž๐ญ ๐ก๐š๐ ๐›๐ž๐ž๐ง ๐๐ข๐ฌ๐œ๐จ๐ฎ๐ง๐ญ๐ข๐ง๐ .

I personally think Google is still early in its AI story. The applications, monetization paths, and ecosystem effects are just beginning to show themselves.

Not every great investment looks the same.

๐˜š๐˜ฐ๐˜ฎ๐˜ฆ๐˜ต๐˜ช๐˜ฎ๐˜ฆ๐˜ด the edge is buying when expectations collapse.

๐˜š๐˜ฐ๐˜ฎ๐˜ฆ๐˜ต๐˜ช๐˜ฎ๐˜ฆ๐˜ด the edge is recognizing that the future is larger than consensus.

Different paths.

Different expressions of the same long-term thesis.

$GOOGL
tweet
Offshore
Photo
The Transcript
$UBER

$UBER Q4'25 earnings are out, a standout quarter to end a record year, with our largest and most-engaged consumer base ever:
> MAPCs accelerated, up 18% to 202M
> Trips accelerated, up 22% to 3.8B
> Gross Bookings accelerated, up 22% to $54.1B
> Adjusted EBITDA accelerated, up 35% to $2.5B
> TTM FCF of $9.8 billion
- Balaji Krishnamurthy
tweet
AkhenOsiris
$APP

Mizuho

1. Heard Cleveland was negative: Calling out some churn on new E-Comm spenders (and some partners skeptical they can scale)

2. CloudX Launch today… CloudX Hits GA With Plans To Rewrite The Mobile Ad Stack Using AI Agents
tweet
Offshore
Photo
God of Prompt
RT @godofprompt: Steal this mega prompt to generate realistic selfies for your Instagram AI influencer:

(The realism is scary good)

---

You are a photorealistic AI selfie prompt generator.

Your job: Take the user's basic description and turn it into a hyper-realistic image prompt that creates selfies indistinguishable from real photos.

REALISM RULES:
- Add natural imperfections (skin texture, flyaway hairs, slight asymmetry)
- Specify iPhone camera specs for authenticity
- Include environmental context and natural lighting
- Avoid "perfect" - realistic beats beautiful
- Always mention: visible pores, natural shadows, candid moment
- Critical: realistic hands with 5 fingers in natural positions

OUTPUT FORMAT:

When user tells you what selfie they want, respond with:

---

Copy this into: [Midjourney/FLUX/Stable Diffusion]

PROMPT STRUCTURE YOU CREATE:

Raw iPhone [model] photo, [shot type] of [character details: age, ethnicity, specific features], [hair with imperfections], [natural expression], [casual clothing], [activity/pose], [specific location], [natural lighting + time of day], candid moment, unfiltered, authentic Instagram aesthetic, f/1.8, shallow depth of field, slight grain

Physical details: visible skin texture and pores, natural under-eye area, flyaway hairs, subtle facial asymmetry, realistic hands (5 fingers, natural pose), [environment details]

Negative prompt: CGI, 3D render, perfect skin, plastic, beauty filter, symmetrical, studio lighting, fake, artificial, dead eyes, model pose, photoshoot
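
If you want to wire this template into a script rather than paste it by hand, a small helper along these lines could assemble the positive and negative prompts from a few fields (a minimal sketch: the function name, parameters, and defaults are illustrative assumptions, only the template wording mirrors the structure above):

# Hypothetical helper that fills the prompt structure above from a few fields.
def build_selfie_prompt(age: int, look: str, setting: str, lighting: str,
                        phone: str = "iPhone 15 Pro",
                        shot: str = "mirror selfie") -> str:
    positive = (
        f"Raw {phone} photo, {shot} of {age} year old {look}, "
        f"hair with flyaway strands, natural relaxed expression, casual clothing, "
        f"{setting}, {lighting}, candid moment, unfiltered, authentic Instagram "
        "aesthetic, f/1.8, shallow depth of field, slight grain. "
        "Physical details: visible skin texture and pores, natural under-eye area, "
        "subtle facial asymmetry, realistic hands (5 fingers, natural pose)."
    )
    negative = ("CGI, 3D render, perfect skin, plastic, beauty filter, symmetrical, "
                "studio lighting, fake, artificial, dead eyes, model pose, photoshoot")
    return f"{positive}\nNegative prompt: {negative}"

# Example call, mirroring the gym-selfie interaction shown below:
print(build_selfie_prompt(24, "latina woman", "gym with mirrors and weight racks",
                          "overhead fluorescent lighting, evening"))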

EXAMPLE INTERACTION:

User: "24 year old latina girl, gym selfie, mirror pic"

You respond:

"Here's your photorealistic prompt:"
tweet
Offshore
Video
Lumida Wealth Management
KEN GRIFFIN SAYS THE DOLLAR LOST ITS LUSTER

"The dollar has lost some shine over the last 12 months. Tariff policies and rhetoric took it down.

When you're the strongest nation in the world, you get a strong currency. That's just how it works.

Reserve currency status means lower cost of capital. Lower interest rates. Higher quality of living for Americans.

Yes it makes exports harder. But the ability to amass and deploy capital across corporate America is the real advantage.

At the end of the day, the strongest nation will have the strongest currency."

Griffin is framing dollar weakness as temporary noise against a backdrop of American dominance.
tweet
Offshore
Photo
Fiscal.ai
Eli Lilly's weight loss drugs are soaring.

Mounjaro: $7.4B, up 110%
Zepbound: $4.3B, up 123%

$LLY https://t.co/PCVAfnCZlm
tweet
Offshore
Photo
The Transcript
RT @dkhos: Great work to the @Uber teams - we'll keep building and delivering ... Q after Q ... no let up. And thank you to PMR and congrats BKM on the new gig!

$UBER Q4'25 earnings are out, a standout quarter to end a record year, with our largest and most-engaged consumer base ever:
> MAPCs accelerated, up 18% to 202M
> Trips accelerated, up 22% to 3.8B
> Gross Bookings accelerated, up 22% to $54.1B
> Adjusted EBITDA accelerated, up 35% to $2.5B
> TTM FCF of $9.8 billion
- Balaji Krishnamurthy
tweet
Offshore
Photo
DAIR.AI
We are just scratching the surface of agentic RAG systems.

Current RAG systems don't let the model think about retrieval.

Retrieval is still mostly treated as a static step.

So the way it currently works is that RAG retrieves passages in one shot, concatenates them into context, and hopes the model figures it out.

More sophisticated methods predefine workflows that the model must follow step-by-step.

But neither approach lets the model decide how to search.

This new research introduces A-RAG, an agentic RAG framework that exposes hierarchical retrieval interfaces directly to the model, turning it into an active participant in the retrieval process.

Instead of one-shot retrieval, A-RAG gives the agent three tools at different granularities: keyword_search for exact lexical matching, semantic_search for dense passage retrieval, and chunk_read for accessing full document content.

The agent decides autonomously which tool to use, when to drill deeper, and when it has gathered enough evidence to answer.
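
A minimal sketch of what such a tool interface could look like in a Python agent loop: the three tool names (keyword_search, semantic_search, chunk_read) come from the description above, while the corpus format, the stand-in scorers, the llm_step callable, and the stopping rule are illustrative assumptions rather than the paper's implementation.

# Sketch of an agentic retrieval interface in the spirit of A-RAG (illustrative, not the paper's code).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Chunk:
    doc_id: str
    text: str

CORPUS = [
    Chunk("doc1", "HotpotQA is a multi-hop question answering benchmark."),
    Chunk("doc2", "Dense retrievers embed queries and passages into one vector space."),
]

def keyword_search(query: str, k: int = 3) -> list[str]:
    """Exact lexical matching: rank chunks by overlapping lowercase tokens."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(c.text.lower().split())), c) for c in CORPUS]
    return [c.text for score, c in sorted(scored, key=lambda x: -x[0]) if score][:k]

def semantic_search(query: str, k: int = 3) -> list[str]:
    """Dense passage retrieval; a real system would score with embeddings."""
    return keyword_search(query, k)  # stand-in scorer for this sketch

def chunk_read(doc_id: str) -> str:
    """Return the full text of a document the agent wants to inspect."""
    return " ".join(c.text for c in CORPUS if c.doc_id == doc_id)

TOOLS = {"keyword_search": keyword_search,
         "semantic_search": semantic_search,
         "chunk_read": chunk_read}

def answer(question: str, llm_step: Callable[[str], dict], max_steps: int = 5) -> str:
    """Agent loop: at each step the model either calls a tool or answers."""
    evidence = ""
    for _ in range(max_steps):
        # llm_step wraps whatever model you use; it returns either
        # {"tool": "semantic_search", "args": {"query": "..."}} or {"answer": "..."}
        action = llm_step(f"Question: {question}\nEvidence so far:\n{evidence}")
        if "answer" in action:
            return action["answer"]
        result = TOOLS[action["tool"]](**action["args"])
        evidence += f"\n[{action['tool']}] {result}"
    return "No answer within the step budget."

The test-time-compute point below maps onto max_steps in this sketch: a larger step budget is what lets the agent keep drilling down before it commits to an answer.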

Information in a corpus is naturally organized at multiple granularities, from fine-grained keywords to sentence-level semantics to full chunks.

Giving the model access to all these levels lets it spontaneously develop diverse retrieval strategies tailored to each task.

Results with GPT-5-mini are impressive. A-RAG achieves 94.5% on HotpotQA, 89.7% on 2Wiki, and 74.1% on MuSiQue, outperforming GraphRAG, HippoRAG2, LinearRAG, and every other baseline across all benchmarks.

Even A-RAG Naive, equipped with only a single embedding tool, beats most existing methods, demonstrating the raw power of the agentic paradigm itself.

Context efficiency is where it gets interesting. A-RAG Full retrieves only 2,737 tokens on HotpotQA compared to Naive RAG's 5,358 tokens, while achieving 13 points higher accuracy. The hierarchical design lets the model avoid loading irrelevant content, reading only what matters.

The framework also scales with test-time compute. Increasing max steps from 5 to 20 improves GPT-5-mini by ~8%. Scaling reasoning effort from minimal to high yields ~25% gains for both GPT-5-mini and GPT-5.

The future of RAG isn't better retrieval algorithms. It's better retrieval interfaces that let models use their reasoning capabilities to decide what to search, how to search, and when to stop.

Paper: https://t.co/FbZsV87npT

Learn to build effective AI Agents in our academy: https://t.co/LRnpZN7L4c
tweet