Offshore
Photo
Michael Fritzell (Asian Century Stocks)
RT @konichivalue: This is the story of the greatest investor in Japanese history:
https://t.co/iX9LoaQjtW
tweet
Offshore
Photo
Jukan
This is an article written by a senior reporter at The Information. Personally, I don't like The Information, but I thought this one was well worth reading, so I'm sharing it.

Big Tech companies are racing to invest in OpenAI.

Nvidia is considering $30 billion, Amazon at least $20 billion, and Microsoft $10 billion.

SoftBank is also saying they will put in $30 billion.

Ken Brown, a senior reporter at The Information, argues that this massive valuation of OpenAI doesn't make sense, and he lays out his reasoning.

Before getting into that reasoning, the reporter explains why OpenAI's cost of raising capital is rising so quickly.

OpenAI has been facing growing skepticism from the market regarding its cash burn and future profitability.

OpenAI has effectively been funding its data center construction by leveraging the balance sheets of partners including Oracle, CoreWeave, and Vantage Data Centers. However, that strategy is now reaching its limits.

Investors are sending signals that there are credit limits for companies with high exposure to OpenAI. Specifically, they are raising the bond yields of those companies and driving down their stock prices.

This contrasts with how investors view the tech giants. Despite large increases in capital expenditure (CapEx) and waves of borrowing, investors have generally been enthusiastic about Big Tech's AI bets. If anything, it was the large corporations themselves, Meta, Alphabet, Amazon, and Microsoft, that were cautious: they covered most of their AI construction costs with cash on hand and kept their borrowing levels low.

At a certain point, however, the tech giants began investing in OpenAI. They provide the cash needed to reassure the lenders of OpenAI's suppliers. At the same time, they do not record this funding as capital expenditure, and, at least so far, they are not financing it with debt.

The reporter explains that these investments have another meaning: Big Tech is now running a variation of the circular dealmaking that Nvidia engaged in all last year.

They are creating circular financing deals where they send funds to their own customers.

Circular investment has different meanings for each company. For Nvidia, investing in companies that purchase its chips is a way to block competition and secure growth, and for Microsoft and Amazon, it means securing more cloud business from OpenAI.

Whatever the motivation behind the massive OpenAI investments by tech giants, the reporter points out that the impact is the same. Cash-rich companies are providing financial breathing room for OpenAI, allowing it to hold out until revenue and profits become sustainable or at least until it gets close enough for the market to open the funding tap.

This is exactly what worries shareholders: how long that will take, and whether the tech giants will keep writing checks until then.

Microsoft's 14% stock-price drop is cited as an example: investors have become more concerned about the company's reliance on OpenAI as a customer and about whether it is getting a return on its AI spending.

I would summarize this report as follows:

Big Tech is not valuing OpenAI at $730 billion because they want to. They have realized that if they don't give OpenAI $100 billion, $1 trillion will vanish from their own market capitalization.

Currently, most major IT company stock prices are inflated by an "AI premium." If OpenAI collapses due to a lack of funds for electricity and chip purchases, the logic supporting the entire AI industry could collapse.
tweet
Offshore
Photo
Bourbon Capital
$GOOG generated $164.7 billion in operating cash flow in 2025. This thing is a printing machine.

CEO: "Search saw more usage than ever before, with AI continuing to drive an expansionary moment."

Capex hits record highs at $91.4B and they are expecting over $180B in 2026

The company repurchased 240 million shares for $45 billion. In 2023 and 2024, it repurchased more than $62 billion each year

The company hit $400 billion in revenue for the first time, and at this rate is likely to reach $1 trillion in revenue within the next 10 years

Google Cloud Revenue growing 39.63% per year
YouTube Ads Revenue growing 22% per year
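
A quick sanity check on that $1 trillion extrapolation (my own arithmetic, not from the tweet): going from roughly $400B to $1T in 10 years only requires a single-digit compound growth rate.

```python
# Back-of-the-envelope: growth rate needed to go from ~$400B to $1T in 10 years.
# Inputs come from the tweet above; the calculation itself is illustrative.
start_revenue_b = 400     # 2025 revenue, in $B
target_revenue_b = 1000   # $1T target
years = 10

cagr = (target_revenue_b / start_revenue_b) ** (1 / years) - 1
print(f"Required CAGR: {cagr:.1%}")  # ~9.6% per year
```

With Cloud compounding at ~40% and YouTube ads at ~22%, a ~9.6% overall growth requirement looks plausible.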

What a wonderful 2025 for Alphabet

$GOOG Repurchased 240 million shares for $45.4 billion in 2025

Solid https://t.co/DD7hz93cc9
- Bourbon Insider Research
tweet
Offshore
Photo
God of Prompt
*i feel it coming*👀

🚨NEW: Claude Opus 4.6 & Claude Opus 4.6 Thinking are now live on Perplexity's APIs

Looks like we're getting it today and Sonnet 5 later

https://t.co/pL3m2yOyUd https://t.co/glaky9eAAP
- leo 🐾
tweet
God of Prompt
instead of scrolling x, study this free course packed with INSANE value

Jason Liu just open sourced his entire paid RAG course and consulting archive. all of it. free.

for context: this is the creator of Instructor (6M+ monthly downloads, cited by OpenAI as inspiration for their structured outputs feature). former staff ML engineer at Stitch Fix. ex-Meta. a16z scout. his RAG course on Maven had 400+ engineers enrolled.

but here's what's actually interesting.

look at the highlights he chose to summarize his own course:

> product mindset over one-off implementation
> measurement, feedback loops, improvement cycles first
> synthetic eval data to break the cold start
> feedback UX that actually works
> specialized retrieval and routing instead of one-size-fits-all search

notice what's missing?

no mention of vector databases. no embedding model comparisons. no chunk size optimization. no retrieval framework shootouts.

the guy who mass-taught production RAG to hundreds of engineers is telling you the hard part was never the retrieval. it was the product thinking around it.

measurement. feedback loops. knowing what to improve and how to tell if you improved it.
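
to make that concrete, here's a minimal sketch of the kind of eval loop he's pointing at: synthetic question-chunk pairs scored with recall@k. the names and the `retrieve` function are placeholders, not code from the course.

```python
# minimal retrieval eval sketch; names are placeholders, not from the course.
# idea: generate synthetic (question, source_chunk_id) pairs from your own docs,
# then measure how often the right chunk lands in the top-k results.
from typing import Callable

def recall_at_k(
    eval_set: list[dict],                       # [{"question": ..., "chunk_id": ...}]
    retrieve: Callable[[str, int], list[str]],  # your retriever: (query, k) -> chunk ids
    k: int = 5,
) -> float:
    hits = sum(
        1 for ex in eval_set
        if ex["chunk_id"] in retrieve(ex["question"], k)
    )
    return hits / len(eval_set)

# cold start: ask an LLM to write one question per chunk and store the pair.
# now every change (chunk size, embedding model, reranker) gets a number
# instead of a vibe check.
```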

everyone's out here debating pgvector vs pinecone vs weaviate. meanwhile the most credible RAG practitioner in the space just told you the answer was never in the vector store.

it was in the feedback UX.

567 Labs is done. the content lives on. go read it before you build another RAG pipeline without an eval framework.
- Robert Youssef
tweet
Offshore
Photo
Michael Fritzell (Asian Century Stocks)
RT @CapitalValor: GOLD MINERS
Reality is we've been lucky riding the wave, partly fuelled by Chinese ultra-speculative money
6x - 9x CF at $5k/oz is not cheap... I'd perhaps pay this on 3.5k.
I am out of the producing miner space and give thanks.
Massive wealth generator over past 18 months. 😎 https://t.co/K3U7x5aQk4
tweet
Jukan
* The Information: Nvidia will not release an upgraded version of the RTX 50 series this year.

* The Information: RTX 60 (“Rubin”) was originally slated for late 2027, but has been delayed.

* The Information: Nvidia is cutting production of its gaming GPUs.
tweet
Offshore
Photo
DAIR.AI
RT @omarsar0: NEW research from Meta Superintelligence Labs.

It uses a clever strategy-auction framework to improve self-improving agents on complex tasks.

Small agents aren't always enough.

On the simplest tasks, a 4B parameter agent attains 87% of a 32B agent's performance. But on the most complex tasks, that relative performance drops to just 21%.

The default assumption today is that you either use the biggest model for everything or route tasks with a trained classifier.

But trained routers degrade as task difficulty increases, and non-predictive cascades become prohibitively expensive for agentic workloads.

This new research introduces SALE (Strategy Auctions for Workload Efficiency), a framework inspired by freelancer marketplaces. Instead of predicting which model to use from a task description alone, agents bid with short strategic plans that are scored by a systematic cost-value mechanism.

How does the auction work? Each candidate agent proposes a strategic solution plan. A peer jury scores plans by predicted value. A heuristic cost predictor estimates execution cost. The agent with the best cost-value trade-off wins and executes its plan.

The self-improvement mechanism is where it gets interesting. After each auction, all proposed strategies are stored in a shared memory bank. Cheaper agents that lost can learn from winning strategies and submit refined bids, analogous to freelancers upskilling over time.
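
A rough sketch of how such a strategy auction might be wired, based only on the description above (the names, jury scoring, and cost model here are hypothetical, not the paper's code):

```python
# Hypothetical sketch of a strategy auction, based on the description above.
# Scoring and cost estimation are stubbed; the paper uses a peer jury and a
# heuristic cost predictor.
from dataclasses import dataclass, field

@dataclass
class Bid:
    agent: str     # e.g. "agent-4b", "agent-32b"
    plan: str      # short strategic plan proposed by the agent
    value: float   # predicted value of the plan (peer-jury score)
    cost: float    # estimated execution cost (heuristic predictor)

@dataclass
class StrategyAuction:
    memory: list[Bid] = field(default_factory=list)  # shared bank of past strategies

    def run(self, bids: list[Bid]) -> Bid:
        # winner = best cost-value trade-off (simple value-minus-cost score here)
        winner = max(bids, key=lambda b: b.value - b.cost)
        # all proposed strategies are kept so cheaper agents can study winning
        # plans and submit refined bids in later rounds
        self.memory.extend(bids)
        return winner

auction = StrategyAuction()
bids = [
    Bid("agent-4b",  "retrieve sources, answer directly",            value=0.6, cost=0.1),
    Bid("agent-32b", "decompose query, search, verify, then answer", value=0.9, cost=0.4),
]
winner = auction.run(bids)
print(f"{winner.agent} wins and executes: {winner.plan}")
```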

On deep search tasks, SALE exceeds the best single agent's pass@1 by 3.5 points while reducing cost by 35%. On coding tasks, it improves pass@1 by 2.7 points at 25% lower cost. Across both domains, SALE reduces reliance on the largest agent by 53%.

Existing routers like WTP and FrugalGPT either underperform the largest agent or fail to reduce cost. FrugalGPT's costs actually increase on complex coding tasks, reaching $0.61 per million tokens versus the best agent's $0.36.

Market-inspired coordination mechanisms that organize heterogeneous agents into adaptive ecosystems can systematically outperform both single large models and trained routing approaches.

Paper: https://t.co/UY8C5cmfxK

Learn to build effective AI Agents in our academy: https://t.co/1e8RZKs4uX
tweet
Offshore
Photo
DAIR.AI
// Agent Primitives //

This is a really interesting take on building effective multi-agent systems.

Multi-agent systems get more complex as tasks get harder. More roles, more prompts, more bespoke interaction patterns. However, the core computation patterns keep repeating across every system: review, vote, plan, execute.

But nobody treats these patterns as reusable building blocks.

This new research introduces Agent Primitives, a set of latent building blocks for constructing effective multi-agent systems.

Inspired by how neural networks are built from reusable components like residual blocks and attention heads, the researchers decompose multi-agent architectures into three recurring primitives: Review, Voting and Selection, and Planning and Execution.

What makes these primitives different? Agents inside each primitive communicate via KV-cache rather than natural language. This avoids the information degradation that happens when agents pass long text messages back and forth across multi-stage interactions.

An Organizer agent selects and composes primitives for each query, guided by a lightweight knowledge pool of previously successful configurations.

No manual system design required.
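
As a mental model, here is a toy sketch of the composition idea (placeholder names, with plain strings standing in for the paper's KV-cache latent communication):

```python
# Toy sketch of composing reusable primitives. Names are placeholders, and plain
# strings stand in for the KV-cache (latent) communication used in the paper.
from typing import Callable

def review(candidates: list[str]) -> list[str]:
    # reviewer agents critique and revise each candidate (stubbed)
    return [c + " [reviewed]" for c in candidates]

def vote_and_select(candidates: list[str]) -> list[str]:
    # voter agents pick the most promising candidate (stubbed: keep the first)
    return candidates[:1]

def plan_and_execute(candidates: list[str]) -> list[str]:
    # a planner decomposes the task and executors carry out the steps (stubbed)
    return [c + " [planned + executed]" for c in candidates]

PRIMITIVES: dict[str, Callable[[list[str]], list[str]]] = {
    "review": review,
    "vote": vote_and_select,
    "plan": plan_and_execute,
}

def organizer(query: str, knowledge_pool: dict[str, list[str]]) -> list[str]:
    # reuse a pipeline that previously worked for this kind of query,
    # falling back to a default composition
    kind = "math" if any(ch.isdigit() for ch in query) else "qa"
    return knowledge_pool.get(kind, ["plan", "review", "vote"])

knowledge_pool = {"math": ["plan", "vote"], "qa": ["review", "vote"]}
state = ["draft answer to: what is 17 * 23?"]
for name in organizer("what is 17 * 23?", knowledge_pool):
    state = PRIMITIVES[name](state)
print(state[0])
```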

The results across eight benchmarks spanning math, code generation, and QA with five open-source LLMs:

> Primitives-based MAS improve average accuracy by 12.0-16.5% over single-agent baselines

> On GPQA-Diamond, the improvement is striking: 53.2% versus the 33.6-40.2% range of prior methods like AgentVerse, DyLAN, and MAS-GPT

In terms of efficiency, token usage and inference latency drop by approximately 3-4x compared to text-based MAS, while incurring only 1.3-1.6x overhead relative to single-agent inference.

Instead of designing task-specific multi-agent architectures from scratch, Agent Primitives show that a small set of reusable computation patterns with latent communication can match or exceed custom systems while being dramatically more efficient.

Paper: https://t.co/fxEL6g0x4O

Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
tweet
Offshore
Photo
The Transcript
RT @TheTranscript_: $ARM: -8%AH

CEO: "Arm delivered a record revenue quarter as demand for AI computing on our platform continues to accelerate. Record royalty results in the third quarter reflect the growing scale of our ecosystem, as customers design the Arm compute platform into next-generation systems across cloud, edge, and physical environments to deliver high-performance, power-efficient AI. The fundamentals of the Arm business have never been stronger."
tweet