Offshore
Photo
God of Prompt
Kilo choosing to stay in VS Code while competitors leave?

That's what "developer-first" actually looks like. 👍

Sourcegraph (Amp Code) announced plans to kill their VS Code extension.

Our VS Code extension is here to stay. Learn more:
https://t.co/8X6QVvp2FC
- Kilo
tweet
Michael Fritzell (Asian Century Stocks)
RT @michaelxpettis: When you do most of your shopping at airports, that's a sign that you travel way too much, but I have to say that a bookshop in the international terminal of the Jakarta airport has one of the best selections of books on finance, economics and economic history I have ever seen in any airport. I believe this is a picture of the bookshop I went to yesterday.
tweet
God of Prompt
RT @godofprompt: RIP "act as an expert" and basic prompting.

A former OpenAI engineer just exposed "Prompt Contract" - the internal technique that makes LLMs actually obey you.

Works on ChatGPT, Claude, Gemini, everything.

Here's how to use it right now: https://t.co/6ZDCFs5JvK
tweet
DAIR.AI
RT @dair_ai: Great paper on improving the efficiency of reasoning models.

Long chain-of-thought reasoning is powerful but fundamentally limited.

The longer a model reasons, the more expensive it gets. It's well known that self-attention scales quadratically with sequence length, context windows impose hard ceilings, and critical early information fades as traces grow longer.

But what if a model could reason indefinitely without hitting any of those walls?

This new research introduces InftyThink+, an RL framework that teaches models to break reasoning into iterative rounds connected by self-generated summaries. Instead of one massive chain-of-thought, the model reasons in bounded segments, compresses its progress into a summary, and continues fresh.
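
The round-summarize-continue loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `generate` stands in for any LLM call, and the prompt wording, round cap, and `FINAL ANSWER:` stop marker are all illustrative assumptions.

```python
# Hedged sketch of iterative reasoning with self-generated summaries.
# `generate(prompt, max_tokens)` is a placeholder for any LLM call;
# all prompt strings and the stop marker are illustrative, not from the paper.

def iterative_reason(generate, question, max_rounds=4, segment_budget=2048):
    """Reason in bounded segments, carrying a compressed summary forward."""
    summary = ""
    for _ in range(max_rounds):
        prompt = (f"Question: {question}\n"
                  f"Progress so far: {summary}\n"
                  f"Continue reasoning:")
        segment = generate(prompt, max_tokens=segment_budget)
        if "FINAL ANSWER:" in segment:  # model signals it is done
            return segment.split("FINAL ANSWER:")[1].strip()
        # Compress this segment into a fresh summary; the next round
        # sees only the summary, never the full earlier trace.
        summary = generate(f"Summarize the key progress:\n{segment}",
                           max_tokens=256)
    return summary  # round budget exhausted; return best-effort state
```

The point of the structure is that per-round context stays bounded: each call sees the question plus a short summary, regardless of how many rounds have elapsed.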

Iterative reasoning only works when the model makes good decisions about when to summarize, what to preserve, and how to continue. Previous methods used supervised learning or fixed heuristics to handle these decisions. InftyThink+ treats them as a sequential decision problem optimized end-to-end with trajectory-level RL.

Training proceeds in two stages. A supervised cold-start teaches the basic iterative format. Then RL optimizes the full trajectory, learning strategic summarization and continuation policies through reward signals.
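
"Trajectory-level" here means one scalar reward scores the whole multi-round episode, so summarization and continuation choices are credited jointly. A toy sketch of such a reward, with an invented length penalty purely for illustration (the paper's actual reward design may differ):

```python
# Illustrative trajectory-level reward: one scalar per full multi-round
# episode. The correctness term and length penalty are assumptions made
# for this sketch, not the paper's exact formulation.

def trajectory_reward(correct, total_tokens, token_budget=16_000, penalty=0.1):
    base = 1.0 if correct else 0.0
    # Penalize only tokens spent beyond the budget, scaled to budget units.
    overrun = max(0, total_tokens - token_budget) / token_budget
    return base - penalty * overrun
```

Because the reward attaches to the whole trajectory rather than any single segment, gradient updates can trade off a worse intermediate summary against a better final answer.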

The results on DeepSeek-R1-Distill-Qwen-1.5B: InftyThink+ improves accuracy on AIME24 by 21 percentage points, outperforming conventional long chain-of-thought RL by an additional 9 points. On the out-of-distribution GPQA benchmark, it gains 5 points over the baseline and 4 points over vanilla RL. On AIME25, inference latency drops by 32.8% compared to standard reasoning. RL training itself speeds up by 18.2%.

A key finding: RL doesn't just make the model reason longer. It teaches the model to generate better summaries. When researchers replaced RL-trained summaries with external ones from a separate LLM, performance dropped. After RL training, the model's own summaries become tightly coupled with its downstream reasoning in ways external summarizers can't replicate.

The approach also decouples reasoning depth from wall-clock time. After RL, InftyThink+ extends reasoning depth while keeping latency nearly flat on several benchmarks. Standard reasoning sees latency balloon as depth increases.

Reasoning models today are bounded by context windows and crushed by quadratic attention costs. InftyThink+ removes both constraints by teaching models to reason in compressed iterations, enabling theoretically infinite-horizon reasoning with bounded compute per step.
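
The compute argument is simple arithmetic. With attention cost growing as the square of sequence length, splitting one long trace into bounded rounds cuts the total quadratic term roughly by the number of rounds (the token counts below are invented for illustration):

```python
# Back-of-the-envelope attention cost: one long trace vs bounded rounds.
# Numbers are illustrative, not benchmark figures from the paper.

def attention_cost(seq_len):
    return seq_len ** 2  # pairwise token interactions per layer, up to constants

long_trace = attention_cost(32_000)                    # one 32k-token chain
rounds = 8
iterative = rounds * attention_cost(32_000 // rounds)  # eight 4k-token rounds
print(long_trace // iterative)  # the single long trace costs 8x more
```

This ignores the (small, fixed-size) summary carried between rounds, but the scaling point stands: bounded segments keep per-step compute constant no matter how long the overall reasoning runs.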

Paper: https://t.co/VWM71BzXUf

Learn to build effective AI Agents in our academy: https://t.co/LRnpZN7L4c
tweet
The Transcript
RT @TheTranscript_: $AMZN CFO: "Advertising revenue grew 22% in the fourth quarter, and we added over $12 billion of incremental revenue in 2025 alone as our full funnel advertising approach of connecting brands with customers is resonating, simplifying the advertiser experience to enable brands to better reach customers wherever they are."
tweet
Michael Fritzell (Asian Century Stocks)
👀

@MikeFritzell I was in Tokyo’s watch town the other day and it was thriving. Well assessed
- Rouleur Capital
tweet
God of Prompt
I've written 500 articles, 23 whitepapers, and 3 ebooks using Claude over 2 years, and these 10 prompts are the ONLY ones I actually use anymore because they handle 90% of professional writing better than any human editor I've worked with and cost me $0.02 per 1000 words: 👇 https://t.co/Yx6MCNdLbr
tweet
Brady Long
🚨 Chinese researchers just published a paper that destroys every AI agent startup pitch deck.

It's called ROME + ALE, and it exposes why every "AI agent company" you've heard of is building on quicksand.

Here's what nobody's talking about: https://t.co/cmx0AP9OJN
tweet
Michael Fritzell (Asian Century Stocks)
RT @GdiGiulio_: @ClarkSquareCap $FIH Fairfax India at ~80% of reported BV, probably ~50% of a growing FV.
https://t.co/Do6hoBdPV0

Fairfax India $FIH.U $FIH-U.TO is an investment HoldCo focused on Indian public and private businesses (equities and debt). Parent co is Fairfax Financial $FFH, led by Prem Watsa. His son Ben Watsa is chairman of FIH.
📷
- Giovanni Digiulio
tweet
Javier Blas
RT @SStapczynski: Will China's coal production hit another record this year? 🇨🇳🪨

China’s main coal industry body warns that imports may fall if Indonesia moves to restrict shipments

They see production hitting a record 4.86 billion tons in 2026, but could go higher if imports drop sharply https://t.co/bQ4ZqgNzFd
tweet
Michael Fritzell (Asian Century Stocks)
RT @crudechronicle: Another one bites the dust
$VAL and $RIG https://t.co/9Xaj9r24Mp
tweet