Offshore
Video
God of Prompt
in 10 years we'll all be employed by agents
I launched https://t.co/tNYOm7V5wD last night and already 130+ people have signed up, including an OF model (lmao) and the CEO of an AI startup.
If your AI agent wants to rent a person to do an IRL task for them, it's as simple as one MCP call. https://t.co/tgqlAWDWtJ - Alex
tweet
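For context, an MCP tool invocation is a single JSON-RPC request. A minimal sketch of what such a call could look like, assuming a hypothetical rent_person tool; the tweet does not show the service's actual tool name or schema, so everything below the method field is invented for illustration:

import json

# Hypothetical MCP "tools/call" request (MCP uses JSON-RPC 2.0).
# The tool name "rent_person" and its arguments are invented;
# only the request envelope follows the real MCP convention.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "rent_person",
        "arguments": {
            "task": "pick up and deliver a signed document",
            "city": "San Francisco",
            "budget_usd": 50,
        },
    },
}
print(json.dumps(request, indent=2))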
Offshore
Photo
Dimitry Nakhla | Babylon Capital®
One of the more interesting things about $FICO is that, for long-term shareholders, a lower short-term stock price can be beneficial in the long run.
How?
Because $FICO is a cannibal.
Shares outstanding down ~22% over the last 10 years.
In the last 12 months alone, $FICO spent ~$1.54B on buybacks.
That reduced share count by roughly 0.16M shares (160K).
Now let's look at why price matters:
Assume $FICO allocates another $1.54B to repurchases.
___
Scenario 1:
Market cap = $31.65B
$1.54B / $31.65B ≈ 4.86% of the company → 4.86% boost to EPS
Scenario 2:
Market cap = ~$47.47B
$1.54B / $47.47B ≈ 3.24% of the company → 3.24% boost to EPS
___
Same dollars. Different outcome.
The lower the price, the more ownership each repurchase dollar retires, and the more future earnings power accrues to remaining shareholders.
This is why, for high-quality cannibals, periods of price weakness quietly enhance long-term compounding.
Buybacks don't just return capital. They reallocate ownership toward patient shareholders, most effectively when prices are depressed.
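A minimal sketch of that arithmetic, assuming the full $1.54B is spent in one shot at the quoted market cap. Note the exact EPS lift for a retired fraction x is x / (1 - x), slightly above the fraction itself, which the tweet uses as an approximation:

# Buyback math from the tweet, simplified: spend the whole amount at a
# single market cap; ignore timing, fees, and price impact.
def buyback_effect(buyback_usd: float, market_cap_usd: float):
    retired = buyback_usd / market_cap_usd    # fraction of shares retired
    eps_boost = retired / (1 - retired)       # exact lift from fewer shares
    return retired, eps_boost

for cap in (31.65e9, 47.47e9):
    retired, boost = buyback_effect(1.54e9, cap)
    print(f"market cap ${cap / 1e9:.2f}B -> retires {retired:.2%}, EPS +{boost:.2%}")
# market cap $31.65B -> retires 4.87%, EPS +5.11%
# market cap $47.47B -> retires 3.24%, EPS +3.35%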
tweet
Bourbon Capital
Wonderful companies near 52-week lows
Palo Alto Networks $PANW
S&P Global $SPGI
Intuit $INTU
Axon Enterprise $AXON
Fair Isaac Corporation $FICO
Spotify Technology $SPOT
ServiceNow $NOW
TransDigm Group $TDG
Intuitive Surgical $ISRG
Blackstone $BX
Intercontinental Exchange $ICE
Booking Holdings $BKNG
Copart $CPRT
Synopsys $SNPS
Netflix $NFLX
Cadence Design Systems $CDNS
Microsoft Corporation $MSFT Not yet, but it's heading there.
tweet
Offshore
Photo
The Few Bets That Matter
https://t.co/45bAL41Db2
https://t.co/TVqbdhKTn4 - The Few Bets That Matter
tweet
Offshore
Photo
Benjamin Hernandez
$USAR RARE EARTH LIVE RUSH
USA Rare Earth +16.06% $25.67 low vol; RSI 60, MACD bull. Strong buy, growth tank, rare reality? Real US insights, check it!
Grounded WhatsApp: https://t.co/71FIJId47G
Text 'USAR Real' for deets!
On my list: $BYND $NB $ASST $BMNR $OPEN https://t.co/iV7ZBYcMMX
tweet
Offshore
Photo
Quiver Quantitative
Bitcoin has now fallen 36% since this letter was sent.
Down 7% today. https://t.co/KdUDFMhn8R
tweet
Offshore
Video
Brady Long
I guess all my useless files won't be useless anymore
Documentation has existed for decades. So has frustration.
Today, Trupeer has launched the world's first truly AI documentation platform.
Some say it changes everything.
Some say that's a dangerous claim.
You decide. https://t.co/qkZ5QYR57X - Shivali
tweet
Offshore
Photo
DAIR.AI
// Beyond RAG for Agent Memory //
RAG wasn't designed for agent memory. And it shows.
The default approach to agent memory today is still the standard RAG pipeline: embed stored memories, retrieve a fixed top-k by similarity, concatenate them into context, and generate an answer.
Every major agent memory system follows this base pattern.
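That baseline, as a minimal sketch; embed() and llm() stand in for any embedding model and generator, and nothing here is specific to one system:

import numpy as np

def rag_retrieve(query_vec: np.ndarray, memory_vecs: np.ndarray,
                 memories: list[str], k: int = 5) -> list[str]:
    # Fixed top-k cosine similarity: the step the post argues collapses
    # into one dense region of near-duplicate memories.
    sims = memory_vecs @ query_vec / (
        np.linalg.norm(memory_vecs, axis=1) * np.linalg.norm(query_vec))
    return [memories[i] for i in np.argsort(-sims)[:k]]

# Typical use, with embed() and llm() assumed:
#   context = "\n".join(rag_retrieve(embed(q), vecs, mems))
#   answer = llm(f"{context}\n\nQuestion: {q}")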
But agent memory is fundamentally different from a document corpus.
It's a bounded, coherent dialogue stream where candidate spans are highly correlated and often near duplicates.
Fixed top-k similarity retrieval collapses into a single dense region, returning redundant evidence. And post-hoc pruning breaks temporally linked evidence chains rather than removing redundancy.
This new research introduces xMemory, a hierarchical retrieval framework that replaces similarity matching with structured component-level selection.
Agent memory needs redundancy control without fragmenting evidence chains. Structured retrieval over semantic components achieves both, consistently outperforming standard RAG and pruning approaches across multiple LLM backbones.
The key idea: it decouples memories into semantic components, organizes them into a four-level hierarchy (original messages, episodes, semantics, themes), and uses this structure to drive retrieval top-down.
A sparsity-semantics objective guides split and merge operations to keep the high-level organization both searchable and semantically faithful.
At retrieval time, xMemory selects a compact, diverse set of relevant themes and semantics first, then expands to episodes and raw messages only when doing so measurably reduces the reader's uncertainty.
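A toy sketch of that top-down loop, with invented Node, relevance, and uncertainty stand-ins; the paper's actual objective, diversity term, and interfaces are not reproduced here:

from dataclasses import dataclass, field

@dataclass
class Node:                       # one node in the 4-level hierarchy:
    text: str                     # theme -> semantics -> episode -> message
    children: list["Node"] = field(default_factory=list)

def retrieve(themes, relevance, uncertainty, budget=5):
    # Select a compact set of high-level nodes first (the paper also
    # enforces diversity among them; omitted in this toy version)...
    picked = sorted(themes, key=relevance, reverse=True)[:budget]
    context = [n.text for n in picked]
    # ...then expand downward only when a child measurably reduces the
    # reader's uncertainty -- a stand-in for the paper's criterion.
    def expand(node):
        for child in node.children:
            if uncertainty(context + [child.text]) < uncertainty(context):
                context.append(child.text)
                expand(child)
    for node in picked:
        expand(node)
    return context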
On LoCoMo with Qwen3-8B, xMemory achieves 34.48 BLEU and 43.98 F1 while using only 4,711 tokens per query, compared to the next best baseline Nemori at 28.51 BLEU and 40.45 F1 with 7,755 tokens. With GPT-5 nano, it reaches 38.71 BLEU and 50.00 F1, improving over Nemori while cutting token usage from 9,155 to 6,581.
xMemory retrieves contexts that cover all answer tokens in 5.66 blocks and 975 tokens, versus 10.81 blocks and 1,979 tokens for naive RAG. Higher accuracy, half the tokens.
Paper: https://t.co/UI5aS0C40V
Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
tweet
God of Prompt
RT @godofprompt: codex is used by all top engineers
codex hallucinates less and is more reliable than claude code
at least for now
Introducing the Codex app: a powerful command center for building with agents.
Now available on macOS.
https://t.co/HW05s2C9Nr - OpenAI
tweet
X (formerly Twitter)
OpenAI (@OpenAI) on X
Introducing the Codex app: a powerful command center for building with agents.
Now available on macOS.
https://t.co/HW05s2C9Nr