God of Prompt
wait, why hasn't anyone made a community for agents

that is only centered on scientific research and verification of it

for them to solve all of our climate, energy, poverty, health issues

shows our priorities as humans
tweet
The Transcript
$META CFO: New U.S. tax law brings 2026 cash tax savings

“We expect substantial cash tax savings from the new U.S. tax laws given the significant investments that we’re making in infrastructure and R&D.”
tweet
Bourbon Capital
Wonderful companies near 52-week lows

Palo Alto Networks $PANW
S&P Global $SPGI
Intuit $INTU
Axon Enterprise $AXON
Fair Isaac Corporation $FICO
Spotify Technology $SPOT
ServiceNow $NOW
TransDigm Group $TDG
Intuitive Surgical $ISRG
Blackstone $BX
Intercontinental Exchange $ICE
Booking Holdings $BKNG
Copart $CPRT
Synopsys $SNPS
Netflix $NFLX
Cadence Design Systems $CDNS
Microsoft Corporation $MSFT (not yet, but it’s heading there)
tweet
Photo
The Few Bets That Matter
https://t.co/45bAL41Db2

https://t.co/TVqbdhKTn4
- The Few Bets That Matter
tweet
Photo
Benjamin Hernandez😎
$USAR RARE EARTH LIVE RUSH 🌍

USA Rare Earth +16.06% $25.67 low vol—RSI 60, MACD bull. Strong buy, growth tank—rare reality? Real US insights, check it!

Grounded WhatsApp: https://t.co/71FIJId47G

Text 'USAR Real' for deets!
On my list: $BYND $NB $ASST $BMNR $OPEN https://t.co/iV7ZBYcMMX
tweet
Photo
Quiver Quantitative
Bitcoin has now fallen 36% since this letter was sent.

Down 7% today. https://t.co/KdUDFMhn8R
tweet
Video
Brady Long
I guess all my useless files won’t be useless anymore

Documentation has existed for decades. So has frustration.

Today, Trupeer has launched the world’s first truly AI documentation platform.

Some say it changes everything.
Some say that’s a dangerous claim.
You decide. https://t.co/qkZ5QYR57X
- Shivali
tweet
Photo
DAIR.AI
// Beyond RAG for Agent Memory //

RAG wasn't designed for agent memory. And it shows.

The default approach to agent memory today is still the standard RAG pipeline: embed stored memories, retrieve a fixed top-k by similarity, concatenate them into context, and generate an answer.

Every major agent memory system follows this base pattern.
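
For concreteness, here's a minimal sketch of that default pipeline. The random embedder is a stand-in and every name here is ours for illustration, not from any particular system:

```python
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    # Stand-in embedder returning random vectors; swap in any
    # sentence-embedding model for real use.
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 8))

def naive_rag_retrieve(query: str, memories: list[str], k: int = 5) -> list[str]:
    # The default pipeline: embed stored memories, rank them by cosine
    # similarity to the query, and return a fixed top-k to concatenate
    # into the model's context.
    vecs, q = embed(memories), embed([query])[0]
    sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    top = np.argsort(-sims)[:k]  # fixed top-k, no redundancy control
    return [memories[i] for i in top]
```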

But agent memory is fundamentally different from a document corpus.

It's a bounded, coherent dialogue stream where candidate spans are highly correlated and often near duplicates.

Fixed top-k similarity retrieval collapses into a single dense region, returning redundant evidence. And post-hoc pruning breaks temporally linked evidence chains rather than removing redundancy.
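
To see the failure mode, feed the sketch above a memory stream with near-duplicates:

```python
memories = [
    "User's dog is named Rex.",
    "The user mentioned their dog Rex again.",
    "Rex, the user's dog, came up in chat.",
    "User is allergic to peanuts.",
    "User moved to Berlin in March.",
]
print(naive_rag_retrieve("What is the user's dog called?", memories, k=3))
# With a real embedder the three Rex variants sit in one dense region and
# typically crowd out everything else in the top-k; pruning one of them
# afterwards can sever the temporal chain linking the other two.
```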

This new research introduces xMemory, a hierarchical retrieval framework that replaces similarity matching with structured component-level selection.

Agent memory needs redundancy control without fragmenting evidence chains. Structured retrieval over semantic components achieves both, consistently outperforming standard RAG and pruning approaches across multiple LLM backbones.

The key idea: It decouples memories into semantic components, organizes them into a four-level hierarchy (original messages, episodes, semantics, themes), and uses this structure to drive retrieval top-down.

A sparsity-semantics objective guides split and merge operations to keep the high-level organization both searchable and semantically faithful.
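
The hierarchy itself might be modeled roughly like this; the class and field names are illustrative, not the paper's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Message:        # level 1: raw dialogue turns
    text: str

@dataclass
class Episode:        # level 2: temporally linked spans of messages
    messages: list[Message] = field(default_factory=list)

@dataclass
class Semantic:       # level 3: decoupled semantic components
    summary: str
    episodes: list[Episode] = field(default_factory=list)

@dataclass
class Theme:          # level 4: top-level topics that drive retrieval
    label: str
    semantics: list[Semantic] = field(default_factory=list)
```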

At retrieval time, xMemory selects a compact, diverse set of relevant themes and semantics first, then expands to episodes and raw messages only when doing so measurably reduces the reader's uncertainty.
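
Put together, the retrieval loop might look like the sketch below, reusing the toy classes above. `reader_uncertainty` is a hypothetical callable standing in for however the paper actually scores the reader's residual uncertainty (e.g., answer-token entropy), and the 0.8 redundancy threshold is invented:

```python
def jaccard(a: set, b: set) -> float:
    return len(a & b) / max(len(a | b), 1)

def overlap(query: str, text: str) -> float:
    # crude lexical relevance; a real system would score with embeddings
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / max(len(q), 1)

def retrieve_top_down(query, themes, reader_uncertainty, budget=5000):
    # 1) select a compact, diverse set of relevant themes/semantics first
    scored = sorted(((s, overlap(query, s.summary))
                     for t in themes for s in t.semantics),
                    key=lambda p: -p[1])
    picked, seen = [], []
    for sem, _ in scored:
        words = set(sem.summary.lower().split())
        if all(jaccard(words, w) < 0.8 for w in seen):  # redundancy control
            picked.append(sem)
            seen.append(words)
    context = [s.summary for s in picked]

    # 2) expand to episodes and raw messages only when doing so measurably
    #    reduces the reader's uncertainty within the token budget
    u_best = reader_uncertainty(query, context)
    for sem in picked:
        for ep in sem.episodes:
            cand = context + [m.text for m in ep.messages]
            u = reader_uncertainty(query, cand)
            if u < u_best and sum(len(c.split()) for c in cand) <= budget:
                context, u_best = cand, u
    return context
```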

On LoCoMo with Qwen3-8B, xMemory achieves 34.48 BLEU and 43.98 F1 while using only 4,711 tokens per query, compared to the next best baseline Nemori at 28.51 BLEU and 40.45 F1 with 7,755 tokens. With GPT-5 nano, it reaches 38.71 BLEU and 50.00 F1, improving over Nemori while cutting token usage from 9,155 to 6,581.

xMemory retrieves contexts that cover all answer tokens in 5.66 blocks and 975 tokens, versus 10.81 blocks and 1,979 tokens for naive RAG. Higher accuracy, half the tokens.

Paper: https://t.co/UI5aS0C40V

Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
tweet
God of Prompt
RT @godofprompt: codex is used by all top engineers

codex hallucinates less and is more reliable than claude code

at least for now

Introducing the Codex app—a powerful command center for building with agents.

Now available on macOS.

https://t.co/HW05s2C9Nr
- OpenAI
tweet
Photo
Dimitry Nakhla | Babylon Capital®
RT @DimitryNakhla: It may have been easy to gloss over management’s comments on pricing power in Mastercard’s latest report.

$MA now derives nearly half (~44%) of total revenue from Value-Added Services & Solutions — a substantial amount of non-payment network revenue for a company still often viewed as a “payments” business.

CFO Sachin Mehra noted VAS growth was:

“Primarily driven by strong demand across digital & authentication, security solutions, consumer acquisition & engagement, and business & market insights — 𝙖𝙨 𝙬𝙚𝙡𝙡 𝙖𝙨 𝙥𝙧𝙞𝙘𝙞𝙣𝙜.”

Mehra further emphasized pricing is a function of delivering incremental customer value via new products, enhancements, and solution expansions — and then embedding that value into forecasts.

Notably, VAS is $MA’s fastest-growing segment, with 𝙛𝙤𝙪𝙧 𝙘𝙤𝙣𝙨𝙚𝙘𝙪𝙩𝙞𝙫𝙚 𝙦𝙪𝙖𝙧𝙩𝙚𝙧𝙨 𝙤𝙛 𝙔𝙤𝙔 𝙖𝙘𝙘𝙚𝙡𝙚𝙧𝙖𝙩𝙞𝙤𝙣.

𝘐𝘯𝘷𝘦𝘴𝘵𝘰𝘳𝘴 𝘤𝘰𝘯𝘵𝘪𝘯𝘶𝘦 𝘵𝘰 𝘶𝘯𝘥𝘦𝘳𝘦𝘴𝘵𝘪𝘮𝘢𝘵𝘦 𝘵𝘩𝘦 𝘭𝘢𝘵𝘦𝘯𝘤𝘺 𝘢𝘯𝘥 𝘥𝘶𝘳𝘢𝘣𝘪𝘭𝘪𝘵𝘺 𝘰𝘧 𝘱𝘳𝘪𝘤𝘪𝘯𝘨 𝘱𝘰𝘸𝘦𝘳 𝘦𝘮𝘣𝘦𝘥𝘥𝘦𝘥 𝘪𝘯𝘴𝘪𝘥𝘦 𝘔𝘢𝘴𝘵𝘦𝘳𝘤𝘢𝘳𝘥’𝘴 𝘦𝘹𝘱𝘢𝘯𝘥𝘪𝘯𝘨 𝘝𝘈𝘚 𝘱𝘭𝘢𝘵𝘧𝘰𝘳𝘮.
tweet
God of Prompt
good prompt

This prompt is your AI coding debug agent (it fixes your issues without breaking everything else).

It isolates bugs, determines root cause vs symptom, and updates LESSONS (.md) so your build agent doesn’t make the same mistake.

Part 4. Parts 1–3 in the thread below.

Prompt:

[describe your bug + attach references]

Then paste this into your agent below.

Note: I recommend parts 1–3 prior to this.

<role>
You are a senior debugging engineer. You do not build features. You do not refactor. You do not "improve" things. You find exactly what's broken, fix exactly that, and leave everything else untouched. You treat working code as sacred. Your only job is to make the broken thing work again without creating new problems.
</role>

<debug_startup>
Read these before touching anything. No exceptions.

1. progress (.txt) — what was built recently and what state the project is in
2. LESSONS (.md) — has this mistake happened before? Is there already a rule for it?
3. TECH_STACK (.md) — exact versions, dependencies, and constraints
4. FRONTEND_GUIDELINES (.md) — component architecture and engineering rules
5. BACKEND_STRUCTURE (.md) — database schema, API contracts, auth logic
6. DESIGN_SYSTEM (.md) — visual tokens and design constraints

Do not read the full IMPLEMENTATION_PLAN (.md) or PRD (.md) unless the bug requires feature-level context. Stay scoped. You are not here to understand the whole app. You are here to understand the broken part.
</debug_startup>

<debug_protocol>
## Step 1: Reproduce First
- Do not theorize. Reproduce the bug first.
- Run the exact steps the user describes
- Confirm: "I can reproduce this. Here's what I see: [observed behavior]"
- If you cannot reproduce it, say so immediately. Ask for environment details, exact steps, or logs.
- No fix attempt begins until reproduction is confirmed

## Step 2: Research the Blast Radius
- Before proposing any fix, research and understand every part of the codebase related to the bug
- Use subagents to investigate connected files, imports, dependencies, and data flow
- Read error logs, stack traces, and console output — the evidence comes first
- Map every file and function involved in the broken behavior
- List: "These files are involved: [list]. These systems are connected: [list]"
- Anything not on the list does not get touched

## Step 3: Present Findings Before Fixing
- After research, present your findings to the user BEFORE implementing any fix
- Structure your report:

DEBUG FINDINGS:
- Bug: [what's broken, observed vs expected behavior]
- Location: [exact files and lines involved]
- Connected systems: [what else touches this code]
- Evidence: [logs, errors, traces that confirm the issue]
- Probable cause: [what you believe is causing it and why]

Do not skip this step. Do not jump to fixing. The user needs to see your reasoning before you act on it.

## Step 4: Root Cause or Symptom?
- After presenting findings, ask yourself this question explicitly:
- "Am I solving a ROOT problem in the architecture, or am I treating a SYMPTOM caused by a deeper issue?"
- State your answer clearly to the user:

ROOT CAUSE ANALYSIS:
- Classification: [ROOT CAUSE / SYMPTOM]
- If root cause: "Fixing this will resolve the bug and prevent related issues because [reasoning]"
- If symptom: "This fix would treat the visible problem, but the actual root cause is [deeper issue]. Fixing only the symptom means [what will happen]. I recommend we fix [root cause] instead."

- If you initially identified a symptom, go back to Step 2. Research the root cause. Do not implement a symptom fix unless the user explicitly approves it as a temporary measure.
- When uncertain, say so: "I'm not 100% sure this is the root cause. Here's why: [reasoning]. I can investigate further or we can try this fix and monitor."

## Step 5: Propose the Fix
- Present the exact fix before implementing:

PROPOSED FIX:
- Files to modify: [list with specific changes]
- Files NOT being[...]