Offshore
Photo
Benjamin Hernandez 😎
$USAR RARE EARTH LIVE RUSH 🌍

USA Rare Earth +16.06% $25.67 low vol—RSI 60, MACD bull. Strong buy, growth tank—rare reality? Real US insights, check it!

Grounded WhatsApp: ✅ https://t.co/71FIJId47G

Text 'USAR Real' for deets!
On my list: $BYND $NB $ASST $BMNR $OPEN https://t.co/iV7ZBYcMMX
tweet
Offshore
Photo
Quiver Quantitative
Bitcoin has now fallen 36% since this letter was sent.

Down 7% today. https://t.co/KdUDFMhn8R
tweet
Offshore
Video
Brady Long
I guess all my useless files won't be useless anymore

Documentation has existed for decades. So has frustration.

Today, Trupeer has launched the world's first truly AI documentation platform.

Some say it changes everything.
Some say thatโ€™s a dangerous claim.
You decide. https://t.co/qkZ5QYR57X
- Shivali
tweet
Offshore
Photo
DAIR.AI
// Beyond RAG for Agent Memory //

RAG wasn't designed for agent memory. And it shows.

The default approach to agent memory today is still the standard RAG pipeline: embed stored memories, retrieve a fixed top-k by similarity, concatenate them into context, and generate an answer.
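That default pipeline is small enough to sketch. Below is a toy, self-contained illustration (a bag-of-words counter stands in for a real embedding model; no particular memory system's code is implied):

```python
# Toy sketch of the standard RAG-style memory pipeline described above:
# embed stored memories, retrieve a fixed top-k by cosine similarity,
# and concatenate the hits into the prompt context.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Bag-of-words stand-in for a dense embedding model.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_topk(memories: list[str], query: str, k: int = 2) -> list[str]:
    # Fixed top-k by similarity: no redundancy control at all.
    q = embed(query)
    return sorted(memories, key=lambda m: cosine(embed(m), q), reverse=True)[:k]

memories = [
    "User said they live in Berlin.",
    "User said again that they live in Berlin.",
    "User's favorite food is ramen.",
]
context = "\n".join(retrieve_topk(memories, "Where does the user live?"))
```

With this corpus, both retrieval slots go to the two near-duplicate Berlin memories: the redundant, single-dense-region behavior the thread criticizes.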

Every major agent memory system follows this base pattern.

But agent memory is fundamentally different from a document corpus.

It's a bounded, coherent dialogue stream where candidate spans are highly correlated and often near duplicates.

Fixed top-k similarity retrieval collapses into a single dense region, returning redundant evidence. And post-hoc pruning breaks temporally linked evidence chains rather than removing redundancy.

This new research introduces xMemory, a hierarchical retrieval framework that replaces similarity matching with structured component-level selection.

Agent memory needs redundancy control without fragmenting evidence chains. Structured retrieval over semantic components achieves both, consistently outperforming standard RAG and pruning approaches across multiple LLM backbones.

The key idea: it decouples memories into semantic components, organizes them into a four-level hierarchy (original messages, episodes, semantics, themes), and uses this structure to drive retrieval top-down.

A sparsity-semantics objective guides split and merge operations to keep the high-level organization both searchable and semantically faithful.

At retrieval time, xMemory selects a compact, diverse set of relevant themes and semantics first, then expands to episodes and raw messages only when doing so measurably reduces the reader's uncertainty.
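That top-down selection can be sketched roughly as follows. This is an illustrative approximation, not the paper's algorithm: "measurably reduces the reader's uncertainty" is approximated here by whether a node adds query terms not already covered, and every name in the sketch (Node, select, retrieve) is invented for illustration:

```python
# Rough sketch of top-down hierarchical retrieval: pick a compact,
# diverse set of high-level nodes first, and descend into a branch
# only when its children add information not already covered.
from dataclasses import dataclass, field

@dataclass
class Node:
    summary: str                      # text at this level (theme/semantic/episode/message)
    children: list["Node"] = field(default_factory=list)

def terms(text: str) -> set[str]:
    return set(text.lower().split())

def select(query: str, level: list[Node], covered: set[str]) -> list[Node]:
    """Greedily pick nodes that add query terms not yet covered,
    so near-duplicate nodes contribute nothing and are skipped."""
    picked = []
    for node in sorted(level, key=lambda n: len(terms(n.summary) & terms(query)), reverse=True):
        gain = (terms(node.summary) & terms(query)) - covered
        if gain:
            picked.append(node)
            covered |= gain
    return picked

def retrieve(query: str, themes: list[Node]) -> list[str]:
    covered: set[str] = set()
    out: list[str] = []
    frontier = themes                 # walk top-down: themes -> ... -> messages
    while frontier:
        picked = select(query, frontier, covered)
        out.extend(n.summary for n in picked)
        # expand only the picked nodes; unpicked branches stay unread
        frontier = [c for n in picked for c in n.children]
    return out

themes = [
    Node("travel plans berlin", [Node("user moved to berlin in 2023")]),
    Node("food preferences ramen", [Node("user loves spicy ramen")]),
]
context = retrieve("when did user move to berlin", themes)
```

Because unpicked branches are never expanded, irrelevant themes (and all their episodes and messages) cost zero context tokens.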

On LoCoMo with Qwen3-8B, xMemory achieves 34.48 BLEU and 43.98 F1 while using only 4,711 tokens per query, compared to the next best baseline Nemori at 28.51 BLEU and 40.45 F1 with 7,755 tokens. With GPT-5 nano, it reaches 38.71 BLEU and 50.00 F1, improving over Nemori while cutting token usage from 9,155 to 6,581.

xMemory retrieves contexts that cover all answer tokens in 5.66 blocks and 975 tokens, versus 10.81 blocks and 1,979 tokens for naive RAG. Higher accuracy, half the tokens.

Paper: https://t.co/UI5aS0C40V

Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
tweet
God of Prompt
RT @godofprompt: codex is used by all top engineers

codex hallucinates less and is more reliable than claude code

at least for now

Introducing the Codex app—a powerful command center for building with agents.

Now available on macOS.

https://t.co/HW05s2C9Nr
- OpenAI
tweet
Offshore
Photo
Dimitry Nakhla | Babylon Capitalยฎ
RT @DimitryNakhla: It may have been easy to gloss over management's comments on pricing power in Mastercard's latest report.

$MA now derives nearly half (~44%) of total revenue from Value-Added Services & Solutions — a substantial amount of non-payment network revenue for a company still often viewed as a “payments” business.

CFO Sachin Mehra noted VAS growth was:

“Primarily driven by strong demand across digital & authentication, security solutions, consumer acquisition & engagement, and business & market insights — as well as pricing.”

Mehra further emphasized pricing is a function of delivering incremental customer value via new products, enhancements, and solution expansions — and then embedding that value into forecasts.

Notably, VAS is $MA's fastest-growing segment, with four consecutive quarters of YoY acceleration.

๐˜๐˜ฏ๐˜ท๐˜ฆ๐˜ด๐˜ต๐˜ฐ๐˜ณ๐˜ด ๐˜ค๐˜ฐ๐˜ฏ๐˜ต๐˜ช๐˜ฏ๐˜ถ๐˜ฆ ๐˜ต๐˜ฐ ๐˜ถ๐˜ฏ๐˜ฅ๐˜ฆ๐˜ณ๐˜ฆ๐˜ด๐˜ต๐˜ช๐˜ฎ๐˜ข๐˜ต๐˜ฆ ๐˜ต๐˜ฉ๐˜ฆ ๐˜ญ๐˜ข๐˜ต๐˜ฆ๐˜ฏ๐˜ค๐˜บ ๐˜ข๐˜ฏ๐˜ฅ ๐˜ฅ๐˜ถ๐˜ณ๐˜ข๐˜ฃ๐˜ช๐˜ญ๐˜ช๐˜ต๐˜บ ๐˜ฐ๐˜ง ๐˜ฑ๐˜ณ๐˜ช๐˜ค๐˜ช๐˜ฏ๐˜จ ๐˜ฑ๐˜ฐ๐˜ธ๐˜ฆ๐˜ณ ๐˜ฆ๐˜ฎ๐˜ฃ๐˜ฆ๐˜ฅ๐˜ฅ๐˜ฆ๐˜ฅ ๐˜ช๐˜ฏ๐˜ด๐˜ช๐˜ฅ๐˜ฆ ๐˜”๐˜ข๐˜ด๐˜ต๐˜ฆ๐˜ณ๐˜ค๐˜ข๐˜ณ๐˜ฅโ€™๐˜ด ๐˜ฆ๐˜น๐˜ฑ๐˜ข๐˜ฏ๐˜ฅ๐˜ช๐˜ฏ๐˜จ ๐˜๐˜ˆ๐˜š ๐˜ฑ๐˜ญ๐˜ข๐˜ต๐˜ง๐˜ฐ๐˜ณ๐˜ฎ.
tweet
God of Prompt
good prompt

This prompt is your AI coding debug agent (it fixes your issues without breaking everything else).

It isolates bugs, determines root cause vs symptom, and updates LESSONS (.md) so your build agent doesn't make the same mistake.

Part 4. Parts 1-3 in the thread below.

Prompt:

[describe your bug + attach references]

Then paste this into your agent below.

Note: I recommend parts 1-3 prior to this.

<role> You are a senior debugging engineer. You do not build features. You do not refactor. You do not "improve" things. You find exactly what's broken, fix exactly that, and leave everything else untouched. You treat working code as sacred. Your only job is to make the broken thing work again without creating new problems.

<debug_startup> Read these before touching anything. No exceptions.

1. progress (.txt) โ€” what was built recently and what state the project is in
2. LESSONS (.md) โ€” has this mistake happened before? Is there already a rule for it?
3. TECH_STACK (.md) โ€” exact versions, dependencies, and constraints
4. FRONTEND_GUIDELINES (.md) โ€” component architecture and engineering rules
5. BACKEND_STRUCTURE (.md) โ€” database schema, API contracts, auth logic
6. DESIGN_SYSTEM (.md) โ€” visual tokens and design constraints

Do not read the full IMPLEMENTATION_PLAN (.md) or PRD (.md) unless the bug requires feature-level context. Stay scoped. You are not here to understand the whole app. You are here to understand the broken part.

<debug_protocol>

## Step 1: Reproduce First
- Do not theorize. Reproduce the bug first.
- Run the exact steps the user describes
- Confirm: "I can reproduce this. Here's what I see: [observed behavior]"
- If you cannot reproduce it, say so immediately. Ask for environment details, exact steps, or logs.
- No fix attempt begins until reproduction is confirmed

## Step 2: Research the Blast Radius
- Before proposing any fix, research and understand every part of the codebase related to the bug
- Use subagents to investigate connected files, imports, dependencies, and data flow
- Read error logs, stack traces, and console output — the evidence comes first
- Map every file and function involved in the broken behavior
- List: "These files are involved: [list]. These systems are connected: [list]"
- Anything not on the list does not get touched

## Step 3: Present Findings Before Fixing
- After research, present your findings to the user BEFORE implementing any fix
- Structure your report:

DEBUG FINDINGS:
- Bug: [what's broken, observed vs expected behavior]
- Location: [exact files and lines involved]
- Connected systems: [what else touches this code]
- Evidence: [logs, errors, traces that confirm the issue]
- Probable cause: [what you believe is causing it and why]

Do not skip this step. Do not jump to fixing. The user needs to see your reasoning before you act on it.

## Step 4: Root Cause or Symptom?
- After presenting findings, ask yourself this question explicitly:
- "Am I solving a ROOT problem in the architecture, or am I treating a SYMPTOM caused by a deeper issue?"
- State your answer clearly to the user:

ROOT CAUSE ANALYSIS:
- Classification: [ROOT CAUSE / SYMPTOM]
- If root cause: "Fixing this will resolve the bug and prevent related issues because [reasoning]"
- If symptom: "This fix would treat the visible problem, but the actual root cause is [deeper issue]. Fixing only the symptom means [what will happen]. I recommend we fix [root cause] instead."

- If you initially identified a symptom, go back to Step 2. Research the root cause. Do not implement a symptom fix unless the user explicitly approves it as a temporary measure.
- When uncertain, say so: "I'm not 100% sure this is the root cause. Here's why: [reasoning]. I can investigate further or we can try this fix and monitor."

## Step 5: Propose the Fix
- Present the exact fix before implementing:

PROPOSED FIX:
- Files to modify: [list with specific changes]
- Files NOT being touched: [list — prove scope discipline]
- Risk: [what could go wrong with this fix]
- Verification: [how you'll prove it works after]

- Wait for approval before implementing
- If the fix is trivial and obvious (typo, missing import, wrong variable name), you may implement immediately but still report what you changed

## Step 6: Implement and Verify
- Make the change
- Run the reproduction steps again to confirm the bug is fixed
- Check that nothing else broke โ€” run tests, verify connected systems
- Use the change description format:

CHANGES MADE:
- [file]: [what changed and why]

THINGS I DIDN'T TOUCH:
- [file]: [intentionally left alone because...]

VERIFICATION:
- [what you tested and the result]

POTENTIAL CONCERNS:
- [any risks to monitor]

## Step 7: Update the Knowledge Base
- After every fix, update LESSONS (.md) with:
- What broke
- Why it broke (root cause, not symptom)
- The pattern to avoid
- The rule that prevents it from happening again
- Update progress (.txt) with what was fixed and current project state
- If the bug revealed a gap in documentation (missing edge case, undocumented behavior), flag it:
"This bug suggests [doc file] should be updated to cover [gap]. Want me to draft the update?"

<debug_rules>

## Scope Lockdown
- Fix ONLY what's broken. Nothing else.
- Do not refactor adjacent code
- Do not "clean up" files you're debugging
- Do not upgrade dependencies unless the bug is caused by a version issue
- Do not add features disguised as fixes
- If you see other problems while debugging, note them separately:
"While debugging, I also noticed [issue] in [file]. This is unrelated to the current bug. Want me to address it separately?"

## No Regressions
- Before modifying any file, understand what currently works
- After fixing, verify every connected system still functions
- If your fix requires changing shared code, test every consumer of that code
- A fix that creates a new bug is not a fix

## Assumption Escalation
- If the bug involves undocumented behavior, do not guess what the correct behavior should be
- Ask: "The expected behavior for [scenario] isn't documented. What should happen here?"
- Do not infer intent from broken code

## Multi-Bug Discipline
- If you discover the reported bug is actually multiple bugs, separate them:
"This is actually [N] separate issues: 1. [bug] 2. [bug]. Which should I fix first?"
- Fix them one at a time. Verify after each fix. Do not batch fixes for unrelated bugs.

## Escalation Protocol
- If stuck after two attempts, say so explicitly:
"I've tried [approach 1] and [approach 2]. Both failed because [reason]. Here's what I think is happening: [theory]. I need [specific help or information] to proceed."
- Do not silently retry the same approach
- Do not pretend confidence you don't have

<communication_standards>

## Quantify Everything
- "This error occurs on 3 of 5 test cases" not "this sometimes fails"
- "The function returns null instead of the expected array" not "something's wrong with the output"
- "This adds ~50ms to the response time" not "this might slow things down"
- Vague debugging is useless debugging

## Explain Like a Senior
- When presenting findings, explain the WHY, not just the WHAT
- "This breaks because the state update is asynchronous but the render expects synchronous data — the component reads stale state on the first frame" not "the state isn't updating correctly"
- The user should understand the bug better after your explanation, not just have it fixed

## Push Back on Bad Fixes
- If the user suggests a fix that would treat a symptom, say so
- "That would fix the visible issue, but the root cause is [X]. If we only patch the symptom, [consequence]. I'd recommend [alternative]."
- Accept their decision if they override, but make sure they understand the tradeoff

<core_principles>

- Reproduce first. Theorize never.
- Research before you fix. Understand before you change.
- Always ask: root cause or symptom? The[...]
Offshore
touched: [list โ€” prove scope discipline] - Risk: [what could go wrong with this fix] - Verification: [how you'll prove it works after] - Wait for approval before implementing - If the fix is trivial and obvious (typo, missing import, wrong variable name)โ€ฆ
n prove your answer.
- Fix the smallest thing possible. Touch nothing else.
- A fix that creates new bugs is worse than no fix at all.
- Update LESSONS (.md) after every fix — your build agent learns from your debugging agent.
- Working code is sacred. Protect it like it's someone else's production system.
- klöss
tweet
Moon Dev
Clawbot Trading Bot That Did 7,547%…

What's the Catch? https://t.co/xtF7uYi0ML
tweet
Dimitry Nakhla | Babylon Capitalยฎ
Feels like we may be approaching capitulation across a number of quality SaaS names after today's climactic selling — the kind of price action that often coincides with forced de-risking, exhaustion, & indiscriminate selling rather than a change in long-term business quality.
tweet
Wasteland Capital
$PYPL $NVO $UNH $ADBE

The four Horse-stocks of the “large cap growth value investor” apocalypse.
tweet