Offshore
Video
Moon Dev
Don't Buy a Mac Mini for Clawdbot: The Secret $10,000 Architecture That Costs You Nothing
Clawdbot might be the reason you feel like you need a ten-thousand-dollar computer right now, but I'm about to show you why that FOMO is going to leave you broke. If you've been watching everyone rush out to buy Mac minis and Mac Studios just to run OpenClaw or some local models, you're witnessing a massive transfer of wealth from your pocket to Apple for no reason.
There's a specific setup I use that costs almost nothing and keeps my main machine safe from whatever these autonomous agents are doing. Stick with me and I'll walk you through the exact architecture of a professional trading system that handles the heavy lifting without you needing to drop a single rack on hardware.
Most people are scared of running these bots on their main computer because they don't want an agent messing with their personal files or browser sessions. Instead of buying a second Mac mini for six hundred dollars, you can just go to the top left of your screen and create a brand-new user profile.
That profile acts like a completely isolated sandbox where you can install all your trading tools and agents without them ever seeing your main data. It's essentially a free computer for the price of five minutes of clicking around your settings.
But what if you aren't on a Mac, or you need to access your system while traveling without carrying three laptops in your backpack? This is where the first loop of professional automation starts to close, because I use something called Chrome Remote Desktop to bridge the gap.
It lets me leave a dedicated machine running in a safe place while I access the full desktop environment from a tablet or a cheap laptop anywhere in the world. That solves the mobility issue, but it still doesn't solve the problem of those massive ten-thousand-dollar price tags for high-end Mac Pros.
If you're a PC user, or just someone who doesn't want to own physical hardware yet, look into a Windows VPS through a provider like Contabo. Most developers will tell you to use a Linux terminal, but if you aren't a coder yet, you need a visual interface you can actually see.
A Windows server lets you log in and see a desktop just like your home computer for about fifteen dollars a month. I usually recommend at least twelve gigabytes of RAM to keep things from getting janky when you're running multiple browser windows and agents at once.
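If you want to sanity-check whatever box you end up on, a quick script like the one below will tell you whether you're under that line. This is my own illustration; the video only gives the 12 GB rule of thumb, and psutil is just one convenient way to read memory.

```python
# Quick sanity check for the VPS: is there enough RAM for several
# browser windows plus agents? Illustrative only; the 12 GB figure is
# the video's rule of thumb, not a hard requirement.
import psutil

total_gb = psutil.virtual_memory().total / (1024 ** 3)
print(f"Total RAM: {total_gb:.1f} GB")
if total_gb < 12:
    print("Below the ~12 GB recommendation -- expect things to get janky.")
```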
Now you might be thinking that the whole point of the big hardware was to run local models like Kimi or GLM to save on API costs. I spent years thinking I had to own the machines myself, and I even spent hundreds of thousands on developers before I realized I could just do this myself.
The secret to running those massive open-source models without the ten-thousand-dollar investment is renting GPU power by the hour. Sites like Lambda Labs let you spin up a monster machine that can run any model in existence for just a couple of dollars an hour.
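The video doesn't show the wiring, but one common pattern (my assumption, not something named in the transcript) is to serve the open-weights model on the rented box with something like vLLM, which exposes an OpenAI-compatible endpoint, and point a standard client at it. The host, port, and model name below are placeholders.

```python
# Minimal sketch: talk to an open-weights model served on a rented GPU box.
# Assumes the box runs an OpenAI-compatible server (e.g. vLLM's `vllm serve`);
# host, port, and model name are placeholders, not real endpoints.
from openai import OpenAI

client = OpenAI(
    base_url="http://your-rented-gpu-box:8000/v1",   # placeholder address
    api_key="not-needed-for-a-local-server",
)

resp = client.chat.completions.create(
    model="your-open-weights-model",                 # whatever you launched on the box
    messages=[{"role": "user", "content": "Summarize today's BTC price action."}],
)
print(resp.choices[0].message.content)
```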
This is the ultimate pivot because it lets you test whether your strategy actually prints money before you commit to the hardware. You can turn the server on when you're iterating and turn it off the second you're done, which keeps your overhead near zero.
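The rent-versus-buy math is worth making explicit. The two-dollars-an-hour and ten-thousand-dollar figures are from the transcript; the monthly usage hours below are an assumption.

```python
# Back-of-envelope: when does renting GPU time beat buying hardware?
# $2/hr and $10,000 come from the transcript; hours/month is a guess.
GPU_RATE_PER_HOUR = 2.00      # rented GPU, per hour
HARDWARE_COST = 10_000        # the Mac Studio / Mac Pro style build
hours_per_month = 40          # e.g. a few iteration sessions a week

monthly_rent = GPU_RATE_PER_HOUR * hours_per_month
months_to_break_even = HARDWARE_COST / monthly_rent
print(f"Renting: ${monthly_rent:.0f}/month")
print(f"The $10k box only pays for itself after ~{months_to_break_even:.0f} months"
      f" at this usage level.")
```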
If you haven't proven that your bot can pay for itself yet, then buying a Mac Studio is just an expensive hobby rather than a business move. And there's a much bigger loophole involving the Anthropic subscriptions that most people are completely overlooking right now.
Right now I'm using a specific plan with Claude Code that costs about two hundred dollars a month, but it lets me run OpenClaw all day without hitting API limits. If I were paying for those same tokens through the standard API, I'd probably be spending hundreds of dollars every single day.
It's a massive cost savings that lets you iterate and fail until you find a winning strategy without draining your bank account. Even if they eventually close this loophole or snitch on the usage patterns, it serves as the perfect training ground for a data dog.
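To put rough numbers on that comparison: the two-hundred-dollar subscription and "hundreds of dollars a day" are the transcript's figures; the exact daily API spend below is an assumption standing in for "hundreds".

```python
# Flat subscription vs pay-per-token, using the transcript's rough numbers.
# The $300/day API figure is an assumption standing in for "hundreds per day".
SUBSCRIPTION_PER_MONTH = 200
assumed_api_spend_per_day = 300
days_per_month = 30

api_cost_per_month = assumed_api_spend_per_day * days_per_month
print(f"Subscription: ${SUBSCRIPTION_PER_MONTH}/month")
print(f"Same usage via the API: ~${api_cost_per_month:,}/month")
print(f"Rough savings: ~${api_cost_per_month - SUBSCRIPTION_PER_MONTH:,}/month")
```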
The goal is to find a system that works with a smaller or cheaper model like Haiku before you ever try to scale up to the heavyweights. If you can make a strategy profitable using a less intelligent and cheaper model, then you know you've found real alpha.
Once you have that foundation, you can decide whether it finally makes sense to build your own custom PC rig, which will always be half the price of an Apple machine. I'm an Apple guy, so I usually pay the tax anyway, but I only do it once the system is already generating enough to cover the cost ten times over.
I believe code is the great equalizer, because it took me from losing money and getting liquidated to having fully automated systems doing the work for me. I had to learn to live with the iterations and the failures on YouTube to get to this point of clarity.
The universe tends to get out of your way once you make a non-negotiable contract with yourself to see the process through to the end. You don't need the flashy hardware or the most expensive setup to start winning in this game.
Stay focused on the logic and the data rather than the hype and the FOMO everyone else is falling for. If you can master the bridge between renting power and owning your logic, you'll be ahead of ninety-nine percent of the people in this space.
The path to a fully automated life isn't paved with expensive gadgets but with the discipline to iterate until the system finally prints.
Offshore
Video
Lumida Wealth Management
Elon Musk just said space will be the cheapest place to run AI in 36 months
There's a physics problem with running AI on Earth that nobody's talking about.
Earth's atmosphere kills 30% of solar energy before it reaches your panels. Add in day-night cycles and massive battery costs, and you're fighting a losing battle.
Space has none of that.
The same solar panel generates five times more power. No batteries needed.
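Here is roughly where a "five times" figure could come from, as a back-of-envelope sketch (my illustration, not math from the interview): remove the ~30% atmospheric loss and the day-night/weather duty cycle that ground panels live with.

```python
# Back-of-envelope: why an orbital panel might yield ~5x a ground panel.
# All numbers are illustrative assumptions, not figures from the interview.
atmospheric_transmission = 0.70   # ~30% lost to the atmosphere (per the thread)
ground_capacity_factor = 0.25     # day/night, weather, sun angle: rough guess
orbit_capacity_factor = 0.90      # near-continuous sun in a suitable orbit

ground_yield = atmospheric_transmission * ground_capacity_factor   # ~0.175
orbit_yield = orbit_capacity_factor                                 # ~0.9
print(f"Orbit vs ground, same panel: ~{orbit_yield / ground_yield:.1f}x")
```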
Musk's prediction: under 36 months before space becomes the cheapest option for AI infrastructure.
While Big Tech burns billions on Nevada data centers, the real advantage might be 200 miles up.
Companies building ground-based AI today could pay five times more than competitors in three years. That's an extinction-level disadvantage.
SpaceX proved everyone wrong about reusable rockets. Starlink made satellite internet work.
Now, @elonmusk says the AI race won't be won in Silicon Valley.
Three years. Maybe less.
Here are some key takeaways from @elonmusk's recent interview with @dwarkesh_sp and @stripe
Offshore
Photo
Benjamin Hernandez😎
This isn't for the public. We’ve found an under-the-radar play institutions are hiding while they accumulate. I’m only sharing this with the inner circle.
Join the circle: 👉 https://t.co/71FIJId47G
Reply “SILENT” for the confidential ticker.
$BMNR $BYND $NB $ASST $PULM
📊 Professional Pick: $CISS
Entry: $2.28 | Target: $3.42
Technicals are flawless. $CISS just broke out of a multi-week base on record volume. RSI is rising but not yet overbought.
One-line why: The 50% gain today is just the "ignition phase." The target is the $3.42 resistance. https://t.co/XkSQnEtkah
Offshore
Video
Dimitry Nakhla | Babylon Capital®
Chris Hohn on AI, disruption, and why real moats may matter more than ever:
“It’s going to increase disruption in ways we can’t even predict… but AI will increase productivity and lower the cost base of all companies.
And so if you have a company with these barriers to entry, it’s going to be worth more.”
___
Two important ideas embedded here:
1. Disruption risk is rising
AI lowers barriers to doing things, which means competitive pressure increases across many industries. Business models built on labor-intensive, easily replicable work are especially vulnerable.
2. Moats + AI can be a powerful combo
If a company already has durable barriers to entry, AI becomes a margin and productivity lever rather than an existential threat.
___
A particularly attractive hunting ground:
Businesses with multiple barriers to entry and large human-capital cost bases.
AI can structurally lower their cost structure while the moat protects pricing power.
___
Video: In Good Company | Norges Bank Investment Management (05/14/2025)
Offshore
Photo
DAIR.AI
Multi-agent memory has a homogenization problem.
This work finds that role-aware latent memory that is learnable, compact, and framework-agnostic consistently outperforms handcrafted memory architectures while being substantially more efficient.
When multiple agents share the same memory pool, they end up with identical recollections regardless of their distinct roles. A coding agent, a planning agent, and a review agent all retrieve the same memory entries, ignoring functional differences that should shape what each agent remembers.
The second bottleneck is information overload. Multi-agent systems (MAS) inherently involve long interaction contexts, and storing fine-grained memory entries at multiple granularities amplifies this burden, overwhelming agents and obscuring critical decision signals.
This new research introduces LatentMem, a learnable multi-agent memory framework that customizes agent-specific memories in a token-efficient manner.
Instead of storing and retrieving text-based memory entries, LatentMem compresses raw interaction trajectories into compact latent representations conditioned on each agent's role profile. A lightweight memory composer synthesizes fixed-length latent memories that are injected directly into the agent's reasoning process.
To train the memory composer, they introduce Latent Memory Policy Optimization (LMPO), which propagates task-level optimization signals through latent memories to encourage compact, high-utility representations. This exploits the differentiability of latent memory to enable gradient backpropagation through the entire memory pipeline.
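A minimal sketch of the core idea as described above (my own illustration under assumed names and dimensions, not the paper's code): a composer with a handful of learned query vectors, shifted by a role embedding, cross-attends over the raw trajectory and emits a fixed number of latent memory vectors; because the whole path is differentiable, a task-level loss can be backpropagated through the memory, which is the property LMPO exploits.

```python
# Sketch of a role-conditioned latent memory composer (illustrative only).
import torch
import torch.nn as nn

class RoleConditionedComposer(nn.Module):
    def __init__(self, d_model=256, n_mem=8, n_roles=3, n_heads=4):
        super().__init__()
        self.mem_queries = nn.Parameter(torch.randn(n_mem, d_model) * 0.02)
        self.role_emb = nn.Embedding(n_roles, d_model)   # e.g. coder / planner / reviewer
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, trajectory, role_id):
        # trajectory: (batch, seq_len, d_model) hidden states of raw interactions
        b = trajectory.size(0)
        role = self.role_emb(role_id).unsqueeze(1)                 # (b, 1, d)
        queries = self.mem_queries.unsqueeze(0).expand(b, -1, -1) + role
        mem, _ = self.attn(queries, trajectory, trajectory)        # (b, n_mem, d)
        return self.proj(mem)   # fixed-length latent memory for this agent's role

# Toy forward/backward pass: the latent memory is differentiable, which is what
# lets a task-level signal (as in LMPO) be backpropagated through the composer.
composer = RoleConditionedComposer()
traj = torch.randn(2, 512, 256)          # two agents' long trajectories
roles = torch.tensor([0, 1])             # e.g. coder, planner
latent_mem = composer(traj, roles)       # (2, 8, 256)
toy_task_loss = latent_mem.pow(2).mean() # stand-in for a task-level objective
toy_task_loss.backward()                 # gradients reach the composer's parameters
```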
Across six benchmarks and four MAS frameworks with Qwen3-4B, LatentMem achieves up to 16.20% improvement on TriviaQA and 19.36% on PopQA over vanilla settings. On code generation with KodCode, it delivers an 8.40-9.55% gain depending on the framework. It consistently outperforms eight existing memory architectures, including MetaGPT, Voyager, JoyAgent, and G-Memory.
The efficiency gains matter too: 50% fewer tokens and inference time reduced to roughly two-thirds compared to mainstream memory designs. On out-of-domain tasks, LatentMem still generalizes well, with 7.10% improvement on PDDL and 7.90% on unseen MAS frameworks like CAMEL.
Paper: https://t.co/VfmG0DYIf8
Learn to build effective AI agents in our academy: https://t.co/PE5l0X8fFq
Offshore
Photo
DAIR.AI
RT @omarsar0: Another banger by the Anthropic Engineering team.
They mass-parallelized 16 Claude instances to build a full C compiler from scratch.
100,000 lines of Rust. Compiles the Linux kernel. No active human supervision.
The wildest part isn't even the compiler itself. It's that they built a system where agents autonomously pick up tasks, lock files to avoid conflicts, and git sync with each other like a remote dev team.
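The post doesn't include the harness code, but the "lock files to avoid conflicts" idea maps onto a familiar pattern. Here is one plausible sketch, an assumption about how such a harness could work rather than Anthropic's implementation.

```python
# Sketch of one way agents could claim tasks without stepping on each other:
# atomically create a lock file per task; whoever creates it first owns the task.
# This is a guess at the pattern described in the post, not Anthropic's harness.
import os

LOCK_DIR = "locks"  # shared directory, e.g. inside the git repo the agents sync

def try_claim(task_id: str, agent_id: str) -> bool:
    os.makedirs(LOCK_DIR, exist_ok=True)
    lock_path = os.path.join(LOCK_DIR, f"{task_id}.lock")
    try:
        # O_CREAT | O_EXCL makes creation atomic: it fails if the file exists.
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False          # another agent already claimed this task
    with os.fdopen(fd, "w") as f:
        f.write(agent_id)     # record the owner, for debugging and handoff
    return True

if try_claim("compile-frontend-lexer", "agent-07"):
    print("claimed; do the work, commit, then git pull/push to sync")
else:
    print("already taken; pick another task")
```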
Looks inspired by Ralph Loop.
2 billion input tokens, 140 million output tokens, 2 weeks, and $20k in total cost.
If you're still writing code one file at a time in a single session, you are massively underestimating where this is headed.
Agent swarms that coordinate on real codebases aren't a thing of the future anymore. They're a right now thing.
2026 is shaping up to be the year of agent harnesses. And the cool part is that you can go and build your agent team with Claude Code now.
Offshore
Photo
Bourbon Capital
Howard Marks: What was the most important event in the financial and investment world in the last 50 years?
Howard Marks: Most people would say Lehman Brothers, 2008, the tech bubble, Black Monday... but I believe it was the decline in interest rates.
Declining interest rates are extremely beneficial for asset ownership...
Oaktree Capital Management (Howard Marks) 13F as of Sep 2025 https://t.co/WPSyQWpsRV - Bourbon Insider Research