Moon Dev
5 openclaws and i will be chasing jim simons

he ran up a net worth of $31b and didn't have 6 opus's

and 5 openclaws...

ill show every step of the way https://t.co/EcL5uji1VK
anon
RT @willschoebs: BuySell Technologies 7685 is a business very unique to 🇯🇵 that has been crushing it lately… don’t think it’s possible to replicate in the 🇺🇸 for multiple cultural, geographic, etc. reasons
God of Prompt
RT @godofprompt: 🚨 Holy shit… Google just published one of the cleanest demonstrations of real multi-agent intelligence I’ve seen so far.

Not another “look, two chatbots are talking” demo.

An actual framework for how agents can infer who they’re interacting with and adapt on the fly.

The paper is “Multi-agent cooperation through in-context co-player inference.”

The core idea is deceptively simple:

In multi-agent environments, performance doesn’t just depend on the task.

It depends on who you’re paired with.

Most current systems ignore this.

They optimize against an average opponent.
Or assume fixed partner behavior.
Or hard-code roles.

Google does something smarter.

They let the model infer its co-player’s strategy directly from the interaction history inside the context window.

No retraining, no separate belief model, and no explicit opponent classifier.

Just in-context inference.

The agent observes a few rounds of behavior. Forms an implicit hypothesis about its partner’s type. Then updates its own strategy accordingly.

This turns static policies into adaptive ones.
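
That loop can be sketched in a few lines. This is my own toy illustration (not code from the paper): an agent in an iterated prisoner's dilemma that infers its partner's type from the observed history and adapts, compared against a static always-cooperate policy.

```python
# Toy sketch of in-context co-player inference (illustration only, not the
# paper's implementation): infer the partner's type from observed rounds,
# then adapt the next move.
def infer_partner(history):
    """Guess the partner's disposition from observed actions ('C'/'D')."""
    if not history:
        return "unknown"
    defect_rate = history.count("D") / len(history)
    if defect_rate > 0.5:
        return "selfish"
    return "mixed" if defect_rate > 0.0 else "cooperative"

def adaptive_move(history):
    """Cooperate with cooperators, defect against exploiters."""
    return "D" if infer_partner(history) == "selfish" else "C"

# Standard prisoner's dilemma payoffs for (my move, their move).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(agent, partner_policy, rounds=20):
    partner_history, total = [], 0
    for _ in range(rounds):
        mine, theirs = agent(partner_history), partner_policy()
        total += PAYOFF[(mine, theirs)]
        partner_history.append(theirs)
    return total

static_agent = lambda history: "C"   # cooperates blindly, no inference
always_defect = lambda: "D"          # an exploitative partner

exploited = play(static_agent, always_defect)   # blind cooperation gets exploited
adapted = play(adaptive_move, always_defect)    # inference agent switches to "D"
print(exploited, adapted)
```

The static agent scores zero against a defector; the inferring agent pays for one probing round, then stops being exploited. That asymmetry is the whole point.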

The experiments are structured around cooperative and social dilemma games where partner types vary:

Some partners are fully cooperative.
Some are selfish.
Some are stochastic.
Some strategically defect.

Agents without co-player inference treat all partners the same.

Agents with inference adjust.

And the performance gap is significant.

What makes this paper uncomfortable for a lot of current “multi-agent” hype is how clearly it shows what real coordination requires.

First, coordination is not just communication. It’s modeling the incentives and likely actions of others.

Second, robustness matters. An agent that cooperates blindly gets exploited. An agent that defects blindly loses cooperative gains. The system must dynamically balance trust and caution.

Third, adaptation must happen at inference time. In real deployments, you cannot retrain every time the population changes.

The most interesting part is that this capability emerges purely from structured context.

The model isn’t fine-tuned to classify opponent types explicitly. It uses behavioral traces embedded in the prompt to infer latent strategy.
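
Concretely, "behavioral traces embedded in the prompt" could look like the following sketch (my own hypothetical serialization, not the paper's format): the raw interaction history becomes plain text in the context window, and the strategy inference happens in-context rather than in a separate belief model.

```python
# Hypothetical prompt construction: serialize the interaction history into
# plain text so the model can infer the partner's latent strategy in-context.
def build_context(history):
    lines = ["You are playing a repeated game. Partner's moves so far:"]
    for i, (mine, theirs) in enumerate(history, 1):
        lines.append(f"Round {i}: you played {mine}, partner played {theirs}")
    lines.append("Infer the partner's strategy and choose your next move.")
    return "\n".join(lines)

prompt = build_context([("C", "C"), ("C", "D"), ("C", "D")])
print(prompt)
```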

That’s belief modeling through language.

And it scales.

Think about where this matters outside toy games:

Autonomous trading systems reacting to different market participants.
Negotiation agents interacting with unpredictable humans.
Distributed AI workflows coordinating across departments.
Swarm robotics where teammate reliability varies.

In all these settings, static competence is not enough.

Strategic awareness is the bottleneck.

The deeper shift is philosophical.

We’ve been treating LLM agents as isolated optimizers.

This paper moves us toward agents that reason about other agents reasoning about them.

That’s recursive modeling.

And once that loop becomes stable, you no longer have “a chatbot.”

You have a participant in a strategic ecosystem.

The takeaway isn’t that multi-agent AI is solved.

It’s that most current systems aren’t even attempting the hard part.

Real multi-agent intelligence isn’t multiple prompts in parallel.

It’s adaptive belief formation under uncertainty.

And this paper is one of the first clean proofs that large models can do that using nothing but context.

Paper: Multi-agent cooperation through in-context co-player inference
anon
RT @zephyr_z9: Investors are bidding up a Titanium Aluminide blade supplier, because aero jet engines suppliers are converting/repurposing that capacity for industrial nat gas turbine production for AI DCs

Rarely seen a ~3-bagger in 2 months. Aerospace engines are a good hunting ground; that will be my next series of posts after 🇯🇵 video games https://t.co/1kRg472yPe
- govro12
Jukan
Taiwan Stock Market 2026 Barometer: ABF 'Supercycle' Ignites… AI Infrastructure Driving PCB & CCL Demand Explosion

- As AI server and high-performance computing (HPC) demand ramps up in earnest, the long-struggling PCB sector has entered a strong rebound phase. After enduring weak consumer electronics and inventory destocking pressure, the advancement and higher specifications of AI server architectures are driving across-the-board increases in substrate layer counts, material grades, and per-unit value, serving as key catalysts for both industry fundamentals and share prices. Related names have surged sharply year-to-date, drawing significant market attention.

- Capital is flowing back into high-end materials and process supply chains, with PCB, ABF, and CCL simultaneously entering a new growth cycle. Most major players posted January revenues well above year-ago levels, hitting all-time highs. High-end PCB makers exposed to AI server demand led the earnings recovery, while ABF substrates are also seeing a rapidly tightening supply-demand structure as AI/HPC proliferates, prompting assessments of a 'supercycle' entry.

- In a January report, Goldman Sachs identified PCB and CCL as key beneficiary sectors of next-generation AI infrastructure. The global AI server PCB market is projected to surge from $3.1B in 2024 to $27.1B in 2027, with 2026–2027 YoY growth of +113% and +117%, respectively. The upstream CCL market is expected to expand from $1.5B in 2024 to $18.7B in 2027, with even steeper 2026–2027 YoY growth of +142% and +222%. As AI servers rapidly evolve toward M9-grade materials and higher-layer designs, R&D and CAPEX barriers are rising, creating a competitive environment that favors leading players with the requisite technology and production capabilities.
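
The quoted growth figures are internally consistent, which is worth checking: given the 2024 and 2027 market sizes plus the stated 2026–2027 YoY rates, one can back out the implied 2025 values and 2025 growth. This is my own arithmetic on the numbers above, not figures from the report.

```python
# Sanity check on the Goldman Sachs figures quoted above: divide the 2027
# market size by the stated 2026 and 2027 growth factors to get the implied
# 2025 value, then compute the implied 2025 YoY growth.
pcb_2024, pcb_2027 = 3.1, 27.1          # AI server PCB market, $B
pcb_2025 = pcb_2027 / (2.13 * 2.17)     # divide out +113% (2026) and +117% (2027)
pcb_2025_yoy = pcb_2025 / pcb_2024 - 1  # implied 2025 growth

ccl_2024, ccl_2027 = 1.5, 18.7          # upstream CCL market, $B
ccl_2025 = ccl_2027 / (2.42 * 3.22)     # divide out +142% (2026) and +222% (2027)
ccl_2025_yoy = ccl_2025 / ccl_2024 - 1  # implied 2025 growth

print(round(pcb_2025, 2), round(pcb_2025_yoy, 2))  # ~5.86B, ~+89%
print(round(ccl_2025, 2), round(ccl_2025_yoy, 2))  # ~2.40B, ~+60%
```

So the projections imply roughly +89% PCB and +60% CCL growth already in 2025, before the steeper 2026–2027 ramp.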

- On a company basis, Taiwan Union Technology is expected to benefit from ASP uplift as M9-grade materials are adopted in VR200/300, with a stable share within the NVIDIA supply chain cited as a key strength. The price target was raised from NT$2,060 to NT$2,250.

- Unimicron is seeing a continued increase in AI PCB revenue mix, driven by expanding share in ASIC projects for Amazon Web Services, Google, and Meta. 2025–2027 EPS is projected at NT$19.24, NT$37.90, and NT$67.13, respectively, with the price target raised from NT$805 to NT$925.

- Elite Material is considered a primary beneficiary of the supply shortage, expected to absorb transition demand and expand its share within the AI server supply chain. 2025–2027 EPS is estimated at NT$13.03, NT$28.39, and NT$48.51, with the price target adjusted from NT$615 to NT$650.

- BofA also views Unimicron positively. Citing full utilization of high-end capacity and strong ASIC demand, the company has raised this year's CAPEX to a record level. January revenue beat expectations, leading to a 2026 revenue forecast of YoY +44%. 2026–2028 EPS estimates were revised upward by 5–10%, and the valuation multiple was lifted from 18.5x to 20x.

- At the broader industry level, the Taiwan Printed Circuit Association (TPCA) and the Industrial Technology Research Institute (ITRI) forecast global PCB output value at $92.36B in 2025 (YoY +15.4%) and $105.2B in 2026 (YoY +13.9%). AI is now firmly established as the core driver of industry premiumization and value-added growth.

- On the supply chain front, U.S.–China trade tensions and tariff barriers are reshaping the global division of labor. According to TPCA, amid the 'de-China' trend, U.S.-based customers are expanding non-China production in sensitive areas such as AI servers and low-earth orbit satellites, with Taiwanese PCB makers absorbing the resulting transition demand. Under the latest Taiwan–U.S. tariff regime, the export duty on Taiwan-made PCBs to the U.S. stands at ~15%, highlighting a competitive edge versus China's 45%. Taiwan has already emerged as the largest PCB import source for the U.S. in 2025. However, establishing local U.S. factories requires careful consideration of ROI a[...]
Moon Dev
teaching today

today i am going to teach you how to automate your trading step by step even if you have never coded before

i will spend the first hour teaching and the next couple hours building trading systems i need as a quant

feel free to jump into the stream if there are still a couple tickets left

join here https://t.co/Aw7dcEw2RV

moon
Benjamin Hernandez😎
Elite U.S. universities are growing frustrated with weak private-equity returns amid a crowded deal landscape. Pressure on performance could impact capital flows across alternative asset managers.

$BX $KKR $APO https://t.co/XJmNiOrFnp
Moon Dev
Stop Being a "Tinkerer" and Start Being a Quant: How to One-Shot Trading Bots in 20 Minutes

spending three hundred thousand dollars on a hardware farm sounds like a dream until you realize you can reach the same level of automation for the price of a monthly sandwich. most traders are going to spend the next six months arguing about which operating system is better while the real movers are already scaling their systems to the moon

my name is moon dev. i believe that code is the great equalizer. after losing money to liquidations and over trading i knew i had to automate my trading, so i learned to code. in the past i spent hundreds of thousands on devs for apps, thinking i would never be able to code myself. with bots you must iterate to success, so i decided to learn live on youtube, and now we are here: fully automated systems trading for me instead of getting liquidated

i recently found myself standing in a costco staring at stacks of mac minis thinking i needed to buy every single one of them to reach my goal of five hundred independent agents. scaling at six hundred dollars per machine is a three hundred thousand dollar problem that most people simply cannot afford to solve. but there is a specific reason why i ended up with three minis and a macbook air on my desk despite finding a way to do it for ten dollars a month

you have to decide if you are a tinkerer who likes to spend twenty hours building a puzzle or a builder who wants to launch a business in twenty minutes. i put in over one hundred hours of vicious testing comparing windows vps options against ubuntu desktops to see which one could actually handle the weight of autonomous trading agents. while the ten dollar ubuntu option seems like the ultimate win for your bank account it might actually be the thing that kills your speed in the race to the moon

most traders fail because they listen to tech gatekeepers who want to feel smarter than everyone else by making simple things sound impossible. they will tell you that you need to learn linux and spend days configuring servers just to save twenty dollars. if you believe your time is worth five thousand dollars an hour then spending seven hours to set up a cheap server is actually the most expensive mistake you can make

when i ran a direct comparison test between a mac mini and a windows vps the results were immediate and undeniable. i gave both systems the exact same prompt to build a complex trading dashboard and a simple game. the mac mini one shotted the task in seconds while the windows vps threw back an error that required manual iteration

the goal is to recreate yourself five hundred times so you can dominate the markets while you sleep. to do that you need a system that stays awake twenty four seven and never gets tired. i use an app called amphetamine or caffeine to keep my macs from ever falling asleep because the goal is permanent uptime for every single agent
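
for what it's worth, macos also ships a built-in command that does the same job as those apps. a minimal sketch (the agent script name is a placeholder):

```shell
# keep the machine fully awake while a long-running agent process is up:
# -d display, -i idle, -m disk, -s system sleep (on AC power)
caffeinate -dims ./run_agent.sh   # run_agent.sh is a placeholder script

# or keep the whole machine awake indefinitely in the background:
caffeinate -dims &
```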

if you have never typed a single line of code in your life the terminal is going to look like a scary place that belongs in a movie. but in reality it is just a clicker game where the instructions are written in plain english. you are just copying and pasting commands and the computer is doing exactly what you tell it to do

the first step to mastering the claw is getting xcode installed which is just a set of tools from apple that makes everything else work. once you have that you are going to use a simple command to install node.js and npx. these are just the engines that allow your ai agents to talk to the market and build your systems
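
the steps above boil down to a few copy-paste commands on macos. a sketch, assuming you install node via homebrew (any node install works):

```shell
# apple's command line developer tools (includes the xcode tooling needed)
xcode-select --install

# node.js; npm and npx come bundled with it
brew install node

# confirm the engines are in place
node --version && npx --version
```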

i realized that scaling to five hundred agents was going to be impossible if i used the most expensive ai models for every single task. that is when i found the ultimate secret sauce for keeping costs near zero while keeping performance high. there are specific eastern models like minimax two point five and glm four point seven that are one hundred times cheaper than the big name models but just as fire

you can connect these models through a platform called open code zen to get massive credits and access to every model you could ever need. using these models is like finding a shortcut in a race that everyone else is running the long way. while they are paying for high end luxury tokens you are running five hundred agents for the price of one

the tech world wants you to stay confused because it keeps them in a position of power. they want you to think you need to be a math genius or a senior developer to build a trading empire. but code is the great equalizer because the ai can now walk you through every single step of the process if you just know how to ask

i found that the best way to interact with these systems is actually through the command line once you get used to it. it is a much faster and cleaner experience than trying to use a janky user interface that crashes or lags. you can set up a simple shortcut like an alias so that typing a single letter launches your agent in dangerous mode with full permissions
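
the single-letter alias idea looks like this in practice. the command and flag below are placeholders for whatever agent cli you actually run, since the exact binary and permission flag vary by tool:

```shell
# add to ~/.zshrc (or ~/.bashrc) so it loads in every new shell;
# 'my-agent' and '--full-permissions' are hypothetical placeholders
alias c='my-agent --full-permissions'

# then reload the shell config and launch with one keystroke:
# source ~/.zshrc && c
```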

the race to the moon is not about having the biggest ego or the most complex setup. it is about having the most data and iterating faster than everyone else in the game. if you are still manual trading or getting liquidated because you are too scared to touch the terminal you are already falling behind the curve

i have spent hundreds of thousands of dollars on developers in the past only to realize that i could have done it all myself with a little bit of focus. you do not need a computer science degree when you have an ai agent that can one shot a brick breaker game in front of your eyes. the separation between the winners and the losers in the next five years is going to be determined by who can manage an army of ai agents

even though costco will only sell you two mac minis at a time you can always find a way to scale if your vision is big enough. speed is the only currency that matters in a market that moves at the speed of light. you can either spend your afternoon trying to save ten dollars on a vps or you can spend it launching ten new agents that trade for you

the most successful traders i know are not the ones who found the perfect secret indicator. they are the ones who figured out how to automate their edge and remove the human emotion that leads to over trading and liquidations. once your systems are running you just have to sit back and watch the data take you where you need to go

i believe in you and i believe that anyone can master this if they are willing to stop being a normie and start being a data dog. don't let the fear of a black screen and white text stop you from building a life of freedom. the job is not finished until you have your agents working for you twenty four seven

the final piece of the puzzle is understanding that you are an asset to your family and your future depends on how well you adapt to this change. code is the only thing that can give you true scale and leverage in a world that is moving toward total automation. step on the gas and do not look back because the race has already started
Benjamin Hernandez😎
Molson Coors posted lower profit and sales as beer demand stayed soft, adding pressure on consumer staples. Investors are watching pricing power and volume recovery trends.

$TAP $BUD $STZ https://t.co/D12YDCuXHC