Offshore
God of Prompt
🚨 R.I.P Harvard MBA.
I built a personal MBA using 12 prompts across Claude and Gemini.
It teaches business strategy, growth tactics, and pricing psychology better than any $200K degree.
Here's every prompt you can copy & paste: https://t.co/DhvhgN0OEz
tweet
God of Prompt
RT @godofprompt: Perplexity is terrifyingly good at competitive intelligence.
If you use these 10 prompts, you’ll see why:
(Bookmark this thread for later) https://t.co/iEiiYxTKyp
tweet
Javier Blas
COLUMN: Is the White House lulling itself into a false sense of security about energy and the Middle East?
"Just because Trump bombed Iran last year without sending oil prices skyrocketing, it doesn’t mean he can do it again."
@Opinion
https://t.co/Pqx721NJbL
tweet
Moon Dev
5 openclaws and I will be chasing Jim Simons
he ran up a net worth of $31B and didn't have 6 Opuses
and 5 openclaws...
I'll show every step of the way https://t.co/EcL5uji1VK
tweet
anon
RT @willschoebs: BuySell Technologies (7685) is a business very unique to 🇯🇵 that has been crushing it lately… I don't think it's possible to replicate in the 🇺🇸 for multiple cultural, geographic, etc. reasons
tweet
God of Prompt
RT @godofprompt: 🚨 Holy shit… Google just published one of the cleanest demonstrations of real multi-agent intelligence I’ve seen so far.
Not another “look, two chatbots are talking” demo.
An actual framework for how agents can infer who they’re interacting with and adapt on the fly.
The paper is “Multi-agent cooperation through in-context co-player inference.”
The core idea is deceptively simple:
In multi-agent environments, performance doesn’t just depend on the task.
It depends on who you’re paired with.
Most current systems ignore this.
They optimize against an average opponent.
Or assume fixed partner behavior.
Or hard-code roles.
Google does something smarter.
They let the model infer its co-player’s strategy directly from the interaction history inside the context window.
No retraining, no separate belief model, and no explicit opponent classifier.
Just in-context inference.
The agent observes a few rounds of behavior. Forms an implicit hypothesis about its partner’s type. Then updates its own strategy accordingly.
This turns static policies into adaptive ones.
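The loop described above (observe a few rounds, hypothesize the partner's type, then best-respond) can be sketched on a toy iterated prisoner's dilemma. This is an illustrative reconstruction, not the paper's algorithm: the partner types, the probe length, `infer_type`, and the response table are all my assumptions.

```python
# Toy sketch of co-player inference in an iterated prisoner's dilemma.
# Type names, probe length, and the response table are illustrative
# assumptions, not the paper's actual method.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def partner_move(kind, my_last):
    """Three fixed partner types the agent must tell apart."""
    if kind == "cooperator":
        return "C"
    if kind == "defector":
        return "D"
    return my_last or "C"  # tit_for_tat: mirror the agent's last move

def infer_type(history):
    """Hypothesize the partner's strategy from (my_move, their_move) pairs."""
    if not history:
        return "unknown"
    their = [t for _, t in history]
    if all(t == "C" for t in their):
        return "cooperator"
    if all(t == "D" for t in their):
        return "defector"
    # Their move always echoes my previous move -> reciprocator.
    if all(t == history[i - 1][0] for i, (_, t) in enumerate(history) if i > 0):
        return "tit_for_tat"
    return "unknown"

# Best response per hypothesis: exploit blind cooperators, defend against
# defectors, sustain cooperation with reciprocators.
BEST = {"cooperator": "D", "defector": "D", "tit_for_tat": "C"}

def play(kind, rounds=10, probe=3):
    """Cooperate for a few probe rounds, then best-respond to the inferred type."""
    history, my_last, score = [], None, 0
    for r in range(rounds):
        my = "C" if r < probe else BEST.get(infer_type(history), "C")
        their = partner_move(kind, my_last)
        score += PAYOFF[(my, their)]
        history.append((my, their))
        my_last = my
    return score, infer_type(history)

for kind in ("cooperator", "defector", "tit_for_tat"):
    print(kind, play(kind))
```

The agent exploits the unconditional cooperator, cuts its losses against the defector, and recovers mutual cooperation with tit-for-tat after a brief misclassification — the same qualitative gap over a static policy that the thread describes, driven entirely by "context" (the history list).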
The experiments are structured around cooperative and social dilemma games where partner types vary:
Some partners are fully cooperative.
Some are selfish.
Some are stochastic.
Some strategically defect.
Agents without co-player inference treat all partners the same.
Agents with inference adjust.
And the performance gap is significant.
What makes this paper uncomfortable for a lot of current “multi-agent” hype is how clearly it shows what real coordination requires.
First, coordination is not just communication. It’s modeling the incentives and likely actions of others.
Second, robustness matters. An agent that cooperates blindly gets exploited. An agent that defects blindly loses cooperative gains. The system must dynamically balance trust and caution.
Third, adaptation must happen at inference time. In real deployments, you cannot retrain every time the population changes.
The most interesting part is that this capability emerges purely from structured context.
The model isn’t fine-tuned to classify opponent types explicitly. It uses behavioral traces embedded in the prompt to infer latent strategy.
That’s belief modeling through language.
And it scales.
Think about where this matters outside toy games:
Autonomous trading systems reacting to different market participants.
Negotiation agents interacting with unpredictable humans.
Distributed AI workflows coordinating across departments.
Swarm robotics where teammate reliability varies.
In all these settings, static competence is not enough.
Strategic awareness is the bottleneck.
The deeper shift is philosophical.
We’ve been treating LLM agents as isolated optimizers.
This paper moves us toward agents that reason about other agents reasoning about them.
That’s recursive modeling.
And once that loop becomes stable, you no longer have “a chatbot.”
You have a participant in a strategic ecosystem.
The takeaway isn’t that multi-agent AI is solved.
It’s that most current systems aren’t even attempting the hard part.
Real multi-agent intelligence isn’t multiple prompts in parallel.
It’s adaptive belief formation under uncertainty.
And this paper is one of the first clean proofs that large models can do that using nothing but context.
Paper: Multi-agent cooperation through in-context co-player inference
tweet
anon
RT @zephyr_z9: Investors are bidding up a Titanium Aluminide blade supplier, because aero jet engines suppliers are converting/repurposing that capacity for industrial nat gas turbine production for AI DCs
tweet
Rarely seen a ~3-bagger in 2 months. Aerospace engines are a good hunting ground; they will be my next series of posts after video games 🇯🇵 https://t.co/1kRg472yPe - govro12
tweet
Jukan
Taiwan Stock Market 2026 Barometer: ABF 'Supercycle' Ignites… AI Infrastructure Driving PCB & CCL Demand Explosion
- As AI server and high-performance computing (HPC) demand ramps up in earnest, the long-struggling PCB sector has entered a strong rebound phase. After enduring weak consumer electronics and inventory destocking pressure, the advancement and higher specifications of AI server architectures are driving across-the-board increases in substrate layer counts, material grades, and per-unit value, serving as key catalysts for both industry fundamentals and share prices. Related names have surged sharply year-to-date, drawing significant market attention.
- Capital is flowing back into high-end materials and process supply chains, with PCB, ABF, and CCL simultaneously entering a new growth cycle. Most major players posted January revenues well above year-ago levels, hitting all-time highs. High-end PCB makers exposed to AI server demand led the earnings recovery, while ABF substrates are also seeing a rapidly tightening supply-demand structure as AI/HPC proliferates, prompting assessments of a 'supercycle' entry.
- In a January report, Goldman Sachs identified PCB and CCL as key beneficiary sectors of next-generation AI infrastructure. The global AI server PCB market is projected to surge from $3.1B in 2024 to $27.1B in 2027, with 2026–2027 YoY growth of +113% and +117%, respectively. The upstream CCL market is expected to expand from $1.5B in 2024 to $18.7B in 2027, with even steeper 2026–2027 YoY growth of +142% and +222%. As AI servers rapidly evolve toward M9-grade materials and higher-layer designs, R&D and CAPEX barriers are rising, creating a competitive environment that favors leading players with the requisite technology and production capabilities.
- On a company basis, Taiwan Union Technology is expected to benefit from ASP uplift as M9-grade materials are adopted in VR200/300, with a stable share within the NVIDIA supply chain cited as a key strength. The price target was raised from NT$2,060 to NT$2,250.
- Unimicron is seeing a continued increase in AI PCB revenue mix, driven by expanding share in ASIC projects for Amazon Web Services, Google, and Meta. 2025–2027 EPS is projected at NT$19.24, NT$37.90, and NT$67.13, respectively, with the price target raised from NT$805 to NT$925.
- Elite Material is considered a primary beneficiary of the supply shortage, expected to absorb transition demand and expand its share within the AI server supply chain. 2025–2027 EPS is estimated at NT$13.03, NT$28.39, and NT$48.51, with the price target adjusted from NT$615 to NT$650.
- BofA also views Unimicron positively. Citing full utilization of high-end capacity and strong ASIC demand, the company has raised this year's CAPEX to a record level. January revenue beat expectations, leading to a 2026 revenue forecast of YoY +44%. 2026–2028 EPS estimates were revised upward by 5–10%, and the valuation multiple was lifted from 18.5x to 20x.
- At the broader industry level, the Taiwan Printed Circuit Association (TPCA) and the Industrial Technology Research Institute (ITRI) forecast global PCB output value at $92.36B in 2025 (YoY +15.4%) and $105.2B in 2026 (YoY +13.9%). AI is now firmly established as the core driver of industry premiumization and value-added growth.
- On the supply chain front, U.S.–China trade tensions and tariff barriers are reshaping the global division of labor. According to TPCA, amid the 'de-China' trend, U.S.-based customers are expanding non-China production in sensitive areas such as AI servers and low-Earth-orbit satellites, with Taiwanese PCB makers absorbing the resulting transition demand. Under the latest Taiwan–U.S. tariff regime, the export duty on Taiwan-made PCBs to the U.S. stands at ~15%, highlighting a competitive edge versus China's 45%. Taiwan has already emerged as the largest PCB import source for the U.S. in 2025. However, establishing local U.S. factories requires careful consideration of ROI and customer negotiation terms, and in the near term, Asia-based manufacturing is judged to retain a competitive advantage.
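A quick sanity check on the Goldman Sachs figures quoted above: given the 2024 base, the 2027 size, and the stated 2026–2027 growth rates, the unstated 2025 growth can be backed out. The function below is illustrative arithmetic only; the implied ~+89% (PCB) and ~+60% (CCL) 2025 figures are my inference, not numbers from the report.

```python
# Back out the implied first-year growth from the market-size figures
# quoted above (illustrative arithmetic; the 2025 rates are inferred,
# not taken from the Goldman Sachs report).

def implied_first_year_growth(start, end, later_growths):
    """Given start/end market size and the later YoY growth rates,
    return the implied growth rate of the first (unstated) year."""
    factor = 1.0
    for g in later_growths:
        factor *= 1 + g
    return end / (start * factor) - 1

# AI server PCB: $3.1B (2024) -> $27.1B (2027), +113% (2026), +117% (2027)
pcb_2025 = implied_first_year_growth(3.1, 27.1, [1.13, 1.17])
# Upstream CCL: $1.5B (2024) -> $18.7B (2027), +142% (2026), +222% (2027)
ccl_2025 = implied_first_year_growth(1.5, 18.7, [1.42, 2.22])
print(f"implied 2025 growth: PCB ~{pcb_2025:.0%}, CCL ~{ccl_2025:.0%}")
```

The quoted 2024 and 2027 endpoints are consistent with the stated 2026–2027 rates only if 2025 also grows strongly, so the figures hang together internally.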
tweet