anon
Optical transceivers/adapters at Suncall growing 60%+ top line with electronics segment margin at 34%. This segment includes legacy HDD suspensions (winding down) and printer components, so the steady-state margin of optical alone may be higher. Trades at about 5.5x EV/EBIT and 1.2x TBV. https://t.co/JjoSvhkbZ5
anon
Wow, Hikari Tsushin raising 500 billion yen in a new fund structure with Rheos, where LPs/pensions get exposure only to the dividends of Hikari's portfolio (Hikari takes the full beta volatility hit, with cap gains or losses). This is hugely positive for small caps! 500bn yen!
A new initiative from Hikari Tsushin and Rheos. It targets pension money that wants dividend income only, without taking share-price risk. So this is one way to structure a fund.
• Hikari Tsushin gives up the dividend income and takes only the capital gains
• Rheos's fund receives only the dividend income and bears no share-price risk
https://t.co/rs11nivDLI - 上原@投資家
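To make the structure concrete, here is a toy Python sketch of the split as I read it from these two tweets (my own illustration with invented numbers, not anything from the actual fund documents):

```python
# Toy sketch of the described structure: the Rheos fund's LPs (pensions)
# receive only the dividend stream, while Hikari Tsushin keeps the price
# change (capital gain or loss). Prices and dividends here are made up.
def split_returns(buy_price: float, sell_price: float, dividends: float):
    """Return (LP cash flow, Hikari cash flow) for one holding period."""
    lp_income = dividends                # pensions: dividend income, no price risk
    hikari_pnl = sell_price - buy_price  # Hikari: full capital gain or loss
    return lp_income, hikari_pnl

# A stock bought at 1,000 yen pays 40 yen in dividends and ends at 900 yen:
lp, hikari = split_returns(1000, 900, 40)
print(lp, hikari)  # 40 -100: LPs still collect 40; Hikari absorbs the -100
```

On this reading, the pension's cash flow is the same whether the stock ends at 900 or 1,100; all of the price volatility sits with Hikari.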
The Transcript
RT @TheTranscript_: $LYFT CEO: Lyft Ads scaled from concept to $100M exit run-rate in just two years.
“Lyft Ads, 2 years ago, when we were doing Investor Day, it was an idea. It was an early concept. Now we've done exactly what we said we wanted to do, which is reach $100 million run rate exit rate from Q4.”
God of Prompt
RT @godofprompt: 🚨 Holy shit… Stanford just published a paper that questions whether we even need humans to study humans.
The title sounds like a joke:
“This human study did not involve human subjects.”
But it’s dead serious.
The researchers are asking a controversial question:
Can LLM simulations count as behavioral evidence?
Here’s the core idea.
Instead of recruiting thousands of participants, running surveys, and waiting weeks for results, they simulate people using large language models.
Not generic prompts.
But structured simulations where the model is assigned demographic traits, preferences, beliefs, and contextual constraints.
Then they test whether the simulated responses statistically match real-world human data.
And disturbingly… they often do.
Across multiple behavioral tasks, the LLM-generated “participants” reproduced known human patterns:
• Established psychological biases
• Preference distributions
• Decision-making trends
• Even demographic splits
Not perfectly. Not universally.
But far closer than most people would expect.
The key contribution of the paper isn’t “LLMs are human.”
It’s validation.
They systematically compare simulated outputs to ground-truth human datasets and evaluate alignment using statistical benchmarks.
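As a rough illustration of what that validation loop could look like in code (a minimal sketch, not the paper's actual setup: the persona schema, the survey item, the chi-square test, and the llm_answer stub are all my assumptions):

```python
# Minimal sketch of persona-conditioned simulation plus statistical
# validation against human data. Everything here is illustrative: the
# persona fields, the question, and llm_answer (a random stub standing
# in for a real model call) are assumptions, not the paper's setup.
import random
import numpy as np
from scipy.stats import chisquare

CHOICES = ["agree", "neutral", "disagree"]

def build_persona_prompt(persona: dict, question: str) -> str:
    """Structured simulation: assign the model traits, not a generic prompt."""
    return (
        f"You are a {persona['age']}-year-old {persona['occupation']} "
        f"who believes {persona['belief']}. {question} "
        f"Answer with exactly one of: {', '.join(CHOICES)}."
    )

def llm_answer(prompt: str) -> str:
    """Stub for an LLM call so the sketch runs end to end."""
    return random.choice(CHOICES)

def matches_human_data(sim_answers, human_counts, alpha=0.05) -> bool:
    """Chi-square goodness-of-fit of simulated answers vs. the human distribution."""
    observed = np.array([sim_answers.count(c) for c in CHOICES])
    human = np.array([human_counts[c] for c in CHOICES], dtype=float)
    expected = human / human.sum() * observed.sum()  # rescale to simulated N
    _, p_value = chisquare(observed, f_exp=expected)
    return p_value > alpha  # fail to reject: distributions are compatible

# Simulate 500 persona-conditioned "participants" and validate the match.
persona = {"age": 34, "occupation": "nurse", "belief": "in strict budgeting"}
question = "Is saving 20% of income realistic?"
answers = [llm_answer(build_persona_prompt(persona, question)) for _ in range(500)]
print(matches_human_data(answers, {"agree": 210, "neutral": 140, "disagree": 150}))
```

Swap the stub for a real model call and the same test tells you whether the simulated distribution is statistically distinguishable from the human one; repeated across tasks, that is roughly the kind of validation the thread describes.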
When the distributions match, the simulation isn’t just storytelling.
It becomes empirical evidence.
That’s the uncomfortable shift.
If a sufficiently constrained LLM simulation reproduces real behavioral patterns, does it become a legitimate experimental proxy?
Because if the answer is yes, this changes everything:
• Behavioral economics
• Political science
• Market research
• Policy testing
• UX experimentation
You could prototype social interventions before deploying them in the real world.
You could stress-test messaging strategies across simulated demographics.
You could explore rare edge-case populations without recruitment bottlenecks.
But here’s where Stanford is careful.
The models don’t “understand” humans.
They reflect training data patterns.
They can amplify biases.
They can collapse under distribution shift.
And they can simulate plausibility without causality.
So the paper doesn’t claim replacement.
It argues for calibration.
LLM simulations can be useful behavioral instruments if validated against real data and bounded within known limits.
That’s the distinction.
Not synthetic humans.
Synthetic behavioral priors.
The wild part?
This paper forces academia to confront something bigger:
If large models encode large-scale behavioral regularities from the internet, they become compressed maps of human tendencies.
Not minds.
Maps.
And maps can be useful.
We’re moving from “AI as text generator” to “AI as behavioral simulator.”
The ethical, methodological, and epistemological implications are massive.
Because once simulation becomes statistically reliable, the bottleneck in social science shifts from data collection to model alignment.
And that might be the real revolution hidden in this paper.
Javier Blas
RT @phildstewart: Trump's top national security advisers met in the White House Situation Room today to discuss Iran, a senior U.S. official told @steveholland1
The Transcript
RT @TheTranscript_: $NOW ServiceNow CEO: "Our pipelines have never been better. Let me be clear, never been better...So you should feel really good about ServiceNow."
memenodes
when you worry about the markets and she asks if you’d still love her if she was a penguin https://t.co/mRMtXjQ6qf
The Few Bets That Matter
RT @WealthyReadings: Investing is unforgiving. One small mistake can be very costly, and there’s no way back.
$NBIS might be that mistake for me this year.
This isn’t hindsight bias. I said the very next day that selling was a mistake, as the stock didn't break my conditions to hold. Sentiment took over and I made a decision I shouldn’t have.
That’s all it takes.
The last week and a half cost me a few salaries in unrealized P&L. One emotional afternoon cost me months of work.
I’m not here to complain. I’m here to be transparent, to illustrate how critical systems, and respecting them, really are.
Your system exists to protect you from yourself.
Ideas, opinions, and convictions can make money. But they can't consistently outperform. Over time, convictions turn into bias, and bias costs.
Only systems can compound over decades. And the single most important rule is simple: don’t break it.
I broke mine. Now I have to work on fixing that.
Mistakes are opportunities to improve.
A few $NBIS notes after this quarter.
I'll be the bear, once more.
I continue to believe the market will punish the stock - or not reward it as much as many expect.
Not because the company isn’t excellent, but because the market did not reward $GOOG for the same behavior, so why would it reward $NBIS?
Fundamentally, everyone will be bullish. Demand is through the roof, compute was sold out, management is planning to build more sites, etc...
Everything FinX wants to see.
From a market perspective, Q4 CapEx slowed down, and guidance points to a ~20% increase in contracted power for FY26 with no news on connected power, except for the upgrade from 7 sites to 16 sites.
This means FY26 CapEx will accelerate, just like for everyone else, and won't slow down in FY27 as contracted power continues to climb.
More spending. Which was punished across all hyperscalers.
Also note that ARR guidance wasn’t increased, meaning no beat is expected, nothing above expectations, and no buildouts closing faster than expected.
Some will say "why would you want more? It doesn't matter, they are executing at their pace"
I disagree. Acceleration is everything; otherwise you miss expectations, just like they did.
That revenue miss is due to real-world constraints, as I shared yesterday and have for months: you cannot build faster than physics and logistics allow.
The issue is that growth is in fact slowing rather than accelerating. Growth stocks work on acceleration, not stable growth.
The why doesn’t matter, even if you’re supply constrained.
Growth slows, CapEx increases, cash generation decreases, and there are no certainties that demand won’t be fulfilled by other hyperscalers by the time infrastructure is built.
Like many of you, I believe there will be demand and everything will be fine. But today, you cannot know. You can bet on it, but you cannot know.
That is the issue. And that is why the market might react like it did for $GOOG.
I continue to believe the company is excellent and its future is bright. And that the stock won’t be rewarded as much as many expect in the short term.
I’d love to be wrong. - The Few Bets That Matter
The Transcript
RT @TheTranscript_: $RBLX CEO: "Every day, we capture roughly 30,000 years of human interaction data on Roblox in a PII and privacy compliant way. We're actively using this data to develop and train AI models that continue to bring our vision to life. I want to highlight that we're internally now running over 400 AI models."