Bourbon Capital
$MSCI Projected FCF for the next few years https://t.co/NSUMAoTToY

Fernandez, $MSCI CEO, keeps buying more and more shares...

$3.5M today at $523 https://t.co/cQHqujkI6Q
The Transcript
RT @TheTranscript_: Wednesday's earnings:

Before Open: $ADI $GRMN $SEDG $MCO $FVRR $WING $CSTM $GPN $GLBE $LBTYA $PERI

After Close: $CVNA $KGC $FIG $DASH $PAAS $EBAY $BKNG $BTG $RELY $EQX $OXY $RGLD https://t.co/ycFoqlkQ1m
The Transcript
Analog Devices CEO: "ADI's robust Q1 built upon the strong position & momentum with which we entered the year."

CFO: "During our Q1, bookings growth continued, driven by broad strength in Industrial & record orders for our Data Center segment."

$ADI: +7% Pre-Market https://t.co/sf84jT3XqU
The Transcript
Verisk CEO: "Verisk delivered a solid Q4 result, capping off another year of growth in line with our long-term financial targets. We enter 2026 with clear strategic momentum."

$VRSK: +13% Pre-Market https://t.co/h65oFHBi8a
The Transcript
Berkshire portfolio updates: https://t.co/AhDrizd1gb
memenodes
when you realize bear market is really here and there is nothing you can do but go back to 9-5 https://t.co/lAsccGN5sv
memenodes
Ethereum is down bad from “future of finance” to this shit... https://t.co/6Pn7GTKdsH
Michael Fritzell (Asian Century Stocks)
RT @nitinkinvest: I'm not entirely sure, but I've been monitoring this for about six months. There seems to be a sudden notification fatigue that people are talking about. The analog watch does its job and respects boundaries.

I think smartwatch pricing is another factor. An analog watch is like permanent jewelry, whereas one has to continually upgrade to the latest Garmin. We've seen some switching to lower-priced smartwatches like the Zepp Helio band.

Quiet luxury and old-money trends are also driving a pivot toward expensive analog watches, I feel. Personally, I purchased an analog watch for the first time in 5 years.
Lumida Wealth Management
1/ Daily News Round-Up:

- Anthropic–Pentagon feud escalates over AI terms
- Vance says Iran talks stall, sets 2-week window
- Apple decouples from Big Tech in AI volatility
- Berkshire slashes Amazon stake
- Uber invests $100M in robotaxi charging hubs
memenodes
every year

which year destroyed your mental health most?
God of Prompt
RT @godofprompt: 🚨 Holy shit… Stanford just published a paper that questions whether we even need humans to study humans.

The title sounds like a joke:

“This human study did not involve human subjects.”

But it’s dead serious.

The researchers are asking a controversial question:

Can LLM simulations count as behavioral evidence?

Here’s the core idea.

Instead of recruiting thousands of participants, running surveys, and waiting weeks for results, they simulate people using large language models.

Not generic prompts.

But structured simulations where the model is assigned demographic traits, preferences, beliefs, and contextual constraints.

Then they test whether the simulated responses statistically match real-world human data.
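The conditioning step described above can be sketched as rendering a structured participant profile into a prompt before the behavioral question is asked. The field names and prompt wording here are assumptions for illustration, not taken from the paper:

```python
# Hypothetical sketch of persona-conditioned simulation: a demographic/
# belief profile is rendered into a prompt for one simulated participant.
# Profile keys and wording are illustrative assumptions, not the paper's.

def build_persona_prompt(profile: dict, question: str) -> str:
    """Render a participant profile plus a survey question into a
    single prompt for an LLM-simulated respondent."""
    traits = "; ".join(f"{k}: {v}" for k, v in profile.items())
    return (
        f"You are a survey respondent with the following profile: {traits}. "
        f"Answer in character, with one short response.\n"
        f"Question: {question}"
    )

# One simulated participant (values are made up):
profile = {"age": 34, "country": "US", "risk tolerance": "low"}
prompt = build_persona_prompt(
    profile, "Would you accept a 50/50 bet to win $110 or lose $100?"
)
```

In an actual pipeline, this prompt would be sent to an LLM once per sampled profile, and the answers aggregated into a response distribution.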

And disturbingly… they often do.

Across multiple behavioral tasks, the LLM-generated “participants” reproduced known human patterns:

• Established psychological biases
• Preference distributions
• Decision-making trends
• Even demographic splits

Not perfectly. Not universally.

But far closer than most people would expect.

The key contribution of the paper isn’t “LLMs are human.”

It’s validation.

They systematically compare simulated outputs to ground-truth human datasets and evaluate alignment using statistical benchmarks.
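One simple form that validation step could take is a distance metric between the simulated and human answer distributions. The metric choice, the numbers, and the threshold below are illustrative assumptions, not the paper's actual benchmarks:

```python
# Minimal sketch of the validation step: compare the answer distribution
# from simulated participants against a ground-truth human survey.
# Total variation distance is one simple alignment metric; the
# distributions and threshold are illustrative, not from the paper.

def total_variation_distance(p: dict, q: dict) -> float:
    """0.0 = identical distributions, 1.0 = fully disjoint."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

human = {"accept": 0.32, "reject": 0.68}       # illustrative survey result
simulated = {"accept": 0.36, "reject": 0.64}   # illustrative LLM output

tvd = total_variation_distance(human, simulated)
aligned = tvd < 0.05  # assumed calibration threshold
```

When the distance stays within a pre-registered bound across tasks, the simulation can be treated as a calibrated instrument rather than storytelling; when it drifts, it gets flagged.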

When the distributions match, the simulation isn’t just storytelling.

It becomes empirical evidence.

That’s the uncomfortable shift.

If a sufficiently constrained LLM simulation reproduces real behavioral patterns, does it become a legitimate experimental proxy?

Because if the answer is yes, this changes everything:

• Behavioral economics
• Political science
• Market research
• Policy testing
• UX experimentation

You could prototype social interventions before deploying them in the real world.

You could stress-test messaging strategies across simulated demographics.

You could explore rare edge-case populations without recruitment bottlenecks.

But here’s where Stanford is careful.

The models don’t “understand” humans.

They reflect training data patterns.

They can amplify biases.

They can collapse under distribution shift.

And they can simulate plausibility without causality.

So the paper doesn’t claim replacement.

It argues for calibration.

LLM simulations can be useful behavioral instruments if validated against real data and bounded within known limits.

That’s the distinction.

Not synthetic humans.

Synthetic behavioral priors.

The wild part?

This paper forces academia to confront something bigger:

If large models encode large-scale behavioral regularities from the internet, they become compressed maps of human tendencies.

Not minds.

Maps.

And maps can be useful.

We’re moving from “AI as text generator” to “AI as behavioral simulator.”

The ethical, methodological, and epistemological implications are massive.

Because once simulation becomes statistically reliable, the bottleneck in social science shifts from data collection to model alignment.

And that might be the real revolution hidden in this paper.