Offshore
Photo
The Transcript
Verisk CEO: "Verisk delivered a solid Q4 result, capping off another year of growth in line with our long-term financial targets. We enter 2026 with clear strategic momentum."

$VRSK: +13% Pre-Market https://t.co/h65oFHBi8a
Photo
The Transcript
Berkshire portfolio updates: https://t.co/AhDrizd1gb
Video
memenodes
when you realize bear market is really here and there is nothing you can do but go back to 9-5 https://t.co/lAsccGN5sv
Photo
memenodes
Ethereum is down bad from “future of finance” to this shit... https://t.co/6Pn7GTKdsH
Michael Fritzell (Asian Century Stocks)
RT @nitinkinvest: I'm not entirely sure, but I've been monitoring this for about six months. There seems to be a sudden wave of notification fatigue that people are talking about. The analog watch does its job and respects boundaries.

I think smartwatch pricing is another factor. An analog watch is like permanent jewelry, whereas one has to continually upgrade to the latest Garmin. We've seen some switching to lower-priced smartwatches like the Zepp Helio band.

Quiet luxury and old-money trends are also driving a pivot toward expensive analog watches, I feel. I personally purchased an analog watch for the first time in five years.
Lumida Wealth Management
1/ Daily News Round-Up:

- Anthropic–Pentagon feud escalates over AI terms
- Vance says Iran talks stall, sets 2-week window
- Apple decouples from Big Tech in AI volatility
- Berkshire slashes Amazon stake
- Uber invests $100M in robotaxi charging hubs
memenodes
every year

which year destroyed your mental health most?
- 🍂
Photo
God of Prompt
RT @godofprompt: 🚨 Holy shit… Stanford just published a paper that questions whether we even need humans to study humans.

The title sounds like a joke:

“This human study did not involve human subjects.”

But it’s dead serious.

The researchers are asking a controversial question:

Can LLM simulations count as behavioral evidence?

Here’s the core idea.

Instead of recruiting thousands of participants, running surveys, and waiting weeks for results, they simulate people using large language models.

Not generic prompts.

But structured simulations where the model is assigned demographic traits, preferences, beliefs, and contextual constraints.

Then they test whether the simulated responses statistically match real-world human data.
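The "structured simulation" setup might look something like this in code. A hypothetical sketch only: the field names, prompt wording, and persona profile below are illustrative, not taken from the paper.

```python
# Hypothetical sketch: render a structured persona spec into a survey prompt.
# The paper's actual prompt format is not public here; this shows the shape of
# the idea (demographics + beliefs + constraints -> constrained respondent).

def build_persona_prompt(persona: dict, question: str) -> str:
    """Render demographic traits, preferences, and beliefs into a respondent prompt."""
    traits = "\n".join(f"- {key}: {value}" for key, value in persona.items())
    return (
        "You are simulating a survey respondent with this profile:\n"
        f"{traits}\n"
        "Answer strictly in character, choosing only from the allowed options.\n\n"
        f"Question: {question}"
    )

# Illustrative persona; any real study would sample these fields from census-like data.
persona = {
    "age": 34,
    "occupation": "nurse",
    "political_leaning": "moderate",
    "risk_preference": "somewhat risk-averse",
}
prompt = build_persona_prompt(
    persona, "Would you choose a guaranteed $50 or a 50% chance of $120?"
)
```

Each such prompt yields one simulated "participant"; repeating over thousands of sampled personas produces the response distributions that get compared against real data.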

And disturbingly… they often do.

Across multiple behavioral tasks, the LLM-generated “participants” reproduced known human patterns:

• Established psychological biases
• Preference distributions
• Decision-making trends
• Even demographic splits

Not perfectly. Not universally.

But far closer than most people would expect.

The key contribution of the paper isn’t “LLMs are human.”

It’s validation.

They systematically compare simulated outputs to ground-truth human datasets and evaluate alignment using statistical benchmarks.

When the distributions match, the simulation isn’t just storytelling.

It becomes empirical evidence.

That’s the uncomfortable shift.

If a sufficiently constrained LLM simulation reproduces real behavioral patterns, does it become a legitimate experimental proxy?

Because if the answer is yes, this changes everything:

• Behavioral economics
• Political science
• Market research
• Policy testing
• UX experimentation

You could prototype social interventions before deploying them in the real world.

You could stress-test messaging strategies across simulated demographics.

You could explore rare edge-case populations without recruitment bottlenecks.

But here’s where Stanford is careful.

The models don’t “understand” humans.

They reflect training data patterns.

They can amplify biases.

They can collapse under distribution shift.

And they can simulate plausibility without causality.

So the paper doesn’t claim replacement.

It argues for calibration.

LLM simulations can be useful behavioral instruments if validated against real data and bounded within known limits.

That’s the distinction.

Not synthetic humans.

Synthetic behavioral priors.

The wild part?

This paper forces academia to confront something bigger:

If large models encode large-scale behavioral regularities from the internet, they become compressed maps of human tendencies.

Not minds.

Maps.

And maps can be useful.

We’re moving from “AI as text generator” to “AI as behavioral simulator.”

The ethics, methodology, and epistemology implications are massive.

Because once simulation becomes statistically reliable, the bottleneck in social science shifts from data collection to model alignment.

And that might be the real revolution hidden in this paper.
Video
Startup Archive
Jensen Huang explains his decision to start NVIDIA as a parent with young children

Jensen was 30 years old when he quit his job at LSI Logic to co-found NVIDIA in 1993. Asked how he made this decision as a young parent at the time, he responds:

“I believed in [my co-founders], and I believed in myself… Even though we had a family and our kids were young — they were just one and two — and that could cause us to be quite risk averse, I was never concerned about being able to do something else if it didn’t work out. And so I felt like I wasn’t risking anything. Maybe that’s too careless by some other standards, but I really believed it. I believed that we weren’t putting our family in harm’s way. And if things didn’t work out, there’ll be an even better job for me somewhere, someday… Lori and I were young and it wasn’t a decision that was difficult per se. It was probably even less than a dinner conversation. Maybe even less than that.”

Jensen offers the following advice to the Berkeley students in the audience:

“All of you are young and bright, and there’s so much opportunity out there. I genuinely don’t believe that when you make a decision to start a company or join a startup that it’s a horribly difficult life decision. The only thing that really matters, in my estimation, is are you going to love the people that you work with? Are you going to love the work that you’re going to do? Are you going to love it so much that all the pain and suffering that’s going to come your way — which I promise you will be lots: setbacks, disappointments, the list of bad days — you’ll be able to keep carrying on. So long as you love the work that you do, you’ll be able to keep carrying on. That’s really it. That’s 100% of the wisdom.”

Video source: @BerkeleyHaas (2023)
Photo
DAIR.AI
RT @omarsar0: How good are AI agents at long-horizon CLI programming?

Not very. Leading agents succeed less than 20% of the time.

LongCLI-Bench is a benchmark of 20 complex tasks spanning building from scratch, adding features, fixing bugs, and refactoring code, all executed through command-line interfaces.

Failures typically occur early in task execution. Self-correction provides minimal improvement.

But human-agent collaboration through plan guidance and interactive input substantially enhances performance.

Why does it matter?

The benchmark highlights that for real-world programming tasks, the path forward isn't fully autonomous agents. It's human-agent collaboration with structured oversight.

Paper: https://t.co/oTNUEnvb1j

Learn to build effective AI agents in our academy: https://t.co/1e8RZKs4uX