Offshore
Moon Dev
Building the Ultimate Solana Sniper: How to Filter 1,000x Gems Before They Ever Hit Birdeye

The ultimate beginner's guide to Solana trading bots is actually a blueprint for financial freedom, because the market is a casino where the house always wins…

The game with these degenerate meme coins: I like to throw ten dollars at a bunch of different tokens, and if one hits a thousand-percent return it covers all the small losses from the rugs. The bot trades emotion-free, which is impossible for a human watching their life savings bounce around on a chart.

You can use API endpoints to check the last trade time and confirm that a token still has active buyers in the past hour. If a token has zero trades in sixty minutes, we drop it immediately because it is already a dead ship. It be like that sometimes in the wild west of Solana, but as long as we have the tools to filter the noise, we will find the signal.

Building a market maker is the next level after you master the sniper bot, because it lets you create your own volume and profit from the spread. We are constantly coming up with new strategies and backtesting them to see whether they would have worked in the past before we ever risk a single cent. This is the path from being a gambler to being a quantitative trader who actually understands the math behind the moves.
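The backtesting idea can be sketched as a minimal single-trade simulation over a historical price series. The function name and the target/stop exit rules here are hypothetical illustrations, not a description of any specific strategy from the thread:

```python
def backtest(prices: list[float], entry: float,
             target: float, stop: float) -> float:
    """Simulate one trade on a historical price series.

    Enter at `entry`, exit at the first price that touches `target`
    (take profit) or `stop` (cut losses); otherwise exit at the final
    price. Returns the trade's return as a fraction (0.10 = +10%).
    """
    for price in prices:
        if price >= target:
            return target / entry - 1
        if price <= stop:
            return stop / entry - 1
    return prices[-1] / entry - 1
```

Running this over many historical series gives a rough win/loss distribution for a rule before any real money is at risk; real backtests would also model fees, slippage, and fill quality.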

There is so much opportunity in this space right now that it is almost overwhelming, but you have to take action to separate yourself from the crowd. Most traders will keep chasing pumps and losing money because they are too lazy to learn how to code their own edge. I am going to keep building these crazy bots and showing you every single step, because I want us all to win together.
tweet
God of Prompt
RT @godofprompt: 🚨 Holy shit… Stanford just published a paper that questions whether we even need humans to study humans.

The title sounds like a joke:

“This human study did not involve human subjects.”

But it’s dead serious.

The researchers are asking a controversial question:

Can LLM simulations count as behavioral evidence?

Here’s the core idea.

Instead of recruiting thousands of participants, running surveys, and waiting weeks for results, they simulate people using large language models.

Not generic prompts.

But structured simulations where the model is assigned demographic traits, preferences, beliefs, and contextual constraints.

Then they test whether the simulated responses statistically match real-world human data.

And disturbingly… they often do.

Across multiple behavioral tasks, the LLM-generated “participants” reproduced known human patterns:

• Established psychological biases
• Preference distributions
• Decision-making trends
• Even demographic splits

Not perfectly. Not universally.

But far closer than most people would expect.

The key contribution of the paper isn’t “LLMs are human.”

It’s validation.

They systematically compare simulated outputs to ground-truth human datasets and evaluate alignment using statistical benchmarks.
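The validation step — checking whether simulated response distributions match real ones — can be illustrated with a simple distance metric. The paper's actual statistical benchmarks are more involved; this is only a hypothetical sketch using total variation distance over categorical answers:

```python
from collections import Counter

def total_variation(sim: list[str], real: list[str]) -> float:
    """Total variation distance between two categorical response
    distributions: 0.0 means identical, 1.0 means fully disjoint.

    `sim` holds answers from LLM-simulated participants, `real`
    holds answers from a ground-truth human dataset.
    """
    n_sim, n_real = len(sim), len(real)
    p, q = Counter(sim), Counter(real)
    categories = set(p) | set(q)
    return 0.5 * sum(abs(p[c] / n_sim - q[c] / n_real) for c in categories)
```

A small distance on held-out tasks is the kind of evidence that lets a simulation count as more than storytelling; a large one flags exactly the distribution shift the thread warns about.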

When the distributions match, the simulation isn’t just storytelling.

It becomes empirical evidence.

That’s the uncomfortable shift.

If a sufficiently constrained LLM simulation reproduces real behavioral patterns, does it become a legitimate experimental proxy?

Because if the answer is yes, this changes everything:

• Behavioral economics
• Political science
• Market research
• Policy testing
• UX experimentation

You could prototype social interventions before deploying them in the real world.

You could stress-test messaging strategies across simulated demographics.

You could explore rare edge-case populations without recruitment bottlenecks.

But here’s where Stanford is careful.

The models don’t “understand” humans.

They reflect training data patterns.

They can amplify biases.

They can collapse under distribution shift.

And they can simulate plausibility without causality.

So the paper doesn’t claim replacement.

It argues for calibration.

LLM simulations can be useful behavioral instruments if validated against real data and bounded within known limits.

That’s the distinction.

Not synthetic humans.

Synthetic behavioral priors.

The wild part?

This paper forces academia to confront something bigger:

If large models encode large-scale behavioral regularities from the internet, they become compressed maps of human tendencies.

Not minds.

Maps.

And maps can be useful.

We’re moving from “AI as text generator” to “AI as behavioral simulator.”

The implications for ethics, methodology, and epistemology are massive.

Because once simulation becomes statistically reliable, the bottleneck in social science shifts from data collection to model alignment.

And that might be the real revolution hidden in this paper.
tweet
Benjamin Hernandez😎
$WING 14% EPS COMPOUNDER!

Wingstop +14.55% hitting $288.41. EPS growth is +79.0% YoY. Buy ratings across the board. The fundamental moat is widening.
Buy on any shallow pullback, the trend is unstoppable.

DM to get the specific Wave 3 entry points.
$SOC $BMNR $BYND $NB $PULM https://t.co/zSGxt9jfZ9
tweet
Brady Long
RT @thisdudelikesAI: the last time something this big happened to an industry this fast was kodak. and kodak had a warning...

PolyAI has raised $200M from Nvidia, Khosla Ventures, and multiple top VCs.

We're one of the fastest-growing companies in the UK, and we handle 500M+ calls for:

• Marriott
• PG&E
• Gordon Ramsay's restaurants
• And 3,000 more real deployments

Which means that if you've ever called them, chances are you've talked to our voice agents.

Every restaurant we onboard books thousands in revenue within 30 days.

But how?

Because PolyAI works 24/7, answering every call in <2
- PolyAI
tweet
The Transcript
$NOW ServiceNow CEO: "Our pipelines have never been better. Let me be clear, never been better...So you should feel really good about ServiceNow."
tweet
Dimitry Nakhla | Babylon Capital®
$MCO CEO was just asked about AI threats & the durability of its moat. His response:

“First of all, a lot of the data simply isn’t available to the public… built on decades of commercial agreements & IP rights… legal & regulatory constraints… semantic complexity… entity resolution… historical depth… governance…

Every bank I talk to tells me, ‘Good enough is not good enough for our institution.’ What they want from us, they want to move, in many cases, to fewer trusted providers…

“We’ve never had seat-based licenses…thinking as we speak and trialing different pricing models to be able to capture some of that upside.”
tweet
Moon Dev
If you actually want to use openclaw

I made all the mistakes so you don’t have to https://t.co/T4nrspt93T
tweet
App Economy Insights
RT @EconomyApp: 💰 13F filings just dropped!

What was Wall Street buying in Q4?

🔎 A lot more $GOOGL
⚡️ AI power/infrastructure
🌏 Global e-commerce players

But we have surprises (not AI-related).
https://t.co/Oqm4jrfCTb
tweet
NecoKronos
RT @anthdm: We are soon rolling out historical data for trades. Both on MMT and API.

This means you can have sub-1-minute backfills for your tools.

We are also rolling out Hyperliquid full data through our APIs.

- full orderbook depth
- positions
- liquidations

The whole shebang.

Might be a good time to start using MMT.

We all love you, you know that.
tweet
The Transcript
RT @TheTranscript_: $OWL CEO: Hyperscale data center lease financings now reaching unprecedented $30B–$50B transaction sizes

"Our biggest deal is just under $30B for Meta. We're working on deals. I am not exaggerating. This is just for the shell and maybe some GPUs in there, $50B. I mean, in our lifetime, I never thought I'd be financing deals of that size, but I never thought I'd come across CapEx projects of that magnitude."
tweet
Startup Archive
Sam Altman on the advice he wishes he had received when he enrolled in YC in 2005

Y Combinator CEO Garry Tan asks OpenAI founder Sam Altman what he wishes he had known when he was going through YC back in 2005.

Sam responds:

“I wish someone had taught me the importance of conviction and resilience over a long period of time. People don’t really talk about how hard that is. It’s easy for a little while, but your reserves kind of wear down on it.”

He continues:

“Also just trust that it’s eventually going to work out. Obviously my first startup [Loopt] didn’t work that well. A lot of people give up after one failed startup, but startups don’t work out all the time. Learning how to keep working through that is really important. So is developing trust in your own instincts and increasing that trust as you refine your decision-making instincts over time. Courage to work on stuff that is out of fashion but is what you believe in and care about is also really important.”

Sam recently had a kid and reflects on how everyone will tell you that it’s “the best thing you will ever do, but also the hardest thing you will ever do.” He believes startups are similar:

“The good parts are really great — better than you think. And the hard parts are shockingly much harder than anyone can express in a way that makes any sense to you, and you have to just keep going.”

Video source: @ycombinator (2025)
tweet
Fiscal.ai
Wingstop just crossed 3,000 restaurants globally.

They've grown locations at a 14% CAGR since 2019.

Why wouldn't this continue?

$WING https://t.co/dSbJZa7ukj
tweet