memenodes
One day I will go away from everyone to the mountains like this https://t.co/OlzsDEVkKa
Fiscal.ai
RT @StockMKTNewz: Netflix $NFLX brought in $5.3 Billion from the United States 🇺🇸 and Canada 🇨🇦 last quarter, up from $3.3B in Q4 2021 https://t.co/ro6g2sG1Pf
God of Prompt
RT @ytscribeai: life hack for content creators:
1/ n8n watches youtube channels via RSS (no API key)
2/ https://t.co/sWBIcxGUoD extracts transcripts ($0.0035 each)
3/ AI turns them into newsletters, blogs, tweets
one workflow. infinite content engine.
https://t.co/sWBIcxGUoD https://t.co/cRrSCMABas
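The three steps in the tweet can be sketched in plain Python. Step 1 uses YouTube's real keyless per-channel feed (https://www.youtube.com/feeds/videos.xml?channel_id=<ID>); the transcript service and LLM in steps 2 and 3 are behind shortened URLs in the tweet, so `fetch_transcript` and `summarize` below are hypothetical placeholders, not real APIs:

```python
# Sketch of the 3-step pipeline outside n8n. Steps 2 and 3 are stubs:
# the tweet's transcript service and LLM are unknown (shortened URLs),
# so fetch_transcript and summarize are illustrative placeholders.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
YT = "{http://www.youtube.com/xml/schemas/2015}"

def video_ids_from_feed(feed_xml: str) -> list[str]:
    """Step 1: parse YouTube's keyless Atom feed for a channel
    (https://www.youtube.com/feeds/videos.xml?channel_id=<ID>)."""
    root = ET.fromstring(feed_xml)
    return [e.findtext(f"{YT}videoId") for e in root.iter(f"{ATOM}entry")]

def fetch_transcript(video_id: str) -> str:
    """Step 2 (placeholder): call a transcript-extraction API here."""
    return f"<transcript of {video_id}>"

def summarize(transcript: str, fmt: str) -> str:
    """Step 3 (placeholder): send the transcript to an LLM with a
    format-specific prompt (newsletter, blog post, tweet thread)."""
    return f"[{fmt}] draft from {transcript}"

def run(feed_xml: str) -> list[str]:
    """One workflow: each new video becomes three content drafts."""
    drafts = []
    for vid in video_ids_from_feed(feed_xml):
        transcript = fetch_transcript(vid)
        drafts += [summarize(transcript, f) for f in ("newsletter", "blog", "tweet")]
    return drafts
```

In n8n the same shape would be an RSS Feed Read trigger feeding an HTTP Request node and then an LLM node; the Python version just makes the data flow explicit.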
God of Prompt
RT @alex_prompter: This paper from Google DeepMind, Meta, Amazon, and Yale University quietly explains why most “AI agents” feel smart in demos and dumb in real work.
The core idea is simple but uncomfortable: today’s LLMs don’t reason, they react. They generate fluent answers token by token, but they don’t explicitly plan, reflect, or decide when to stop and rethink. This paper argues that real progress comes from turning LLMs into agentic reasoners: systems that can set goals, break them into subgoals, choose actions, evaluate outcomes, and revise their strategy mid-flight.
The authors formalize agentic reasoning as a loop, not a prompt:
observe → plan → act → reflect → update state → repeat.
Instead of one long chain-of-thought, the model maintains an internal task state. It decides what to think about next, not just how to finish the sentence.
This is why classic tricks like longer CoT plateau. You get more words, not better decisions.
One of the most important insights: reasoning quality collapses when control and reasoning are mixed. When the same prompt tries to plan, execute, critique, and finalize, errors compound silently. Agentic setups separate these roles.
Planning is explicit. Execution is scoped. Reflection is delayed and structured.
The paper shows that even strong frontier models improve dramatically when given:
• explicit intermediate goals
• checkpoints for self-evaluation
• the ability to abandon bad paths
• memory of past attempts
No new weights. No bigger models. Just better control over when and why the model reasons.
The takeaway is brutal for the industry: scaling tokens and parameters won’t give us reliable agents. Architecture will. Agentic reasoning isn’t a feature; it’s the missing operating system for LLMs.
Most “autonomous agents” today are just fast typists with tools.
This paper explains what it actually takes to build thinkers.
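The loop the thread describes can be made concrete with a toy sketch. The "model" calls below are deterministic stubs standing in for separate, narrowly scoped LLM calls (planner, executor, critic); all names are illustrative, not taken from the paper. The point is the structure: explicit task state, separated roles, a reflection checkpoint, memory of abandoned paths.

```python
# Toy observe -> plan -> act -> reflect -> update loop. Each role is a
# separate function (stand-in for a scoped LLM call); errors are caught
# at the reflection checkpoint instead of compounding silently.
from dataclasses import dataclass, field

@dataclass
class TaskState:
    goal: int                      # toy goal: reach this exact number
    value: int = 0                 # current progress
    attempts: list = field(default_factory=list)  # memory of bad paths
    done: bool = False

def plan_step(state: TaskState) -> str:
    """Planning is explicit and separate: pick the next action,
    skipping actions that the memory says already failed."""
    for action in ("add_3", "add_1"):
        if action not in state.attempts:
            return action
    return "add_1"

def act(state: TaskState, action: str) -> int:
    """Execution is scoped: apply exactly one action, nothing else."""
    return state.value + (3 if action == "add_3" else 1)

def reflect(state: TaskState, action: str, result: int) -> bool:
    """Reflection is delayed and structured: a checkpoint that decides
    whether to keep this path or abandon it (and remember why)."""
    if result > state.goal:
        state.attempts.append(action)  # record the bad path
        return False
    return True

def run_agent(goal: int, max_steps: int = 20) -> TaskState:
    state = TaskState(goal=goal)
    for _ in range(max_steps):         # a loop, not one long prompt
        if state.value == state.goal:  # observe
            state.done = True
            break
        action = plan_step(state)      # plan
        result = act(state, action)    # act
        if reflect(state, action, result):  # reflect / checkpoint
            state.value = result       # update state
    return state
```

With goal 7 the agent takes `add_3` three times, overshoots to 9, abandons that path, falls back to `add_1`, and finishes, which is exactly the "ability to abandon bad paths plus memory of past attempts" behavior a single chain-of-thought prompt lacks.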
Startup Archive
Mark Zuckerberg on how to avoid bad hires when your startup is growing quickly
As Mark explains, every fast-growing startup will repeatedly face the choice: “Do I hire the person who’s in front of me now because they seem good?” or “Do I hold out to get someone who’s even better?”
Mark offers his personal heuristic for founders facing this choice:
“The heuristic that I always focused on for myself and my own kind of direct hiring, that I think works when you recurse it through the organization, is that you should only hire someone to be on your team if you would be happy working for them in an alternate universe. I think that works, and that’s basically how I’ve tried to build my team.”
He continues:
“I’m not in a rush to not be running the company, but I think in an alternate universe where one of these other folks was running the company, I’d be happy to work for them. I feel like I’d learn from them. I respect their general judgment. They’re all very insightful. They have good values… I think if you apply that at every layer in the organization, then you’ll have a pretty strong organization.”
Video source: @lexfridman (2023)
Quiver Quantitative
Last week, we sent out an alert when we saw a new account on Polymarket betting on Rick Rieder as the next Fed chair.
They have now made $83K in unrealized gains, as his odds have skyrocketed.
Follow for more alerts on potential insider trades.
A new account on Polymarket has bet $14K on Rick Rieder being the next Fed chair.
They will win $180K if correct. https://t.co/wh34ZLs5wv - Quiver Quantitative
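The numbers imply the back-of-envelope math below. On Polymarket each share pays $1 if the outcome resolves yes, so payout equals share count and price equals implied probability. The current price used here is inferred from the reported $83K unrealized gain, not stated in the source:

```python
# Back-of-envelope math behind the tweet's figures. Shares pay $1 on a
# yes resolution, so payout = number of shares and price = implied
# probability. The 0.539 current price is inferred, not from the source.
def implied_probability(stake: float, payout: float) -> float:
    """Entry price per $1 share = stake / payout."""
    return stake / payout

def unrealized_gain(stake: float, payout: float, current_price: float) -> float:
    """Position value now minus cost basis: shares * price - stake."""
    shares = payout
    return shares * current_price - stake

entry = implied_probability(14_000, 180_000)    # ~0.078, i.e. ~7.8% odds at entry
gain = unrealized_gain(14_000, 180_000, 0.539)  # ~$83K if the price rose to ~54%
```

So the account bought in at roughly 8% implied odds, and the ~$83K unrealized gain is consistent with the market since repricing the outcome to roughly even money.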
Moon Dev
Claude Code for trading is nuts
this is how it can help you automate your trading in 2026
I'm not teaching this stuff anymore, so you may wanna bookmark this https://t.co/1b4s020nvP
Moon Dev
here's a loan
I am even giving out loans so you can join the all access pass
there is now a 2-pay option so you can get lifetime all access while it's still here
join now: https://t.co/5lubmNhCuD
moon dev https://t.co/76856ofJcw
Quiver Quantitative
Nancy Pelosi has filed new stock trades in January or February of every year since 2020.
Get ready for what might be her last.
Turn on notifications to get an alert right when she files, or check out our app:
https://t.co/hP9BfQ4z8g https://t.co/LxGVBv63cw