Offshore
Giuliano
Call me Ted -> The Emperor of All Maladies (need to finish) -> Scale (in process) -> Pavlov.
Maybe return to Buffett’s letters.
Left at 2002. https://t.co/YxG5TBlB23
The Transcript
$KKR Co-CEO stresses that not all software faces disruption from AI:
"the market right now, and as you know, this happens when there's this much emotion all at once, is painting everything with one brush. We would just caution that not all software investments are the same" https://t.co/UtRlORvbFH
$ARES CEO: AI will disrupt some software, not all.
"It is interesting to see how the markets are thinking about software companies as all being equal and not really understanding the difference between companies that could get disrupted by AI in places like digital content creation or data analytics and visualization versus like real entrenched enterprise systems." - The Transcript
Dimitry Nakhla | Babylon Capital®
RT @DimitryNakhla: Below is a comparison of 20 stocks & their lowest weekly RSI over the past 10 years versus their current RSI:
Stock | Lowest RSI (10Y) | Current RSI
1. $CRWD | 31 | 34
2. $FICO | 27 | 34
3. $SPGI | 30 | 33
4. $AXON | 30 | 31
5. $BMI | 28 | 31
6. $QCOM | 34 | 30
7. $RACE | 30 | 30
8. $CRM | 29 | 31
9. $MSFT | 29 | 29
10. $ORCL | 29 | 29
11. $ADSK | 28 | 29
12. $ADBE | 28 | 28
13. $VEEV | 27 | 27
14. $INTU | 26 | 26
15. $NFLX | 15 | 26
16. $NOW | 21 | 21
17. $CSU | 20 | 20
18. $VRSK | 19 | 19
19. $TYL | 17 | 17
20. $ROP | 14 | 14
___
*RSI (Relative Strength Index) measures how overbought or oversold a stock is based on recent price momentum.
**Photo below is the $IGV Software ETF
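The tweet does not say how its RSI values were computed, but tables like this one conventionally use Wilder's 14-period RSI. A minimal sketch of that standard formula, for reference (period and smoothing are the conventional defaults, not confirmed settings from the tweet):

```python
def rsi(closes, period=14):
    """Return Wilder's Relative Strength Index for a list of closing prices."""
    if len(closes) <= period:
        raise ValueError("need more closes than the RSI period")
    gains, losses = [], []
    for prev, curr in zip(closes, closes[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    # Seed with simple averages over the first `period` changes,
    # then apply Wilder's exponential smoothing to the rest.
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0  # only gains in the window: maximally overbought
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

Readings below roughly 30 are conventionally called oversold, above 70 overbought, which is why the table highlights names trading near their decade lows.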
God of Prompt
RT @godofprompt: How to prompt Claude Opus 4.6 to get shockingly good outputs while cutting your API costs by 60%: https://t.co/3ddCvGqFZe
Michael Fritzell (Asian Century Stocks)
RT @TheStalwart: The rise of Chinese running shoe brands on r/runningshoegeeks https://t.co/ASYhpwaVJs
Moon Dev
this is actually a billion dollar use case of clawdbot
cant believe im dropping it
the guy who inspired this ran up a net worth of $40m off $20,000
this replaces his manual strategy https://t.co/5ASDwqwfFZ
God of Prompt
if you outsource everything to AI, including your thinking, there will be no need for anyone to have you use that very AI
You will become obsolete
Your taste, skills, uniqueness, original thought will be lost unless you practice it
And the only way to practice it is not to outsource it to AI
You’re cooked if you over-rely on AI for everything. - God of Prompt
Brady Long
RT @thisguyknowsai: 10 Powerful Gemini 3.0 prompts that will help you build a million dollar business (steal them): https://t.co/oL0BVzPIum
God of Prompt
vibe coding = coding
Software development is undergoing a renaissance in front of our eyes.
If you haven't used the tools recently, you likely are underestimating what you're missing. Since December, there's been a step function improvement in what tools like Codex can do. Some great engineers at OpenAI yesterday told me that their job has fundamentally changed since December. Prior to then, they could use Codex for unit tests; now it writes essentially all the code and does a great deal of their operations and debugging. Not everyone has yet made that leap, but it's usually because of factors besides the capability of the model.
Every company faces the same opportunity now, and navigating it well — just like with cloud computing or the Internet — requires careful thought. This post shares how OpenAI is currently approaching retooling our teams towards agentic software development. We're still learning and iterating, but here's how we're thinking about it right now:
As a first step, by March 31st, we're aiming that:
(1) For any technical task, the tool of first resort for humans is interacting with an agent rather than using an editor or terminal.
(2) The default way humans utilize agents is explicitly evaluated as safe, but also productive enough that most workflows do not need additional permissions.
In order to get there, here's what we recommended to the team a few weeks ago:
1. Take the time to try out the tools. The tools do sell themselves — many people have had amazing experiences with 5.2 in Codex, after having churned from codex web a few months ago. But many people are also so busy they haven't had a chance to try Codex yet or got stuck thinking "is there any way it could do X" rather than just trying.
- Designate an "agents captain" for your team — the primary person responsible for thinking about how agents can be brought into the team's workflow.
- Share experiences or questions in a few designated internal channels
- Take a day for a company-wide Codex hackathon
2. Create skills and AGENTS[.md].
- Create and maintain an AGENTS[.md] for any project you work on; update the AGENTS[.md] whenever the agent does something wrong or struggles with a task.
- Write skills for anything that you get Codex to do, and commit it to the skills directory in a shared repository
3. Inventory and make accessible any internal tools.
- Maintain a list of tools that your team relies on, and make sure someone takes point on making it agent-accessible (such as via a CLI or MCP server).
4. Structure codebases to be agent-first. With the models changing so fast, this is still somewhat untrodden ground, and will require some exploration.
- Write tests which are quick to run, and create high-quality interfaces between components.
5. Say no to slop. Managing AI-generated code at scale is an emerging problem, and will require new processes and conventions to keep code quality high.
- Ensure that some human is accountable for any code that gets merged. As a code reviewer, maintain at least the same bar as you would for human-written code, and make sure the author understands what they're submitting.
6. Work on basic infra. There's a lot of room for everyone to build basic infrastructure, which can be guided by internal user feedback. The core tools are getting a lot better and more usable, but there's a lot of infrastructure that currently goes around the tools, such as observability, tracking not just the committed code but the agent trajectories that led to them, and central management of the tools that agents are able to use.
Overall, adopting tools like Codex is not just a technical but also a deep cultural change, with a lot of downstream implications to figure out. We encourage every manager to drive this with their team, and to think through other action items — for example, per item 5 above, what else can prevent a lot of "functionally-correct but poorly-maintainable code" from creeping into codeb[...]
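Item 3 above recommends making internal tools agent-accessible "such as via a CLI or MCP server." One minimal sketch of the CLI route: wrap an internal lookup in a small command with explicit flags and machine-readable output an agent can parse. Everything here (`lookup_owner`, the service registry, the flag names) is invented for illustration, not an OpenAI or Codex API:

```python
import argparse
import json

# Stand-in for an internal service-ownership registry (hypothetical data).
SERVICE_OWNERS = {
    "billing": "payments-team",
    "search": "relevance-team",
}

def lookup_owner(service: str) -> str:
    """Return the owning team for a service, or exit with a clear error."""
    try:
        return SERVICE_OWNERS[service]
    except KeyError:
        raise SystemExit(f"unknown service: {service}")

def main(argv=None):
    parser = argparse.ArgumentParser(description="Who owns an internal service?")
    parser.add_argument("service", help="service name to look up")
    parser.add_argument("--json", action="store_true",
                        help="emit machine-readable output for agents")
    args = parser.parse_args(argv)
    owner = lookup_owner(args.service)
    if args.json:
        print(json.dumps({"service": args.service, "owner": owner}))
    else:
        print(owner)

if __name__ == "__main__":
    main()
```

The design point is the same one the memo makes: a documented `--help`, stable flags, and JSON output give an agent an interface it can discover and call reliably, unlike a web dashboard.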
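Item 4's advice (quick tests, high-quality interfaces between components) can be sketched concretely: depend on a narrow interface rather than a real external resource, so the test is deterministic and runs in milliseconds — the kind of feedback loop an agent can iterate against. The `Clock`/`is_overdue` names are invented for illustration:

```python
from dataclasses import dataclass
from typing import Protocol

class Clock(Protocol):
    """Narrow interface: the only thing callers may ask for is the time."""
    def now_ts(self) -> int: ...

@dataclass
class FixedClock:
    """Test double: a clock frozen at a known timestamp."""
    ts: int
    def now_ts(self) -> int:
        return self.ts

def is_overdue(due_ts: int, clock: Clock) -> bool:
    """An item is overdue once the clock passes its due timestamp."""
    return clock.now_ts() > due_ts

def test_is_overdue():
    clock = FixedClock(ts=1_000)
    assert is_overdue(due_ts=999, clock=clock)
    assert not is_overdue(due_ts=1_000, clock=clock)

test_is_overdue()  # no I/O, no sleeping, no flakiness
```

Because the component boundary is an explicit interface, an agent editing `is_overdue` can rerun the test instantly instead of waiting on wall-clock time or a slow integration environment.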
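Item 6 mentions tracking "not just the committed code but the agent trajectories that led to them." One minimal way to sketch that idea: append each agent step to a JSONL log keyed by run, so a trajectory can be reconstructed later. The record schema here is an assumption for illustration, not how OpenAI's internal tooling works:

```python
import json
import time
from pathlib import Path

def log_step(log_path: Path, run_id: str, role: str, content: str) -> None:
    """Append one agent step (prompt, tool call, diff, etc.) to a JSONL log."""
    record = {"run_id": run_id, "role": role, "content": content, "ts": time.time()}
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def load_trajectory(log_path: Path, run_id: str) -> list[dict]:
    """Reconstruct one run's steps, in order, from the shared log."""
    steps = []
    with log_path.open(encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record["run_id"] == run_id:
                steps.append(record)
    return steps
```

Append-only JSONL keeps the writer trivial while still letting later tooling answer "what sequence of agent actions produced this commit?"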