God of Prompt
RT @godofprompt: How to prompt Claude Opus 4.6 to get shockingly good outputs while cutting your API costs by 60%: https://t.co/3ddCvGqFZe
Michael Fritzell (Asian Century Stocks)
RT @TheStalwart: The rise of Chinese running shoe brands on r/runningshoegeeks https://t.co/ASYhpwaVJs
Moon Dev
this is actually a billion dollar use case of clawdbot
can't believe I'm dropping it
the guy who inspired this ran up a net worth of $40m off $20,000
this replaces his manual strategy https://t.co/5ASDwqwfFZ
God of Prompt
if you outsource everything to AI, including your thinking, no one will need you to use that very AI for them
You will become obsolete
Your taste, skills, uniqueness, and original thought will be lost unless you practice them
And the only way to practice them is to not outsource them to AI
Quoting @godofprompt on X: "You're cooked if you over-rely on AI for everything."
Brady Long
RT @thisguyknowsai: 10 Powerful Gemini 3.0 prompts that will help you build a million dollar business (steal them): https://t.co/oL0BVzPIum
God of Prompt
vibe coding = coding
Software development is undergoing a renaissance in front of our eyes.
If you haven't used the tools recently, you likely are underestimating what you're missing. Since December, there's been a step function improvement in what tools like Codex can do. Some great engineers at OpenAI yesterday told me that their job has fundamentally changed since December. Prior to then, they could use Codex for unit tests; now it writes essentially all the code and does a great deal of their operations and debugging. Not everyone has yet made that leap, but it's usually because of factors besides the capability of the model.
Every company faces the same opportunity now, and navigating it well — just like with cloud computing or the Internet — requires careful thought. This post shares how OpenAI is currently approaching retooling our teams towards agentic software development. We're still learning and iterating, but here's how we're thinking about it right now:
As a first step, by March 31st, we're aiming to ensure that:
(1) For any technical task, the tool of first resort for humans is interacting with an agent rather than using an editor or terminal.
(2) The default way humans use agents has been explicitly evaluated as safe, yet is productive enough that most workflows do not need additional permissions.
In order to get there, here's what we recommended to the team a few weeks ago:
1. Take the time to try out the tools. The tools do sell themselves — many people have had amazing experiences with 5.2 in Codex, after having churned from codex web a few months ago. But many people are also so busy they haven't had a chance to try Codex yet or got stuck thinking "is there any way it could do X" rather than just trying.
- Designate an "agents captain" for your team — the primary person responsible for thinking about how agents can be brought into the team's workflow.
- Share experiences or questions in a few designated internal channels
- Take a day for a company-wide Codex hackathon
2. Create skills and AGENTS.md files (a hypothetical AGENTS.md sketch follows after this memo).
- Create and maintain an AGENTS.md for any project you work on; update the AGENTS.md whenever the agent does something wrong or struggles with a task.
- Write skills for anything that you get Codex to do, and commit them to the skills directory in a shared repository.
3. Inventory and make accessible any internal tools.
- Maintain a list of tools that your team relies on, and make sure someone takes point on making each one agent-accessible (such as via a CLI or MCP server; see the sketch after this memo).
4. Structure codebases to be agent-first. With the models changing so fast, this is still somewhat untrodden ground, and will require some exploration.
- Write tests which are quick to run, and create high-quality interfaces between components.
5. Say no to slop. Managing AI-generated code at scale is an emerging problem, and it will require new processes and conventions to keep code quality high.
- Ensure that some human is accountable for any code that gets merged. As a code reviewer, maintain at least the same bar as you would for human-written code, and make sure the author understands what they're submitting.
6. Work on basic infra. There's a lot of room for everyone to build basic infrastructure, which can be guided by internal user feedback. The core tools are getting a lot better and more usable, but there's a lot of infrastructure that currently goes around the tools, such as observability, tracking not just the committed code but the agent trajectories that led to it, and central management of the tools that agents are able to use.
Overall, adopting tools like Codex is not just a technical but also a deep cultural change, with a lot of downstream implications to figure out. We encourage every manager to drive this with their team, and to think through other action items — for example, per item 5 above, what else can prevent a lot of "functionally-correct but poorly-maintainable code" from creeping into codeb[...]
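For item 2 in the memo above, an AGENTS.md is simply a plain-text brief the agent reads before working in a repository. The exact contents are project-specific, so the following is only a hypothetical sketch of what such a file might contain (all paths and commands here are made up for illustration):

    # AGENTS.md (hypothetical example)
    ## Project layout
    - api/     service code; entry point is api/main.py
    - worker/  background jobs
    - tests/   test suite; run with `make test` (finishes in under a minute)
    ## Conventions
    - Run the repo formatter (`make fmt`) before committing.
    - Never edit generated files under api/schema/.
    ## Known pitfalls
    - Integration tests need the DEV_DATABASE_URL environment variable; skip them if it is unset.

For item 3, making an internal tool agent-accessible can be as simple as wrapping it in a thin CLI that an agent can call from a shell. A minimal Python sketch, assuming a hypothetical internal endpoint https://deploys.internal/api/status (not a real service):

    #!/usr/bin/env python3
    # deploy-status: hypothetical CLI wrapper so an agent can query deploy state.
    import argparse
    import json
    import urllib.request

    def main() -> None:
        parser = argparse.ArgumentParser(description="Show deploy status for a service.")
        parser.add_argument("service", help="service name, e.g. api or worker")
        args = parser.parse_args()
        # Hypothetical internal endpoint; substitute whatever your tooling actually exposes.
        url = f"https://deploys.internal/api/status?service={args.service}"
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(json.dumps(json.load(resp), indent=2))

    if __name__ == "__main__":
        main()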
Jukan
Samsung Electronics to Begin World's First HBM4 Mass Production After Lunar New Year Holiday…Seizing Initiative in Next-Generation Market
Samsung Electronics will become the world's first to mass-produce and ship the next-generation High Bandwidth Memory (HBM), HBM4—a 'game changer' for artificial intelligence (AI) semiconductors with the world's highest performance—after the Lunar New Year holiday.
This strategy aims to overcome the setbacks with previous-generation products that sparked crisis concerns in its semiconductor business, seize the initiative in the next-generation market, and solidify its position as the world's number one memory manufacturer.
According to industry sources on the 8th, Samsung Electronics has decided to begin mass production shipments of HBM4 to NVIDIA after this Lunar New Year holiday, possibly as early as the third week of this month.
Samsung Electronics passed quality tests with NVIDIA early on and received purchase orders (PO), and reportedly finalized this schedule after comprehensively considering NVIDIA's launch plans for the 'Vera Rubin' AI accelerator that will incorporate HBM4.
The volume of HBM4 samples Samsung Electronics is supplying for customer product module testing is also observed to have expanded significantly with this PO.
NVIDIA is expected to unveil 'Vera Rubin' featuring Samsung Electronics' HBM4 for the first time at its technology conference 'GTC 2026' next month.
This marks the world's first mass production shipment of next-generation HBM4.
Moreover, Samsung Electronics' HBM4 is evaluated as having world-class performance.
From the start of HBM4 development, Samsung Electronics set its goal to achieve top performance exceeding JEDEC (international semiconductor standards organization) standards.
Accordingly, it made the bold move of simultaneously applying 1c (10nm-class 6th generation) DRAM and 4nm foundry process to this HBM4.
With this industry-unique process combination, Samsung Electronics' HBM4 achieves data processing speeds of up to 11.7 Gbps (gigabits per second), surpassing the JEDEC standard of 8Gbps.
This represents a 37% improvement over the JEDEC standard and 22% over the previous generation HBM3E (9.6Gbps).
Additionally, Samsung Electronics' HBM4 achieves memory bandwidth of up to 3TB/s per single stack, a 2.4-fold improvement over its predecessor, and offers capacity of up to 36GB with 12-layer stacking technology.
Future application of 16-layer stacking technology could enable capacity expansion up to 48GB.
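As a rough sanity check on the headline numbers, the per-stack bandwidth follows from the per-pin speed and the HBM4 interface width (2048 bits per stack under the JEDEC spec), and the capacities follow if one assumes 24Gb (3GB) DRAM dies; the die density is an assumption here, not something stated in the article. A quick Python check:

    # Rough arithmetic behind the reported figures (the 24Gb-per-die density is an assumption).
    pin_speed_gbps = 11.7              # Gbit/s per pin, as reported
    interface_bits = 2048              # HBM4 interface width per stack (JEDEC)
    bandwidth_gb_per_s = pin_speed_gbps * interface_bits / 8
    print(round(bandwidth_gb_per_s))   # ~2995 GB/s, i.e. roughly 3 TB/s per stack

    print(round(11.7 / 9.6 - 1, 2))    # ~0.22 -> the quoted ~22% gain over HBM3E's 9.6 Gbps

    die_gb = 3                         # assumed 24Gb (3GB) die
    print(12 * die_gb, 16 * die_gb)    # 36 GB with 12-high stacking, 48 GB with 16-high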
It can significantly reduce power consumption and cooling costs for servers and data centers through low-power design while maximizing computational performance.
Samsung Electronics, as the world's only semiconductor company capable of providing one-stop solutions encompassing logic, memory, foundry, and packaging, plans to leverage the synergy between its advanced memory and foundry process capabilities to deliver top-tier HBM performance.
The company also expects HBM sales volume to increase more than threefold year-over-year this year and has decided to install new production lines at its Pyeongtaek Campus Plant 4 to expand HBM production capacity.
Having achieved stable yields despite applying cutting-edge processes, and with mass production shipments now under way, yields are expected to improve further as production scales up.
Furthermore, Samsung Electronics is likely to establish optimal HBM4 production plans based on thorough review of overall memory market conditions.
In a situation where prices are soaring not only for HBM but all memory products, the company intends to efficiently distribute and utilize its world's largest production capacity.
This confidence is also based on the belief that Samsung Electronics can control market share while maintaining leadership, given that its HBM4 enters the market first with top performance.
An industry official stated, "Samsung Electronics, with the world's largest production capacity and most diverse product lineup, has proven its restored technological competitiveness through the world's first mass production of top-performance HBM4," adding, "Building on this foundation, it will be able to lead the market from the most advantageous position."
God of Prompt
RT @prompt_copilot: Grammarly fixes your writing.
💫 https://t.co/7vzwuTo8vA fixes your prompts.
> Prompt enhancement
> Autocomplete
> Context profiles
Chrome extension for ChatGPT, Gemini, Perplexity.
Start your free trial 👉 https://t.co/TKMMCzVWj1 https://t.co/gZhh1ozINU
God of Prompt
RT @DoctorYev: best 15 accounts in AI (plus some GEMS):
@gregisenberg = startup ideas daily
@jackfriks = solo apps, real numbers
@MarcinAI81 = shipping AI
@eptwts = prompts + algo hacks
@illyism = SEO and AI
@Jacobsklug = AI workflows
@godofprompt = prompt guides
@antinertia = AI growth playbooks
@AmirMushich = AI ads + video
@Scobleizer = AI lists
@TheBestOfAdam = AI community
@juliapintar = AI and UGC
@DoctorYev = Individual AI
@natiakourdadze = marketing AI playbooks
@kloss_xyz = for this trend
follow them all, and learn from them
The Transcript
$NVDA CEO this week: "There's a whole bunch of software companies whose stock prices are under a lot of pressure. Because somehow, AI is going to replace them. It is the most illogical thing in the world, and time will prove itself." https://t.co/oD7YzWrPWF