God of Prompt
RT @WallStreetMav: LEAK:
OpenAI and Jony Ive seek to build a replacement for Apple AirPods, per a Weibo reporter. https://t.co/pUBv9wjiaf
Hearing fresh detail on OpenAI's "To-go" hardware project from the last report. It is now confirmed to be a special audio product intended to replace AirPods; the internal code name is "Sweetpea".
On manufacturing, Foxconn has been told to prepare for a total of 5 devices by Q4 2028. Not all are known, but a home-style device and a pen are still under consideration.
However, many sources repeated the same thing: Sweetpea is now at the front of the line due to the priority of Jony Ive's team. The release is said to be near September, with a first-year volume projection of 40-50 million units. Only some details are currently known:
- The hardware design is said to be "unique, unseen before"; the main device is to be metal, resembling the shape of an egg-shaped stone.
- Inside the eggstone are two capsule-shaped pods that are removed and rest behind the ear, as in the image above.
- The main processor target is a 2nm smartphone-style chip (Exynos most favored). A custom chip has also been developed to allow the device to "replace iPhone actions by commanding Siri".
- The BOM is feared to be very high, as the materials/components are closer to a phone's BOM, but the device's function is said to be stronger.
Foxconn leaders are still embarrassed by losing all AirPods programs to Luxshare (立讯). Now they see a golden chance to win back this category. - Smart Pikachu (智慧皮卡丘, Weibo)
God of Prompt
RT @godofprompt: MIT researchers just proved that prompt engineering is a social skill, not a technical one.
and that revelation breaks everything we thought we knew about working with AI.
they analyzed 667 people solving problems with AI. used bayesian statistics to isolate two different abilities in each person. ability to solve problems alone. ability to solve problems with AI.
here's what shattered the entire framework.
the two abilities barely correlate.
being a genius problem-solver on your own tells you almost nothing about how well you'll collaborate with AI. they're separate, measurable, independently functioning skills.
which means every prompt engineering course, every mega-prompt template, every "10 hacks to get better results" thread is fundamentally misunderstanding what's actually happening when you get good results.
the templates work. but not for the reason everyone thinks.
they work because they accidentally force you to practice something else entirely.
the skill that actually predicts success with AI isn't about keywords or structure or chain-of-thought formatting.
it's theory of mind. your capacity to model what another agent knows, doesn't know, believes, needs. to anticipate their confusion before it happens. to bridge information gaps you didn't even realize existed.
and here's the part that changes the game completely: they proved it's not a static trait you either have or don't.
it's dynamic. activated. something you turn on and off.
moment-to-moment changes in how much cognitive effort you put into perspective-taking directly changed AI response quality on individual prompts.
meaning when you actually stop and think "what does this AI need to know that i'm taking for granted" on one specific question, you get measurably better answers on that question.
the skill is something you dial up and down. practice. strengthen. like a muscle you didn't know you had.
it gets better the more you treat AI like a collaborator with incomplete information instead of a search engine you're trying to hack with the right magic words.
God of Prompt
RT @free_ai_guides: Never run out of tweet ideas again.
This prompt gives you 30 at once:
→ Questions (to provide value)
→ Misconceptions (to grab attention)
→ Trust-builders (to create fans)
Tailored to YOUR audience and positioning.
Comment "Ideas" and I'll DM the prompt. https://t.co/XELI8VPYBU
God of Prompt
🚨 OpenAI's o1 proves you can make models smarter by making them "think longer" at inference, not by training bigger models.
DeepSeek, Google, Anthropic all pivoting to test-time compute.
Training wars are over. The inference wars just started.
Here's the paradigm shift happening right now:
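A toy way to see why spending more compute at inference can help, assuming the extra compute is used to sample several answers and majority-vote (one common test-time-compute recipe; the probabilities below are illustrative, not o1's actual numbers):

```python
import random

# If a model answers a question correctly with probability 0.6,
# sampling more answers and taking a strict-majority vote raises
# accuracy without any retraining. Purely illustrative numbers.
random.seed(42)

def majority_vote_accuracy(p_correct: float, n_samples: int,
                           trials: int = 10_000) -> float:
    wins = 0
    for _ in range(trials):
        correct = sum(random.random() < p_correct for _ in range(n_samples))
        if correct > n_samples - correct:   # strict majority is correct
            wins += 1
    return wins / trials

print(majority_vote_accuracy(0.6, 1))    # ~0.60
print(majority_vote_accuracy(0.6, 15))   # ~0.79
```

Same model, same weights — only the inference budget changed.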
God of Prompt
new writing pattern emerged in Claude:
not [x].
not [y].
[z]. https://t.co/qngOkVvlFK
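for anyone who wants to spot the cadence programmatically, a rough regex sketch — the function name and pattern are my own, and it will miss variants with different punctuation:

```python
import re

# Matches two or more "not [x]." clauses followed by a short closer,
# i.e. the "not [x]. not [y]. [z]." cadence described above.
PATTERN = re.compile(r"(?:not\s+[^.\n]+\.\s*){2,}[^.\n]+\.", re.IGNORECASE)

def has_not_not_cadence(text: str) -> bool:
    return bool(PATTERN.search(text))

print(has_not_not_cadence("not hype. not fear. clarity."))  # True
print(has_not_not_cadence("just a normal sentence."))       # False
```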
God of Prompt
RT @alex_prompter: i reverse-engineered dan koe's viral life reset post into 10 AI prompts.
not surface-level motivation. psychological excavation.
each one walks you through 5-8 phases of self-examination most people avoid their entire lives.
warning: these will make you uncomfortable.
that's the point 👇
https://t.co/7l7Jef99QZ - DAN KOE
DAN KOE (@thedankoe) on X: How to fix your entire life in 1 day
God of Prompt
RT @godofprompt: 🚨 I analyzed 2,847 AI safety papers from 2020-2024. 94% test on the same 6 benchmarks.
Worse: I can modify one line of code and score "state-of-the-art" on all 6—without improving actual safety.
Academic AI research is systematic p-hacking. Here's how the entire field is broken:
Brady Long
🚨 MIT proved you can delete 90% of a neural network without losing accuracy.
Five years later, nobody implements it.
"The Lottery Ticket Hypothesis" just went from academic curiosity to production necessity, and it's about to 10x your inference costs.
Here's what changed (and why this matters now):
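The pruning idea the thread names can be sketched in a few lines. This is one-shot magnitude pruning on a random matrix, a simplification: the actual Lottery Ticket procedure prunes a *trained* network iteratively and rewinds the surviving weights to their original initialization.

```python
import random

# One-shot magnitude pruning on a flattened 256x256 "weight matrix":
# zero out the 90% of weights with the smallest absolute value.
random.seed(0)
weights = [random.gauss(0, 1) for _ in range(65536)]

sparsity = 0.90                                   # drop 90% of weights
threshold = sorted(abs(w) for w in weights)[int(sparsity * len(weights))]
pruned = [w if abs(w) >= threshold else 0.0 for w in weights]

kept = sum(1 for w in pruned if w != 0.0) / len(weights)
print(f"kept {kept:.0%} of weights")              # kept 10% of weights
```

Whether the surviving 10% still match the dense network's accuracy is the part that requires retraining the sparse "winning ticket" — the toy above only shows the masking step.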
Illiquid
$BESI supplier $VACN +12.5%. Preannounced results.
https://t.co/S0MIagSrlo
🌐 ✅ 📈 🚀 👇🏼 Massive semi-equipment rally now, with $ASML +7%, $ASMI.AS +10% and $BESI +8% after exceptionally strong $TSM results/guidance, with CapEx projections of $52-56 billion for 2026 while consensus was aiming for 'only' $46 billion https://t.co/KGM8wl2dH4 - Jordy Beuving