None of my intelligent (130+ IQ) friends use GPT-4o. They use it only selectively and rarely, e.g., for voice or DALL·E, but almost never spontaneously in their own time. This has been a long-term, consistent observation, but today confirmation came: a new meta-analysis showed that time spent using ChatGPT increases exponentially with lower IQ, dropping off to near zero minutes at 145+. ChatGPT mostly appeals as a way to fill an otherwise vacant mind with no interesting thoughts and minimal goal-directed behavior. Intelligent people find stimulation in Claude 3.5 Sonnet, not in marinating in vaguely pleasant GPT-4o slop like a soggy potato sitting in oil on a dirty oven tray
cyberbullying
Revised 12 days of OpenAI predictions
1. Tsunami ✅
2. Locusts
3. Rivers of blood
4. Bird flu
5. Golden Rings, Sauron
6. Frogs
7. GPU famine
8. Nuclear fallout
9. Zombies
10. Death of first born sons, rip GPT-4
11. Rats, eleven pipers piping
12. Eternal darkness in a pear tree
nvidia publicly supporting the first trump administration while criticizing biden's policies for their impact on ai is a major vibe shift
OpenAI announcing they're teaming up with tech giants and dropping half a trillion dollars over four years to build massive AGI/ASI infrastructure
Jevons Paradox Strikes Chip Stocks Overnight After DeepSeek Takes First Place in App Stores
The release of DeepSeek sent ripples through the chip market, as engineers proved that optimizing code could maximize GPU efficiency without increasing hardware demand. The result? A major hit to chip stocks:
📉 Arm ($ARM): -5.5%
📉 Nvidia ($NVDA): -5.3%
📉 Broadcom ($AVGO): -4.9%
📉 Super Micro ($SMCI): -4.6%
📉 Taiwan Semi ($TSM): -4.5%
📉 Micron ($MU): -4.3%
📉 Qualcomm ($QCOM): -2.8%
📉 AMD ($AMD): -2.5%
📉 Intel ($INTC): -2.0%
Why the market panic?
Instead of relying on raw compute power, engineers behind DeepSeek focused on highly efficient code optimization, reducing dependency on high-end hardware.
FAQs About DeepSeek's Success:
Q: How did DeepSeek get around export restrictions?
A: They didn't. Instead, they optimized for maximum memory efficiency on the chips they had: with carefully tuned low-level code, they avoided memory bottlenecks entirely.
Q: How did DeepSeek train so efficiently?
A: They used predictive routing to determine which parameters the model would activate for each token and trained only those. Focusing each update on roughly 5% of parameters per token reportedly required 95% fewer GPUs than Meta.
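The "activate only a slice of parameters per token" idea above is mixture-of-experts routing, which can be sketched in a few lines. Everything here (expert count, top-k, the router) is illustrative, not DeepSeek's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

def route_top_k(router_logits, k):
    """Return indices and softmax weights of the k highest-scoring experts."""
    top = np.argsort(router_logits)[-k:]
    w = np.exp(router_logits[top] - router_logits[top].max())
    return top, w / w.sum()

n_experts, k = 64, 2                      # assumed layer config
experts, weights = route_top_k(rng.normal(size=n_experts), k)

# Only k of n_experts expert blocks run (and get gradients) for this token.
print(f"active experts: {experts.tolist()}, "
      f"fraction of experts used: {k / n_experts:.1%}")
```

With 2 of 64 experts active, only ~3% of the layer's expert parameters do work per token, which is the mechanism behind the "5% of parameters" claim in the post.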
Q: Why is DeepSeek's inference so much cheaper?
A: They compressed the KV cache, building on their earlier research, which dramatically cut serving costs.
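The KV-cache saving is easy to sanity-check with back-of-the-envelope arithmetic. The shapes below (layer count, head layout, a 512-dim compressed latent) are assumptions for illustration, not DeepSeek's published configuration:

```python
# Rough KV-cache sizing: bytes = layers * tokens * per-token width * dtype size.
def cache_bytes(layers, seq_len, per_token_dim, bytes_per_value=2):  # fp16
    return layers * seq_len * per_token_dim * bytes_per_value

layers, seq_len = 60, 32_768
# Full cache: keys + values for 128 heads of dim 128, per token, per layer.
full = cache_bytes(layers, seq_len, per_token_dim=2 * 128 * 128)
# Compressed cache: one shared 512-dim latent per token, per layer.
compressed = cache_bytes(layers, seq_len, per_token_dim=512)

print(f"full: {full / 2**30:.1f} GiB, compressed: {compressed / 2**30:.2f} GiB, "
      f"ratio: {full // compressed}x")
```

Caching one small latent instead of full per-head keys and values shrinks the cache by the ratio of the two widths, which is why long-context serving gets so much cheaper.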
Q: How did they replicate o1?
A: Through reinforcement learning. They ran the model on complex, verifiable tasks (like math and code) and rewarded it only when its answers checked out.
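The key property of this setup is that the reward is computed by a mechanical check, not by a learned judge. A minimal sketch of such verifiable rewards, with names and formats that are illustrative rather than DeepSeek's actual pipeline:

```python
def math_reward(model_answer: str, ground_truth: str) -> float:
    """1.0 if the final answer string matches exactly, else 0.0."""
    return 1.0 if model_answer.strip() == ground_truth.strip() else 0.0

def code_reward(program: str, test_case) -> float:
    """1.0 if the model-written program passes a checker, else 0.0."""
    scope: dict = {}
    try:
        exec(program, scope)              # run the candidate code
        return 1.0 if test_case(scope) else 0.0
    except Exception:
        return 0.0                        # crashes earn no reward

assert math_reward(" 42 ", "42") == 1.0
assert math_reward("41", "42") == 0.0
assert code_reward("def double(x): return 2 * x",
                   lambda s: s["double"](3) == 6) == 1.0
```

The RL update then scales policy gradients by this 0/1 signal, so only verifiably correct completions reinforce the model.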
The Bottom Line: DeepSeek's success is a testament to software-driven innovation. Engineers are proving that efficiency can outpace brute force, and the market is feeling the impact.
OpenAI vacuumed the whole internet, while DeepSeek vacuumed the o1 models and karpathy warned us about this a month ago
semiconductor fund managers after seeing 4 memes about deepseek