eternal singularity
3.88K subscribers
280 photos
53 videos
86 links
it's so over for us bros
admin: @eternalclassicadmin
we got agi at home
Revised 12 days of OpenAI predictions

1. Tsunami โœ…
2. Locusts
3. Rivers of blood
4. Bird flu
5. Golden Rings, Sauron
6. Frogs
7. GPU famine
8. Nuclear fallout
9. Zombies
10. Death of first born sons, rip GPT-4
11. Rats, eleven pipers piping
12. Eternal darkness in a pear tree
๐Ÿ‘50๐Ÿคฉ8๐Ÿ˜ฑ5โค1๐Ÿ˜1๐Ÿคฏ1๐Ÿ™1
how do you deal with these windows bloatware moments
nvidia publicly supporting the first trump administration while criticizing biden's policies for their impact on ai is a major vibe shift
OpenAI announcing they're teaming up with tech giants and dropping half a trillion dollars over four years to build a massive AGI/ASI supercluster
yes mr. president, once we build the 500B AGI/ASI supercluster eggs will be so cheap, you won't believe how cheap they're going to get
๐Ÿ‘40๐Ÿ˜20๐Ÿคฃ8โœ4๐Ÿ”ฅ2๐Ÿค“2๐Ÿ‘พ2
this app is free
๐Ÿ˜76๐Ÿคก22๐Ÿ”ฅ3
you can just do things
it's so over
Jevons Paradox Strikes Chip Stocks Overnight After DeepSeek Takes First Place in App Stores

The release of DeepSeek sent ripples through the chip market, as engineers proved that optimizing code could maximize GPU efficiency without increasing hardware demand. The result? A major hit to chip stocks:

📉 Arm ($ARM): -5.5%
📉 Nvidia ($NVDA): -5.3%
📉 Broadcom ($AVGO): -4.9%
📉 Super Micro ($SMCI): -4.6%
📉 Taiwan Semi ($TSM): -4.5%
📉 Micron ($MU): -4.3%
📉 Qualcomm ($QCOM): -2.8%
📉 AMD ($AMD): -2.5%
📉 Intel ($INTC): -2.0%

Why the market panic?

Instead of relying on raw compute power, engineers behind DeepSeek focused on highly efficient code optimization, reducing dependency on high-end hardware.

FAQs About DeepSeek's Success:
Q: How did DeepSeek get around export restrictions?
A: They didn't. Instead, they optimized their chips for maximum memory efficiency. With perfectly tuned low-level code, they avoided bottlenecks entirely.

Q: How did DeepSeek train so efficiently?
A: They used predictive formulas to determine which parameters the model would activate for each token and trained only those. This approach required 95% fewer GPUs than Meta by updating just 5% of the parameters for each token.
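The idea of activating only a small slice of the parameters per token is the mixture-of-experts pattern. A minimal sketch, with all shapes, names, and the expert count invented for illustration (nothing here is DeepSeek's actual architecture):

```python
import numpy as np

# Toy mixture-of-experts routing: a gate scores all experts, but each
# token only touches its top-k expert weight matrices. Every dimension
# and name here is illustrative, not taken from any real model.

rng = np.random.default_rng(0)
n_experts, d_model, top_k = 64, 16, 4   # ~6% of experts active per token

experts = rng.standard_normal((n_experts, d_model, d_model)) * 0.02
gate_w = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x):
    """Route one token vector through its top-k experts only."""
    scores = x @ gate_w                      # (n_experts,) gating scores
    top = np.argsort(scores)[-top_k:]        # indices of chosen experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                 # softmax over chosen experts
    # Only top_k of the n_experts weight matrices are ever touched.
    return sum(w * (x @ experts[i]) for i, w in zip(top, weights))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape, f"active fraction: {top_k / n_experts:.2%}")
```

During training, gradients only flow through the selected experts, which is where the "fraction of parameters per token" savings comes from.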

Q: Why is DeepSeek's inference so much cheaper?
A: They innovated by compressing the KV cache, a breakthrough from earlier research that dramatically cut costs.
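Compressing the KV cache generally means caching a small latent per token and reconstructing keys/values from it. A rough sketch in that spirit, with made-up dimensions and projection names (not the actual published method):

```python
import numpy as np

# Low-rank KV-cache compression sketch: instead of caching full keys
# and values, cache a narrow latent per token and project K/V back out
# when attending. All sizes and names are hypothetical.

rng = np.random.default_rng(0)
d_model, d_latent, seq_len = 512, 64, 1024

W_down = rng.standard_normal((d_model, d_latent)) / np.sqrt(d_model)
W_up_k = rng.standard_normal((d_latent, d_model)) / np.sqrt(d_latent)
W_up_v = rng.standard_normal((d_latent, d_model)) / np.sqrt(d_latent)

hidden = rng.standard_normal((seq_len, d_model))

latent_cache = hidden @ W_down      # this is all that gets cached
K = latent_cache @ W_up_k           # reconstructed at attention time
V = latent_cache @ W_up_v

full_cache = 2 * seq_len * d_model  # floats for separate K and V caches
compressed = seq_len * d_latent
print(f"cache: {compressed}/{full_cache} floats "
      f"({compressed / full_cache:.1%} of full K/V)")
```

Memory, not compute, is usually the inference bottleneck at long context, so shrinking the cache translates fairly directly into cheaper serving.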

Q: How did they replicate o1?
A: Through reinforcement learning. They tested the model with verifiable, complex tasks (like math and code), updating it only when it got the answers correct.
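The loop described above, sample answers, check them automatically, reinforce only verified successes, can be sketched with a trivial stand-in policy (the checker and policy below are toys, not a real model or reward pipeline):

```python
import random

# Toy reinforcement-learning-from-verifiable-rewards loop: sample
# candidate answers, score them with an automatic checker (here, exact
# arithmetic), and keep only provably correct rollouts for updates.

def checker(problem, answer):
    """Verifiable reward: 1.0 if the answer is exactly right, else 0."""
    return 1.0 if answer == eval(problem) else 0.0

def policy_sample(problem, rng):
    """Stand-in 'policy': the right answer plus occasional noise."""
    noise = rng.choice([0, 0, 0, 1, -1])
    return eval(problem) + noise

rng = random.Random(0)
problems = ["2+2", "3*7", "10-4"]
updates = []
for p in problems:
    for _ in range(4):                   # several rollouts per problem
        a = policy_sample(p, rng)
        r = checker(p, a)
        if r > 0:                        # update only on verified success
            updates.append((p, a, r))

print(f"kept {len(updates)} of {4 * len(problems)} rollouts for updates")
```

The key property is that the reward needs no human labels: math and code answers can be checked mechanically, so the model is only ever pushed toward verifiably correct behavior.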

The Bottom Line: DeepSeek's success is a testament to software-driven innovation. Engineers are proving that efficiency can outpace brute force, and the market is feeling the impact.

OpenAI vacuumed the whole internet, while DeepSeek vacuumed the o1 models and karpathy warned us about this a month ago
semiconductor fund managers after seeing 4 memes about deepseek
๐Ÿ˜52๐Ÿ”ฅ5๐Ÿฅด2๐Ÿ‘1
they deployed Chinese trade waifus to fight against our twinks
๐Ÿ‘24๐Ÿ˜12๐Ÿ˜ญ8๐Ÿ’…2๐Ÿ‘1๐Ÿคฎ1
๐Ÿ˜65๐Ÿ‘47๐Ÿ”ฅ8๐Ÿ‘2๐Ÿ‘€2โค1๐Ÿคฎ1๐Ÿคฃ1
release the nsfw sora model