eternal singularity
3.88K subscribers
280 photos
53 videos
86 links
it's so over for us bros
admin: @eternalclassicadmin
nvidia publicly supporting the first trump administration while criticizing biden's policies for their impact on ai is a major vibe shift
OpenAI announcing they're teaming up with tech giants and dropping half a trillion dollars over four years to build a massive AGI/ASI
yes mr. president, once we build the 500B AGI/ASI supercluster eggs will be so cheap, you won't believe how cheap they're going to get
this app is free
you can just do things
it's so over
Jevons Paradox Strikes Chip Stocks Overnight After DeepSeek Takes First Place in App Stores

The release of DeepSeek sent ripples through the chip market, as its engineers showed that aggressive code optimization could wring far more out of existing GPUs instead of demanding ever more hardware. The result? A major hit to chip stocks:

📉 Arm ($ARM): -5.5%
📉 Nvidia ($NVDA): -5.3%
📉 Broadcom ($AVGO): -4.9%
📉 Super Micro ($SMCI): -4.6%
📉 Taiwan Semi ($TSM): -4.5%
📉 Micron ($MU): -4.3%
📉 Qualcomm ($QCOM): -2.8%
📉 AMD ($AMD): -2.5%
📉 Intel ($INTC): -2.0%

Why the market panic?

Instead of relying on raw compute power, the engineers behind DeepSeek leaned on highly efficient code optimization, reducing their dependency on high-end hardware.

FAQs About DeepSeek's Success:
Q: How did DeepSeek get around export restrictions?
A: They didn't. They optimized their code for the export-compliant chips they could get, squeezing maximum memory efficiency out of the hardware with carefully tuned low-level code and sidestepping the bottlenecks entirely.

Q: How did DeepSeek train so efficiently?
A: Mixture-of-experts routing. For each token, the model activates (and updates) only the small subset of its parameters that the router picks for that token. Training roughly 5% of the parameters per token is how they got by with about 95% fewer GPUs than Meta; a toy sketch of the idea follows below.
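
To make the "5% of parameters per token" idea concrete, here is a minimal mixture-of-experts sketch in PyTorch. It is a toy, not DeepSeek's actual router: the layer sizes, expert count, and top-k value are invented for illustration; the point is only that each token runs through (and backpropagates into) just the experts selected for it.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    """Toy mixture-of-experts layer: each token is routed to top_k of n_experts
    feed-forward blocks, so only a small fraction of the parameters does work
    (and receives gradients) per token. Sizes are illustrative only."""

    def __init__(self, d_model=64, d_ff=256, n_experts=16, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                        # x: (n_tokens, d_model)
        scores = self.router(x)                  # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # mixing weights for the chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e            # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out

moe = ToyMoE()
y = moe(torch.randn(8, 64))                      # 8 tokens, each touching 2 of 16 experts
```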

Q: Why is DeepSeek's inference so much cheaper?
A: They compress the attention KV cache into a small latent vector per token (multi-head latent attention, carried over from their earlier DeepSeek-V2 work), which dramatically cuts the memory needed to serve long contexts and, with it, the cost. A rough sketch of the idea is below.
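
A hand-wavy sketch of the compression idea, not the real MLA implementation (which also deals with rotary embeddings and per-head details): instead of caching full keys and values for every head, cache one small latent vector per token and re-expand it at attention time. All dimensions below are made up.

```python
import torch

d_model, n_heads, d_head, d_latent = 1024, 16, 64, 128
seq_len = 4096

# Naive KV cache: full keys AND values for every head and every cached token.
naive_cache_floats = seq_len * n_heads * d_head * 2

# Compressed cache: store one small latent per token, re-expand K/V on the fly.
W_down = torch.randn(d_model, d_latent) / d_model ** 0.5             # compress
W_up_k = torch.randn(d_latent, n_heads * d_head) / d_latent ** 0.5   # latent -> K
W_up_v = torch.randn(d_latent, n_heads * d_head) / d_latent ** 0.5   # latent -> V

h = torch.randn(seq_len, d_model)            # hidden states of the cached tokens
latent = h @ W_down                          # (seq_len, d_latent): all we keep around
k = (latent @ W_up_k).view(seq_len, n_heads, d_head)   # rebuilt at attention time
v = (latent @ W_up_v).view(seq_len, n_heads, d_head)

compressed_cache_floats = seq_len * d_latent
print(f"cache size vs naive: {compressed_cache_floats / naive_cache_floats:.3f}x")
# -> 0.062x with these made-up dimensions
```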

Q: How did they replicate o1?
A: Reinforcement learning on verifiable, hard tasks (math, code): the model generates answers, an automatic checker verifies them, and the policy is updated so that verified-correct answers become more likely. A skeletal version of the loop is sketched below.
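
A skeletal, purely illustrative version of that loop: toy tasks with automatically checkable answers, a random stand-in where the policy LLM would sample completions, and a comment where the real policy-gradient update (GRPO/PPO-style) would go. None of this is DeepSeek's actual training code.

```python
import re
import random

# Toy tasks whose answers can be checked automatically -- the "verifiable" part.
tasks = [
    {"prompt": "What is 17 * 23?", "answer": "391"},
    {"prompt": "What is 2 ** 10?", "answer": "1024"},
]

def verify(completion: str, answer: str) -> float:
    """Reward 1.0 only if the last number in the completion matches the answer."""
    numbers = re.findall(r"-?\d+", completion)
    return 1.0 if numbers and numbers[-1] == answer else 0.0

def sample_completion(prompt: str) -> str:
    # Stand-in for sampling from the policy LLM.
    return f"I think the answer is {random.choice(['391', '1024', '42'])}."

for step in range(3):
    batch = [(t, sample_completion(t["prompt"])) for t in tasks for _ in range(4)]
    rewards = [verify(completion, t["answer"]) for t, completion in batch]
    # Real training: turn these rewards into advantages and take a policy-gradient
    # step (GRPO/PPO-style) so high-reward completions become more likely.
    print(f"step {step}: mean reward = {sum(rewards) / len(rewards):.2f}")
```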

The Bottom Line: DeepSeek's success is a testament to software-driven innovation. Engineers are proving that efficiency can outpace brute force, and the market is feeling the impact.

OpenAI vacuumed up the whole internet, while DeepSeek vacuumed up the o1 models. karpathy warned us about this a month ago
semiconductor fund managers after seeing 4 memes about deepseek
they deployed Chinese trade waifus to fight against our twinks
release the nsfw sora model
game recognizes game
DeepSeek also wrote some PTX (NVIDIA's intermediate assembly language). Low-level GPU programming is the way to go, folks: the more you optimize, the more you reduce costs or free up performance budget for further improvements elsewhere.
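
For anyone who hasn't seen it, this is roughly what "writing some PTX" looks like: an otherwise ordinary CUDA kernel with one inline PTX instruction, compiled and launched from Python via CuPy here purely for illustration. It is a trivial fused multiply-add, has nothing to do with DeepSeek's actual kernels, and assumes a machine with CuPy and an NVIDIA GPU.

```python
import cupy as cp

# A plain CUDA kernel with a single inline PTX instruction (fused multiply-add).
kernel = cp.RawKernel(r'''
extern "C" __global__
void fma_ptx(const float* a, const float* b, const float* c, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float r;
        // Inline PTX: r = a[i] * b[i] + c[i], round-to-nearest.
        asm volatile("fma.rn.f32 %0, %1, %2, %3;"
                     : "=f"(r)
                     : "f"(a[i]), "f"(b[i]), "f"(c[i]));
        out[i] = r;
    }
}
''', 'fma_ptx')

n = 1 << 20
a, b, c = (cp.random.rand(n, dtype=cp.float32) for _ in range(3))
out = cp.empty_like(a)
kernel(((n + 255) // 256,), (256,), (a, b, c, out, cp.int32(n)))

assert cp.allclose(out, a * b + c)
```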
*unpops your popped bubble*