eternal singularity
it's so over for us bros
admin: @eternalclassicadmin
this app is free
you can just do things
it's so over
Jevons Paradox Strikes Chip Stocks Overnight After DeepSeek Takes First Place in App Stores

The release of DeepSeek sent ripples through the chip market, as engineers proved that optimizing code could maximize GPU efficiency without increasing hardware demand. The result? A major hit to chip stocks:

📉 Arm ($ARM): -5.5%
📉 Nvidia ($NVDA): -5.3%
📉 Broadcom ($AVGO): -4.9%
📉 Super Micro ($SMCI): -4.6%
📉 Taiwan Semi ($TSM): -4.5%
📉 Micron ($MU): -4.3%
📉 Qualcomm ($QCOM): -2.8%
📉 AMD ($AMD): -2.5%
📉 Intel ($INTC): -2.0%

Why the market panic?

Instead of relying on raw compute power, engineers behind DeepSeek focused on highly efficient code optimization, reducing dependency on high-end hardware.

FAQs About DeepSeek's Success:
Q: How did DeepSeek get around export restrictions?
A: They didn't. Instead, they optimized their code to squeeze maximum memory efficiency out of the chips they had. With carefully tuned low-level code, they avoided the memory-bandwidth bottlenecks entirely.

Q: How did DeepSeek train so efficiently?
A: With a mixture-of-experts architecture: a router predicts which experts (small subsets of the parameters) each token activates, and only those experts are run and updated, so only about 5% of the parameters are trained for each token. That is how the approach required 95% fewer GPUs than Meta.
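
For a feel of what "5% of parameters per token" means in practice, here is a minimal mixture-of-experts routing sketch in NumPy. The sizes, expert count, and top-k value are invented for illustration; this is not DeepSeek's actual architecture or code.

```python
# Toy mixture-of-experts routing: only a small fraction of parameters
# is active for each token. All dimensions here are made up.
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 16, 2                      # 2 of 16 experts per token
router = rng.normal(size=(d_model, n_experts))             # routing projection
experts = rng.normal(size=(n_experts, d_model, d_model))   # one FFN weight per expert

def moe_layer(x):
    """x: (n_tokens, d_model) -> (n_tokens, d_model), using only top-k experts per token."""
    logits = x @ router                                     # (n_tokens, n_experts)
    picked = np.argsort(logits, axis=-1)[:, -top_k:]        # indices of top-k experts
    gates = np.take_along_axis(logits, picked, axis=-1)
    gates = np.exp(gates) / np.exp(gates).sum(-1, keepdims=True)  # softmax over picked
    out = np.zeros_like(x)
    for t in range(x.shape[0]):                             # route each token
        for slot in range(top_k):
            e = picked[t, slot]
            out[t] += gates[t, slot] * (x[t] @ experts[e])  # only k experts ever run
    return out

tokens = rng.normal(size=(4, d_model))
print(moe_layer(tokens).shape)   # (4, 64); the other 14 experts stayed idle per token
```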

Q: Why is DeepSeekโ€™s inference so much cheaper?
A: They compress the KV cache into a much smaller latent representation, a technique from their own earlier research, which dramatically cuts memory use and cost at inference time.
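
A toy illustration of the KV-cache compression idea: cache one small latent vector per token and re-expand it into keys and values when attention needs them. The projections and dimensions below are assumptions for the sketch, not DeepSeek's real design.

```python
# Cache a small latent per token instead of full K and V.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_latent, n_tokens = 64, 8, 128

W_down = rng.normal(size=(d_model, d_latent)) * 0.1   # compress hidden state
W_k_up = rng.normal(size=(d_latent, d_model)) * 0.1   # latent -> keys
W_v_up = rng.normal(size=(d_latent, d_model)) * 0.1   # latent -> values

hidden = rng.normal(size=(n_tokens, d_model))

latent_cache = hidden @ W_down     # this is all that gets cached
K = latent_cache @ W_k_up          # reconstructed when attention runs
V = latent_cache @ W_v_up

naive_floats = 2 * n_tokens * d_model        # what caching full K and V would take
print(f"cached floats: {naive_floats} -> {latent_cache.size} "
      f"({latent_cache.size / naive_floats:.1%} of a plain KV cache)")
```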

Q: How did they replicate o1?
A: Through reinforcement learning. They tested the model with verifiable, complex tasks (like math and code), updating it only when it got the answers correct.
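
In spirit, "RL on verifiable tasks" means: sample an answer, check it mechanically, and reinforce only completions that check out. Below is a deliberately tiny sketch with a stand-in "model" and arithmetic problems; nothing here is DeepSeek's actual training loop.

```python
# Verifiable-reward loop: reward 1 only when the answer is exactly right.
import random

random.seed(0)

problems = [("7 * 8", 56), ("12 + 30", 42), ("9 - 4", 5)]

def sample_answer(prompt):
    """Stand-in for the policy model: sometimes right, sometimes off by one."""
    guess = eval(prompt)                       # pretend this came from the model
    return guess if random.random() > 0.5 else guess + 1

def reward(answer, expected):
    """Verifiable reward: 1.0 if the checkable answer matches, else 0.0."""
    return 1.0 if answer == expected else 0.0

for prompt, expected in problems:
    answer = sample_answer(prompt)
    r = reward(answer, expected)
    action = "reinforce this completion" if r else "no positive update"
    print(f"{prompt} = {answer}  reward={r}  -> {action}")
```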

The Bottom Line: DeepSeek's success is a testament to software-driven innovation. Engineers are proving that efficiency can outpace brute force, and the market is feeling the impact.

OpenAI vacuumed up the whole internet, while DeepSeek vacuumed up the o1 models. Karpathy warned us about this a month ago.
semiconductor fund managers after seeing 4 memes about deepseek
they deployed Chinese trade waifus to fight against our twinks
release the nsfw sora model
๐Ÿ”ฅ82๐Ÿคฃ29๐Ÿ†5โ˜ƒ2๐Ÿคฎ1
game recognizes game
DeepSeek also wrote some PTX (NVIDIA's intermediate assembly language). Low-level GPU programming is the way to go, folks. The more you optimize, the more you reduce costs or free up performance budget for further improvements elsewhere.
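
For the curious, here is roughly what "writing some PTX" can look like: a toy CUDA kernel containing one hand-written inline-PTX instruction, compiled and launched from Python with CuPy. The kernel, sizes, and fused multiply-add example are made up for illustration and have nothing to do with DeepSeek's code.

```python
# Tiny CUDA kernel with an inline-PTX fused multiply-add, driven from CuPy.
import cupy as cp

src = r'''
extern "C" __global__
void fma_ptx(const float* x, const float* y, float* out, int n) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n) {
        float r;
        // hand-written PTX: r = x[i] * 2.0f + y[i]
        asm volatile("fma.rn.f32 %0, %1, %2, %3;"
                     : "=f"(r) : "f"(x[i]), "f"(2.0f), "f"(y[i]));
        out[i] = r;
    }
}
'''

kernel = cp.RawKernel(src, "fma_ptx")
n = 1 << 10
x, y = cp.arange(n, dtype=cp.float32), cp.ones(n, dtype=cp.float32)
out = cp.empty_like(x)
kernel((n // 256,), (256,), (x, y, out, cp.int32(n)))
assert cp.allclose(out, x * 2 + 1)
print("inline PTX fma matches the reference result")
```
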
*unpops your popped bubble*
Masayoshi Son announced that Sam Altman agreed to launch AGI in Japan within two years. A new AI model called "Crystal Intelligence" is being developed, which operates autonomously and can read all source code built over the past 30 years.

SB OpenAI Japan has been established.
Crystal AI capabilities: Attends meetings, replaces call centers, and has long-term memory.
Investment: $3 billion allocated for its development.
SoftBank's push: Assembled a 1,000-person sales engineering team and aims to create "Stargate Japan."

https://www.youtube.com/live/EdI8kZQNdEE
what are they cooking 💀💀
Gemma 3 is incredible
so much for open ai