God of Prompt
🚨 OpenAI researchers discovered that neural networks can train for thousands of epochs without learning anything.
Then suddenly, in a single epoch, they generalize perfectly.
This phenomenon is called "Grokking".
It went from a weird training glitch to a core theory of how models actually learn.
Here’s what changed (and why this matters now):
Illiquid
Uh, Sumitomo doesn’t get indium from China, right?
[China’s Ministry of Commerce: Effective today, a full ban on exports of dual-use items to Japan — effectively the start of rare-earth controls]
- China’s Ministry of Commerce announced that it will prohibit the export of all dual-use items to Japanese military end-users, for military purposes, and for any other uses that could contribute to strengthening Japan’s military capabilities.
- “Dual-use items” broadly refer to products, technologies, and equipment that can be used for both civilian and military applications. Examples include items related to nuclear, biological, chemical, drones, rockets, and electronics, among others.
- Chinese authorities explained that the export controls are being implemented to safeguard national security and national interests. - Jukan
God of Prompt
RT @godofprompt: R.I.P few-shot prompting.
Meta AI researchers discovered a technique that makes LLMs 94% more accurate without any examples.
It's called "Chain-of-Verification" (CoVe) and it completely destroys everything we thought we knew about prompting.
Here's the breakthrough (and why this changes everything): 👇
memenodes
lakers fans saying NO
Gabe Vincent, Dalton Knecht & 1st round pick for Trae Young
Who says no? https://t.co/IfVaIi24gk - LakeShowYo
Dimitry Nakhla | Babylon Capital®
$NFLX trades at a fairly attractive PEG 📺
NTM P/E ~29x
2026 EPS ➡️ $3.24 (+28%)
2027 EPS ➡️ $3.91 (+20%)
2028 EPS ➡️ $4.48 (+15%)
CAGR at various multiples assuming 2028 EPS Estimates of $4.48:
32x | 16%
30x | 14%
29x | 12%
28x | 11%
27x | 10%
26x | 8% https://t.co/ZYHFeB8Iic
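The implied-return table above can be reproduced with a quick sketch. The entry price and holding period are not stated in the tweet, so this assumes an entry at roughly the ~29x NTM multiple applied to the 2026 EPS estimate (about $94) and a three-year hold to the 2028 EPS of $4.48; under those assumptions the computed figures land within about a percentage point of the table's.

```python
# Hypothetical sketch of the tweet's implied-CAGR table.
# Assumptions (not stated in the tweet): entry price is the ~29x NTM
# multiple applied to the 2026 EPS estimate, and a 3-year hold to 2028.
EPS_2028 = 4.48
ENTRY_PRICE = 29 * 3.24  # ≈ $94, from "NTM P/E ~29x" on 2026 EPS
YEARS = 3

def implied_cagr(exit_multiple: float) -> float:
    """Annualized return if shares exit at exit_multiple x 2028 EPS."""
    exit_price = exit_multiple * EPS_2028
    return (exit_price / ENTRY_PRICE) ** (1 / YEARS) - 1

for m in (32, 30, 29, 28, 27, 26):
    print(f"{m}x | {implied_cagr(m):.0%}")
```

The small gaps versus the tweet's figures likely come down to the exact entry price and horizon the author used.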