God of Prompt
🚨 DeepMind discovered that neural networks can train for thousands of epochs without learning anything.

Then suddenly, in a single epoch, they generalize perfectly.

This phenomenon is called "Grokking".

It went from a weird training glitch to a core theory of how models actually learn.

Here’s what changed (and why this matters now):
tweet
Uh, Sumitomo gets its indium from China, right?

[China’s Ministry of Commerce: Effective today, a full ban on exports of dual-use items to Japan — effectively the start of rare-earth controls]

- China’s Ministry of Commerce announced that it will prohibit the export of all dual-use items to Japanese military end-users, for military purposes, and for any other uses that could contribute to strengthening Japan’s military capabilities.

- “Dual-use items” broadly refer to products, technologies, and equipment that can be used for both civilian and military applications. Examples include items related to nuclear, biological, chemical, drones, rockets, and electronics, among others.

- Chinese authorities explained that the export controls are being implemented to safeguard national security and national interests.
- Jukan
tweet
God of Prompt
RT @godofprompt: R.I.P few-shot prompting.

Meta AI researchers discovered a technique that makes LLMs 94% more accurate without any examples.

It's called "Chain-of-Verification" (CoVe) and it completely destroys everything we thought we knew about prompting.

Here's the breakthrough (and why this changes everything): 👇
tweet
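The thread itself isn't included, but Chain-of-Verification (CoVe), as described by Meta AI researchers, has four steps: draft a baseline answer, plan verification questions, answer each one independently, then revise. A minimal sketch, assuming any text-in/text-out model behind a placeholder `llm` callable (the prompt wording here is illustrative, not from the paper):

```python
# Minimal Chain-of-Verification (CoVe) sketch. `llm` is a placeholder for
# any text-in/text-out model call; the four steps follow the technique as
# described by the Meta AI paper, with illustrative prompt wording.

def chain_of_verification(question: str, llm) -> str:
    # 1. Baseline: draft an initial answer.
    baseline = llm(f"Answer the question:\n{question}")

    # 2. Plan: have the model write fact-checking questions about its draft.
    plan = llm(
        "List fact-checking questions, one per line, that would verify "
        f"this answer.\nQuestion: {question}\nDraft answer: {baseline}"
    )
    checks = [q.strip() for q in plan.splitlines() if q.strip()]

    # 3. Execute: answer each verification question independently, without
    #    showing the draft, so its errors aren't copied into the checks.
    verifications = [(q, llm(f"Answer concisely:\n{q}")) for q in checks]

    # 4. Revise: produce a final answer consistent with the verifications.
    evidence = "\n".join(f"Q: {q}\nA: {a}" for q, a in verifications)
    return llm(
        f"Question: {question}\nDraft answer: {baseline}\n"
        f"Verification results:\n{evidence}\n"
        "Write a final, corrected answer."
    )
```

With a real client you would pass something like `llm=lambda p: client.complete(p)`; the 94% figure is the tweet's claim, not something this sketch demonstrates.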
Brady Long
🚨 Researchers at OpenAI, Anthropic, and Google all converged on the same conclusion.

The prompts going viral on Twitter aren’t pushing AI forward at all.

The real gains come from 7 obscure techniques almost nobody uses.

Here’s how the pros actually steer these models 👇
tweet
memenodes
lakers fans saying NO

Gabe Vincent, Dalton Knecht & 1st round pick for Trae Young

Who says no? https://t.co/IfVaIi24gk
- LakeShowYo
tweet
memenodes
last thing AI engineer sees before leaking company secrets https://t.co/Ew0kChW0lJ
tweet
Dimitry Nakhla | Babylon Capital®
$NFLX trades at a fairly attractive PEG 📺

NTM P/E ~29x

2026 EPS ➡️ $3.24 (+28%)
2027 EPS ➡️ $3.91 (+20%)
2028 EPS ➡️ $4.48 (+15%)

CAGR at various multiples assuming 2028 EPS Estimates of $4.48:

32x | 16%
30x | 14%
29x | 12%
28x | 11%
27x | 10%
26x | 8% https://t.co/ZYHFeB8Iic
tweet
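The table above follows from one formula: exit price = multiple × 2028 EPS, compounded from an entry price over the holding period. A sketch of that arithmetic, assuming an entry price of ~29x the 2026 estimate and a three-year horizon (both assumptions for illustration; the results land within about a point of the tweet's rounded figures, which likely use slightly different inputs):

```python
# CAGR implied by holding NFLX to an exit multiple on 2028 EPS.
# Entry price and horizon are illustrative assumptions, not from the tweet.

EPS_2028 = 4.48
ENTRY_PRICE = 29 * 3.24   # assume ~29x NTM P/E applied to the 2026 estimate
YEARS = 3                 # assumed holding period

def implied_cagr(exit_multiple: float) -> float:
    """Annualized return if the stock exits at exit_multiple x 2028 EPS."""
    exit_price = exit_multiple * EPS_2028
    return (exit_price / ENTRY_PRICE) ** (1 / YEARS) - 1

for m in (32, 30, 29, 28, 27, 26):
    print(f"{m}x | {implied_cagr(m):.0%}")
```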
memenodes
That's why he's the GOAT...

LEBRON PASSED CHRIS PAUL IN ASSISTS

▫️1ST ALL-TIME IN POINTS 🐐
▫️2ND ALL-TIME IN ASSISTS 👑 https://t.co/Panz4WpL9N
- LakeShowYo
tweet