Less is More
"With only 7M parameters, TRM obtains 45% test-accuracy on ARC-AGI-1 and 8% on ARC-AGI-2, higher than most LLMs (e.g., Deepseek R1, o3-mini, Gemini 2.5 Pro) with less than 0.01% of the parameters."
This paper from Samsung fundamentally changes how we think about designing model architectures and how much compute training requires.
https://arxiv.org/pdf/2510.04871v1
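To put the headline claim in perspective, a quick back-of-the-envelope check: the only numbers taken from the quoted abstract are TRM's 7M parameters and the "less than 0.01% of the parameters" ratio; the 70B reference size used below is an illustrative assumption for a large LLM, not a figure from the paper.

```python
# Back-of-the-envelope check of the parameter-scale claim in the quoted abstract.
# Numbers from the quote: 7M (TRM) and the 0.01% ratio.
# The 70B reference size is an illustrative assumption, not from the paper.

trm_params = 7_000_000      # TRM, per the abstract
ratio_claimed = 0.0001      # "less than 0.01% of the parameters"

# Smallest LLM size for which 7M parameters is exactly 0.01%:
breakeven_llm_params = trm_params / ratio_claimed
print(f"7M is 0.01% of {breakeven_llm_params / 1e9:.0f}B parameters")  # -> 70B

# Example: an assumed 70B-parameter LLM
llm_params = 70_000_000_000
print(f"TRM / LLM = {trm_params / llm_params:.6%}")  # -> 0.010000%
```

So the comparison holds for any model at or above roughly 70B parameters, which covers the LLMs named in the quote.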
Attachment: Ribbit_Token_Letter_June_2025_Confidential_vFinal_Distributed.pdf (Token Letter - Ribbit Capital)