Conext Engineering for Engineers
Jeff Huber of Chroma argues that building reliable AI systems hinges on 'Context Engineering'—the deliberate curation of information within the context window. He challenges the efficacy of long-context models, presenting a 'Gather and Glean' framework to maximize recall and precision, and discusses specific challenges and techniques for AI agents, such as intelligent compaction.
Jeff Huber of Chroma argues that building reliable AI systems hinges on 'Context Engineering'—the deliberate curation of information within the context window. He challenges the efficacy of long-context models, presenting a 'Gather and Glean' framework to maximize recall and precision, and discusses specific challenges and techniques for AI agents, such as intelligent compaction.
Building effective AI systems is not about mastering "prompt engineering" or the latest RAG technique, but about the disciplined practice of Context Engineering. An AI system can be understood simply as a program: it takes an instruction set, relevant information, and user input to produce an output. The core task of a builder is to control the "relevant information" that goes into the context window to create reliable, fast, and cheap software.
The Illusion of Long Context
The industry's push towards massive context windows—from one million to ten million tokens—is not the panacea it appears to be. While impressive on benchmarks, performance in practical applications degrades sharply long before these limits are reached.
A technical report from Chroma demonstrates this phenomenon. On simple tasks that a human could perform easily, model performance drops precipitously as the input length increases, with significant degradation observed as early as 10,000 tokens.
The primary benchmark used to substantiate long-context capabilities is the "Needle in a Haystack" test. This test is fundamentally flawed as a measure of real-world utility for two reasons:
1. Low Attention Requirement: By definition, the model only needs to find and pay attention to a single piece of information (the "needle"), ignoring the vast majority of the context.
2. Zero Reasoning Power: The task requires simple pattern matching, not complex reasoning. For example, finding the sentence "The best writing advice... was to write every week" in response to a question about the best writing advice.
Most valuable AI tasks, such as summarization or agentic workflows, require the model to pay attention to a large portion of the context and apply significant reasoning power. Relying on long context alone for these tasks leads to a significant drop in performance. Focused, curated context provides massive gains in performance compared to feeding the model the full, unfiltered context.
A Framework for Context Engineering: Gather and Glean
The central challenge is deciding what information, out of all the information in the universe, should be in the context window for any given turn. This can be approached with a two-stage model:
1. Gather (Maximize Recall): The first stage is to collect all potentially relevant information. The goal is to maximize recall, even at the expense of including some irrelevant data. This can involve creating a query plan that probes multiple sources:
⦁ Structured data (e.g., SQL queries)
⦁ Unstructured data (e.g., from a vector database like Chroma)
⦁ APIs and other tools
⦁ Web search results
⦁ Chat conversation history
2. Glean (Maximize Precision): The second stage is to filter the gathered data down to a pristine set of highly relevant, non-distracting information. This is a process of maximizing precision. Common techniques include:
⦁ Top-K vector similarity search.
⦁ Reciprocal Rank Fusion (RRF) to combine results from multiple retrievers.
⦁ Learning to Rank (LTR) models.
⦁ Dedicated reranking models.
⦁ Using LLMs themselves—often small, fast, and cheap models run in parallel—to "brute force" the search and curation process.
Context Engineering for AI Agents
These principles are even more critical for AI agents, where the gather-and-glean loop occurs repeatedly. The agent's conversation and action history becomes a major, and rapidly growing, component of the context window. The sheer volume of code, logs, and observations generated in a few turns can be impossible for any human to parse, let alone an AI.
An interesting finding in agent performance is the value of failure:
⦁ Giving an agent access to its past failure cases helps it break out of local minima and improve performance.
⦁ Conversely, giving it access to prior success cases can be detrimental, causing the agent to get "lazy" an...
Full story
The Illusion of Long Context
The industry's push towards massive context windows—from one million to ten million tokens—is not the panacea it appears to be. While impressive on benchmarks, performance in practical applications degrades sharply long before these limits are reached.
A technical report from Chroma demonstrates this phenomenon. On simple tasks that a human could perform easily, model performance drops precipitously as the input length increases, with significant degradation observed as early as 10,000 tokens.
The primary benchmark used to substantiate long-context capabilities is the "Needle in a Haystack" test. This test is fundamentally flawed as a measure of real-world utility for two reasons:
1. Low Attention Requirement: By definition, the model only needs to find and pay attention to a single piece of information (the "needle"), ignoring the vast majority of the context.
2. Zero Reasoning Power: The task requires simple pattern matching, not complex reasoning. For example, finding the sentence "The best writing advice... was to write every week" in response to a question about the best writing advice.
Most valuable AI tasks, such as summarization or agentic workflows, require the model to pay attention to a large portion of the context and apply significant reasoning power. Relying on long context alone for these tasks leads to a significant drop in performance. Focused, curated context provides massive gains in performance compared to feeding the model the full, unfiltered context.
A Framework for Context Engineering: Gather and Glean
The central challenge is deciding what information, out of all the information in the universe, should be in the context window for any given turn. This can be approached with a two-stage model:
1. Gather (Maximize Recall): The first stage is to collect all potentially relevant information. The goal is to maximize recall, even at the expense of including some irrelevant data. This can involve creating a query plan that probes multiple sources:
⦁ Structured data (e.g., SQL queries)
⦁ Unstructured data (e.g., from a vector database like Chroma)
⦁ APIs and other tools
⦁ Web search results
⦁ Chat conversation history
2. Glean (Maximize Precision): The second stage is to filter the gathered data down to a pristine set of highly relevant, non-distracting information. This is a process of maximizing precision. Common techniques include:
⦁ Top-K vector similarity search.
⦁ Reciprocal Rank Fusion (RRF) to combine results from multiple retrievers.
⦁ Learning to Rank (LTR) models.
⦁ Dedicated reranking models.
⦁ Using LLMs themselves—often small, fast, and cheap models run in parallel—to "brute force" the search and curation process.
Context Engineering for AI Agents
These principles are even more critical for AI agents, where the gather-and-glean loop occurs repeatedly. The agent's conversation and action history becomes a major, and rapidly growing, component of the context window. The sheer volume of code, logs, and observations generated in a few turns can be impossible for any human to parse, let alone an AI.
An interesting finding in agent performance is the value of failure:
⦁ Giving an agent access to its past failure cases helps it break out of local minima and improve performance.
⦁ Conversely, giving it access to prior success cases can be detrimental, causing the agent to get "lazy" an...
Full story
tokenless.tech
Conext Engineering for Engineers | Tokenless
Jeff Huber of Chroma argues that building reliable AI systems hinges on 'Context Engineering'—the deliberate curation of information within the context window. He challenges the efficacy of long-context models, presenting a 'Gather and Glean' framework to…
The Top 100 Most Used AI Apps in 2025
In the fifth edition of the a16z Consumer AI 100, an analysis of the most-used AI-native products reveals a market that is beginning to stabilize after a period of chaotic growth. Key trends identified include the continued dominance of AI companionship and creative tools, the significant market entry of major players like Google and xAI's Grok, the rise of Chinese AI companies on the global stage, and the emergence of a powerful new category: "vibe coding." The data suggests a future of increased verticalization, prosumer tool adoption, and the development of more sophisticated network effects beyond simple data acquisition.
In the fifth edition of the a16z Consumer AI 100, an analysis of the most-used AI-native products reveals a market that is beginning to stabilize after a period of chaotic growth. Key trends identified include the continued dominance of AI companionship and creative tools, the significant market entry of major players like Google and xAI's Grok, the rise of Chinese AI companies on the global stage, and the emergence of a powerful new category: "vibe coding." The data suggests a future of increased verticalization, prosumer tool adoption, and the development of more sophisticated network effects beyond simple data acquisition.
The consumer generative AI ecosystem is showing signs of maturation and stabilization, a significant shift from the "total chaos" of its early days. An analysis of the top 50 AI-native web and mobile products, ranked by monthly usage rather than revenue, highlights several key trends shaping the industry.
Market Stabilization and Dominant Categories
The pace of change is slowing, with only 11 new names on the web list compared to 17 in the previous six-month period. This suggests that the market is beginning to consolidate around established players and clear use cases. Two categories continue to dominate consumer attention:
⦁ AI Companionship: This remains a major segment, with platforms like Character.ai, Janitor, and Spicy Chat consistently ranking high. The list saw three new companionship apps join the ranks, indicating sustained interest and innovation in this area.
⦁ Creative Tools: This category, encompassing image, video, and audio generation, maintains a strong presence with mainstays like Midjourney, Leonardo, and ElevenLabs.
The Rise of "Vibe Coding"
A significant new trend is the emergence of "vibe coding" platforms, which allow users to build applications with natural language. Loveable and Replit both made the main web list, demonstrating rapid growth. Analysis of these platforms reveals strong underlying business models:
⦁ High Revenue Retention: Many leading vibe coding platforms show revenue retention of 100% or more in the first three months. This suggests that users are not just experimenting but are upgrading plans and deriving continuous value, pointing towards prosumer or even enterprise use cases.
⦁ Usage Patterns: Interestingly, traffic to the creation platforms themselves (e.g.,
Competitive Landscape: Incumbents and International Players
Google's Strong Debut
With changes in how their traffic is tracked, four distinct Google properties made a significant debut on the web list:
1. Gemini: Ranked #2 on the web, capturing about 10% of ChatGPT's traffic. On mobile, it's much closer, with half of ChatGPT's traffic, driven primarily by Android users.
2. AI Studio: Google's developer-facing model sandbox landed in the top 10, showing strong adoption among builders.
3. NotebookLM: This research and writing assistant has maintained surprisingly strong, flat-to-increasing traffic since its launch, landing at #13.
4. Google Labs: Ranked #39, this consumer-facing sandbox saw a 15% traffic spike with the release of the Veo video model.
The Debut of Grok
xAI's Grok made a powerful entrance, debuting at #4 on the web list. Its integration into the X platform and its unique features have quickly attracted a large user base. Meta AI also began to make an appearance on the web list, signaling that the competition among large language model assistants is far from over.
The Multi-Faceted Role of Chinese AI
Chinese companies are making an impact in three distinct ways:
1. Domestic Focus: Products like Alibaba's Cork, ByteDance's Doubao, and Moonshot AI's Kimi rank high, serving the large domestic market where many Western AI products are unavailable.
2. Global Exports: A new wave of startups is developing AI for a global audience, particularly in the image and video generation space (e.g., Kling, PixVerse). These models are often distributed through their own properties or aggregated on US-based platforms.
3. Hybrid Model: Some companies, like Remini, successfully serve both domestic and international markets, with its top traffic sources bei...
Full story
Market Stabilization and Dominant Categories
The pace of change is slowing, with only 11 new names on the web list compared to 17 in the previous six-month period. This suggests that the market is beginning to consolidate around established players and clear use cases. Two categories continue to dominate consumer attention:
⦁ AI Companionship: This remains a major segment, with platforms like Character.ai, Janitor, and Spicy Chat consistently ranking high. The list saw three new companionship apps join the ranks, indicating sustained interest and innovation in this area.
⦁ Creative Tools: This category, encompassing image, video, and audio generation, maintains a strong presence with mainstays like Midjourney, Leonardo, and ElevenLabs.
The Rise of "Vibe Coding"
A significant new trend is the emergence of "vibe coding" platforms, which allow users to build applications with natural language. Loveable and Replit both made the main web list, demonstrating rapid growth. Analysis of these platforms reveals strong underlying business models:
⦁ High Revenue Retention: Many leading vibe coding platforms show revenue retention of 100% or more in the first three months. This suggests that users are not just experimenting but are upgrading plans and deriving continuous value, pointing towards prosumer or even enterprise use cases.
⦁ Usage Patterns: Interestingly, traffic to the creation platforms themselves (e.g.,
loveable.ai) is significantly higher than traffic to the applications hosted on their subdomains. This could imply two things: serious users are deploying projects on custom domains, or a large number of users are building "personal software" for themselves or a small circle, which is highly valuable to the individual even without attracting mass traffic.Competitive Landscape: Incumbents and International Players
Google's Strong Debut
With changes in how their traffic is tracked, four distinct Google properties made a significant debut on the web list:
1. Gemini: Ranked #2 on the web, capturing about 10% of ChatGPT's traffic. On mobile, it's much closer, with half of ChatGPT's traffic, driven primarily by Android users.
2. AI Studio: Google's developer-facing model sandbox landed in the top 10, showing strong adoption among builders.
3. NotebookLM: This research and writing assistant has maintained surprisingly strong, flat-to-increasing traffic since its launch, landing at #13.
4. Google Labs: Ranked #39, this consumer-facing sandbox saw a 15% traffic spike with the release of the Veo video model.
The Debut of Grok
xAI's Grok made a powerful entrance, debuting at #4 on the web list. Its integration into the X platform and its unique features have quickly attracted a large user base. Meta AI also began to make an appearance on the web list, signaling that the competition among large language model assistants is far from over.
The Multi-Faceted Role of Chinese AI
Chinese companies are making an impact in three distinct ways:
1. Domestic Focus: Products like Alibaba's Cork, ByteDance's Doubao, and Moonshot AI's Kimi rank high, serving the large domestic market where many Western AI products are unavailable.
2. Global Exports: A new wave of startups is developing AI for a global audience, particularly in the image and video generation space (e.g., Kling, PixVerse). These models are often distributed through their own properties or aggregated on US-based platforms.
3. Hybrid Model: Some companies, like Remini, successfully serve both domestic and international markets, with its top traffic sources bei...
Full story
tokenless.tech
The Top 100 Most Used AI Apps in 2025 | Tokenless
In the fifth edition of the a16z Consumer AI 100, an analysis of the most-used AI-native products reveals a market that is beginning to stabilize after a period of chaotic growth. Key trends identified include the continued dominance of AI companionship and…
Intelligence Isn't What You Think
Dr. Michael Timothy Bennett challenges conventional AI paradigms, arguing for a new approach inspired by the principles of living systems. He critiques the separation of software and hardware ("computational dualism"), redefines intelligence as efficient adaptation, and offers a novel theory of consciousness as a "tapestry of valence" essential for genuine intelligence.
Dr. Michael Timothy Bennett challenges conventional AI paradigms, arguing for a new approach inspired by the principles of living systems. He critiques the separation of software and hardware ("computational dualism"), redefines intelligence as efficient adaptation, and offers a novel theory of consciousness as a "tapestry of valence" essential for genuine intelligence.
Dr. Michael Timothy Bennett begins by challenging the conventional definitions and approaches to artificial intelligence, advocating for a perspective rooted in biology and embodied cognition. He favors Pei Wang's definition of intelligence as "adaptation with limited resources," emphasizing efficiency in terms of energy and data, a stark contrast to the "scale maxing" approach prevalent in Silicon Valley.
Critique of Formal Models and Computational Dualism
Bennett critiques formalisms like AIXI, which are based on Solomonoff induction and Occam's Razor (simplicity). While compelling, these models run into the problem of subjective complexity. The perceived simplicity of a model depends on the "interpreter" or the underlying language (abstraction layer) used by the agent. One can make an agent seem arbitrarily intelligent or stupid simply by changing the interpretative framework, making objective claims about performance difficult.
This leads to his central critique of modern AI, which he terms "computational dualism." He draws a provocative analogy to Cartesian dualism, where Descartes proposed the pineal gland as the interface between the non-physical mind and the physical body.
He argues that treating intelligence as pure software, separate from its hardware and environment, is a fundamental mistake. The behavior of any software is contingent on the interpreter that executes it, all the way down to the physical laws governing the hardware. To understand intelligence, one must analyze the system as a whole, including its embodiment and environment—a concept known in cognitive science as enactive cognition. This view also aligns with the concept of mortal computation, where the physical substrate is inseparable from the computation itself, as opposed to the abstract, "immortal" nature of a theoretical Turing machine.
A Biologically-Inspired Vision for Intelligence
Bennett advocates for an AI that emulates the properties of living systems: self-organization, decentralization, and multi-scale adaptation.
⦁ Causality and Abstraction: True intelligence requires learning a causal model of the world, starting with a representation of the self as a causal agent. An agent must be able to distinguish between its own actions causing a change and the environment causing a change. This "causal identity for self" is fundamental to subjective experience.
⦁ The Law of the Stack: Bennett proposes a principle where the adaptability of a system's high-level abstractions (e.g., software) is contingent on the adaptability of its lower-level abstractions (e.g., hardware). Biological systems excel because they delegate adaptation down the stack, allowing for flexibility at all levels. Computers, in contrast, are like an "inflexible bureaucracy that makes decisions only at the top."
⦁ Decentralization and Constraints: Drawing on the work of Michael Levin, Bennett views systems like cancer as a failure of collective intelligence, where a cell becomes informationally isolated from the whole and reverts to primitive behavior. This can happen when a system is over-constrained. Imposing too much top-down control eliminates potentially correct policies and forces components to "break off." This suggests that AI safety should focus on designing the entire system with appropriate, minimal constraints rather than trying to rigidly align a single component.
Consciousness as a Necessary Adaptation
Bennett directly confronts the "hard problem of consciousness" by arguing that philosophical zombies—beings identical to humans but without subjective experience—are impossible in any conceivable world. He posits that consciousness is not an epiphenomenal, non-causal addition to information processing but a necessary feature of a sufficiently adaptive, intelligent system.
His theory frames subjective experien...
Full story
Critique of Formal Models and Computational Dualism
Bennett critiques formalisms like AIXI, which are based on Solomonoff induction and Occam's Razor (simplicity). While compelling, these models run into the problem of subjective complexity. The perceived simplicity of a model depends on the "interpreter" or the underlying language (abstraction layer) used by the agent. One can make an agent seem arbitrarily intelligent or stupid simply by changing the interpretative framework, making objective claims about performance difficult.
This leads to his central critique of modern AI, which he terms "computational dualism." He draws a provocative analogy to Cartesian dualism, where Descartes proposed the pineal gland as the interface between the non-physical mind and the physical body.
We have just replaced the pineal gland with a touring machine.
He argues that treating intelligence as pure software, separate from its hardware and environment, is a fundamental mistake. The behavior of any software is contingent on the interpreter that executes it, all the way down to the physical laws governing the hardware. To understand intelligence, one must analyze the system as a whole, including its embodiment and environment—a concept known in cognitive science as enactive cognition. This view also aligns with the concept of mortal computation, where the physical substrate is inseparable from the computation itself, as opposed to the abstract, "immortal" nature of a theoretical Turing machine.
A Biologically-Inspired Vision for Intelligence
Bennett advocates for an AI that emulates the properties of living systems: self-organization, decentralization, and multi-scale adaptation.
⦁ Causality and Abstraction: True intelligence requires learning a causal model of the world, starting with a representation of the self as a causal agent. An agent must be able to distinguish between its own actions causing a change and the environment causing a change. This "causal identity for self" is fundamental to subjective experience.
⦁ The Law of the Stack: Bennett proposes a principle where the adaptability of a system's high-level abstractions (e.g., software) is contingent on the adaptability of its lower-level abstractions (e.g., hardware). Biological systems excel because they delegate adaptation down the stack, allowing for flexibility at all levels. Computers, in contrast, are like an "inflexible bureaucracy that makes decisions only at the top."
⦁ Decentralization and Constraints: Drawing on the work of Michael Levin, Bennett views systems like cancer as a failure of collective intelligence, where a cell becomes informationally isolated from the whole and reverts to primitive behavior. This can happen when a system is over-constrained. Imposing too much top-down control eliminates potentially correct policies and forces components to "break off." This suggests that AI safety should focus on designing the entire system with appropriate, minimal constraints rather than trying to rigidly align a single component.
Consciousness as a Necessary Adaptation
Bennett directly confronts the "hard problem of consciousness" by arguing that philosophical zombies—beings identical to humans but without subjective experience—are impossible in any conceivable world. He posits that consciousness is not an epiphenomenal, non-causal addition to information processing but a necessary feature of a sufficiently adaptive, intelligent system.
His theory frames subjective experien...
Full story
tokenless.tech
Intelligence Isn't What You Think | Tokenless
Dr. Michael Timothy Bennett challenges conventional AI paradigms, arguing for a new approach inspired by the principles of living systems. He critiques the separation of software and hardware ("computational dualism"), redefines intelligence as efficient…
The Moonshot Podcast Deep Dive: Andrew Ng on Deep Learning and Google Brain
Andrew Ng, founder of Google Brain and DeepLearning.AI, discusses the history of neural networks and the foundational ideas that led to modern AI breakthroughs. He covers the controversial early bets on scale and general-purpose algorithms, the technical innovations behind Transformers, and the future democratizing effect of artificial intelligence.
Andrew Ng, founder of Google Brain and DeepLearning.AI, discusses the history of neural networks and the foundational ideas that led to modern AI breakthroughs. He covers the controversial early bets on scale and general-purpose algorithms, the technical innovations behind Transformers, and the future democratizing effect of artificial intelligence.
The creation and success of Google Brain were driven by two core, and at the time, controversial hypotheses. The first was that scale matters. Around 2010, the prevailing academic view favored inventing novel algorithms over simply building bigger neural networks. Despite advice from senior figures that focusing on scale was not a good career move, the data generated by my students at Stanford showed a clear, undeniable trend: for every model we tried, performance improved as the model size increased. This data provided the confidence to pursue scale relentlessly.
The second core idea was the "one learning algorithm" hypothesis. Inspired by neuro-rewiring experiments in the brain, where one part of the brain tissue can learn a new function (e.g., learning to "see" after previously learning to "hear"), the question was whether we needed thousands of hand-engineered algorithms for different tasks. The hypothesis was that a single, general-purpose learning algorithm could, if fed different data (text, images, audio), learn to process each type effectively. This was heresy at the time in a field dominated by specialized models, but it has since become the foundation of modern AI.
The Early Days: Pushing Against the Current
In the early 2010s, neural networks were largely out of favor in the AI community, having been in the "wilderness" for a long time. The path to publishing in top conferences was through clever mathematical proofs, not demonstrating the power of scaled-up systems. This focus on scale was seen as lacking intellectual rigor. For researchers who had spent decades meticulously tweaking specific algorithms, the idea that a large model fed with massive amounts of data could outperform their work was emotionally wrenching.
The Google Brain project began at X after Sebastian Thrun, who deserves immense credit for its inception, encouraged me to pitch the idea of using Google's massive compute infrastructure to Larry Page. The partnership with Jeff Dean was crucial; he brought the deep computer systems expertise, while I brought the machine learning perspective. This combination allowed us to effectively leverage Google's infrastructure to scale our algorithms.
Technical Innovations and Breakthroughs
Hardware and Architecture
Initially, we were slower to embrace GPUs, partly because Google's CPU infrastructure was so brilliant and there were concerns about creating a heterogeneous and hard-to-manage compute environment. However, the need for parallel computation was undeniable.
This philosophy of designing for parallel hardware was a core, if sometimes underappreciated, aspect of the Transformer paper. Before Transformers, models for tasks like translation tried to ingest and memorize an entire sentence before generating the output. The Transformer's key innovation was the attention mechanism, which allowed the model to focus on specific, relevant parts of the input sentence as it generated the output. Crucially, the entire architecture was designed so that every step was highly parallelizable, making it a perfect fit for GPUs and TPUs. This design choice was what unlocked its ability to scale and become the foundation for today's large models.
The "Cat Video" Paper
Google Brain's "coming out moment" was the 2012 paper demonstrating unsupervised learning. We built what was likely the largest neural network in the world at the time and trained it by showing it unlabeled frames from YouTube videos. One day, my student Quoc Le showed me that a neuron in the network had learned to respond specifically to images of cats. The algorithm had, on its own and without any human labels, discovered the concept of a "cat." This was a massive breakthrough, proving that models could learn meaningful features from the world's vast stores of unlabeled data.
From Research to Real-World Application
To prove our value, we collaborated with teams across Google. Our early succe...
Full story
The second core idea was the "one learning algorithm" hypothesis. Inspired by neuro-rewiring experiments in the brain, where one part of the brain tissue can learn a new function (e.g., learning to "see" after previously learning to "hear"), the question was whether we needed thousands of hand-engineered algorithms for different tasks. The hypothesis was that a single, general-purpose learning algorithm could, if fed different data (text, images, audio), learn to process each type effectively. This was heresy at the time in a field dominated by specialized models, but it has since become the foundation of modern AI.
The Early Days: Pushing Against the Current
In the early 2010s, neural networks were largely out of favor in the AI community, having been in the "wilderness" for a long time. The path to publishing in top conferences was through clever mathematical proofs, not demonstrating the power of scaled-up systems. This focus on scale was seen as lacking intellectual rigor. For researchers who had spent decades meticulously tweaking specific algorithms, the idea that a large model fed with massive amounts of data could outperform their work was emotionally wrenching.
The Google Brain project began at X after Sebastian Thrun, who deserves immense credit for its inception, encouraged me to pitch the idea of using Google's massive compute infrastructure to Larry Page. The partnership with Jeff Dean was crucial; he brought the deep computer systems expertise, while I brought the machine learning perspective. This combination allowed us to effectively leverage Google's infrastructure to scale our algorithms.
Technical Innovations and Breakthroughs
Hardware and Architecture
Initially, we were slower to embrace GPUs, partly because Google's CPU infrastructure was so brilliant and there were concerns about creating a heterogeneous and hard-to-manage compute environment. However, the need for parallel computation was undeniable.
This philosophy of designing for parallel hardware was a core, if sometimes underappreciated, aspect of the Transformer paper. Before Transformers, models for tasks like translation tried to ingest and memorize an entire sentence before generating the output. The Transformer's key innovation was the attention mechanism, which allowed the model to focus on specific, relevant parts of the input sentence as it generated the output. Crucially, the entire architecture was designed so that every step was highly parallelizable, making it a perfect fit for GPUs and TPUs. This design choice was what unlocked its ability to scale and become the foundation for today's large models.
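The mechanism described above can be sketched in a few lines of NumPy. This is a minimal, single-head scaled dot-product attention, not the full Transformer from the paper; note that every step is a dense matrix operation, which is the parallel-hardware fit the passage highlights.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted mix of the rows of V, with weights
    given by a softmax over query-key similarity scores."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V                                   # (n_q, d_v) outputs

# Every operation above is a matrix product or an elementwise map, so the
# whole computation parallelizes naturally on GPUs and TPUs.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Contrast this with a recurrent model, which must process tokens one after another: here nothing in the computation forces sequential steps.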
The "Cat Video" Paper
Google Brain's "coming out moment" was the 2012 paper demonstrating unsupervised learning. We built what was likely the largest neural network in the world at the time and trained it by showing it unlabeled frames from YouTube videos. One day, my student Quoc Le showed me that a neuron in the network had learned to respond specifically to images of cats. The algorithm had, on its own and without any human labels, discovered the concept of a "cat." This was a massive breakthrough, proving that models could learn meaningful features from the world's vast stores of unlabeled data.
From Research to Real-World Application
To prove our value, we collaborated with teams across Google. Our early succe...
The Moonshot Podcast Deep Dive: Andrew Ng on Deep Learning and Google Brain | Tokenless
Moonshot Podcast Deep Dive: André Prager on Prototyping at Wing
André Prager, former Chief Engineer at Wing, discusses the core engineering philosophy of simplicity and cost-effectiveness that enabled the drone delivery service. He covers the design of key systems like the passive charging pad, the intelligent winch, the non-powered autoloader, and the iterative process of making the drones acoustically unobtrusive.
At the heart of Wing's development was a core philosophy: radical simplicity. The most scalable, robust, and affordable system is one with the fewest components. As former Chief Engineer André Prager puts it, "Everything that's not there doesn't need to be developed. Everything that's not there doesn't break." This principle guided the team away from complex, expensive solutions towards elegant, minimalist designs capable of scaling to a billion flights a year.
The Engineer as an Artist
Prager views engineering as more akin to art than pure mathematics—a creative process of discovery driven by curiosity. This mindset, rooted in childhood experiments like building an electric skateboard in the early '90s, emphasizes building unique things where the outcome isn't known beforehand. He argues that while math is easy to test, the creative and associative thinking required for innovative engineering is harder to measure but far more valuable. This approach seeks to find engineers who can make novel connections between different domains, a skill Prager believes is more difficult to find than pure mathematical proficiency.
The Real Challenge: The System, Not Just the Drone
When Prager joined Wing, the core architecture of the aircraft—a hybrid design with separate propellers for vertical hover and forward flight—was largely figured out. The fundamental challenge wasn't just keeping the drone in the air, but solving for "everything else." This included the entire operational ecosystem:
⦁ Payload Management: How to get packages onto and off the drone efficiently.
⦁ Infrastructure: Where the drones "sleep," how they charge, and how they are managed.
⦁ Air Traffic Management: How to coordinate a fleet of drones operating simultaneously.
Solving these problems required a relentless focus on cost and scalability, rejecting complex, demo-friendly prototypes in favor of systems that could be affordably deployed worldwide.
Engineering Simplicity in Action
Several key subsystems exemplify Wing's philosophy of minimalist, hardware-simple, and software-intelligent design.
The Charging Pad
The goal was a landing pad that required no complex robotics, manipulation, or precise alignment. The team avoided heavy wireless inductive charging coils due to weight constraints. Instead, they developed a passive contact-based system. The drone has small, conductive "feet," and the landing pad features a specific geometric pattern of positive and negative contacts. This geometry ensures that no matter how the drone lands on the 3-by-3-foot pad, its feet complete a circuit and charging begins. The pad itself is simple, robust, and inexpensive, resembling a large printed circuit board (PCB).
The Winch and Delivery Hook
The system for lowering and retrieving packages could have been incredibly complex. The final design, however, features a hook with no moving parts and no electronics. This result was the product of a two-year development process with over 90 prototypes.
The intelligence resides in the winch motor and the software. By sensing force and position through the tether—much like sensing vibrations on a string—the system can infer a surprising amount of information:
⦁ When the package has touched the ground.
⦁ If a person is pulling on the line.
⦁ If the hook is snagged on an obstacle.
This allows for a simple, lightweight hardware implementation where intelligence and new features can be added over time through software updates.
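The inference described above can be illustrated with a toy classifier. Everything here is invented for illustration — the thresholds, force values, and event names are hypothetical stand-ins, not Wing's actual sensing logic, which would use position as well as force and far richer signal processing.

```python
# Hypothetical sketch: infer delivery events from a single tether-force reading.
# All numbers and labels are made up for illustration.

def classify_tether_state(force_n, package_weight_n=20.0, slack_margin_n=2.0):
    """Map a tether-force reading (newtons) to an inferred event."""
    if force_n < slack_margin_n:
        return "package_on_ground"    # tether went slack: touchdown
    if force_n > package_weight_n + slack_margin_n:
        return "pulled_or_snagged"    # extra load: person pulling or hook caught
    return "descending"               # roughly the package's weight: normal lowering

readings = [20.1, 19.8, 1.2, 0.5, 35.0]
print([classify_tether_state(f) for f in readings])
# ['descending', 'descending', 'package_on_ground', 'package_on_ground', 'pulled_or_snagged']
```

The design point survives the simplification: the hook stays dumb, and all of this logic lives in software, where it can be refined after the hardware ships.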
The Autoloader
To automate the process of attaching a package, Prager developed the "autoloader," a device with no moving parts, no power, and no electronics. Inspired by the simplicity of a restaurant patio umbrella, the device consists of two arms that catch the drone's tether as it hovers nearby. The drone performs a slight sideways movement, which guides the hook through a channel and attaches it to the package via friction. This...
As the field of artificial intelligence evolves at a breakneck pace, it's crucial for technology professionals to stay current with its core concepts. Here are seven essential AI terms that are shaping the future of the industry.
1. Agentic AI (AI Agents)
AI agents represent a shift from reactive models, like chatbots that only respond to one prompt at a time, to proactive systems that can reason and act autonomously to achieve goals. These agents operate in a continuous cycle:
1. Perceive: They assess their current environment.
2. Reason: They determine the next best steps to achieve a predefined goal.
3. Act: They execute the plan formulated during the reasoning stage.
4. Observe: They analyze the results of their actions and repeat the cycle.
This autonomous nature allows them to fulfill complex roles, such as a travel agent booking a trip, a data analyst identifying trends in reports, or a DevOps engineer detecting anomalies, testing fixes in containers, and rolling back faulty deployments.
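The four-step cycle can be sketched as a small control loop. The "environment" and "policy" below are toy stand-ins (a counter the agent drives toward a goal), not any particular agent framework; in a real agent the policy would be an LLM call and the actions would be tool invocations.

```python
# Minimal perceive-reason-act-observe loop with a toy environment.

def run_agent(goal, environment, policy, max_steps=10):
    for _ in range(max_steps):
        state = environment["observe"]()      # 1. Perceive
        if state == goal:
            return state
        action = policy(state, goal)          # 2. Reason: pick the next step
        environment["act"](action)            # 3. Act: execute it
    return environment["observe"]()           # 4. Observe the final result

# Toy environment: a counter the agent must drive to the goal value.
counter = {"value": 0}
env = {
    "observe": lambda: counter["value"],
    "act": lambda a: counter.update(value=counter["value"] + a),
}
policy = lambda state, goal: 1 if state < goal else -1  # trivial "reasoning"
print(run_agent(5, env, policy))  # 5
```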
2. Large Reasoning Models
AI agents are often powered by a specialized class of LLMs known as Large Reasoning Models (LRMs). Unlike standard LLMs that generate responses immediately, LRMs are fine-tuned to work through problems step-by-step. This methodical approach is essential for agents planning complex, multi-stage tasks.
The training process involves:
⦁ Using datasets with verifiably correct answers, such as math problems or code that can be tested by compilers.
⦁ Employing reinforcement learning to teach the model how to generate reasoning sequences that lead to correct final answers.
When a chatbot pauses and displays a "thinking..." message, it's often an LRM at work, generating an internal chain of thought to deconstruct a problem before providing a coherent response.
3. Vector Databases
Vector databases are a fundamental component of modern AI infrastructure, particularly for handling unstructured data. Instead of storing data like text or images as raw files, an embedding model converts this data into vectors—long lists of numbers that capture the data's semantic meaning and context.
The primary advantage is that similarity searches become mathematical operations. By finding vectors that are numerically close to each other in the multi-dimensional "embedding space," the system can identify semantically similar content. For example, a search for a picture of a mountain can find other similar images, related text articles, or even thematically similar music files based on their vector proximity.
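A minimal sketch of that mathematical operation, using made-up three-dimensional vectors in place of real embeddings (production embeddings have hundreds or thousands of dimensions and come from a trained model):

```python
import numpy as np

# Toy embedding space: each item is a vector, and "similar" means
# "close in direction," measured by cosine similarity.
docs = {
    "mountain photo": np.array([0.9, 0.1, 0.0]),
    "alpine hiking":  np.array([0.8, 0.3, 0.1]),
    "jazz recording": np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([1.0, 0.2, 0.0])  # hypothetical embedding of a mountain picture
ranked = sorted(docs, key=lambda name: cosine(query, docs[name]), reverse=True)
print(ranked)  # mountain-related items rank above the music file
```

A vector database wraps exactly this operation in an index (e.g., approximate nearest-neighbor search) so it stays fast at millions of vectors.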
4. Retrieval-Augmented Generation (RAG)
Retrieval-Augmented Generation (RAG) is a powerful technique that leverages vector databases to make LLM responses more accurate and context-aware. It enriches user prompts with relevant, external information before the LLM generates a response.
The process works as follows:
1. A user's input prompt is converted into a vector using an embedding model.
2. This vector is used to perform a similarity search in a vector database containing a specific knowledge base (e.g., internal company documents).
3. The relevant information retrieved from the database is then inserted into the original prompt.
4. This augmented prompt is sent to the LLM, which now has the necessary context to generate a factually grounded answer.
For instance, asking a question about company policy can trigger a RAG system to pull the relevant section from the employee handbook and include it in the prompt, ensuring the LLM's answer is accurate.
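The four steps reduce to a short pipeline. In this sketch, word overlap stands in for embedding-based vector search, and the two-document list stands in for a vector database; the policy text is invented for illustration.

```python
# Skeleton of the four RAG steps with toy stand-ins for each component.

knowledge_base = [
    "Vacation policy: employees accrue 20 days of paid leave per year.",
    "Expense policy: receipts are required for purchases over 25 dollars.",
]

def similarity(question, doc):
    # Stand-in for embedding-space similarity: count shared lowercase words.
    return len(set(question.lower().split()) & set(doc.lower().split()))

def rag_prompt(question):
    # Steps 1-2: "embed" the query and find the nearest document.
    context = max(knowledge_base, key=lambda d: similarity(question, d))
    # Steps 3-4: insert the retrieved text into the prompt sent to the LLM.
    return f"Context: {context}\n\nQuestion: {question}"

print(rag_prompt("How many vacation days do I get per year?"))
```

Swapping the word-overlap function for a real embedding model and the list for a vector database gives the production architecture without changing the shape of the code.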
5. Model Context Protocol (MCP)
For LLMs to be truly useful, they must interact with a wide range of external data sources, services, and tools. The Model Context Protocol (MCP) is an emerging standard designed to unify these interactions.
Currently, developers often have to build custom, one-off connections for each new tool or database an LLM needs to access. MCP aims to solve th...
7 AI Terms You Need to Know: Agents, RAG, ASI & More | Tokenless
Small Language Models are the Future of Agentic AI Reading Group
This paper challenges the prevailing "bigger is better" narrative in AI, arguing that Small Language Models (SLMs) are not just sufficient but often superior for agentic AI tasks due to their efficiency, speed, and specialization. The discussion explores the paper's core arguments, counterarguments, and the practical implications of adopting a hybrid LLM-SLM approach.
The paper "Small Language Models are the Future of Agentic AI" posits that the trend toward ever-larger models may be misguided for agentic systems. Instead, it argues that a heterogeneous ecosystem of smaller, specialized models offers a more powerful, efficient, and economical path forward. The core intuition is to scale out by composing specialized "Lego" blocks (SLMs) rather than scaling up a single monolithic model (LLM).
The Three Pillars of the SLM Argument
The authors build their case on three primary arguments:
1. SLMs are Powerful Enough: Recent advancements have produced SLMs (e.g., Microsoft's Phi series) that achieve competitive performance on benchmarks for reasoning, language, and coding tasks when compared to models 10-20 times their size. For the vast majority of agentic tasks, which often involve a limited subset of an LLM's full capabilities, this level of performance is sufficient. The broad, general intelligence of a massive LLM can be "intelligence overkill" when an agent is only performing a narrow, specific function.
2. SLMs are Operationally Superior:
⦁ Performance: They offer significantly lower inference latency and require less memory, making them faster and easier to deploy.
⦁ Flexibility: Their small size allows for greater operational flexibility, including deployment on edge devices and consumer-grade GPUs without specialized infrastructure.
⦁ Behavioral Alignment: Agentic systems require predictable, structured interactions, often using formats like JSON or YAML. It's easier and more reliable to fine-tune an SLM to consistently produce a specific format, reducing the risk of hallucinations or formatting errors that can occur with a general-purpose LLM trained on countless formats.
⦁ Heterogeneity: Agentic workflows are naturally composed of diverse tasks with varying complexity. A system can dynamically choose the best model for each sub-task—a simple SLM for a simple task and perhaps a more powerful model only when necessary.
3. SLMs are More Economical:
⦁ Inference Costs: Serving a 7B parameter SLM can be 10-30 times cheaper than serving a 70B+ parameter LLM.
⦁ Operational Simplicity: SLMs avoid the complexities of multi-GPU and multi-node parallelization, simplifying infrastructure management and maintenance.
⦁ Fine-Tuning Agility: Fine-tuning an SLM requires only a few GPU hours, enabling rapid iteration and specialization, compared to the weeks and significant resources needed for large models.
⦁ Parameter Utilization: SLMs are fundamentally more efficient, activating a higher percentage of their parameters for a given task compared to the sparse activation in very large models.
A compelling argument is that agentic systems naturally evolve toward SLMs. Each interaction an agent has (prompt, output, user feedback) generates valuable training data. Even if a system starts with an LLM, this continuous stream of task-specific data creates the perfect conditions for optimizing and distilling that capability into a smaller, expert model.
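The heterogeneity argument above — route each sub-task to the cheapest capable model — can be made concrete with a toy router. The model names, capability scores, and per-call costs below are invented for illustration; a real router would classify task difficulty with a learned model rather than a hand-set integer.

```python
# Hypothetical cost-aware model router. All names and numbers are made up.

MODELS = [
    {"name": "slm-2b",  "max_complexity": 1, "cost_per_call": 0.001},
    {"name": "slm-7b",  "max_complexity": 2, "cost_per_call": 0.004},
    {"name": "llm-70b", "max_complexity": 3, "cost_per_call": 0.060},
]

def route(task_complexity):
    """Pick the cheapest model whose capability covers the task (1=easy, 3=hard)."""
    capable = [m for m in MODELS if m["max_complexity"] >= task_complexity]
    return min(capable, key=lambda m: m["cost_per_call"])["name"]

workflow = [1, 1, 2, 3, 1]  # complexities of sub-tasks in an agent workflow
print([route(c) for c in workflow])
# ['slm-2b', 'slm-2b', 'slm-7b', 'llm-70b', 'slm-2b']
```

With these illustrative prices, only one of the five sub-tasks pays the large-model rate — the economic core of the paper's argument.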
Counterarguments and Practical Challenges
The discussion also highlighted significant counterarguments and real-world barriers:
⦁ Scaling Laws and Generalization: LLMs benefit from scaling laws, giving them a more nuanced and abstract understanding of concepts, multi-linguality, and multi-modality that SLMs may lack. This deep generalization might be crucial for a top-level "supervisor" agent that needs to orchestrate complex tasks.
⦁ The Cost of a Fleet: While a single SLM is cheap, managing an entire fleet of specialized models introduces its own operational complexity and costs, including infrastructure, talent, and orchestration. Centralized LLM endpoints can benefit from higher utilization, potentially making them more cost-effective at scale than multiple, under-utilized SLM endpoints.
⦁ Real-World Ne...
The Day AI Solves My Puzzles Is The Day I Worry (Prof. Cristopher Moore)
Professor Cristopher Moore of the Santa Fe Institute discusses the surprising effectiveness of AI, arguing it stems from the rich, non-random structure of the real world. He explores the limits of current models, the nature of intelligence as creative problem-solving and abstraction, the importance of grounding and shared reality, and the profound implications of computational irreducibility and the need for algorithmic transparency in high-stakes applications.
Cristopher Moore, a self-described "frog" in the world of science, prefers diving deep into concrete problems over taking a high-level "bird's-eye view". This perspective informs his analysis of artificial intelligence, computational theory, and the nature of intelligence itself.
The Structure of the World and the Success of AI
The surprising effectiveness of large models like transformers stems not from a magical architecture, but from the nature of the data they are trained on. Real-world data is neither completely random nor adversarially designed to be difficult. Instead, it is filled with rich structure, patterns, and hierarchies. Moore argues: "the real world presents us with examples of these problems where there is so much rich structure to sink your teeth into."
Any sufficiently rich architecture can learn to exploit this structure. We will likely look back and realize that what truly matters is that "the world is structured and any architecture which is capable of capturing some of that structure is going to do well at prediction." This contrasts with theoretical work in computer science and statistical physics, which often proves hardness based on worst-case adversarial examples or purely random data models. While concepts like phase transitions—sharp shifts in problem difficulty based on signal-to-noise ratios, analogous to a magnet losing its field at a critical temperature—are powerful for understanding random problems, they don't capture the full picture of real-world AI performance.
Intelligence as Creative Problem-Solving
Despite their success, current models falter on tasks requiring novel reasoning and abstraction, such as modern Sudoku variants with complex, layered rules. These puzzles, designed by humans for humans, require insights and the creation of new logical constraints on the fly, a process current AI struggles with. Moore notes that the ability of AI to absorb rules and perform intelligent search "hasn't happened yet."
This highlights a deeper aspect of intelligence: the ability to transform hard problems into simpler ones. It's about inventing heuristics and new forms of "partial knowledge" to navigate a problem space. Humans fluidly switch their approach, asking "which piece can fit here?" and then "where can this piece go?" This process of formalization and mathematization is a crucial, creative step that often constitutes 90% of the work in scientific modeling. True intelligence involves inventing the variables and constraints to address a problem, not just solving a pre-defined one.
Grounding, Meaning, and Shared Reality
A significant limitation of current language models is their lack of grounding in the physical world. When asked to summarize a nuanced essay, a model might regress to the mean, producing a "lowest common denominator" summary based on common arguments about the topic, completely missing the author's unique, subtle point. This indicates a failure to grasp meaning beyond statistical correlation.
Moore, a self-professed Platonist, believes in a shared, objective reality of abstract concepts. When two people visualize a cube, they perceive the same object with 8 corners and 12 edges. This shared perception allows for meaningful agreement and correction. He suggests that once AI systems can utilize multimodal "workspaces"—to doodle, run code, and manipulate virtual objects—they will move closer to this kind of grounded understanding.
Computation, Universality, and Irreducibility
The conversation delves into the fundamental nature of computation and its relationship to intelligence.
⦁ Computational Irreducibility: Drawing on Stephen Wolfram's work, Moore discusses systems where there are no analytical shortcuts to predict a future state. To know the outcome, "you have to do the work" of simulating every intervening step. While our only method for proving a system is irreducible is to build a universal c...
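The "you have to do the work" claim is easy to demonstrate with a one-dimensional cellular automaton. Below is Wolfram's Rule 110 (the rule most associated with these irreducibility arguments), advanced step by step on a small ring of cells; as far as is known, there is no closed-form shortcut that predicts a cell's state far in the future without simulating every intervening step.

```python
# Rule 110 cellular automaton: each cell's next state is read off from the
# bits of the number 110, indexed by the 3-cell neighborhood (left, self, right).
RULE = 110

def step(cells):
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

state = [0] * 15 + [1]       # a single live cell on a ring of 16
for _ in range(8):
    state = step(state)       # the only way to know step 8 is to compute steps 1-7
print(sum(state))             # live-cell count after 8 steps
```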
Full story
tokenless.tech
The Day AI Solves My Puzzles Is The Day I Worry (Prof. Cristopher Moore) | Tokenless
Professor Cristopher Moore of the Santa Fe Institute discusses the surprising effectiveness of AI, arguing it stems from the rich, non-random structure of the real world. He explores the limits of current models, the nature of intelligence as creative problem…
921: NPUs vs GPUs vs CPUs for Local AI Workloads — with Dell’s Ish Shah and Shirish Gupta
Shirish Gupta and Ish Shah from Dell Technologies explore the evolving landscape of AI hardware. They discuss why Windows, enhanced by WSL 2, remains a dominant platform for developers, and delve into the distinct roles of CPUs, GPUs, and the increasingly important Neural Processing Units (NPUs). The conversation covers the trade-offs between local and cloud computing for AI workloads and introduces new hardware, like workstations with discrete NPUs, that are making on-device AI more powerful and accessible than ever.
The Operating System Debate: Windows for AI Development
While the AI and data science communities often gravitate towards Unix-based systems, Windows remains a formidable platform for development. Statistically, Windows is the most popular OS among software developers, used by approximately 64% of them. Its familiarity, user-friendliness, and compatibility with essential productivity applications make it an accessible starting point. For enterprise environments, Windows offers streamlined IT management and security integrations.
A common best practice is to develop in an environment that mirrors production, and since most large-scale ML deployments run on Linux, this has traditionally been a point of friction. However, the gap is closing significantly with Windows Subsystem for Linux (WSL 2), which allows a full Linux kernel to run directly on Windows. This provides developers with the "best of both worlds": the productivity and enterprise benefits of Windows alongside the native command-line tools and environment of Linux, eliminating the need for dual-booting.
A New Trinity of Processors: CPU, GPU, and NPU
The modern AI landscape requires a nuanced approach to processing, moving beyond a one-size-fits-all model. The choice of hardware is no longer simple; the matrix of options has expanded to roughly an "8x8" grid of configurations tailored to specific needs. The key is using the right tool for the right job.
The Rise of the NPU (Neural Processing Unit)
An NPU is a specialized processor purpose-built to handle AI and ML workloads, particularly the vector math that forms the foundation of neural networks.
⦁ Efficiency is Key: NPUs are architected for maximum performance per watt. This is critical for mobile devices and laptops, where they can handle tasks like background blur or speech-to-text without draining the battery, offloading this work from the CPU.
⦁ Integrated vs. Discrete: NPUs can be integrated into the main chipset (SoC) or exist as powerful discrete cards. Dell has announced the Dell Pro Max mobile workstation, which will feature a discrete NPU, capable of running a 109-billion-parameter model locally. This is a game-changer for use cases requiring high performance in offline, secure, or latency-sensitive environments.
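The "vector math" being accelerated is, at bottom, the multiply-accumulate pattern of a dense neural-network layer. A minimal pure-Python sketch of that operation (the weights, bias, and sizes below are illustrative, not from any real model):

```python
def dense_layer(x, weights, bias):
    """One fully connected layer: y = relu(W @ x + b).
    Each output element is a dot product — the multiply-accumulate
    workload NPUs are architected to run at high performance per watt."""
    out = []
    for row, b in zip(weights, bias):
        acc = sum(w * xi for w, xi in zip(row, x)) + b
        out.append(max(0.0, acc))  # ReLU keeps the layer nonlinear
    return out

# Illustrative 2x3 weight matrix and bias vector
W = [[0.5, -1.0, 0.25],
     [1.0,  0.0, -0.5]]
b = [0.1, 3.0]
print(dense_layer([1.0, 2.0, 4.0], W, b))  # [0.0, 2.0]
```

An NPU executes billions of these multiply-accumulate steps in parallel fixed-function hardware, which is why it can run background blur or speech-to-text at a fraction of the CPU's power draw.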
GPUs: The Powerhouse for Scalable Performance
GPUs remain the champions of parallel processing and are currently more versatile and scalable than NPUs for high-end AI tasks.
⦁ High-End Training and Inference: New hardware like the Nvidia Blackwell RTX Pro GPUs continues to push boundaries. For instance, the GB10 appliance will be capable of fine-tuning a 200-billion-parameter model locally.
⦁ Versatility: GPUs are not only for AI; they accelerate a wide range of tasks from CAD software to gaming, offering a dual-use benefit for many users. Their primary trade-off, especially on client devices, is higher power consumption compared to NPUs.
The CPU: The Enduring Workhorse
The CPU is still the core of the system, handling general-purpose tasks and running the operating system. If a system lacks a dedicated NPU or GPU, the CPU must also take on the new AI workloads, potentially impacting overall performance. Modern architectures like Intel's Lunar Lake are enhancing CPU capabilities with features like on-chip memory for faster data transfer and powerful integrated GPUs that can rival some entry-level discrete cards.
The Developer Experience: Bridging Hardware and Software
To simplify the complexity of targeting these different processors, Dell has introduced Dell Pro AI Studio. This software layer abstracts away the underlying toolchains (like Intel OpenVINO) required to run models on specific silicon (Intel, AMD, Qualcomm, etc.).
⦁ Democratizing Access: It dramatically reduces development time. In one case study, a task that took a team three months to complete using standard toolchains was accompl...
Full story
Why language models hallucinate, revisiting Amodei’s code prediction and AI in the job market
Experts discuss an OpenAI paper that reframes hallucinations as a feature driven by training incentives, not just a bug. The panel also revisits Dario Amodei's prediction on AI coding, explores AI's chaotic impact on the job market, and imagines the future of running LLMs on business-card-sized devices.
Reframing Language Model Hallucinations
A recent paper from OpenAI, "Why Language Models Hallucinate," suggests that the issue of hallucination is more complex than a simple bug to be fixed. The core argument is that the problem is inherent to the current model training paradigm.
Models are incentivized to guess rather than admit uncertainty. During training, particularly with reinforcement learning, a model receives a reward for a correct answer but gets zero points for stating "I don't know." This reward structure encourages the model to take a chance on an answer, as there's a possibility of being correct. This is compounded by the landscape of external evaluations and benchmarks, which often rely on binary (yes/no) scoring. Model providers, aiming for the highest possible scores on these leaderboards, are therefore disincentivized from training models that frequently express uncertainty.
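The incentive can be made explicit with a line of expected-value arithmetic. Under binary scoring, a guess with any nonzero chance of being correct strictly beats abstaining; only a penalty for wrong answers changes the calculus. A small illustrative sketch (the probabilities and penalty values are made up for demonstration):

```python
def expected_score(p_correct, wrong_penalty=0.0):
    """Expected reward for answering: +1 if right (probability p_correct),
    -wrong_penalty if wrong. Abstaining ("I don't know") scores 0
    under the binary benchmarks the paper criticizes."""
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

# Binary scoring: even a wild guess strictly beats abstaining.
print(expected_score(0.2))                     # 0.2 > 0, so guess
# Penalizing confident errors flips the incentive for unsure models:
# guessing now pays only when p_correct > penalty / (1 + penalty).
print(expected_score(0.2, wrong_penalty=1.0))  # negative, so abstain
```

Solving expected_score(p, c) > 0 gives p > c / (1 + c): with no penalty (c = 0), guessing is always worthwhile, which is exactly the incentive structure the paper identifies.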
The paper challenges the myth that simply increasing model accuracy will decrease hallucinations. The authors argue that accuracy and hallucination are different measures. The proposed solution is not to eliminate guessing but to achieve better calibration between accuracy and uncertainty through more sophisticated reward functions and evaluations.
This leads to a broader discussion on the role of hallucinations:
⦁ A Tool for Creativity: For certain use cases, such as generating creative text or adopting a persona (e.g., "act like a pirate"), hallucination is not a bug but a feature. It represents a form of creative inference, combining disparate concepts to generate novel outputs. A world without this capability would lead to rigid, uncreative models.
⦁ Need for a Better Definition: The community lacks a clear, agreed-upon definition of what constitutes a hallucination versus the model simply being incorrect due to conflicting data in its training set.
⦁ A Multi-Faceted Solution: Eliminating hallucinations entirely is likely impossible. The path forward involves a combination of better-calibrated models and a suite of external tools, including guardrails, symbolic approaches, and Retrieval-Augmented Generation (RAG), to verify model outputs against grounded context.
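As a toy sketch of the verification idea (not any specific product's API), a guardrail can flag a generated claim as unsupported when no retrieved passage covers enough of it. The threshold and examples below are illustrative:

```python
import re

def is_grounded(claim, passages, threshold=0.9):
    """Crude lexical guardrail: the fraction of the claim's words found
    in the best-matching retrieved passage. Real systems use embedding
    similarity or entailment models; this only sketches the idea."""
    def tokens(s):
        return set(re.findall(r"[a-z0-9]+", s.lower()))

    words = tokens(claim)
    if not words or not passages:
        return False
    best = max(len(words & tokens(p)) / len(words) for p in passages)
    return best >= threshold

docs = ["The Eiffel Tower was completed in 1889."]
print(is_grounded("Eiffel Tower completed in 1889", docs))  # True
print(is_grounded("Eiffel Tower completed in 1902", docs))  # False
```

Word overlap is deliberately naive here — it misses paraphrases and can be fooled by near-matches — which is why production guardrails lean on embeddings or natural-language-inference models instead.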
Revisiting the 90% AI Coding Prediction
In March, Anthropic's CEO Dario Amodei predicted that within six months, AI would be writing 90% of the code for software developers. With that timeframe now passed, the panel reflected on its accuracy.
The key distinction is between automation (replacing developers) and augmentation (assisting developers). While 90% of developers have not lost their jobs, it is plausible that AI now assists in generating a significant portion of code, perhaps approaching that 90% figure for developers who have fully integrated tools like GitHub Copilot.
The prediction might be correct in terms of technological capability, even if societal adoption and developer tooling haven't caught up yet. The discussion framed this shift as another layer of abstraction in software development, similar to the move to object-oriented programming or the adoption of ORMs that generate database code. However, there are still areas where these tools struggle, such as generating reliable and complex SQL, where understanding intricate database schemas remains a significant challenge.
AI's Impact on the "Hellish" Job Market
Referencing an article from The Atlantic, the discussion turned to the chaotic state of the job market, which has become an "arms race" between AI-powered tools.
⦁ Candidates use generative AI to automate job applications, tailor CVs, and even assist during interviews.
⦁ Recruiters use AI screening tools to filter the resulting flood of applications.
The outcome is a noisy, impersonal system where it's difficult for humans to connect. This has led to a renewed emphasis on "old-school" techniques like leveraging personal networks to find opportunities. ...
Full story
Fully autonomous robots are much closer than you think – Sergey Levine
Sergey Levine, co-founder of Physical Intelligence, outlines the path to general-purpose robots, predicting a 'self-improvement flywheel' could lead to fully autonomous household robots by 2030. He discusses the architecture of vision-language-action models, the critical role of embodiment in solving the data problem, and how robotics will scale faster than self-driving cars.