Tokenless - The Best of AI, ML & CS Talks
Daily posts on the internals of AI, ML, and CS — straight from the experts. No hype, no bullshit news.

#AI #AInews #newsletter
Dr. Michael Timothy Bennett begins by challenging the conventional definitions and approaches to artificial intelligence, advocating for a perspective rooted in biology and embodied cognition. He favors Pei Wang's definition of intelligence as "adaptation with limited resources," emphasizing efficiency in terms of energy and data, a stark contrast to the "scale maxing" approach prevalent in Silicon Valley.

Critique of Formal Models and Computational Dualism

Bennett critiques formalisms like AIXI, which are based on Solomonoff induction and Occam's Razor (simplicity). While compelling, these models run into the problem of subjective complexity. The perceived simplicity of a model depends on the "interpreter" or the underlying language (abstraction layer) used by the agent. One can make an agent seem arbitrarily intelligent or stupid simply by changing the interpretative framework, making objective claims about performance difficult.
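The interpreter-dependence can be stated in one or two lines of standard algorithmic-information-theory notation; this is a sketch of the general point, not Bennett's own formalism:

```latex
% Complexity of a hypothesis x relative to an interpreter (universal machine) U:
\[
  K_U(x) \;=\; \min \{\, |p| \;:\; U(p) = x \,\}
\]
% The invariance theorem only bounds the gap between two interpreters U and V by a
% constant that depends on the pair of machines, not on the data:
\[
  K_U(x) \;\le\; K_V(x) + c_{U,V}
\]
% For a finite agent, c_{U,V} can dwarf |x|, so which hypotheses count as "simple"
% under a simplicity prior such as
\[
  P_U(h) \;\propto\; 2^{-K_U(h)}
\]
% genuinely depends on the choice of interpreter, which is the point of the critique above.
```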

This leads to his central critique of modern AI, which he terms "computational dualism." He draws a provocative analogy to Cartesian dualism, where Descartes proposed the pineal gland as the interface between the non-physical mind and the physical body.
We have just replaced the pineal gland with a Turing machine.

He argues that treating intelligence as pure software, separate from its hardware and environment, is a fundamental mistake. The behavior of any software is contingent on the interpreter that executes it, all the way down to the physical laws governing the hardware. To understand intelligence, one must analyze the system as a whole, including its embodiment and environment—a concept known in cognitive science as enactive cognition. This view also aligns with the concept of mortal computation, where the physical substrate is inseparable from the computation itself, as opposed to the abstract, "immortal" nature of a theoretical Turing machine.

A Biologically-Inspired Vision for Intelligence

Bennett advocates for an AI that emulates the properties of living systems: self-organization, decentralization, and multi-scale adaptation.

Causality and Abstraction: True intelligence requires learning a causal model of the world, starting with a representation of the self as a causal agent. An agent must be able to distinguish between its own actions causing a change and the environment causing a change. This "causal identity for self" is fundamental to subjective experience.
The Law of the Stack: Bennett proposes a principle where the adaptability of a system's high-level abstractions (e.g., software) is contingent on the adaptability of its lower-level abstractions (e.g., hardware). Biological systems excel because they delegate adaptation down the stack, allowing for flexibility at all levels. Computers, in contrast, are like an "inflexible bureaucracy that makes decisions only at the top."
Decentralization and Constraints: Drawing on the work of Michael Levin, Bennett views systems like cancer as a failure of collective intelligence, where a cell becomes informationally isolated from the whole and reverts to primitive behavior. This can happen when a system is over-constrained. Imposing too much top-down control eliminates potentially correct policies and forces components to "break off." This suggests that AI safety should focus on designing the entire system with appropriate, minimal constraints rather than trying to rigidly align a single component.

Consciousness as a Necessary Adaptation

Bennett directly confronts the "hard problem of consciousness" by arguing that philosophical zombies—beings identical to humans but without subjective experience—are impossible in any conceivable world. He posits that consciousness is not an epiphenomenal, non-causal addition to information processing but a necessary feature of a sufficiently adaptive, intelligent system.

His theory frames subjective experien...

The Moonshot Podcast Deep Dive: Andrew Ng on Deep Learning and Google Brain

Andrew Ng, founder of Google Brain and DeepLearning.AI, discusses the history of neural networks and the foundational ideas that led to modern AI breakthroughs. He covers the controversial early bets on scale and general-purpose algorithms, the technical innovations behind Transformers, and the future democratizing effect of artificial intelligence.
The creation and success of Google Brain were driven by two core, and at the time, controversial hypotheses. The first was that scale matters. Around 2010, the prevailing academic view favored inventing novel algorithms over simply building bigger neural networks. Despite advice from senior figures that focusing on scale was not a good career move, the data generated by my students at Stanford showed a clear, undeniable trend: for every model we tried, performance improved as the model size increased. This data provided the confidence to pursue scale relentlessly.

The second core idea was the "one learning algorithm" hypothesis. Inspired by neuro-rewiring experiments, in which one part of brain tissue can learn a new function (e.g., learning to "see" after previously learning to "hear"), the question was whether we needed thousands of hand-engineered algorithms for different tasks. The hypothesis was that a single, general-purpose learning algorithm could, if fed different data (text, images, audio), learn to process each type effectively. This was heresy at the time in a field dominated by specialized models, but it has since become the foundation of modern AI.

The Early Days: Pushing Against the Current

In the early 2010s, neural networks were largely out of favor in the AI community, having been in the "wilderness" for a long time. The path to publishing in top conferences was through clever mathematical proofs, not demonstrating the power of scaled-up systems. This focus on scale was seen as lacking intellectual rigor. For researchers who had spent decades meticulously tweaking specific algorithms, the idea that a large model fed with massive amounts of data could outperform their work was emotionally wrenching.

The Google Brain project began at X after Sebastian Thrun, who deserves immense credit for its inception, encouraged me to pitch the idea of using Google's massive compute infrastructure to Larry Page. The partnership with Jeff Dean was crucial; he brought the deep computer systems expertise, while I brought the machine learning perspective. This combination allowed us to effectively leverage Google's infrastructure to scale our algorithms.

Technical Innovations and Breakthroughs

Hardware and Architecture
Initially, we were slower to embrace GPUs, partly because Google's CPU infrastructure was so brilliant and there were concerns about creating a heterogeneous and hard-to-manage compute environment. However, the need for parallel computation was undeniable.

This philosophy of designing for parallel hardware was a core, if sometimes underappreciated, aspect of the Transformer paper. Before Transformers, models for tasks like translation tried to ingest and memorize an entire sentence before generating the output. The Transformer's key innovation was the attention mechanism, which allowed the model to focus on specific, relevant parts of the input sentence as it generated the output. Crucially, the entire architecture was designed so that every step was highly parallelizable, making it a perfect fit for GPUs and TPUs. This design choice was what unlocked its ability to scale and become the foundation for today's large models.
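As a minimal illustration of the mechanism described above (not the Transformer paper's actual implementation), here is single-head scaled dot-product attention in NumPy; note that every position is processed in one matrix product, which is what makes the step so parallel-friendly on GPUs and TPUs:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal single-head attention: every query attends to every key in one
    batched matrix product, so the whole sequence is processed in parallel."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # (seq_q, seq_k) relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # softmax over keys
    return weights @ V                                     # weighted sum of values

# Toy usage: 4 tokens with 8-dimensional representations (random placeholders).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)                # self-attention
print(out.shape)                                           # (4, 8)
```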

The "Cat Video" Paper
Google Brain's "coming out moment" was the 2012 paper demonstrating unsupervised learning. We built what was likely the largest neural network in the world at the time and trained it by showing it unlabeled frames from YouTube videos. One day, my student Quoc Le showed me that a neuron in the network had learned to respond specifically to images of cats. The algorithm had, on its own and without any human labels, discovered the concept of a "cat." This was a massive breakthrough, proving that models could learn meaningful features from the world's vast stores of unlabeled data.

From Research to Real-World Application

To prove our value, we collaborated with teams across Google. Our early succe...

Moonshot Podcast Deep Dive: André Prager on Prototyping at Wing

André Prager, former Chief Engineer at Wing, discusses the core engineering philosophy of simplicity and cost-effectiveness that enabled the drone delivery service. He covers the design of key systems like the passive charging pad, the intelligent winch, the non-powered autoloader, and the iterative process of making the drones acoustically unobtrusive.
At the heart of Wing's development was a core philosophy: radical simplicity. The most scalable, robust, and affordable system is one with the fewest components. As former Chief Engineer André Prager puts it, "Everything that's not there doesn't need to be developed. Everything that's not there doesn't break." This principle guided the team away from complex, expensive solutions towards elegant, minimalist designs capable of scaling to a billion flights a year.

The Engineer as an Artist

Prager views engineering as more akin to art than pure mathematics—a creative process of discovery driven by curiosity. This mindset, rooted in childhood experiments like building an electric skateboard in the early '90s, emphasizes building unique things where the outcome isn't known beforehand. He argues that while math is easy to test, the creative and associative thinking required for innovative engineering is harder to measure but far more valuable. This approach seeks to find engineers who can make novel connections between different domains, a skill Prager believes is more difficult to find than pure mathematical proficiency.

The Real Challenge: The System, Not Just the Drone

When Prager joined Wing, the core architecture of the aircraft—a hybrid design with separate propellers for vertical hover and forward flight—was largely figured out. The fundamental challenge wasn't just keeping the drone in the air, but solving for "everything else." This included the entire operational ecosystem:

Payload Management: How to get packages onto and off the drone efficiently.
Infrastructure: Where the drones "sleep," how they charge, and how they are managed.
Air Traffic Management: How to coordinate a fleet of drones operating simultaneously.

Solving these problems required a relentless focus on cost and scalability, rejecting complex, demo-friendly prototypes in favor of systems that could be affordably deployed worldwide.

Engineering Simplicity in Action

Several key subsystems exemplify Wing's philosophy of minimalist, hardware-simple, and software-intelligent design.

The Charging Pad

The goal was a landing pad that required no complex robotics, manipulation, or precise alignment. The team avoided heavy wireless inductive charging coils due to weight constraints. Instead, they developed a passive contact-based system. The drone has small, conductive "feet," and the landing pad features a specific geometric pattern of positive and negative contacts. This geometry ensures that no matter how the drone lands on the 3x3 foot pad, its feet will complete a circuit and begin charging. The pad itself is simple, robust, and inexpensive, resembling a large printed circuit board (PCB).

The Winch and Delivery Hook

The system for lowering and retrieving packages could have been incredibly complex. The final design, however, features a hook with no moving parts and no electronics. This result was the product of a two-year development process with over 90 prototypes.

The intelligence resides in the winch motor and the software. By sensing force and position through the tether—much like sensing vibrations on a string—the system can infer a surprising amount of information:
⦁ When the package has touched the ground.
⦁ If a person is pulling on the line.
⦁ If the hook is snagged on an obstacle.

This allows for a simple, lightweight hardware implementation where intelligence and new features can be added over time through software updates.
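Wing's actual sensing logic is not public; the toy sketch below only illustrates the idea of inferring delivery events from tether tension and payout position, with invented threshold values:

```python
def classify_tether_event(tension_newtons: float, payout_metres: float,
                          expected_ground_metres: float) -> str:
    """Toy classifier for winch events from tether tension and payout length.
    Thresholds are illustrative placeholders, not Wing's real values."""
    SLACK = 2.0      # tension below this suggests the package has been set down
    OVERLOAD = 60.0  # tension above this suggests a snag or someone pulling the line

    if tension_newtons < SLACK and payout_metres >= expected_ground_metres:
        return "package touched down"
    if tension_newtons > OVERLOAD and payout_metres < expected_ground_metres:
        return "hook snagged or line being pulled"
    return "normal descent"

print(classify_tether_event(1.2, 7.1, 7.0))   # -> "package touched down"
```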

The Autoloader

To automate the process of attaching a package, Prager developed the "autoloader," a device with no moving parts, no power, and no electronics. Inspired by the simplicity of a restaurant patio umbrella, the device consists of two arms that catch the drone's tether as it hovers nearby. The drone performs a slight sideways movement, which guides the hook through a channel and attaches it to the package via friction. This...

7 AI Terms You Need to Know: Agents, RAG, ASI & More

A deep dive into seven essential AI concepts shaping the future of intelligent systems, including Agentic AI, RAG, Mixture of Experts (MoE), and the theoretical frontier of Artificial Superintelligence (ASI).
As the field of artificial intelligence evolves at a breakneck pace, it's crucial for technology professionals to stay current with its core concepts. Here are seven essential AI terms that are shaping the future of the industry.

1. Agentic AI (AI Agents)
AI agents represent a shift from reactive models, like chatbots that only respond to one prompt at a time, to proactive systems that can reason and act autonomously to achieve goals. These agents operate in a continuous cycle:
1. Perceive: They assess their current environment.
2. Reason: They determine the next best steps to achieve a predefined goal.
3. Act: They execute the plan formulated during the reasoning stage.
4. Observe: They analyze the results of their actions and repeat the cycle.

This autonomous nature allows them to fulfill complex roles, such as a travel agent booking a trip, a data analyst identifying trends in reports, or a DevOps engineer detecting anomalies, testing fixes in containers, and rolling back faulty deployments.
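A minimal sketch of the perceive-reason-act-observe cycle described above; the `environment` and `llm` callables and their methods are hypothetical stand-ins rather than any specific framework's API:

```python
def run_agent(goal, environment, llm, max_steps=10):
    """Toy agentic loop: perceive, reason, act, observe, then repeat until done."""
    history = []
    for _ in range(max_steps):
        observation = environment.observe()                  # 1. perceive the current state
        plan = llm(f"Goal: {goal}\nState: {observation}\n"
                   f"History: {history}\nNext action?")      # 2. reason about the next step
        result = environment.execute(plan)                   # 3. act on the plan
        history.append((plan, result))                       # 4. observe the outcome
        if environment.goal_reached(goal):
            return history
    return history
```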

2. Large Reasoning Models
AI agents are often powered by a specialized class of LLMs known as Large Reasoning Models (LRMs). Unlike standard LLMs that generate responses immediately, LRMs are fine-tuned to work through problems step-by-step. This methodical approach is essential for agents planning complex, multi-stage tasks.

The training process involves:
⦁ Using datasets with verifiably correct answers, such as math problems or code that can be tested by compilers.
⦁ Employing reinforcement learning to teach the model how to generate reasoning sequences that lead to correct final answers.

When a chatbot pauses and displays a "thinking..." message, it's often an LRM at work, generating an internal chain of thought to deconstruct a problem before providing a coherent response.
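To make "verifiably correct answers" concrete, here is a toy reward function of the kind such pipelines use for math problems; the exact reward shaping used by any given lab is not specified in the source:

```python
import re

def verifiable_reward(model_output: str, ground_truth: str) -> float:
    """Reward 1.0 only if the final number in the output matches the known answer.
    The intermediate chain-of-thought text is not scored directly."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", model_output)
    if not numbers:
        return 0.0
    return 1.0 if numbers[-1] == ground_truth else 0.0

print(verifiable_reward("Step 1: 12*3=36. Step 2: 36+6=42. Answer: 42", "42"))  # 1.0
```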

3. Vector Databases
Vector databases are a fundamental component of modern AI infrastructure, particularly for handling unstructured data. Instead of storing data like text or images as raw files, an embedding model converts this data into vectors—long lists of numbers that capture the data's semantic meaning and context.

The primary advantage is that similarity searches become mathematical operations. By finding vectors that are numerically close to each other in the multi-dimensional "embedding space," the system can identify semantically similar content. For example, a search for a picture of a mountain can find other similar images, related text articles, or even thematically similar music files based on their vector proximity.
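A bare-bones nearest-neighbour search over embeddings in NumPy; production vector databases add indexing structures (such as HNSW) on top of exactly this cosine-similarity idea:

```python
import numpy as np

def cosine_top_k(query_vec, corpus_vecs, k=3):
    """Return indices and scores of the k corpus vectors most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    sims = c @ q                                   # cosine similarity with every item
    return np.argsort(-sims)[:k], np.sort(sims)[::-1][:k]

# Toy 4-dimensional "embeddings"; in practice these come from an embedding model.
corpus = np.array([[0.9, 0.1, 0.0, 0.0],           # "mountain photo"
                   [0.8, 0.2, 0.1, 0.0],           # "article about hiking"
                   [0.0, 0.1, 0.9, 0.3]])          # "pop song"
query = np.array([0.85, 0.15, 0.05, 0.0])          # "picture of a mountain"
idx, scores = cosine_top_k(query, corpus, k=2)
print(idx)   # the two mountain/hiking items rank first
```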

4. Retrieval-Augmented Generation (RAG)
Retrieval-Augmented Generation (RAG) is a powerful technique that leverages vector databases to make LLM responses more accurate and context-aware. It enriches user prompts with relevant, external information before the LLM generates a response.

The process works as follows:
1. A user's input prompt is converted into a vector using an embedding model.
2. This vector is used to perform a similarity search in a vector database containing a specific knowledge base (e.g., internal company documents).
3. The relevant information retrieved from the database is then inserted into the original prompt.
4. This augmented prompt is sent to the LLM, which now has the necessary context to generate a factually grounded answer.

For instance, asking a question about company policy can trigger a RAG system to pull the relevant section from the employee handbook and include it in the prompt, ensuring the LLM's answer is accurate.
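Gluing the steps above together, a bare-bones RAG flow might look like the sketch below; `embed`, `vector_db.search`, and `llm` are hypothetical callables standing in for whatever embedding model, database, and LLM a real system uses:

```python
def answer_with_rag(question, embed, vector_db, llm, top_k=3):
    """Toy RAG pipeline: embed the question, retrieve context, augment the prompt."""
    query_vec = embed(question)                        # 1. convert the prompt into a vector
    passages = vector_db.search(query_vec, k=top_k)    # 2. similarity search over the knowledge base
    context = "\n\n".join(passages)                    # 3. insert the retrieved text
    prompt = (f"Answer using only the context below.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    return llm(prompt)                                 # 4. grounded generation
```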

5. Model Context Protocol (MCP)
For LLMs to be truly useful, they must interact with a wide range of external data sources, services, and tools. The Model Context Protocol (MCP) is an emerging standard for these interactions.

Currently, developers often have to build custom, one-off connections for each new tool or database an LLM needs to access. MCP aims to solve th...

Small Language Models are the Future of Agentic AI Reading Group

This paper challenges the prevailing "bigger is better" narrative in AI, arguing that Small Language Models (SLMs) are not just sufficient but often superior for agentic AI tasks due to their efficiency, speed, and specialization. The discussion explores the paper's core arguments, counterarguments, and the practical implications of adopting a hybrid LLM-SLM approach.
The paper "Small Language Models are the Future of Agentic AI" posits that the trend toward ever-larger models may be misguided for agentic systems. Instead, it argues that a heterogeneous ecosystem of smaller, specialized models offers a more powerful, efficient, and economical path forward. The core intuition is to scale out by composing specialized "Lego" blocks (SLMs) rather than scaling up a single monolithic model (LLM).

The Three Pillars of the SLM Argument

The authors build their case on three primary arguments:

1. SLMs are Powerful Enough: Recent advancements have produced SLMs (e.g., Microsoft's Phi series) that achieve competitive performance on benchmarks for reasoning, language, and coding tasks when compared to models 10-20 times their size. For the vast majority of agentic tasks, which often involve a limited subset of an LLM's full capabilities, this level of performance is sufficient. The broad, general intelligence of a massive LLM can be "intelligence overkill" when an agent is only performing a narrow, specific function.

2. SLMs are Operationally Superior:
Performance: They offer significantly lower inference latency and require less memory, making them faster and easier to deploy.
Flexibility: Their small size allows for greater operational flexibility, including deployment on edge devices and consumer-grade GPUs without specialized infrastructure.
Behavioral Alignment: Agentic systems require predictable, structured interactions, often using formats like JSON or YAML. It's easier and more reliable to fine-tune an SLM to consistently produce a specific format, reducing the risk of hallucinations or formatting errors that can occur with a general-purpose LLM trained on countless formats.
Heterogeneity: Agentic workflows are naturally composed of diverse tasks with varying complexity. A system can dynamically choose the best model for each sub-task—a simple SLM for a simple task and perhaps a more powerful model only when necessary (a routing sketch follows this list).

3. SLMs are More Economical:
Inference Costs: Serving a 7B parameter SLM can be 10-30 times cheaper than serving a 70B+ parameter LLM.
Operational Simplicity: SLMs avoid the complexities of multi-GPU and multi-node parallelization, simplifying infrastructure management and maintenance.
Fine-Tuning Agility: Fine-tuning an SLM requires only a few GPU hours, enabling rapid iteration and specialization, compared to the weeks and significant resources needed for large models.
Parameter Utilization: SLMs are fundamentally more efficient, activating a higher percentage of their parameters for a given task compared to the sparse activation in very large models.
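One way to picture the "right model for each sub-task" idea from the list above is a simple dispatcher; the keyword heuristic and the `slm`/`llm` callables are placeholders for illustration, not a recommendation from the paper:

```python
def route_task(task: str, slm, llm) -> str:
    """Toy heterogeneous dispatcher: cheap specialist first, generalist as fallback.
    The keyword check stands in for a real task-complexity classifier."""
    simple_markers = ("extract", "format", "classify", "summarize")
    if any(marker in task.lower() for marker in simple_markers):
        return slm(task)   # fine-tuned small model: fast, cheap, predictable output
    return llm(task)       # large generalist: reserved for open-ended reasoning
```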

A compelling argument is that agentic systems naturally evolve toward SLMs. Each interaction an agent has (prompt, output, user feedback) generates valuable training data. Even if a system starts with an LLM, this continuous stream of task-specific data creates the perfect conditions for optimizing and distilling that capability into a smaller, expert model.

Counterarguments and Practical Challenges

The discussion also highlighted significant counterarguments and real-world barriers:

Scaling Laws and Generalization: LLMs benefit from scaling laws, giving them a more nuanced and abstract understanding of concepts, multi-linguality, and multi-modality that SLMs may lack. This deep generalization might be crucial for a top-level "supervisor" agent that needs to orchestrate complex tasks.
The Cost of a Fleet: While a single SLM is cheap, managing an entire fleet of specialized models introduces its own operational complexity and costs, including infrastructure, talent, and orchestration. Centralized LLM endpoints can benefit from higher utilization, potentially making them more cost-effective at scale than multiple, under-utilized SLM endpoints.
Real-World Ne...

The Day AI Solves My Puzzles Is The Day I Worry (Prof. Cristopher Moore)

Professor Cristopher Moore of the Santa Fe Institute discusses the surprising effectiveness of AI, arguing it stems from the rich, non-random structure of the real world. He explores the limits of current models, the nature of intelligence as creative problem-solving and abstraction, the importance of grounding and shared reality, and the profound implications of computational irreducibility and the need for algorithmic transparency in high-stakes applications.
Cristopher Moore, a self-described "frog" in the world of science, prefers diving deep into concrete problems over taking a high-level "bird's-eye view". This perspective informs his analysis of artificial intelligence, computational theory, and the nature of intelligence itself.

The Structure of the World and the Success of AI

The surprising effectiveness of large models like transformers stems not from a magical architecture, but from the nature of the data they are trained on. Real-world data is neither completely random nor adversarially designed to be difficult. Instead, it is filled with rich structure, patterns, and hierarchies. Moore argues: "the real world presents us with examples of these problems where there is so much rich structure to sink your teeth into."

Any sufficiently rich architecture can learn to exploit this structure. We will likely look back and realize that what truly matters is that "the world is structured and any architecture which is capable of capturing some of that structure is going to do well at prediction." This contrasts with theoretical work in computer science and statistical physics, which often proves hardness based on worst-case adversarial examples or purely random data models. While concepts like phase transitions—sharp shifts in problem difficulty based on signal-to-noise ratios, analogous to a magnet losing its field at a critical temperature—are powerful for understanding random problems, they don't capture the full picture of real-world AI performance.

Intelligence as Creative Problem-Solving

Despite their success, current models falter on tasks requiring novel reasoning and abstraction, such as modern Sudoku variants with complex, layered rules. These puzzles, designed by humans for humans, require insights and the creation of new logical constraints on the fly, a process current AI struggles with. Moore notes that the ability of AI to absorb rules and perform intelligent search "hasn't happened yet."

This highlights a deeper aspect of intelligence: the ability to transform hard problems into simpler ones. It's about inventing heuristics and new forms of "partial knowledge" to navigate a problem space. Humans fluidly switch their approach, asking "which piece can fit here?" and then "where can this piece go?" This process of formalization and mathematization is a crucial, creative step that often constitutes 90% of the work in scientific modeling. True intelligence involves inventing the variables and constraints to address a problem, not just solving a pre-defined one.

Grounding, Meaning, and Shared Reality

A significant limitation of current language models is their lack of grounding in the physical world. When asked to summarize a nuanced essay, a model might regress to the mean, producing a "lowest common denominator" summary based on common arguments about the topic, completely missing the author's unique, subtle point. This indicates a failure to grasp meaning beyond statistical correlation.

Moore, a self-professed Platonist, believes in a shared, objective reality of abstract concepts. When two people visualize a cube, they perceive the same object with 8 corners and 12 edges. This shared perception allows for meaningful agreement and correction. He suggests that once AI systems can utilize multimodal "workspaces"—to doodle, run code, and manipulate virtual objects—they will move closer to this kind of grounded understanding.

Computation, Universality, and Irreducibility

The conversation delves into the fundamental nature of computation and its relationship to intelligence.

Computational Irreducibility: Drawing on Stephen Wolfram's work, Moore discusses systems where there are no analytical shortcuts to predict a future state. To know the outcome, "you have to do the work" of simulating every intervening step. While our only method for proving a system is irreducible is to build a universal c...

921: NPUs vs GPUs vs CPUs for Local AI Workloads — with Dell’s Ish Shah and Shirish Gupta

Shirish Gupta and Ish Shah from Dell Technologies explore the evolving landscape of AI hardware. They discuss why Windows, enhanced by WSL 2, remains a dominant platform for developers, and delve into the distinct roles of CPUs, GPUs, and the increasingly important Neural Processing Units (NPUs). The conversation covers the trade-offs between local and cloud computing for AI workloads and introduces new hardware, like workstations with discrete NPUs, that are making on-device AI more powerful and accessible than ever.
The Operating System Debate: Windows for AI Development

While the AI and data science communities often gravitate towards Unix-based systems, Windows remains a formidable platform for development. Statistically, Windows is the most popular OS among software developers, used by approximately 64% of them. Its familiarity, user-friendliness, and compatibility with essential productivity applications make it an accessible starting point. For enterprise environments, Windows offers streamlined IT management and security integrations.

A common best practice is to develop in an environment that mirrors production, and since most large-scale ML deployments run on Linux, this has traditionally been a point of friction. However, the gap is closing significantly with Windows Subsystem for Linux (WSL 2), which allows a full Linux kernel to run directly on Windows. This provides developers with the "best of both worlds": the productivity and enterprise benefits of Windows alongside the native command-line tools and environment of Linux, eliminating the need for dual-booting.

A New Trinity of Processors: CPU, GPU, and NPU

The modern AI landscape requires a nuanced approach to processing, moving beyond a one-size-fits-all model. The choice of hardware is no longer simple; it has expanded into an "8x8" matrix of options tailored to specific needs. The key is using the right tool for the right job.

The Rise of the NPU (Neural Processing Unit)

An NPU is a specialized processor purpose-built to handle AI and ML workloads, particularly the vector math that forms the foundation of neural networks.
Efficiency is Key: NPUs are architected for maximum performance per watt. This is critical for mobile devices and laptops, where they can handle tasks like background blur or speech-to-text without draining the battery, offloading this work from the CPU.
Integrated vs. Discrete: NPUs can be integrated into the main chipset (SoC) or exist as powerful discrete cards. Dell has announced the Dell Pro Max mobile workstation, which will feature a discrete NPU, capable of running a 109-billion-parameter model locally. This is a game-changer for use cases requiring high performance in offline, secure, or latency-sensitive environments.

GPUs: The Powerhouse for Scalable Performance

GPUs remain the champions of parallel processing and are currently more versatile and scalable than NPUs for high-end AI tasks.
High-End Training and Inference: New hardware like the Nvidia Blackwell RTX Pro GPUs continues to push boundaries. For instance, the GB10 appliance will be capable of fine-tuning a 200-billion-parameter model locally.
Versatility: GPUs are not only for AI; they accelerate a wide range of tasks from CAD software to gaming, offering a dual-use benefit for many users. Their primary trade-off, especially on client devices, is higher power consumption compared to NPUs.

The CPU: The Enduring Workhorse

The CPU is still the core of the system, handling general-purpose tasks and running the operating system. If a system lacks a dedicated NPU or GPU, the CPU must also take on the new AI workloads, potentially impacting overall performance. Modern architectures like Intel's Lunar Lake are enhancing CPU capabilities with features like on-chip memory for faster data transfer and powerful integrated GPUs that can rival some entry-level discrete cards.

The Developer Experience: Bridging Hardware and Software

To simplify the complexity of targeting these different processors, Dell has introduced Dell Pro AI Studio. This software layer abstracts away the underlying toolchains (like Intel OpenVINO) required to run models on specific silicon (Intel, AMD, Qualcomm, etc.).
Democratizing Access: It dramatically reduces development time. In one case study, a task that took a team three months to complete using standard toolchains was accompl...

Why language models hallucinate, revisiting Amodei’s code prediction and AI in the job market

Experts discuss an OpenAI paper that reframes hallucinations as a feature driven by training incentives, not just a bug. The panel also revisits Dario Amodei's prediction on AI coding, explores AI's chaotic impact on the job market, and imagines the future of running LLMs on business-card-sized devices.
Reframing Language Model Hallucinations

A recent paper from OpenAI, "Why Language Models Hallucinate," suggests that the issue of hallucination is more complex than a simple bug to be fixed. The core argument is that the problem is inherent to the current model training paradigm.

Models are incentivized to guess rather than admit uncertainty. During training, particularly with reinforcement learning, a model receives a reward for a correct answer but gets zero points for stating "I don't know." This reward structure encourages the model to take a chance on an answer, as there's a possibility of being correct. This is compounded by the landscape of external evaluations and benchmarks, which often rely on binary (yes/no) scoring. Model providers, aiming for the highest possible scores on these leaderboards, are therefore disincentivized from training models that frequently express uncertainty.

The paper challenges the myth that simply increasing model accuracy will decrease hallucinations. The authors argue that accuracy and hallucination are different measures. The proposed solution is not to eliminate guessing but to achieve better calibration between accuracy and uncertainty through more sophisticated reward functions and evaluations.
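The incentive argument can be stated in one line of expected value; this is a paraphrase of the paper's point, not its exact notation:

```latex
% Under binary grading, a model that is correct with probability p when it guesses earns
\[
  \mathbb{E}[\text{reward} \mid \text{guess}] = p \cdot 1 + (1 - p) \cdot 0 = p
  \;>\; \mathbb{E}[\text{reward} \mid \text{``I don't know''}] = 0
  \quad \text{for any } p > 0,
\]
% so guessing dominates abstaining regardless of calibration. A calibration-aware
% scheme has to give abstention (or a stated confidence) non-zero credit.
```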

This leads to a broader discussion on the role of hallucinations:

A Tool for Creativity: For certain use cases, such as generating creative text or adopting a persona (e.g., "act like a pirate"), hallucination is not a bug but a feature. It represents a form of creative inference, combining disparate concepts to generate novel outputs. A world without this capability would lead to rigid, uncreative models.
Need for a Better Definition: The community lacks a clear, agreed-upon definition of what constitutes a hallucination versus the model simply being incorrect due to conflicting data in its training set.
A Multi-Faceted Solution: Eliminating hallucinations entirely is likely impossible. The path forward involves a combination of better-calibrated models and a suite of external tools, including guardrails, symbolic approaches, and Retrieval-Augmented Generation (RAG), to verify model outputs against grounded context.

Revisiting the 90% AI Coding Prediction

In March, Anthropic's CEO Dario Amodei predicted that within six months, AI would be writing 90% of the code for software developers. With that timeframe now passed, the panel reflected on its accuracy.

The key distinction is between automation (replacing developers) and augmentation (assisting developers). While 90% of developers have not lost their jobs, it is plausible that AI now assists in generating a significant portion of code, perhaps approaching that 90% figure for developers who have fully integrated tools like GitHub Copilot.

The prediction might be correct in terms of technological capability, even if societal adoption and developer tooling haven't caught up yet. The discussion framed this shift as another layer of abstraction in software development, similar to the move to object-oriented programming or the adoption of ORMs that generate database code. However, there are still areas where these tools struggle, such as generating reliable and complex SQL, where understanding intricate database schemas remains a significant challenge.

AI's Impact on the "Hellish" Job Market

Referencing an article from The Atlantic, the discussion turned to the chaotic state of the job market, which has become an "arms race" between AI-powered tools.

Candidates use generative AI to automate job applications, tailor CVs, and even assist during interviews.
Recruiters use AI screening tools to filter the resulting flood of applications.

The outcome is a noisy, impersonal system where it's difficult for humans to connect. This has led to a renewed emphasis on "old-school" techniques like leveraging personal networks to find opportunities. ...

Fully autonomous robots are much closer than you think – Sergey Levine

Sergey Levine, co-founder of Physical Intelligence, outlines the path to general-purpose robots, predicting a 'self-improvement flywheel' could lead to fully autonomous household robots by 2030. He discusses the architecture of vision-language-action models, the critical role of embodiment in solving the data problem, and how robotics will scale faster than self-driving cars.
Sergey Levine, co-founder of Physical Intelligence and a professor at UC Berkeley, envisions a near future where general-purpose robots are commonplace, estimating a median timeline of 2030 for robots capable of autonomously running a household. The key is not a single breakthrough, but initiating a "self-improvement flywheel": deploying robots that are useful enough in narrow domains to begin collecting vast amounts of real-world experience, which is then used to improve the general model, enabling wider deployment and more data collection.

The Path to General-Purpose Robots: Flywheels and Common Sense

The central goal is to create robotic foundation models—general-purpose systems that can control any robot for any task. The initial challenge is not achieving full autonomy, but reaching a level of competence where the flywheel can start. This could begin with narrow, repetitive tasks and gradually expand in scope as the models improve, much like the evolution of coding assistants from simple autocompletion to generating entire pull requests.

Levine argues that robotics will scale faster than self-driving cars for several key reasons:
Learning from Mistakes: Many manipulation tasks are more forgiving than driving. A robot can make a mistake, like dropping a T-shirt, correct it, and learn from the experience. The consequences of a mistake in autonomous driving are far more severe, making this kind of trial-and-error learning difficult.
The Role of Common Sense: The advent of Large Language Models (LLMs) and Vision-Language Models (VLMs) provides a source of common sense that was absent in the early days of self-driving. A model can now be queried about abstract concepts ("What does a 'slippery floor' sign mean?") to infer potential outcomes without having to experience them firsthand.
Human-in-the-Loop Interaction: The feedback loop for improvement is more natural. A human supervising a robot can provide simple verbal instructions ("pick up the cup"), which serve as valuable training data. This seamless integration of human feedback accelerates on-the-job learning.

How Robotic Foundation Models Work

Physical Intelligence's models are built upon the architecture of pre-trained Vision-Language Models (VLMs). The core idea is to leverage the vast prior knowledge about the world embedded in these models.
Architecture: The model can be conceptualized as an LLM (like Google's open-source Gemma) with a "visual cortex" (a vision encoder) and a "motor cortex" (an action decoder) grafted onto it. It's an end-to-end transformer, often using a mixture-of-experts (MoE) structure (a minimal wiring sketch follows this list).
Action Generation: The model takes in sensory data, performs internal chain-of-thought reasoning to break down a command (e.g., "clean the kitchen"), and then passes the final instruction to the action expert. Because motor control requires high-frequency, continuous outputs, the actions are not discrete tokens but are generated using techniques like diffusion or flow matching for precision.
Emergent Capabilities: This approach leads to compositional generalization and emergent behaviors not explicitly present in the training data. For instance, a robot tasked with folding laundry might encounter a second T-shirt accidentally picked up with the first. The model can reason to pick up the extraneous item and place it back in the bin before continuing its primary task. Another example is the robot turning shorts right-side out before folding them, demonstrating a deeper, compositional understanding of the task.
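A highly simplified wiring sketch of the architecture described above, in PyTorch; all module sizes are arbitrary, and the linear action head stands in for the diffusion or flow-matching decoder, which is far more involved in the real models:

```python
import torch
import torch.nn as nn

class ToyVLA(nn.Module):
    """Sketch of a vision-language-action model: a vision encoder ("visual cortex"),
    a language-model backbone, and a continuous action head ("motor cortex")."""
    def __init__(self, d_model=256, n_actions=7, chunk=16):
        super().__init__()
        self.vision_encoder = nn.Linear(3 * 32 * 32, d_model)      # stand-in for a ViT
        self.text_embed = nn.Embedding(32000, d_model)             # stand-in for the LLM embeddings
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2) # stand-in for the LLM
        self.action_head = nn.Linear(d_model, n_actions * chunk)   # stand-in for flow matching
        self.n_actions, self.chunk = n_actions, chunk

    def forward(self, image, instruction_tokens):
        img_tok = self.vision_encoder(image.flatten(1)).unsqueeze(1)   # (B, 1, d)
        txt_tok = self.text_embed(instruction_tokens)                  # (B, T, d)
        fused = self.backbone(torch.cat([img_tok, txt_tok], dim=1))
        # Continuous, high-frequency action chunk rather than discrete tokens.
        return self.action_head(fused[:, 0]).view(-1, self.chunk, self.n_actions)

model = ToyVLA()
acts = model(torch.randn(2, 3, 32, 32), torch.randint(0, 32000, (2, 12)))
print(acts.shape)   # torch.Size([2, 16, 7])
```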

The Data and Representation Challenge

While scaling data is crucial, Levine emphasizes that it's not just about quantity. The challenge is identifying the right axes of scale to improve specific capabilities like robustness, efficiency, and edge-case handling.

A key problem in AI has been that video models have not been as effec...

Faster Science, Better Drugs

Erik Torenberg, Patrick Hsu (Arc Institute), and Jorge Conde (a16z) discuss Arc's moonshot to create 'virtual cells' using foundation models to simulate biology. They cover why science is slow, how AI can accelerate drug discovery by predicting cellular perturbations, and the remaining bottlenecks in clinical trials and capital intensity that the biotech industry faces.
The Core Problem: Why Science is Slow

Scientific progress, particularly in biology, is hindered by a "weird Gordian knot" of factors. Unlike AI research that moves at the speed of GPUs, biological research involves moving atoms and is constrained by the real-time processes of growing cells, tissues, and animals. The core issues are multifactorial, stemming from incentives, funding structures, and the training system.

A significant challenge is the increasing need for interdisciplinary collaboration. It is exceptionally difficult for individual research groups or companies to excel at more than two distinct domains simultaneously (e.g., computational biology and genomics). Modern problems require integrating five or more fields, such as neuroscience, immunology, machine learning, chemical biology, and genomics.

The Arc Institute's Approach: Fostering Collaboration

The Arc Institute was founded as an "organizational experiment" to address these challenges. By bringing experts from five distinct domains under one physical roof, the goal is to "increase the collision frequency" and unlock a new space of research problems. This model contrasts with a traditional university setting, where physical distance and misaligned incentives often discourage deep collaboration. In academia, researchers are primarily incentivized to publish their own papers and make their own discoveries, not necessarily to work on larger, collective flagship projects. Arc is designed to enable these larger projects, such as finding new Alzheimer's drug targets and building "virtual cells."

The Moonshot: Simulating Biology with "Virtual Cells"

The central moonshot at Arc is to create "virtual cells" to simulate human biology using foundation models. The ambition is to make these models the default tool for experimentalists, accelerating discovery to the speed of a neural network's forward pass.

However, modeling biology with AI is fundamentally harder than modeling language or images.
Lack of Native Intuition: Humans are native speakers of language and interpreters of images, making it easy to evaluate the output of models like GPT-4 or DALL-E. In contrast, we don't "speak the language of biology" and can only interpret it with a "thick accent." Evaluating a DNA foundation model's output is not intuitive.
The "Lab in the Loop" Bottleneck: The iteration cycle for biological models is slow because predictions must be validated with physical lab experiments. Increasing the speed and dimensionality of this experimental feedback is a critical challenge.
Incomplete Data: We are almost certainly not measuring all the critical components of a cell. While we can scale the measurement of transcriptional information (RNA), this is only a "lower resolution mirror" for what is happening at the protein or metabolic level. The strategy is to bet on what can be scaled today (genomics) and layer in other data modalities over time, trusting that scaling laws will help fill in the gaps.

Defining the Virtual Cell: Perturbation Prediction as the Goal

The most famous success of ML in biology is AlphaFold, which accurately predicts a protein's 3D structure from its amino acid sequence. The goal for virtual cells is to achieve a similar "AlphaFold moment" for cell biology.

At Arc, this is operationalized as perturbation prediction. The model's core task is to predict the necessary interventions to move a cell from one state to another across a manifold of cell states (e.g., from inflamed to quiescent, or from a fibroblast to a stem cell). This directly mirrors the process of drug discovery, which is fundamentally about finding a molecule (a perturbation) that shifts a cell from a disease state to a healthy one.

The objective is to create a practical "co-pilot for a wet lab biologist" that can suggest combinatorial perturbations and facilitate in-silico target identification, ultimately forming the basi...

Upwork's Radical Bet on Reinforcement Learning: Building RLEF from Scratch | Andrew Rabinovich (CTO)

Andrew Rabinovich, CTO and Head of AI at Upwork, details their strategy for building AI agents for digital work. He introduces a custom reinforcement learning approach called RLEF (Reinforcement Learning from Experience), explains why digital work marketplaces are ideal training grounds, and shares his vision for a future where AI delivers finished projects, orchestrated by a meta-agent named Uma.
Upwork's AI Strategy: From Matchmaking to Work Delivery

At Upwork, the AI strategy is centered around a meta-agent named Uma (Upwork's Mindful AI). Initially, Uma's role is to facilitate the connection between clients and freelancers by understanding a client's project needs and recommending the right talent. This represents a shift from a traditional marketplace to an AI-guided platform. The long-term vision, however, extends far beyond matchmaking to a point where a client describes a project to Uma, and Uma delivers the completed work.

A Hybrid AI Architecture: MoE, RAG, and Knowledge Graphs

Upwork employs a sophisticated, hybrid AI architecture rather than relying on a single monolithic model. The system is designed as a Mixture of Experts (MoE), where Uma possesses various specialized "skills," each fine-tuned for a specific task:
⦁ Creating a detailed job post from a client conversation.
⦁ Identifying and ranking suitable freelancers.
⦁ Assisting freelancers in drafting compelling proposals.
⦁ Helping clients evaluate and select freelancers based on those proposals.

To ground these skills in real-time platform data, Upwork heavily utilizes Retrieval-Augmented Generation (RAG). A crucial component of this is a knowledge graph that serves two purposes:
1. Routing: It directs queries to the appropriate data sources and RAG systems.
2. Inference and Query Expansion: It understands relationships between concepts, allowing for more intelligent search. For example, a search for a "front-end developer" can be automatically expanded to include related skills like "JavaScript" or "React," which are then used to retrieve a richer context for the language model.

A key advantage for tuning this RAG system is Upwork's vast amount of "self-curating" data. A successful contract between a client and a freelancer serves as a strong positive label, validating the effectiveness of the search and matching process. This feedback loop allows for continuous optimization of data sources and retrieval algorithms.
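A toy version of the query-expansion role described above; the graph contents and helper names are invented for illustration and do not reflect Upwork's actual knowledge graph:

```python
# Toy skill graph: edges from a role to related skills (invented example data).
SKILL_GRAPH = {
    "front-end developer": ["javascript", "react", "css"],
    "data engineer": ["python", "sql", "spark"],
}

def expand_query(query: str) -> list[str]:
    """Expand a search query with graph neighbours so retrieval sees related skills."""
    terms = [query]
    for role, skills in SKILL_GRAPH.items():
        if role in query.lower():
            terms.extend(skills)
    return terms

print(expand_query("Looking for a front-end developer"))
# ['Looking for a front-end developer', 'javascript', 'react', 'css']
```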

The Thesis: Digital Work as the Ultimate Agent Training Ground

Digital work marketplaces provide a unique and powerful environment for training AI agents, superior in many ways to traditional methods like simulation or game-based self-play.

If you allow agents to learn from an environment that is realistic and relevant to their real world, then the results are really incredible... digital work is the kind of domain where it is almost okay to make mistakes, so long as you can learn from them.

Unlike training self-driving cars where mistakes have severe consequences, or game environments like Go where the reward function is clearly defined, digital work offers a real-world setting with low-stakes failure. This allows agents to explore unconventional solutions—the equivalent of AlphaGo's "Move 37"—in creative and business tasks. The challenge in the real world is the absence of a predefined value function. Upwork solves this by leveraging its network of human experts who can evaluate the agents' outputs and provide the necessary reward signals.

RLEF: A New Paradigm Beyond RLHF

To train these agents, Upwork is developing a novel framework called Reinforcement Learning from Experience (RLEF), which diverges significantly from the more common Reinforcement Learning from Human Feedback (RLHF).

RLHF: Typically involves humans ranking a set of machine-generated outputs (e.g., A is better than B). This confines the model's learning to the boundaries of human preference and imagination.
RLEF: Allows an agent to freely explore a vast landscape of possible solutions. A human expert then provides a direct reward signal on the final output, similar to classical reinforcement learning. This encourages the agent to discover solutions that a human might never have conceived.
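The contrast between the two feedback regimes comes down to what training signal each one collects; the sketch below reflects the description above, not Upwork's internal implementation:

```python
from dataclasses import dataclass

@dataclass
class RLHFExample:
    """Preference data: the learner only sees which of two candidate outputs a human preferred."""
    prompt: str
    preferred_output: str
    rejected_output: str

@dataclass
class RLEFExample:
    """Experience data: the agent explores freely and an expert scores the final result."""
    prompt: str
    agent_output: str      # may be unlike anything a human would have proposed
    expert_reward: float   # direct scalar reward on the delivered work

# RLHF is bounded by the candidates humans are shown; RLEF rewards whatever the agent
# actually produced, so unconventional but effective solutions can be reinforced.
```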

To overcome the sample inefficiency of RL, Upwork's approach leverages the...
