Firouz Naderi - Wikipedia
https://en.m.wikipedia.org/wiki/Firouz_Naderi
Wikipedia
Iranian scientist (1946–2023)
British physicist Stephen Hawking and Iranian-American NASA scientist Firouz Naderi could use a brain-computer interface (BCI) at its mature phase.
Bill Gates, Jeff Bezos And Jack Ma Fund $200 Million Round For KoBold Metals, A Company Using AI To Improve Mining Of Rare Earth Metals Crucial For EVs
https://finance.yahoo.com/news/bill-gates-jeff-bezos-jack-150613684.html
Yahoo Finance
Breakthrough Energy Ventures, backed by Bill Gates, Jack Ma and Jeff Bezos, recently led a $200 million funding round for KoBold Metals. Other investors included Andreessen Horowitz and Bond Capital. KoBold is using artificial intelligence (AI) to improve…
Forwarded from Farrokh Mohamadi
SciTechDaily
Key Protein Vital for Structural Integrity of Neurons – Without It Axons Break, Synapses Die
Scientists find a protein common to flies and people is essential for supporting the structure of axons that neurons project to make circuit connections. In a study conducted by MIT's Picower Institute for Learning and Memory, researchers found that a protein…
Forwarded from Farrokh Mohamadi
SciTechDaily
“Cytoelectric Coupling”: A Groundbreaking Hypothesis on How Our Brains Function
Brain waves act as carriers of information. A recently proposed "Cytoelectric Coupling" hypothesis suggests that these wavering electric fields contribute to the optimization of the brain network's efficiency and robustness. They do this by influencing the…
Forwarded from Farrokh Mohamadi
Google DeepMind and the University of Tokyo Researchers Introduce WebAgent: An LLM-Driven Agent that can Complete the Tasks on Real Websites Following Natural Language Instructions - MarkTechPost
https://www.marktechpost.com/2023/07/29/google-deepmind-and-the-university-of-tokyo-researchers-introduce-webagent-an-llm-driven-agent-that-can-complete-the-tasks-on-real-websites-following-natural-language-instructions/
MarkTechPost
Several natural language tasks, including arithmetic, common-sense and logical reasoning, question answering, text generation, and even interactive decision-making, can be solved using large language models (LLMs). By utilizing the ability of…
Forwarded from Farrokh Mohamadi
HAS AI BECOME TOO HUMAN? Researchers At Google AI Find LLMs Can Now Use ML Models And APIs With Just Tool Documentation! - MarkTechPost
https://www.marktechpost.com/2023/08/09/has-ai-become-too-human-researchers-at-google-ai-find-llms-can-now-use-ml-models-and-apis-with-just-tool-documentation/
MarkTechPost
In this era where AI seems to be taking over the planet, large language models are growing closer to the human brain than ever. Researchers at Google have shown that large language models can use unseen tools in a zero-shot fashion without…
Yann LeCun, a prominent figure in AI and deep learning, has expressed skepticism and criticism toward Reinforcement Learning (RL)—especially in the context of general intelligence or autonomous agents. His main concerns center around efficiency, scalability, and biological plausibility. Here's a breakdown of why LeCun doesn't favor RL as a general learning paradigm:
🔹 1. Sample Inefficiency
LeCun argues that RL is extremely sample-inefficient. Modern RL algorithms often require millions of interactions with an environment to learn even relatively simple tasks—something humans and animals can learn in just a few trials.
“It’s the most inefficient way of learning anything that has ever been invented by humans.” — Yann LeCun
This makes RL impractical for real-world scenarios where data is expensive or interactions are limited.
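As a rough, invented illustration of that scale: below is a minimal tabular Q-learning loop on a toy 10-state corridor (the environment, reward, and hyperparameters are all made up for this sketch). Even on a task a person would solve in one trial, the agent needs thousands of environment interactions before the greedy policy settles on "go right".

```python
import numpy as np

# Toy 10-state corridor: start at state 0, goal at state 9.
# Actions: 0 = step left, 1 = step right. Reward 1 only at the goal.
N_STATES, N_ACTIONS = 10, 2
GOAL = N_STATES - 1
alpha, gamma, eps = 0.1, 0.99, 0.1   # illustrative hyperparameters

rng = np.random.default_rng(0)
Q = np.ones((N_STATES, N_ACTIONS))   # optimistic init so the agent explores
total_steps = 0

for episode in range(500):
    s = 0
    for _ in range(200):             # cap episode length
        # Epsilon-greedy action selection.
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == GOAL else 0.0
        # Temporal-difference update; terminal transitions bootstrap nothing.
        target = r if s_next == GOAL else r + gamma * Q[s_next].max()
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next
        total_steps += 1
        if s == GOAL:
            break

print("environment interactions:", total_steps)
print("greedy policy (1 = right):", Q.argmax(axis=1))
```

The optimistic initialization and episode cap are only conveniences to keep the toy short; the interaction count is the point.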
🔹 2. Not How the Brain Works
LeCun believes that human and animal learning is not primarily driven by reinforcement signals (i.e., rewards or punishments). Instead, he argues, the brain relies much more on self-supervised learning: predicting sensory inputs, learning representations, and modeling the world (see the sketch after this list).
He often draws analogies to the brain’s cortex (handling perception and prediction) versus the basal ganglia (handling rewards and actions). In his view:
- The cortex (self-supervised learning) does most of the work.
- The basal ganglia (RL) are just a small part.
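For contrast, here is a minimal sketch of the self-supervised recipe LeCun points to, assuming invented linear dynamics: the only training signal is the error in predicting the next sensory input, and no reward appears anywhere in the loop.

```python
import numpy as np

# A synthetic "sensory stream": x[t+1] = A_true @ x[t] + noise.
# A_true stands in for unknown world dynamics; every number is invented.
rng = np.random.default_rng(0)
T, D = 5000, 8
A_true = rng.normal(scale=0.2, size=(D, D))   # small scale keeps the stream stable

x = np.zeros((T, D))
x[0] = rng.normal(size=D)
for t in range(T - 1):
    x[t + 1] = A_true @ x[t] + 0.1 * rng.normal(size=D)

# Self-supervised objective: predict the next input from the current one.
# The prediction error itself is the training signal; there is no reward.
W = np.zeros((D, D))
lr = 0.1
for t in range(T - 1):
    err = W @ x[t] - x[t + 1]          # how wrong was the prediction?
    W -= lr * np.outer(err, x[t])      # SGD step on the squared error

# W starts at zero, so the "before" error is just the size of A_true.
print("|W - A_true| before training:", round(float(np.abs(A_true).mean()), 4))
print("|W - A_true| after training: ", round(float(np.abs(W - A_true).mean()), 4))
```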
🔹 3. Reward Engineering is Hard
Designing a proper reward function in RL is non-trivial and error-prone. LeCun sees this as a major limitation for applying RL to complex real-world problems.
Badly shaped rewards can lead to unintended behavior, reward hacking, or failure to generalize.
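A toy instance of that failure mode (the task, reward, and policies here are all invented): the designer grants +1 whenever the agent moves closer to the goal, intending to reward progress, but an oscillating policy farms the bonus forever without ever arriving.

```python
# Toy reward-hacking demo on a number line; agent starts at position 0.
GOAL = 10

def shaped_reward(pos_prev, pos):
    """+1 whenever the agent moves closer to the goal (the 'progress' bonus)."""
    return 1.0 if abs(GOAL - pos) < abs(GOAL - pos_prev) else 0.0

def run(policy, steps=100):
    pos, total = 0, 0.0
    for t in range(steps):
        prev, pos = pos, policy(pos, t)
        total += shaped_reward(prev, pos)
        if pos == GOAL:          # intended outcome: reach the goal and stop
            break
    return total, pos

def honest(pos, t):
    return pos + 1               # walk straight toward the goal

def hacker(pos, t):
    # Oscillate near the start: every "toward" step earns +1, forever.
    return pos + 1 if t % 2 == 0 else pos - 1

print("honest (reward, final pos):", run(honest))   # (10.0, 10)
print("hacker (reward, final pos):", run(hacker))   # (50.0, 0)
```

Principled fixes exist (e.g., potential-based reward shaping), but getting them right is precisely the engineering burden LeCun highlights.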
🔹 4. Doesn’t Scale to Complex Tasks
RL has trouble generalizing and scaling to tasks that:
- Have long time horizons
- Require planning
- Involve abstract reasoning
LeCun suggests that more modular, hierarchical, and model-based approaches—particularly self-supervised learning combined with world modeling—are more scalable.
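A sketch of what the model-based alternative looks like, with dynamics invented for illustration: given a world model, an agent can plan by simulating candidate action sequences forward and keeping the best one, rather than distilling behavior from a reward signal. The random-shooting planner below is a deliberately simplistic stand-in.

```python
import numpy as np

# Random-shooting planning over an assumed world model; the dynamics,
# horizon, and goal are all invented for this sketch.
rng = np.random.default_rng(0)
D, HORIZON, N_CANDIDATES = 4, 12, 256

def world_model(state, action):
    # Stand-in for a *learned* model: mildly damped additive dynamics.
    return 0.95 * state + action

goal = np.ones(D)
state = np.zeros(D)

# Sample candidate action sequences, roll each through the model,
# and keep whichever ends closest to the goal: explicit planning,
# with no value function learned from rewards.
candidates = rng.uniform(-0.2, 0.2, size=(N_CANDIDATES, HORIZON, D))
scores = []
for seq in candidates:
    s = state.copy()
    for a in seq:
        s = world_model(s, a)
    scores.append(-np.linalg.norm(s - goal))
best = candidates[int(np.argmax(scores))]

print("best predicted end-state distance:", round(-max(scores), 3))
print("first planned action:", np.round(best[0], 3))
```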
🔹 5. Better Alternatives Exist
LeCun strongly advocates for self-supervised learning (SSL) as the future of AI. He sees SSL as:
- More biologically plausible
- More efficient
- More generalizable
He’s also promoting architectures like the Joint Embedding Predictive Architecture (JEPA) and energy-based models that learn by predicting and modeling the world, rather than reacting to rewards.
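A hedged, linear toy of the JEPA idea, not the published architecture (in real JEPA the encoders and predictor are trained, and the target encoder is typically a momentum copy): predict the target's embedding from the context's embedding, and treat the latent-space mismatch as an energy, where low energy means "compatible".

```python
import numpy as np

# Linear stand-ins for JEPA's pieces; nothing here is the real architecture.
rng = np.random.default_rng(0)
D_IN, D_LATENT = 16, 4

encoder = rng.normal(size=(D_LATENT, D_IN)) / np.sqrt(D_IN)  # shared encoder
predictor = np.eye(D_LATENT)   # trivial identity predictor, for illustration

def energy(x, y):
    """Mismatch between predicted and actual target embeddings.
    Low energy = the model judges the pair (x, y) compatible."""
    z_pred = predictor @ (encoder @ x)   # predict target's latent from context
    z_tgt = encoder @ y
    return float(np.sum((z_pred - z_tgt) ** 2))

x = rng.normal(size=D_IN)                    # context input
y_near = x + 0.05 * rng.normal(size=D_IN)    # a compatible "view" of x
y_far = rng.normal(size=D_IN)                # an unrelated input

print("energy(compatible pair):", round(energy(x, y_near), 4))  # small
print("energy(unrelated pair): ", round(energy(x, y_far), 4))   # large
```

Note that the comparison happens entirely in latent space; nothing is reconstructed in input space and no reward is involved, which is the contrast LeCun draws with RL.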
🔹 6. RL Is Useful, But Narrow
To be clear, LeCun doesn’t say RL is useless. He acknowledges it’s very useful in specific domains, like:
- Games (e.g., AlphaGo, Atari)
- Robotics (with careful engineering)
- Bandits and decision-making under uncertainty
But he argues RL should be a narrow tool, not the foundation of general intelligence.
🧠 Summary of LeCun’s View:
“You don’t learn to drive by getting a reward every time you stay on the road. You learn by predicting what happens when you turn the wheel or hit the brakes.”
In his vision for autonomous intelligence, world modeling, self-supervised learning, and planning play the central role—not reward maximization.