NASA: "Look closely 👀
These “leopard spots” on a Martian rock are clues pointing to possibly the best signs of ancient microbial life we’ve found yet on Mars. To know for sure, we need to study the rock in labs on Earth."
https://www.jpl.nasa.gov/news/nasas-perseverance-rover-scientists-find-intriguing-mars-rock
Apples or Hamsters? 🍎🐹, Kling AI
Big Bang Gears by Tamás Görbe
"With the fastest gear turning once every 1.5 seconds and a gear ratio of 4:1 the last gear completes a full turn once every (you guessed it) 13.7 billion years!!
The outer teeth on the last gear move by a proton's diameter every 25 minutes..."
Source: https://x.com/TamasGorbe/status/1816761031743471847
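A quick arithmetic check of the quoted numbers (assuming the 4:1 ratio applies per gear stage; the stage count and the final gear's size are not stated in the tweet and are inferred here only to test consistency):

```python
# Sanity check of the Big Bang Gears numbers quoted above. The 1.5 s period, 4:1 ratio,
# 13.7-billion-year figure, and proton-diameter-per-25-minutes claim come from the tweet;
# the stage count and gear circumference are inferred here, not stated in the source.
import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600
fastest_period_s = 1.5                                  # fastest gear: one turn per 1.5 s
age_of_universe_s = 13.7e9 * SECONDS_PER_YEAR           # ~4.3e17 s

# How many 4:1 reductions does it take to stretch 1.5 s into 13.7 billion years?
stages = math.log(age_of_universe_s / fastest_period_s, 4)
print(f"required 4:1 stages ≈ {stages:.0f}")            # ≈ 29

# Tooth speed implied by "a proton's diameter every 25 minutes"
proton_diameter_m = 1.7e-15
tooth_speed_m_per_s = proton_diameter_m / (25 * 60)     # ≈ 1.1e-18 m/s

# Circumference consistent with that tooth speed and a 13.7-billion-year period
circumference_m = tooth_speed_m_per_s * age_of_universe_s
print(f"implied last-gear circumference ≈ {circumference_m:.2f} m")  # ≈ 0.5 m, desktop-sized
```

Both figures hang together: roughly 29 successive 4:1 reductions, ending in a gear small enough to sit on a desk.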
"With the fastest gear turning once every 1.5 seconds and a gear ratio of 4:1 the last gear completes a full turn once every (you guessed it) 13.7 billion years!!
The outer teeth on the last gear move by a proton's diameter every 25 minutes..."
Source: https://x.com/TamasGorbe/status/1816761031743471847
👍9🤯7🥱3
Links for 2024-07-28
AI:
1. Recursive Introspection: Teaching Language Model Agents How to Self-Improve https://arxiv.org/abs/2407.18219
2. How do we leverage AI-synthesized data without catastrophic degradation? Rank-and-prune feedback, from humans or even weaker models, provably restores and even surpasses original performance! https://arxiv.org/abs/2406.07515 (related: Model collapse is not a significant threat under current best practices https://x.com/RylanSchaeffer/status/1816881533795422404)
3. Bootstrapping LLM-based Task-Oriented Dialogue Agents via Self-Talk https://arxiv.org/abs/2401.05033
4. The space of human goals is infinitely vast -- yet, people spontaneously infer plausible motivations for others from just a few actions. How? Infinite Ends from Finite Samples: Open-Ended Goal Inference as Top-Down Bayesian Filtering of Bottom-Up Proposals https://arxiv.org/abs/2407.16770
5. How to make your robot handle diverse visual scenarios? Maniwhere: A Visual Generalizable Framework For Reinforcement Learning https://gemcollector.github.io/maniwhere/
6. “Do neural networks dream of internal goals? We confirm RNNs trained to play Sokoban with RL learn to plan. Our black-box analysis reveals novel behaviors such as agents “pacing” to gain thinking time. We open-source the RNNs as model organisms for interpretability research.” https://far.ai/post/2024-07-learned-planners/
7. Discovery of Crystallizable Organic Semiconductors with Machine Learning https://pubs.acs.org/doi/10.1021/jacs.4c05245
8. Advanced Hardware Device Slashes AI Energy Consumption by 1000x https://cse.umn.edu/college/news/researchers-develop-state-art-device-make-artificial-intelligence-more-energy
9. “it turns out that the reason GPT-2 couldn't multiply four-digit numbers wasn't too few layers, too small of a hidden dimension, or bad training data. all that was fine. the learning algorithm itself was the issue” https://x.com/jxmnop/status/1816958426385383753
10. "Fundamentally, more compute directly translates into better results, both at training time and at inference time. This trend isn't going to stop, even after we hit the training data ceiling." https://x.com/fchollet/status/1816808401093161276
Science:
1. Dual-action antibiotic could make bacterial resistance nearly impossible https://today.uic.edu/dual-action-antibiotic-could-make-bacterial-resistance-nearly-impossible/
2. The brain does not have a miraculous ability to 'rewire' itself or repurpose neurons to overcome injury or disability, despite the popular explanation. Instead, it just enhances or emphasizes different parts of pre-existing structures through simple repetition. https://elifesciences.org/articles/84716
3. Remember last week's "supermodel granny" drug that extended lifespan in mice by 25% via IL-11? It took only 8 days for a new intervention to come out that increases lifespan by 35%!! https://www.biorxiv.org/content/10.1101/2024.07.25.605097v1
Physics:
1. Within the flash of a brilliant gamma-ray burst, astronomers have found a remarkable signal: matter and antimatter annihilating each other while fleeing a newborn black hole at 99.9% of the speed of light. https://science.nasa.gov/science-research/astrophysics/gamma-ray-bursts/nasas-fermi-finds-new-feature-in-brightest-gamma-ray-burst-yet-seen/ (a quick Lorentz-factor calculation follows after this list)
2. Contrary to what general relativity says, black holes can't be made solely from light - because quantum effects prevent it. https://physics.aps.org/articles/v17/119
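For Physics item 1, here is what 99.9% of the speed of light implies; this is pure arithmetic on the quoted figure, not anything taken from the Fermi analysis itself:

```python
# What "99.9% the speed of light" implies, using only the figure quoted in Physics item 1.
beta = 0.999                               # v / c
gamma = 1 / (1 - beta ** 2) ** 0.5         # Lorentz factor
print(f"Lorentz factor gamma ≈ {gamma:.1f}")                # ≈ 22.4

# A spectral line from material moving straight toward the observer at this speed
# is blueshifted by the relativistic Doppler factor:
doppler = ((1 + beta) / (1 - beta)) ** 0.5
print(f"head-on Doppler blueshift factor ≈ {doppler:.0f}")  # ≈ 45
```

So a 511 keV electron-positron annihilation line can plausibly appear at far higher energies when the emitting material races toward us at that speed.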
Politics:
1. “We now have fertility data for 2023 from Denmark, and it looks like every major group—from nonwestern migrants to native Danes—is still below replacement, and the situation is largely progressing in the wrong direction…Yes, even Muslims, Africans, and other globally high-fertility groups have low fertility after migration.” https://x.com/cremieuxrecueil/status/1816878701549617245
2. 1% of people are responsible for 24% of the health spending in America and 5% of people are responsible for just over half. https://x.com/cremieuxrecueil/status/1817249178017321257
"Introducing Meta Segment Anything Model 2 (SAM 2) — the first unified model for real-time, promptable object segmentation in images & videos.
SAM 2 is available today under Apache 2.0 so that anyone can use it to build their own experiences"
https://ai.meta.com/blog/segment-anything-2/
>>Continuous Learning Model (CLM) by Topology<<
"The CLM is a new model that remembers interactions, learns skills autonomously, and thinks in its free time, just like humans.
The CLM just wants to learn.
Try it at https://topologychat.com/
LLMs are stateless.
>CLM remembers and references all chats
LLMs don’t have an inner-life.
>CLM forms ideas by mulling over memories in its free time
LLMs have no soul.
>CLM actively organizes memories/ideas, granting it an emergent personality
CLM is a drop-in replacement for existing LLMs. Change one line of code and get continuous learning.
It simply learns content provided in user messages. Topology’s CLM eliminates RAG, GPTs, reranking, simple agents, and fine-tuning."
"The CLM is a new model that remembers interactions, learns skills autonomously, and thinks in its free time, just like humans.
The CLM just wants to learn.
Try it at https://topologychat.com/
LLMs are stateless.
>CLM remembers and references all chats
LLMs don’t have an inner-life.
>CLM forms ideas by mulling over memories in its free time
LLMs have no soul.
>CLM actively organizes memories/ideas, granting it an emergent personality
CLM is a drop-in replacement for existing LLMs. Change one line of code and get continuous learning.
It simply learns content provided in user messages. Topology’s CLM eliminates RAG, GPTs, reranking, simple agents, and fine-tuning."
🤯9🥴5🔥2
Links for 2024-07-30
AI:
1. Baidu presents an end-to-end self-reasoning framework to improve the reliability and traceability of RAG systems. https://arxiv.org/abs/2407.19813
2. MindSearch is an open-source AI search-engine framework with performance comparable to Perplexity.ai Pro. Deploy your own Perplexity.ai-style search engine! https://mindsearch.netlify.app/
3. AlphaProof, AlphaGeometry, ChatGPT, and why the future of AI is neurosymbolic https://garymarcus.substack.com/p/alphaproof-alphageometry-chatgp
4. Jack Clark, co-founder of Anthropic: "Registering a prediction: I predict that within two years (by July 2026) we'll see an AI system beat all humans at the IMO, obtaining the top score. Alongside this, I would wager we'll see the same thing - an AI system beating all humans in a known-hard competition - in another scientific domain outside of mathematics. If both of those things occur, I believe that will present strong evidence that AI may successfully automate large chunks of scientific research before the end of the decade." https://importai.substack.com/p/import-ai-380-distributed-13bn-parameter
5. “AI existential risk probabilities are too unreliable to inform policy” https://www.aisnakeoil.com/p/ai-existential-risk-probabilities
6. Small Molecule Optimization with Large Language Models https://arxiv.org/abs/2407.18897
7. Stretching Each Dollar: Diffusion Training from Scratch on a Micro-Budget — “They trained BERT and outperformed the original using a single GPU in a single day. Their best model used around 1e19 FLOP compared to around 6e20 FLOP for the original BERT. So they are ~60x more compute-efficient after 4 years, consistent with roughly doubling algorithmic efficiency every 8 months.” https://arxiv.org/abs/2407.15811 (a quick check of this arithmetic follows after this list)
8. Constrained-CoT: Constraining the reasoning of LLaMA2-70b to 100 words improves the accuracy from 36.01% (CoT) to 41.07% (CCoT) on GSM8K. https://arxiv.org/abs/2407.19825
9. Theia: Distilling Diverse Vision Foundation Models for Robot Learning http://theia.theaiinstitute.com/
10. “SearchGPT has the ‘best shot at changing the search paradigm as we’ve known it for 25 years’” https://www.tomsguide.com/ai/chatgpt/searchgpt-has-the-best-shot-at-changing-the-search-paradigm-as-weve-known-it-for-25-years
11. How This Brain Implant Is Using ChatGPT https://www.cnet.com/tech/computing/how-this-brain-implant-is-using-chatgpt/
12. "The Virtue of Complexity in Return Prediction", Kelly et al 2023 (large models can be profitable even with negative R^2) https://onlinelibrary.wiley.com/doi/full/10.1111/jofi.13298
13. "A Visual Guide to Quantization: Demystifying the Compression of Large Language Models", Maarten Grootendorst 2024 https://newsletter.maartengrootendorst.com/p/a-visual-guide-to-quantization
Space:
1. The discovery of a possible sign of life in Venus’ clouds sparked controversy. Now, scientists say they have more proof https://edition.cnn.com/2024/07/29/science/venus-gases-phosphine-ammonia/index.html
2. “plants found on Earth could even survive the harsh conditions of the Red Planet. One such plant, a type of moss found in arid locales like Tibet & Antarctica, survived rigorous testing, including deep freezing and high radiation” https://www.popularmechanics.com/space/moon-mars/a61668800/moss-from-earth-can-survive-on-mars/
Miscellaneous:
1. Why controlling for variables is insufficient — On the pervasiveness of residual confounding in the social sciences, how to think about it, and what to do https://inquisitivebird.xyz/p/why-controlling-for-variables-is
2. How a Mind-Controlling Parasite Could Deliver Medicine to the Brain https://singularityhub.com/2024/07/29/how-a-mind-controlling-parasite-could-deliver-medicine-to-the-brain/
2018: Self-Driving Cars Will Always Be Limited. Even the Industry Leader Admits it. https://archive.is/hZUcm
2019: Driverless cars are stuck in a jam. Blame Silicon Valley hype—and the limits of AI https://archive.is/TDxqe
2020: Volkswagen exec admits full self-driving cars 'may never happen' https://archive.is/9Z7lL
The Chinese government is going all-in on autonomous vehicles https://www.technologyreview.com/2024/07/10/1094811/chinese-government-policy-autonomous-vehicles/ [no paywall: https://archive.is/ph0q9]
"There are at least 19 companies testing self-driving car technologies across 16 different cities in China, the most of any place on Earth, as reported recently by the New York Times." https://www.nytimes.com/2024/06/13/business/china-driverless-cars.html [no paywall: https://archive.is/l6wNU]
"There are at least 19 companies testing self-driving car technologies across 16 different cities in China, the most of any place on Earth, as reported recently by the New York Times." https://www.nytimes.com/2024/06/13/business/china-driverless-cars.html [no paywall: https://archive.is/l6wNU]
🥴11👍5🔥4❤1🤯1🥱1
Links for 2024-08-02
AI:
1. Google released an experimental updated version of Gemini 1.5 Pro that is #1 on the Chatbot Arena. Try it here: https://aistudio.google.com/app/
2. Method prevents an AI model from being overconfident about wrong answers https://news.mit.edu/2024/thermometer-prevents-ai-model-overconfidence-about-wrong-answers-0731
3. Sparse Autoencoders as a microscope for AI internals. https://deepmind.google/discover/blog/gemma-scope-helping-the-safety-community-shed-light-on-the-inner-workings-of-language-models/
4. Diffusion Augmented Agents: A Framework for Efficient Exploration and Transfer Learning https://arxiv.org/abs/2407.20798
5. Odyssey equips LLM-agents with advanced skills for exploring Minecraft. https://github.com/zju-vipa/Odyssey?tab=readme-ov-file
6. Meta-Rewarding Language Models: Self-Improving Alignment with LLM-as-a-Meta-Judge https://arxiv.org/abs/2407.19594
7. “We let models make hundreds or thousands of attempts when solving a problem, rather than just one...outperforming the single-attempt SOTA.” https://scalyresearch.stanford.edu/pubs/large_language_monkeys/
8. “Which is better, running a 70B model once, or a 7B model 10 times? Our findings reveal that the repeated use of smaller models can yield consistent improvements.” https://arxiv.org/abs/2404.00725 (a minimal repeated-sampling sketch follows after this list)
9. Achieving new SOTA standards by ensembling every other model into a meta-model that learns when to call each LLM. https://www.notdiamond.ai/
10. Claude Engineer https://github.com/Doriandarko/claude-engineer
11. LangGraph Studio: The first agent IDE https://www.youtube.com/watch?v=pLPJoFvq4_M
12. “By making programs differentiable, we inherently introduce probability distributions over their execution, providing a means to quantify the uncertainty associated with program outputs.” https://arxiv.org/abs/2403.14606
13. How AI is changing warfare https://www.economist.com/briefing/2024/06/20/how-ai-is-changing-warfare [no paywall: https://archive.is/yw8Yz]
14. “We discover a systematic way to scale up robot data...and we multiply that data 1000x or more in simulation.” https://x.com/DrJimFan/status/1818302152982343983
15. Figure AI: "Only recently has time opened a window of opportunity to scale billions of intelligent humanoid robots…Life is about to turn into a SciFi film." https://x.com/adcock_brett/status/1819191267785581049
Health:
1. One dose of a new nasal spray treatment clears toxic tau proteins from brain cells, improving memory. https://www.utmb.edu/news/article/utmb-news/2024/07/03/new-breakthrough-in-alzheimer-s-research--utmb-researchers-develop-nasal-spray-treatment-for-alzheimer-s-disease
2. New weight-loss drugs are causing people to spend less on groceries and choose healthier options. A new study shows that users buy 52% fewer snacks and confectionery items, 47% fewer baked goods, and 28% fewer sugary drinks. https://nypost.com/2024/07/27/lifestyle/weight-loss-drugs-eat-into-grocery-basket/
Physics:
1. Is nature really as strange as quantum theory says? Neutron measurements show that it cannot be explained without the strange properties of quantum theory. https://www.tuwien.at/en/phy/ati/news/neutronen-auf-klassisch-unerklaerlichen-bahnen-1
2. New work suggests that when black holes die, they turn into white holes. And that these objects are an ideal candidate for the dark matter that cosmologists believe fills the universe but have never directly observed. https://arxiv.org/abs/2407.09584
Miscellaneous:
1. Space is a latent sequence: A theory of the hippocampus https://www.science.org/doi/10.1126/sciadv.adm8470
2. Probability is just...really weird https://www.youtube.com/watch?v=zczGnnM05TQ
3. How computers work explained from scratch. https://www.youtube.com/playlist?list=PLnAxReCloSeTJc8ZGogzjtCtXl_eE6yzA
4. List of biotech founders and drug hunters who were unlikely to succeed (and yet they did) https://www.ladanuzhna.xyz/writing/list-of-biotech-founders
5. Romae Industriae: What were the binding constraints on a Roman Industrial Revolution? https://www.maximum-progress.com/p/romae-industriae
Links for 2024-08-04
AI:
1. AgentGen uses LLMs to synthesize diverse environments and planning tasks in a scalable way. https://arxiv.org/abs/2408.00764
2. Using LLM embeddings to capture word-by-word linguistic content transmitted from the speaker's brain to the listener's brain in real-time, face-to-face conversations https://www.cell.com/neuron/fulltext/S0896-6273(24)00460-4
3. An introduction to reinforcement learning for neuroscience https://arxiv.org/abs/2311.07315
4. From Text to Life: On the Reciprocal Relationship between Artificial Life and Large Language Models https://arxiv.org/abs/2407.09502
5. Toward De Novo Protein Design from Natural Language https://www.biorxiv.org/content/10.1101/2024.08.01.606258v1
6. The newly released Palmyra-Fin-70B outperforms Claude 3.5 Sonnet, GPT-4o, and Mixtral-8x7b on the long-fin-eval benchmark, across a variety of real-world financial use cases. https://x.com/rohanpaul_ai/status/1819443015481446643
7. “TPU transformation: A look back at 10 years of our AI-specialized chips” https://cloud.google.com/transform/ai-specialized-chips-tpu-history-gen-ai
8. Tyler Cowen on ChatGPT Advanced Voice Mode: "It’s happening, and this is to date one of the most vivid and impressive illustrations of what is possible. A mere three years ago this would have seemed like witchcraft." https://marginalrevolution.com/marginalrevolution/2024/08/chatgpt-advanced-voice-mode.html
9. ChatGPT Advanced Voice Mode Impresses Testers With Sound Effects, Catching Its Breath https://arstechnica.com/information-technology/2024/07/when-counting-quickly-openais-new-voice-mode-stops-to-catch-its-breath/
10. “I'm not going to make any arguments about what the future holds. I just want to provide a list of 50 conversations that I (a programmer and research scientist studying machine learning) have had with different large language models to meaningfully improve my ability to perform research and help me work on random coding side projects.” https://nicholas.carlini.com/writing/2024/how-i-use-ai.html
11. Character.AI CEO Noam Shazeer returns to Google. Google is also signing a non-exclusive agreement with Character.AI to use its tech. https://techcrunch.com/2024/08/02/character-ai-ceo-noam-shazeer-returns-to-google/
12. UK government shelves £1.3bn tech and AI plans https://www.bbc.com/news/articles/cyx5x44vnyeo
Miscellaneous:
1. “From an evolutionary perspective, what distinguishes the human brain? You may say, the neocortex. Surprisingly, in humans and other great apes, the expansion of the cerebellum accelerated faster than the enlargement of the cerebral cortex.” https://www.cell.com/current-biology/fulltext/S0960-9822(14)01069-0
2. “No matter what you post on social media. You can be found. Whether it's a zoomed in photo of your table or just a photo of your lunch. Even the smallest details in a photo give the biggest hints.” https://www.youtube.com/watch?app=desktop&v=Ue94gpWqEkM
Politics:
1. Iran has told Arab diplomats that it does not care if its response triggers a war with Israel, according to people familiar with the conversations https://www.wsj.com/world/middle-east/iran-rebuffs-calls-for-restraint-in-its-response-to-killing-of-hamas-leader-309314e7 [no paywall: https://archive.is/tmWJ3]
2. “Why is society so vulnerable to far-left ideas? One key reason: it’s hard to counter weaponized empathy. When actions are taken under the banner of a long-suffering group, that makes it much more difficult to challenge the worldview behind them. And far-left activists do skew female, suggesting that empathy plays a significant role.” https://x.com/RichardMCNgo/status/1819400985569329350
3. "In 2017, a survey of economists by the Chicago Booth School of Business asked if refugees will benefit Germany. Only 6% said they would be net cost. Almost 1 million Syrians in Germany, over half on welfare, the rest get medical, housing benefits. Not great forecasting." https://x.com/whyvert/status/1819395080714871003
Predictions for the future of software engineering: https://x.com/russelljkaplan/status/1820460524460802256
Links for 2024-08-06
AI:
1. Figure 02 unveiled — working autonomously at BMW's Spartanburg factory. ⦿ New 16 Degrees of Freedom hand ⦿ Onboard inference, running VLM locally for speech-to-speech reasoning ⦿ 2.25 KWh battery ⦿ Exoskeleton structure + Integrated wiring https://www.youtube.com/watch?v=0SRVJaOg9Co (press article: https://spectrum.ieee.org/figure-new-humanoid-robot)
2. Fully-automatic robot dentist performs world's first human procedure https://newatlas.com/health-wellbeing/robot-dentist-world-first/
3. Big tech’s huge AI spending isn’t slowing down. And according to their forward-looking statements, that spending is expected to go up even more. https://sherwood.news/tech/meta-amazon-microsoft-massive-ai-capex-spending-quarterly-earnings/
4. Flux: OpenAI’s DALL-E 3-Like AI For Free, Forever! https://www.youtube.com/watch?v=-7crpGKEA2g
5. Meta presents Self-Taught Evaluators: without any labeled preference data, the proposed model outperforms commonly used LLM judges such as GPT-4 and matches the performance of the top-performing reward models trained with labeled examples https://arxiv.org/abs/2408.02666
6. AI capabilities can be significantly improved without expensive retraining https://arxiv.org/abs/2312.07413
7. MiniCPM-V: A GPT-4V Level MLLM on Your Phone https://arxiv.org/abs/2408.01800
8. DeepL’s latest large language model, which is trained to specialize in translation, outperforms Google Translate and GPT-4 for translation tasks. https://thenextweb.com/news/deepl-new-llm-that-outperforms-google-translate-chatgpt
9. A New Type of Neural Network Is More Interpretable -- Kolmogorov-Arnold Networks could point physicists to new hypotheses https://spectrum.ieee.org/kan-neural-network
10. GPUDrive: Data-driven, multi-agent driving simulation at 1 million FPS https://arxiv.org/abs/2408.01584
11. Tenstorrent has developed a new set of AI chips that are much less expensive than NVIDIA’s. They are available as PCIe cards or as components of complete workstations. https://wccftech.com/tenstorrent-wormhole-ai-processors-risc-v-phenomenal-price-to-performance-value/
12. Meta says it will need 10x more computing power to train Llama 4 compared to Llama 3. https://techcrunch.com/2024/08/01/zuckerberg-says-meta-will-need-10x-more-computing-power-to-train-llama-4-than-llama-3/
13. “Breaking my hand forced me to write all my code with AI for 2 months. I’m never going back.” https://erikschluntz.com/software/2024/07/30/code-with-ai.html
Miscellaneous:
1. ‘Sensational’ Proof Delivers New Insights Into Prime Numbers https://www.quantamagazine.org/sensational-proof-delivers-new-insights-into-prime-numbers-20240715/
2. Neuroscience research into people with aphantasia, who don’t experience mental imagery, is revealing how imagination works and demonstrating the sweeping variety in our subjective experiences. https://www.quantamagazine.org/what-happens-in-a-mind-that-cant-see-mental-images-20240801/
3. No proof that radiation from X rays and CT scans causes cancer https://www.sciencedaily.com/releases/2016/02/160203134456.htm
4. Japan's unmanned stores count on shoppers' honesty https://web-japan.org/trends/11_tech-life/tec202309_unmanned-stores.html
5. "In my opinion, every moment beyond this should simply be interpreted within the context of a lower moment. Every even moment (4 - aka kurtosis, 6, 8, etc.) corresponds to variance, while every odd moment (5, 7, 9, etc.) corresponds to skewness. As the moments get larger, they are more impacted by outliers. So, the fourth moment (kurtosis) measures the same things that the second moment does (variance), but with a heavier focus on the outliers. This is where the "fat tails" description of kurtosis comes from. It measures the spread of the data but is more depend on the behavior of outliers in the tails." https://www.reddit.com/r/AskStatistics/comments/6d3fsp/comment/di0b0pc/
Empirical data on how useful AI agents are currently compared to humans: They can't do everything, but they can do a decent chunk of what humans can do, and they can do it significantly cheaper/faster.
Read more: https://metr.org/blog/2024-08-06-update-on-evaluations/
[Open Source] Unitree first-person-view teleoperation for humanoid robots, released to make data collection more convenient: https://github.com/unitreerobotics/avp_teleoperate
Links for 2024-08-08
AI:
1. “Can LLMs predict results of social science experiments? Across 70 studies, we find striking alignment (r = .85) between simulated and observed effects. Overall our results show high accuracy of LLM-derived predictions for experiments with human participants, generally greater accuracy than samples of lay and expert humans.” https://docsend.com/view/qeeccuggec56k9hd
2. “LLaVA-OneVision allows strong transfer learning across different modalities/scenarios, yielding new emerging capabilities. In particular, strong video understanding and cross-scenario capabilities are demonstrated through task transfer from images to videos.” https://arxiv.org/abs/2408.03326
3. Key-Point-Driven Mathematical Reasoning Distillation of Large Language Model https://arxiv.org/abs/2407.10167
4. Benchmarking LLMs for Optimization Modeling and Enhancing Reasoning via Reverse Socratic Synthesis https://arxiv.org/abs/2407.09887
5. "Transformers are Universal In-context Learners": in this paper, we show that deep transformers with a fixed embedding dimension are universal approximators for an arbitrarily large number of tokens. https://arxiv.org/abs/2408.01367
6. “How can we prevent LLM safeguards from being simply removed with a few steps of fine-tuning? We show it's surprisingly possible to make progress on creating safeguards that are tamper-resistant, reducing malicious use risks of open-weight models.” https://arxiv.org/abs/2408.00761
7. Diffusion Models as Data Mining Tools https://arxiv.org/abs/2408.02752
8. Hierarchical Conditioning of Diffusion Models Using Tree-of-Life for Studying Species Evolution https://arxiv.org/abs/2408.00160
9. Google announces Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters — Test-time compute can be used to outperform a 14× larger model https://arxiv.org/abs/2408.03314
10. A New Study Says AI Models Encode Language Like the Human Brain Does https://singularityhub.com/2024/08/07/a-new-study-says-ai-models-encode-language-like-the-human-brain-does/
11. A.I. ‐ Humanity's Final Invention? https://www.youtube.com/watch?v=fa8k8IQ1_X0
12. AI “godfather” Yoshua Bengio has joined a UK project to prevent AI catastrophes https://www.technologyreview.com/2024/08/07/1095879/ai-godfather-yoshua-bengio-joins-uk-project-to-prevent-ai-catastrophes/ [no paywall: https://archive.is/wcpgo]
Miscellaneous:
1. “We're using ultrasound to safely and non-invasively measure and modulate brain activity at high resolution” https://quintinfrerichs.xyz/nudge
2. Japanese scientists develop simplified EUV scanner that can make production of chips considerably cheaper https://www.tomshardware.com/tech-industry/japanese-scientists-develop-simplified-euv-scanner-that-can-make-production-of-chips-considerably-cheaper
3. Tiny arm bone belonged to smallest ancient human ever found https://www.nature.com/articles/d41586-024-02548-6
4. “The implications for life in the liquid water oceans, under the surface of icy moons, are obvious, and enormous. So I'm going to predict now, with medium confidence (and a couple of caveats, to follow) that we may well ultimately discover similar polymetallic nodules, producing oxygen through similar chemical processes, on the warm seafloors of the liquid water oceans under the frozen crusts of icy moons.” https://theeggandtherock.com/p/the-deep-ocean-floor-is-covered-in
5. Feasibility of keeping Mars warm with nanoparticles https://www.science.org/doi/10.1126/sciadv.adn4650
6. “When that enormous magnitude-9 earthquake hit Japan in 2011, it caused waves 1.5 meters high in some lakes in NORWAY!” https://mathstodon.xyz/@johncarlosbaez/112920894947197795
Politics:
1. ‘Sky’s the limit’: Fort Stewart soldiers prepare for the modern battlefield by building small drones from scratch https://www.stripes.com/branches/army/2024-08-06/army-soldiers-building-drones-fort-stewart-14761022.html
2. What can we say about the "far right" riots? https://www.aporiamagazine.com/p/what-can-we-say-about-the-far-right
Google unveils "Achieving Human Level Competitive Robot Table Tennis"! The robot won 100% vs. beginners and 55% vs. intermediate players, showcasing solid amateur human-level performance.
"The robot has to be good at low level skills, such as returning the ball, as well as high level skills, like strategizing and long-term planning to achieve a goal.
The robot first trains in a simulated environment, which can model the physics of table tennis matches accurately.
Once deployed to the real world, it collects data on its performance against humans to refine its skills back in simulation - creating a continuous feedback loop."
Read more: https://sites.google.com/view/competitive-robot-table-tennis/home
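A minimal sketch of the continuous sim-to-real feedback loop described in the quote above; every function name here is a hypothetical placeholder for illustration, not Google's actual code or API.

```python
# Sketch of the continuous sim-to-real feedback loop described above.
# All names are hypothetical placeholders; this is not Google's code.

def train_in_simulation(policy, simulator, steps: int):
    """Improve low- and high-level skills against a physics model of table tennis."""
    ...

def play_real_matches(policy, n_matches: int):
    """Deploy on the physical robot and record rallies, returns, and outcomes."""
    ...

def refine_simulator(simulator, real_match_data):
    """Fold real-world data back into the simulator so it tracks reality more closely."""
    ...

def training_loop(policy, simulator, rounds: int = 10):
    for _ in range(rounds):
        train_in_simulation(policy, simulator, steps=1_000_000)
        real_data = play_real_matches(policy, n_matches=20)
        refine_simulator(simulator, real_data)   # close the feedback loop
    return policy
```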
"The robot has to be good at low level skills, such as returning the ball, as well as high level skills, like strategizing and long-term planning to achieve a goal.
The robot first trains in a simulated environment, which can model the physics of table tennis matches accurately.
Once deployed to the real world, it collects data on its performance against humans to refine its skills back in simulation - creating a continuous feedback loop."
Read more: https://sites.google.com/view/competitive-robot-table-tennis/home
👏9🥱3