🔹AlphaFold: Improved #protein structure #prediction using potentials from #deep_learning
https://deepmind.com/research/publications/AlphaFold-Improved-protein-structure-prediction-using-potentials-from-deep-learning
━━━━━━━━━━━━━━━━━━━
Via: Cutting-edge Deep Learning
Credit: deepmind.com
#deepmind
#machinelearning
#neuralnetworks
🔹Proteins are complex molecules that are essential to life, and each has its own unique 3D shape.
Today we’re excited to share DeepMind’s first significant milestone in demonstrating how artificial intelligence research can drive and accelerate new scientific discoveries. With a strongly interdisciplinary approach to our work, #DeepMind has brought together experts from the fields of structural biology, physics, and #machine_learning to apply #cutting-edge techniques to #predict the 3D structure of a #protein based solely on its #genetic sequence.
Via: @cedeeplearning
link: https://deepmind.com/blog/article/alphafold-casp13
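The publication linked above describes learning a distance-based potential: a network predicts, for every pair of residues, a distribution over their distances (a "distogram"), which is then used as a potential to fold the chain. Below is a minimal, illustrative sketch of that first step only; the sizes, the simple embedding-concatenation pair features, and the dummy sequence are assumptions for the example, not AlphaFold's actual architecture.

```python
# Illustrative only: predict a per-residue-pair distance distribution ("distogram")
# from a sequence. Sizes, features, and architecture are placeholder assumptions.
import torch
import torch.nn as nn

n_residues, n_amino_acids, n_bins = 64, 20, 32               # illustrative sizes

sequence = torch.randint(0, n_amino_acids, (n_residues,))    # dummy protein sequence
embed = nn.Embedding(n_amino_acids, 16)
pair_head = nn.Linear(32, n_bins)

res = embed(sequence)                                        # (L, 16) per-residue features
# Pair features: concatenate the features of residue i and residue j.
pair = torch.cat([
    res.unsqueeze(1).expand(-1, n_residues, -1),
    res.unsqueeze(0).expand(n_residues, -1, -1),
], dim=-1)                                                   # (L, L, 32)

distogram = pair_head(pair).softmax(dim=-1)                  # (L, L, n_bins) distance distributions
print(distogram.shape)
```

In the real system the predicted distributions define a potential over the 3D coordinates, which is then minimized to produce the folded structure.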
GANs.pdf (2.2 MB)
🔹Improved Techniques for Training GANs
We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels.
Via: @cedeeplearning
link: https://arxiv.org/abs/1606.03498
#GANS
#generative_model
#deeplearning
#research
#machinelearning
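For readers new to the framework the abstract builds on, here is a minimal vanilla GAN training loop. The architectures, data shape, and hyperparameters are placeholder assumptions, and it deliberately omits the paper's own improvements (feature matching, minibatch discrimination, one-sided label smoothing, and so on).

```python
# A minimal vanilla GAN training step (a baseline the paper improves upon);
# shapes, sizes, and learning rates below are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784   # e.g. flattened 28x28 images (assumed)

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor):
    """One alternating update: discriminator first, then generator."""
    b = real_batch.size(0)
    fake = G(torch.randn(b, latent_dim))

    # Discriminator: push real samples toward 1 and generated samples toward 0.
    d_loss = bce(D(real_batch), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the (updated) discriminator output 1 on fakes.
    g_loss = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

print(train_step(torch.rand(16, data_dim) * 2 - 1))   # dummy "real" batch in [-1, 1]
```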
🔻DeepMind's Losses and the Future of #Artificial_Intelligence
DeepMind, likely the world’s largest research-focused artificial intelligence operation, is losing a lot of money fast, more than $1 billion in the past three years. #DeepMind also has more than $1 billion in debt due in the next 12 months.
Does this mean that AI is falling apart?
Via: @cedeeplearning
link: https://www.wired.com/story/deepminds-losses-future-artificial-intelligence/
#deeplearning
#machinelearning
#AI
🔹Deep Learning #Algorithms Identify Structures in Living Cells
For cell biologists, fluorescence microscopy is an invaluable tool. Fusing dyes to antibodies or inserting genes coding for fluorescent proteins into the #DNA of living cells can help scientists pick out the location of #organelles, #cytoskeletal elements, and other subcellular #structures from otherwise #impenetrable microscopy images. But this technique has its #drawbacks.
Via: @cedeeplearning
link: https://www.the-scientist.com/notebook/deep-learning-algorithms-identify-structures-in-living-cells-65778
#deeplearning
#neuralnetworks
#machinelearning
🔹Artificial Intelligence Vs Neural Networks
The term “artificial intelligence” dates back to the mid-1950s, when mathematician John McCarthy, widely recognized as the father of AI, used it to describe machines that do things people might call intelligent. He and Marvin Minsky, whose work was just as influential in the AI field, organized the Dartmouth Summer Research Project on Artificial Intelligence in 1956.
Via: @cedeeplearning
link: https://www.the-scientist.com/magazine-issue/artificial-intelligence-versus-neural-networks-65802
#neuralnetworks
#deeplearning
#machinelearning
#AI
🔹AI Networks Generate Super-Resolution from Basic Microscopy
A new study uses deep learning to improve the resolution of biological images, but elicits skepticism about its ability to enhance snapshots of sample types that it has never seen before.
Via: @cedeeplearning
link: https://www.the-scientist.com/news-opinion/ai-networks-generate-super-resolution-from-basic-microscopy-65219
#deeplearning
#neuralnetworks
#machinelearning
🔹Neural networks facilitate optimization in the search for new materials
Sorting through millions of possibilities, a search for battery materials delivered results in five weeks instead of 50 years. When searching through theoretical lists of possible new materials for particular applications, such as batteries or other energy-related devices, there are often millions of potential materials that could be considered, and multiple criteria that need to be met and optimized at once.
Via: @cedeeplearning
link: http://news.mit.edu/2020/neural-networks-optimize-materials-search-0326
#MIT
#deeplearning
#neuralnetworks
#imagedetection
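The post describes screening millions of candidates against multiple criteria. Below is a hedged sketch of the general surrogate-model idea (an illustration of the concept, not the MIT group's specific method): fit a cheap predictor on a small set of expensively evaluated candidates, then use it to rank the rest so only the most promising are evaluated further. The descriptors, data, and scoring function are synthetic stand-ins.

```python
# Surrogate-model screening sketch: everything here is synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
candidates = rng.normal(size=(100_000, 8))            # descriptor vectors for candidate materials
evaluated_idx = rng.choice(len(candidates), 500, replace=False)
true_score = lambda X: X[:, 0] - 0.5 * X[:, 3] ** 2   # stand-in for an expensive simulation

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(candidates[evaluated_idx], true_score(candidates[evaluated_idx]))

predicted = surrogate.predict(candidates)             # cheap scores for every candidate
shortlist = np.argsort(predicted)[::-1][:10]          # send only these to expensive evaluation
print(shortlist)
```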
🔹Deep learning for mechanical property evaluation
New technique allows for more precise measurements of #deformation characteristics using nanoindentation tools.
A #standard method for testing some of the #mechanical properties of #materials is to poke them with a sharp point. This “indentation technique” can provide detailed measurements of how the material responds to the point’s force, as a function of its #penetration depth.
Via: @cedeeplearning
link: http://news.mit.edu/2020/deep-learning-mechanical-property-metallic-0316
#neuralnetworks
#deeplearning
#machinelearning
🔹Understanding Generative Adversarial Networks (GANs)
Yann LeCun described it as “the most interesting idea in the last 10 years in #Machine_Learning”. Of course, such a compliment coming from such a prominent researcher in the #deep_learning area is always a great advertisement for the subject we are talking about! And, indeed, #Generative Adversarial #Networks (#GANs for short) have had a huge success since they were introduced in 2014 by Ian J. #Goodfellow and co-authors in the article Generative Adversarial Nets.
Via: @cedeeplearning
link: https://towardsdatascience.com/understanding-generative-adversarial-networks-gans-cd6e4651a29
🔹Structured learning and GANs in TF, another viral face-swapper, optimizer benchmarks, and more...
This week in #deep_learning we bring you a GAN library for TensorFlow 2.0, another viral #face-swapping app, an #AI Mahjong player from Microsoft, and surprising results showing random architecture search beating neural architecture search. You may also enjoy an interview with Yann LeCun on the AI Podcast, a primer on #MLIR from Google, a few-shot face-#swapping #GAN, benchmarks for recent optimizers, a structured learning #framework for #TensorFlow, and more!
Via: @cedeeplearning
link: https://www.deeplearningweekly.com/issues/deep-learning-weekly-issue-124.html
🔻When not to use deep learning
Despite #DL's many successes, there are at least 4 situations where it is more of a hindrance, including low-budget problems, or when explaining #models and #features to the general public is required.
So when not to use #deep_learning?
1. #Low-budget or #low-commitment problems
2. Interpreting and communicating model parameters/feature importance to a general audience (see the sketch after this post)
3. Establishing causal mechanisms
4. Learning from “#unstructured” features
Via: @cedeeplearning
link: https://www.kdnuggets.com/2017/07/when-not-use-deep-learning.html/2
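As a concrete illustration of point 2 above (a sketch only, with made-up feature names and synthetic data): a linear model's coefficients can be read off and explained directly, which is far harder to do for a deep network.

```python
# Interpretability sketch: feature names and data are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["age", "income", "num_purchases"]     # hypothetical features
X = np.random.RandomState(0).normal(size=(200, 3))
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)          # synthetic labels

clf = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, clf.coef_[0]):
    print(f"{name}: {coef:+.2f}")   # sign and magnitude are directly explainable
```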
🔻Free Mathematics Courses for Data Science & Machine Learning
It's no secret that #mathematics is the foundation of data science. Here is a selection of courses to help increase your math skills to excel in #data_science, #machine_learning, and beyond. (🔹click on the link below🔹)
Via: @cedeeplearning
link: https://www.kdnuggets.com/2020/02/free-mathematics-courses-data-science-machine-learning.html
🔻20 AI, Data Science, Machine Learning Terms You Need to Know in 2020
2020 is well underway, and we bring you 20 AI, #data_science, and #machine_learning #terms we should all be familiar with as the year marches onward.
Via: @cedeeplearning
🔻Part 1: https://www.kdnuggets.com/2020/02/ai-data-science-machine-learning-key-terms-2020.html
🔻Part 2: https://www.kdnuggets.com/2020/03/ai-data-science-machine-learning-key-terms-part2.html
#deeplearning
#terminology
🔹A more thorough comparison between the #HRRR and #MetNet models can be found in the video.
https://youtu.be/-dAvqroX7ZI
YouTube: Neural Weather Model MetNet: Samples (from the paper "MetNet: A Neural Weather Model for Precipitation Forecasting")
🔹A Neural Weather Model for Eight-Hour Precipitation Forecasting
Predicting weather from minutes to weeks ahead with high #accuracy is a fundamental scientific challenge that can have a wide ranging impact on many aspects of society. Current forecasts employed by many meteorological agencies are based on physical models of the atmosphere that, despite improving substantially over the preceding decades, are inherently constrained by their computational requirements and are sensitive to approximations of the physical laws that govern them. An alternative approach to weather prediction that is able to overcome some of these constraints uses deep neural networks (#DNNs): instead of encoding explicit physical laws, DNNs discover #patterns in the #data and learn complex transformations from inputs to the desired outputs using parallel computation on powerful specialized hardware such as #GPUs and #TPUs.
Via: @cedeeplearning
link: https://ai.googleblog.com/
#deeplearning
#neuralnetworks
#machinelearning
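To make the contrast with physics-based models concrete, here is a toy sketch of the data-driven approach (not MetNet itself, which is far larger and outputs probabilistic rain rates): a small convolutional network that learns a mapping from a stack of past radar frames to a precipitation map. The shapes, data, and hyperparameters are invented for the example.

```python
# Toy "learn the transformation from inputs to outputs" sketch; all values are dummies.
import torch
import torch.nn as nn

in_frames, grid = 4, 64  # 4 past radar frames on a 64x64 grid (assumed)

model = nn.Sequential(
    nn.Conv2d(in_frames, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=1),              # predicted precipitation per grid cell
)

past_radar = torch.randn(8, in_frames, grid, grid)   # batch of 8 input sequences
target_rain = torch.rand(8, 1, grid, grid)           # observed precipitation (dummy)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.MSELoss()(model(past_radar), target_rain)
opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```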
🔹Learning to See Transparent Objects
Optical 3D range sensors, like #RGB-D cameras and #LIDAR, have found widespread use in robotics to generate rich and accurate 3D maps of the environment, from #self-driving cars to autonomous manipulators. However, despite the ubiquity of these complex #robotic systems, transparent objects (like a glass container) can confound even a suite of expensive sensors that are commonly used. This is because optical 3D sensors are driven by algorithms that assume all surfaces are Lambertian, i.e., they reflect light evenly in all directions, resulting in a uniform surface brightness from all viewing angles. However, transparent objects violate this assumption, since their surfaces both refract and reflect light. Hence, most of the depth data from transparent objects are invalid or contain unpredictable noise.
Via: @cedeeplearning
link: https://ai.googleblog.com/search?updated-max=2020-02-24T13:01:00-08:00&max-results=10&start=8&by-date=false
#deeplearning
#neuralnetworks
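Since the Lambertian assumption is the crux of the explanation above, here is a small numeric sketch of it (a generic shading model, not Google's pipeline): diffuse brightness depends only on albedo and the angle between the surface normal and the light direction, never on the viewing direction, which is exactly what refraction and specular reflection from transparent objects break.

```python
# Lambertian (diffuse) reflectance sketch: brightness = albedo * max(0, n . l).
import numpy as np

def lambertian_brightness(albedo: float, normal: np.ndarray, light_dir: np.ndarray) -> float:
    """Diffuse brightness from unit surface normal n and light direction l."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * max(0.0, float(n @ l))

# Same value from any viewpoint: the viewing direction never enters the model.
print(lambertian_brightness(0.8, np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])))
```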