Neural scene representation and rendering
In June, #DeepMind introduced the Generative Query Network (#GQN), a framework in which machines learn to perceive their surroundings by training only on data they gather themselves as they move around scenes.
Link: https://deepmind.com/blog/neural-scene-representation-and-rendering/
New #DeepMind release: Neural Processes (#NPs), which generalise the #GQN training regime to other few-shot prediction tasks such as regression and classification.
Arxiv 1: https://arxiv.org/abs/1807.01622
Arxiv 2: https://arxiv.org/abs/1807.01613
#ICML2018
Teams at #DeepMind and #Moorfields have developed AI technology that can detect eye disease and prioritise patients. 'Clinically applicable deep learning for diagnosis and referral in retinal disease' has been published online in #NatureMedicine today:
https://www.nature.com/articles/s41591-018-0107-6
#cv #dl
Neural nets are terrible at arithmetic and counting. Train one on numbers from 1 to 10 and it will do okay on 3 + 5 but fail miserably on 1000 + 3000. Addressing this, «Neural Arithmetic Logic Units» can track time, do arithmetic on images of numbers, and extrapolate, giving better results than other architectures.
https://arxiv.org/pdf/1808.00508.pdf
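For intuition, here is a minimal numpy sketch of the NAC/NALU cells, written from the paper's equations rather than any official code (parameters are random here; in practice they are trained with backprop):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nac(x, W_hat, M_hat):
    # Neural Accumulator: effective weights are pushed towards {-1, 0, 1},
    # so the cell can learn exact addition/subtraction of its inputs.
    W = np.tanh(W_hat) * sigmoid(M_hat)
    return x @ W

def nalu(x, W_hat, M_hat, G, eps=1e-7):
    # NALU: a learned gate mixes the additive NAC path with a
    # multiplicative path computed in log-space (handles * and /).
    a = nac(x, W_hat, M_hat)
    m = np.exp(nac(np.log(np.abs(x) + eps), W_hat, M_hat))
    g = sigmoid(x @ G)
    return g * a + (1 - g) * m

# Toy forward pass: 2 inputs -> 1 output, untrained random parameters.
rng = np.random.default_rng(0)
x = np.array([[1000.0, 3000.0]])
W_hat, M_hat, G = (rng.normal(size=(2, 1)) for _ in range(3))
print(nalu(x, W_hat, M_hat, G))
```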
#nn #architecture #concept #deepmind #arithmetic
Paper «A Probabilistic U-Net for Segmentation of Ambiguous Images» from #NIPS2018 spotlight presentation.
Github: https://github.com/SimonKohl/probabilistic_unet
Arxiv: https://arxiv.org/abs/1806.05034
#DeepMind #segmentation #cv
A #DeepMind library for deep learning on graphs.
ArXiV: https://arxiv.org/abs/1806.01261
Github: https://github.com/deepmind/graph_nets
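As a rough illustration of what a single graph-network block computes (edge, then node, then global updates), here is a conceptual numpy sketch following the accompanying paper. It is not the graph_nets library API; the phi_* update functions stand in for learned networks such as MLPs.

```python
import numpy as np

def graph_net_block(nodes, edges, senders, receivers, globals_,
                    phi_e, phi_v, phi_u):
    n_e, n_v = len(edges), len(nodes)

    # 1. Edge update: each edge sees its features, its endpoints and the global.
    edge_in = np.concatenate([edges, nodes[senders], nodes[receivers],
                              np.repeat(globals_[None, :], n_e, axis=0)], axis=1)
    edges = phi_e(edge_in)

    # 2. Node update: aggregate incoming edges per node, then update each node.
    agg = np.zeros((n_v, edges.shape[1]))
    np.add.at(agg, receivers, edges)
    node_in = np.concatenate([nodes, agg,
                              np.repeat(globals_[None, :], n_v, axis=0)], axis=1)
    nodes = phi_v(node_in)

    # 3. Global update: summarise all edges and nodes into one graph-level vector.
    globals_ = phi_u(np.concatenate([edges.mean(0), nodes.mean(0), globals_]))
    return nodes, edges, globals_
```

Stacking such blocks gives message passing over the graph; in the library the update functions are trainable modules.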
🎓 Free «Advanced Deep Learning and Reinforcement Learning» course.
#DeepMind researchers have released video recordings of lectures from «Advanced Deep Learning and Reinforcement Learning», a course on deep RL taught at #UCL earlier this year.
YouTube Playlist: https://www.youtube.com/playlist?list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs
#course #video #RL #DL
🔥 AlphaFold: Using AI for scientific discovery.
#DeepMind has significantly improved protein folding prediction.
Protein folding is important because knowing the folded structure lets us predict a protein's function along with how it works.
Website: https://deepmind.com/blog/alphafold/
Guardian: https://www.theguardian.com/science/2018/dec/02/google-deepminds-ai-program-alphafold-predicts-3d-shapes-of-proteins
#bioinformatics #alphafold #genetics
Papers from #DeepMind panel at #NIPS2018
Work on radiotherapy planning: https://arxiv.org/abs/1809.04430
Triaging eye diseases: https://www.nature.com/articles/s41591-018-0107-6
Probabilistic U-net: https://arxiv.org/abs/1806.05034
#segmentation #CV #Unet
#DeepMind will show AI playing #Starcraft II.
Starts in 8 hours (6:00 PM GMT)
youtube.com/c/deepmind / https://www.twitch.tv/starcraft
#RL
Large Scale Adversarial Representation Learning
DeepMind shows that GANs can be harnessed for unsupervised representation learning, with state-of-the-art results on ImageNet. Reconstructions, as shown in the paper, tend to emphasise high-level semantics over pixel-level details.
Link: https://arxiv.org/abs/1907.02544
#DeepMind #GAN #CV #DL #SOTA
DeepMind's Behaviour Suite for Reinforcement Learning
DeepMind released Behaviour Suite for Reinforcement Learning, or ‘bsuite’ – a collection of carefully-designed experiments that investigate core capabilities of RL agents.
bsuite was built to do two things:
1. Offer clear, informative, and scalable experiments that capture key issues in RL
2. Study agent behaviour through performance on shared benchmarks
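A minimal interaction-loop sketch, assuming the dm_env-style interface that bsuite environments expose and a loader like bsuite.load_from_id; the exact entry points should be checked against the repository README:

```python
import numpy as np
import bsuite

# Assumed entry point: bsuite.load_from_id gives one experiment/seed.
# Verify the loader names (including the logging variant) in the README.
env = bsuite.load_from_id('catch/0')
num_actions = env.action_spec().num_values  # environments follow dm_env

for episode in range(10):
    timestep = env.reset()
    episode_return = 0.0
    while not timestep.last():
        action = np.random.randint(num_actions)  # plug your agent in here
        timestep = env.step(action)
        episode_return += timestep.reward
    print(f'episode {episode}: return {episode_return:.1f}')
```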
GitHub: https://github.com/deepmind/bsuite
Paper: https://arxiv.org/abs/1908.03568v1
Google colab: https://colab.research.google.com/drive/1rU20zJ281sZuMD1DHbsODFr1DbASL0RH
#RL #DeepMind #Bsuite
Applying machine learning optimization methods to the production of a quantum gas
#DeepMind developed machine learning techniques to optimise the production of a Bose-Einstein condensate, a quantum-mechanical state of matter that can be used to test predictions of theories of many-body physics.
ArXiV: https://arxiv.org/abs/1908.08495
#Physics #DL #BEC
🔥DeepMind’s AlphaStar beats top human players at strategy game StarCraft II
AlphaStar by Google's DeepMind can now play StarCraft II so well that it places in the 99.8th percentile on the European server. In other words, far better than even great human players, reaching Grandmaster level.
The solution basically combines reinforcement learning with a quality-diversity algorithm, which is similar to an evolutionary algorithm.
What's difficult about StarCraft, and how it differs from the recent #Go and #Chess AI solutions: merely finding a winning strategy is not enough (StarCraft is famously close to rock-paper-scissors in its non-transitive game design, unlike chess and Go; see the toy sketch below), since the result also depends on execution at macro and micro levels across different timescales.
How this applies to the real world: essentially, it resembles running logistics, manufacturing or research with complex operations and many different units.
Why this matters: it brings AI one step closer to running a real business.
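A toy illustration of that non-transitivity and of why naive self-play chases its own tail (the rock-paper-scissors payoffs are made up for the analogy, not taken from the paper):

```python
import numpy as np

# payoff[i, j] = outcome of strategy i against strategy j (1 win, 0 draw, -1 loss)
payoff = np.array([[ 0, -1,  1],   # rock
                   [ 1,  0, -1],   # paper
                   [-1,  1,  0]])  # scissors

strategy = 0                        # start by always playing "rock"
for step in range(6):
    best_response = int(np.argmax(payoff[:, strategy]))
    print(f'step {step}: strategy {strategy} -> best response {best_response}')
    strategy = best_response        # naive self-play adopts the best response
# The output cycles 0 -> 1 -> 2 -> 0 -> ... forever. AlphaStar's league instead
# keeps a population of past strategies and trains new agents against the whole
# mixture, which is what breaks this kind of cycle.
```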
Blog post: https://deepmind.com/blog/article/AlphaStar-Grandmaster-level-in-StarCraft-II-using-multi-agent-reinforcement-learning
Nature: https://www.nature.com/articles/d41586-019-03298-6
ArXiV: https://arxiv.org/abs/1902.01724
Nontechnical video: https://www.youtube.com/watch?v=6eiErYh_FeY
#Google #GoogleAI #AlphaStar #Starcraft #Deepmind #nature #AlphaZero
LOGAN: Latent Optimisation for Generative Adversarial Networks
A game-theory-motivated algorithm from #DeepMind improves the state of the art in #GAN image generation by over 30%, measured in FID.
ArXiV: https://arxiv.org/abs/1912.00953
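The core trick is latent optimisation: nudge the latent code z in the direction that raises the discriminator's score of G(z) before the usual generator and discriminator updates. A conceptual PyTorch sketch of that single step (the paper's full method uses a natural-gradient variant and tuned step sizes):

```python
import torch

def latent_step(G, D, z, step_size=0.9):
    # One latent-optimisation step: follow the gradient of D(G(z)) w.r.t. z,
    # then use the improved z for the usual GAN losses. step_size is a
    # hyperparameter, not a value from the paper.
    z = z.clone().requires_grad_(True)
    score = D(G(z)).sum()
    grad, = torch.autograd.grad(score, z)
    return (z + step_size * grad).detach()
```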
Dream to Control: Learning Behaviors by Latent Imagination
Abstract: Learned world models summarize an agent's experience to facilitate learning complex behaviors. While learning world models from high-dimensional sensory inputs are becoming feasible through deep learning, there are many potential ways for deriving behaviors from them. We present Dreamer, a reinforcement learning agent that solves long-horizon tasks from images purely by latent imagination. We efficiently learn behaviors by propagating analytic gradients of learned state values back through trajectories imagined in the compact state space of a learned world model. On 20 challenging visual control tasks, Dreamer exceeds existing approaches in data-efficiency, computation time, and final performance.
Dreamer learns long-horizon behaviors from images purely by latent imagination. For this, it backpropagates value estimates through trajectories imagined in the compact latent space of a learned world model. Dreamer solves visual control tasks using substantially fewer episodes than strong model-free agents.
Dreamer learns a world model from past experiences that can predict the future. It then learns action and value models in its compact latent space. The value model optimizes Bellman's consistency of imagined trajectories. The action model maximizes value estimates by propagating their analytic gradients back through imagined trajectories. When interacting with the environment, it simply executes the action model.
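A conceptual sketch of the actor update "by latent imagination", written from the description above; world_model, actor, critic and imagine_step are hypothetical stand-ins for learned torch modules, and a plain n-step return replaces the paper's lambda-returns:

```python
import torch

def actor_loss_by_imagination(world_model, actor, critic, start_state,
                              horizon=15, gamma=0.99):
    # Roll out purely in the learned latent space: no environment interaction.
    s, rewards = start_state, []
    for _ in range(horizon):
        a = actor(s)                           # action model
        s, r = world_model.imagine_step(s, a)  # latent dynamics + reward model
        rewards.append(r)

    # Bootstrapped discounted return from imagined rewards plus the critic's
    # value of the final imagined state.
    ret = critic(s)
    for r in reversed(rewards):
        ret = r + gamma * ret

    # Minimising the negative return ascends the value estimates and propagates
    # analytic gradients back through every imagined step, because the whole
    # rollout is differentiable.
    return -ret.mean()
```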
paper: https://arxiv.org/abs/1912.01603
github: https://github.com/google-research/dreamer
site: https://danijar.com/dreamer
#RL #Dreams #Imagination #DL #GoogleBrain #DeepMind
A Deep Neural Network's Loss Surface Contains Every Low-dimensional Pattern
New work from #DeepMind, built on top of «Loss Landscape Sightseeing with Multi-Point Optimization».
ArXiV: https://arxiv.org/abs/1912.07559
Predecessor’s github: https://github.com/universome/loss-patterns
DeepMind significantly (+100%) improved protein folding modelling
Why this is important: protein folding = protein structure = protein function = how the protein works in a living organism and what it does.
What this means: better vaccines, better drugs, more curable diseases, and more conditions eased by medication or by better understanding.
Dataset: ~170000 available protein structures from PDB
Hardware: 128 TPUv3 cores (roughly equivalent to ~100-200 GPUs)
Link: https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology
#DL #NLU #proteinmodelling #bio #biolearning #insilico #deepmind #AlphaFold
Protein Folding explained
#deepmind didn't stop at delivering the improved #proteinfolding technology. They also released a really cool video explaining why it is such a big deal.
YouTube: https://www.youtube.com/watch?v=KpedmJdrTpY&feature=emb_title
#explained #bio #proteinmodelling
Solving Mixed Integer Programs Using Neural Networks
An article on speeding up Mixed Integer Programs (MIPs) with ML. MIPs are usually NP-hard problems:
- Problems solved with linear programming
- Production planning (pipeline optimization)
- Scheduling / Dispatching
Or any problem where integers represent discrete decisions (including some graph problems); a toy sketch follows below.
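A toy MIP, solved here by brute force purely to show what "integer decisions" look like (real solvers use branch-and-bound, which is what the paper learns to guide with neural networks):

```python
from itertools import product

# Toy MIP: assign 3 jobs to 2 machines to minimise the makespan.
# x[j] in {0, 1} is the integer decision "which machine runs job j".
durations = [4, 3, 5]
best = None
for x in product([0, 1], repeat=len(durations)):
    load0 = sum(d for d, m in zip(durations, x) if m == 0)
    load1 = sum(d for d, m in zip(durations, x) if m == 1)
    makespan = max(load0, load1)          # objective to minimise
    if best is None or makespan < best[0]:
        best = (makespan, x)
print(best)  # (7, (0, 0, 1)): jobs 1 and 2 on machine 0, job 3 on machine 1
```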
ArXiV: https://arxiv.org/abs/2012.13349
Wikipedia on Mixed Integer Programming: https://en.wikipedia.org/wiki/Integer_programming
#NPhard #MILP #DeepMind #productionml #linearprogramming #optimizationproblem