Handbook of Graphical Models
Marloes Maathuis, Mathias Drton, Steen Lauritzen and Martin Wainwright : https://stat.ethz.ch/~maathuis/papers/Handbook.pdf
#Handbook #GraphicalModels
Causal Inference: What If
Miguel A. Hernán, James M. Robins : https://cdn1.sph.harvard.edu/wp-content/uploads/sites/1268/2019/11/ci_hernanrobins_10nov19.pdf
#CausalInference
Breast Histopathology Images Dataset
Download: https://www.kaggle.com/paultimothymooney/breast-histopathology-images
Kaggle
Breast Histopathology Images
198,738 IDC(-) image patches; 78,786 IDC(+) image patches
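For orientation, a hypothetical snippet for tallying those patch counts; the path and layout are assumptions, matching this dataset's usual structure of per-patient folders with `0` (IDC-negative) and `1` (IDC-positive) subfolders of 50x50-pixel PNG patches:

```python
# Hypothetical sketch for verifying the patch counts above; the root path and
# the per-patient 0/1 subfolder layout are assumptions about the unzipped data.
from pathlib import Path

root = Path("breast-histopathology-images")   # assumed unzip location
neg = sum(1 for _ in root.glob("*/0/*.png"))  # IDC(-) patches
pos = sum(1 for _ in root.glob("*/1/*.png"))  # IDC(+) patches
print(f"IDC(-): {neg}  IDC(+): {pos}")        # expect 198,738 / 78,786
```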
A tutorial to implement state-of-the-art NLP models with Fastai for Sentiment Analysis
Maximilien Roberti : https://towardsdatascience.com/fastai-with-transformers-bert-roberta-xlnet-xlm-distilbert-4f41ee18ecb2
#FastAI #NLP #Transformers
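For a quick taste of the task the tutorial tackles, here is a minimal illustrative sketch (not the tutorial's fastai code): the same model families it covers (BERT, RoBERTa, XLNet, XLM, DistilBERT) can be tried on sentiment analysis via the Hugging Face transformers pipeline API.

```python
# Illustrative only: a default pretrained sentiment model via transformers,
# not the fastai + transformers integration the tutorial walks through.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a default checkpoint
print(classifier("Fine-tuning transformers with fastai is surprisingly easy."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```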
Pre-Debate Material :
Recurrent Independent Mechanisms
Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, Bernhard Schölkopf : https://arxiv.org/abs/1909.10893
#MachineLearning #Generalization #ArtificialIntelligence
Pre-Debate Material :
Meta transfer learning for factorizing representations and knowledge for AI - Yoshua Bengio : https://youtu.be/CHnJYBpMjNY
#AIDebate #MontrealAI
A decade ago we weren’t sure neural nets could ever deal with language.
Now the latest AI models crush language benchmarks faster than we can come up with language benchmarks.
Far from having an AI winter, we are having a second AI spring.
The first AI spring was, of course, ImageNet. Created by Dr. Fei-Fei Li and her team in 2009, it was the first large-scale image-classification benchmark built from photographs rather than handwriting or thumbnails.
In 2012, AlexNet, trained on GPUs, took first place. In 2015, ResNet reached human-level performance.
In the years that followed, neural nets made strong progress on speech recognition and machine translation.
Baidu’s Deep Speech 2 recognized spoken Chinese on par with humans. Google’s neural machine translation system cut translation errors by roughly 60% relative to the existing phrase-based system.
In language understanding, neural nets did well on individual tasks such as WikiQA, TREC, and SQuAD, but it wasn’t clear they could master a broad range of tasks the way humans can.
Thus GLUE was created: a suite of nine diverse language tasks that, its creators hoped, would keep researchers busy for years.
It took six years for neural nets to catch up to human performance on ImageNet.
Transformer-based neural nets (BERT, GPT) beat human performance on GLUE in less than one year.
Progress in language understanding was so rapid that the authors of GLUE were forced to create a harder version of the benchmark, “SuperGLUE,” in 2019.
SuperGLUE is HARD, far harder than a naive Turing Test. Just look at these prompts.
Will SuperGLUE stand the test of time? It appears not. Six months in and the Google T5 model is within 1% of human performance.
Neural nets are beating language benchmarks faster than benchmarks can be created.
Yet first-hand experience contradicts this progress: Alexa, Siri, and Google Assistant still lack basic common sense.
Why? Is it a matter of time to deployment or are diverse human questions just much harder?
See:
Google's T5 (Text-To-Text Transfer Transformer) language model set a new record and came within about 1% of human performance on the SuperGLUE benchmark.
https://bit.ly/2XQkKxO
Paper: https://arxiv.org/abs/1910.10683
Code: https://github.com/google…/text-to-text-transfer-transformer
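To make T5's "text-to-text" framing concrete, here is a minimal sketch; it assumes the Hugging Face transformers library and the t5-small checkpoint rather than the google-research repo linked above:

```python
# Minimal sketch of T5's text-to-text interface via Hugging Face transformers
# (an assumption of this note; not code from the linked google-research repo).
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task, including GLUE/SuperGLUE-style classification, is cast as
# "text in, text out"; a task prefix tells the model what to do.
inputs = tokenizer("translate English to German: The house is wonderful.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```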
Pre-Debate Material :
WSAI Americas 2019 - Yoshua Bengio - Moving beyond supervised deep learning : https://youtu.be/0GsZ_LN9B24
#AIDebate #MontrealAI
Meta-transfer learning for factorizing representations, causal graphs and knowledge for AI
Discover causal representations
Move beyond i.i.d. data, toward independent mechanisms and single-variable interventions
Causal structure and knowledge factorization: the correct causal factorization -> faster adaptation & better transfer (a toy sketch of this adaptation-speed signal follows the links below)
Hindrances are not problems, they are features.
Meta-optimizer: online adaptation errors drive changes in the structural parameters (i.e. the network architecture)
Complex models and small data can generalize well under the right causal structure!
The consciousness prior
The future: the brain has different learning rates in different areas => fast/slow weights, long/short-term parameters, and causal learning from passive observation rather than direct intervention (like a child learning)
Paper and talk:
https://arxiv.org/pdf/1901.10912.pdf
https://slideslive.com/38915855
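To make the adaptation-speed idea concrete, here is a toy PyTorch sketch (my illustration, not the paper's code): two tabular factorizations of p(A, B) are pretrained, each "intervention" resamples p(A), and a structural parameter gamma is trained from whichever factorization accumulates higher online likelihood while adapting.

```python
import torch
import torch.nn.functional as F

# Toy sketch of the meta-transfer objective (arXiv:1901.10912): the
# factorization matching the true causal direction A -> B adapts faster
# after interventions on A, and that signal trains gamma = belief in A -> B.
torch.manual_seed(0)
N = 10                                                    # categorical A, B

true_pA = torch.distributions.Dirichlet(torch.ones(N)).sample()
true_pBgA = torch.distributions.Dirichlet(torch.ones(N)).sample((N,))

def sample(pA, n):
    a = torch.multinomial(pA, n, replacement=True)
    b = torch.multinomial(true_pBgA[a], 1).squeeze(1)     # ground truth: A -> B
    return a, b

def make_model():                                         # tabular logits
    return {"marg": torch.zeros(N, requires_grad=True),
            "cond": torch.zeros(N, N, requires_grad=True)}

def log_lik(m, x, y):                                     # log p(x) + log p(y|x)
    return (F.log_softmax(m["marg"], 0)[x]
            + F.log_softmax(m["cond"], 1)[x, y]).mean()

def fit(m, x, y, steps, lr=1.0):
    opt = torch.optim.SGD(list(m.values()), lr=lr)
    for _ in range(steps):
        opt.zero_grad(); (-log_lik(m, x, y)).backward(); opt.step()

# Pretrain both factorizations on the training distribution.
model_ab, model_ba = make_model(), make_model()
a, b = sample(true_pA, 2000)
fit(model_ab, a, b, 200)                                  # models p(A)p(B|A)
fit(model_ba, b, a, 200)                                  # models p(B)p(A|B)

gamma = torch.zeros((), requires_grad=True)               # structural parameter
meta_opt = torch.optim.SGD([gamma], lr=1.0)

for episode in range(50):
    pA_new = torch.distributions.Dirichlet(torch.ones(N)).sample()
    a, b = sample(pA_new, 50)                             # intervention on A
    scores = []
    for m, (x, y) in ((model_ab, (a, b)), (model_ba, (b, a))):
        adapt = {k: v.detach().clone().requires_grad_(True)
                 for k, v in m.items()}
        opt = torch.optim.SGD(list(adapt.values()), lr=1.0)
        total = 0.0
        for _ in range(5):                                # few adaptation steps
            ll = log_lik(adapt, x, y)
            total += ll.item()                            # online log-likelihood
            opt.zero_grad(); (-ll).backward(); opt.step()
        scores.append(total)
    # Mixture meta-objective: credit each hypothesis by how well it adapted.
    log_mix = torch.stack([F.logsigmoid(gamma) + scores[0],
                           F.logsigmoid(-gamma) + scores[1]])
    (-torch.logsumexp(log_mix, 0)).backward()
    meta_opt.step(); meta_opt.zero_grad()

print("belief that A -> B:", torch.sigmoid(gamma).item())  # drifts toward 1
```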
Best Releases and Papers from OpenAI in 2019 So Far
https://opendatascience.com/best-releases-and-papers-from-openai-in-2019-so-far/
Top 100 Neuroscience Blogs And Websites For Neuroscientists in 2019
https://blog.feedspot.com/neuroscience_blogs/
@ArtificialIntelligenceArticles
The first video GAN with sparse input, recently released by Facebook
Paper: https://research.fb.com/publications/deepfovea-neural-reconstruction-for-foveated-rendering-and-video-compression-using-learned-statistics-of-natural-videos/
Github: https://github.com/facebookresearch/DeepFovea
DeepFovea can reduce the amount of compute needed for rendering by as much as 10-14x while keeping any image differences imperceptible to the human eye.
An Epidemic of AI Misinformation
Gary Marcus : https://thegradient.pub/an-epidemic-of-ai-misinformation/
#ArtificialIntelligence #DeepLearning #MachineLearning
[ Deep Learning ]
CS 230: Deep Learning
[http://web.stanford.edu/class/cs230/](http://web.stanford.edu/class/cs230/)
[ Natural Language Processing ]
CS 124: From Languages to Information (LINGUIST 180, LINGUIST 280)
[http://web.stanford.edu/class/cs124/](http://web.stanford.edu/class/cs124/)
CS 224N: Natural Language Processing with Deep Learning (LINGUIST 284)
[http://web.stanford.edu/class/cs224n/](http://web.stanford.edu/class/cs224n/)
CS 224U: Natural Language Understanding (LINGUIST 188, LINGUIST 288)
[http://web.stanford.edu/class/cs224u/](http://web.stanford.edu/class/cs224u/)
CS 276: Information Retrieval and Web Search (LINGUIST 286)
[http://web.stanford.edu/class/cs276/](http://web.stanford.edu/class/cs276/)
[ Computer Vision ]
CS 131: Computer Vision: Foundations and Applications
[http://cs131.stanford.edu/](http://cs131.stanford.edu/)
CS 205L: Continuous Mathematical Methods with an Emphasis on Machine Learning
[http://web.stanford.edu/class/cs205l/](http://web.stanford.edu/class/cs205l/)
CS 231N: Convolutional Neural Networks for Visual Recognition
[http://cs231n.stanford.edu/](http://cs231n.stanford.edu/)
CS 348K: Visual Computing Systems
[http://graphics.stanford.edu/courses/cs348v-18-winter/](http://graphics.stanford.edu/courses/cs348v-18-winter/)
[ Others ]
CS 224W: Machine Learning with Graphs
[http://web.stanford.edu/class/cs224w/](http://web.stanford.edu/class/cs224w/)
CS 273B: Deep Learning in Genomics and Biomedicine (BIODS 237, BIOMEDIN 273B, GENE 236)
[https://canvas.stanford.edu/courses/51037](https://canvas.stanford.edu/courses/51037)
CS 236: Deep Generative Models
[https://deepgenerativemodels.github.io/](https://deepgenerativemodels.github.io/)
CS 228: Probabilistic Graphical Models: Principles and Techniques
[https://cs228.stanford.edu/](https://cs228.stanford.edu/)
CS 337: AI-Assisted Care (MED 277)
[http://cs337.stanford.edu/](http://cs337.stanford.edu/)
CS 229: Machine Learning (STATS 229)
[http://cs229.stanford.edu/](http://cs229.stanford.edu/)
CS 229A: Applied Machine Learning
[https://cs229a.stanford.edu](https://cs229a.stanford.edu/)
CS 234: Reinforcement Learning
[http://cs234.stanford.edu/](http://cs234.stanford.edu/)
CS 221: Artificial Intelligence: Principles and Techniques
[https://stanford-cs221.github.io/autumn2019/](https://stanford-cs221.github.io/autumn2019/)
Hamiltonian Graph Networks with ODE Integrators
Sanchez-Gonzalez et al.: https://arxiv.org/abs/1909.12790
#ArtificialIntelligence #Hamiltonian #GraphNetworks
Bayesian Deep Learning Benchmarks
Oxford Applied and Theoretical Machine Learning Group : https://github.com/OATML/bdl-benchmarks
#Bayesian #Benchmark #DeepLearning
We’ve completed the first fastMRI image reconstruction challenge to spur development of new AI techniques to make scans 10x faster. Congratulations to the top entrants, who’ve been invited to present at the Medical Imaging Meets NeurIPS workshop!
https://ai.facebook.com/blog/results-of-the-first-fastmri-image-reconstruction-challenge
We just released our #NeurIPS2019 Multimodal Model-Agnostic Meta-Learning (MMAML) code for few-shot image classification, which extends MAML to multimodal task distributions (e.g. learning from multiple datasets). The code contains #PyTorch implementations of our model and two baselines (MAML and Multi-MAML), as well as scripts to evaluate these models on five popular few-shot learning datasets: Omniglot, Mini-ImageNet, FC100 (CIFAR100), CUB-200-2011, and FGVC-Aircraft.
Code: https://github.com/shaohua0116/MMAML-Classification
Paper: https://arxiv.org/abs/1910.13616
#NeurIPS #MachineLearning #ML #code
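For readers new to MAML itself, here is a minimal PyTorch sketch of the inner/outer loop on a toy regression family; this is my illustration of the general technique, not the MMAML repo's code, whose models and datasets differ.

```python
import torch
import torch.nn as nn

# Toy MAML: meta-train a linear model so one gradient step on a task's
# support set yields low loss on its query set (second-order variant).
torch.manual_seed(0)
model = nn.Linear(1, 1)
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

def sample_task():
    slope = torch.rand(1) * 4 - 2                   # task-specific parameter
    x = torch.randn(20, 1)
    return (x[:10], slope * x[:10]), (x[10:], slope * x[10:])  # support, query

for step in range(2000):
    meta_opt.zero_grad()
    for _ in range(4):                              # meta-batch of tasks
        (xs, ys), (xq, yq) = sample_task()
        # Inner loop: one differentiable SGD step on the support set.
        grads = torch.autograd.grad(mse(model(xs), ys),
                                    model.parameters(), create_graph=True)
        w, b = [p - 0.01 * g for p, g in zip(model.parameters(), grads)]
        # Outer loop: query loss of the adapted ("fast") weights
        # accumulates meta-gradients into the original parameters.
        mse(xq @ w.t() + b, yq).backward()
    meta_opt.step()
```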
Mathematics for Machine Learning
Free download of the book published by Cambridge University Press
https://mml-book.github.io/
#artificialintelligence #AI #Mathematics #calculus #linearalgebra #deeplearning #machinelearning