#iclr2019
Some early keyword analysis and topic modeling:
https://colab.research.google.com/drive/1jG5ilLQUvxvZ-HB60ovc4Ve_UK5ICoFT
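The linked Colab is not reproduced here, but a minimal sketch of the idea is below: count keywords over submission abstracts and fit a small topic model. The `abstracts` list and the parameter choices are illustrative assumptions, not taken from the notebook.

```python
# Keyword analysis + topic modeling sketch, assuming `abstracts` is a list of
# submission abstracts scraped from OpenReview (toy examples shown here).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "We propose a new recurrent architecture for language modeling ...",
    "Generative adversarial networks are trained with a novel objective ...",
]

# Keyword analysis: unigram/bigram counts with English stopwords removed.
vectorizer = CountVectorizer(stop_words="english", ngram_range=(1, 2), max_features=5000)
counts = vectorizer.fit_transform(abstracts)

# Topic modeling: fit LDA and print the top words per topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)
vocab = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [vocab[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```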
Understanding & Generalizing AlphaGo Zero #ICLR2019
Anonymous: https://openreview.net/forum?id=rkxtl3C5YX
Search ICLR 2019
Having trouble finding the papers that use technique X, dataset D, or cite author ME in the #ICLR2019 submissions?
Search ICLR 2019: http://search.iclr2019.smerity.com/
"Deep Generative Models for Graphs: Methods & Applications"
Slides by Jure Leskovec:
http://i.stanford.edu/~jure/pub/talks2/graph_gen-iclr-may19-long.pdf
#artificialintelligence #deeplearning #machinelearning #ICLR2019
#RP Mariya Toneva, Excited to present our work on understanding catastrophic example forgetting at ICLR on Wednesday from 11 am to 1 pm! Poster #44. Joint work with Alessandro Sordoni, Remi Tachet, Adam Trischler, Yoshua Bengio, and Geoff Gordon
Paper: http://bit.ly/2H8yQUg
Code: http://bit.ly/2vMH6mw
#ICLR #ICLR2019 #MachineLearning
An Empirical Study of Example Forgetting During Deep Neural Network Learning
Joint work with Alessandro Sordoni, Remi Tachet, Adam Trischler, Yoshua Bengio, and Geoff Gordon
Paper: http://bit.ly/2H8yQUg
Code: http://bit.ly/2vMH6mw
#ICLR #ICLR2019 #MachineLearning
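The forgetting statistic in the paper is simple to state: an example is "forgotten" when it flips from being classified correctly to incorrectly later in training, and examples are ranked by how often this happens. A minimal sketch of counting such events is below, assuming a precomputed per-epoch correctness history; the names `acc_history` and `count_forgetting_events` are illustrative, not from the released code.

```python
# Count forgetting events per example, assuming `acc_history` maps
# example index -> list of 0/1 correctness flags recorded once per epoch.
# (The released code in mtoneva/example_forgetting tracks this per presentation.)
from collections import defaultdict

def count_forgetting_events(acc_history):
    """Count transitions from correctly classified (1) to misclassified (0)."""
    events = defaultdict(int)
    for idx, history in acc_history.items():
        for prev, curr in zip(history, history[1:]):
            if prev == 1 and curr == 0:
                events[idx] += 1
    return events

# Example: example 0 is never forgotten, example 1 is forgotten twice,
# example 2 is never learned (treated as a separate category in the paper).
acc_history = {0: [0, 1, 1, 1], 1: [1, 0, 1, 0], 2: [0, 0, 0, 0]}
print(dict(count_forgetting_events(acc_history)))  # {1: 2}
```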
"Top 8 trends from ICLR 2019"
By Chip Huyen: https://huyenchip.com/2019/05/12/top-8-trends-from-iclr-2019.html
#deeplearning #iclr2019 #technology
ICLR 2019 posters
By Jonathan Binas and Avital Oliver: https://postersession.ai
#deeplearning #ICLR2019 #technology
#ICLR2019 Best Paper
"The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"
Jonathan Frankle, Michael Carbin: https://arxiv.org/abs/1803.03635
#DeepLearning #MachineLearning #NeuralNetworks
"The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"
Jonathan Frankle, Michael Carbin: https://arxiv.org/abs/1803.03635
#DeepLearning #MachineLearning #NeuralNetworks
From the abstract: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without...
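Since the post is just a link, here is a hedged sketch of the procedure the hypothesis is built on: train, prune the smallest-magnitude weights, rewind the survivors to their original initialization, and repeat. The helper names (`find_winning_ticket`, `train_fn`) are illustrative, not from the authors' code, and the layer-wise pruning shown is one of several variants discussed in the paper.

```python
# Iterative magnitude pruning with rewinding, sketched against a generic
# PyTorch model; `train_fn(model, masks)` is assumed to train while applying
# the masks and is not shown here.
import copy
import torch

def find_winning_ticket(model, train_fn, prune_fraction=0.2, rounds=5):
    init_state = copy.deepcopy(model.state_dict())                 # theta_0
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    for _ in range(rounds):
        train_fn(model, masks)                                     # train with masks applied
        with torch.no_grad():
            for name, param in model.named_parameters():
                alive = param[masks[name].bool()].abs()
                if alive.numel() == 0:
                    continue
                # Prune the smallest-magnitude weights that are still alive.
                k = max(1, int(prune_fraction * alive.numel()))
                threshold = alive.kthvalue(k).values
                masks[name] = masks[name] * (param.abs() > threshold).float()
        # Rewind surviving weights to their values at initialization.
        model.load_state_dict(init_state)
    return masks
```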
Deep Convolutional Networks as Shallow Gaussian Processes #iclr2019
By @AdriGarriga
The kernel equivalent of a 32-layer ResNet obtains 0.84% classification error on MNIST, state of the art for GPs with a comparable number of parameters.
GitHub: https://github.com/convnets-as-gps/convnets-as-gps
arXiv: https://arxiv.org/abs/1808.05587
Best research paper award at our Debugging ML workshop -- "Similarity of Neural Network Representations Revisited" by Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton
https://arxiv.org/abs/1905.00414
#ICLR2019 https://t.me/ArtificialIntelligenceArticles
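The paper's linear CKA measure is compact enough to sketch: it compares two representation matrices of the same examples via centered cross-correlations. The sketch below assumes two feature matrices `X` and `Y`; the paper's RBF-kernel variant and debiasing details are not shown.

```python
# Linear CKA between two representations X (n, p1) and Y (n, p2)
# of the same n examples.
import numpy as np

def linear_cka(X, Y):
    # Center each feature dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return numerator / denominator

# Example: identical representations give CKA = 1.
X = np.random.randn(100, 64)
print(round(linear_cka(X, X), 3))  # 1.0
```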
Top 8 trends from ICLR 2019
Overview of trends at #ICLR2019:
1. Inclusivity
2. Unsupervised representation learning & transfer learning
3. Retro ML
4. RNN is losing its luster with researchers
5. GANs are still going strong
6. The lack of biologically inspired deep learning
7. Reinforcement learning is still the most popular topic by submissions
8. Most accepted papers will be quickly forgotten
Link: https://huyenchip.com/2019/05/12/top-8-trends-from-iclr-2019.html