CE7454 : Deep Learning for Data Science
Lecture 13: Attention Neural Networks
Xavier Bresson : https://dropbox.com/s/kbrsvhwe2lac1uo/lecture13_attention_neural_networks.pdf?dl=0
Demo :
https://github.com/xbresson/CE7454_2019/blob/master/codes/labs_lecture13/seq2seq_transformers_demo.ipynb
#DeepLearning #DataScience #Transformer
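The lecture's topic can be illustrated with a minimal scaled dot-product attention function in NumPy. This is a generic sketch for orientation only, not code from the linked notebook; the shapes and names are illustrative.

```python
# Minimal scaled dot-product attention in NumPy -- a generic sketch,
# not code from the linked notebook.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Q: (n_q, d), K: (n_k, d), V: (n_k, d_v) -> (n_q, d_v)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each row is a distribution over keys
    return weights @ V                  # weighted average of the values

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(2, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 5))
out = attention(Q, K, V)
print(out.shape)  # (2, 5)
```

Transformers stack many such attention layers (multi-headed, with learned projections for Q, K, V), but the weighted-average core is exactly this.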
My position is very similar to Yoshua's.
Making sequential reasoning compatible with gradient-based learning is one of the challenges of the next decade.
But gradient-based learning applied to networks of parameterized modules (aka "deep learning") is part of the solution.
Gary Marcus likes to cite me when I talk about my current research program, which studies the weaknesses of current deep learning systems in order to devise systems stronger in higher-level cognition and with greater combinatorial (and systematic) generalization, including the handling of causality and reasoning. He disagrees with the view that Yann LeCun, Geoff Hinton and I have expressed that neural nets can indeed be a "universal solvent" for incorporating further cognitive abilities in computers. He prefers to think of deep learning as limited to perception, needing to be combined in a hybrid with symbolic processing. I disagree in a subtle way with this view.
I agree that the goals of GOFAI (like the ability to perform the sequential reasoning characteristic of system 2 cognition) are important, but I believe they can be achieved while staying in a deep learning framework, albeit one that makes heavy use of attention mechanisms (hence my 'consciousness prior' research program) and the injection of new architectural ideas (e.g. modularity) and training frameworks (e.g. meta-learning and an agent-based view).
What I bet is that a simple hybrid, in which the outputs of the deep net are discretized and then passed to a GOFAI symbolic processing system, will not work. Why? Many reasons:
1. You need learning in the system 2 component as well as in the system 1 part.
2. You need to represent uncertainty there as well.
3. Brute-force search (the main inference tool of symbol-processing systems) does not scale; instead, humans use unconscious (system 1) processing to guide the search involved in reasoning, so system 1 and system 2 are very tightly integrated.
4. Your brain is a neural net all the way.
https://t.me/ArtificialIntelligenceArticles
Accelerating TSNE with GPUs: From hours to seconds
Blog by Daniel Han-Chen : https://medium.com/rapids-ai/tsne-with-gpus-hours-to-seconds-9d9c17c941db
#MachineLearning #DataVisualization #DataScience
RAPIDS TSNE runs 2000x faster on GPUs — That’s 3 hours down to 5 seconds!
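For intuition on why GPUs help so much here: the heart of t-SNE is a dense O(n²) pairwise-affinity computation over all input points. The sketch below is plain NumPy, not RAPIDS code, and it fixes a single bandwidth sigma instead of tuning one per point via perplexity as real implementations do.

```python
# The O(n^2) pairwise-affinity step that dominates t-SNE's CPU runtime.
# Plain-NumPy sketch with a fixed sigma; real t-SNE tunes sigma per point.
import numpy as np

def pairwise_sq_dists(X):
    """Squared Euclidean distances between all rows of X, vectorized."""
    sq = (X ** 2).sum(axis=1)
    return sq[:, None] + sq[None, :] - 2 * X @ X.T

def gaussian_affinities(X, sigma=1.0):
    """Normalized high-dimensional affinities P (diagonal zeroed, sums to 1)."""
    D = pairwise_sq_dists(X)
    P = np.exp(-D / (2 * sigma ** 2))
    np.fill_diagonal(P, 0.0)
    return P / P.sum()

X = np.random.default_rng(0).normal(size=(100, 8))
P = gaussian_affinities(X)
print(P.shape)  # (100, 100)
```

Every iteration of the embedding also recomputes pairwise terms in the low-dimensional space, which is why moving these dense computations onto GPU kernels yields such large speedups.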
This is an exhaustive list of Monte Carlo tree search papers from major conferences, including NIPS, ICML, and AAAI, some of them with publicly available implementations.
https://github.com/benedekrozemberczki/awesome-monte-carlo-tree-search-papers
#datascience #machinelearning #deeplearning #python #ai #analytics #datamining
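For readers new to the topic, the core UCT (Upper Confidence bounds applied to Trees) loop behind most of these papers can be sketched in a few lines. The toy game, constants, and helper names below are illustrative choices, not taken from any paper in the list.

```python
# Minimal UCT on a toy one-player game: from state s, move +1 or +2;
# landing exactly on 5 wins, overshooting loses. Illustrative only.
import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = {}, 0, 0.0

def actions(s):
    return [] if s >= 5 else [1, 2]

def reward(s):
    return 1.0 if s == 5 else 0.0

def uct_select(node, c=1.4):
    return max(node.children.values(),
               key=lambda n: n.value / n.visits
                             + c * math.sqrt(math.log(node.visits) / n.visits))

def mcts(root_state, n_iter=500):
    root = Node(root_state)
    for _ in range(n_iter):
        node = root
        # 1. selection: descend while the node is fully expanded
        while actions(node.state) and len(node.children) == len(actions(node.state)):
            node = uct_select(node)
        # 2. expansion: add one untried child
        untried = [a for a in actions(node.state) if a not in node.children]
        if untried:
            a = random.choice(untried)
            node.children[a] = Node(node.state + a, node)
            node = node.children[a]
        # 3. simulation: random rollout to a terminal state
        s = node.state
        while actions(s):
            s += random.choice(actions(s))
        r = reward(s)
        # 4. backpropagation
        while node:
            node.visits += 1
            node.value += r
            node = node.parent
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

random.seed(0)
print(mcts(3))  # -> 2 (jumping straight to 5 always wins)
```

The four phases (selection, expansion, simulation, backpropagation) are the shared skeleton; the papers in the list vary the selection rule, the rollout policy, or replace rollouts with learned value functions.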
How To Build Your Own MuZero AI Using Python (Part 1/3)
Blog by David Foster : https://medium.com/applied-data-science/how-to-build-your-own-muzero-in-python-f77d5718061a
#MachineLearning #DeepLearning #DataScience #ArtificialIntelligence #AI
MuZero: The Walkthrough (Part 1/3)
Teaching A Machine To Play Games Using Self-Play And Deep Learning…Without Telling It The Rules 🤯
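MuZero's central trick is planning in a learned latent space via three functions: a representation function h, a dynamics function g, and a prediction function f. The sketch below shows only that structure, with random linear "networks" standing in for trained ones; it is not a working agent and is not code from the blog series.

```python
# MuZero's three-function decomposition in miniature: plan entirely in a
# learned latent space, never querying the real environment's rules.
# Random linear "networks" stand in for the real trained ones.
import numpy as np

rng = np.random.default_rng(0)
OBS, LATENT, ACTIONS = 8, 4, 3

Wh = rng.normal(size=(LATENT, OBS))               # representation h
Wg = rng.normal(size=(LATENT, LATENT + ACTIONS))  # dynamics g
Wf = rng.normal(size=(ACTIONS + 1, LATENT))       # prediction f

def represent(obs):
    """h: real observation -> latent state."""
    return np.tanh(Wh @ obs)

def dynamics(state, action):
    """g: (latent state, action) -> next latent state."""
    a = np.eye(ACTIONS)[action]                   # one-hot action
    return np.tanh(Wg @ np.concatenate([state, a]))

def predict(state):
    """f: latent state -> (policy logits, value estimate)."""
    out = Wf @ state
    return out[:ACTIONS], out[ACTIONS]

# Unroll an imagined trajectory without touching the environment:
s = represent(rng.normal(size=OBS))
for action in [0, 2, 1]:
    logits, value = predict(s)
    s = dynamics(s, action)
print(s.shape)  # (4,)
```

In the full algorithm these three functions are trained jointly so that the imagined rollouts' predicted policies, values, and rewards match real outcomes, and MCTS runs over the latent states.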
Postdoctoral Fellow in Bioinformatics, Deep Learning
https://bioinformatics.ca/job-postings/a24301d0-1c3b-11ea-947d-63bc5c89c0f8/#/?&order=desc
This paper analyzes 1,000+ deep learning projects on GitHub and their related StackOverflow issues, and interviews 20 researchers and practitioners.
https://arxiv.org/abs/1910.11015
What is adversarial machine learning, and how is it used today?
-Generative modeling, security, model-based optimization, neuroscience, fairness, and more!
Here's a fantastic video overview by Ian Goodfellow.
http://videos.re-work.co/videos/1351-ian-goodfellow
#ML #adversarialML #AI #datascience
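One concrete example of the attacks discussed: the Fast Gradient Sign Method (FGSM) perturbs an input in the direction that most increases the model's loss. The sketch below applies it to a hand-rolled logistic model; the weights and epsilon are arbitrary, and this is an illustration rather than code from the talk.

```python
# FGSM on a hand-rolled logistic classifier -- a minimal sketch of one
# adversarial attack, not code from the talk.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.1        # a fixed "trained" linear classifier
x, y = rng.normal(size=5), 1.0        # an input and its true label

def loss(x):
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# For logistic loss the input gradient has a closed form: (p - y) * w
grad_x = (sigmoid(w @ x + b) - y) * w
eps = 0.25
x_adv = x + eps * np.sign(grad_x)     # step that maximally increases the loss

print(loss(x), loss(x_adv))           # the adversarial loss is larger
```

For deep networks the input gradient comes from backpropagation instead of a closed form, but the attack is the same one-line sign step.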
XGBoost: An Intuitive Explanation
Blog by Ashutosh Nayak : https://towardsdatascience.com/xgboost-an-intuitive-explanation-88eb32a48eff
#MachineLearning #DataScience #RandomForest #Xgboost #DecisionTree
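As a complement to the post: the gradient-boosting idea underneath XGBoost is to repeatedly fit a weak learner to the residuals (the negative gradient of the loss) of the current ensemble. The depth-1 "stump" learner and squared-error loss below are simplifications of my own; XGBoost itself adds second-order gradients, regularization, and clever split-finding on top.

```python
# Gradient boosting in miniature: each round fits a depth-1 "stump" to the
# residuals (negative gradient of squared error) of the current ensemble.
# A conceptual sketch of the idea behind XGBoost, not its algorithm.
import numpy as np

def fit_stump(x, r):
    """Best single-threshold predictor of residuals r from 1-D feature x."""
    best = None
    for t in np.unique(x):
        left, right = r[x <= t], r[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        lm, rm = left.mean(), right.mean()
        sse = ((left - lm) ** 2).sum() + ((right - rm) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda z: np.where(z <= t, lm, rm)

def boost(x, y, rounds=50, lr=0.3):
    pred = np.full_like(y, y.mean())       # start from the constant predictor
    for _ in range(rounds):
        residual = y - pred                # -gradient of 1/2 * (y - pred)^2
        stump = fit_stump(x, residual)
        pred = pred + lr * stump(x)        # shrunken additive update
    return pred

x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x)
pred = boost(x, y)
print(np.mean((y - pred) ** 2))  # small training error
```

The learning rate `lr` is the shrinkage parameter (XGBoost's `eta`): smaller steps need more rounds but generalize better.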