"Linear Algebra"
Instructor: Prof. Gilbert Strang
MIT OpenCourseWare: https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/
#LinearAlgebra #MatrixTheory
Yann LeCun:
About to give a keynote at AAAI after Geoff Hinton and just before Yoshua Bengio.
My slides here: https://drive.google.com/open?id=1r-mDL4IX_hzZLDBKp8_e8VZqD7fOzBkF
EmotioNet Challenge
http://cbcsl.ece.ohio-state.edu/EmotionNetChallenge/index.html
The t-SNE algorithm run on the MNIST dataset in Kaggle kernels, using NVIDIA's RAPIDS library with GPU acceleration. It achieves a 2000x speedup compared to the sklearn version on CPU!
https://www.kaggle.com/tunguz/mnist-2d-t-sne-with-rapids
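As a rough illustration of what the kernel does: RAPIDS cuML exposes a TSNE class that mirrors sklearn's interface but runs on the GPU. A minimal sketch, assuming a RAPIDS installation and a CUDA-capable GPU (the data loading step is illustrative):

```python
# GPU-accelerated t-SNE on MNIST with RAPIDS cuML (illustrative sketch).
import numpy as np
from sklearn.datasets import fetch_openml
from cuml.manifold import TSNE  # GPU analogue of sklearn.manifold.TSNE

# Load MNIST as a flat (70000, 784) float array.
mnist = fetch_openml("mnist_784", version=1)
X = np.asarray(mnist.data, dtype=np.float32)

# Same basic constructor arguments as the sklearn version, but GPU-backed.
tsne = TSNE(n_components=2, perplexity=30.0)
X_2d = tsne.fit_transform(X)  # (70000, 2) embedding
print(X_2d.shape)
```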
Fixed smooth convolutional layer for avoiding checkerboard artifacts in CNNs. http://arxiv.org/abs/2002.02117
Unbalanced GANs: Pre-training the Generator of Generative Adversarial Network using Variational Autoencoder. http://arxiv.org/abs/2002.02112
SADA: Semantic Adversarial Diagnostic Attacks for Autonomous Applications
Hamdi et al.: http://arxiv.org/abs/1812.02132
Code http://github.com/ajhamdi/SADA
Video http://youtu.be/clguL24kVG0
#Cryptography #MachineLearning #Robotics
Nobel Prize winner Daniel Kahneman cited Gary Marcus (see the recent #AIdebate) and the need for hybrid models in science/AI (including reasoning and logic, in addition to learning) when referring to his Systems 1 and 2 (see Thinking, Fast and Slow). Neural-symbolic computing is a foundation for this line of research. #AAAI2020 Congratulations to Francesca Rossi for the panel with Kahneman and the Turing Award winners. https://link.springer.com/book/10.1007/978-3-540-73246-4
Book: Neural-Symbolic Cognitive Reasoning (Springer)
PyTorch Wrapper version 1.1 is out! It provides a systematic and extensible way to build, train, evaluate, and tune deep learning models using PyTorch.
New Features:
- Samplers for smart batching based on text length, for faster training (see the sketch after this list).
- Loss and evaluation wrappers for token-prediction tasks.
- New nn.Module classes for attention-based models.
- Support for multi-GPU training / evaluation / prediction.
- Verbose argument in the system's methods.
- Examples using Transformer-based models like BERT for text classification.
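The smart-batching idea behind the first bullet can be sketched in plain PyTorch: sort examples by length so each batch pads to a similar size. This is a generic illustration of the technique, not PyTorch Wrapper's actual API; the class name is made up:

```python
# Generic length-based smart batching in plain PyTorch (illustration only;
# not PyTorch Wrapper's API). Sorting by length minimizes padding per batch.
import random
from torch.utils.data import Sampler

class LengthGroupedSampler(Sampler):  # hypothetical name
    def __init__(self, lengths, batch_size):
        self.lengths = lengths        # number of tokens per example
        self.batch_size = batch_size

    def __iter__(self):
        # Sort indices by length, cut into batches, then shuffle the batch
        # order so training still sees batches in random order each epoch.
        order = sorted(range(len(self.lengths)), key=lambda i: self.lengths[i])
        batches = [order[i:i + self.batch_size]
                   for i in range(0, len(order), self.batch_size)]
        random.shuffle(batches)
        for batch in batches:
            yield from batch

    def __len__(self):
        return len(self.lengths)

# Usage: DataLoader(dataset, batch_size=32,
#                   sampler=LengthGroupedSampler(lengths, 32))
```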
Check it out in the following links:
install with: pip install pytorch-wrapper
GitHub: https://github.com/jkoutsikakis/pytorch-wrapper
docs: https://pytorch-wrapper.readthedocs.io/en/latest/
examples: https://github.com/jkouts…/pytorch-wrapper/…/master/examples
#DeepLearning #PyTorch #NeuralNetworks #MachineLearning #DataScience #python #TensorFlow
Deep Learning: State of the Art (2020). Slides + video
Slides:
https://drive.google.com/file/d/10rZ4ldHUBoDu8bVoqGn-CEqGgX-w-oKm/view?usp=drivesdk
Video:
https://youtu.be/0VH1Lim8gL8
Image Fine-grained Inpainting (winner of the ECCVW AIM 2020 Extreme Inpainting challenge, Tracks 1 & 2)
Hui et al.: https://arxiv.org/abs/2002.02609
GitHub: https://github.com/Zheng222/DMFN
#ArtificialIntelligence #DeepLearning #MachineLearning
Connections: Log Likelihood, Cross-Entropy, KL Divergence, Logistic Regression, and Neural Networks
https://towardsdatascience.com/connections-log-likelihood-cross-entropy-kl-divergence-logistic-regression-and-neural-networks-40043dfb6200
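One of the article's connections in miniature: for a one-hot target, cross-entropy equals the negative log-likelihood of the true class, and equals the KL divergence plus the (here zero) entropy of the target. A quick numerical check (illustrative, not taken from the article):

```python
# Check: cross-entropy == NLL of the true class for a one-hot target,
# and cross-entropy == KL(p||q) + H(p).
import numpy as np

p = np.array([0.0, 1.0, 0.0])   # one-hot target distribution
q = np.array([0.2, 0.7, 0.1])   # model's predicted distribution
eps = 1e-12                     # avoid log(0)

cross_entropy = -np.sum(p * np.log(q + eps))
nll = -np.log(q[1])             # negative log-likelihood of the true class
kl = np.sum(p * np.log((p + eps) / (q + eps)))
entropy_p = -np.sum(p * np.log(p + eps))  # 0 for a one-hot target

print(cross_entropy, nll, kl + entropy_p)  # all ~= 0.357
```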
An Introduction to Reinforcement Learning - Lex Fridman, MIT
Blog by Luke Kenworthy, RE•WORK: https://blog.re-work.co/an-introduction-to-reinforcement-learning-lex-fridman-mit/
#ReinforcementLearning #reworkAI #reworkDL
Continuous Geodesic Convolutions for Learning on 3D Shapes. http://arxiv.org/abs/2002.02506
The Future of Deep Learning Is Unsupervised, AI Pioneers Say
https://www.wsj.com/articles/the-future-of-deep-learning-is-unsupervised-ai-pioneers-say-11581330600
How do you train a model with 10^11 parameters without running out of GPU memory?
Use DeepSpeed from Microsoft Research!
It's PyTorch compatible.
It partitions the model's training state across multiple GPUs automatically and efficiently.
https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/
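A minimal sketch of the DeepSpeed training pattern, following the structure its docs describe; the model, the data, and the JSON config passed via --deepspeed_config are placeholders:

```python
# Sketch of a DeepSpeed training loop (pip install deepspeed).
# Model and data are placeholders; ZeRO settings live in a JSON config
# supplied on the command line via --deepspeed_config.
import argparse
import torch
import torch.nn.functional as F
import deepspeed

parser = argparse.ArgumentParser()
parser = deepspeed.add_config_arguments(parser)  # adds --deepspeed_config etc.
args = parser.parse_args()

model = torch.nn.Linear(1024, 1024)  # stand-in for a very large model

# DeepSpeed wraps the model and optimizer; ZeRO partitions the training
# state across GPUs according to the config.
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args, model=model, model_parameters=model.parameters())

for x, y in [(torch.randn(8, 1024), torch.randn(8, 1024))]:  # placeholder data
    x, y = x.to(model_engine.device), y.to(model_engine.device)
    loss = F.mse_loss(model_engine(x), y)
    model_engine.backward(loss)  # handles loss scaling / gradient reduction
    model_engine.step()          # optimizer step (+ zero_grad)
```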