@Machine_learn
The HSIC Bottleneck: Deep Learning without Back-Propagation🥺
An alternative to conventional backpropagation that has a number of distinct advantages.
Link: https://arxiv.org/abs/1908.01580
#backpropagation #DL
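For intuition, the training signal comes from the empirical HSIC estimator: roughly, each hidden representation is pushed to keep high HSIC with the labels and low HSIC with the raw input, instead of receiving backpropagated gradients. A minimal NumPy sketch of the biased estimator with an RBF kernel (x and y are batch-by-feature arrays, e.g. hidden activations and one-hot labels; sigma is an arbitrary placeholder):

import numpy as np

def rbf_kernel(x, sigma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    sq = np.sum(x ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * x @ x.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    # Biased empirical HSIC: tr(K H L H) / (m - 1)^2, with H the centering matrix.
    m = x.shape[0]
    K, L = rbf_kernel(x, sigma), rbf_kernel(y, sigma)
    H = np.eye(m) - np.ones((m, m)) / m
    return np.trace(K @ H @ L @ H) / (m - 1) ** 2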
@Machine_learn
Rank-consistent Ordinal Regression for Neural Networks
Article: https://arxiv.org/abs/1901.07884
PyTorch: https://github.com/Raschka-research-group/coral-cnn
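The core trick is to recast a K-class ordinal label as K-1 ordered binary "is y > k" tasks that share one weight vector and differ only in their biases, which keeps the predicted probabilities rank-consistent. A minimal PyTorch sketch of such an output head (layer sizes and names are placeholders; the linked repo has the authors' full implementation):

import torch
import torch.nn as nn

class CoralHead(nn.Module):
    # One shared weight vector plus K-1 independent biases,
    # giving K-1 ordered binary logits.
    def __init__(self, in_features, num_classes):
        super().__init__()
        self.fc = nn.Linear(in_features, 1, bias=False)  # shared weights
        self.biases = nn.Parameter(torch.zeros(num_classes - 1))

    def forward(self, x):
        return self.fc(x) + self.biases  # shape: (batch, K-1)

def ordinal_targets(y, num_classes):
    # Label y = k becomes k ones followed by zeros: targets[j] = 1[y > j].
    levels = torch.arange(num_classes - 1)
    return (y.unsqueeze(1) > levels).float()

# Usage: logits = CoralHead(128, 5)(features)
#        loss = nn.BCEWithLogitsLoss()(logits, ordinal_targets(labels, 5))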
@Machine_learn
AI, machine learning
#code #datasets #paper
• 1146 leaderboards
• 1223 tasks
• 1105 datasets
• 14779 papers with code
https://paperswithcode.com/sota
@Machine_learn
Memory-Efficient Adaptive Optimization
Source: https://arxiv.org/abs/1901.11150
Code: https://github.com/google-research/google-research/tree/master/sm3
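The idea in SM3 is to keep one accumulator per row and per column of a weight matrix instead of one per entry, and to estimate each entry's second moment as the minimum of its covering accumulators. A rough NumPy sketch of an SM3-style step for a single 2-D parameter (the released code linked above handles general cover sets, embeddings, momentum, and more):

import numpy as np

def sm3_step(w, g, row_acc, col_acc, lr=0.1, eps=1e-8):
    # Accumulate the largest squared gradient seen in each row and each column.
    row_acc += (g ** 2).max(axis=1)
    col_acc += (g ** 2).max(axis=0)
    # Per-entry second-moment estimate: minimum of the covering accumulators.
    nu = np.minimum(row_acc[:, None], col_acc[None, :])
    w -= lr * g / (np.sqrt(nu) + eps)
    return w

# Usage: row_acc = np.zeros(w.shape[0]); col_acc = np.zeros(w.shape[1])
# then call sm3_step(w, grad, row_acc, col_acc) every iteration.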
@Machine_learn
🚀 Introducing TF-GAN: A lightweight GAN library for TensorFlow 2.0
TensorFlow blog: https://medium.com/tensorflow/introducing-tf-gan-a-lightweight-gan-library-for-tensorflow-2-0-36d767e1abae
Code: https://github.com/tensorflow/gan
Free course: https://developers.google.com/machine-learning/gan/
Paper: https://arxiv.org/abs/1805.08318
@Machine_learn
DeepMind's OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.
code: https://github.com/deepmind/open_spiel
article: https://arxiv.org/abs/1908.09453
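For a quick feel of the Python API, the repo's examples boil down to loading a game and stepping through states. A minimal random-play sketch (assuming the pip package open_spiel and that these call names still match the current release):

import random
import pyspiel  # pip install open_spiel

# Load a game, then play it out with uniformly random moves.
game = pyspiel.load_game("tic_tac_toe")
state = game.new_initial_state()
while not state.is_terminal():
    action = random.choice(state.legal_actions())
    state.apply_action(action)
print(state.returns())  # final returns for both players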
Deep Learning with Python
The ultimate beginners guide to Learn Deep Learning with Python Step by Step
#book #DL #python
@Machine_learn
4_5994449442294466012.pdf
1.9 MB
@Machine_learn
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
How normalizing layer inputs helps reach faster convergence.
arXiv: https://arxiv.org/abs/1502.03167
#NeuralNetwork #nn #normalization #DL
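The transform itself is small: normalize each activation over the mini-batch, then let the network learn a scale and shift back. A minimal NumPy sketch of the training-time forward pass (gamma and beta are the learned per-feature parameters; running statistics for inference are omitted):

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (batch, features). Normalize each feature over the batch,
    # then apply the learned scale (gamma) and shift (beta).
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta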
@Machine_learn
The largest publicly available language model: CTRL has 1.6B parameters and can be guided by control codes for style, content, and task-specific behavior.
code: https://github.com/salesforce/ctrl
article: https://einstein.ai/presentations/ctrl.pdf
Credit: ai_machinelearning_big_data
https://blog.einstein.ai/introducing-a-conditional-transformer-language-model-for-controllable-generation/
Forwarded from Machine learning books and papers (Ramin Mousa)
Discriminative:
1: #Regression
2: #Logistic regression
3: #Decision tree (Hunt)
4: #Neural network (traditional network, deep network)
5: #Support Vector Machine (SVM)
Generative:
1: #Hidden Markov model
2: #Naive Bayes
3: #K-nearest neighbor (KNN)
4: #Generative adversarial networks (GANs)
Deep learning:
1: CNN
   R-CNN
   Fast R-CNN
   Mask R-CNN
2: RNN
3: LSTM
4: CapsuleNet
5: Siamese:
   Siamese CNN
   Siamese LSTM
   Siamese Bi-LSTM
   Siamese CapsuleNet
6: Time series data
   SVR
   DT (CART)
   Random Forest
   Linear
   Bagging
   Boosting
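To make the discriminative/generative split above concrete, here is a minimal scikit-learn comparison on toy data: logistic regression models p(y|x) directly, while Gaussian naive Bayes models p(x|y) and inverts it with Bayes' rule (the dataset and settings are just placeholders):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression   # discriminative
from sklearn.naive_bayes import GaussianNB            # generative
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(max_iter=1000), GaussianNB()):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, model.score(X_te, y_te))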
For requests and guidance on implementing papers and theses related to deep learning and machine learning, get in touch via the ID below:
@Raminmousa