Neural networks in NLP are vulnerable to adversarially crafted inputs.
We show that they can be trained to become certifiably robust against input perturbations such as typos and synonym substitution in text classification:
https://arxiv.org/abs/1909.01492
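The certification approach in the linked paper is based on interval bound propagation (IBP): propagate elementwise lower/upper bounds through the network for every input in the perturbation set (e.g., all allowed typos or synonym swaps) and check that the true class's logit lower bound beats every other class's upper bound. Below is a minimal NumPy sketch of just that interval arithmetic; the weights, shapes, and perturbation box are made up for illustration and this is not the paper's code.
```python
# Illustrative interval bound propagation through one linear layer + ReLU.
import numpy as np

def linear_interval(lower, upper, W, b):
    """Propagate elementwise bounds [lower, upper] through y = x @ W + b."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    new_center = center @ W + b
    new_radius = radius @ np.abs(W)   # worst case uses |W|
    return new_center - new_radius, new_center + new_radius

def relu_interval(lower, upper):
    """ReLU is monotone, so bounds pass through directly."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Hypothetical 4-d input whose embedding may vary inside a small box
# (standing in for the span of allowed substitutions).
x = np.array([0.5, -0.2, 0.1, 0.8])
eps = 0.1
l, u = x - eps, x + eps

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # 3 output classes

l, u = relu_interval(*linear_interval(l, u, W1, b1))
logit_lo, logit_hi = linear_interval(l, u, W2, b2)

# Class k is verified robust if its logit lower bound exceeds every other
# class's logit upper bound over the whole perturbation set.
k = 0
verified = all(logit_lo[k] > logit_hi[j] for j in range(3) if j != k)
print("verified robust for class", k, ":", verified)
```
During verified training, the same bounds appear inside the loss so the verified margin is optimized directly rather than only checked after the fact.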
Forwarded from بینام
Deep Learning for Biometrics.pdf
14.5 MB
Forwarded from بینام
Deep Feature Flow for Video Recognition.pdf
3.7 MB
Forwarded from Machine learning books and papers (Ramin Mousa)
Discriminative:
1: #Regression
2: #Logistic regression
3: #Decision tree (Hunt's algorithm)
4: #Neural network (traditional and deep networks)
5: #Support Vector Machine (SVM)
Generative:
1: #Hidden Markov model
2: #Naive Bayes
3: #K-nearest neighbors (KNN)
4: #Generative adversarial networks (GANs)
Deep learning:
1: CNN
   R-CNN
   Fast R-CNN
   Mask R-CNN
2: RNN
3: LSTM
4: CapsuleNet
5: Siamese:
   Siamese CNN
   Siamese LSTM
   Siamese Bi-LSTM
   Siamese CapsuleNet
6: Time series data
   SVR
   DT (CART)
   Random Forest
   Linear
   Bagging
   Boosting
For requests and guidance on implementing papers and theses on deep learning and machine learning topics, contact the ID below:
@Raminmousa
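To make the discriminative/generative split in the list above concrete: a discriminative model such as logistic regression fits p(y|x) directly, while a generative model such as Gaussian Naive Bayes fits p(x|y) and a prior p(y), then classifies via Bayes' rule. A small scikit-learn sketch on synthetic data (not from the original post, purely illustrative):
```python
# Discriminative (LogisticRegression) vs. generative (GaussianNB) classifiers.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression   # models p(y|x) directly
from sklearn.naive_bayes import GaussianNB             # models p(x|y) and p(y)

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

disc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
gen = GaussianNB().fit(X_tr, y_tr)

print("discriminative (logistic regression):", disc.score(X_te, y_te))
print("generative (Gaussian Naive Bayes):   ", gen.score(X_te, y_te))
```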
Hamiltonian Neural Networks
https://eng.uber.com/research/hamiltonian-neural-networks/
paper: https://arxiv.org/pdf/1906.01563.pdf
code: https://github.com/greydanus/hamiltonian-nn
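The core idea of Hamiltonian Neural Networks is to parameterize a scalar Hamiltonian H(q, p) with a network and derive the dynamics from Hamilton's equations (dq/dt = ∂H/∂p, dp/dt = -∂H/∂q), training against observed time derivatives. A PyTorch sketch of that training signal on a toy spring system; the architecture, data, and hyperparameters are placeholders, not the authors' code:
```python
# Sketch of the Hamiltonian Neural Network training signal (PyTorch).
import torch
import torch.nn as nn

class HNN(nn.Module):
    def __init__(self, dim=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def time_derivatives(self, q, p):
        qp = torch.cat([q, p], dim=-1).requires_grad_(True)
        H = self.net(qp).sum()                       # scalar Hamiltonian
        dH = torch.autograd.grad(H, qp, create_graph=True)[0]
        dH_dq, dH_dp = dH.chunk(2, dim=-1)
        return dH_dp, -dH_dq                          # dq/dt = dH/dp, dp/dt = -dH/dq

# Toy data: unit-mass spring, H = (p^2 + q^2) / 2, so dq/dt = p, dp/dt = -q.
q, p = torch.randn(256, 1), torch.randn(256, 1)
dq_true, dp_true = p, -q

model = HNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    dq_pred, dp_pred = model.time_derivatives(q, p)
    loss = ((dq_pred - dq_true) ** 2 + (dp_pred - dp_true) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```
Because the dynamics come from a single learned scalar function, the model conserves the learned energy by construction when integrated.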
🔥OpenAI released the 1.5-billion-parameter GPT-2 model
Post: https://openai.com/blog/gpt-2-1-5b-release/
GPT-2 output detection model: https://github.com/openai/gpt-2-output-dataset/tree/master/detector
Research from partners on potential malicious uses: https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf
#NLU #GPT2 #OpenAI #NLP
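One convenient way to sample from the released 1.5B checkpoint is the Hugging Face transformers library, where it is available as "gpt2-xl" (this tooling choice is an assumption on our part, not part of OpenAI's post; prompt and decoding settings below are arbitrary):
```python
# Sketch: sampling from the 1.5B GPT-2 checkpoint via Hugging Face transformers.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl")
model.eval()

inputs = tokenizer("The release of large language models", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_length=60,
        do_sample=True,
        top_k=40,
        pad_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```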
HoloGAN (a new generative model) learns 3D representations from natural images
Article: https://arxiv.org/pdf/1904.01326.pdf
Code: https://github.com/thunguyenphuoc/HoloGAN
Dataset: http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
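The distinctive part of HoloGAN's generator is a 3D feature volume that receives an explicit rigid-body transform (e.g., a random azimuth rotation) before being projected to 2D, which is what makes pose controllable at test time. Below is a hedged PyTorch sketch of just that rotation step; the tensor shapes and the surrounding generator/projector are placeholders, not the released code:
```python
# Sketch of HoloGAN's core trick: rotate a learned 3D feature volume.
import math
import torch
import torch.nn.functional as F

def rotate_volume(features, azimuth_rad):
    """Rotate a (N, C, D, H, W) feature volume about the vertical axis."""
    n = features.size(0)
    cos, sin = math.cos(azimuth_rad), math.sin(azimuth_rad)
    # 3x4 affine matrix per sample (rotation about the y-axis, no translation).
    theta = torch.tensor(
        [[cos, 0.0, sin, 0.0],
         [0.0, 1.0, 0.0, 0.0],
         [-sin, 0.0, cos, 0.0]],
        dtype=features.dtype,
    ).unsqueeze(0).repeat(n, 1, 1)
    grid = F.affine_grid(theta, features.shape, align_corners=False)
    return F.grid_sample(features, grid, align_corners=False)

# Hypothetical latent 3D features produced by earlier generator layers.
feats = torch.randn(2, 64, 16, 16, 16)          # (N, C, D, H, W)
rotated = rotate_volume(feats, azimuth_rad=math.pi / 6)
print(rotated.shape)                             # torch.Size([2, 64, 16, 16, 16])
```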
Forwarded from بینام
Applied Deep Learning (en).pdf
12.6 MB
Stacked Capsule Autoencoders
https://github.com/google-research/google-research/tree/master/stacked_capsule_autoencoders
paper : https://arxiv.org/abs/1906.06818
http://akosiorek.github.io/ml/2019/06/23/stacked_capsule_autoencoders.html
This AI Learned To Animate Humanoids 🚶
https://www.youtube.com/watch?v=cTqVhcrilrE
code: https://github.com/sebastianstarke/AI4Animation
Check out Lambda here and sign up for their GPU Cloud : https://lambdalabs.com/papers
Paper: "Neural State Machine for Character-Scene Interactions"
Forwarded from Machinelearning
Linear Algebra Vectors.pdf
7.5 MB
Introduction to Applied Linear Algebra – Vectors, Matrices, and Least Squares
https://web.stanford.edu/~boyd/vmls/
@ai_machinelearning_big_data
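Since the book is built around least squares, here is the standard NumPy version of its central computation, fitting an affine model to synthetic data (illustrative only, not taken from the book):
```python
# Least-squares fit of an affine model y ≈ a*x + b on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.5 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

A = np.column_stack([x, np.ones_like(x)])        # design matrix [x, 1]
coef, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
a, b = coef
print(f"fitted slope a ≈ {a:.3f}, intercept b ≈ {b:.3f}")
```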
👌Finding label errors in datasets and learning with noisy labels.
https://github.com/cgnorthcutt/cleanlab/
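The idea behind cleanlab is confident learning: obtain out-of-sample predicted probabilities, estimate a per-class confidence threshold, and flag examples whose given label is below its threshold while the model confidently prefers another class. The sketch below is a simplified version of that heuristic with sklearn and synthetic noisy labels; it is not the cleanlab API itself:
```python
# Simplified sketch of the confident-learning idea (NOT the cleanlab API).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y_true = make_classification(n_samples=1000, n_features=20,
                                n_informative=10, n_classes=3, random_state=0)

# Inject label noise into 5% of the labels.
rng = np.random.default_rng(0)
y_noisy = y_true.copy()
flip = rng.choice(len(y_noisy), size=50, replace=False)
y_noisy[flip] = (y_noisy[flip] + 1) % 3

# Out-of-sample predicted probabilities via cross-validation.
probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y_noisy,
                          cv=5, method="predict_proba")

# Per-class threshold: average self-confidence of examples given that label.
thresholds = np.array([probs[y_noisy == k, k].mean() for k in range(3)])

# Flag examples whose given-label confidence is below its class threshold
# and whose top predicted class differs from the given label.
given_conf = probs[np.arange(len(y_noisy)), y_noisy]
pred = probs.argmax(axis=1)
suspect = (given_conf < thresholds[y_noisy]) & (pred != y_noisy)

print("flagged:", suspect.sum(), "examples; truly mislabeled among them:",
      (y_noisy[suspect] != y_true[suspect]).sum())
```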
Forwarded from بینام
Deep-Learning-with-PyTorch.pdf
16.8 MB
GNNExplainer: Generating Explanations for Graph Neural Networks
https://arxiv.org/abs/1903.03894
Github : https://github.com/RexYing/gnn-model-explainer/
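GNNExplainer explains a prediction by learning a soft mask over the edges of a node's computation graph, optimizing the mask so the masked subgraph preserves the prediction while sparsity and entropy penalties keep it small and near-binary. A schematic PyTorch sketch of the edge-mask optimization; `gnn(x, edge_index, edge_weight)` is a hypothetical stand-in for any trained GNN that accepts edge weights, not the authors' code:
```python
# Schematic sketch of GNNExplainer-style edge masking.
import torch
import torch.nn.functional as F

def gnn(x, edge_index, edge_weight):
    # Placeholder "model": one round of weighted neighbor aggregation + linear head.
    agg = torch.zeros_like(x)
    agg.index_add_(0, edge_index[1], x[edge_index[0]] * edge_weight.unsqueeze(-1))
    return (x + agg) @ torch.ones(x.size(1), 2)   # logits for 2 classes

# Toy graph: 5 nodes, 6 directed edges, random features.
x = torch.randn(5, 8)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 0],
                           [1, 2, 3, 4, 0, 2]])
target_node, target_class = 2, 1   # target_class would be the model's own prediction

# Learnable logits for the edge mask.
mask_logits = torch.zeros(edge_index.size(1), requires_grad=True)
opt = torch.optim.Adam([mask_logits], lr=0.05)

for step in range(100):
    mask = torch.sigmoid(mask_logits)
    logits = gnn(x, edge_index, mask)
    pred_loss = F.cross_entropy(logits[target_node:target_node + 1],
                                torch.tensor([target_class]))
    sparsity = mask.mean()                                # prefer few edges
    entropy = -(mask * (mask + 1e-8).log()
                + (1 - mask) * (1 - mask + 1e-8).log()).mean()  # prefer 0/1 masks
    loss = pred_loss + 0.5 * sparsity + 0.1 * entropy
    opt.zero_grad()
    loss.backward()
    opt.step()

print("learned edge importances:", torch.sigmoid(mask_logits).detach())
```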
Forwarded from بینام
Practical Machine Learning with Python (en).pdf
19.4 MB
Forwarded from بینام
Hollemans_M_,_LaPollo_C_,_Tam_A.pdf
74.6 MB
Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs
https://arxiv.org/abs/1910.06922
Site: https://ajolicoeur.wordpress.com/
Github: https://github.com/AlexiaJM/MaximumMarginGANs
Paper title: "Gradient penalty from a maximum margin perspective"
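For reference, the kind of discriminator gradient penalty this paper analyzes is the familiar WGAN-GP style term: penalize the gradient norm of the discriminator at points interpolated between real and fake samples. A minimal PyTorch sketch of that term; the discriminator and batches are toy placeholders:
```python
# Minimal sketch of a WGAN-GP style gradient penalty on the discriminator.
import torch
import torch.nn as nn

D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))

def gradient_penalty(D, real, fake):
    eps = torch.rand(real.size(0), 1)                   # per-sample mixing weight
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = D(interp)
    grads = torch.autograd.grad(outputs=scores.sum(), inputs=interp,
                                create_graph=True)[0]
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()     # push ||grad|| toward 1

real = torch.randn(32, 2)                               # toy "real" batch
fake = torch.randn(32, 2)                               # toy "generated" batch
gp = gradient_penalty(D, real, fake)

# In a training loop this term is added to the discriminator loss, e.g.:
# d_loss = fake_scores.mean() - real_scores.mean() + 10.0 * gp
print(gp.item())
```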
T5: Text-To-Text Transfer Transformer
Github: https://github.com/google-research/text-to-text-transfer-transformer
Paper: https://arxiv.org/abs/1910.10683
@Machine_learn
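A quick way to try T5's text-to-text interface is through the Hugging Face transformers library (an assumption about tooling; the official repo above uses its own Mesh TensorFlow codebase). The checkpoint and prompt below are arbitrary examples:
```python
# Sketch: T5's text-to-text interface via Hugging Face transformers.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is cast as text-to-text; translation uses a task prefix.
inputs = tokenizer("translate English to German: The house is wonderful.",
                   return_tensors="pt")
output_ids = model.generate(**inputs, max_length=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```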