Gaussian Differential Privacy
Dong et al.: https://arxiv.org/abs/1905.02383
#MachineLearning #Cryptography #Security #DataStructures #Algorithms
Differential privacy has seen remarkable success as a rigorous and practical formalization of data privacy in the past decade. This privacy definition and its divergence-based relaxations,...
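For context, the paper's central example is the Gaussian mechanism: adding N(0, σ²) noise to a query with ℓ2-sensitivity Δ satisfies μ-GDP with μ = Δ/σ. A minimal sketch (the query, data, and privacy parameter below are illustrative choices, not from the paper):

```python
import numpy as np

def gaussian_mechanism(query_value, sensitivity, sigma, rng=None):
    """Release query_value + N(0, sigma^2) noise.
    Under Gaussian DP this is mu-GDP with mu = sensitivity / sigma."""
    rng = np.random.default_rng() if rng is None else rng
    return query_value + rng.normal(0.0, sigma)

# Example: privately release the mean of 1000 values in [0, 1].
data = np.random.default_rng(0).uniform(size=1000)
sensitivity = 1.0 / data.size      # changing one record moves the mean by <= 1/n
mu = 0.5                           # privacy target: smaller mu = stronger privacy
sigma = sensitivity / mu
private_mean = gaussian_mechanism(data.mean(), sensitivity, sigma,
                                  rng=np.random.default_rng(1))
```

With n = 1000 the noise scale is tiny (σ = 0.002), so the private mean stays close to the true mean while still satisfying 0.5-GDP.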
Robustness beyond Security: Computer Vision Applications
Engstrom et al.: http://gradientscience.org/robust_apps/
#artificialintelligence #computervision #security #technology
An off-the-shelf robust classifier can be used to perform a range of computer vision tasks beyond classification.
Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks
Maksym Andriushchenko and Matthias Hein: https://arxiv.org/abs/1906.03526
Code: https://github.com/max-andr/provably-robust-boosting
#MachineLearning #Cryptography #Security
The problem of adversarial robustness has been studied extensively for neural networks. However, for boosted decision trees and decision stumps there are almost no results, even though they are...
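A toy illustration of why stumps are analytically tractable (a sketch, not the paper's exact ensemble certification): a single stump that thresholds one coordinate is provably robust at a point under an ℓ∞ perturbation of radius ε exactly when that coordinate lies more than ε from the split.

```python
def stump_certified(x_j, threshold, eps):
    """A decision stump predicts based on whether coordinate x_j exceeds
    `threshold`. An l_inf perturbation of radius eps cannot move x_j across
    the threshold iff |x_j - threshold| > eps, so the output is certified."""
    return abs(x_j - threshold) > eps

certified = stump_certified(x_j=0.9, threshold=0.5, eps=0.3)      # 0.4 > 0.3
vulnerable = not stump_certified(x_j=0.6, threshold=0.5, eps=0.3) # 0.1 <= 0.3
```

For an ensemble the interactions between stumps make exact certification harder, which is the problem the paper addresses.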
Adversarial Examples Are Not Bugs, They Are Features
Ilyas et al.: https://arxiv.org/abs/1905.02175
#MachineLearning #Cryptography #Security
Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. We demonstrate that adversarial examples can be...
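For context, a minimal FGSM-style ℓ∞ attack on a toy linear classifier (a generic sketch, not the paper's robust/non-robust feature construction; all numbers are illustrative):

```python
import numpy as np

def fgsm_linear(x, w, b, y, eps):
    """FGSM for a linear classifier f(x) = sign(w.x + b), label y in {-1, +1}.
    The loss-increasing direction is -y * w, so the l_inf-bounded step is
    eps * sign(-y * w)."""
    return x + eps * np.sign(-y * w)

w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, -0.1])             # w.x + b = 0.7 -> predicted +1
y = 1
x_adv = fgsm_linear(x, w, b, y, eps=0.4)
# x_adv = [0.1, 0.3]; w.x_adv + b = -0.5 -> prediction flipped to -1
```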
US National Security Commission on Artificial Intelligence
Interim Report for Congress, November 2019
#AI #ArtificialIntelligence #Security #NSCAI
https://www.nationaldefensemagazine.org/-/media/sites/magazine/03_linkedfiles/nscai-interim-report-for-congress.ashx?la=en
Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors
Wu et al.: https://arxiv.org/abs/1910.14667
#Cryptography #Security #MachineLearning
We present a systematic study of adversarial attacks on state-of-the-art object detection frameworks. Using standard detection datasets, we train patterns that suppress the objectness scores...
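The idea of training a score-suppressing patch can be caricatured with a toy differentiable "objectness" score (everything here, including the linear score and the single gradient step, is an illustrative stand-in, not the paper's detector pipeline):

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Paste the patch into the image at (top, left)."""
    out = image.copy()
    h, w = patch.shape
    out[top:top + h, left:left + w] = patch
    return out

def objectness(image, weights):
    """Toy stand-in for a detector's objectness score: a linear functional."""
    return float((image * weights).sum())

def patch_gradient_step(patch, top, left, weights, lr):
    """Descend the toy score. For a linear score, d(score)/d(patch) is just
    the weight window under the patch; clip to keep pixels in [0, 1]."""
    h, w = patch.shape
    grad = weights[top:top + h, left:left + w]
    return np.clip(patch - lr * grad, 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.uniform(size=(8, 8))
weights = rng.uniform(size=(8, 8))    # positive weights: darker patch = lower score
patch = rng.uniform(size=(3, 3))
before = objectness(apply_patch(image, patch, 2, 2), weights)
patch = patch_gradient_step(patch, 2, 2, weights, lr=1.0)
after = objectness(apply_patch(image, patch, 2, 2), weights)
```

In the real attack the score comes from a full detection network, the gradient flows through it, and the patch is additionally optimized for printability and robustness to pose and lighting.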
Advbox: a toolbox to generate adversarial examples that fool neural networks
Goodman et al.: https://arxiv.org/abs/2001.05574
GitHub: https://github.com/advboxes/AdvBox
#MachineLearning #Cryptography #Security
Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow, and can benchmark the robustness of machine learning models.