How to hide from the AI surveillance state with a color printout
MIT's team studied how to fool a camera with an #adversarial print, exploiting the fact that a #CNN can be tricked by adversarial examples into recognizing something wrong or not recognizing the image at all.
Link: https://www.technologyreview.com/f/613409/how-to-hide-from-the-ai-surveillance-state-with-a-color-printout/
#CV #DL #MIT
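To make the mechanism concrete, here is a minimal FGSM-style sketch of an adversarial example in PyTorch. It is not the MIT patch attack from the article, just the basic trick of nudging pixels along the loss gradient so a CNN misclassifies; the model choice, epsilon, and omitted input normalization are illustrative assumptions.
```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(pretrained=True).eval()

def fgsm(image, label, eps=0.03):
    # image: (1, 3, H, W) tensor in [0, 1]; label: (1,) long tensor with the true class
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge each pixel in the direction that increases the loss, keep pixels valid
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()
```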
New dataset with adversarial examples
Natural Adversarial Examples are real-world, unmodified examples that consistently confuse classifiers. The new dataset has 7,500 images, which the authors personally labeled over several months.
ArXiV: https://arxiv.org/abs/1907.07174
Dataset and code: https://github.com/hendrycks/natural-adv-examples
#Dataset #Adversarial
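A hedged sketch of evaluating a pretrained classifier on the dataset, assuming it has been extracted into an imagenet-a/ folder of class subdirectories; the repo ships its own evaluation script with the proper 200-class-to-ImageNet label mapping, which this sketch skips.
```python
import torch
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing; the repo's own eval script handles the
# mapping from ImageNet-A's 200 classes to the 1000 ImageNet labels.
tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
loader = torch.utils.data.DataLoader(datasets.ImageFolder("imagenet-a", tf), batch_size=64)

model = models.resnet50(pretrained=True).eval()
correct, total = 0, 0
with torch.no_grad():
    for x, y in loader:
        # NOTE: ImageFolder's indices are not ImageNet label ids; a real
        # evaluation must remap them (see the repo), so treat this as a sketch.
        correct += (model(x).argmax(1) == y).sum().item()
        total += y.numel()
print(correct / total)
```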
Testing Robustness Against Unforeseen Adversaries
OpenAI developed a method to assess whether a neural network classifier can reliably defend against adversarial attacks not seen during training. The method yields a new metric, #UAR (Unforeseen Attack Robustness), which evaluates the robustness of a single model against an unanticipated attack, and highlights the need to measure performance across a more diverse range of unforeseen attacks.
Link: https://openai.com/blog/testing-robustness/
ArXiV: https://arxiv.org/abs/1908.08016
Code: https://github.com/ddkang/advex-uar
#GAN #Adversarial #OpenAI
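Roughly, UAR normalizes a model's accuracy against an attack, summed over a set of calibrated distortion sizes, by the corresponding accuracies of adversarially trained reference models. A toy sketch of that normalization (the exact calibration procedure and attack implementations live in the paper and repo):
```python
def uar(model_accs, reference_accs):
    # model_accs[i]: the evaluated model's accuracy against the attack at calibrated size eps_i
    # reference_accs[i]: accuracy of an adversarially trained reference model
    #                    against the same attack at the same size
    return 100.0 * sum(model_accs) / sum(reference_accs)

# Hypothetical numbers just to show the shape of the computation:
print(uar(model_accs=[0.71, 0.52, 0.30], reference_accs=[0.80, 0.65, 0.45]))
```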
FreeLB: Enhanced Adversarial Training for Language Understanding
The authors propose a novel adversarial training algorithm, FreeLB, which promotes higher robustness and invariance in the embedding space by adding adversarial perturbations to word embeddings and minimizing the resultant adversarial risk inside different regions around input samples. It is applied to Transformer-based models for NLU and commonsense reasoning tasks.
Experiments on the GLUE benchmark show that when applied only to the fine-tuning stage, it improves the overall test scores:
* BERT-base model: 78.3 -> 79.4
* RoBERTa-large model: 88.5 -> 88.8
The proposed approach achieves SOTA single-model test accuracies of 85.44% and 67.75% on ARC-Easy and ARC-Challenge.
paper: https://arxiv.org/abs/1909.11764
#nlp #nlu #bert #adversarial #ICLR
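A simplified PyTorch sketch of the FreeLB idea: run a few PGD-style ascent steps on a perturbation of the word embeddings while accumulating parameter gradients, then take one descent step. The hyperparameters, the element-wise clamp (the paper projects onto an L2 ball), and the Hugging Face-style inputs_embeds/labels interface are assumptions for illustration.
```python
import torch

def freelb_step(model, input_ids, labels, optimizer,
                adv_steps=3, adv_lr=1e-1, adv_eps=1e-2):
    # Look up word embeddings once and detach them; the perturbation delta
    # is the only input-side variable we optimize.
    embeds = model.get_input_embeddings()(input_ids).detach()
    delta = torch.zeros_like(embeds).uniform_(-adv_eps, adv_eps).requires_grad_(True)
    optimizer.zero_grad()
    for _ in range(adv_steps):
        # Forward on perturbed embeddings; dividing by adv_steps averages the
        # parameter gradients accumulated across ascent steps ("free" training).
        loss = model(inputs_embeds=embeds + delta, labels=labels).loss / adv_steps
        loss.backward()
        with torch.no_grad():
            g = delta.grad
            delta += adv_lr * g / (g.norm() + 1e-12)   # gradient ascent on delta
            delta.clamp_(-adv_eps, adv_eps)            # crude projection; paper uses an L2 ball
            delta.grad.zero_()
    optimizer.step()  # one descent step on the accumulated gradients
```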
Yet another article about adversarial prints to fool face recognition
As facial recognition systems spread, such prints are likely to become more popular.
Link: https://medium.com/syncedreview/personal-invisibility-cloak-stymies-people-detectors-15bebdcc7943
#facerecognition #adversarial
REST: Robust and Efficient Neural Networks for Sleep Monitoring in the Wild
New approach for sleep monitoring.
Nowadays many people suffer from sleep disorders that affect their daily functioning, long-term health, and longevity. The long-term effects of sleep deprivation and sleep disorders include an increased risk of hypertension, diabetes, obesity, depression, heart attack, and stroke. As a result, sleep monitoring is a very important topic.
Currently, automatic sleep-stage scoring isn't robust against noise (which can be introduced by electrical interference (e.g., power-line) and user motion (e.g., muscle contraction, respiration)) and isn't computationally efficient enough for fast inference on user devices.
The authors offer the following improvements:
- adversarial training and spectral regularization to improve robustness to noise
- sparsity regularization to improve energy and computational efficiency
REST models achieve a macro-F1 score of 0.67 vs. 0.39 for the state-of-the-art model in the presence of Gaussian noise, with a 19x reduction in parameters and a 15x reduction in MFLOPS.
The model was also deployed on a Pixel 2 smartphone, where it achieves a 17x energy reduction and 9x faster inference compared to uncompressed models.
Paper: https://arxiv.org/abs/2001.11363
Code: https://github.com/duggalrahul/REST
#deeplearning #compression #adversarial #sleepstaging
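An illustrative sketch of how the three ingredients above could be combined into one training loss; the exact attack, spectral-norm computation, and coefficients in REST may differ, so treat this as an assumption-laden approximation rather than the authors' method.
```python
import torch
import torch.nn.functional as F

def rest_style_loss(model, x, y, eps=0.1, lam_spec=1e-3, lam_sparse=1e-4):
    # 1) Adversarial training: craft a single-step (FGSM-style) perturbed input
    x_adv = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
    x_adv = (x + eps * grad.sign()).detach()

    loss = F.cross_entropy(model(x_adv), y)
    for p in model.parameters():
        if p.dim() > 1:
            # 2) Spectral regularization: penalize the largest singular value
            loss = loss + lam_spec * torch.linalg.matrix_norm(p.flatten(1), ord=2)
        # 3) Sparsity regularization: L1 penalty pushing weights toward zero
        loss = loss + lam_sparse * p.abs().sum()
    return loss
```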
Do Adversarially Robust ImageNet Models Transfer Better?
TLDR - Yes.
The authors check whether adversarially trained networks perform better on transfer learning tasks, despite their worse accuracy on the source dataset (ImageNet, of course). And it turns out to be true.
They tested this idea with a frozen pre-trained feature extractor, training only a linear classifier on top, which outperformed its standard counterpart. They also tested a fully fine-tuned (unfrozen) network, which likewise outperformed on transfer learning tasks.
For pre-training they use the adversarial robustness prior, which refers to a model's invariance to small (often imperceptible) perturbations of its inputs.
They also show that this approach yields better feature representations in the networks.
They ran many experiments (14 pages of plots) and an ablation study.
paper: https://arxiv.org/abs/2007.08489
code: https://github.com/Microsoft/robust-models-transfer
#transfer_learning #SOTA #adversarial
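A hedged sketch of the fixed-feature setup: freeze the robust backbone and train only a linear classifier. The checkpoint filename is hypothetical, and the released robust weights are packaged for the authors' robustness library, so loading them in plain torchvision may require their code.
```python
import torch
from torchvision import models

# Fixed-feature transfer: freeze a robustly pretrained backbone, train only a linear head.
model = models.resnet50()
state = torch.load("robust_resnet50.pt", map_location="cpu")  # hypothetical filename
model.load_state_dict(state, strict=False)

for p in model.parameters():
    p.requires_grad = False                      # freeze the feature extractor
num_classes = 10                                 # downstream dataset's class count
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)  # new, trainable head

optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01, momentum=0.9)
# ...then train the linear head on the downstream dataset as usual.
```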