Data Science by ODS.ai 🦜
First Telegram Data Science channel. Covering all technical and popular stuff about anything related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math and the applications of the former. To reach editors contact: @haarrp
New dataset with adversarial examples

Natural Adversarial Examples are real-world, unmodified examples that consistently confuse classifiers. The new dataset contains 7,500 images, which the authors personally labeled over several months.

ArXiv: https://arxiv.org/abs/1907.07174
Dataset and code: https://github.com/hendrycks/natural-adv-examples
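
A minimal evaluation sketch in PyTorch, assuming the released archive unpacks into one folder per class (the layout torchvision's ImageFolder expects); note that a real evaluation must map the 200 ImageNet-A classes onto the model's 1,000 ImageNet output indices, as described in the repo:

```python
# Sketch: evaluate a standard pretrained classifier on ImageNet-A.
# Assumes the dataset unpacks into one sub-folder per class.
import torch
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("imagenet-a", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32)

model = models.resnet50(pretrained=True).eval()

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        # NB: only meaningful after mapping folder indices to the
        # model's 1,000 ImageNet classes (mapping provided in the repo)
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"top-1 accuracy: {correct / total:.2%}")
```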

#Dataset #Adversarial
Testing Robustness Against Unforeseen Adversaries

OpenAI developed a method to assess whether a neural network classifier can reliably defend against adversarial attacks not seen during training. The method yields a new metric, #UAR (Unforeseen Attack Robustness), which evaluates the robustness of a single model against an unanticipated attack, and highlights the need to measure performance across a more diverse range of unforeseen attacks.
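
Roughly, UAR normalizes a model's accuracy under a given attack by the accuracy of models adversarially trained against that same attack, averaged over a handful of calibrated distortion sizes. A sketch of the definition (ATA denotes adversarially trained accuracy, the epsilon_k are the calibrated sizes):

```latex
% UAR of model M against attack A: the model's adversarial accuracy,
% normalized by the adversarially trained accuracy (ATA), summed over
% n calibrated distortion sizes eps_1, ..., eps_n.
\mathrm{UAR}(A, M) = 100 \times
  \frac{\sum_{k=1}^{n} \mathrm{Acc}(A, \varepsilon_k, M)}
       {\sum_{k=1}^{n} \mathrm{ATA}(A, \varepsilon_k)}
```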

Link: https://openai.com/blog/testing-robustness/
ArXiv: https://arxiv.org/abs/1908.08016
Code: https://github.com/ddkang/advex-uar

#GAN #Adversarial #OpenAI
FreeLB: Enhanced Adversarial Training for Language Understanding

The authors propose FreeLB, a novel adversarial training algorithm that promotes robustness and invariance in the embedding space by adding adversarial perturbations to word embeddings and minimizing the resulting adversarial risk in regions around the input samples. It is applied to Transformer-based models for NLU and commonsense reasoning tasks.
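
A minimal PyTorch-style sketch of the "free" inner loop, assuming a HuggingFace-style model that accepts inputs_embeds and returns a loss; the initialization scale, per-sample gradient norms, and the exact L2 projection of the paper are simplified here:

```python
# Sketch of one FreeLB training step (simplified relative to the paper).
import torch

def freelb_step(model, embeds, labels, optimizer, K=3, alpha=0.1, eps=1.0):
    delta = torch.zeros_like(embeds).uniform_(-eps, eps)
    delta.requires_grad_()
    optimizer.zero_grad()
    for _ in range(K):
        # each inner step contributes 1/K of the loss, so parameter
        # gradients from all K adversarial steps accumulate "for free"
        loss = model(inputs_embeds=embeds + delta, labels=labels).loss / K
        loss.backward()
        g = delta.grad.detach()
        # ascend on delta, then clip back towards the epsilon ball
        delta = (delta.detach() + alpha * g / (g.norm() + 1e-12)).clamp(-eps, eps)
        delta.requires_grad_()
    optimizer.step()  # one parameter update from the K accumulated gradients
```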

Experiments on the GLUE benchmark show that, applied only to the finetuning stage, it improves the overall test scores:
* BERT-base model: 78.3 -> 79.4
* RoBERTa-large model: 88.5 -> 88.8

The proposed approach achieves SOTA single-model test accuracies of 85.44% and 67.75% on ARC-Easy and ARC-Challenge, respectively.

paper: https://arxiv.org/abs/1909.11764

#nlp #nlu #bert #adversarial #ICLR
Yet another article about adversarial prints to fool face recognition

As facial recognition technology advances, such prints are likely to become more popular.

Link: https://medium.com/syncedreview/personal-invisibility-cloak-stymies-people-detectors-15bebdcc7943

#facerecognition #adversarial
REST: Robust and Efficient Neural Networks for Sleep Monitoring in the Wild

New approach for sleep monitoring.

Nowadays many people suffer from sleep disorders that affect their daily functioning, long-term health, and longevity. The long-term effects of sleep deprivation and sleep disorders include an increased risk of hypertension, diabetes, obesity, depression, heart attack, and stroke. As a result, sleep monitoring is a very important topic.
Currently, automatic scoring of sleep stages is neither robust to noise (which can be introduced by electrical interference, e.g. power-line, and user motion, e.g. muscle contraction or respiration) nor computationally efficient enough for fast inference on user devices.

The authors offer the following improvements (a rough sketch of the combined objective follows the list):
- adversarial training and spectral regularization to improve robustness to noise
- sparsity regularization to improve energy and computational efficiency
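
A sketch of what such a combined objective could look like in PyTorch; the penalty weights, the attack producing x_adv, and the exact regularizer forms are assumptions for illustration, not the authors' implementation:

```python
# Illustrative REST-style loss: task loss on adversarially perturbed inputs,
# plus a spectral-norm penalty (robustness) and an L1 penalty (sparsity).
import torch
import torch.nn.functional as F

def rest_loss(model, x_adv, y, lambda_spec=1e-3, lambda_l1=1e-4):
    # task loss on adversarially perturbed inputs (x_adv from e.g. PGD)
    task = F.cross_entropy(model(x_adv), y)
    # spectral penalty: largest singular value of each 2-D weight matrix
    spec = sum(torch.linalg.matrix_norm(p, ord=2)
               for p in model.parameters() if p.dim() == 2)
    # L1 penalty encourages sparse weights for compression
    l1 = sum(p.abs().sum() for p in model.parameters())
    return task + lambda_spec * spec + lambda_l1 * l1
```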

REST models achieve a macro-F1 score of 0.67 vs. 0.39 for the state-of-the-art model in the presence of Gaussian noise, with a 19× reduction in parameters and a 15× reduction in MFLOPS.
The model was also deployed on a Pixel 2 smartphone, where it achieves a 17× energy reduction and 9× faster inference compared to uncompressed models.

Paper: https://arxiv.org/abs/2001.11363
Code: https://github.com/duggalrahul/REST


#deeplearning #compression #adversarial #sleepstaging
Do Adversarially Robust ImageNet Models Transfer Better?

TLDR - Yes.

The authors check whether adversarially trained networks perform better on transfer learning tasks, despite their worse accuracy on the source dataset (ImageNet, of course). And it turns out to be true.

They tested this idea in two settings: with a frozen pre-trained feature extractor, training only a linear classifier on top, which outperformed its standard counterpart; and with a fully unfrozen, fine-tuned network, which also did better on transfer learning tasks.
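
A minimal sketch of the fixed-feature setting with a generic torchvision ResNet-50; the actual robust checkpoints come from the linked repo and are loaded via the authors' robustness library, which this sketch does not reproduce:

```python
# Fixed-feature transfer: freeze the (ideally robustly pre-trained) backbone,
# train only a linear classifier on top. Plain torchvision weights are used
# here as a stand-in for the robust checkpoints from the repo.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(pretrained=True)   # stand-in for a robust checkpoint
for p in backbone.parameters():
    p.requires_grad = False                   # freeze the feature extractor
backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # new head, e.g. 10 classes

optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=1e-2, momentum=0.9)
# ...then train only the linear head on the target dataset as usual.
```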

For pre-training they use the adversarial robustness prior, which refers to a model's invariance to small (often imperceptible) perturbations of its inputs.
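
For reference, the standard adversarial training objective behind this prior minimizes the worst-case loss over an epsilon-ball around each input (the generic formulation; the paper works with L2-robust models):

```latex
% Adversarial training: minimize the expected worst-case loss over
% perturbations delta inside an epsilon-ball around each input x.
\min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}}
  \left[ \max_{\|\delta\|_{2} \le \varepsilon}
         \mathcal{L}(\theta, x + \delta, y) \right]
```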

They also show that such an approach yields networks with better feature representation properties.

They ran many experiments (14 pages of plots) and an ablation study.


paper: https://arxiv.org/abs/2007.08489
code: https://github.com/Microsoft/robust-models-transfer

#transfer_learning #SOTA #adversarial