AI, Python, Cognitive Neuroscience
Neural networks in NLP are vulnerable to adversarially crafted inputs.

We show that they can be trained to become certifiably robust against input perturbations such as typos and synonym substitution in text classification:

https://arxiv.org/abs/1909.01492
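The certification builds on interval bound propagation (IBP): sound elementwise lower/upper bounds on every activation are pushed through the network, and a prediction is certified if it holds for the entire output box. Below is a minimal NumPy sketch of the propagation step; the toy two-layer network and epsilon are illustrative, not the paper's model:

import numpy as np

def affine_bounds(l, u, W, b):
    # Propagate elementwise bounds l <= x <= u through x -> W @ x + b.
    mu = (u + l) / 2.0        # interval midpoint
    r = (u - l) / 2.0         # interval radius (elementwise, >= 0)
    mu_out = W @ mu + b
    r_out = np.abs(W) @ r     # worst-case growth of the radius
    return mu_out - r_out, mu_out + r_out

def relu_bounds(l, u):
    # ReLU is monotone, so it maps bounds through directly.
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# Toy usage: bound the logits over an L-inf ball of radius eps around x.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
x, eps = rng.normal(size=4), 0.1
l, u = affine_bounds(x - eps, x + eps, W1, b1)
l, u = relu_bounds(l, u)
l, u = affine_bounds(l, u, W2, b2)

If the lower bound of the true-class logit exceeds the upper bounds of all other logits, the prediction is certified for every input in the ball; training against this bound is what makes the network certifiably robust.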
Sarbazi, M., Sadeghzadeh, M., & Mir Abedini, S. J. (2019). Improving resource allocation in software-defined networks using clustering. Cluster Computing.
doi:10.1007/s10586-019-02985-3

❇️ @AI_Python_EN
If you have just published a paper, let us inform the other members.
@ai_python_en
Google researchers just released #ALBERT, which has beaten all previous models across various benchmarks.

Also, did you know that the top NLP models on these benchmarks now achieve performance that outpaces the average human baseline?
——————————————————
ALBERT uses parameter-reduction techniques to lower memory consumption and increase the training speed of BERT (see the parameter-count sketch below).

1. They topped the GLUE leaderboard (https://lnkd.in/dkWNRVk) — 89.4

2. They topped the SQuAD 2.0 leaderboard (https://lnkd.in/d_Xrba8) — 92.2 F1

3. RACE - they came third with their ensemble model (https://lnkd.in/d2yWbtC) — 89.4%
——————————————————
Paper at openreview: https://lnkd.in/dzRvWYS
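Concretely, one of ALBERT's two parameter-reduction tricks is factorized embedding parameterization: the big V x H embedding table is replaced by a small V x E table plus an E -> H projection (the other trick is sharing parameters across all transformer layers). A minimal PyTorch sketch with illustrative sizes:

import torch
import torch.nn as nn

V, E, H = 30000, 128, 4096    # vocab, embedding and hidden sizes (ALBERT-xxlarge-like)

naive = nn.Embedding(V, H)    # V * H parameters: ~123M
factorized = nn.Sequential(   # V * E + E * H parameters: ~4.4M
    nn.Embedding(V, E),
    nn.Linear(E, H, bias=False),
)

tokens = torch.tensor([1, 2, 3])
assert factorized(tokens).shape == naive(tokens).shape  # both yield (3, H)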
#deeplearning #machinelearning #NLU #NLG #artificialintelligence #ai
Artificial Design: Modeling Artificial Super Intelligence with Extended General Relativity and Universal Darwinism via Geometrization for Universal Design Automation

https://openreview.net/forum?id=SyxQ_TEFwS
Self-Paced Learning:
- a supervised learning method from #NIPS 2010
- idea: start learning from the easiest samples first and only then move on to the difficult ones (see the sketch after this list)
- distinct from curriculum learning, where samples are pre-classified as easy/hard: here we have to decide the order ourselves
- one possible measure of easiness: the likelihood of the sample under the latent model (outliers will be the hardest)
- a better measure (!): how good the model's initial predictions for the sample are (samples far away from the decision boundary are the easiest).

- for #classification, samples are only easy in the context of other samples!
- the set of easy samples is iteratively enlarged
- results: outperforms CCCP on #DNA motif finding, handwritten digit recognition and other problems
- link: https://papers.nips.cc/paper/3923-self-paced-learning-for-latent-variable-models
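A minimal sketch of the alternating scheme, assuming a scikit-learn-style classifier; the loss threshold stands in for the paper's 1/K and its growth schedule is illustrative (the paper alternates between fixing the model to choose binary sample weights v_i and refitting on the chosen samples while annealing K):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def self_paced_fit(X, y, threshold=0.3, growth=1.5, rounds=8):
    model = LogisticRegression(max_iter=1000).fit(X, y)  # warm start on all data
    for _ in range(rounds):
        proba = model.predict_proba(X)
        cols = np.searchsorted(model.classes_, y)        # column of each true label
        losses = -np.log(proba[np.arange(len(y)), cols] + 1e-12)
        easy = losses < threshold                        # v_i = 1 iff the loss is small enough
        if len(np.unique(y[easy])) == len(model.classes_):
            model.fit(X[easy], y[easy])                  # retrain on the easy subset only
        threshold *= growth                              # anneal: admit harder samples
    return model

X, y = make_classification(n_samples=500, random_state=0)
clf = self_paced_fit(X, y)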