AI, Python, Cognitive Neuroscience
What makes a good conversation?
How controllable attributes affect human judgments

A great post on conversation scoring.

Link:
http://www.abigailsee.com/2019/08/13/what-makes-a-good-conversation.html
Paper:
https://www.aclweb.org/anthology/N19-1170

#NLP #NLU #DL

❇️ @ai_python_en
Google researchers just released #ALBERT, which beats all models across various benchmarks.

Also, did you know that many NLP models now achieve performance that outpaces average human performance?
——————————————————
ALBERT uses parameter-reduction techniques to lower memory consumption and increase the training speed of BERT.

1. They topped GLUE (https://lnkd.in/dkWNRVk) — 89.4

2. They topped the SQuAD 2.0 leaderboard (https://lnkd.in/d_Xrba8) — 92.2 F1

3. RACE — they came third with their ensemble model (https://lnkd.in/d2yWbtC) — 89.4%
——————————————————
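For a sense of what the parameter reduction buys: one of ALBERT's techniques is a factorized embedding parameterization. A back-of-the-envelope sketch with illustrative, BERT-base-like sizes (the numbers here are for intuition only):

```python
# ALBERT factorizes the V x H embedding matrix into V x E plus E x H
# (with E << H). Illustrative vocabulary/hidden/embedding sizes:
V, H, E = 30_000, 768, 128

full = V * H                # one big V x H embedding matrix
factorized = V * E + E * H  # factorized: V x E, then E x H

print(full)        # 23040000
print(factorized)  # 3938304
```

Roughly a 6x reduction in embedding parameters, before even counting ALBERT's cross-layer parameter sharing.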
Paper at openreview: https://lnkd.in/dzRvWYS
#deeplearning #machinelearning #NLU #NLG #artificialintelligence #ai
Simple, Scalable Adaptation for Neural Machine Translation

Fine-tuning pre-trained Neural Machine Translation (NMT) models is the dominant approach for adapting to new languages and domains. However, fine-tuning requires adapting and maintaining a separate model for each target task. Researchers from Google propose a simple yet efficient approach for adaptation in #NMT. Their proposed approach consists of injecting tiny task-specific adapter layers into a pre-trained model. These lightweight adapters, with just a small fraction of the original model size, adapt the model to multiple individual tasks simultaneously.
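The bottleneck-adapter idea can be sketched in a few lines of NumPy. This is a toy sketch, not the paper's exact recipe (which injects adapters after each transformer sub-layer and adds layer normalization); the sizes and initialization here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_bottleneck = 512, 64  # bottleneck dimension << model dimension

# The only parameters trained per task: a down- and an up-projection.
W_down = rng.normal(scale=0.02, size=(d_model, d_bottleneck))
W_up = rng.normal(scale=0.02, size=(d_bottleneck, d_model))

def adapter(h):
    """Bottleneck adapter: down-project, ReLU, up-project, with a
    residual connection so the layer starts out near the identity."""
    return h + np.maximum(h @ W_down, 0.0) @ W_up

h = rng.normal(size=(3, d_model))  # a batch of frozen-model hidden states
print(adapter(h).shape)            # (3, 512)

# Per-task cost: 2 * d_model * d_bottleneck parameters, a tiny
# fraction of a full transformer layer's d_model^2-scale weights.
print(W_down.size + W_up.size)     # 65536
```

Because the pre-trained weights stay frozen, one model plus a small adapter per task replaces a full fine-tuned copy per task.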

Presumably this can be applied not only to #NMT but also to many other #NLP, #NLU and #NLG tasks.

Paper: https://arxiv.org/pdf/1909.08478.pdf

#BERT

❇️ @AI_Python_EN
Communication-based Evaluation for Natural Language Generation (#NLG) that dramatically outperforms standard n-gram-based methods.

Have you ever thought that n-gram overlap measures like #BLEU or #ROUGE are not good enough for #NLG evaluation, and that human evaluation is too expensive? Researchers from Stanford University think so too. The main shortcoming of #BLEU and #ROUGE is that they fail to take into account the communicative function of language: a speaker's goal is not only to produce well-formed expressions, but also to convey relevant information to a listener.
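To make the shortcoming concrete, here is a stripped-down n-gram overlap score (a toy sketch, not real BLEU: no clipping, brevity penalty, or smoothing, and the example sentences are invented). Two utterances get the same overlap score even though they pick out different things:

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bigram_precision(candidate, reference):
    """Fraction of candidate bigrams that appear in the reference
    (a stripped-down, BLEU-style overlap score)."""
    cand = ngrams(candidate.split(), 2)
    ref = ngrams(reference.split(), 2)
    return sum(g in ref for g in cand) / len(cand)

reference = "the bright blue one"
# Equal overlap with the reference, yet the two utterances would
# identify different colors to an actual listener:
print(bigram_precision("the blue one", reference))    # 0.5
print(bigram_precision("the bright one", reference))  # 0.5
```

Overlap alone cannot tell which utterance would actually get the message across.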

The researchers propose an approach based on a color reference game. In this game, a speaker and a listener see a set of three colors. The speaker is told one color is the target and tries to communicate the target to the listener using a natural language utterance. A good utterance is more likely to lead the listener to select the target, while a bad utterance is less likely to do so. In turn, effective metrics should assign high scores to good utterances and low scores to bad ones.
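A toy version of this idea (the paper uses trained neural listener models; the word-to-color affinities below are invented): score an utterance by the probability that a simple listener picks the target out of the three colors.

```python
import math

# Invented word -> per-color affinity scores for the three colors shown.
LEXICON = {
    "blue": [2.0, 0.1, 0.1],
    "dark": [0.5, 1.5, 0.2],
    "red":  [0.1, 0.1, 2.0],
}

def listener_probs(utterance):
    """Softmax over the three colors given summed word affinities."""
    scores = [0.0, 0.0, 0.0]
    for word in utterance.split():
        for i, affinity in enumerate(LEXICON.get(word, [0.0] * 3)):
            scores[i] += affinity
    exp = [math.exp(s) for s in scores]
    total = sum(exp)
    return [e / total for e in exp]

TARGET = 0  # the speaker's target is color 0
# A communication-based metric rewards the utterance that actually
# steers the listener to the target:
print(listener_probs("blue")[TARGET] > listener_probs("red")[TARGET])  # True
```

The utterance's score is grounded in listener behavior rather than in surface overlap with a reference string.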

Paper: https://arxiv.org/pdf/1909.07290.pdf
Code: https://github.com/bnewm0609/comm-eval

#NLP #NLU

❇️ @AI_Python_EN