AI, Python, Cognitive Neuroscience
As it turns out, Wang Ling was way ahead of the curve re NLP's muppet craze (see slides from LxMLS '16 & Oxford #NLP course '17 below).


https://github.com/oxford-cs-deepnlp-2017/lectures

❇️ @AI_Python_EN
Transformers v2.2 is out, with *4* new models and seq2seq capabilities!

ALBERT arrives alongside CamemBERT (implemented by its authors), DistilRoBERTa (twice as fast as RoBERTa-base!), and GPT-2 XL!

Encoder-decoder support comes via Model2Model.

Available at:

https://github.com/huggingface/transformers/releases/tag/v2.2.0
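
A minimal sketch of trying one of the new checkpoints, assuming a transformers ≥ 2.2 install and the usual Hugging Face model identifiers:

```python
# Minimal sketch: load one of the newly added checkpoints via the Auto classes.
# The model name "distilroberta-base" is assumed to be the standard hub identifier.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
model = AutoModel.from_pretrained("distilroberta-base")

input_ids = tokenizer.encode("Transformers v2.2 ships four new models.", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(input_ids)[0]   # first output: last-layer hidden states
print(hidden_states.shape)                # (batch, sequence_length, hidden_size)
```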

#NLP

❇️ @AI_Python_EN
📢📢📢 Twitter Cortex is creating an NLP Research team. Brand new #NLP Researcher💫 job posting👇 Please spread the word.
https://careers.twitter.com/en/work-for-twitter/201911/machine-learning-researcher-nlp-cortex-applied-machine-learning.html

❇️ @AI_Python_EN
Single Headed Attention RNN: Stop Thinking With Your Head

https://arxiv.org/abs/1911.11423

#ArtificialIntelligence #NeuralComputing #NLP


❇️ @AI_Python_EN
Ever wondered how we translate questions and commands into programs a machine can run? Jonathan Berant gives an overview of (executable) semantic parsing.
#NLP

https://t.co/Mzvks7f9GR
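
As a toy illustration of the core idea (and nothing to do with the models covered in the talk), a "semantic parser" maps a question to an executable program; here a hypothetical rule-based one that returns Python lambdas:

```python
# Toy illustration of executable semantic parsing: parse a question into a small
# "program" (a Python lambda) that can be executed against a database.
people = [{"name": "Ada", "born": 1815}, {"name": "Alan", "born": 1912}]

def parse(question: str):
    q = question.lower()
    if q.startswith("how many"):
        return lambda db: len(db)                       # count query
    if q.startswith("who was born after"):
        year = int(q.rstrip("?").split()[-1])
        return lambda db: [p["name"] for p in db if p["born"] > year]
    raise ValueError("unparseable question")

program = parse("Who was born after 1900?")
print(program(people))  # executing the parsed program -> ['Alan']
```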

❇️ @AI_Python_EN
Very interesting use of #AI to tackle bias in written text by automatically substituting words with more neutral wording. However, one must also consider the challenges and ramifications such technology could have for written language: it can not only accidentally change the meaning of what was written, it can also alter the author's tone and expression, neutralize the point of view, and strip emotion from language.
#NLP
https://arxiv.org/pdf/1911.09709.pdf
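
For intuition only, a toy dictionary-based sketch of the substitution idea; the paper itself trains a neural model, and the lexicon below is purely an illustrative assumption:

```python
# Toy sketch: replace subjective wording with more neutral alternatives.
# The lexicon and replacements here are illustrative assumptions only.
NEUTRAL_SUBSTITUTIONS = {
    "terrible": "poor",
    "amazing": "notable",
    "obviously": None,   # pure emphasis words can simply be dropped
}

def neutralize(sentence: str) -> str:
    out = []
    for token in sentence.split():
        key = token.strip(".,!?").lower()
        if key not in NEUTRAL_SUBSTITUTIONS:
            out.append(token)                          # leave neutral words untouched
        elif NEUTRAL_SUBSTITUTIONS[key] is not None:
            out.append(token.lower().replace(key, NEUTRAL_SUBSTITUTIONS[key]))
    return " ".join(out)

print(neutralize("The proposal is obviously terrible."))  # -> "The proposal is poor."
```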

❇️ @AI_Python_EN
🔥 As you know, ML has proven its importance in many fields: computer vision, NLP, reinforcement learning, adversarial learning, and so on. Unfortunately, there is little work making machine learning accessible to Arabic-speaking people, and Arabic has many complicated features compared to other languages. First, it is written right to left. Second, it contains many letters that most foreigners cannot pronounce, like ض ، غ ، ح ، خ، ظ. Moreover, Arabic uses special marks called diacritics, which help readers pronounce words correctly; for instance, the phrase السَّلامُ عَلَيْكُمْ وَرَحْمَةُ اللَّهِ وَبَرَكَاتُهُ ("peace, mercy, and blessings of God be upon you") carries such marks after most of its letters. The rules governing which diacritic a character takes constitute an entire field called النَّحْوُ الْعَرَبِيُّ (Arabic grammar). Also, unlike English, the letters within an Arabic word are mostly connected, as in اللغة; writing them disconnected, ا ل ل غ ة, makes the word hard to read. ARBML helps close this gap by implementing many open-source projects that bring ML and NLP to Arabic.

https://github.com/zaidalyafeai/ARBML
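
One concrete reason diacritics complicate preprocessing: they are Unicode combining marks, so a common normalization step (illustrative here, not ARBML-specific) strips them like this:

```python
# Toy illustration: Arabic diacritics are combining marks, so they can be
# removed by decomposing the text and dropping combining characters.
import unicodedata

def strip_diacritics(text: str) -> str:
    # NFD separates base letters from combining marks (the diacritics),
    # then every combining character (Unicode category "Mn") is dropped.
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if unicodedata.category(ch) != "Mn")

print(strip_diacritics("السَّلامُ عَلَيْكُمْ"))  # -> "السلام عليكم"
```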

#machinelearning #deeplearning #artificialintelligence #nlp

❇️ @AI_Python_EN
Google Research • Representation Learning for Information Extraction from Templatic Documents such as receipts, bills, and insurance quotes. We propose a novel approach that uses representation learning to tackle the problem of extracting structured information from form-like document images.

Blogpost

https://ai.googleblog.com/2020/06/extracting-structured-data-from.html?m=1

Paper

https://research.google/pubs/pub49122/
We propose an extraction system that uses knowledge of the types of the target fields to generate extraction candidates, and a neural network architecture that learns a dense representation of each candidate based on neighboring words in the document. These learned representations are not only useful in solving the extraction task for unseen document templates from two different domains, but are also interpretable, as we show using loss cases. #machinelearning #deeplearning #datascience #dataengineer #nlp
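
To make the candidate-scoring idea concrete, here is a hedged sketch (my own simplification, not the paper's architecture): each candidate is represented via embeddings of its neighboring words and their relative positions, and a small network scores the candidate against an embedding of the target field.

```python
# Hedged sketch of scoring extraction candidates for a target field; layer sizes,
# the mean-pooling, and the scoring head are illustrative assumptions.
import torch
import torch.nn as nn

class CandidateScorer(nn.Module):
    def __init__(self, vocab_size: int, num_fields: int, dim: int = 64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)    # neighboring-word embeddings
        self.pos_proj = nn.Linear(2, dim)                # (dx, dy) offsets of neighbors
        self.field_emb = nn.Embedding(num_fields, dim)   # which field we are extracting
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, neighbor_ids, neighbor_offsets, field_id):
        # neighbor_ids: (batch, n_neighbors); neighbor_offsets: (batch, n_neighbors, 2)
        neigh = self.word_emb(neighbor_ids) + self.pos_proj(neighbor_offsets)
        cand_repr = neigh.mean(dim=1)                    # dense candidate representation
        field = self.field_emb(field_id)                 # (batch, dim)
        return self.scorer(torch.cat([cand_repr, field], dim=-1)).squeeze(-1)

scorer = CandidateScorer(vocab_size=10_000, num_fields=8)
score = scorer(torch.randint(0, 10_000, (1, 6)), torch.randn(1, 6, 2), torch.tensor([3]))
print(score)  # higher score means the candidate is more likely to fill the target field
```

Because the scorer only sees a candidate's neighborhood rather than a whole template, representations learned this way can, as the post notes, transfer to unseen document templates.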