Modeling question asking using neural program generation
Ziyun Wang and Brenden M. Lake : https://arxiv.org/abs/1907.09899
#artificialintelligence #naturallanguageprocessing #reinforcementlearning
Real-Time Voice Cloning
GitHub, by Corentin Jemine : https://github.com/CorentinJ/Real-Time-Voice-Cloning
#deeplearning #pytorch #tensorflow #naturallanguageprocessing
GitHub
GitHub - CorentinJ/Real-Time-Voice-Cloning
Clone a voice in 5 seconds to generate arbitrary speech in real-time
"Advanced NLP with spaCy"
By Ines Montani : https://course.spacy.io
#machinelearning #nlp #naturallanguageprocessing
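For context, a tiny spaCy sketch of the kind of pipeline the course builds on; the course itself goes much further (matchers, custom components, training). Assumes the small English model has been downloaded separately.

```python
# Minimal spaCy usage sketch (not course material).
# Requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Ines Montani created an interactive spaCy course.")

# Part-of-speech tags and dependency labels per token.
for token in doc:
    print(token.text, token.pos_, token.dep_)

# Named entities recognized in the sentence.
for ent in doc.ents:
    print(ent.text, ent.label_)
```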
CS224N Natural Language Processing with Deep Learning
Stanford University School of Engineering : https://www.youtube.com/playlist?list=PLU40WL8Ol94IJzQtileLTqGZuXtGlLMP_
#NaturalLanguageProcessing #MachineLearning #DeepLearning
Predicting Prosodic Prominence from Text with Pre-trained Contextualized Word Representations
Talman et al.: https://arxiv.org/abs/1908.02262
GitHub: https://github.com/Helsinki-NLP/prosody
#dataset #machinelearning #naturallanguageprocessing
arXiv.org
Predicting Prosodic Prominence from Text with Pre-trained...
In this paper we introduce a new natural language processing dataset and benchmark for predicting prosodic prominence from written text. To our knowledge this will be the largest publicly...
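One hypothetical way to use such a dataset, sketched below: treat prominence prediction as token classification on top of a pretrained encoder. The label count and data handling here are illustrative only, not the repo's actual setup.

```python
# Hypothetical framing of prosodic prominence prediction as token
# classification; the Helsinki-NLP/prosody repo's format may differ.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# num_labels=3 is an assumed label set (e.g. none / weak / strong prominence).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=3
)
model.eval()

words = ["And", "now", "for", "something", "completely", "different"]
inputs = tokenizer(words, is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_subword_tokens, 3)

# Predicted prominence class per subword token (untrained head, so random).
print(logits.argmax(-1))
```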
Compressing BERT for faster prediction
Blog by Sam Sucik : https://blog.rasa.com/compressing-bert-for-faster-prediction-2/
#ArtificialIntelligence #NaturalLanguageProcessing #UnsupervisedLearning
Rasa
Learn how to make BERT smaller and faster
Let's look at compression methods for neural networks, such as quantization and pruning. Then, we apply one to BERT using TensorFlow Lite.
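For a sense of what the TensorFlow Lite step looks like, here is a minimal post-training dynamic-range quantization sketch; the SavedModel path is a placeholder and the blog's exact conversion settings may differ.

```python
# Minimal TFLite post-training quantization sketch, in the spirit of the post.
import tensorflow as tf

# Load a SavedModel export of the model to be compressed (placeholder path).
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/bert_savedmodel")

# Dynamic-range quantization: weights stored as 8-bit integers,
# activations kept in float at inference time.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open("bert_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```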
Visualizing and Measuring the Geometry of BERT
Coenen et al.: https://arxiv.org/abs/1906.02715
#BERT #NaturalLanguageProcessing #UnsupervisedLearning
arXiv.org
Visualizing and Measuring the Geometry of BERT
Transformer architectures show significant promise for natural language processing. Given that a single pretrained model can be fine-tuned to perform well on many different tasks, these networks...
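The paper studies the geometry of BERT's context embeddings. The sketch below (not the authors' code) pulls per-token hidden states with recent versions of the Hugging Face transformers library and projects them with PCA for a quick look.

```python
# Rough sketch: extract BERT context embeddings and inspect their geometry.
import torch
from sklearn.decomposition import PCA
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

sentence = "The bank raised interest rates near the river bank."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Per-token context embeddings from the final layer: (seq_len, 768).
embeddings = outputs.last_hidden_state[0]

# 2D projection to eyeball the geometry, e.g. the two senses of "bank".
points = PCA(n_components=2).fit_transform(embeddings.numpy())
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, (x, y) in zip(tokens, points):
    print(f"{token:12s} {x:+.2f} {y:+.2f}")
```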
ACL 2019 Thoughts and Notes
By Vinit Ravishankar, Daniel Hershcovich; edited by Artur Kulmizev, Mostafa Abdou : https://supernlp.github.io/2019/08/16/acl-2019/
#naturallanguageprocessing #machinelearning #deeplearning
On Extractive and Abstractive Neural Document Summarization with Transformer Language Models
Sandeep Subramanian, Raymond Li, Jonathan Pilault, Christopher Pal : https://arxiv.org/abs/1909.03186
#transformer #naturallanguageprocessing #machinelearning
Transformers: State-of-the-art Natural Language Processing
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Jamie Brew : https://arxiv.org/abs/1910.03771
#Transformers #NaturalLanguageProcessing #PyTorch #TensorFlow
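A minimal usage sketch of the library the paper describes, assuming the pipeline API of recent transformers releases; the model choices are illustrative defaults, not anything prescribed by the paper.

```python
# Minimal sketch of the Hugging Face transformers pipeline API.
from transformers import pipeline

# Sentiment analysis with the library's default pretrained model.
classifier = pipeline("sentiment-analysis")
print(classifier("Transformers makes state-of-the-art NLP easy to use."))

# Masked language modeling with BERT.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("The goal of the library is to make NLP more [MASK]."))
```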
The State of Transfer Learning in NLP
By Sebastian Ruder : http://ruder.io/state-of-transfer-learning-in-nlp/
#TransferLearning #NaturalLanguageProcessing #NLP
ruder.io
The State of Transfer Learning in NLP
This post expands on the NAACL 2019 tutorial on Transfer Learning in NLP. It highlights key insights and takeaways and provides updates based on recent work.
"MultiFiT: Efficient Multi-lingual Language Model Fine-tuning"
Eisenschlos et al.: https://arxiv.org/abs/1909.04761
Post: http://nlp.fast.ai/classification/2019/09/10/multifit.html
#ArtificialIntelligence #MachineLearning #NaturalLanguageProcessing
Evaluating the Factual Consistency of Abstractive Text Summarization
Kryscinski et al.: https://arxiv.org/abs/1910.12840
#ArtificialIntelligence #DeepLearning #NaturalLanguageProcessing
arXiv.org
Evaluating the Factual Consistency of Abstractive Text Summarization
Currently used metrics for assessing summarization algorithms do not account for whether summaries are factually consistent with source documents. We propose a weakly-supervised, model-based...
Q8BERT: Quantized 8Bit BERT
Zafrir et al.: https://arxiv.org/abs/1910.06188
#NaturalLanguageProcessing #NLP #Transformer
arXiv.org
Q8BERT: Quantized 8Bit BERT
Recently, pre-trained Transformer based language models such as BERT and GPT, have shown great improvement in many Natural Language Processing (NLP) tasks. However, these models contain a large...
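Q8BERT itself uses quantization-aware training; the sketch below is only a simpler stand-in, post-training dynamic quantization of BERT's linear layers in PyTorch, to convey the 8-bit idea and the size savings.

```python
# Not the paper's quantization-aware scheme: a post-training dynamic
# quantization sketch that runs BERT's nn.Linear layers in int8.
import os
import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

# Swap nn.Linear layers for dynamically quantized int8 versions.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def size_mb(m, path="tmp.pt"):
    # Serialize the weights and report the file size in MB.
    torch.save(m.state_dict(), path)
    return os.path.getsize(path) / 1e6

print(f"fp32: {size_mb(model):.1f} MB  int8: {size_mb(quantized):.1f} MB")
```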
Language Models as Knowledge Bases?
Petroni et al.: https://arxiv.org/abs/1909.01066
#Transformers #NaturalLanguageProcessing #MachineLearning
arXiv.org
Language Models as Knowledge Bases?
Recent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks. Whilst learning linguistic knowledge, these models may also be...
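The probing idea in a nutshell, as a rough sketch (not the paper's LAMA code): feed cloze statements to a pretrained masked language model and read off its top predictions as if querying a knowledge base.

```python
# Rough illustration of cloze-style knowledge probing with a masked LM.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prompt in [
    "The capital of France is [MASK].",
    "Dante was born in [MASK].",
]:
    top = fill_mask(prompt)[0]  # highest-scoring completion
    print(f"{prompt} -> {top['token_str']} ({top['score']:.2f})")
```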
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
Clark et al.: https://openreview.net/forum?id=r1xMH1BtvB
#ArtificialIntelligence #DeepLearning #NaturalLanguageProcessing
OpenReview
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than...
A text encoder trained to distinguish real input tokens from plausible fakes efficiently learns effective language representations.
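A toy sketch of the replaced-token-detection objective: random corruptions stand in for the paper's small generator network, so only the discriminator side of the setup is illustrated.

```python
# Toy replaced-token-detection loss (ELECTRA-style discriminator only).
import torch
import torch.nn as nn

vocab_size, hidden, seq_len, batch = 1000, 64, 16, 8

class TokenDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(hidden, 1)  # per-token real/replaced logit

    def forward(self, ids):
        return self.head(self.encoder(self.embed(ids))).squeeze(-1)

original = torch.randint(0, vocab_size, (batch, seq_len))

# Corrupt ~15% of positions with random tokens (stand-in for the generator).
mask = torch.rand(batch, seq_len) < 0.15
corrupted = torch.where(
    mask, torch.randint(0, vocab_size, (batch, seq_len)), original
)

# A token counts as "replaced" only if corruption actually changed it.
labels = (corrupted != original).float()

model = TokenDiscriminator()
loss = nn.BCEWithLogitsLoss()(model(corrupted), labels)
loss.backward()
print(f"replaced-token-detection loss: {loss.item():.3f}")
```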