XLNet: Generalized Autoregressive Pretraining for Language Understanding
Researchers at Carnegie Mellon and Google Brain introduce #XLNet, a pretraining algorithm for natural language processing systems. Instead of BERT's masked-token objective, XLNet maximizes the likelihood of a sequence over all permutations of its factorization order, and it helps NLP models (here built on Transformer-XL) achieve state-of-the-art results on 18 diverse language-understanding tasks, including question answering and sentiment analysis.
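The core idea, permutation language modeling, can be sketched in a few lines. This is a simplified toy illustration (the function name and setup are assumptions, not the paper's actual implementation): each token is predicted autoregressively, but the prediction order is a random permutation of positions rather than always left-to-right.

```python
import random

def factorization_orders(tokens, n_samples, seed=0):
    """Sample random factorization orders (permutations of positions).

    XLNet's objective averages the autoregressive log-likelihood over
    such sampled orders, so every token learns to condition on every
    possible context. This sketch only shows the sampled orders.
    """
    rng = random.Random(seed)
    positions = list(range(len(tokens)))
    orders = []
    for _ in range(n_samples):
        perm = positions[:]
        rng.shuffle(perm)
        orders.append(perm)
    return orders

tokens = ["New", "York", "is", "a", "city"]
for order in factorization_orders(tokens, 2):
    # Token at position order[t] is predicted conditioned on the
    # tokens at positions order[:t] (in that sampled order).
    for t, p in enumerate(order):
        context = [tokens[q] for q in order[:t]]
        print(f"predict {tokens[p]!r} given {context}")
```

Because each sampled order is a full permutation, the model still sees the complete sequence in expectation, avoiding the [MASK] train/test mismatch of BERT.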
Article: https://towardsdatascience.com/what-is-xlnet-and-why-it-outperforms-bert-8d8fce710335
arXiv: https://arxiv.org/pdf/1906.08237.pdf
#Google #GoogleBrain #CMU #NLP #SOTA #DL