What's wrong with the transformer architecture: an overview
How the Transformers broke NLP leaderboards and why that can be bad for industry.
Link: https://hackingsemantics.xyz/2019/leaderboards/
#NLP #overview #transformer #BERT #XLNet
With the huge Transformer-based models such as BERT, GPT-2, and XLNet, are we losing track of how the state-of-the-art performance is achieved?
Top 8 trends from ICLR 2019
Overview of trends from #ICLR2019:
1. Inclusivity
2. Unsupervised representation learning & transfer learning
3. Retro ML
4. RNNs are losing their luster with researchers
5. GANs are still going strong
6. The lack of biologically inspired deep learning
7. Reinforcement learning is still the most popular topic by submissions
8. Most accepted papers will be quickly forgotten
Link: https://huyenchip.com/2019/05/12/top-8-trends-from-iclr-2019.html
#ICLR #overview