Top 10 #deeplearning research papers as per this website
https://lnkd.in/dPYayt9
The choice is of course subjective, but we like these, alongside a few hundred other papers.
Remember: it is not the popular but the meaningful, industry-relevant research that is worth paying attention to.
Here's the list:
1. Universal Language Model Fine-tuning for Text Classification
https://lnkd.in/dhj5SyM
2. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
https://lnkd.in/d44kt3Q
3. Deep Contextualized Word Representations
https://lnkd.in/dkP68Fb
4. An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling
https://lnkd.in/dAhYzge
5. Delayed Impact of Fair Machine Learning
https://lnkd.in/dvTvG2s
6. World Models
7. Taskonomy: Disentangling Task Transfer Learning
https://lnkd.in/dYxMjAd
8. Know What You Don't Know: Unanswerable Questions for SQuAD
https://lnkd.in/d--grME
9. Large Scale GAN Training for High Fidelity Natural Image Synthesis
https://lnkd.in/dY6psf4
10. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://lnkd.in/dgtnD7n
#machinelearning #research #deeplearning #artificialintelligence
----------
@machinelearning_tuts
TOPBOTS
Easy-To-Read Summary of Important AI Research Papers of 2018
Trying to keep up with AI research papers can feel like an exercise in futility given how quickly the industry moves. If you're buried in papers to read that you haven't quite gotten around to, you're in luck. To help you catch up, we've summarized 10 important…
Why walk when you can flop?
In one example, a simulated robot was supposed to evolve to travel as quickly as possible. But rather than evolve legs, it simply assembled itself into a tall tower, then fell over. Some of these robots even learned to turn their falling motion into a somersault, adding extra distance.
Blog by Janelle Shane: https://lnkd.in/dQnCVa9
Original paper: https://lnkd.in/dt63hJR
#algorithm #artificialintelligence #machinelearning #reinforcementlearning #technology
----------
@machinelearning_tuts
How do you go from self-play to the real world? Transfer learning.
NeurIPS 2017 Meta Learning Symposium: https://lnkd.in/e7MdpPc
A new research problem has therefore emerged: how can the complexity of these systems (i.e., their design, components, and hyperparameters) be configured automatically so that they perform as well as possible? This is the problem of metalearning. Several approaches have emerged, including those based on Bayesian optimization, gradient descent, reinforcement learning, and evolutionary computation.
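To make the problem concrete, here is a minimal sketch of automatic hyperparameter configuration using random search, one of the simplest baselines in this space (the post itself mentions Bayesian optimization, gradient descent, RL, and evolution as more sophisticated options). The objective function, the hyperparameter names, and the search ranges below are all illustrative assumptions, not taken from any of the linked papers; in practice the objective would be a full training-and-validation run.

```python
# Hypothetical example: configuring hyperparameters automatically by
# random search. The toy objective stands in for "train a model with
# this config and report its validation loss".
import random

def validation_loss(learning_rate, num_layers):
    # Stand-in for a real training run; pretend the best configuration
    # is learning_rate=0.1 with num_layers=3.
    return (learning_rate - 0.1) ** 2 + (num_layers - 3) ** 2

def random_search(trials=200, seed=0):
    rng = random.Random(seed)
    best_config, best_loss = None, float("inf")
    for _ in range(trials):
        config = {
            # Sample learning rate log-uniformly in [1e-4, 1].
            "learning_rate": 10 ** rng.uniform(-4, 0),
            "num_layers": rng.randint(1, 8),
        }
        loss = validation_loss(**config)
        if loss < best_loss:
            best_config, best_loss = config, loss
    return best_config, best_loss

best, loss = random_search()
print(best, loss)
```

The more advanced approaches listed above differ mainly in how the next configuration is chosen: Bayesian optimization fits a surrogate model of the loss surface, while evolutionary methods mutate and recombine the best configurations found so far.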
#artificialintelligence #deeplearning #metalearning #reinforcementlearning
----------
@machinelearning_tuts