COBRA: Data-Efficient Model-Based RL through Unsupervised Object Discovery and Curiosity-Driven Exploration
Watters et al.: https://arxiv.org/abs/1905.09275
#MachineLearning #UnsupervisedLearning #ArtificialIntelligence
Best Paper Award, ICML 2019
Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
Locatello et al.: https://arxiv.org/pdf/1811.12359.pdf
#deeplearning #disentangledrepresentations #unsupervisedlearning
Neurobiologists train artificial neural networks to map the brain
http://bit.do/eVNef
#cellularmorphologyneuralnetworks #unsupervisedlearning
#analyzinglargedatasets #CNN #AI
The human brain consists of about 86 billion nerve cells and about as many glial cells. In addition, there are about 100 trillion connections between the nerve cells alone. While mapping all the connections of a human brain remains out of reach, scientists have started to address the problem on a smaller scale. Through the development of serial block-face scanning electron microscopy, all cells and connections of a particular brain area can now be automatically surveyed and displayed in a three-dimensional image.
“It can take several months to survey a 0.3 mm³ piece of brain under an electron microscope. Depending on the size of the brain, this seems like a lot of time for a tiny piece. But even this contains thousands of cells. Such a data set would also require almost 100 terabytes of storage space. However, it is not the collection and storage but rather the data analysis that is the difficult part.”
CS294-158 Deep Unsupervised Learning Spring 2019
Instructors: Pieter Abbeel, Peter Chen, Jonathan Ho, Aravind Srinivas - https://sites.google.com/view/berkeley-cs294-158-sp19/home
#unsupervisedlearning #machinelearning #deeplearning
About: This course will cover two areas of deep learning in which labeled data is not required: Deep Generative Models and Self-supervised Learning. Recent advances in generative models have made it possible to realistically model high-dimensional raw data…
What Does BERT Look At? An Analysis of BERT's Attention
Clark et al.: https://arxiv.org/abs/1906.04341
Code: https://github.com/clarkkev/attention-analysis
#bert #naturallanguage #unsupervisedlearning
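For orientation, the kind of attention maps the paper analyzes can be extracted from a pretrained BERT in a few lines. The sketch below uses the Hugging Face transformers library as an illustration; the authors' own extraction and analysis scripts are in the linked repository.

```python
import torch
from transformers import BertModel, BertTokenizer

# Illustrative sketch: pull per-layer, per-head attention maps out of pretrained BERT.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

attentions = outputs.attentions  # tuple of 12 tensors, each (batch, num_heads, seq_len, seq_len)
print(len(attentions), attentions[0].shape)
```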
Probing Neural Network Comprehension of Natural Language Arguments
"We are surprised to find that BERT's peak performance of 77% on the Argument Reasoning Comprehension Task reaches just three points below the average untrained human baseline. However, we show that this result is entirely accounted for by exploitation of spurious statistical cues in the dataset. We analyze the nature of these cues and demonstrate that a range of models all exploit them."
Timothy Niven and Hung-Yu Kao: https://arxiv.org/abs/1907.07355
#naturallanguage #neuralnetwork #reasoning #unsupervisedlearning
"We are surprised to find that BERT's peak performance of 77% on the Argument Reasoning Comprehension Task reaches just three points below the average untrained human baseline. However, we show that this result is entirely accounted for by exploitation of spurious statistical cues in the dataset. We analyze the nature of these cues and demonstrate that a range of models all exploit them."
Timothy Niven and Hung-Yu Kao: https://arxiv.org/abs/1907.07355
#naturallanguage #neuralnetwork #reasoning #unsupervisedlearning
Self-supervised Learning for Video Correspondence Flow
Zihang Lai and Weidi Xie: https://zlai0.github.io/CorrFlow/
#MachineLearning #SelfSupervisedLearning #UnsupervisedLearning
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Liu et al.: https://arxiv.org/abs/1907.11692
#bert #naturallanguageprocessing #unsupervisedlearning
Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private...
Tensorflow implementation of U-GAT-IT
Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation
GitHub, by Junho Kim: https://github.com/taki0112/UGATIT
#tensorflow #unsupervisedlearning #generativemodels
Official TensorFlow implementation of U-GAT-IT (ICLR 2020).
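The "Adaptive Layer-Instance Normalization" (AdaLIN) in the title mixes instance-norm and layer-norm statistics with a learned ratio. A minimal sketch of that idea, written in PyTorch rather than the repo's TensorFlow and assuming gamma/beta are produced elsewhere (e.g., by a fully connected head over attention features), might look like this:

```python
import torch
import torch.nn as nn

class AdaLIN(nn.Module):
    """Sketch of Adaptive Layer-Instance Normalization: blend IN and LN with a learned rho."""
    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.rho = nn.Parameter(torch.full((1, num_features, 1, 1), 0.9))

    def forward(self, x, gamma, beta):
        # Instance-norm statistics (per sample, per channel) and layer-norm statistics (per sample).
        in_mean = x.mean(dim=[2, 3], keepdim=True)
        in_var = x.var(dim=[2, 3], keepdim=True, unbiased=False)
        ln_mean = x.mean(dim=[1, 2, 3], keepdim=True)
        ln_var = x.var(dim=[1, 2, 3], keepdim=True, unbiased=False)
        out_in = (x - in_mean) / torch.sqrt(in_var + self.eps)
        out_ln = (x - ln_mean) / torch.sqrt(ln_var + self.eps)
        rho = self.rho.clamp(0.0, 1.0)
        out = rho * out_in + (1 - rho) * out_ln
        # gamma/beta are assumed to come from a separate layer; shape (N, C).
        return out * gamma.view(x.size(0), -1, 1, 1) + beta.view(x.size(0), -1, 1, 1)
```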
Energy-Based Adversarial Training and Video Prediction, NeurIPS 2016
By Yann LeCun, Facebook AI Research
YouTube: https://youtu.be/x4sI5qO6O2Y
#DeepLearning #EnergyBasedModels #UnsupervisedLearning
NIPS 2016 Workshop on Adversarial Training https://arxiv.org/abs/1609.03126 We introduce the "Energy-based Generative Adversarial Network" model (EBGAN) whic...
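As a rough reminder of the EBGAN objective the talk covers: the discriminator is treated as an energy function (in the paper, an autoencoder's reconstruction error) that should assign low energy to real data and energy above a margin to generated samples. A hedged sketch, not the talk's or paper's code:

```python
import torch

# Sketch of the EBGAN objectives. D(x) returns a scalar energy per sample
# (e.g. an autoencoder reconstruction error); m is the margin hyperparameter.
def ebgan_losses(D, G, x_real, z, m=10.0):
    energy_real = D(x_real)
    energy_fake = D(G(z).detach())  # detach so the discriminator loss does not update G
    loss_D = energy_real.mean() + torch.clamp(m - energy_fake, min=0).mean()
    loss_G = D(G(z)).mean()         # the generator lowers the energy of its own samples
    return loss_D, loss_G
```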
Unsupervised Separation of Dynamics from Pixels
Silvia Chiappa and Ulrich Paquet: https://arxiv.org/abs/1907.12906
#DeepLearning #MachineLearning #UnsupervisedLearning
We present an approach to learn the dynamics of multiple objects from image sequences in an unsupervised way. We introduce a probabilistic model that first generates noisy positions for each object...
U-GAT-IT
Official TensorFlow Implementation: https://github.com/taki0112/UGATIT
#DeepLearning #Tensorflow #UnsupervisedLearning
Compressing BERT for faster prediction
Blog by Sam Sucik: https://blog.rasa.com/compressing-bert-for-faster-prediction-2/
#ArtificialIntelligence #NaturalLanguageProcessing #UnsupervisedLearning
Learn how to make BERT smaller and faster
Let's look at compression methods for neural networks, such as quantization and pruning. Then, we apply one to BERT using TensorFlow Lite.
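For a sense of what the TensorFlow Lite step looks like, here is a minimal post-training (dynamic-range) quantization sketch; the SavedModel path is a placeholder, and the blog post covers the details and caveats of compressing BERT this way.

```python
import tensorflow as tf

# Minimal post-training quantization sketch with TensorFlow Lite.
# "path/to/bert_saved_model" is a placeholder for a fine-tuned BERT exported as a SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/bert_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization of weights
tflite_model = converter.convert()

with open("bert_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```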
"Hamiltonian Neural Networks"
Greydanus et al.: https://arxiv.org/abs/1906.01563
Blog: https://greydanus.github.io/2019/05/15/hamiltonian-nns/
#Hamiltonian #NeuralNetworks #UnsupervisedLearning
Even though neural networks enjoy widespread use, they still struggle to learn the basic laws of physics. How might we endow them with better inductive biases? In this paper, we draw inspiration...
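The core idea: instead of predicting time derivatives directly, the network outputs a scalar H(q, p) and the dynamics are read off via Hamilton's equations, dq/dt = ∂H/∂p and dp/dt = -∂H/∂q. A minimal PyTorch sketch of that training setup (illustrative only, not the authors' code accompanying the blog post):

```python
import torch
import torch.nn as nn

class HNN(nn.Module):
    """Learns a scalar Hamiltonian H(q, p); dynamics follow Hamilton's equations."""
    def __init__(self, dim=2, hidden=200):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def time_derivatives(self, x):
        # x = [q, p] with scalar q and p here; returns [dq/dt, dp/dt] = [dH/dp, -dH/dq].
        x = x.requires_grad_(True)
        H = self.net(x).sum()
        dH = torch.autograd.grad(H, x, create_graph=True)[0]
        dq_dt, dp_dt = dH[..., 1:], -dH[..., :1]
        return torch.cat([dq_dt, dp_dt], dim=-1)

# Training step: regress the Hamiltonian-derived derivatives onto observed ones.
model = HNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x, dx_dt = torch.randn(64, 2), torch.randn(64, 2)  # placeholder (q, p) states and derivatives
loss = ((model.time_derivatives(x) - dx_dt) ** 2).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```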
Visualizing and Measuring the Geometry of BERT
Coenen et al.: https://arxiv.org/abs/1906.02715
#BERT #NaturalLanguageProcessing #UnsupervisedLearning
Transformer architectures show significant promise for natural language processing. Given that a single pretrained model can be fine-tuned to perform well on many different tasks, these networks...
Deep causal representation learning for unsupervised domain adaptation
Moraffah et al.: https://arxiv.org/abs/1910.12417
#DeepLearning #MachineLearning #UnsupervisedLearning
Studies show that the representations learned by deep neural networks can be transferred to similar prediction tasks in other domains for which we do not have enough labeled data. However, as we...
Momentum Contrast for Unsupervised Visual Representation Learning
He et al.: https://arxiv.org/abs/1911.05722
#ArtificialIntelligence #DeepLearning #UnsupervisedLearning
We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue...
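The mechanism in the abstract, a queue of encoded keys plus a momentum-updated key encoder scored against queries with a contrastive (InfoNCE) loss, can be sketched roughly as follows. This is an illustrative sketch, not the released implementation; it assumes query and key encoders f_q and f_k share an architecture and that `queue` is a pre-filled K x C tensor of normalized keys.

```python
import torch
import torch.nn.functional as F

def moco_step(f_q, f_k, x_q, x_k, queue, m=0.999, tau=0.07):
    q = F.normalize(f_q(x_q), dim=1)                     # queries: N x C
    with torch.no_grad():
        # Momentum update of the key encoder toward the query encoder.
        for p_k, p_q in zip(f_k.parameters(), f_q.parameters()):
            p_k.data = m * p_k.data + (1.0 - m) * p_q.data
        k = F.normalize(f_k(x_k), dim=1)                  # keys: N x C
    l_pos = (q * k).sum(dim=1, keepdim=True)              # positive logits: N x 1
    l_neg = q @ queue.T                                   # negative logits against queued keys: N x K
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)  # positives at index 0
    loss = F.cross_entropy(logits, labels)
    queue = torch.cat([k, queue], dim=0)[: queue.size(0)]  # enqueue new keys, dequeue the oldest
    return loss, queue
```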
The Illustrated GPT-2 (Visualizing Transformer Language Models)
Blog by Jay Alammar: https://jalammar.github.io/illustrated-gpt2/
#ArtificialIntelligence #NLP #UnsupervisedLearning
"Fast Task Inference with Variational Intrinsic Successor Features"
Hansen et al.: https://arxiv.org/abs/1906.05030
#DeepLearning #ReinforcementLearning #UnsupervisedLearning