How Xnor.ai Managed to Squeeze a Deep Neural Network onto a $20 Wyze Camera
Blog by Carlo C del Mundo: https://medium.com/@xnor_ai/how-xnor-ai-managed-to-squeeze-a-deep-neural-network-onto-a-20-wyze-camera-88a5f9fc3466
#neuralnetwork #deeplearning #machinelearning
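The efficiency trick associated with Xnor.ai is binarized networks (XNOR-Net), where weights and activations in {-1, +1} let a dot product be computed with bitwise XNOR and a popcount instead of multiplies. A toy sketch of that identity (illustrative only; not the blog's actual implementation):

```python
import numpy as np

def pack(v):
    """Pack a {-1, +1} vector into an integer bit mask (bit 1 = +1)."""
    bits = 0
    for i, x in enumerate(v):
        if x > 0:
            bits |= 1 << i
    return bits

def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two packed {-1, +1} vectors via XNOR + popcount:
    agreements minus disagreements."""
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ b_bits) & mask    # 1 wherever the bits agree
    matches = bin(xnor).count("1")      # popcount
    return 2 * matches - n

a = np.array([1, -1, 1, 1])
b = np.array([1, 1, -1, 1])
assert binary_dot(pack(a), pack(b), 4) == int(a @ b)
```

On real hardware the XNOR and popcount run on 32 or 64 weights per instruction, which is where the speed and memory savings on a device like the Wyze camera come from.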
"HighRes-net for Multi-Frame Super-Resolution by Recursive Fusion"
PyTorch implementation of HighRes-net, a neural network for multi-frame super-resolution, trained and tested on the European Space Agency's Kelvin competition. GitHub, by the ElementAI AI for Good lab and Mila: https://github.com/ElementAI/HighRes-net
Blog, by ElementAI's Team AI for Good: https://www.elementai.com/news/2019/computer-enhance-please
#ai4good #neuralnetwork #pytorch
MintNet: Building Invertible Neural Networks with Masked Convolutions
Song et al.: https://arxiv.org/abs/1907.07945
#machinelearning #neuralnetworks #neuralnetwork
Probing Neural Network Comprehension of Natural Language Arguments
"We are surprised to find that BERT's peak performance of 77% on the Argument Reasoning Comprehension Task reaches just three points below the average untrained human baseline. However, we show that this result is entirely accounted for by exploitation of spurious statistical cues in the dataset. We analyze the nature of these cues and demonstrate that a range of models all exploit them."
Timothy Niven and Hung-Yu Kao: https://arxiv.org/abs/1907.07355
#naturallanguage #neuralnetwork #reasoning #unsupervisedlearning
Let’s code a Neural Network in plain NumPy
Blog by Piotr Skalski: https://towardsdatascience.com/lets-code-a-neural-network-in-plain-numpy-ae7e74410795
#artificialintelligence #neuralnetwork #numpy
@ArtificialIntelligenceArticles
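A minimal sketch of the kind of from-scratch network the blog walks through: a tiny two-layer sigmoid net trained on XOR with plain NumPy. The architecture and hyperparameters here are illustrative choices, not taken from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR with a 2-4-1 network, NumPy only.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
mse = lambda pred: float(np.mean((pred - y) ** 2))

initial_loss = mse(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2))

lr = 1.0
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule through the MSE loss and both sigmoid layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Plain gradient-descent updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

final_loss = mse(out)
```

Everything an autodiff framework normally hides (forward pass, backward pass, parameter updates) is spelled out in those few lines, which is the point of the exercise.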
Mysteries of Neural Networks Part III
"Rules-of-thumb for building a Neural Network"
Blog by Chitta Ranjan: https://towardsdatascience.com/17-rules-of-thumb-for-building-a-neural-network-93356f9930af
#MachineLearning #NeuralNetwork #TensorFlow
An Actor-Critic-Attention Mechanism for Deep Reinforcement Learning in Multi-view Environments
Elaheh Barati and Xuewen Chen: https://arxiv.org/abs/1907.09466
#reinforcementlearning #neuralnetwork #neuralnetworks #deeplearning
Playing the lottery with rewards and multiple languages: lottery tickets in RL and NLP
Yu et al.: https://arxiv.org/abs/1906.02768
#nlp #neuralnetwork #reinforcementlearning #neuralnetworks
NeuPDE: Neural Network Based Ordinary and Partial Differential Equations for Modeling Time-Dependent Data
Sun et al.: https://arxiv.org/abs/1908.03190
#ArtificialIntelligence #NeuralNetwork #PartialDifferentialEquations
Write With Transformer
See how a modern neural network auto-completes your text 🤗
With the brand new GPT-2 large!
Built by the phenomenal Hugging Face team: https://transformer.huggingface.co
H/T: Lysandre Debut
#GPT2 #NeuralNetwork #Transformer
"Two neural nets learn to communicate through their own emergent visual language"
Here set in clay tablets.
By Joel Simon: https://github.com/joel-simon/dimensions-of-dialogue/blob/master/emergent_characters.ipynb
#language #neuralnetwork #deeplearning
A Probabilistic Representation of Deep Learning
Xinjie Lan and Kenneth E. Barner: https://arxiv.org/abs/1908.09772v1
#deeplearning #machinelearning #neuralnetwork
Attention? Attention!
Blog by Lilian Weng: https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html
#machinelearning #neuralnetwork #transformers
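The core operation the post builds up to, scaled dot-product attention, fits in a few lines of NumPy. A minimal sketch (shapes and random inputs are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))  # 3 queries of dimension 8
K = rng.normal(size=(5, 8))  # 5 keys
V = rng.normal(size=(5, 8))  # 5 values
out, w = attention(Q, K, V)
assert out.shape == (3, 8)
```

Each output row is a weighted average of the value vectors, with weights set by how well the query matches each key; the 1/sqrt(d) scaling keeps the softmax from saturating as the dimension grows.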
Extreme Language Model Compression with Optimal Subwords and Shared Projections
Zhao et al.: https://arxiv.org/abs/1909.11687
#neuralnetwork #bert #nlp
Emergent properties of the local geometry of neural loss landscapes
Stanislav Fort and Surya Ganguli: https://arxiv.org/abs/1910.05929
#MachineLearning #NeuralNetwork #DeepLearning
On Iterative Neural Network Pruning, Reinitialization, and the Similarity of Masks
Michela Paganini, Jessica Forde: https://arxiv.org/abs/2001.05050
#ArtificialIntelligence #MachineLearning #NeuralNetwork
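For context, the iterative magnitude pruning that this line of work (following the lottery-ticket literature) studies can be sketched in a few lines. This is a generic illustration, not the paper's exact procedure:

```python
import numpy as np

def prune_round(weights, mask, frac=0.2):
    """One round of iterative magnitude pruning: zero out the smallest
    `frac` fraction of the weights that are still unpruned."""
    alive = np.abs(weights[mask])
    k = int(frac * alive.size)          # how many survivors to remove
    if k == 0:
        return mask
    threshold = np.sort(alive)[k - 1]   # k-th smallest surviving magnitude
    return mask & (np.abs(weights) > threshold)

rng = np.random.default_rng(0)
W = rng.normal(size=100)
mask = np.ones_like(W, dtype=bool)
for _ in range(3):                      # three rounds at 20% each
    mask = prune_round(W, mask, frac=0.2)
# roughly 0.8**3 of the weights survive
```

Between rounds, lottery-ticket-style experiments retrain the surviving weights (often rewound to their initial values), then prune again; the paper examines what these masks look like and how similar they are across runs.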