The power of deeper networks for expressing natural functions
David Rolnick & Max Tegmark: https://arxiv.org/abs/1705.05502
#DeepLearning #MachineLearning #NeuralComputing
Evolved Art with Transparent, Overlapping, and Geometric Shapes
Berg et al.: https://arxiv.org/abs/1904.06110
#NeuralComputing #EvolutionaryComputing #ArtificialIntelligence
A Mean Field Theory of Batch Normalization
Yang et al.: https://arxiv.org/abs/1902.08129
#ArtificialIntelligence #NeuralComputing #NeuralNetworks #MachineLearning #DynamicalSystems
"Cellular automata as convolutional neural networks"
By William Gilpin: https://arxiv.org/abs/1809.02942
#CellularAutomata #NeuralNetworks #NeuralComputing #EvolutionaryComputing #ComputationalPhysics
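The paper's claim is that cellular-automaton update rules can be expressed as convolutional network layers. As a rough illustration of that correspondence (not the paper's construction), Conway's Game of Life reduces to a single 3x3 convolution followed by a pointwise rule:

    import numpy as np
    from scipy.signal import convolve2d

    # Illustrative only: Game of Life as "convolution + pointwise nonlinearity",
    # the same structural decomposition a CNN layer uses.
    KERNEL = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])  # counts the 8 neighbours

    def life_step(grid):
        """One Game of Life update on a binary grid with periodic boundaries."""
        neighbours = convolve2d(grid, KERNEL, mode="same", boundary="wrap")
        # A cell is alive next step if it has 3 neighbours,
        # or 2 neighbours while already alive.
        return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

    rng = np.random.default_rng(0)
    grid = (rng.random((32, 32)) < 0.3).astype(int)
    for _ in range(10):
        grid = life_step(grid)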
Multi-Sample Dropout for Accelerated Training and Better Generalization
Hiroshi Inoue: https://arxiv.org/abs/1905.09788
#ArtificialIntelligence #NeuralComputing #MachineLearning
Dropout is a simple but efficient regularization technique for achieving better generalization of deep neural networks (DNNs); hence it is widely used in tasks based on DNNs. During training,...
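Going by the abstract, the trick is to draw several independent dropout masks for the same minibatch and average the resulting losses, so each iteration gets more out of one forward pass through the shared layers. A minimal PyTorch-flavoured sketch under that reading (the head structure and the choice of 8 samples are my assumptions, not the paper's exact architecture):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiSampleDropoutHead(nn.Module):
        """Classifier head that applies several independent dropout masks
        to the same features and averages the per-mask losses."""
        def __init__(self, in_features, num_classes, num_samples=8, p=0.5):
            super().__init__()
            self.dropout = nn.Dropout(p)
            self.fc = nn.Linear(in_features, num_classes)  # weights shared across samples
            self.num_samples = num_samples

        def forward(self, features, targets=None):
            logits = [self.fc(self.dropout(features)) for _ in range(self.num_samples)]
            if targets is None:
                return torch.stack(logits).mean(dim=0)     # averaged prediction
            losses = [F.cross_entropy(l, targets) for l in logits]
            return torch.stack(losses).mean()               # averaged loss for backprop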
Sparse Networks from Scratch: Faster Training without Losing Performance
Tim Dettmers and Luke Zettlemoyer: https://arxiv.org/abs/1907.04840
Paper: https://arxiv.org/abs/1907.04840
Blog post: https://timdettmers.com/2019/07/11/sparse-networks-from-scratch/
Code: https://github.com/TimDettmers/sparse_learning
#MachineLearning #NeuralComputing #EvolutionaryComputing
We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance...
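The sparse-learning setting described in the abstract keeps a fixed budget of nonzero weights for the whole run, periodically dropping weak connections and regrowing new ones (the paper's regrowth criterion is based on momentum). A toy sketch of such a prune-and-regrow step, not the authors' exact sparse-momentum algorithm:

    import torch

    def prune_and_regrow(weight, mask, momentum, regrow_frac=0.2):
        """Toy sparse-learning step: keep the number of nonzero weights fixed,
        drop the smallest-magnitude survivors and regrow where the momentum is
        largest. Call under torch.no_grad() if weight is an nn.Parameter."""
        w, m, v = weight.view(-1), mask.view(-1), momentum.view(-1)
        n_swap = max(1, int(regrow_frac * int(m.sum().item())))

        # Prune: zero the mask for the smallest surviving weights.
        scores = torch.where(m.bool(), w.abs(), torch.full_like(w, float("inf")))
        drop = torch.topk(scores, n_swap, largest=False).indices
        m[drop] = 0

        # Regrow: re-enable the masked positions with the largest momentum magnitude.
        cand = torch.where(m.bool(), torch.full_like(v, float("-inf")), v.abs())
        grow = torch.topk(cand, n_swap).indices
        m[grow] = 1
        w[grow] = 0.0          # regrown weights restart from zero
        return mask

Applied every so many updates, with weight * mask used in the forward pass, this keeps the network sparse throughout training.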
A Fine-Grained Spectral Perspective on Neural Networks
Greg Yang and Hadi Salman: https://arxiv.org/abs/1907.10599
Compute eigenvalues: https://github.com/thegregyang/NNspectra
#MachineLearning #NeuralComputing #EvolutionaryComputing
One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers
Morcos et al.: https://arxiv.org/abs/1906.02773
#ArtificialIntelligence #MachineLearning #NeuralComputing
Unconstrained Monotonic Neural Networks
Antoine Wehenkel and Gilles Louppe: https://arxiv.org/abs/1908.05164
#NeuralNetworks #MachineLearning #NeuralComputing
#NeuralNetworks #MachineLearning #NeuralComputing
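The title refers to building a monotonic scalar function as the integral of a strictly positive network output, so monotonicity comes for free without constraining the weights. A simplified sketch of that integral trick (the softplus-positive integrand and trapezoidal quadrature are my stand-ins; the paper uses its own quadrature scheme):

    import torch
    import torch.nn as nn

    class MonotonicNet(nn.Module):
        """Sketch: F(x) = F(0) + integral_0^x f(t) dt with f > 0, so F is increasing."""
        def __init__(self, hidden=64, steps=32):
            super().__init__()
            self.f = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))
            self.offset = nn.Parameter(torch.zeros(1))
            self.steps = steps

        def integrand(self, t):
            return nn.functional.softplus(self.f(t))    # strictly positive derivative

        def forward(self, x):                            # x: (batch, 1)
            t = torch.linspace(0.0, 1.0, self.steps, device=x.device).view(1, -1, 1)
            grid = t * x.unsqueeze(1)                    # integration points from 0 to x
            vals = self.integrand(grid.reshape(-1, 1)).view(x.shape[0], self.steps)
            return self.offset + torch.trapz(vals, grid.squeeze(-1), dim=1).unsqueeze(-1)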
BagNet: Berkeley Analog Generator with Layout Optimizer Boosted with Deep Neural Networks
Hakhamaneshi et al.: https://arxiv.org/abs/1907.10515
#SignalProcessing #MachineLearning #NeuralComputing
Single Headed Attention RNN: Stop Thinking With Your Head
Stephen Merity: https://arxiv.org/abs/1911.11423
#ArtificialIntelligence #NeuralComputing #NLP
The leading approaches in language modeling are all obsessed with TV shows of my youth - namely Transformers and Sesame Street. Transformers this, Transformers that, and over here a bonfire worth...
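As the abstract hints, the model pushes back against multi-head Transformers: it is an LSTM language model with exactly one attention head over its own states. A rough sketch of that shape (layer sizes, the causal-mask details, and the omitted feed-forward block are my simplifications):

    import math
    import torch
    import torch.nn as nn

    class SingleHeadAttentionRNN(nn.Module):
        """LSTM backbone plus a single attention head, roughly the paper's flavour."""
        def __init__(self, vocab_size, d_model=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            self.rnn = nn.LSTM(d_model, d_model, batch_first=True)
            self.q = nn.Linear(d_model, d_model)
            self.k = nn.Linear(d_model, d_model)
            self.v = nn.Linear(d_model, d_model)
            self.out = nn.Linear(d_model, vocab_size)

        def forward(self, tokens):                       # tokens: (batch, seq)
            h, _ = self.rnn(self.embed(tokens))          # (batch, seq, d_model)
            q, k, v = self.q(h), self.k(h), self.v(h)
            scores = q @ k.transpose(1, 2) / math.sqrt(h.size(-1))
            causal = torch.triu(torch.ones_like(scores), diagonal=1).bool()
            attn = torch.softmax(scores.masked_fill(causal, float("-inf")), dim=-1)
            return self.out(h + attn @ v)                # residual + one attention head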
Network of Evolvable Neural Units: Evolving to Learn at a Synaptic Level
Paul Bertens and Seong-Whan Lee: https://arxiv.org/abs/1912.07589
#NeuralComputing #MachineLearning #ArtificialIntelligence
Although Deep Neural Networks have seen great success in recent years through various changes in overall architectures and optimization strategies, their fundamental underlying design remains...
"Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm"
Chelsea Finn and Sergey Levine: https://arxiv.org/abs/1710.11622
#MachineLearning #ArtificialIntelligence #MetaLearning #NeuralComputing
Neuroevolution of Self-Interpretable Agents
Tang et al.: https://arxiv.org/abs/2003.08165
#NeuralComputing #EvolutionaryComputing #MachineLearning
Playing Atari with Six Neurons
Cuccu et al.: https://arxiv.org/abs/1806.01363
#MachineLearning #ArtificialIntelligence #NeuralComputing
Deep reinforcement learning, applied to vision-based problems like Atari games, maps pixels directly to actions; internally, the deep neural network bears the responsibility of both extracting...
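The abstract's point is that deep RL usually makes one network carry both feature extraction and decision-making; the paper splits them, compressing the observation outside the policy so the policy itself can be tiny (six neurons, trained by neuroevolution). A toy illustration of that split, with a stand-in random projection instead of the paper's learned encoding:

    import numpy as np

    class SixNeuronPolicy:
        """Toy decoupling: a fixed encoder compresses the frame into a short code,
        and the policy is a single recurrent layer of just six neurons whose weights
        would be set by an evolution strategy rather than gradient descent."""
        def __init__(self, code_dim, n_actions, seed=0):
            rng = np.random.default_rng(seed)
            self.encode = rng.standard_normal((code_dim, 84 * 84)) / np.sqrt(84 * 84)
            self.w_in = np.zeros((6, code_dim))   # parameters an ES would evolve
            self.w_rec = np.zeros((6, 6))
            self.w_out = np.zeros((n_actions, 6))
            self.state = np.zeros(6)

        def act(self, frame):                      # frame: 84x84 grayscale array
            code = self.encode @ frame.ravel()     # compact observation code
            self.state = np.tanh(self.w_in @ code + self.w_rec @ self.state)
            return int(np.argmax(self.w_out @ self.state))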