Torch, TF, Lasagne code for audio style transfer.
http://dmitryulyanov.github.io/audio-texture-synthesis-and-style-transfer/
#dl #audio #styletransfer #torch #tf #lasagne
Dmitry Ulyanov
Audio texture synthesis and style transfer
by Dmitry Ulyanov and Vadim Lebedev. We present an extension of the texture synthesis and style transfer method of Leon Gatys et al. to audio. We have developed the same code for three frameworks (well, it is cold in Moscow), choose your favorite: Torch, TensorFlow…
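Roughly, the Gatys-style recipe carries over by matching Gram matrices of conv features computed on a spectrogram instead of an image. Below is a minimal PyTorch sketch of that loss under my own assumptions (a single random, untrained 1-D conv as the feature extractor; kernel size, filter count, loss weights and optimizer are illustrative, not the authors' exact setup):

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of Gram-matrix style transfer on a magnitude spectrogram.
# The single random 1-D conv, its sizes and the loss weights are assumptions,
# not the exact configuration from the blog post.

def gram(feats):
    # feats: (channels, time) -> channel-by-channel correlation matrix
    return feats @ feats.t() / feats.shape[1]

def audio_style_transfer(content_spec, style_spec, n_filters=1024, steps=20):
    # content_spec, style_spec: (freq_bins, time) magnitude spectrograms
    freq_bins = content_spec.shape[0]
    conv = torch.nn.Conv1d(freq_bins, n_filters, kernel_size=11, padding=5)
    for p in conv.parameters():           # random, frozen "feature extractor"
        p.requires_grad_(False)

    def features(spec):
        return F.relu(conv(spec.unsqueeze(0))).squeeze(0)   # (n_filters, time)

    style_gram = gram(features(style_spec))
    content_feats = features(content_spec)

    x = content_spec.detach().clone().requires_grad_(True)  # optimize the spectrogram itself
    opt = torch.optim.LBFGS([x])

    def closure():
        opt.zero_grad()
        fx = features(x)
        loss = F.mse_loss(fx, content_feats) + 1e-2 * F.mse_loss(gram(fx), style_gram)
        loss.backward()
        return loss

    for _ in range(steps):
        opt.step(closure)
    return x.detach()   # invert back to a waveform with Griffin-Lim or similar
```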
Great collection of Data Science learning materials
The list includes free books and online courses on a range of DS-related disciplines:
Machine learning (#ML)
Deep Learning (#DL)
Reinforcement learning (#RL)
#NLP
Tutorials on #Keras, #Tensorflow, #Torch, #PyTorch, #Theano
Notable researchers, papers and even #datasets. It is a great place to start reviewing your knowledge or learning something new.
Link: https://hackmd.io/@chanderA/aiguide
#wheretostart #entrylevel #novice #studycontent #studymaterials #books #MOOC #meta
Forwarded from Graph Machine Learning
Simple scalable graph neural networks
Michael Bronstein continues a marathon of great blog posts on GML. In a new post he describes their recent work on scaling GNNs to large networks. There is a good introduction to sampling-based methods (e.g. SAGE, GraphSAINT, ClusterGCN), which sample a subgraph of the large graph and then train the GNN only on that subgraph.
Then he describes that it can be beneficial to simply precompute the r-hop matrices A^r X and train an MLP on these features. This way you still use the topology of your graph, but you can apply standard mini-batch training with an MLP.
What's cool is that the algorithm is already available in pytorch-geometric as a transform, implemented via SparseTensor matrix multiplication.
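For intuition, here is a minimal PyTorch sketch of that precompute-then-MLP idea (this is not the pytorch-geometric transform itself; the dense toy normalization, hop count and layer sizes are assumptions for illustration):

```python
import torch
import torch.nn as nn

# Toy sketch of "precompute A^r X, then train a plain MLP" (SIGN-style).
# Dense adjacency and symmetric normalization are used here only to keep the
# example short; real large graphs need sparse ops.

def precompute_hops(adj, x, r=3):
    """adj: dense (N, N) adjacency, x: (N, F) node features.
    Returns [X, A X, A^2 X, ..., A^r X] concatenated along the feature dim."""
    n = adj.size(0)
    a_hat = adj + torch.eye(n)                                  # add self-loops
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]  # D^-1/2 Â D^-1/2
    feats, cur = [x], x
    for _ in range(r):
        cur = a_norm @ cur                                      # one more hop
        feats.append(cur)
    return torch.cat(feats, dim=1)                              # (N, (r + 1) * F)

class HopMLP(nn.Module):
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_classes))

    def forward(self, h):
        return self.net(h)

# Usage: the graph is touched once, up front; training is then ordinary
# mini-batch SGD over rows of h, with no message passing in the loop.
# h = precompute_hops(adj, x, r=3)
# model = HopMLP(h.size(1), 256, n_classes)
# logits = model(h[batch_idx])
```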
Medium
Simple scalable graph neural networks
One of the practical challenges of graph neural networks is scalability to large graphs. We present a simple solution for scalable GNNs.
minGPT: a minimal PyTorch re-implementation of OpenAI GPT (Generative Pretrained Transformer) training
by Andrej Karpathy
Small, clean, interpretable and educational, as most of the currently available implementations are a bit sprawling. This implementation is appropriately about 300 lines of code, including boilerplate and a totally unnecessary custom causal self-attention module. All that's going on is that a sequence of indices goes into a sequence of transformer blocks, and a probability distribution over the next index comes out.
With a BPE encoder, distributed training and maybe fp16, this implementation may be able to reproduce GPT-1/GPT-2 results, though he hasn't tried ($$$). GPT-3 is likely out of reach, as his understanding is that it does not fit into GPU memory and requires a more careful model-parallel treatment.
https://twitter.com/karpathy/status/1295410274095095810?s=20
#nlp #karpathy #gpt #torch
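For intuition, a toy PyTorch sketch of that "indices in, next-index distribution out" pipeline; this is not minGPT's actual code, and every module name and hyperparameter below is made up:

```python
import torch
import torch.nn as nn

# Toy decoder-only model: token indices -> embeddings -> causal transformer
# blocks -> logits over the next token. Not minGPT; sizes are illustrative.

class TinyGPT(nn.Module):
    def __init__(self, vocab_size, block_size, d_model=128, n_heads=4, n_layers=4):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Parameter(torch.zeros(1, block_size, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, idx):
        b, t = idx.shape                                   # (batch, seq) of token ids
        x = self.tok_emb(idx) + self.pos_emb[:, :t]
        # causal mask: position i may only attend to positions <= i
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool, device=idx.device),
                            diagonal=1)
        x = self.blocks(x, mask=causal)
        return self.head(x)                                # (batch, seq, vocab) logits

# Usage: next-token prediction with cross-entropy on shifted targets.
# model = TinyGPT(vocab_size=256, block_size=128)
# logits = model(idx)                                      # idx: (B, T) long tensor
# loss = nn.functional.cross_entropy(logits[:, :-1].reshape(-1, 256),
#                                    idx[:, 1:].reshape(-1))
```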
Twitter
Andrej Karpathy
I wrote a minimal/educational GPT training library in PyTorch, am calling it minGPT as it is only around ~300 lines of code: https://t.co/79S9lShJRN +demos for addition and character-level language model. (quick weekend project, may contain sharp edges)