@Machine_learn
Deep unfolding network for image super-resolution
Deep unfolding network inherits the flexibility of model-based methods to super-resolve blurry, noisy images for different scale factors via a single model, while maintaining the advantages of learning-based methods.
Github: https://github.com/cszn/USRNet
Paper: https://arxiv.org/pdf/2003.10428.pdf
@Machine_learn
TVR: A Large-Scale Dataset for Video-Subtitle Moment Retrieval
Github: https://github.com/jayleicn/TVRetrieval
PyTorch implementation: https://github.com/jayleicn/TVCaption
Paper: https://arxiv.org/abs/2001.09099v1
@Machine_learn
Free course Deep Unsupervised Learning
https://sites.google.com/view/berkeley-cs294-158-sp20/home
CS294-158-SP20 Deep Unsupervised Learning Spring 2020
About: This course will cover two areas of deep learning in which labeled data is not required: Deep Generative Models and Self-supervised Learning. Recent advances in generative models have made it possible to realistically model high-dimensional raw data…
@Machine_learn
Hidden Markov Model - Implemented from scratch
https://zerowithdot.com/hidden-markov-model/
Python step-by-step implementation of Hidden Markov Model from scratch.
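The core of any from-scratch HMM implementation is the forward algorithm, which sums over all hidden-state paths to score an observation sequence. A minimal sketch (all probability values below are made-up illustration numbers, not taken from the linked article):

```python
# Minimal sketch of the forward algorithm for a discrete HMM.
import numpy as np

def forward(obs, pi, A, B):
    """Return P(obs sequence) under the HMM.
    pi: initial state probs (S,), A: transitions (S, S), B: emissions (S, O)."""
    alpha = pi * B[:, obs[0]]          # joint prob of first symbol and each state
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate one step, then emit
    return alpha.sum()

# Two hidden states, two observable symbols (hypothetical values)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(forward([0, 1, 0], pi, A, B))  # likelihood of observing 0, 1, 0
```

For long sequences the alphas underflow, which is why full implementations (including the one linked above) work with scaled or log-space probabilities.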
@Machine_learn
Regularizing Meta-Learning via Gradient Dropout
Code: https://github.com/hytseng0509/DropGrad
Paper: https://arxiv.org/abs/2004.05859
@Machine_learn
Machine Learning and Data Science free online courses to do in quarantine
A. Beginner courses
1. Machine Learning
2. Machine Learning with Python
B. Intermediate courses
3. Neural Networks and Deep Learning
4. Convolutional Neural Networks
C. Advanced course
5. Advanced Machine Learning Specialization
@Machine_learn
Local-Global Video-Text Interactions for Temporal Grounding
Github: https://github.com/JonghwanMun/LGI4temporalgrounding
Paper: https://arxiv.org/abs/2004.07514
@Machine_learn
In a chord diagram (or radial network), entities are arranged radially as segments with their relationships visualised by arcs that connect them. The size of the segments illustrates the numerical proportions, whilst the size of the arc illustrates the significance of the relationships.
Chord diagrams are useful when trying to convey relationships between different entities, and they can be beautiful and eye-catching.
https://github.com/shahinrostami/chord
#python
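The data behind a chord diagram is just a symmetric co-occurrence matrix: each row sum gives a segment's size (the numerical proportion above), and each off-diagonal entry gives an arc's width. A sketch with hypothetical names and values (for actual rendering, see the linked `chord` package's README):

```python
# Hypothetical co-occurrence matrix for three entities.
import numpy as np

names = ["A", "B", "C"]
matrix = np.array([[0, 5, 2],
                   [5, 0, 3],
                   [2, 3, 0]])

segment_size = matrix.sum(axis=1)       # size of each radial segment
proportions = segment_size / segment_size.sum()  # angular share per entity
print(dict(zip(names, proportions)))    # fraction of the circle each gets
```

Entity B gets the largest segment because it participates in the heaviest relationships (arc widths 5 and 3).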
@Machine_learn
The TensorFlow tutorials are written as Jupyter notebooks and run directly in Google Colab—a hosted notebook environment that requires no setup.
FROM BEGINNERS TO EXPERTS
* Source code
* Videos
* Libraries and extensions
https://www.tensorflow.org/tutorials
@Machine_learn
NeRF: Neural Radiance Fields
http://www.matthewtancik.com/nerf
Tensorflow implementation: https://github.com/bmild/nerf
Paper: https://arxiv.org/abs/2003.08934v1
Training with quantization noise for extreme model compression
@Machine_learn
https://ai.facebook.com/blog/training-with-quantization-noise-for-extreme-model-compression/
Paper: https://arxiv.org/abs/2004.07320
GitHub: https://github.com/pytorch/fairseq/tree/master/examples/quant_noise
@Machine_learn
A Gentle Introduction to the Fbeta-Measure for Machine Learning
https://machinelearningmastery.com/fbeta-measure-for-machine-learning/
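The F-beta measure generalises F1 by weighting recall beta times as much as precision: beta < 1 favours precision, beta > 1 favours recall. A minimal sketch of the formula (example values are hypothetical):

```python
# F-beta: (1 + b^2) * P * R / (b^2 * P + R), where b = beta.
def fbeta(precision, recall, beta):
    if precision == 0 and recall == 0:
        return 0.0  # avoid division by zero when both are zero
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(fbeta(0.5, 0.5, 1.0))  # beta=1 is the harmonic mean (F1): 0.5
print(fbeta(0.8, 0.4, 2.0))  # beta=2 (F2) pulls the score toward recall
print(fbeta(0.8, 0.4, 0.5))  # beta=0.5 pulls it toward precision
```

In practice you would use a tested implementation such as scikit-learn's `fbeta_score` rather than hand-rolling this.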
Adversarial Latent Autoencoders (ALAE) not only generate 1024x1024 images with StyleGAN's quality but also allow manipulating real-world images in a feed-forward manner. Your move, StyleGAN team!
paper: arxiv.org/abs/2004.04467
code: github.com/podgorskiy/ALAE
@Machine_learn