Francesco Cardinale
I'm happy to announce that we just open-sourced a major update to our image super-resolution project: using an adversarial network and convolutional feature maps for the loss, we got some interesting results in terms of realism and noise cancellation.
Pre-trained weights and GAN training code are available on GitHub!
If you want to read up on the process, check out the blog post.
We also released a pip package, 'ISR' (admittedly not the most creative name :D), with nice documentation and Colab notebooks so you can play around and experiment yourself on a free GPU (#mindblown). Thanks to Dat Tran for the big help.
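For a quick taste, here is a minimal prediction sketch along the lines of the package README (the RDN model and the 'psnr-small' weights are assumptions based on ISR's documented usage and may differ in later releases):

```python
# Minimal ISR prediction sketch, adapted from the package README
# (assumes `pip install ISR`; model/weight names may vary by version).
import numpy as np
from PIL import Image
from ISR.models import RDN

lr_img = np.array(Image.open("low_res.png"))  # any low-resolution RGB image
rdn = RDN(weights="psnr-small")               # downloads pretrained weights
sr_img = rdn.predict(lr_img)                  # super-resolved image as a numpy array
Image.fromarray(sr_img).save("super_res.png")
```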
Blog: https://lnkd.in/dUnvXQZ
Documentation: https://lnkd.in/dAuu2Dk
GitHub: https://lnkd.in/dmtV2ht
Colab (prediction): https://lnkd.in/dThVb_p
Colab (training): https://lnkd.in/diPTgWj
https://lnkd.in/dVBaKv4
#opensource #deeplearning #gans #machinelearning #keras
@AI_Python_EN
TensorFlow is dead, long live TensorFlow!
#TensorFlow just went full #Keras! (!!!!!) Here's why that's an earthquake for #AI and #DataScience...
@AI_Python_EN
The best way to learn #DeepLearning is by practicing it. But which framework to use? Here are 5 articles to get you started!
A Comprehensive Introduction to #PyTorch - https://bit.ly/2L8Rj7n
Learn How to Build Quick & Accurate Neural Networks using PyTorch (& 4 Case Studies) - https://bit.ly/2Vts9nY
Get Started with Deep Learning using #Keras and #TensorFlow in #R - https://bit.ly/2Iro2BY
TensorFlow 101: Understanding Tensors and Graphs - https://bit.ly/2GNg195
An Introduction to Implementing #NeuralNetworks using TensorFlow - https://bit.ly/2V17cBs
@AI_Python_EN
A #Keras usage pattern that allows for maximum flexibility when defining arbitrary losses and metrics (that don't match the usual signature) is the "endpoint layer" pattern. It works like this: https://colab.research.google.com/drive/1zzLcJ2A2qofIvv94YJ3axRknlA6cBSIw
In short, you use add_loss / add_metric inside an "endpoint layer" that also has access to the model's targets. The layer then returns the inference-time predictions. You compile without an external "loss" argument, and you fit with a dictionary of data that contains the targets.
Of course, logistic regression is a basic case that doesn't actually need this advanced pattern. But endpoint layers will work every time, even when you have losses & metrics that don't match the usual fn(y_true, y_pred, sample_weight) signature that compile() requires.
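Here is a minimal sketch of the pattern, assuming TensorFlow 2.x / tf.keras (layer and variable names are illustrative; the Colab linked above is the authoritative version):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

class LogisticEndpoint(keras.layers.Layer):
    def __init__(self, name=None):
        super().__init__(name=name)
        self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
        self.accuracy_fn = keras.metrics.BinaryAccuracy()

    def call(self, logits, targets=None, sample_weights=None):
        if targets is not None:
            # Attach the training loss with add_loss() instead of
            # passing a loss function to compile().
            self.add_loss(self.loss_fn(targets, logits, sample_weights))
            # Attach metrics the same way, with add_metric().
            self.add_metric(self.accuracy_fn(targets, logits, sample_weights),
                            name="accuracy")
        # Return the inference-time predictions.
        return tf.nn.sigmoid(logits)

inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(1,), name="targets")
logits = keras.layers.Dense(1)(inputs)
predictions = LogisticEndpoint()(logits, targets)

model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam")  # note: no "loss" argument

# Fit with a dict of data that contains the targets.
data = {
    "inputs": np.random.random((8, 3)),
    "targets": np.random.randint(0, 2, size=(8, 1)).astype("float32"),
}
model.fit(data, epochs=1)
```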
@AI_Python_EN
A Convolutional Neural Network Tutorial in Keras and Tensorflow 2
https://medium.com/@isakbosman/a-convolutional-neural-network-tutorial-in-keras-and-tensorflow-2-2bff79f477c0
#Keras #neuralnetwork #TensorFlow #ConvolutionalNeuralNetwork
@AI_Python_EN
This is a neat project for reusable text generation with #Keras:
https://github.com/minimaxir/textgenrnn
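A quick usage sketch based on the project's README (assumes `pip install textgenrnn`; exact API details may have changed since):

```python
# Minimal textgenrnn usage, adapted from the project's README.
from textgenrnn import textgenrnn

textgen = textgenrnn()   # loads the included pretrained weights
textgen.generate()       # generate text from the pretrained model

# Fine-tune on your own corpus (one text per line) and generate again.
textgen.train_from_file("my_corpus.txt", num_epochs=1)
textgen.generate(3, temperature=0.5)
```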
@AI_Python_EN
François Chollet
This is how you implement a network in Chainer. Chainer, the original eager-first #deeplearning framework, has had this API since launch, in mid-2015. When PyTorch got started, it followed the Chainer template (in fact, the prototype of PyTorch was literally a fork of Chainer).
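(The screenshot from the original post isn't reproduced here; below is a minimal, illustrative sketch of the Chainer subclassing style being described, not the original image.)

```python
# A small network in Chainer's define-by-run subclassing style.
import chainer
import chainer.functions as F
import chainer.links as L

class MLP(chainer.Chain):
    def __init__(self, n_hidden=100, n_out=10):
        super().__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, n_hidden)  # input size inferred at first call
            self.l2 = L.Linear(n_hidden, n_out)

    def __call__(self, x):
        # The forward pass is plain imperative Python, executed eagerly.
        h = F.relu(self.l1(x))
        return self.l2(h)
```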
Nearly every day, I get ignorant messages saying, "PyTorch is an original innovation that TensorFlow/Keras copied". This is incorrect. Subclassing is a fairly obvious way to do things in Python, and Chainer had this API first. Many others followed.
I had been looking at adding a Model subclassing API to Keras as early as late 2015 (before the Functional API even existed, and over a year before being aware of PyTorch), inspired by Chainer. Our first discussions about adding an eager execution mode also predate PyTorch.
By the time #PyTorch came out, I had been looking at its API (which is exactly the Chainer API) for 1.5 years (since the release of Chainer). It wasn't exactly a shock. There was nothing we didn't already know.
To be clear, it's a good thing that API patterns and technical innovations are cross-pollinating among deep learning frameworks. The #Keras API itself has had a pretty big influence on libraries that came after. It's completely fine, and it all benefits end users.
@AI_Python_EN
Keras notebooks
Material used for Deep Learning-related workshops for Machine Learning Tokyo (MLT).
ConvNets: a Colab notebook with functions for constructing #keras models (a minimal example of that block-function style follows the list below). Models:
AlexNet
VGG
Inception
MobileNet
ShuffleNet
ResNet
DenseNet
Xception
Unet
SqueezeNet
YOLO
RefineNet
https://github.com/Machine-Learning-Tokyo/DL-workshop-series
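As a flavor of the block-function style, here is a minimal VGG-style sketch in tf.keras (illustrative only, not taken from the MLT notebook):

```python
# Building a small VGG-style ConvNet with a reusable block function.
import tensorflow as tf
from tensorflow.keras import layers

def vgg_block(x, filters, n_convs):
    # n_convs 3x3 convolutions followed by 2x2 max pooling, as in VGG.
    for _ in range(n_convs):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.MaxPooling2D(2)(x)

inputs = tf.keras.Input(shape=(224, 224, 3))
x = vgg_block(inputs, 64, 2)
x = vgg_block(x, 128, 2)
x = vgg_block(x, 256, 3)
x = layers.Flatten()(x)
x = layers.Dense(4096, activation="relu")(x)
outputs = layers.Dense(1000, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.summary()
```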
@AI_Python_EN
Is Rectified Adam (RAdam) actually better than standard Adam? I ran 24 experiments to find out. The answer? Meh, not really. Full tutorial w/ #Python code here:
http://pyimg.co/asash
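A hypothetical sketch of this kind of head-to-head comparison, using TensorFlow Addons' RectifiedAdam rather than the tutorial's actual code:

```python
# Train the same model twice, once per optimizer, and compare accuracy.
import tensorflow as tf
import tensorflow_addons as tfa

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0

for name, opt in [("adam", tf.keras.optimizers.Adam(1e-3)),
                  ("radam", tfa.optimizers.RectifiedAdam(learning_rate=1e-3))]:
    model = build_model()
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    hist = model.fit(x_train, y_train, epochs=2, verbose=0)
    print(name, hist.history["accuracy"][-1])
```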
#DeepLearning #Keras #MachineLearning #ArtificialIntelligence #AI #DataScience
@AI_Python_EN
Just published my (free) 81-page guide on learning #ComputerVision, #DeepLearning, and #OpenCV!
Includes step-by-step instructions on:
- Getting Started
- Face Applications
- Object Detection
- OCR
- Embedded/IoT
- and more!
Check it out here:
http://pyimg.co/getstarted
And if you liked it, please do give it a share to spread the word. Thank you!
#Python #Keras #MachineLearning #ArtificialIntelligence #AI
@AI_Python_EN
New tutorial! Traffic Sign Classification with #Keras and #TensorFlow 2.0
- 95% accurate
- Includes pre-trained model
- Full tutorial w/ #Python code
http://pyimg.co/5wzc5
#DeepLearning #MachineLearning #ArtificialIntelligence #DataScience #AI #computervision
@AI_Python_EN