Introduction to Autoencoders - Unsupervised Deep Learning Models (Cont'd) | Coursera
https://bit.ly/2Nw5CCh
❇️ @AI_Python_EN
Can We Learn the Language of Proteins?
#DataScience #MachineLearning #ArtificialIntelligence
http://bit.ly/33jCof6
❇️ @AI_Python_EN
What if you can generate a whole new Image just by giving its textual description?
Learn from Shibsankar Das in his hack session here:
http://bit.ly/DHS2019_66
He'll be talking about "Generating Synthetic Images from Textual Description using GANs", in which he'll implement GANs from scratch and formulate business use cases.
❇️ @AI_Python_EN
DataHack Summit 2019
Hack Session: Generating Synthetic Images from Textual Description using GANs - DataHack Summit 2019
Generating images from natural language is one of the primary applications of recent conditional generative models. Besides testing our ability to model conditional, high-dimensional distributions, text-to-image synthesis has many exciting and practical…
A neural network that transforms a design mock-up into a static website
https://github.com/emilwallner/Screenshot-to-code
#ArtificialIntelligence #DeepLearning #MachineLearning
❇️ @AI_Python_EN
Optimizing Millions of Hyperparameters by Implicit Differentiation Lorraine et al.:
https://arxiv.org/abs/1911.02590
#ArtificialIntelligence #MachineLearning
❇️ @AI_Python_EN
Story Realization: Expanding Plot Events into Sentences Ammanabrolu et al.:
https://arxiv.org/abs/1909.03480
#ArtificialIntelligence #DeepLearning #MachineLearning
❇️ @AI_Python_EN
News classification using classic Machine Learning tools (TF-IDF) and modern NLP approach based on transfer learning (ULMFIT) deployed on GCP
Github:
https://github.com/imadelh/NLP-news-classification
Blog:
https://imadelhanafi.com/posts/text_classification_ulmfit/
#DeepLearning #MachineLearning #NLP
❇️ @AI_Python_EN
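The classic-ML side of that project rests on TF-IDF weighting. A minimal pure-Python sketch of the weighting itself (the toy documents below are invented for illustration; the repo presumably uses an off-the-shelf vectorizer):

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF weights for a list of tokenized documents:
    term frequency in the document times log inverse document frequency."""
    n = len(docs)
    # document frequency: in how many documents does each term appear?
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (tf[t] / len(doc)) * math.log(n / df[t])
                        for t in tf})
    return weights

docs = ["stocks rally as markets rise".split(),
        "team wins the final".split(),
        "bank cuts rates".split()]
w = tfidf(docs)
# 'stocks' occurs in only one document, so it gets a positive weight there;
# a term occurring in every document would score log(1) = 0.
```

These sparse weight vectors are what a linear classifier is then trained on.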
Research Guide: Advanced Loss Functions for Machine Learning Models
http://bit.ly/36HBefu
#DataScience #MachineLearning #ArtificialIntelligence
❇️ @AI_Python_EN
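One example of the kind of loss such a guide covers is focal loss (Lin et al.), which down-weights easy examples relative to plain cross-entropy. A minimal sketch for the binary case, where p is the predicted probability of the true class:

```python
import math

def focal_loss(p, gamma=2.0):
    """Focal loss for true-class probability p: the (1-p)^gamma factor
    shrinks the contribution of well-classified (easy) examples."""
    return -((1 - p) ** gamma) * math.log(p)

# a confident correct prediction (p=0.9) contributes far less
# than a hard misclassified one (p=0.1)
print(focal_loss(0.9), focal_loss(0.1))
```

With gamma = 0 the factor disappears and the loss reduces to ordinary cross-entropy.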
Intro to optimization in deep learning: Momentum, RMSProp and Adam
https://bit.ly/2zwBLV0
❇️ @AI_Python_EN
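Adam combines the two ideas in the title: a momentum-style first-moment estimate and an RMSProp-style second-moment estimate, with bias correction for their zero initialization. A minimal pure-Python sketch on a 1-D quadratic (the beta/eps values are the common defaults; the learning rate is chosen for this toy problem):

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update on a scalar parameter."""
    m = beta1 * m + (1 - beta1) * grad        # momentum: first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # RMSProp: second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction for the
    v_hat = v / (1 - beta2 ** t)              # zero-initialized moments
    return theta - lr * m_hat / (math.sqrt(v_hat) + eps), m, v

# minimize f(x) = x^2, gradient 2x, starting from x = 5
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(theta)  # ends close to the minimum at 0
```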
A list of the biggest machine learning datasets from across the web
https://bit.ly/2TYGdVD
❇️ @AI_Python_EN
Self-training with Noisy Student improves ImageNet classification Xie et al.:
https://arxiv.org/abs/1911.04252
#ArtificialIntelligence #DeepLearning #MachineLearning
❇️ @AI_Python_EN
Memory Augmented Recursive Neural Networks
Arabshahi et al.:
https://arxiv.org/abs/1911.01545
#ArtificialIntelligence #MachineLearning #NeuralNetworks
❇️ @AI_Python_EN
Visualizing an AI model’s blind spots
http://bit.ly/2CosFZn
#DataScience #MachineLearning #ArtificialIntelligence
❇️ @AI_Python_EN
"SEMINAL DEBATE: YOSHUA BENGIO | GARY MARCUS". This is the debate the AI world has been waiting for. Live streaming:
https://www.eventbrite.ca/e/seminal-debate-yoshua-bengio-gary-marcus-live-streaming-tickets-81620778947
Date and time: December 23, 2019 | 7:00 PM – 8:30 PM EST
#ArtificialIntelligence
❇️ @AI_Python_EN
A free linear algebra #textbook with solutions by Jim Hefferon. This knowledge will be very useful for understanding #machinelearning and beyond.
http://joshua.smcvt.edu/linearalgebra/#current_version
#book
❇️ @AI_Python_EN
Part of the communication challenge between data scientists and the business comes from assuming one methodology will solve two problems. Illustrative example: the biz asks for a highly predictive churn model (this extends to many other use cases, but we're keeping it simple here). In reality, the biz wants to:
1. Accurately identify customers at high risk of churn so that it can apply some type of corrective measures.
2. Get data-driven recommendations about which corrective measures could have the biggest impact on reducing churn.
To give the biz what it expects, you may need to build two separate models: one that is highly predictive, the other easily interpretable. Bonus: once you've collected the data, building multiple models takes little incremental effort.
Agree or disagree? And if you agree, are you already approaching things this way?
❇️ @AI_Python_EN
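The interpretable half of that two-model split can be sketched with a hand-rolled logistic regression whose coefficients supply the "recommendation" signal (feature names and toy data below are hypothetical):

```python
import math

def train_logreg(X, y, lr=0.1, epochs=500):
    """Logistic regression via stochastic gradient ascent on the
    log-likelihood; returns one coefficient per feature."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            w = [wj + lr * (yi - p) * xj for wj, xj in zip(w, xi)]
    return w

# toy churn data: features [support_tickets, tenure_years], label churn=1
X = [[3, 1], [4, 1], [0, 5], [1, 6]]
y = [1, 1, 0, 0]
w = train_logreg(X, y)
# On this data the fitted weights are positive on support_tickets and
# negative on tenure: the interpretable signal is that high ticket volume
# raises churn risk while long tenure lowers it.
```

The highly predictive sibling (e.g. a gradient-boosted ensemble) would be trained on the same feature matrix and scored purely on accuracy.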
Teaching a neural network to use a calculator.
https://reiinakano.com/2019/11/12/solving-probability.html
#ArtificialIntelligence #DeepLearning #MachineLearning
❇️ @AI_Python_EN
Machine ignoring = underfitting
Machine learning = optimal fitting
Machine memorization = overfitting
#datascience #machinelearning
❇️ @AI_Python_EN
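The three regimes can be demonstrated on toy 1-D data (y ≈ 2x plus alternating noise, all invented here): a constant mean predictor "ignores" the data, a linear fit learns it, and a 1-nearest-neighbour lookup memorizes the training set exactly:

```python
train_x = list(range(10))
train_y = [2 * x + (1 if x % 2 == 0 else -1) for x in train_x]  # 2x + noise
test_x = [x + 0.5 for x in range(10)]
test_y = [2 * x for x in test_x]  # noise-free held-out targets

mean_y = sum(train_y) / len(train_y)  # underfitting: one number for everything

# optimal fitting: ordinary least squares slope and intercept
mx = sum(train_x) / len(train_x)
slope = (sum((x - mx) * (y - mean_y) for x, y in zip(train_x, train_y))
         / sum((x - mx) ** 2 for x in train_x))
intercept = mean_y - slope * mx

def nn_predict(x):
    """Overfitting/memorization: return the label of the closest train point."""
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

def mse(pred, xs, ys):
    return sum((pred(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

models = {"underfit (mean)":   lambda x: mean_y,
          "optimal (linear)":  lambda x: intercept + slope * x,
          "overfit (1-NN)":    nn_predict}
for name, f in models.items():
    print(name, round(mse(f, train_x, train_y), 3),
                round(mse(f, test_x, test_y), 3))
```

The 1-NN model gets zero training error yet loses to the linear fit on the held-out points, which is the memorization trap in one table.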