With 180+ papers mentioning Transformers and its predecessors, it was high time to put out a real paper that people could cite.
https://arxiv.org/abs/1910.03771
❇️ @AI_Python_EN
Microsoft Open Source Engineer pythiccoder explores nine advanced tips for production #ML. Read:
https://medium.com/microsoftazure/9-advanced-tips-for-production-machine-learning-6bbdebf49a6f
❇️ @AI_Python_EN
Spooky Lavanya
Weights & Biases is officially included in Stanford's CS 197 class!
I wrote a quick tutorial on how to train a neural network using #PyTorch & track your experiments in W&B!
Class:
http://cs197.stanford.edu/assignments/a3.shtml
Code:
https://colab.research.google.com/drive/1zkoPdBZWUMsTpvA35ShVNAP0QcRsPUjf
#MachineLearning
❇️ @AI_Python_EN
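A minimal sketch of the pattern that tutorial walks through, assuming the wandb package and a toy regression task (hyperparameters and data are illustrative):
```python
import torch
import torch.nn as nn
import wandb

# Log hyperparameters once; metrics stream to the W&B dashboard.
wandb.init(project="cs197-demo", config={"lr": 1e-3, "epochs": 5})

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=wandb.config.lr)
loss_fn = nn.MSELoss()
x, y = torch.randn(256, 10), torch.randn(256, 1)  # toy data

for epoch in range(wandb.config.epochs):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    wandb.log({"epoch": epoch, "loss": loss.item()})  # one tracked point per epoch
```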
Building a neural network can be confusing! I wrote a guide to help you navigate the treacherous NN waters.
Highly recommend forking the kernel & playing with the code!
Post:
https://lavanya.ai/2019/08/10/training-a-neural-network-start-here/
Code:
https://kaggle.com/lavanyashukla01/training-a-neural-network-start-here
#MachineLearning #DataScience
❇️ @AI_Python_EN
Course 1: A Learning Path to Become a Data Scientist in 2019
Link:
https://bit.ly/2HOthei
Course 2: Experiments with Data
Link:
https://bit.ly/2HQuQbw
Course 3: Python for Data Science
Link:
https://bit.ly/2HOG5RG
Course 4: Twitter Sentiment Analysis
Link:
https://bit.ly/2HR8O8A
Course 5: Creating a Time Series Forecast with Python
Link:
https://bit.ly/2XniU6r
Course 6: A Path for Learning Deep Learning in 2019
Link:
https://bit.ly/2HO1VVJ
Course 7: Loan Prediction Practice Problem
Link:
https://bit.ly/2IcynQl
Course 8: Big Mart Sales Problem Using R
Link:
https://bit.ly/2JUlZIb
❇️ @AI_Python_EN
According to common belief, the main advantage of neural networks over traditional ML algorithms is that NNs learn features by themselves, while in traditional ML you handcraft features. This is not exactly true. Well, it's true for vanilla feed-forward NNs consisting only of fully connected layers. But those are very hard to train on high-dimensional inputs like images.
When you use a convolutional neural network, you already use two types of handcrafted features: convolution filters and pooling filters.
The designer of a convolutional NN for image classification has looked into the input data (this is what traditional ML engineers do to invent features) and decided that patches of pixels close to each other contain information useful for classification; the same design choice also reduces the number of NN parameters.
The same reasoning is used when we classify texts using bag-of-words features. We look at the data and decide that individual words and n-grams of words would be good features to classify a document. This reduces the number of input features while allowing us to accurately classify documents.
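For instance, a quick sketch of those bag-of-words features with scikit-learn (the two-document corpus is made up; ngram_range picks the n-grams that we, not the model, decided on):
```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat on the mat", "the dog chased the cat"]

# Unigrams and bigrams: features we chose by looking at the data.
vectorizer = CountVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(docs)

print(vectorizer.get_feature_names_out())  # the handcrafted vocabulary
print(X.toarray())                         # document-term count matrix
```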
BTW, the way a convolutional filter is applied (a sum of element-wise multiplications, spanning all channels and resulting in one number) is a hell of a feature!
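A bare numpy sketch of that operation at a single spatial location (shapes are illustrative):
```python
import numpy as np

def conv_at(image, kernel, row, col):
    """One filter application: element-wise multiply the kernel with an
    image patch, spanning all channels, and sum to a single number."""
    kh, kw, _ = kernel.shape
    patch = image[row:row + kh, col:col + kw, :]
    return float(np.sum(patch * kernel))

image = np.random.rand(32, 32, 3)    # toy 3-channel "image"
kernel = np.random.rand(3, 3, 3)     # one 3x3 filter spanning the channels
print(conv_at(image, kernel, 0, 0))  # one number per location
```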
Burkov
❇️ @AI_Python_EN
Google’s SummAE generates abstract summaries of paragraphs
#DataScience #MachineLearning #ArtificialIntelligence
http://bit.ly/2pVMjZJ
❇️ @AI_Python_EN
Just published my (free) 81-page guide on learning #ComputerVision, #DeepLearning, and #OpenCV!
Includes step-by-step instructions on:
- Getting Started
- Face Applications
- Object Detection
- OCR
- Embedded/IoT
- and more!
Check it out here:
http://pyimg.co/getstarted
And if you liked it, please do give it a share to spread the word. Thank you!
#Python #Keras #MachineLearning #ArtificialIntelligence #AI
❇️ @AI_Python_EN
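A minimal sketch of the kind of face application the guide covers, assuming OpenCV's bundled Haar cascades and a hypothetical input image path:
```python
import cv2

# The frontal-face Haar cascade ships with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("photo.jpg")  # hypothetical path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:  # box each detected face
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces_out.jpg", image)
```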
The war between ML frameworks has raged on since the rebirth of deep learning. Who is winning? Horace He's data analysis shows clear trends: PyTorch is winning dramatically among researchers, while TensorFlow still dominates industry.
#PyTorch #Tensorflow
https://thegradient.pub/state-of-ml-frameworks-2019-pytorch-dominates-research-tensorflow-dominates-industry/
❇️ @AI_Python_EN
Free Book: Deep Learning and Computer Vision with CNNs
https://www.datasciencecentral.com/profiles/blogs/free-book-deep-learning-and-computer-vision-with-cnns
❇️ @AI_Python_EN
If you're interested in using PyTorch on free Colab TPUs, here are some notebooks to get you started:
https://github.com/pytorch/xla/tree/master/contrib/colab
❇️ @AI_Python_EN
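A minimal sketch of the torch_xla pattern those notebooks walk through (assumes the torch_xla wheel is installed in the Colab runtime):
```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # a TPU core as an ordinary torch device
model = nn.Linear(10, 1).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(64, 10, device=device)
y = torch.randn(64, 1, device=device)

loss = nn.functional.mse_loss(model(x), y)
loss.backward()
xm.optimizer_step(opt, barrier=True)  # step and flush the lazy XLA graph
```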
Image Quality Assessment for Rigid Motion Compensation.
https://t.co/3TC1f2fst3
https://t.co/ZpwS8PIYGj
Diagnostic stroke imaging with C-arm cone-beam computed tomography (CBCT) enables reduction of time-to-therapy for endovascular procedures. However, the prolonged acquisition time compared to...
"Span-core Decomposition for Temporal.." is the #1 paper on Arxiv today in data structures and algorithms. Github code (span_cores) supports their results.See it at
http://assert.pub/arxiv/cs/cs.ds
http://assert.pub/papers/1910.03645
❇️ @AI_Python_EN
Torch vs Theano vs TensorFlow vs Keras
☞ https://morioh.com/p/a80813c4a01c
#ai #deeplearning
❇️ @AI_Python_EN
"...can we say now, finally, that computers are as powerful as the human brain? No. Focusing on raw computing power misses the point entirely. Speed alone won’t give us AI. Running a poorly designed algorithm on a faster computer doesn’t make the algorithm better; it just means you get the wrong answer more quickly. (And with more data there are more opportunities for wrong answers!)
The principal effect of faster machines has been to make the time for experimentation shorter, so that research can progress more quickly. It’s not hardware that is holding AI back; it’s software. We don’t yet know how to make a machine really intelligent—even if it were the size of the universe...
Turing himself proved that some problems are undecidable by any computer: the problem is well defined, there is an answer, but there cannot exist an algorithm that always finds that answer...
The machine may be far more capable than us, but it will still be far from perfectly rational."
Stuart Russell in "Human Compatible"
❇️ @AI_Python_EN
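The undecidable problem Russell alludes to is Turing's halting problem; a sketch of the classic diagonal argument, written as Python for concreteness (the halts oracle is hypothetical by construction):
```python
def halts(func):
    """Hypothetical oracle: True iff func() eventually halts.
    Turing's argument shows no correct implementation can exist."""
    raise NotImplementedError

def g():
    if halts(g):       # if the oracle predicts g halts...
        while True:    # ...g loops forever, contradicting it
            pass
    # ...and if the oracle predicts g loops, g returns at once:
    # either answer is wrong, so no such total, correct oracle exists,
    # even though "does g halt?" is a well-defined question.
```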
Predicting survival from colorectal cancer histology slides using #deeplearning
1. Why they conducted the study:
• Colorectal cancer (CRC) is a common disease with a variable clinical course, and there is a high clinical need to more accurately predict the outcome of individual patients.
• For almost every CRC patient, histological slides of tumor tissue are routinely available.
• Deep learning can be used to extract information from very complex images, and we hypothesized that deep learning can predict clinical outcome directly from histological images of CRC.
2. What did the researchers do and find?
• We trained a deep neural network to identify different tissue types, demonstrated that it can decompose complex tissue into its constituent parts, and showed that the resulting score improves survival prediction compared to the available state of the art (SOTA).
3. Conclusion
• Deep learning is an inexpensive tool to predict the clinical course of CRC patients based on ubiquitously available histological images.
• Prospective validation studies are needed to firmly establish this biomarker for routine clinical use.
☞ Link to research
#healthcare #AI #machinelearning
❇️ @AI_Python_EN
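A hedged sketch of the study's first step (tissue-type classification via transfer learning); the backbone and the nine-class label set are illustrative assumptions, not necessarily the paper's exact setup:
```python
import torch.nn as nn
from torchvision import models

NUM_TISSUE_TYPES = 9  # assumed label set: tumor, stroma, mucus, etc.

# Start from an ImageNet-pretrained CNN and swap the final layer so it
# classifies histology patches into tissue types; fine-tune on labeled
# patches, then aggregate per-patch predictions over a whole slide to
# get the tissue decomposition behind the survival score.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_TISSUE_TYPES)
```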
The State of Transfer Learning in NLP
http://ruder.io/state-of-transfer-learning-in-nlp/
#TransferLearning #NaturalLanguageProcessing
#NLP
❇️ @AI_Python_EN