Python has many advantages, but speed is not one of them. Most production code in the enterprise currently runs on the JVM and .NET. Python has scikit-learn, XGBoost, and PyTorch, which make it the de facto standard in AI. But it's still too slow. Before Kotlin, the JVM didn't have anything as convenient as Python. Now there's Kotlin: concise, intuitive, and fast! Kotlin is already the programming language for Android. Now it's time to make it the programming language for AI. What's needed is a lightweight, scalable JVM library that implements the fit/transform/predict interface of scikit-learn. I believe it's time to build it, and I believe Kotlin is an ideal language for that. If someone wants to lead this project, come forward and start building this library. I will provide publicity support. Burkov
❇️ @AI_Python_en
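For reference, here is the scikit-learn contract such a library would mirror, as a minimal toy transformer in Python; MeanCenterer is an illustration, not an existing class:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class MeanCenterer(BaseEstimator, TransformerMixin):
    """Toy transformer illustrating the fit/transform contract."""

    def fit(self, X, y=None):
        # Learn state from the data; learned attributes end in "_" by convention.
        self.mean_ = np.asarray(X).mean(axis=0)
        return self  # returning self lets calls be chained

    def transform(self, X):
        # Apply the learned state to new data.
        return np.asarray(X) - self.mean_

X = np.array([[1.0, 2.0], [3.0, 4.0]])
print(MeanCenterer().fit(X).transform(X))  # centered columns
```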
Analysis of consumer surveys frequently consists of inspecting column totals and multiple two-way crosstabs. However, looking at data piecemeal increases the risk of spurious "significant" differences while, at the same time, missing patterns that are both real and important. A better approach is to statistically adjust for influential variables simultaneously. When done professionally, a very different picture may emerge than the one suggested by column totals and two-way crosstabs. There is no need to statistically model every question in the survey, only the key ones. The models should be interpretable and guided by knowledge of the subjects the survey addresses. Multivariate analysis is also an alternative to the standard weighting procedures used in consumer research and political polling.
❇️ @AI_PYTHON_EN
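To make the contrast concrete, here is a minimal sketch on synthetic data: a single two-way crosstab versus a logistic regression that adjusts for several influential variables at once. The variable names (buy, region, age, income) are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: purchase intent by region, plus two covariates.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "buy": rng.integers(0, 2, n),           # 1 = intends to buy
    "region": rng.choice(["north", "south"], n),
    "age": rng.normal(45, 12, n),
    "income": rng.normal(50, 15, n),        # in $1000s
})

# Piecemeal view: one two-way crosstab of buy by region.
print(pd.crosstab(df["region"], df["buy"], normalize="index"))

# Simultaneous adjustment: a logistic regression with the key variables at once.
fit = smf.logit("buy ~ region + age + income", data=df).fit()
print(fit.summary())
```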
The list of accepted papers at the NeurIPS Graph Representation Learning Workshop 2019 is online!
https://grlearning.github.io/papers/
(Camera-ready versions will follow later this month). Submission statistics / acceptance rates below
❇️ @AI_Python_en
NeurIPS 2019 accepted-paper visualization and statistics of keywords/institutions.
https://github.com/gsudllab/AcceptPaperAnalysis/blob/master/NeuIPS%202019.md
❇️ @AI_Python_EN
Code for "Learnable Triangulation of Human Pose" is released:
https://github.com/karfly/learnable-triangulation-pytorch
SOTA in 3D human pose estimation! #ICCV19
❇️ @AI_Python_EN
When Does Self-supervision Improve Few-shot Learning?
https://deepai.org/publication/when-does-self-supervision-improve-few-shot-learning
#Classifier #LossFunction
❇️ @AI_Python_EN
Great starting point for PyTorch reinforcement learning projects and a fantastic effort by Heinrich Küttler & collaborators toward reproducible RL research! "Why PyTorch?" you might ask.
Announcing TorchBeast, an IMPALA-inspired PyTorch platform for distributed RL research. Used in a growing number of projects here at FacebookAI.
Paper:
https://arxiv.org/abs/1910.03552
Code:
https://github.com/facebookresearch/torchbeast
❇️ @AI_Python_EN
Generalized Inner Loop Meta Learning, aka Gimli
https://arxiv.org/abs/1910.01727
❇️ @AI_Python_EN
In parallel with this paper, FacebookAI has released higher, a library that bypasses the usual obstacles to taking higher-order gradients through an optimization process.
Library:
https://github.com/facebookresearch/higher
Docs:
https://higher.readthedocs.io
❇️ @AI_Python_EN
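A minimal sketch of the pattern higher enables: differentiating through an inner SGD step. The model and data here are toy placeholders, and copy_initial_weights=False is set so gradients reach the original parameters.

```python
import torch
import torch.nn.functional as F
import higher

model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randn(8, 1)

# innerloop_ctx yields a functional copy of the model and a differentiable optimizer.
with higher.innerloop_ctx(model, opt, copy_initial_weights=False) as (fmodel, diffopt):
    inner_loss = F.mse_loss(fmodel(x), y)
    diffopt.step(inner_loss)              # inner update stays on the autograd graph
    outer_loss = F.mse_loss(fmodel(x), y)
    outer_loss.backward()                 # gradients flow back through the inner step

print([p.grad is not None for p in model.parameters()])  # [True, True]
```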
Yoshua Bengio, one of the pioneers of deep learning, now wants his algorithms to ask 'why' things happen:
https://www.wired.com/story/ai-pioneer-algorithms-understand-why/
❇️ @AI_PYTHON_EN
Introducing sotabench: a new service with the mission of benchmarking every open-source ML model. We run GitHub repos on free GPU servers to capture their results: compare them to papers and to other models, and see speed/accuracy trade-offs. Check it out:
https://sotabench.com
❇️ @AI_Python_EN
With 180+ papers mentioning Transformers and its predecessors, it was high time to put out a real paper that people could cite.
https://arxiv.org/abs/1910.03771
❇️ @AI_Python_EN
Microsoft Open Source Engineer pythiccoder explores nine advanced tips for production #ML. Read:
https://medium.com/microsoftazure/9-advanced-tips-for-production-machine-learning-6bbdebf49a6f
❇️ @AI_PYTHON_EN
Spooky Lavanya
Weights & Biases is officially included in Stanford's CS 197 class!
I wrote a quick tutorial on how to train a neural network using #PyTorch & track your experiments in W&B!
Class:
http://cs197.stanford.edu/assignments/a3.shtml
Code:
https://colab.research.google.com/drive/1zkoPdBZWUMsTpvA35ShVNAP0QcRsPUjf
#MachineLearning
❇️ @AI_Python_en
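A minimal sketch of that workflow, assuming you have already run wandb login; the project name and the toy model/data are hypothetical:

```python
import torch
import torch.nn.functional as F
import wandb

# Hypothetical project name; assumes `wandb login` has been run.
wandb.init(project="cs197-demo")

model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(64, 10), torch.randn(64, 1)

for epoch in range(10):
    opt.zero_grad()
    loss = F.mse_loss(model(x), y)
    loss.backward()
    opt.step()
    wandb.log({"epoch": epoch, "loss": loss.item()})  # metrics show up in the W&B dashboard
```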
Building a neural network can be confusing! I wrote a guide to help you navigate the treacherous NN waters.
Highly recommend forking the kernel & playing with the code!
Post:
https://lavanya.ai/2019/08/10/training-a-neural-network-start-here/
Code:
https://kaggle.com/lavanyashukla01/training-a-neural-network-start-here
#MachineLearning #DataScience
❇️ @ai_python_en
Course 1: A Learning Path to Become a Data Scientist in 2019
Link: https://bit.ly/2HOthei
Course 2: Experiments with Data
Link: https://bit.ly/2HQuQbw
Course 3: Python for Data Science
Link: https://bit.ly/2HOG5RG
Course 4: Twitter Sentiment Analysis
Link: https://bit.ly/2HR8O8A
Course 5: Creating a Time Series Forecast with Python
Link: https://bit.ly/2XniU6r
Course 6: A Path for Learning Deep Learning in 2019
Link: https://bit.ly/2HO1VVJ
Course 7: Loan Prediction Practice Problem
Link: https://bit.ly/2IcynQl
Course 8: Big Mart Sales Problem Using R
Link: https://bit.ly/2JUlZIb
❇️ @AI_Python_EN
According to common belief, neural networks' main advantage over traditional ML algorithms is that NNs learn features by themselves, while in traditional ML you handcraft features. This is not exactly true. Well, it's true for vanilla feed-forward NNs consisting only of fully connected layers. But those are very hard to train on high-dimensional inputs like images.
When you use a convolutional neural network, you already use two types of handcrafted features: convolution filters and pooling filters.
The designer of a convolutional NN for image classification has looked into the input data (this is what traditional ML engineers do to invent features) and decided that patches of pixels close to each other contain information that could help in classification, and that exploiting this also reduces the number of NN parameters.
The same reasoning is used when we classify texts using bag-of-words features. We look at the data and decide that individual words and n-grams of words would be good features to classify a document. This reduces the number of input features while allowing us to accurately classify documents.
BTW, the way convolutional filters apply (a sum of element-wise multiplications, spanning all channels and resulting in one number) is a hell of a feature!
Burkov
❇️ @AI_Python_EN
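The operation Burkov describes can be written out in a few lines of NumPy: one output activation of a convolution is the sum of elementwise products over a patch and all of its channels. A toy sketch with made-up shapes:

```python
import numpy as np

# One output value of a convolution: elementwise-multiply a 3x3x3 patch
# (height x width x channels) by a 3x3x3 filter, then sum to a single number.
rng = np.random.default_rng(0)
patch = rng.normal(size=(3, 3, 3))   # pixels close to each other, all channels
kernel = rng.normal(size=(3, 3, 3))  # one convolutional filter

activation = np.sum(patch * kernel)  # sum of elementwise products -> one number
print(activation)
```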
Google’s SummAE generates abstractive summaries of paragraphs
#DataScience #MachineLearning #ArtificialIntelligence
http://bit.ly/2pVMjZJ
❇️ @AI_Python_EN