“An Explicitly Relational Neural Network Architecture” - new work from the DeepMind cognition team takes a step towards reconciling #deeplearning and symbolic #AI
https://arxiv.org/abs/1905.10307
✴️ @AI_Python_EN
MIT and the U.S. Air Force launch an AI accelerator program that will focus on rapid deployment of artificial intelligence innovations in operations, disaster response, and medical readiness. #DataScience #AI #ArtificialIntelligence
✴️ @AI_Python_EN
Need a PhD writing template? Download our free one now. It's the simplest way to structure your thinking and see your entire PhD on one page. All part of our goal to make PhD life easier, one thesis at a time.
✴️ @AI_Python_EN
Data science is an ever-evolving field. As data scientists, we need to have our finger on the pulse of the latest algorithms and frameworks coming up in the community.
So, if you’re a:
Data science enthusiast
Machine learning practitioner
Data science manager
Deep learning expert
or any mix of the above, this article is for you.
Pranav Dar loved putting together this month’s edition given the sheer scope of topics we have covered. Where computer vision techniques have hit a ceiling (relatively speaking), NLP continues to break through barricades. Sparse Transformer by OpenAI seems like a great NLP project to try out next.
What did you think of this month’s collection? Any data science libraries or discussions I missed out on? Hit me up in the comments section below and let’s discuss!
more to read : https://bit.ly/2Jb2JoB
#machinelearning #datascience #deeplearning
✴️ @AI_Python_EN
A user-friendly interface for fastai, with no need to dig into the code
https://github.com/asvcode/Vision_UI
✴️ @AI_Python_EN
Even when involved in the design of the research, statisticians normally must spend time checking, cleaning, and setting up data, as well as on exploratory data analysis.
Often this takes a considerable amount of time. For repeated projects, much of it can and should be automated when feasible.
The first time, though, trying to cut corners at this stage can be a huge mistake.
As a general rule, we should never delete, recode, or transform a variable unless we know what it is and what it means. When we have missing data, we should try to find out why it is missing. Blind imputation is very risky.
It goes without saying that we shouldn't analyze it either until we know what it is and what it means.
A variable might look like a garbage field, but might actually be telling us that we have the wrong data or that something is seriously amiss with it. If we mechanically delete it, we will not know this.
All this janitorial work pays off in other ways too, by helping us get to know the data, for example.
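As a tiny illustration of the inspect-then-recode workflow described above (toy data; the -999 missing-value sentinel is a hypothetical example, not from any real dataset), a pandas sketch:

```python
import numpy as np
import pandas as pd

# Toy survey extract: 'age' uses -999 as an undocumented missing-data code.
df = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "age": [34, -999, 29, -999],
    "income": [52000, 61000, np.nan, 48000],
})

# Step 1: look before you leap. Summarize each variable first;
# the -999s drag the age mean far below anything plausible.
print(df.describe())

# Step 2: only after confirming -999 is a missing-data code, recode it explicitly.
df["age"] = df["age"].replace(-999, np.nan)

# Step 3: quantify missingness per column before deciding whether (and how) to impute.
missing = df.isna().sum()
print(missing)
```

The point of step 1 is exactly the one made above: a blind imputation or deletion at the -999 stage would have silently corrupted the analysis.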
✴️ @AI_Python_EN
"Fair is Better than Sensational: Man is to Doctor as Woman is to Doctor"
Do word embeddings really say that man is to doctor as woman is to nurse? Apparently not!
Nissim et al.: https://arxiv.org/abs/1905.09866
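The paper's central observation is that standard analogy code excludes the input words from the candidate answers, which can manufacture results like "nurse" when the honest nearest neighbor is "doctor" itself. A toy numpy sketch of the 3CosAdd query with and without that exclusion (the 2-D vectors are purely illustrative, not real embeddings):

```python
import numpy as np

# Illustrative 2-D "embeddings" (toy values, not trained vectors)
vecs = {
    "man":    np.array([1.0, 0.0]),
    "woman":  np.array([0.9, 0.3]),
    "doctor": np.array([1.0, 1.0]),
    "nurse":  np.array([0.5, 1.2]),
}

def analogy(a, b, c, exclude_inputs=True):
    # 3CosAdd: "a is to c as b is to ?" -> word closest to c - a + b
    query = vecs[c] - vecs[a] + vecs[b]
    banned = {a, b, c} if exclude_inputs else set()
    sims = {
        w: float(v @ query / (np.linalg.norm(v) * np.linalg.norm(query)))
        for w, v in vecs.items() if w not in banned
    }
    return max(sims, key=sims.get)

print(analogy("man", "woman", "doctor", exclude_inputs=False))  # doctor
print(analogy("man", "woman", "doctor", exclude_inputs=True))   # nurse
```

With these toy vectors the unconstrained answer is "doctor"; forbidding the input words forces the second-nearest word, "nurse", which is the mechanism the paper criticizes.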
#ArtificialIntelligence #MachineLearning #NLProc #bias
✴️ @AI_Python_EN
Understanding Hinton’s Capsule Networks. Part I: Intuition.
Blog by Max Pechyonkin:
https://medium.com/ai³-theory-practice-business/understanding-hintons-capsule-networks-part-i-intuition-b4b559d1159b
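One concrete ingredient of capsule networks the post builds intuition for is the "squash" nonlinearity from Sabour et al.'s capsule paper: it keeps a capsule vector's orientation but maps its length into [0, 1), so length can act as a probability. A minimal numpy sketch:

```python
import numpy as np

def squash(s, eps=1e-8):
    # v = (|s|^2 / (1 + |s|^2)) * (s / |s|)
    # Short vectors shrink toward zero; long vectors approach unit length.
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

v = squash(np.array([3.0, 4.0]))  # input length 5 -> output length 25/26
```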
#MachineLearning #DeepLearning #ArtificialIntelligence
✴️ @AI_Python_EN
AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence
https://arxiv.org/abs/1905.10985
#ArtificialIntelligence #ArtificialGeneralIntelligence
✴️ @AI_Python_EN
A curated list of gradient boosting research papers with implementations.
https://github.com/benedekrozemberczki/awesome-gradient-boosting-papers
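For orientation, the core idea behind the papers in that list, sequentially fitting weak learners to the current residuals, can be sketched in a few lines of numpy (decision stumps on a toy 1-D regression; this is a didactic sketch, not any specific library's implementation):

```python
import numpy as np

def fit_stump(x, residual):
    # Best single threshold split minimizing squared error,
    # with a constant prediction on each side.
    best = None
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x <= t, left.mean(), right.mean())
        err = float(((residual - pred) ** 2).sum())
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    return best[1:]

def boost(x, y, n_rounds=50, lr=0.1):
    # Gradient boosting for squared loss: each round fits a stump
    # to the residuals and adds a shrunken copy to the ensemble.
    pred = np.full_like(y, y.mean(), dtype=float)
    for _ in range(n_rounds):
        t, lval, rval = fit_stump(x, y - pred)
        pred += lr * np.where(x <= t, lval, rval)
    return pred

x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x)
pred = boost(x, y)
print(np.abs(pred - y).mean())  # small residual error after 50 rounds
```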
✴️ @AI_Python_EN
All You Need About Common MachineLearning Algorithms.pdf
500.2 KB
All You Need to Know About Common #MachineLearning Algorithms
Here is the list of commonly used machine learning algorithms. The code is provided in both #R and #Python. These algorithms can be applied to almost any data problem:
✅Linear Regression
✅Logistic Regression
✅Decision Tree
✅SVM
✅Naive Bayes
✅kNN
✅K-Means
✅Random Forest
✅Dimensionality Reduction Algorithms
✅Gradient Boosting algorithms
✔️GBM
✔️XGBoost
✔️LightGBM
✔️CatBoost
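As a flavor of how compact some of these algorithms are in plain Python, here is a minimal numpy kNN classifier on toy data (the attached PDF presumably uses library implementations instead):

```python
import numpy as np

def knn_predict(X_train, y_train, X_new, k=3):
    # kNN: label each new point by majority vote among its k nearest training points.
    dists = np.linalg.norm(X_train[None, :, :] - X_new[:, None, :], axis=2)
    nearest = np.argsort(dists, axis=1)[:, :k]   # indices of k closest neighbors
    votes = y_train[nearest]                     # their class labels
    return np.array([np.bincount(v).argmax() for v in votes])

# Two well-separated toy clusters
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
y = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X, y, np.array([[0.15, 0.1], [5.0, 5.1]])))  # prints [0 1]
```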
#ai #datascience
✴️ @AI_Python_EN
TensorFlow Graphics: a library for unsupervised deep learning of computer vision models
github: https://github.com/tensorflow/graphics
#machinelearning #deeplearning #computervision
✴️ @AI_Python_EN
SimpleSelfAttention
The purpose of this repository is two-fold:
- demonstrate improvements brought by the use of a self-attention layer in an image classification model
- introduce a new layer, which I call SimpleSelfAttention
https://github.com/sdoria/SimpleSelfAttention
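For readers unfamiliar with the general mechanism, a generic dot-product self-attention layer can be sketched in numpy as below. Note this is the standard softmax formulation for intuition, not necessarily the exact simplified variant the repository proposes:

```python
import numpy as np

def self_attention(x):
    # x: (n_positions, channels), e.g. a flattened feature map.
    # Each position is updated with a weighted sum over all positions,
    # weights coming from the similarity of their feature vectors.
    scores = x @ x.T / np.sqrt(x.shape[1])            # (n, n) scaled similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return weights @ x                                # attended features

x = np.random.default_rng(0).normal(size=(16, 8))    # e.g. a 4x4 map with 8 channels
out = self_attention(x)                              # same shape as the input
```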
✴️ @AI_Python_EN
A website providing up-to-date and essential information on #ArtificialIntelligence, #MachineLearning, deep learning, and related brain research. You will also find TED Talks, lectures, and academic writings on these topics.
https://www.newworldai.com/
✴️ @AI_Python_EN
EfficientNet: Improving Accuracy and Efficiency through AutoML and Model Scaling
http://ai.googleblog.com/2019/05/efficientnet-improving-accuracy-and.html
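The heart of EfficientNet is compound scaling: a single coefficient φ jointly scales network depth, width, and input resolution, with constants chosen so that FLOPs roughly double per unit of φ. A quick sketch using the α, β, γ values reported in the paper (the baseline depth/width/resolution numbers below are illustrative, not the actual EfficientNet-B0 configuration):

```python
# Compound scaling: depth *= alpha**phi, width *= beta**phi,
# resolution *= gamma**phi, with alpha * beta**2 * gamma**2 ~= 2,
# so FLOPs grow roughly 2**phi.
alpha, beta, gamma = 1.2, 1.1, 1.15  # values reported in the EfficientNet paper

def scale(depth, width, resolution, phi):
    return (round(depth * alpha ** phi),
            round(width * beta ** phi),
            round(resolution * gamma ** phi))

# Illustrative baseline: 18 layers, 64 channels, 224x224 input
print(scale(18, 64, 224, phi=1))  # (22, 70, 258)
```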
✴️ @AI_Python_EN
Set of animated Artificial Intelligence cheatsheets covering Stanford's CS 221 class:
Reflex-based: http://stanford.io/2EqNPHy
States-based: http://stanford.io/2wh4F7u
Variables-based: http://stanford.io/2HAiAfh
Logic-based: http://stanford.io/2M7taia
GitHub: https://github.com/afshinea/stanford-cs-221-artificial-intelligence
✴️ @AI_Python_EN
Working with mini-ImageNet for few-shot classification? You might be interested in the pre-trained features from the LEO authors: https://github.com/deepmind/leo 🤖 Great to see these open sourced!
✴️ @AI_Python_EN
MNIST reborn, restored and expanded. Now with an extra 50,000 training samples. If you used the original MNIST test set more than a few times, chances are your models overfit the test set. Time to test them on those extra samples. https://arxiv.org/abs/1905.10498
✴️ @AI_Python_EN
The field of statistics has a very long history, dating back to ancient times.
Much of marketing data science can be traced to the origins of actuarial science, demography, sociology and psychology, with early statisticians playing major roles in all of these fields.
Big is relative, and statisticians have been working with "big data" all along. "Machine learners" such as SVM and random forests originated in statistics, and neural nets were inspired as much by regression as by theories of the human brain.
Statisticians are involved in a diverse range of fields, including marketing, psychology, pharmacology, economics, meteorology, political science and ecology, and have helped develop research methods and analytics for nearly any kind of data.
The history and richness of #statistics is not always appreciated, though. For example, this morning I was asked "How's your #machinelearning?" :-)
✴️ @AI_Python_EN