AI, Python, Cognitive Neuroscience
Learn how to train BERT faster with Tensor Cores for optimized #NLP in this technical blog. Code is now available on GitHub.
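
Tensor Cores kick in when the matrix multiplies run in half precision, which in PyTorch usually means automatic mixed precision. Below is a minimal sketch of an AMP training step, assuming a CUDA GPU with Tensor Cores; the model, optimizer, and batch are placeholders, not the code from the linked blog.

import torch

scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid FP16 gradient underflow

def train_step(model, batch, labels, optimizer):
    optimizer.zero_grad()
    # Matmuls inside autocast run in half precision and can use Tensor Cores.
    with torch.cuda.amp.autocast():
        logits = model(batch)
        loss = torch.nn.functional.cross_entropy(logits, labels)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()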


✴️ @AI_Python_EN
In a simple key driver analysis, we may have a single dependent variable and a dozen or so predictors.

Even in this simple case, there are many ways to analyze the data. We might, for instance, realize that one or more of the predictors is really endogenous, i.e., itself a dependent variable, or that it does not belong in our analysis at all.

Multicollinearity is common in many kinds of data and can be a major headache. Curvilinear relationships, interaction effects, missing data and clustering are other things we need to think about.

Some recommend machine learning as the solution. Indeed, this may be an option, but we must remember that there are many types of #machinelearning, and each may give very different answers. Machine learning models can also be hard to interpret, and explanation is the main purpose of key driver analysis.

Others may be tempted to just use cross tabs. But that too, in a sense, is a model and it may be a very inappropriate one that seriously misleads us.

There often is no simple answer to "simple" problems. Understanding decision makers' needs and expectations is a fundamental first step.
Extensive data cleaning may also be necessary and, in the case of surveys, we may need to adjust for response styles. At the end of our exploratory data analysis, we might also conclude that the data we have aren't right for the task. It's important to bear in mind that key driver analysis is a form of causal analysis, which is usually very challenging.
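
Since multicollinearity comes up so often in key driver analysis, a quick diagnostic is worth running before fitting anything. Here is a minimal sketch that computes variance inflation factors with statsmodels; the predictor columns are made-up placeholders, swap in your own cleaned data.

import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools import add_constant

# Hypothetical predictors for illustration only.
df = pd.DataFrame({
    "ad_awareness":  [3, 4, 2, 5, 4, 3, 5, 2],
    "price_percept": [2, 3, 2, 4, 4, 3, 5, 1],
    "service_sat":   [4, 4, 3, 5, 5, 3, 4, 2],
})

X = add_constant(df)  # VIF is normally computed on a design matrix with an intercept
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif.drop("const"))  # rule of thumb: values above roughly 5-10 flag multicollinearity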

✴️ @AI_Python_EN
If your data makes sense then it is either fake or generated.
✴️ @AI_Python_EN
LBS Autoencoder: Self-supervised Fitting of Articulated Meshes to Point Clouds

Paper: http://ow.ly/ga4c50rqgsN

#artificialintelligence #machinelearning #bigdata #deeplearning #technology

✴️ @AI_Python_EN
Machine Learning (ML) & Artificial Intelligence (AI): From Black Box to White Box Models in 4 Steps - Resources for Explainable AI & ML Model Interpretability.

βœ”οΈSTEP 1 - ARTICLES

- (short) KDnuggets article: https://lnkd.in/eRyTXcQ

- (long) O'Reilly article: https://lnkd.in/ehMHYsr

βœ”οΈSTEP 2 - BOOKS

- Interpretable Machine Learning: A Guide for Making Black Box Models Explainable (free e-book): https://lnkd.in/eUWfa5y

- An Introduction to Machine Learning Interpretability: An Applied Perspective on Fairness, Accountability, Transparency, and Explainable AI (free e-book): https://lnkd.in/dJm595N

βœ”οΈSTEP 3 - COLLABORATE

- Join Explainable AI (XAI) Group: https://lnkd.in/dQjmhZQ

βœ”οΈSTEP 4 - PRACTICE

- Hands-On Practice: Open-Source Tools & Tutorials for ML Interpretability (Python/R): https://lnkd.in/d5bXgV7

- Python Jupyter Notebooks: https://lnkd.in/dETegUH
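
As a warm-up for Step 4, here is a minimal sketch of one common model-agnostic interpretability technique, permutation importance, using scikit-learn. The dataset and model are just placeholders standing in for your own "black box".

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data and model; swap in your own estimator.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the held-out score drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = result.importances_mean.argsort()[::-1]
for i in ranking[:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")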

#machinelearning #datascience #analytics #bigdata #statistics #artificialintelligence #ai #datamining #deeplearning #neuralnetworks #interpretability #science #research #technology #business #healthcare

✴️ @AI_Python_EN
Interpretable and Generalizable Deep Image Matching with Adaptive Convolutions
Researchers: Shengcai Liao, Ling Shao

Paper: http://ow.ly/5z8j50rqdiJ

#artificialintelligence #machinelearning #bigdata #deeplearning

✴️ @AI_Python_EN
Today, #LIDAR is used in all autonomous cars except Tesla's.

Lidar sensors are big, bulky, expensive, and ugly to look at. Not only that, they do a poor job in snow, sleet, hail, smoke, and smog. If you can’t see the road ahead, neither can LIDAR!

That last part is one of the reasons Elon Musk refuses to incorporate lidar sensors into the self-driving hardware package for Tesla cars.

Apple & Cornell University have solved the problem of depth precision, and this paves the way for faster adoption of safer yet cheaper cars!

Read more here: https://lnkd.in/dZgS6id
Research paper: https://lnkd.in/djRhzq3
#research #selfdriving #deeplearning

✴️ @AI_Python_EN
Despite attempts at standardisation of DL libraries, there are only a few that integrate classification, segmentation, GANs and detection. And everything is in #PyTorch :)

https://lnkd.in/eTsqKWZ

#ai #objectdetection #machinelearning #gpu #classification #dl

✴️ @AI_Python_EN
Linked Dynamic Graph CNN: Learning on Point Cloud via Linking Hierarchical Features

Zhang et al.: https://lnkd.in/daMV2RX

#ArtificialIntelligence #DeepLearning #MachineLearning

✴️ @AI_Python_EN
The ability to deal with imbalanced datasets is a must-have for any #datascientist. Here are 4 tutorials to learn the different techniques of handling imbalanced data:

How to handle Imbalanced #Classification Problems in #MachineLearning? - https://buff.ly/2sIsR0M

Investigation on Handling Structured & Imbalanced Datasets with #DeepLearning - https://buff.ly/2MpxuG1

This Machine Learning Project on Imbalanced Data Can Add Value to Your #DataScience #Resume - https://buff.ly/2Mpr2i0

Practical Guide to deal with Imbalanced Classification Problems in #R - https://buff.ly/2MrS8Fr
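
To complement the tutorials above, here is a minimal sketch of two simple techniques, class weighting and random oversampling, using plain scikit-learn on a synthetic imbalanced dataset. It is illustrative only and not a substitute for the guides linked above.

from collections import Counter
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

# Synthetic data with a 95/5 class split.
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
print("train class counts:", Counter(y_train))

# Option 1: reweight the loss so minority-class errors cost more.
weighted = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_train, y_train)

# Option 2: randomly oversample the minority class before fitting.
minority = y_train == 1
X_min_up, y_min_up = resample(X_train[minority], y_train[minority],
                              n_samples=int((~minority).sum()), random_state=0)
X_bal = np.vstack([X_train[~minority], X_min_up])
y_bal = np.concatenate([y_train[~minority], y_min_up])
oversampled = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)

for name, clf in [("class_weight", weighted), ("oversampling", oversampled)]:
    print(name, "minority F1:", round(f1_score(y_test, clf.predict(X_test)), 3))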

✴️ @AI_Python_EN
❇️Top #GAN Research Papers Every Machine Learning Enthusiast Must Peruse

https://www.analyticsindiamag.com/top-gan-research-papers-every-machine-learning-enthusiast-must-peruse/

✴️ @AI_Python_EN
How can we make #computervision networks more robust against image distortions so small that they’re undetectable to the human eye? Check out this paper on stability training as a potential solution and alternative to data augmentation techniques:
http://bit.ly/2XKA7Xj
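
The core idea in stability training is to add a penalty that keeps the network's output on a slightly perturbed copy of an image close to its output on the clean image. Here is a minimal PyTorch sketch of such a loss, assuming Gaussian pixel noise; the noise level, the weight alpha, and the model are placeholder assumptions, not the paper's exact settings. In a training loop this would simply replace the usual cross-entropy call.

import torch
import torch.nn.functional as F

def stability_loss(model, images, labels, alpha=0.01, noise_std=0.04):
    """Task loss on clean images plus a KL penalty tying noisy and clean predictions."""
    logits_clean = model(images)
    task_loss = F.cross_entropy(logits_clean, labels)

    # Small Gaussian perturbation, intended to be imperceptible to the eye.
    noisy = images + noise_std * torch.randn_like(images)
    logits_noisy = model(noisy)

    # Encourage the prediction on the noisy copy to match the clean one.
    stability = F.kl_div(F.log_softmax(logits_noisy, dim=1),
                         F.softmax(logits_clean.detach(), dim=1),
                         reduction="batchmean")
    return task_loss + alpha * stability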

✴️ @AI_Python_EN
The #DeepLearning industry is growing, as is the amount of data being trained on daily (Y-axis).

Courtesy: Nvidia

✴️ @AI_Python_EN
Working with #neuralnetworks requires mastery of a dark art. Lots of great advice here:
http://karpathy.github.io/2019/04/25/recipe/

✴️ @AI_Python_EN
Network Science meets Deep Learning

By Vinay Uday Prabhu: https://lnkd.in/e78XRWx

#deeplearning #neuralnetworks #technology

✴️ @AI_Python_EN
In a paper published two days ago in Nature, a group of scientists designed a recurrent neural network that decodes cortical signals into speech signals.
This problem is considered much harder than decoding muscle movement from brain signals, as the signals responsible for spoken words are much more difficult to decode.
Nature (paywall): https://lnkd.in/fM8EsuE
direct link to pdf: https://lnkd.in/ftrEbe5
#ai #neuralnetwork #science #rnn #neuroscience
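
As a toy illustration of the general setup (not the architecture from the Nature paper), here is a minimal PyTorch sketch of a recurrent decoder that maps a sequence of cortical-signal features to a sequence of acoustic features; all shapes and dimensions are made-up placeholders.

import torch
import torch.nn as nn

class NeuralToSpeechDecoder(nn.Module):
    """Toy bidirectional GRU mapping neural features to acoustic features per time step."""
    def __init__(self, n_channels=256, hidden=128, n_acoustic=32):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_acoustic)

    def forward(self, x):          # x: (batch, time, n_channels)
        out, _ = self.rnn(x)
        return self.head(out)      # (batch, time, n_acoustic)

# Placeholder batch: 4 trials, 200 time steps, 256 recording channels.
decoder = NeuralToSpeechDecoder()
neural = torch.randn(4, 200, 256)
acoustic_pred = decoder(neural)
print(acoustic_pred.shape)  # torch.Size([4, 200, 32])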

✴️ @AI_Python_EN
#Statistics has many uses but, fundamentally, it's a systematic way of dealing with uncertainty. When something is certain, there is no need to bring in a statistician or ask anyone for their counsel.

Since we're concerned with uncertainty, statisticians approach questions probabilistically. To conclude that something is likely to be true does not mean we're claiming it IS true, only that it's more likely to be true than not.

We may estimate this probability as being very high but, again, this is not saying the #probability is perfect (1.0).

Statisticians also think in terms of conditional probabilities, which means we've estimated the probability after having taken other information into account.

For instance, we might estimate the probability of a person buying a certain type of product within the next three months as 0.7 because he is a 25 year-old male. This estimate may have been made with a statistical model and data from thousands or millions of other consumers. For a 55 year-old woman our estimate might be 0.15.
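
Those purchase probabilities are purely illustrative, but the mechanism is easy to show: a model conditions on the information we have (here, age and sex) and returns a probability. Here is a minimal sketch with a logistic regression on invented data; none of the numbers are real.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: columns are [age, is_male], target is "bought within 3 months".
X = np.array([[22, 1], [25, 1], [30, 1], [28, 0], [45, 0],
              [52, 0], [55, 0], [60, 1], [24, 1], [58, 0]])
y = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Conditional probability of purchase given what we know about each person.
for age, male in [(25, 1), (55, 0)]:
    p = model.predict_proba([[age, male]])[0, 1]
    print(f"age={age}, male={male}: P(buy) = {p:.2f}")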

Part of the challenge of being a statistician is that decision-makers often come to us for definitive yes-or-no answers. They can become irritated when we ask for more information or give them very qualified recommendations.

It ain't just math and programming!

Tip: if someone says, for example, that A is not the only possible explanation for something and that B, C, or D are other possibilities, a common reaction is for the other party to conclude that the first person is saying A is NOT a possible explanation. Humans are funny people.

✴️ @AI_Python_EN
Speech-to-Text using Convolutional Neural Networks
#CNN
Deep Learning beginners quickly learn that Recurrent Neural Networks (#RNNs) are for building models for sequential data tasks (such as language translation), whereas Convolutional Neural Networks (CNNs) are for image- and video-related tasks. This is a pretty good rule of thumb - but recent work at Facebook has shown some great results for sequential data just by using CNNs.
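
As a rough illustration of the idea (a convolutional acoustic model instead of a recurrent one), here is a minimal PyTorch sketch of a 1D-CNN that turns spectrogram frames into per-frame character logits, the kind of output a CTC loss would consume. The layer sizes are placeholder assumptions, not Facebook's published architecture.

import torch
import torch.nn as nn

class ConvSpeechEncoder(nn.Module):
    """Toy 1D-CNN acoustic model: spectrogram frames in, per-frame character logits out."""
    def __init__(self, n_mels=80, n_chars=29):  # 26 letters + space + apostrophe + CTC blank
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels, 256, kernel_size=11, padding=5), nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=11, padding=5), nn.ReLU(),
            nn.Conv1d(256, n_chars, kernel_size=1),
        )

    def forward(self, spec):           # spec: (batch, n_mels, time)
        return self.net(spec)          # (batch, n_chars, time)

model = ConvSpeechEncoder()
spectrogram = torch.randn(2, 80, 300)  # placeholder batch of 2 utterances
logits = model(spectrogram)
print(logits.shape)                    # torch.Size([2, 29, 300])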

✴️ @AI_Python_EN
Nice tips and tricks for training neural networks by Andrej Karpathy. The most important point, which I can also confirm from my own experience: "becoming one with the data". Understanding your dataset (e.g. understanding distributions, looking for patterns, etc.) is core to training your neural network, as a "neural net is effectively a compressed/compiled version" of the dataset. There are many other interesting points around tuning the model, establishing a model baseline, etc. Definitely check it out. It will save you time and help you get neural network training right. #deeplearning #machinelearning

🌎 Link: https://lnkd.in/dppUnnT

✴️ @AI_Python_EN
Building a #Conversational #AI #Agent for medical and healthcare services is one of the products in our pipeline for the coming months.

Here is what a typical chatbot #pipeline looks like

#CNN #RNN #GAN #DeepLearning #NLP
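
One concrete stage of such a pipeline, intent classification on incoming user messages, can be sketched in a few lines of scikit-learn; the intents and example utterances below are invented placeholders, not our product's data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented example utterances and intents for illustration only.
utterances = [
    "I need to book an appointment with a doctor",
    "Can I schedule a checkup for next week",
    "What are the side effects of ibuprofen",
    "Is paracetamol safe with my medication",
    "I want to cancel my appointment",
    "Please cancel the visit I booked yesterday",
]
intents = ["book", "book", "drug_info", "drug_info", "cancel", "cancel"]

# TF-IDF features feeding a simple linear classifier.
intent_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
intent_clf.fit(utterances, intents)

print(intent_clf.predict(["could you set up a doctor visit for me"]))  # prints the predicted intent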

✴️ @AI_Python_EN