AI, Python, Cognitive Neuroscience
Basics of Python Programming
——————————————-
a. Lists, Tuples, Dictionaries, Conditionals, Loops, etc...
https://lnkd.in/gWRbc3J
b. Data Structures & Algorithms
https://lnkd.in/gYKnJWN
c. NumPy Arrays:
https://lnkd.in/geeFePh
d. Regex:
https://lnkd.in/gzUahNV

Practice Coding Challenges
—————————————
a. Hacker Rank:
https://lnkd.in/gEufBUu
b. Codecademy:
https://lnkd.in/gGQ7cuv
c. LeetCode:
https://leetcode.com/

Data Manipulation
————————-
a. Pandas:
https://lnkd.in/gxSgfuQ
b. Pandas Cheatsheet:
https://lnkd.in/gfAdcpw
c. SQLAlchemy:
https://lnkd.in/gjvbm7h

Data Visualization
————————
a. Matplotlib:
https://lnkd.in/g_3fx_6
b. Seaborn:
https://lnkd.in/gih7hqz
c. Plotly:
https://lnkd.in/gBYBMXc
d. Python Graph Gallery:
https://lnkd.in/gdGe-ef

Machine Learning / Deep Learning
————————————————
a. Scikit-Learn Tutorial:
https://lnkd.in/gT5nNwS
b. Deep Learning Tutorial:
https://lnkd.in/gHKWM5m
c. Kaggle Kernels:
https://lnkd.in/e_VcNpk
d. Kaggle Competitions:
https://lnkd.in/epb9c8N
Has Area Under the ROC Curve (AUC-ROC) become Data Science & AI/ML community’s P-Value?

Just returned from day 1 of the Intelligent Health AI conference. While there were some great speakers and talks, one thing stood out: of the multiple talks reporting machine learning model performance, all except one reported AUC-ROC as the only metric, even for unbalanced datasets. It appears that the AUC-ROC metric is being misused much as the P-value has been misused and misinterpreted.

There is more to model evaluation than a single number. In addition to AUC-ROC, we have the Precision-Recall (PR) curve, Sensitivity (Recall), Specificity, F1-score, Positive/Negative Predictive Values, Matthews Correlation Coefficient, Calibration, and many other metrics. The articles and book linked below give a good summary of the various model performance / evaluation metrics:

Regression Metrics:
https://lnkd.in/eRWvRVc

Classification Metrics:
https://lnkd.in/dpYnvGh

Evaluating Machine Learning Models (open-access book):
https://lnkd.in/dHcfZdP
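To make the point concrete, here is a minimal sketch (pure stdlib Python, toy data of my own construction) of why a single number misleads on an unbalanced dataset: a classifier that predicts the majority class for everything gets 90% accuracy, but sensitivity, F1, and MCC immediately expose it.

```python
import math

def classification_metrics(y_true, y_pred):
    """Compute several complementary binary-classification metrics."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sens = tp / (tp + fn) if tp + fn else 0.0   # sensitivity / recall
    spec = tn / (tn + fp) if tn + fp else 0.0   # specificity
    prec = tp / (tp + fp) if tp + fp else 0.0   # positive predictive value
    f1 = 2 * prec * sens / (prec + sens) if prec + sens else 0.0
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0  # Matthews corr. coef.
    return {"sensitivity": sens, "specificity": spec,
            "precision": prec, "f1": f1, "mcc": mcc}

# Unbalanced example: 90 negatives, 10 positives. A model that always
# predicts "negative" is 90% accurate yet clinically useless.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100
print(classification_metrics(y_true, y_pred))
# sensitivity 0.0, specificity 1.0, F1 0.0, MCC 0.0
```

Reporting the full set of metrics (or the PR curve for rare positives) makes this failure mode impossible to hide.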
Awesome victory for #DeepLearning 👏🏻

GE Healthcare wins FDA clearance for #algorithms to spot a type of collapsed lung!

Here’s how the AI algorithm works
————————————————
1. A patient image scanned on a device is automatically searched for pneumothorax.
2. If pneumothorax is suspected, an alert with the original chest X-ray is sent to the radiologist for review.
3. The technologist also receives an on-device notification highlighting prioritized cases.
4. Algorithms then analyze and flag protocol and field-of-view errors and auto-rotate images on the device.
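The four steps above amount to a detect-then-route pipeline. The sketch below is purely illustrative: every callable is a hypothetical stand-in, not GE's actual API, and the "detector" is a keyword check on a string.

```python
def triage_chest_xray(image, detect_pneumothorax,
                      alert_radiologist, notify_technologist,
                      run_quality_checks=lambda img: img):
    """Hypothetical sketch of the four triage steps; all callables
    are stand-ins for the real detection / notification components."""
    suspected = detect_pneumothorax(image)   # 1. automatic search
    if suspected:
        alert_radiologist(image)             # 2. alert with original X-ray
        notify_technologist(image)           # 3. on-device prioritization
    run_quality_checks(image)                # 4. protocol / FOV / rotation
    return suspected

# Stand-in usage: collect the routed alerts in lists.
alerts, notices = [], []
flagged = triage_chest_xray(
    "xray:pneumothorax",
    detect_pneumothorax=lambda img: "pneumothorax" in img,
    alert_radiologist=alerts.append,
    notify_technologist=notices.append,
)
print(flagged, alerts, notices)
```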

Article is here:
https://lnkd.in/daNYHfP

#machinelearning
Hierarchical Decision Making by Generating and Following Natural Language Instructions

“Experiments show that models using natural language as a latent variable significantly outperform models that directly imitate human actions.”

https://arxiv.org/abs/1906.00744
“Counterfactual Story Reasoning and Generation” presents the TimeTravel dataset, which tests causal reasoning capabilities over natural language narratives.

Paper:
https://arxiv.org/abs/1909.04076
Code+Data:
https://github.com/qkaren/Counterfactual-StoryRW
What Kind of Language Is Hard to Language-Model?

Mielke et al.: https://lnkd.in/eDUGmse

#ArtificialIntelligence #MachineLearning #NLP
CvxNets: Learnable Convex Decomposition

Boyang Deng, Kyle Genova, Soroosh Yazdani, Sofien Bouaziz, Geoffrey Hinton, Andrea Tagliasacchi: https://lnkd.in/eGUqxjz
CTRL, the largest publicly available language model, has 1.6B parameters and can be guided by control codes for style, content, and task-specific behavior.

code: https://github.com/salesforce/ctrl

article: https://einstein.ai/presentations/ctrl.pdf

https://blog.einstein.ai/introducing-a-conditional-transformer-language-model-for-controllable-generation/
What makes a good conversation?
How controllable attributes affect human judgments

A great post on conversation scoring.

Link:
http://www.abigailsee.com/2019/08/13/what-makes-a-good-conversation.html
Paper:
https://www.aclweb.org/anthology/N19-1170

#NLP #NLU #DL

❇️ @ai_python_en
Neural networks in NLP are vulnerable to adversarially crafted inputs.

We show that they can be trained to become certifiably robust against input perturbations such as typos and synonym substitution in text classification:

https://arxiv.org/abs/1909.01492