AI, Python, Cognitive Neuroscience
Awesome victory for #DeepLearning 👏🏻

GE Healthcare wins FDA clearance for #algorithms that spot a type of collapsed lung (pneumothorax)!

Here’s how the AI algorithm works
————————————————
1. A chest X-ray acquired on the device is automatically screened for pneumothorax.
2. If pneumothorax is suspected, an alert, together with the original chest X-ray, is sent to the radiologist for review.
3. The technologist also receives an on-device notification highlighting prioritized cases.
4. Quality-control algorithms then flag protocol and field-of-view errors and auto-rotate images on the device (a rough sketch of this triage flow follows the list).
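
The workflow above is essentially a detect-then-prioritize loop plus on-device quality checks. Here is a minimal Python sketch of that triage logic; the function names, the ChestXray fields, and the suspicion threshold are illustrative assumptions, not GE Healthcare's actual API.

```python
from dataclasses import dataclass

# Illustrative threshold; the real model's operating point is not public.
SUSPICION_THRESHOLD = 0.5

@dataclass
class ChestXray:
    study_id: str
    pixels: object            # e.g. a numpy array from the scanner
    field_of_view_ok: bool = True
    rotation_degrees: int = 0

def detect_pneumothorax(image: ChestXray) -> float:
    """Stand-in for the on-device model; returns a suspicion score in [0, 1]."""
    return 0.0  # hypothetical placeholder

def notify_radiologist(image: ChestXray, score: float) -> None:
    print(f"ALERT {image.study_id}: suspected pneumothorax (score={score:.2f})")

def mark_priority_on_device(study_id: str) -> None:
    print(f"{study_id} moved to top of worklist")

def flag_quality_error(study_id: str, kind: str) -> None:
    print(f"{study_id}: protocol/{kind} error flagged")

def auto_rotate(image: ChestXray) -> None:
    image.rotation_degrees = 0

def triage(image: ChestXray) -> None:
    # 1. Screen every acquired image automatically.
    score = detect_pneumothorax(image)

    # 2-3. If pneumothorax is suspected, alert the radiologist with the
    #      original X-ray and flag the case as high priority on the device.
    if score >= SUSPICION_THRESHOLD:
        notify_radiologist(image, score)
        mark_priority_on_device(image.study_id)

    # 4. Quality checks: flag field-of-view problems and auto-rotate.
    if not image.field_of_view_ok:
        flag_quality_error(image.study_id, "field_of_view")
    if image.rotation_degrees != 0:
        auto_rotate(image)

triage(ChestXray(study_id="CXR-001", pixels=None))
```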

Article is here:
https://lnkd.in/daNYHfP

#machinelearning
Hierarchical Decision Making by Generating and Following Natural Language Instructions

“Experiments show that models using natural language as a latent variable significantly outperform models that directly imitate human actions.”

https://arxiv.org/abs/1906.00744
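
The core idea is a two-stage policy: an instructor model maps the game state to a natural-language instruction, and an executor model conditions on both the state and that instruction to choose actions, so language acts as the latent plan. Below is a minimal PyTorch rendition under assumed dimensions and module names; it is a sketch of the idea, not the authors' architecture.

```python
import torch
import torch.nn as nn

STATE_DIM, VOCAB, EMB, HID, N_ACTIONS = 128, 1000, 64, 256, 32

class Instructor(nn.Module):
    """Generates a latent natural-language instruction from the state."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Linear(STATE_DIM, HID)
        self.embed = nn.Embedding(VOCAB, EMB)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.to_vocab = nn.Linear(HID, VOCAB)

    def forward(self, state, instr_tokens):
        # Teacher-forced decoding of the instruction, conditioned on the state.
        h0 = torch.tanh(self.encode(state)).unsqueeze(0)      # (1, B, HID)
        out, _ = self.decoder(self.embed(instr_tokens), h0)   # (B, T, HID)
        return self.to_vocab(out)                             # word logits

class Executor(nn.Module):
    """Chooses an action given the state and the instruction."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.instr_enc = nn.GRU(EMB, HID, batch_first=True)
        self.policy = nn.Sequential(
            nn.Linear(STATE_DIM + HID, HID), nn.ReLU(),
            nn.Linear(HID, N_ACTIONS),
        )

    def forward(self, state, instr_tokens):
        _, h = self.instr_enc(self.embed(instr_tokens))       # (1, B, HID)
        return self.policy(torch.cat([state, h.squeeze(0)], dim=-1))

# Both stages can be trained by imitation: the instructor on human-written
# instructions, the executor on the actions humans took under them.
state = torch.randn(4, STATE_DIM)
instr = torch.randint(0, VOCAB, (4, 10))
word_logits = Instructor()(state, instr)      # (4, 10, VOCAB)
action_logits = Executor()(state, instr)      # (4, N_ACTIONS)
```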
“Counterfactual Story Reasoning and Generation” presents the TimeTravel dataset, which tests causal reasoning capabilities over natural language narratives.

Paper:
https://arxiv.org/abs/1909.04076
Code+Data:
https://github.com/qkaren/Counterfactual-StoryRW
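
Each TimeTravel instance pairs a short story with a counterfactual version of an early event, and the task is to minimally rewrite the ending so it stays consistent with the altered event. A hedged sketch of that setup; the field names and the example story below are illustrative, not the dataset's exact schema.

```python
# Illustrative instance; keys and story text are made up for the example,
# not copied from the released JSON files.
instance = {
    "premise":          "Ana planned a picnic in the park on Saturday.",
    "initial_event":    "The weather forecast promised clear skies.",
    "original_ending":  "She packed sandwiches. Her friends joined her at noon. "
                        "They spent the afternoon outside.",
    "counterfactual":   "The forecast warned of a heavy storm.",
    # The model must produce an edited ending that follows from the
    # counterfactual while changing the original ending as little as possible.
    "edited_ending":    None,
}

def build_prompt(x: dict) -> str:
    """Frame counterfactual rewriting as conditional text generation."""
    return (
        f"{x['premise']} {x['counterfactual']}\n"
        f"Original ending: {x['original_ending']}\n"
        f"Rewritten ending:"
    )

print(build_prompt(instance))
```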
What Kind of Language Is Hard to Language-Model?

Mielke et al.: https://lnkd.in/eDUGmse

#ArtificialIntelligence #MachineLearning #NLP
CvxNets: Learnable Convex Decomposition

Boyang Deng, Kyle Genova, Soroosh Yazdani, Sofien Bouaziz, Geoffrey Hinton, Andrea Tagliasacchi: https://lnkd.in/eGUqxjz
CTRL, the largest publicly released language model to date, has 1.6B parameters and can be guided by control codes for style, content, and task-specific behavior.

code: https://github.com/salesforce/ctrl

article: https://einstein.ai/presentations/ctrl.pdf

blog: https://blog.einstein.ai/introducing-a-conditional-transformer-language-model-for-controllable-generation/
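
Control codes are simply tokens prepended to the prompt, so steering CTRL is plain conditional generation. A minimal sketch, assuming the Hugging Face transformers port of CTRL; the checkpoint name and the generation settings here are my assumptions, not taken from the Salesforce repo.

```python
# pip install transformers torch  (the checkpoint is large, ~1.6B parameters)
from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")

# The control code ("Links", "Wikipedia", "Reviews", ...) is prepended to the
# prompt and steers the style/domain of the continuation.
prompt = "Links A new study on deep learning for chest X-rays"
inputs = tokenizer(prompt, return_tensors="pt")

output_ids = model.generate(
    **inputs,
    max_length=80,
    repetition_penalty=1.2,   # penalized sampling, as recommended in the paper
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(output_ids[0]))
```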
What makes a good conversation?
How controllable attributes affect human judgments

A great post on conversation scoring.

Link:
http://www.abigailsee.com/2019/08/13/what-makes-a-good-conversation.html
Paper:
https://www.aclweb.org/anthology/N19-1170

#NLP #NLU #DL

❇️ @ai_python_en
Neural networks in NLP are vulnerable to adversarially crafted inputs.

We show that they can be trained to become certifiably robust against input perturbations such as typos and synonym substitutions in text classification:

https://arxiv.org/abs/1909.01492
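
The certification in this line of work uses Interval Bound Propagation: push a box around the perturbed word embeddings through the network and check that the worst-case logit margin still favors the true class. Below is a minimal NumPy sketch of how interval bounds pass through an affine layer and a ReLU; the layer sizes, the eps-ball perturbation set, and the inputs are made up for illustration (the paper's perturbation set is the discrete set of allowed substitutions).

```python
import numpy as np

def affine_bounds(lower, upper, W, b):
    """Propagate the elementwise interval [lower, upper] through x @ W + b."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    new_center = center @ W + b
    new_radius = radius @ np.abs(W)      # worst case over the box
    return new_center - new_radius, new_center + new_radius

def relu_bounds(lower, upper):
    """ReLU is monotone, so bounds pass through directly."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)   # toy 3-class classifier

# Box around an input embedding: every allowed perturbation (typo, synonym)
# is assumed to land inside [x - eps, x + eps] for this simplified example.
x = rng.normal(size=8)
eps = 0.1
lo, hi = x - eps, x + eps

lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
logit_lo, logit_hi = affine_bounds(lo, hi, W2, b2)

true_class = 0
# Certified if the true class's lowest possible logit beats every other
# class's highest possible logit over the whole perturbation box.
others = [c for c in range(3) if c != true_class]
certified = all(logit_lo[true_class] > logit_hi[c] for c in others)
print("certifiably robust on this box:", certified)
```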