CS234: Reinforcement Learning | Winter 2019
By Emma Brunskill: https://lnkd.in/eyNjZBR
#DeepLearning #MachineLearning #ReinforcementLearning
Amazing project success for #DeepLearning for #Radiologists
This CNN model performs breast cancer screening exam classification; it was trained and evaluated on over 200,000 exams (over 1,000,000 images).
Accuracy? ~90% in predicting whether there is cancer in the breast, when tested on the screening population.
Training used a two-stage procedure, which allows a very high-capacity patch-level network to learn from pixel-level labels alongside a network learning from macroscopic breast-level labels.
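To make the two-stage idea concrete, here is a minimal PyTorch sketch. This is not the paper's actual architecture; every module size, shape, and the heatmap interface below is invented for illustration.
```python
import torch
import torch.nn as nn

# Minimal sketch of the two-stage idea; all sizes and shapes are invented.
# Stage 1: a high-capacity patch-level classifier trained on pixel-level labels.
patch_net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 2),                      # patch label, e.g. benign / malignant
)
patch_logits = patch_net(torch.randn(1, 1, 64, 64))   # toy patch

# Stage 2: the patch network's outputs over the full image form a coarse
# "heatmap" that is fed, together with the image, into a breast-level model.
class BreastNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2), nn.ReLU(),  # image + heatmap channels
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),                          # breast-level label
        )

    def forward(self, image, heatmap):
        return self.backbone(torch.cat([image, heatmap], dim=1))

breast_net = BreastNet()
img = torch.randn(1, 1, 256, 256)          # toy mammogram
heat = torch.randn(1, 1, 256, 256)         # toy patch-level heatmap
breast_logits = breast_net(img, heat)
```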
Paper on #ArXiv https://lnkd.in/ggj5Z6W
Code: https://lnkd.in/gScbpUs
Explanation: https://lnkd.in/gfa9gzM
#ai #deeplearning #radiology #model #breast #mammography
An overview of using #R for validated work:
1.) Base R #Validation for #FDA: https://lnkd.in/ep8TRM8
2.) #RStudio IDE Validation: https://lnkd.in/e34FCXn
3.) Evaluating Package Stability
4.) Evaluating Package Dependencies: https://lnkd.in/eniCXgG
5.) Organizing Packages with an Internal Repository: https://lnkd.in/etSGuk4
#rstats
We released LibriTTS, a new large-scale corpus of English speech for text-to-speech, derived from LibriSpeech: "LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech"
Dataset: http://www.openslr.org/60/
Paper: http://arxiv.org/abs/1904.02882
We released Curve-GCN, our new interactive annotation approach, which outperforms Polygon-RNN++ and is 10x faster.
Paper: https://arxiv.org/pdf/1903.06874.pdf
Video: https://www.youtube.com/watch?v=ycD2BtO-QzU
Code: https://github.com/fidler-lab/curve-gcn
A preprint for our #naacl2019 paper "Combining Sentiment Lexica with a Multi-View Variational #Autoencoder" is now online! We combine lexica that use different polarity scales via a novel multi-view VAE.
https://arxiv.org/abs/1904.02839
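Purely to illustrate the general multi-view VAE structure (not the paper's model), here is a sketch with one shared latent per word and one decoder per lexicon "view"; all dimensions are arbitrary.
```python
import torch
import torch.nn as nn

# One shared latent per word; a separate decoder reconstructs each lexicon's
# polarity score. Dimensions are arbitrary placeholders.
class MultiViewVAE(nn.Module):
    def __init__(self, n_views=3, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(n_views, 2 * z_dim)           # -> (mu, logvar)
        self.decoders = nn.ModuleList(
            nn.Linear(z_dim, 1) for _ in range(n_views))   # one per lexicon

    def forward(self, x):                                  # x: (batch, n_views)
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
        return [dec(z) for dec in self.decoders], mu, logvar

vae = MultiViewVAE()
x = torch.randn(16, 3)                     # toy polarity scores from 3 lexica
recons, mu, logvar = vae(x)
kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
```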
PaintBot: A Reinforcement Learning Approach for Natural Media Painting
Jia et al.: https://lnkd.in/ez5Vqav
#ComputerVision #PatternRecognition #ReinforcementLearning #Painting
Introduction to the math of backprop
By Deb Panigrahi: https://lnkd.in/ddtyj_U
#ArtificialIntelligence #BackPropagation #DeepLearning #NeuralNetworks
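For readers who want the chain rule in code, here is a tiny self-contained NumPy example of backprop through one hidden layer; this is standard textbook math, independent of the linked write-up.
```python
import numpy as np

# Tiny 2-layer network: x -> h = sigmoid(W1 @ x) -> y_hat = W2 @ h,
# with squared-error loss L = 0.5 * (y_hat - y)^2.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 1))                # input
y = np.array([[1.0]])                      # target
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(1, 4))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass
h = sigmoid(W1 @ x)                        # hidden activations, (4, 1)
y_hat = W2 @ h                             # output, (1, 1)
loss = 0.5 * ((y_hat - y) ** 2).item()

# Backward pass: each gradient is one application of the chain rule.
dL_dyhat = y_hat - y                       # dL/dy_hat
dL_dW2 = dL_dyhat @ h.T                    # dL/dW2
dL_dh = W2.T @ dL_dyhat                    # push the gradient to h
dL_dz1 = dL_dh * h * (1.0 - h)             # through sigmoid: s' = s * (1 - s)
dL_dW1 = dL_dz1 @ x.T                      # dL/dW1

# One gradient-descent step
lr = 0.1
W1 -= lr * dL_dW1
W2 -= lr * dL_dW2
```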
Music Transformer
Huang et al.: https://lnkd.in/dzHEH4E
#DeepLearning #Transformer #MachineLearning #SpeechProcessing #Music
How to run #PyTorch 1.0 and http://Fast.ai 1.0 on an NVIDIA Jetson Nano board ($99), an ARM Cortex-A57 board with 4GB of RAM: https://forums.fast.ai/t/share-your-work-here/27676/1274
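Assuming a working PyTorch build for the board, a quick sanity check like the following should confirm the Nano's GPU is visible:
```python
import torch

# Sanity check: is the Nano's GPU visible to this PyTorch build?
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(torch.__version__, device)

# A small matmul exercises the GPU if CUDA is available.
x = torch.randn(1024, 1024, device=device)
print((x @ x).mean().item())
```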
Four troubling trends in Machine Learning scholarship:
1. failure to distinguish between explanation and speculation;
2. failure to identify the sources of empirical gains, e.g., emphasizing unnecessary modifications to neural architectures when gains actually stem from hyper-parameter tuning;
3. mathiness: the use of mathematics that obfuscates or impresses rather than clarifies, e.g., by confusing technical and non-technical concepts; and
4. misuse of language, e.g., by choosing terms of art with colloquial connotations or by overloading established technical terms.
https://arxiv.org/abs/1807.03341
Six easy ways to run your Jupyter Notebook in the cloud
By Data School: https://lnkd.in/exbAJ-S
Understanding Neural ODEs
Blog by Jonty Sinai: https://lnkd.in/e2SEzmZ
#artificialintelligence #machinelearning #neuralnetworks
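As a rough companion to the blog post, here is a minimal PyTorch sketch of the Neural ODE idea using fixed-step Euler integration; real implementations use adaptive solvers plus the adjoint method for memory-efficient gradients, and every size below is arbitrary.
```python
import torch
import torch.nn as nn

# A network f defines autonomous dynamics dh/dt = f(h); the "forward pass"
# integrates them, here with plain fixed-step Euler.
f = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 2))

def odeint_euler(f, h0, t0=0.0, t1=1.0, steps=20):
    h = h0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(h)                  # one Euler step
    return h

h0 = torch.randn(8, 2)                     # batch of initial states
h1 = odeint_euler(f, h0)                   # state at t1, differentiable end-to-end
```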
New paper & new dataset for spoken language understanding!
Spoken language understanding (SLU) maps speech to meaning (or "intent"). (This is usually the actual end goal of speech recognition: you want to figure out what the speaker means/wants, not just what words they said.)
Paper: https://arxiv.org/abs/1904.03670
Code: https://github.com/lorenlugosch/pretrain_speech_model
Data: https://www.fluent.ai/research/fluent-speech-commands/
The conventional way to do SLU is to convert the #speech into text, and then convert the text into the intent. For a great example of this type of system, see this paper by Alice Coucke et al.: https://arxiv.org/abs/1805.10190
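As a sketch of that conventional pipeline (both stages below are toy stand-ins, not a real ASR or NLU API):
```python
# Toy stand-ins for the two stages; neither is a real ASR/NLU API.
def asr_transcribe(audio):
    return "turn on the kitchen lights"    # placeholder transcript

def nlu_parse(text):
    action = "activate" if "turn on" in text else "deactivate"
    return {"action": action, "object": "lights", "location": "kitchen"}

def conventional_slu(audio):
    # speech -> text -> intent, with an explicit text bottleneck
    return nlu_parse(asr_transcribe(audio))

print(conventional_slu(audio=None))
```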
Another approach is end-to-end SLU, where the speech is mapped to the intent through a single neural model. End-to-end SLU is simpler, maximizes the actual metric we care about (intent accuracy), and can harness info not present in the text, like prosody (e.g. sarcasm).
End-to-end #SLU is theoretically nice, but learning to understand speech totally from scratch is really hard: you need a ton of data to get it to work. Our solution: transfer learning! First, teach the model to recognize words and phonemes; then, teach it SLU.
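A minimal sketch of this pretrain-then-fine-tune recipe, assuming a GRU encoder over acoustic features; all sizes and heads below are placeholders, not the paper's model.
```python
import torch
import torch.nn as nn

# Phase 1 would train the encoder on word/phoneme targets; phase 2 reuses it
# for intent classification. All sizes are placeholders.
class AcousticEncoder(nn.Module):
    def __init__(self, n_feats=40, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_feats, hidden, batch_first=True)

    def forward(self, feats):              # feats: (batch, time, n_feats)
        out, _ = self.rnn(feats)
        return out.mean(dim=1)             # pooled utterance embedding

encoder = AcousticEncoder()
phoneme_head = nn.Linear(128, 42)          # phase 1 head (placeholder size)
intent_head = nn.Linear(128, 31)           # phase 2 head (placeholder size)

# Phase 2: optionally freeze the pretrained encoder and train only the head.
for p in encoder.parameters():
    p.requires_grad = False

feats = torch.randn(4, 100, 40)            # toy batch of acoustic features
intent_logits = intent_head(encoder(feats))
```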
Researchers at Google AI and Facebook AI Research have been doing excellent work on end-to-end SLU, but without access to their datasets, it's impossible for most people to reproduce their results or do any useful research.
So we created an SLU dataset, Fluent Speech Commands, which http://Fluent.ai is releasing for free!
It's a simple SLU task where the goal is to predict the "action", "object", and "location" for spoken commands.
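Since each utterance carries three slot labels, one natural readout (a sketch with placeholder slot counts, not necessarily the paper's) is three classification heads over a shared utterance embedding, trained with a summed per-slot loss:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Three softmax heads over a shared utterance embedding; the slot counts are
# placeholders, not the dataset's real label inventory.
class SlotHeads(nn.Module):
    def __init__(self, dim=128, n_actions=6, n_objects=14, n_locations=4):
        super().__init__()
        self.action = nn.Linear(dim, n_actions)
        self.object = nn.Linear(dim, n_objects)
        self.location = nn.Linear(dim, n_locations)

    def forward(self, emb):
        return self.action(emb), self.object(emb), self.location(emb)

heads = SlotHeads()
emb = torch.randn(4, 128)                  # toy utterance embeddings
targets = [torch.zeros(4, dtype=torch.long)] * 3   # toy labels per slot
loss = sum(F.cross_entropy(logits, t)
           for logits, t in zip(heads(emb), targets))
```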
We hope that you find our dataset, #PyTorch code, pre-trained models, and paper useful. Even if you don't want to do SLU, the dataset can be used as a good old #classification task, adding to the list of open-source #audio datasets. Enjoy!
#Facebook #AI Open-Sources #PyTorch-BigGraph Tool for "Extremely Large" Graphs
Defeats GAN: A Simpler Model Outperforms in Knowledge Representation Learning
Heng Wang & Mingzhi Mao: https://lnkd.in/eU6THn5
#MachineLearning #ArtificialIntelligence #GenerativeAdversarialNetworks
"Reinforcement Learning with Attention that Works: A Self-Supervised Approach"
Manchin et al.: https://lnkd.in/exJpZDJ
#ReinforcementLearning #DeepLearning #Visualisation