AI, Python, Cognitive Neuroscience
The talk given at the Interpretability and Robustness in Audio, Speech, and Language (IRASL) Workshop at NeurIPS 2018 is now available online: "Deep Within-Class Covariance Analysis for Robust Deep Audio Representation Learning"

#neurips2018 #neurips #irasl

🌎 Link



✴️ @AI_Python_EN
🗣 @AI_Python_Arxiv
❇️ @AI_Python
Slides from Pieter Abbeel's talk at the #NeurIPS2018 workshop on RL under Partial Observability:

https://lnkd.in/eFHdb9d

#NeurIPS #ReinforcementLearning

Exciting news from #NeurIPS – the European Laboratory for Learning and Intelligent Systems (ELLIS) has been announced! The centre will support research and help industry leverage #AI.

https://nvda.ws/2roKRfK

A Prior of a Googol Gaussians: a Tensor Ring Induced Prior for Generative Models

https://arxiv.org/abs/1910.13148

#MachineLearning #neurips #NeurIPS2019

Countdown to NeurIPS 2019 continues... (5th of 8 studies my team will present.) Many companies, from small startups to large corporate giants, are releasing Explainable AI toolkits and core features built on popular XAI methods such as LIME, SHAP, and Integrated Gradients. But this begs the question: do the explanations these XAI methods provide really reflect the decisions made by the machine learning algorithm? In this study, we introduce a measurable way to answer this question and evaluate some of the most popular XAI methods to see where they stand for deep neural networks. The results may surprise you...
#deeplearning #neurips
https://arxiv.org/abs/1910.07387
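For readers unfamiliar with the XAI methods named above, here is a minimal NumPy sketch of one of them, Integrated Gradients (not code from the paper; the function names and the toy linear model are illustrative assumptions). It approximates the path integral of gradients from a baseline to the input; for a linear model the attributions are exactly w_i * (x_i - baseline_i), which makes the behavior easy to check.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Approximate Integrated Gradients attributions.

    grad_fn: returns the gradient of the model output w.r.t. its input.
    Approximates the path integral of gradients along the straight line
    from `baseline` to `x` with a midpoint Riemann sum.
    """
    # Midpoints of `steps` equal intervals in [0, 1].
    alphas = (np.arange(steps) + 0.5) / steps
    # Gradient at each interpolated point along the path.
    grads = np.array([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    # Scale the average gradient by the input-baseline difference.
    return (x - baseline) * grads.mean(axis=0)

# Toy linear model f(x) = w . x: its gradient is constant, so the
# exact attribution for feature i is w_i * (x_i - baseline_i).
w = np.array([1.0, -2.0, 3.0])
grad_fn = lambda x: w
x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros(3)
attr = integrated_gradients(grad_fn, x, baseline)
print(attr)  # [ 1. -2.  3.]
```

For nonlinear models the sum is only an approximation, and increasing `steps` trades compute for accuracy; how faithfully such attributions reflect the model's actual decisions is exactly what the study above tries to measure.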
