AI, Python, Cognitive Neuroscience
CS224N Natural Language Processing with Deep Learning 2019
YouTube playlist:
https://www.youtube.com/playlist?list=PLoROMvodv4rOhcuXMZkNm7j3fVwBBY42z

http://onlinehub.stanford.edu/cs224 #NLProc

✴️ @AI_Python_EN
Yoshua Bengio: Research is like a random exploration guided by intuition. It's okay to fail, but what matters more is to try. At an informal event at Mila, Montreal

✴️ @AI_Python_EN
Are you a Data Scientist? Do you use Jupyter? Please help us understand how you consume content and connect with other professionals. Just answer this 3-minute survey: http://bit.ly/Jupyter-survey-1 #DataScience #MachineLearning

✴️ @AI_Python_EN
Bill Gates: A.I. is like nuclear energy - 'both promising and dangerous' (CNBC). Read more here: https://ift.tt/2uuayNC #ArtificialIntelligence #AI #DataScience #MachineLearning #BigData #DeepLearning #NLP #Robots #IoT

✴️ @AI_Python_EN
Checklist for debugging neural networks

http://bit.ly/2HSI0W5 #AI #DeepLearning #MachineLearning #DataScience

✴️ @AI_Python_EN
How to write a good machine learning tutorial.

https://bit.ly/2TFUTF6

#MachineLearning #DeepLearning

✴️ @AI_Python_EN
Machine Learning Cheat Sheet #Learning #MachineLearning

✴️ @AI_Python_EN
ArcFace: Additive Angular Margin Loss for Deep Face Recognition. The author used PyTorch 1.0, which is nice.

"We present arguably the most extensive experimental
evaluation of all the recent state-of-the-art face recognition
methods on over 10 face recognition benchmarks including
a new large-scale image database with trillion level
of pairs and a large-scale video dataset. We show that ArcFace consistently outperforms the state-of-the-art and can be easily implemented with negligible computational overhead. We release all refined training data, training codes, pre-trained models and training logs , which will help reproducet he results in this paper."

https://lnkd.in/e5Q2qP3
https://lnkd.in/ezWbVhH
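
If you just want the gist of the loss itself, here is a rough PyTorch sketch of the additive angular margin idea (not the authors' released code; the tensor names and the s/m defaults are only illustrative):

```python
# Rough sketch of an additive angular margin (ArcFace-style) loss in PyTorch.
# Not the authors' implementation; s (scale) and m (margin) are illustrative values.
import torch
import torch.nn.functional as F

def arcface_logits(embeddings, class_weights, labels, s=64.0, m=0.5):
    """Scaled logits with an additive angular margin on the ground-truth class."""
    # Cosine similarity between L2-normalised embeddings and class weight vectors.
    cosine = F.normalize(embeddings) @ F.normalize(class_weights).t()
    theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
    # Add the margin m to the angle of the target class only, then rescale by s.
    target = F.one_hot(labels, num_classes=class_weights.size(0)).bool()
    return s * torch.cos(torch.where(target, theta + m, theta))

# usage: loss = F.cross_entropy(arcface_logits(emb, W, y), y)
```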
✴️ @AI_Python_EN
This week's #machinelearning Q&A is on Underfitting vs Overfitting -

💡 How can you tell if your model is underfitting your data?

If your training and validation errors are roughly equal and both high, then your model is most likely underfitting your training data.

💡 How can you tell if your model is overfitting your data?

If your training error is low and your validation error is high, then your model is most likely overfitting your training data.
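
A minimal sketch of this check with scikit-learn on a synthetic dataset (the thresholds are only illustrative):

```python
# Compare training vs. validation error to spot under- or overfitting.
# Synthetic data and illustrative thresholds; adapt both to your own problem.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
train_err = 1 - model.score(X_train, y_train)  # error on data the model has seen
val_err = 1 - model.score(X_val, y_val)        # error on held-out data

if train_err > 0.3 and abs(val_err - train_err) < 0.05:
    print("Both errors high and similar -> likely underfitting")
elif train_err < 0.05 and val_err - train_err > 0.10:
    print("Low training error, much higher validation error -> likely overfitting")
else:
    print(f"train_err={train_err:.3f}, val_err={val_err:.3f}")
```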

👉 Do you have any favorite heuristics that you use to detect under- and overfitting in your models?

#datascience

✴️ @AI_Python_EN
#fun
✴️ @AI_Python_EN
Emoticons were born on this day in 1881 on the pages of Puck Magazine under the heading "Typographical Art," depicting four emotions: joy, melancholy, indifference, and astonishment. https://www.brainpickings.org/2012/12/21/100-diagrams-that-changed-the-world/
✴️ @AI_Python_EN
Wishes for TensorFlow/Keras 🤞

- A full merge between Keras and TF
- Make the transition from Keras to custom layers seamless
- Fewer announcements and more clarity on the existing API family
- An official experimental toolbox (similar to the fastai library)

✴️ @AI_Python_EN
Want to know why training on small data is the future? And more importantly, why Andrew named his daughter Nova? Learn why in Andrew’s chat with MIT Tech Review's Will Knight at #EmTechDigital 2019: http://bit.ly/2VdaGwO

✴️ @AI_Python_EN
Protecting your #DeepLearning models and algorithms from cyber attacks will be a key area to focus on.

Placing these in public cloud environments may severely affect your ability to protect them.

You need to be prepared to defend against such attacks.

What are these adversarial attacks?

1. l2-norm attacks: in these attacks the attacker aims to minimize squared error between the adversarial and original image. These typically result in a very small amount of noise added to the image.
2. l∞-norm attacks: this is perhaps the simplest class of attacks, which aim to limit or minimize the amount that any pixel is perturbed in order to achieve an adversary's goal.
3. l0-norm attacks: these attacks minimize the number of modified pixels in the image.

Below is an example of an l2-norm attack, where the image on the left is classified as a jeep but the one on the right as a minivan.
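
To make the l∞ family concrete, here is a minimal FGSM-style sketch in PyTorch (model, image and label are hypothetical placeholders, not the setup behind the jeep/minivan example):

```python
# Minimal sketch of an l-infinity (FGSM-style) adversarial perturbation in PyTorch.
# `model`, `image` and `label` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Perturb `image` by at most `epsilon` per pixel (l-infinity constraint)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by epsilon in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```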

#cyberattacks #algorithms #models #deeplearning

✴️ @AI_Python_EN
You want to be a data scientist ...?
First read this excellent tutorial by https://lnkd.in/eKrDyhN: "How sure are we? Two approaches to statistical inference"
https://lnkd.in/e5JBrN4

✴️ @AI_Python_EN
Download the new Unified Analytics for Dummies eBook to learn how companies are bringing together Data Science and Data Engineering to solve more business problems. https://lnkd.in/gwYe6Jp
✴️ @AI_Python_EN
A super interesting paper on image search and multilingual word embeddings.
"Image search using multilingual texts: a cross-modal learning approach between image and text"
https://lnkd.in/eBwwNne

✴️ @AI_Python_EN
Are Deep Neural Networks Dramatically Overfitted?


deep-neural-networks-

#deepneuralnetworks

✴️ @AI_Python_EN