AI, Python, Cognitive Neuroscience
3.88K subscribers
#NLP is among the hottest and most interesting fields in #datascience. Check out these 5 in-depth and hands-on tutorials to learn NLP:

β€’ The Essential NLP Guide to Solve Top 10 Common NLP Tasks - https://bit.ly/2QCCgR1
β€’ Practical Tutorial for Regular Expressions in #Python - https://bit.ly/2QBChVi
β€’ A Gentle Introduction to #TopicModeling - https://bit.ly/2QCCh7x
β€’ Comprehensive and Intuitive Guide to #WordEmbeddings - https://bit.ly/2VKR4Av
β€’ #TextClassification using ULMFiT and fastai Library in Python - https://bit.ly/2VHHEGa
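Regular expressions come up constantly in NLP preprocessing. As a quick taste of what the second tutorial covers, here's a minimal sketch (standard-library Python only; the hashtag pattern is just an illustrative choice):

```python
import re

def extract_hashtags(text):
    """Return all #hashtags in a piece of text (letters, digits, underscores)."""
    return re.findall(r"#\w+", text)

post = "Learning #NLP and #TopicModeling with #Python"
print(extract_hashtags(post))  # ['#NLP', '#TopicModeling', '#Python']
```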

And test your #NaturalLanguageProcessing knowledge on this challenging question set!

β€’ 30 Questions to test a data scientist on Natural Language Processing - https://bit.ly/2jfGGyT

πŸ—£ @AI_Python_Arxiv
✴️ @AI_Python_EN
There are hundreds of model types in machine learning, but a handful of algorithms are used far more often than the rest, usually for their balance of accuracy and simplicity #MachineLearning:

- - -
1. Logistic Regression
https://lnkd.in/gJ2BwhD

2. Decision Trees
https://lnkd.in/gwadA-p

3. Random Forests
https://lnkd.in/gRYHcvt

4-5. Neural Networks (RNN and CNN)
https://lnkd.in/gZQhWyv

6. Bayesian Techniques
https://lnkd.in/gY3qVYP

7. Support Vector Machines
https://lnkd.in/gWJKRyn

8. XGBoost
https://lnkd.in/gv85yDV

9. Light GBM
https://lnkd.in/gTBUtN4

10. Catboost
https://lnkd.in/gFPzuTx

11. Greedy Boost
https://lnkd.in/ghG-giR

12. Elastic Net
https://lnkd.in/g-NMjPb

13. Vowpal Wabbit
https://lnkd.in/g2W9qbD

These resources go into great detail and explain the concepts in a simple way!
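For a feel of how the first algorithm on the list works under the hood, here is a from-scratch sketch of logistic regression trained by plain gradient descent (toy data, numpy only; in practice a library like scikit-learn does all of this for you):

```python
import numpy as np

def train_logistic_regression(X, y, lr=0.1, epochs=2000):
    """Fit weights w and bias b by gradient descent on the mean log-loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        grad_w = X.T @ (p - y) / len(y)          # gradient of mean log-loss w.r.t. w
        grad_b = np.mean(p - y)                  # gradient w.r.t. b
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy, linearly separable data: class 1 when x0 + x1 > 1
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 1).astype(float)
w, b = train_logistic_regression(X, y)
acc = np.mean(((X @ w + b) > 0) == (y == 1))
print(f"training accuracy: {acc:.2f}")
```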

#artificialintelligence #datascience #python #statistics

πŸ—£ @AI_Python_Arxiv
✴️ @AI_Python_EN
Exploring Quantum Neural Networks

#NeuralNetworks #Quantum

https://bit.ly/2VLVqaP

πŸ—£ @AI_Python_Arxiv
✴️ @AI_Python_EN
The videos of our NeurIPS workshop on security in machine learning are now up. If you were not able to attend in person, you can now watch all of the contributed and invited talks! The playlist with all of the talks is here:
https://www.youtube.com/playlist?list=PLFG9vaKTeJq4IpOje38YWA9UQu_COeNve

πŸ—£ @AI_Python_Arxiv
✴️ @AI_Python_EN
Hyper-parameters of Machine Learning algorithms

#machinelearning #datascience #deeplearning #statistics #algorithms
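As a concrete illustration of what hyper-parameter tuning looks like, here is a minimal hold-out grid search over a single hyper-parameter, polynomial degree (numpy only; the data is synthetic and the setup is just a sketch):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, 100)
y = 2 * x**2 - x + rng.normal(0, 0.05, 100)  # true curve is quadratic

# Hold out a validation split: tune on train, score on validation
x_tr, y_tr = x[:70], y[:70]
x_val, y_val = x[70:], y[70:]

degrees = [1, 2, 3, 4, 5]
val_errors = {}
for d in degrees:
    coeffs = np.polyfit(x_tr, y_tr, d)   # fit on the training split
    pred = np.polyval(coeffs, x_val)     # predict on the validation split
    val_errors[d] = np.mean((pred - y_val) ** 2)

best = min(val_errors, key=val_errors.get)
print("validation MSE per degree:", val_errors)
print("best degree:", best)
```

The same pattern (candidate grid, train/validation split, pick the lowest validation error) scales up to tools like scikit-learn's GridSearchCV.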

πŸ—£ @AI_Python_Arxiv
✴️ @AI_Python_EN
View Computer Musings, lectures given by Donald E. Knuth, Professor Emeritus of the Art of Computer Programming at Stanford University. The Stanford Center for Professional Development has digitized more than one hundred tapes of Knuth's musings, lectures, and selected classes and posted them online. These archived tapes resonate not only with his thoughts, but with insights from students, audience members, and other luminaries in mathematics and computer science. They are available to the public free of charge.

https://www.youtube.com/playlist?list=PL94E35692EB9D36F3

πŸ—£ @AI_Python_Arxiv
✴️ @AI_Python_EN
The first lecture on Deep Learning Basics is up on YouTube (see link). It's an introductory overview of the fundamentals of deep learning.

https://www.youtube.com/watch?v=O5xeyoRL95U

Slides for this lecture:

https://www.dropbox.com/s/c0g3sc1shi63x3q/deep_learning_basics.pdf

Website: https://deeplearning.mit.edu/
GitHub repo with tutorials: https://github.com/lexfridman/mit-deep-learning

For those around MIT, the course is open to all. It runs every day in January at 3pm.

https://towardsdatascience.com/the-abcs-of-machine-learning-experts-who-are-driving-the-world-in-ai-2995a8115bea

πŸ—£ @AI_Python_Arxiv
✴️ @AI_Python_EN
*** Data Science: How Data Scientists Bias Machine Learning ***
~ There are many ways data scientists can bias machine learning.
~ Here are the top human failings:
1. The square-peg bias. Choosing the wrong data set simply because it's the data you already have.
2. Sampling bias. Your sample is supposed to represent the population under study, but sometimes you draw incorrectly from the right population, or draw from the wrong population altogether.
3. Bias-variance trade-off. You may introduce bias by overcorrecting for variance. If your model is too sensitive to variance, small fluctuations cause it to model random noise; adding too much bias to correct this makes it miss real complexity.
4. Measurement bias. This is when the instrument you use to collect the data has built-in bias, say, a scale that incorrectly overestimates weight.
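Sampling bias (point 2) is easy to demonstrate. Here is a small simulation (standard library only, the numbers are made up) where drawing from only part of the population skews the estimate:

```python
import random
import statistics

random.seed(1)

# Population with two subgroups of different means (e.g. two customer segments)
group_a = [random.gauss(50, 5) for _ in range(5000)]
group_b = [random.gauss(80, 5) for _ in range(5000)]
population = group_a + group_b

true_mean = statistics.mean(population)   # roughly 65

# Unbiased: sample uniformly from the whole population
fair = random.sample(population, 500)

# Biased: sample only from group A (drawing from the wrong population)
biased = random.sample(group_a, 500)

print(f"true mean:     {true_mean:.1f}")
print(f"fair sample:   {statistics.mean(fair):.1f}")   # close to the true mean
print(f"biased sample: {statistics.mean(biased):.1f}") # close to 50, far off
```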


πŸ—£ @AI_Python_Arxiv
✴️ @AI_Python_EN
Top 10 #deeplearning research papers as per this website
https://lnkd.in/dPYayt9

Of course the choice remains biased, but we like these, alongside a few hundred other papers.

Remember, it is not the popular but the meaningful and industry-relevant research that is worth paying attention to.

Here's the list:

1. Universal Language Model Fine-tuning for Text Classification
https://lnkd.in/dhj5SyM

2. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
https://lnkd.in/d44kt3Q

3. Deep Contextualized Word Representations
https://lnkd.in/dkP68Fb

4. An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling
https://lnkd.in/dAhYzge

5. Delayed Impact of Fair Machine Learning
https://lnkd.in/dvTvG2s

6. World Models

7. Taskonomy: Disentangling Task Transfer Learning
https://lnkd.in/dYxMjAd

8. Know What You Don’t Know: Unanswerable Questions for SQuAD
https://lnkd.in/d--grME

9. Large Scale GAN Training for High Fidelity Natural Image Synthesis
https://lnkd.in/dY6psf4

10. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://lnkd.in/dgtnD7n
#machinelearning #research #deeplearning #artificialintelligence

πŸ—£ @AI_Python_Arxiv
✴️ @AI_Python_EN
Looking to become a data scientist?

Always remember this: data science isn't just about the math. It's about solving problems.

And the most difficult (and valuable) data science problems involve INTEGRATION.

The big wins with data science are not using machine learning to solve already-tractable problems in a more automated way (that’s nice, but not revolutionary).

The big wins come from integrating data science with the rest of the business. They come from taking many different data sources across many parts of your customer’s journey (or business process) and optimizing across the entire experience.

It means going outside the 4 walls that define a customer and understanding their life - understanding their human journey - and helping to improve it.

That is where we see the big wins.

So when you think about data science, think about *integration* and you'll be a lot more successful.

#datascience #machinelearning #innovation #integration

πŸ—£ @AI_Python_Arxiv
✴️ @AI_Python_EN
Here the authors propose an adversarial contextual model for detecting moving objects in images.

A deep neural network is trained to predict the optical flow in a region using information from everywhere else but that region (context), while another network attempts to make such context as uninformative as possible.

The result is a model where hypotheses naturally compete with no need for explicit regularization or hyper-parameter tuning.

This method requires no supervision whatsoever, yet it outperforms several methods that are pre-trained on large annotated datasets.

Paper #arxiv link : https://lnkd.in/dhCxbik
#machinelearning #deeplearning

πŸ—£ @AI_Python_Arxiv
✴️ @AI_Python_EN
"Standard" statistical methods such as regression, cluster and factor analysis all require numerous decisions, many of which are judgmental.

Subject matter knowledge (e.g., marketing), project background and knowing who will use the results, and how and when they will be used are consequential.

Stats cannot be done just by the numbers, even when called machine learning, as these three methods frequently are.

AI can mean anything these days but often refers to some form of artificial neural network (#ANN). Form is the operative word here because, like regression, cluster and factor analysis, ANNs come in many shapes, sizes and flavors and cannot be done just by the numbers either. See the link under Comment.

Humans design AI and must make many decisions, some of which are quite subjective. Different AI applied to identical data will not give us identical results. This is no different from statistics.

Moreover, today's AI are task-specific: Alpha Go (Go) and Alpha Zero (chess) are different programs and neither can drive a car or read an MRI scan. Or do regression, cluster or factor analysis.

πŸ—£ @AI_Python_Arxiv
✴️ @AI_Python_EN
Another sneak preview into TensorFlow 2.0. This is what the new architecture will look like:

1. tf.data will replace the queue runners
2. Easy model building with tf.keras and estimators
3. Run and debug with eager execution
4. Distributed training on either CPU, GPU or TPU
5. Export models to SavedModel and deploy them via TF Serving, TF Lite, TF.js, etc.

I really can't wait to test all the new things out.

#deeplearning #machinelearning

Article: https://lnkd.in/drz7FyV

πŸ—£ @AI_Python_Arxiv
✴️ @AI_Python_EN
Check out this post by Adrian Rosebrock on how to get started in Machine Learning with Python. Read the full article here: https://lnkd.in/ghrNn29
#MachineLearning #DeepLearning #Python

πŸ—£ @AI_Python_Arxiv
✴️ @AI_Python_EN
I finally watched Joel Grus's talk at JupyterCon 2018. He's the guy who doesn't like notebooks, in particular Jupyter notebooks. Although I don't agree with everything he says, he makes some good points about reproducible research. His tips are actually pretty useful for data scientists who want to get stronger in software engineering. Things like modularity, code testing, proper linting, and dependency management are also very important for my team and me. We use them all the time, but despite that we still all love our notebooks ❀️. Check out the video on YouTube. It's pretty long but very informative and super funny.
#datascience #machinelearning

Slides: https://lnkd.in/dRn4VvQ
Youtube video: https://lnkd.in/dgemtdW
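One of his concrete points, pulling logic out of notebook cells into plain importable functions with tests, can be sketched like this (a hypothetical example, not from the talk):

```python
# Instead of computing inline in a notebook cell, wrap the logic in a
# function that can be imported and tested outside the notebook.
def normalize(values):
    """Scale a list of numbers to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if lo == hi:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# A test like this can live in tests/test_utils.py and run under pytest
def test_normalize():
    assert normalize([2, 4, 6]) == [0.0, 0.5, 1.0]

test_normalize()
print("tests passed")
```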

πŸ—£ @AI_Python_Arxiv
✴️ @AI_Python_EN
DeepFlash is a nice application of auto-encoders: the authors trained a neural network to turn a flash selfie into a studio portrait. It's an interesting paper addressing a real need, I seriously mean it! They also tested their results against other approaches like pix2pix and style transfer. At first glance I had the feeling that pix2pix performed better than their suggested approach, but their evaluation metrics (SSIM and PSNR) proved me wrong.
#deeplearning #machinelearning

Paper: https://lnkd.in/eHM5rRx
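PSNR, one of the two metrics they report, is simple to compute yourself. A minimal sketch for 8-bit images (standard library only; images are given as flat lists of pixel values here for simplicity):

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized 8-bit images.
    Higher means more similar; identical images give infinity."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)

a = [100, 120, 140, 160]
b = [110, 120, 130, 160]
print(f"PSNR: {psnr(a, b):.2f} dB")  # MSE = 50, so 10*log10(255^2/50) β‰ˆ 31.14
```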

πŸ—£ @AI_Python_Arxiv
✴️ @AI_Python_EN