There are hundreds of model types in machine learning; these are the most frequently used algorithms, often chosen for their balance of accuracy and simplicity #MachineLearning:
- - -
1. Logistic Regression
https://lnkd.in/gJ2BwhD
2. Decision Trees
https://lnkd.in/gwadA-p
3. Random Forests
https://lnkd.in/gRYHcvt
4-5. Neural Networks (RNN and CNN)
https://lnkd.in/gZQhWyv
6. Bayesian Techniques
https://lnkd.in/gY3qVYP
7. Support Vector Machines
https://lnkd.in/gWJKRyn
8. XGBoost
https://lnkd.in/gv85yDV
9. LightGBM
https://lnkd.in/gTBUtN4
10. CatBoost
https://lnkd.in/gFPzuTx
11. Greedy Boost
https://lnkd.in/ghG-giR
12. Elastic Net
https://lnkd.in/g-NMjPb
13. Vowpal Wabbit
https://lnkd.in/g2W9qbD
Each resource goes into great detail and explains the concepts in a simple way!
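As a rough sketch of how a few of these compare in practice (my illustration, not part of the original list; assumes scikit-learn is available):

# Compare a few of the algorithms above on a synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(),
}

# Simpler models (e.g. logistic regression) often trade a little accuracy
# for interpretability; ensembles usually score higher out of the box.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")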
#artificialintelligence #datascience #python #statistics
📣 @AI_Python_Arxiv
@AI_Python_EN
Exploring Quantum Neural Networks
#NeuralNetworks #Quantum
https://bit.ly/2VLVqaP
📣 @AI_Python_Arxiv
@AI_Python_EN
Elon Musk Releases a Photo of His Latest Rocket, And It's Straight Out of Science Fiction
📣 @AI_Python_Arxiv
@AI_Python_EN
ScienceAlert: "Elon Musk Releases a Photo of His Latest Rocket, And It's Very Silver" - Elon Musk has published a photo of an experimental rocket meant to help him achieve his mission of conquering Mars.
The videos of our NeurIPSConf workshop on security in machine learning are now up. You can now watch all of the contributed and invited talks if you were not able to attend in person! Playlist with all of the talks is here:
https://www.youtube.com/playlist?list=PLFG9vaKTeJq4IpOje38YWA9UQu_COeNve
📣 @AI_Python_Arxiv
@AI_Python_EN
Hyper-parameters of Machine Learning algorithms
#machinelearning #datascience #deeplearning #statistics #algorithms
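As a quick illustration of what tuning these looks like in practice (my sketch, not from the post; the grid below is a made-up example), using scikit-learn's GridSearchCV on a random forest:

# Search a small, hypothetical hyper-parameter grid for a random forest.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, random_state=0)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 5],
}

search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)

Real grids depend heavily on the algorithm and the data; this only shows the mechanics.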
📣 @AI_Python_Arxiv
@AI_Python_EN
Watch Computer Musings, lectures given by Donald E. Knuth, Professor Emeritus of the Art of Computer Programming at Stanford University. The Stanford Center for Professional Development has digitized more than one hundred tapes of Knuth's musings, lectures, and selected classes and posted them online. These archived tapes resonate not only with his thoughts but also with insights from students, audience members, and other luminaries in mathematics and computer science. They are available to the public free of charge.
https://www.youtube.com/playlist?list=PL94E35692EB9D36F3
📣 @AI_Python_Arxiv
@AI_Python_EN
First lecture on Deep Learning Basics is up on YouTube (see link). It's an introductory overview of the basics of deep learning.
https://www.youtube.com/watch?v=O5xeyoRL95U
Slides for this lecture:
https://www.dropbox.com/s/c0g3sc1shi63x3q/deep_learning_basics.pdf
Website: https://deeplearning.mit.edu/
GitHub repo with tutorials: https://github.com/lexfridman/mit-deep-learning
For those around MIT, the course is open to all. It runs every day in January at 3pm.
https://towardsdatascience.com/the-abcs-of-machine-learning-experts-who-are-driving-the-world-in-ai-2995a8115bea
📣 @AI_Python_Arxiv
@AI_Python_EN
*** Data Science: How Data Scientists Bias Machine Learning ***
~ There are many ways data scientists can bias machine learning.
~ Here are the top human failings:
1. The square peg bias. This is where you just choose the wrong data set because it's what you have.
2. Sampling bias. You choose your data to represent the population under study. Sometimes, you draw incorrectly from the right population, or draw from the wrong population.
3. Bias-variance trade-off. You may introduce bias by overcorrecting for variance. If your model is too sensitive to variance, small fluctuations can cause it to model random noise; overcorrect with too much bias and the model will miss real complexity (see the sketch after this list).
4. Measurement bias. This is when the instrument you use to collect the data has built-in bias, say, a scale that incorrectly overestimates weight.
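Here is a minimal sketch of point 3 (my illustration, assuming scikit-learn): a degree-1 polynomial underfits (high bias), while a degree-15 polynomial chases noise (high variance).

# Fit polynomials of increasing degree to noisy sine data and compare
# cross-validated error: too simple underfits, too flexible overfits.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
X = rng.uniform(0, 1, size=(50, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(scale=0.2, size=50)

for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"degree {degree}: CV MSE = {mse:.3f}")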
📣 @AI_Python_Arxiv
@AI_Python_EN
Top 10 #deeplearning research papers as per this website
https://lnkd.in/dPYayt9
Of course the choice remains biased, but we do like these, alongside a few hundred other papers.
Remember, it is not the popular but the meaningful, industry-relevant research that is worth paying attention to.
Here's the list:
1. Universal Language Model Fine-tuning for Text Classification
https://lnkd.in/dhj5SyM
2. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
https://lnkd.in/d44kt3Q
3. Deep Contextualized Word Representations
https://lnkd.in/dkP68Fb
4. An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling
https://lnkd.in/dAhYzge
5. Delayed Impact of Fair Machine Learning
https://lnkd.in/dvTvG2s
6. World Models
7. Taskonomy: Disentangling Task Transfer Learning
https://lnkd.in/dYxMjAd
8. Know What You Don't Know: Unanswerable Questions for SQuAD
https://lnkd.in/d--grME
9. Large Scale GAN Training for High Fidelity Natural Image Synthesis
https://lnkd.in/dY6psf4
10. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://lnkd.in/dgtnD7n
#machinelearning #research #deeplearning #artificialintelligence
📣 @AI_Python_Arxiv
@AI_Python_EN
Looking to become a data scientist?
Always remember this: data science isn't just about the math. It's about solving problems.
And the most difficult (and valuable) data science problems involve INTEGRATION.
The big wins with data science are not using machine learning to solve already-tractable problems in a more automated way (that's nice, but not revolutionary).
The big wins come from integrating data science with the rest of the business. They come from taking many different data sources across many parts of your customer's journey (or business process) and optimizing across the entire experience.
It means going outside the 4 walls that define a customer and understanding their life - understanding their human journey - and helping to improve it.
That is where we see the big wins.
So when you think about data science, think about *integration* and you'll be a lot more successful.
#datascience #machinelearning #innovation #integration
📣 @AI_Python_Arxiv
@AI_Python_EN
Here the authors propose an adversarial contextual model for detecting moving objects in images.
A deep neural network is trained to predict the optical flow in a region using information from everywhere else but that region (context), while another network attempts to make such context as uninformative as possible.
The result is a model where hypotheses naturally compete with no need for explicit regularization or hyper-parameter tuning.
This method requires no supervision whatsoever, yet it outperforms several methods that are pre-trained on large annotated datasets.
Paper #arxiv link : https://lnkd.in/dhCxbik
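For intuition, here is a heavily simplified PyTorch sketch of the adversarial setup (my reading of the summary above, not the authors' code; architectures, sizes, and the update scheme are all placeholders, and the real model constrains the mask rather than letting it grow unchecked):

import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

inpainter = SmallCNN(3, 2)   # predicts flow from the masked-out context
mask_gen = SmallCNN(2, 1)    # proposes a soft mask from the flow

opt_i = torch.optim.Adam(inpainter.parameters(), lr=1e-4)
opt_m = torch.optim.Adam(mask_gen.parameters(), lr=1e-4)

frame = torch.randn(1, 3, 64, 64)  # stand-in for an input frame
flow = torch.randn(1, 2, 64, 64)   # stand-in for precomputed optical flow

for step in range(100):
    # Inpainter: minimize flow reconstruction error inside the mask,
    # given only the context (everything outside the mask).
    mask = torch.sigmoid(mask_gen(flow)).detach()
    pred = inpainter(frame * (1 - mask))
    err = ((pred - flow) ** 2 * mask).mean()
    opt_i.zero_grad(); err.backward(); opt_i.step()

    # Mask generator: maximize the same error, i.e. find the region
    # whose flow the context is least able to explain (a moving object).
    mask = torch.sigmoid(mask_gen(flow))
    pred = inpainter(frame * (1 - mask))
    err = ((pred - flow) ** 2 * mask).mean()
    opt_m.zero_grad(); (-err).backward(); opt_m.step()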
#machinelearning #deeplearning
📣 @AI_Python_Arxiv
@AI_Python_EN
"Standard" statistical methods such as regression, cluster and factor analysis all require numerous decisions, many of which are judgmental.
Subject matter knowledge (e.g., marketing), project background and knowing who will use the results, and how and when they will be used are consequential.
Stats cannot be done just by the numbers, even when called machine learning, as these three methods frequently are.
AI can mean anything these days but often refers to some form of artificial neural network (#ANN). Form is the operative word here because, like regression, cluster and factor analysis, ANNs come in many shapes, sizes and flavors and cannot be done just by the numbers either. See the link under Comment.
Humans design AI and must make many decisions, some of which are quite subjective. Different AI applied to identical data will not give us identical results. This is no different from statistics.
Moreover, today's AI systems are task-specific: AlphaGo (Go) and AlphaZero (chess) are different programs, and neither can drive a car or read an MRI scan. Or do regression, cluster or factor analysis.
📣 @AI_Python_Arxiv
@AI_Python_EN
Another sneak preview of TensorFlow 2.0. This is what the new architecture will look like (a sketch follows the list):
1. tf.data will replace the queue runners
2. Easy model building with tf.keras and estimators
3. Run and debug with eager execution
4. Distributed training on either CPU, GPU or TPU
5. Export models to SavedModel and deploy them via TF Serving, TF Lite, TF.js, etc.
I really can't wait to try all the new features out.
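Here's roughly what that workflow might look like in code (a sketch based on the announced APIs; details could still change before the release):

import tensorflow as tf

# 1. A tf.data pipeline replaces the old queue runners.
features = tf.random.normal([256, 10])
labels = tf.random.uniform([256], maxval=2, dtype=tf.int32)
data = tf.data.Dataset.from_tensor_slices((features, labels)).batch(32)

# 2. Model building with tf.keras.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# 3. Eager execution is the default, so this runs and debugs step by step.
# 4. Distribution would wrap this in a tf.distribute strategy (omitted here).
model.fit(data, epochs=2)

# 5. Export as a SavedModel for TF Serving, TF Lite, TF.js, etc.
model.save("exported_model")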
#deeplearning #machinelearning
Article: https://lnkd.in/drz7FyV
📣 @AI_Python_Arxiv
@AI_Python_EN
Check out this post by Adrian Rosebrock on how to get started in machine learning with Python. Read the full article here: https://lnkd.in/ghrNn29
#MachineLearning #DeepLearning #Python
📣 @AI_Python_Arxiv
@AI_Python_EN
I finally watched Joel Grus's talk at JupyterCon 2018. He's the guy who doesn't like notebooks, in particular Jupyter notebooks. Although I don't agree with everything he says, he makes some good points about reproducible research. His tips are actually pretty useful for data scientists who want to get stronger in software engineering. Things like modularity, code testing, proper linting, dependency management, etc. are also very important for my team and me. We actually make use of them all the time, but despite that we still all love our notebooks ❤️. Check out the video on YouTube. It's pretty long but very informative and super funny.
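In that spirit, a toy example of what he's advocating (my sketch, not from the talk): pull logic out of the notebook into a module and test it.

# metrics.py - notebook logic moved into an importable module.
def accuracy(y_true, y_pred):
    """Fraction of positions where the labels match."""
    if len(y_true) != len(y_pred):
        raise ValueError("length mismatch")
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# test_metrics.py - run with pytest; the notebook just imports accuracy.
def test_accuracy():
    assert accuracy([1, 0, 1], [1, 1, 1]) == 2 / 3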
#datascience #machinelearning
Slides: https://lnkd.in/dRn4VvQ
Youtube video: https://lnkd.in/dgemtdW
📣 @AI_Python_Arxiv
@AI_Python_EN
Deep Learning in Radiology
Getting Started: https://lnkd.in/efeU8vv
#artificialintelligence #deeplearning #machinelearning
📣 @AI_Python_Arxiv
@AI_Python_EN
DeepFlash is a nice application of auto-encoders: the authors trained a neural network to turn a flash selfie into a studio portrait. It's an interesting paper addressing a real need (I seriously mean it!). They've also tested their results against other approaches like pix2pix and style transfer. At first glance I had the feeling that pix2pix performed better than their suggested approach, but their evaluation metrics (SSIM and PSNR) proved me wrong.
#deeplearning #machinelearning
Paper: https://lnkd.in/eHM5rRx
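If you want to see how those two metrics behave, here's a minimal sketch using scikit-image (my example, not from the paper):

# Compare a "model output" to a reference image with SSIM and PSNR.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((64, 64))  # stand-in for a studio portrait
output = np.clip(reference + 0.05 * rng.random((64, 64)), 0, 1)  # stand-in output

# Higher is better for both metrics.
print("SSIM:", structural_similarity(reference, output, data_range=1.0))
print("PSNR:", peak_signal_noise_ratio(reference, output, data_range=1.0))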
📣 @AI_Python_Arxiv
@AI_Python_EN
Check out the new "Machine Learning Guide for 2019", which includes 20 free resources (blogs & videos) to learn machine learning: https://lnkd.in/ejqejpA, by the Open Data Science Conference (ODSC) team.
#BigData #DataScience #DataScientists #AI #DeepLearning
📣 @AI_Python_Arxiv
@AI_Python_EN