Here the authors propose an adversarial contextual model for detecting moving objects in images.
A deep neural network is trained to predict the optical flow in a region using information from everywhere else but that region (context), while another network attempts to make such context as uninformative as possible.
The result is a model where hypotheses naturally compete with no need for explicit regularization or hyper-parameter tuning.
This method requires no supervision whatsoever, yet it outperforms several methods that are pre-trained on large annotated datasets.
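Read as a two-player game, the training objective is roughly a min-max of the following form (my paraphrase of the setup above, not the paper's exact loss):
\min_{\phi} \max_{\theta} \; \mathbb{E}\big[ \, \| m_\theta \odot ( f - \hat{f}_\phi( (1 - m_\theta) \odot f ) ) \| \, \big]
where f is the optical flow, m_\theta is the candidate object mask produced by the adversary, and \hat{f}_\phi is the flow predicted inside the mask from the context outside it.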
Paper #arxiv link: https://lnkd.in/dhCxbik
#machinelearning #deeplearning
"Standard" statistical methods such as regression, cluster and factor analysis all require numerous decisions, many of which are judgmental.
Subject-matter knowledge (e.g., marketing), project background, and knowing who will use the results, and how and when they will be used, are all consequential.
Stats cannot be done just by the numbers, even when called machine learning, as these three methods frequently are.
AI can mean anything these days but often refers to some form of artificial neural network (#ANN). Form is the operative word here because, like regression, cluster and factor analysis, ANNs come in many shapes, sizes and flavors and cannot be done just by the numbers either. See the link under Comment.
Humans design AI and must make many decisions, some of which are quite subjective. Different AI applied to identical data will not give us identical results. This is no different from statistics.
Moreover, today's AI systems are task-specific: AlphaGo (Go) and AlphaZero (chess) are different programs, and neither can drive a car or read an MRI scan. Or do regression, cluster or factor analysis.
Another sneak preview of TensorFlow 2.0. This is what the new architecture will look like (a rough code sketch follows the list):
1. tf.data will replace the queue runners
2. Easy model building with tf.keras and estimators
3. Run and debug with eager execution
4. Distributed training on CPU, GPU, or TPU
5. Export models to SavedModel and deploy them via TF Serving, TF Lite, TF.js, etc.
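A minimal sketch of that workflow, assuming the TF 2.0 APIs previewed so far (layer sizes and the synthetic data are made up for illustration):

import tensorflow as tf

# 1. Input pipeline with tf.data (instead of queue runners); synthetic data for illustration
features = tf.random.normal((1000, 32))
labels = tf.random.uniform((1000,), maxval=10, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).shuffle(1000).batch(64)

# 2. Model building with tf.keras
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# 3. Eager execution is the default in 2.0, so ops run (and can be debugged) immediately
# 4. Wrapping the same code in a tf.distribute strategy scales it across GPUs/TPUs
model.fit(dataset, epochs=2)

# 5. Export to SavedModel for deployment via TF Serving, TF Lite, or TF.js
tf.saved_model.save(model, "/tmp/tf2_preview_model")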
I really can't wait to try all the new things out.
#deeplearning #machinelearning
Article: https://lnkd.in/drz7FyV
Check out this post by Adrian Rosebrock on how to get started in machine learning with Python. Read the full article here: https://lnkd.in/ghrNn29
#MachineLearning #DeepLearning #Python
I finally watched Joel Grus's talk at JupyterCon 2018. He's the guy who doesn't like notebooks, in particular Jupyter notebooks. Although I don't agree with everything he says, he makes some good points about reproducible research. His tips are actually pretty useful for data scientists who want to get stronger in software engineering. Things like modularity, testing code, proper linting, dependency management, etc. are also very important for my team and me. We actually make use of them all the time, but despite that we still all love our notebooks ❤️. Check out the video on YouTube. It's pretty long but very informative and super funny.
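As a tiny illustration of the modularity and testing point (the file names and the metric are my own example, not from the talk), notebook logic can be pulled into a module and covered by a pytest test:

# metrics.py - logic moved out of a notebook cell into an importable module
def accuracy(y_true, y_pred):
    """Fraction of predictions that exactly match the labels."""
    if len(y_true) != len(y_pred):
        raise ValueError("y_true and y_pred must have the same length")
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# test_metrics.py - runs with pytest, independent of any notebook state
from metrics import accuracy

def test_accuracy():
    assert accuracy([1, 0, 1], [1, 1, 1]) == 2 / 3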
#datascience #machinelearning
Slides: https://lnkd.in/dRn4VvQ
Youtube video: https://lnkd.in/dgemtdW
Deep Learning in Radiology
Getting Started: https://lnkd.in/efeU8vv
#artificialintelligence #deeplearning #machinelearning
DeepFlash is a nice application of auto-encoders, where a neural network is trained to turn a flash selfie into a studio portrait. It's an interesting paper addressing a real need, I seriously mean it! They've also tested their results against other approaches like pix2pix, style transfer, etc. At first glance I had the feeling that pix2pix performed better than their suggested approach, but their evaluation metrics (SSIM and PSNR) proved me wrong.
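For reference, a quick sketch of how those two metrics can be computed with scikit-image (the file names are placeholders; recent versions expose the functions under skimage.metrics):

import numpy as np
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical files: a generated portrait and its ground-truth studio reference
generated = io.imread("generated_portrait.png").astype(np.float64) / 255.0
reference = io.imread("studio_portrait.png").astype(np.float64) / 255.0

psnr = peak_signal_noise_ratio(reference, generated, data_range=1.0)
ssim = structural_similarity(reference, generated, data_range=1.0, channel_axis=-1)  # channel_axis=-1 for RGB
print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.3f}")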
#deeplearning #machinelearning
Paper: https://lnkd.in/eHM5rRx
Check out the new "Machine Learning Guide for 2019" by the Open Data Science Conference (ODSC) team, which includes 20 free resources (blogs and videos) for learning machine learning: https://lnkd.in/ejqejpA
#BigData #DataScience #DataScientists #AI #DeepLearning
When algorithms surprise us
Blog by Janelle Shane: https://lnkd.in/dQnCVa9
Original paper: https://lnkd.in/dt63hJR
#algorithm #artificialintelligence #machinelearning #reinforcementlearning #technology
How do you go from self-play to the real world? Transfer learning.
NeurIPS 2017 Meta Learning Symposium: https://lnkd.in/e7MdpPc
"I think transfer learning is the key to general intelligence. And I think the key to doing transfer learning will be the acquisition of conceptual knowledge that is abstracted away from perceptual details of where you learned it from." β Demis Hassabis
#artificialintelligence #deeplearning #metalearning #reinforcementlearning #selfplay
Google's Artificial Intelligence And Machine Learning Research Priorities: Freelancers, Take Note
https://www.forbes.com/sites/jonyounger/2019/01/16/googles-ai-and-ml-research-priorities-freelancers-take-note/#52abed10344c
How to Use the Pre-Trained VGG Model to Classify Objects in Photographs. Discover the VGG convolutional neural network models for image classification: https://buff.ly/2GadALk
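A rough sketch of what that looks like with the VGG16 model bundled in Keras (the photo file name is a placeholder):

import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing.image import load_img, img_to_array

model = VGG16(weights="imagenet")                     # pretrained on ImageNet (1000 classes)
img = load_img("photo.jpg", target_size=(224, 224))   # VGG16 expects 224x224 RGB input
x = preprocess_input(np.expand_dims(img_to_array(img), axis=0))
preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])            # top-3 (class_id, label, probability) tuples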
#AI #DeepLearning #NeuralNetworks
TOP 10 SELECTED MACHINE LEARNING RESOURCES
After long hours of curating content on data science resources, I have selected the top 10 machine learning resources. The list is full of valuable resources for learning machine learning. Details at http://www.claoudml.co/
I use Reddit daily, so I wanted to post the best machine learning and data science topics that I found interesting on the platform, and I'll be doing that each week.
Here you can find my favorite ones for the week of 1/6/2019 to 1/12/2019:
• What websites do you check regularly for data science/big data/ML/AI news and articles? https://lnkd.in/dzHGgZY
• Best book for deep learning for beginners https://lnkd.in/dpHUk6M
• MIT Deep Learning Basics: Introduction and Overview (Lex Fridman) https://lnkd.in/dYrMRfr
• PyTorch implementations of RL algorithms https://lnkd.in/dvamnAp
• 11 Great Articles About Natural Language Processing (NLP) https://lnkd.in/deG7VjR
Hope you find them helpful.
And I'd be glad if you shared your favorite topics or thoughts.
"The future depends on some graduate student who is deeply suspicious of everything I have said." - Geoffrey Hinton. There couldn't be a better quote to end a lecture on deep learning state of the art with: https://lnkd.in/e4C_Ejg
This is an excellent overview of state-of-the-art methods for NLP (natural language processing). An exciting area of research with wide applications.
https://lnkd.in/eKt-fKK
#analytics #machinelearning #datascience #nlp
How to fail the Data Science business case
2. Recruiting as a quick fix: "I am looking to recruit 150 Data Scientists in the next 12 months". I am not kidding: I did get that phone call, and it was about recruiting "Data-Scientists-as-consultants". Yes, expertise matters. Yes, there is a shortage of talent. And, yes, as companies struggle to build up data science capabilities they will likely be keen on consultancy services. However, the shortage of experts is real. Moreover, a senior data scientist likely prefers building products over project work, and impact with customers over project management meetings. Overall, I have seen quite a few attempts at using recruiting as a fix, often failing already at implementation, either because of an unrealistic "unicorn" wishlist or because the case couldn't be made as to why an experienced Data Scientist should join the company. Moreover, Data Scientists frequently report that they are interviewed by non-experts.
#interviews #datascientist #recruiting #machinelearning
Machine Learning is Fun! Part 3: Deep Learning and Convolutional Neural Networks
by Adam Geitgey: https://lnkd.in/gZ6sdPW
#artificialintelligence #deeplearning #machinelearning
My ML Tip of the Week on Overfitting
💡 What is overfitting?
Overfitting is when a model makes much better predictions on known data (data included in the training set) than on unknown data (data not included in the training set).
💡 How can you combat overfitting?
A few ways of combating overfitting are (a short code sketch follows the list):
• simplify the model by using fewer parameters
• simplify the model by changing the hyperparameters
• simplify the model by introducing regularization
• select a different model
• use more training data
• gather better-quality data
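A minimal sketch of spotting overfitting and reducing it with regularization, using scikit-learn on synthetic data (sizes and values are made up for illustration):

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 1))
y = np.sin(3 * X).ravel() + rng.normal(scale=0.3, size=60)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

# A very flexible model with (almost) no regularization tends to memorize the training set
overfit = make_pipeline(PolynomialFeatures(degree=15), Ridge(alpha=1e-10)).fit(X_train, y_train)
# The same model simplified via regularization (larger alpha)
regularized = make_pipeline(PolynomialFeatures(degree=15), Ridge(alpha=1.0)).fit(X_train, y_train)

for name, m in [("no regularization", overfit), ("with regularization", regularized)]:
    gap = m.score(X_train, y_train) - m.score(X_val, y_val)  # large train/validation gap = overfitting
    print(f"{name}: train minus validation R^2 gap = {gap:.2f}")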
#datascience #machinelearning