Coloring massive graphs is a notoriously difficult problem… Deep reinforcement learning to the rescue! Jiayi Huang et al. use a novel architecture (FastColorNet) to learn new state-of-the-art heuristics for graph coloring: https://lnkd.in/gcE8cWz #TechRec
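For context, here is the kind of classical baseline such learned heuristics are measured against: greedy coloring, which gives each node the smallest color not used by its neighbors. A minimal sketch of my own, not the paper's method:

def greedy_coloring(adj):
    # adj: dict mapping node -> iterable of neighboring nodes
    colors = {}
    for node in adj:
        taken = {colors[nbr] for nbr in adj[node] if nbr in colors}
        colors[node] = next(c for c in range(len(adj)) if c not in taken)
    return colors

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(greedy_coloring(triangle))  # {0: 0, 1: 1, 2: 2} -- a triangle needs 3 colors

Greedy is fast but order-dependent and far from optimal on hard instances, which is exactly the gap learned heuristics try to close.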
✴️ @AI_Python_EN
Statisticians often refer to observed variables and latent variables. Put simply, an observed variable is what we have in our data file. It may be useful as is but, in fields such as marketing research, may be an indicator of a latent variable, often called a factor, which we cannot directly observe. In a more familiar context, a high fever may be an indication of influenza, which our doctor cannot observe directly.
The distinction between the two has practical importance. For instance, if a survey respondent selects Convenience as one of the reasons she shops at Chain X, it could indicate many things: for example, that the store is physically nearby, that parking is easy, that checkout is fast, or that the store layout makes it easy to find what she wants.
In questionnaire design and when analyzing survey data, marketing researchers frequently confuse observed and latent variables - "Effective" is another example. We tend to zero in on specific items or statements when we should be thinking about the constructs they represent. Because of this, the questions we ask may be confusing or meaningless to respondents.
It's best to think about these issues when designing our research and planning our analytics.
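To make the distinction concrete, here is a minimal sketch of recovering a single latent factor from several observed items with scikit-learn; the items, loadings, and simulated data are hypothetical, purely for illustration:

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
convenience = rng.normal(size=200)  # the latent construct (unobservable in a real survey)
items = np.column_stack([           # observed indicators, each a noisy reflection of it
    0.8 * convenience + rng.normal(scale=0.5, size=200),  # e.g., "store is nearby"
    0.7 * convenience + rng.normal(scale=0.5, size=200),  # e.g., "parking is easy"
    0.6 * convenience + rng.normal(scale=0.5, size=200),  # e.g., "checkout is fast"
])
fa = FactorAnalysis(n_components=1).fit(items)
print(fa.components_)  # loadings: how strongly each observed item reflects the factor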
✴️ @AI_Python_EN
While the #AI hype machine is in overdrive, we are thrilled to see that #DeepLearning is making steady inroads into making the lives of #ophthalmologists easier.
We believe in this too, and we include practical, hands-on datasets in our own trainings so that more engineers and doctors can work together to diagnose such diseases!
Diabetic retinopathy (DR), a major microvascular complication of diabetes, has a significant impact on the world's health systems. Globally, the number of people with DR will grow from 126.6 million in 2010 to 191.0 million by 2030.
In the U.S. alone, more than 29 million people have diabetes and are at risk for diabetic retinopathy, a potentially blinding eye disease. People typically don't notice changes in their vision in the disease's early stages. But as it progresses, diabetic retinopathy usually causes vision loss that in many cases cannot be reversed. That's why it's so important that people with diabetes have yearly screenings.
Unfortunately, the accuracy of screenings can vary significantly. One study found a 49 percent error rate among internists, diabetologists, and medical residents. This is really bad!
Read Google's research https://lnkd.in/gDMW-fD
#artificialintelligence #diabeticretinopathy
✴️ @AI_Python_EN
Artificial intelligence: art's weird and wonderful new medium
By Francesca Gavin, How To Spend It: https://lnkd.in/eXEFkkQ
#art #artificialintelligence #deeplearning
#generativeadversarialnetworks
✴️ @AI_Python_EN
How do I learn the basics of machine learning in 30 days?
Harshit Ahluwalia updated his GitHub repo with a 30-day strategy for learning machine learning from the basics. This will be very useful for coaching sessions in internship programs: https://lnkd.in/fJaF-jj
#machinelearning #datascience #python #artificialintelligence #repository #100DaysOfMLCode
✴️ @AI_Python_EN
Access free GPU compute via Colab
By Google: https://lnkd.in/ds_j5nz
Colaboratory is a research tool for machine learning education and research. It's a Jupyter notebook environment that requires no setup to use.
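For instance, after switching the runtime to GPU (Runtime → Change runtime type), a quick sanity check might look like this; a minimal sketch, assuming the standard Colab runtime where TensorFlow comes preinstalled:

import tensorflow as tf

# Prints something like '/device:GPU:0' when a GPU runtime is active, '' otherwise.
print(tf.test.gpu_device_name())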
#ArtificialIntelligence #DeepLearning #MachineLearning
✴️ @AI_Python_EN
Reducing the Need for Labeled Data in Generative Adversarial Networks #DataScience #MachineLearning #ArtificialIntelligence
http://bit.ly/2FqeJiF
✴️ @AI_Python_EN
A PyTorch implementation of BigGAN with pretrained weights and conversion scripts
By Thomas Wolf: https://lnkd.in/e_Pph_T
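To give a flavor of usage, here is a sketch based on the entry points the repo's README advertises; the package name (pytorch_pretrained_biggan) and helper signatures are my reading of that README, so treat the repo itself as authoritative:

import torch
from pytorch_pretrained_biggan import BigGAN, one_hot_from_names, truncated_noise_sample

model = BigGAN.from_pretrained('biggan-deep-256')  # downloads the pretrained weights
truncation = 0.4
class_vec = torch.from_numpy(one_hot_from_names(['soap bubble'], batch_size=1))
noise_vec = torch.from_numpy(truncated_noise_sample(truncation=truncation, batch_size=1))
with torch.no_grad():
    images = model(noise_vec, class_vec, truncation)  # (1, 3, 256, 256), values in [-1, 1]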
#pytorch #biggan #computervision #artificialintelligence
#generativeadversarialnetwork
✴️ @AI_Python_EN
Interpretable machine learning is important whether you want to understand a simple linear regression model or more complex ones like neural networks. Understanding your models helps prevent bias, builds trust, and helps you build better models. If you haven't done it yet, start now! It's never too late! #deeplearning #machinelearning #explainableAI
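As one concrete starting point, here is a minimal sketch using permutation importance, assuming scikit-learn >= 0.22 (which ships sklearn.inspection.permutation_importance); the dataset and model are arbitrary stand-ins:

from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the held-out score drops:
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(data.feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f'{name}: {score:.4f}')

The same idea works with any fitted estimator, which makes it a handy model-agnostic first look before reaching for heavier tools.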
✴️ @AI_Python_EN
I find Mark Berliner's Bayesian Hierarchical Model (BHM) paradigm helpful.
At the top level is the data model, which is a probability model that specifies the distribution of the data given an underlying "true" process (sometimes called the hidden or latent process) and given some parameters that are needed to specify this distribution.
At the next level is the process model, which is a probability model that describes the hidden process (and, thus, its uncertainty) given some parameters. Note that at this level the model does not need to account for measurement uncertainty. The process model can then use science-based theoretical or empirical knowledge, which is often physical or mechanistic.
At the bottom level is the parameter model, where uncertainty about the parameters is modeled. From top to bottom, the levels of a BHM are:
1. Data model: [data|process, parameters]
2. Process model: [process|parameters]
3. Parameter model: [parameters]
Each of these levels could have sub-levels, for which conditional-probability models could be given. Ultimately, we are interested in the posterior distribution:
[process, parameters | data] ∝ [data | process, parameters] × [process | parameters] × [parameters]
Excerpted from Spatio-Temporal Statistics with R (Wikle et al.)
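As a toy instance of the three levels (my own illustration, not from the book), consider noisy measurements of a hidden mean:

\begin{aligned}
\text{Data model:} \quad & Y_i \mid \mu, \sigma^2 \sim \mathcal{N}(\mu, \sigma^2), \qquad i = 1, \dots, n \\
\text{Process model:} \quad & \mu \mid \theta, \tau^2 \sim \mathcal{N}(\theta, \tau^2) \\
\text{Parameter model:} \quad & \theta, \sigma^2, \tau^2 \sim \text{priors}
\end{aligned}

so the posterior follows the factorization above: [\mu, \theta, \sigma^2, \tau^2 \mid Y] \propto [Y \mid \mu, \sigma^2] \times [\mu \mid \theta, \tau^2] \times [\theta, \sigma^2, \tau^2].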
✴️ @AI_Python_EN
Everyone strives to build accurate, high-performing #datascience models. Check out these articles that list different ways to improve your model's accuracy and evaluate it:
8 Proven Ways for improving the "Accuracy" of a #MachineLearning Model - https://buff.ly/2TP4sGI
Improve Your Model Performance using Cross Validation (in Python and R) - https://buff.ly/2HGNV05 (see the short sketch after this list)
7 Important Model Evaluation Error Metrics Everyone should know - https://buff.ly/2HsYgxm
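As a taste of the cross-validation piece mentioned above, here is a minimal k-fold sketch with scikit-learn; the dataset and model are arbitrary placeholders:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# 5-fold CV: fit on 4 folds, score on the held-out fold, repeat 5 times.
scores = cross_val_score(model, X, y, cv=5)
print(f'accuracy: {scores.mean():.3f} +/- {scores.std():.3f}')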
✴️ @AI_Python_EN
Excellent new GAN contribution from Berkeley, NVIDIA, and MIT: Semantic Image Synthesis with Spatially-Adaptive Normalization (SPADE). Do check out the images and videos; it really is good.
"We propose spatially-adaptive normalization, a simple but effective layer for synthesizing photorealistic images given an input semantic layout. Previous methods directly feed the semantic layout as input to the network, which is then processed through stacks of convolution, normalization, and nonlinearity layers. We show that this is suboptimal because the normalization layers tend to wash away semantic information. To address the issue, we propose using the input layout for modulating the activations in normalization layers through a spatially-adaptive, learned transformation. Experiments on several challenging datasets demonstrate the advantage of the proposed method compared to existing approaches, regarding both visual fidelity and alignment with input layouts. Finally, our model allows users to easily control the style and content of synthesis results as well as create multi-modal results."
website: https://lnkd.in/fhi8Fmq
paper: https://lnkd.in/fv8HCGn
github (code coming soon): https://lnkd.in/fwPnMxv
#gan #deeplearning #artificialintelligence
✴️ @AI_Python_EN
Too many spelling errors in your dataset?
Peter Norvig (Research Director at Google, previously Director of Search Quality) revolutionized search-engine quality by showing how to correct spelling errors via splits, deletes, transposes, replaces, and inserts. You can see the comprehensive guide (with Python code) on his website: https://lnkd.in/fEb3v2a
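The heart of that guide is generating every candidate string one edit away from a word; here is a condensed sketch of that step (the full corrector on his site also ranks candidates with a word-frequency model):

def edits1(word):
    # All strings one edit away: splits drive deletes, transposes, replaces, inserts.
    letters = 'abcdefghijklmnopqrstuvwxyz'
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

print(len(edits1('speling')))  # hundreds of candidates from a single 7-letter word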
#python #datasets #codes #statistician
✴️ @AI_Python_EN
Ranking Tweets with TensorFlow
Blog by Yi Zhuang, Arvind Thiagarajan, and Tim Sweeney: https://lnkd.in/eiNseET
#MachineLearning #TensorFlow #Twitter
✴️ @AI_Python_EN
The evolution of art through the lens of deep convolutional networks
"The Shape of Art History in the Eyes of the Machine", Elgammal et al.: https://lnkd.in/dgjmqYc
#art #artificialintelligence #deeplearning
✴️ @AI_Python_EN
This is a fun application of the superres method from fastdotai lesson 7 - turning line drawings into shaded pictures! https://forums.fast.ai/t/share-your-work-here/27676/1204
✴️ @AI_Python_EN
Check out the new blog post on Coconet 🥥, the #ml behind the Bach Doodle that's live now! It's a flexible infilling model that generates counterpoint through rewriting. http://g.co/magenta/coconet
✴️ @AI_Python_EN
My PhD thesis, Neural Transfer Learning for Natural Language Processing, is now online. It includes a general review of #transferlearning in #NLP as well as new material that I hope will be useful to some. http://ruder.io/thesis/
✴️ @AI_Python_EN