Build Graph Nets in Tensorflow https://arxiv.org/abs/1806.01261
Graph Nets is DeepMind's library for building graph networks in TensorFlow and Sonnet.
Contact graph-nets@google.com for comments and questions.
What are graph networks?
A graph network takes a graph as input and returns a graph as output. The input graph has edge- (E ), node- (V ), and global-level (u) attributes. The output graph has the same structure, but updated attributes. Graph networks are part of the broader family of "graph neural networks" (Scarselli et al., 2009).
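The edge/node/global update scheme described above can be sketched in plain NumPy. This is only an illustration of the data flow, not the Graph Nets API: the library builds each update function from Sonnet neural-network modules, whereas the simple additive "updates" below are placeholders.

```python
import numpy as np

def gn_block(E, V, u, senders, receivers):
    """One graph-network pass: update edges, then nodes, then the global.

    E: (n_e, d) edge attributes; V: (n_v, d) node attributes; u: (d,) global.
    senders/receivers: index arrays giving each edge's endpoint nodes.
    The additive update functions are toy placeholders for neural nets.
    """
    # Edge update: each edge sees its sender, receiver, and the global attribute.
    E_new = E + V[senders] + V[receivers] + u
    # Node update: each node aggregates its incoming updated edges.
    agg = np.zeros_like(V)
    np.add.at(agg, receivers, E_new)
    V_new = V + agg + u
    # Global update: aggregate over all updated edges and nodes.
    u_new = u + E_new.mean(axis=0) + V_new.mean(axis=0)
    return E_new, V_new, u_new
```

The output graph keeps the input's structure (same edges and nodes), only the attributes change, which is what lets these blocks be stacked.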
https://lnkd.in/dpG9e7g
✴️ @AI_Python_EN
Relating sentence representations in deep neural networks with those encoded by the brain
In their investigation, the researchers considered several neural network architectures for word representation, including two recently proposed models called ELMo and BERT.
They compared how these networks process particular sentences with data collected from human subjects using magnetoencephalography (MEG), a functional neuroimaging technique for mapping brain activity, as they read the same sentences.
To begin with, they decided to use sentences with a simple syntax and basic semantics, such as "the bone was eaten by the dog."
Read more below, or check the GitHub link:
https://lnkd.in/dvbiJAz
#deeplearning
✴️ @AI_Python_EN
Classifying Legendary Pokemon Birds 🐦🐦🐦
👉👉👉 Try it yourself:
https://lnkd.in/eYhKNAh 👈👈👈
After only the second fastai "Practical Deep Learning for Coders" class I was able to complete an end-to-end deep learning project! 🤖🤖🤖
The main goal is to classify an image as one of the Legendary Pokemon Birds - Articuno, Moltres or Zapdos - or an alternative class that includes everything else. Needless to say, my model sometimes gets confused about the alternative class, since not many diverse images were fed into it...
Source code:
https://lnkd.in/eRfkBx8
Forked from:
https://lnkd.in/e_k4nqN
#ai #ml #dl #deeplearning #cnn #python
✴️ @AI_Python_EN
One of the best resources for #PyTorch based #pretrained CNN models.
https://lnkd.in/eY87mFf
✴️ @AI_Python_EN
Have you heard of the "R-Transformer", a Recurrent Neural Network Enhanced Transformer?
Recurrent Neural Networks have long been the dominant choice for sequence modeling. However, they suffer from two issues: they struggle to capture very long-term dependencies, and their sequential computation cannot be parallelized.
Therefore, many non-recurrent sequence models that are built on convolution and attention operations have been proposed recently.
Here the authors propose the R-Transformer, which enjoys the advantages of both RNNs and the multi-head attention mechanism while avoiding their respective drawbacks.
The proposed model can effectively capture both local structures and global long-term dependencies in sequences without using position embeddings. The authors evaluated the R-Transformer through extensive experiments on data from a wide range of domains; the empirical results show that it outperforms state-of-the-art methods by a large margin on most tasks.
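The two-stage idea (a local recurrence over short windows, then global attention over the resulting states) can be illustrated with a toy NumPy sketch. Note the "local RNN" below is replaced by a simple trailing-window mean, and the attention has no learned projections; this only shows the data flow, not the paper's actual layer.

```python
import numpy as np

def local_then_global(X, window=3):
    """Toy sketch of the R-Transformer data flow.

    X: (T, d) sequence of token vectors.
    Stage 1 ('LocalRNN' stand-in): each position summarises its trailing window.
    Stage 2: plain scaled dot-product self-attention mixes states globally.
    No position embeddings are needed: the local stage already injects order.
    """
    T, d = X.shape
    # Local stage: trailing-window mean as a stand-in for a small RNN.
    local = np.stack([X[max(0, t - window + 1):t + 1].mean(axis=0)
                      for t in range(T)])
    # Global stage: single-head self-attention over the local states.
    scores = local @ local.T / np.sqrt(d)
    A = np.exp(scores - scores.max(axis=1, keepdims=True))  # stable softmax
    A /= A.sum(axis=1, keepdims=True)
    return A @ local
```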
Github code: https://lnkd.in/dpFckix
#research #algorithms #machinelearning #deeplearning #rnn
✴️ @AI_Python_EN
On the “steerability” of generative adversarial networks
pdf: https://arxiv.org/pdf/1907.07171.pdf
abs: https://arxiv.org/abs/1907.07171
github: https://github.com/ali-design/gan_steerability
✴️ @AI_Python_EN
Sinkhorn iteration for Optimal Transport with TF. Code:
https://colab.research.google.com/github/znah/notebooks/blob/master/mini_sinkhorn.ipynb
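For reference, the Sinkhorn iteration itself is only a few lines. The sketch below is in plain NumPy rather than TensorFlow (the linked notebook uses TF): alternately rescale the Gibbs kernel to match the two marginals, yielding an entropy-regularised transport plan.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=200):
    """Entropy-regularised optimal transport via Sinkhorn iterations.

    a, b: source/target marginals (each sums to 1); C: cost matrix.
    Returns the transport plan P whose row sums approach a and
    column sums approach b as the iteration converges.
    """
    K = np.exp(-C / eps)            # Gibbs kernel from the cost matrix
    v = np.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)             # rescale to match the row marginals
        v = b / (K.T @ u)           # rescale to match the column marginals
    return u[:, None] * K * v[None, :]
```

Smaller `eps` gives plans closer to unregularised optimal transport but needs more iterations (and can underflow), which is why implementations often switch to log-domain updates.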
✴️ @AI_Python_EN
How weak supervision can train deep learning models using unlabeled cardiac MRI sequences/videos
Key idea: use Snorkel to transform cardiologist domain knowledge into labeling functions (simple rules that capture information about our task) to take advantage of the massive scale of unlabeled imaging data available in the biobank.
This lets us rapidly build training sets for classification tasks such as bicuspid aortic valve, where a Snorkel-based approach improved our end model F1 performance by 64%.
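To make the labeling-function idea concrete, here is a minimal pure-Python sketch. The rules and field names (`"bicuspid"` keyword, `valve_area` threshold) are hypothetical examples, not the paper's actual functions, and the majority vote is a simple stand-in for Snorkel's generative label model, which learns each function's accuracy.

```python
ABSTAIN = -1  # a labeling function may decline to vote

def lf_keyword(report):
    """Hypothetical rule: fire positive if the report text mentions 'bicuspid'."""
    return 1 if "bicuspid" in report["text"].lower() else ABSTAIN

def lf_measurement(report):
    """Hypothetical rule on a numeric field; abstain when it is missing."""
    area = report.get("valve_area")
    if area is None:
        return ABSTAIN
    return 1 if area < 1.2 else 0

def majority_label(report, lfs):
    """Combine labeling-function votes by majority, ignoring abstains."""
    votes = [v for v in (lf(report) for lf in lfs) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)
```

Applied over a large unlabeled corpus, these noisy programmatic labels become the training set for the end model, which is how the approach scales without hand annotation.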
https://www.nature.com/articles/s41467-019-11012-3
✴️ @AI_Python_EN
How to ship ML in practice:
1/ Write a simple rule based solution to cover 80% of use cases
2/ Write a simple ML algorithm to cover 95% of cases
3/ Write a filtering algorithm to route inputs to the correct method
4/ Add monitoring
5/ Detect drift
...
24/ Deep Learning
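Steps 1-3 above can be sketched in a few lines. Everything here is hypothetical (a toy support-ticket router with made-up rules and a stand-in "model"); it only illustrates the routing pattern of trying cheap rules first and falling back to ML.

```python
def rule_based(text):
    """Step 1: a hand-written rule covering the common case (hypothetical)."""
    if "refund" in text:
        return "billing"
    return None  # the rule does not apply

def simple_model(text):
    """Step 2: stand-in for a trained classifier (hypothetical)."""
    return "billing" if "charge" in text else "general"

def route(text):
    """Step 3: send the input to whichever method can handle it,
    tagging the answer with its source so monitoring (step 4) can
    track how often each path fires and watch for drift (step 5)."""
    answer = rule_based(text)
    if answer is not None:
        return ("rules", answer)
    return ("model", simple_model(text))
```

Logging the `("rules", ...)` vs `("model", ...)` tag is the hook for steps 4 and 5: a shift in the routing mix or in per-path accuracy is an early drift signal.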
✴️ @AI_Python_EN
'Artificial intelligence' fit to monitor volcanoes: Platform uses 'machine learning' to analyse satellite data
https://www.sciencedaily.com/releases/2019/07/190715103313.htm
✴️ @AI_Python_EN
EasyGen, a visual programming language for text data pipelines for neural nets.
Colab: https://drive.google.com/open?id=1XNiOuNtMnItl5CPGvRjEvj9C78nDuvXj
Github: https://github.com/markriedl/easygen …
Here’s a program to scrape the web for generating Star Trek/romance book titles.
✴️ @AI_Python_EN
NER and Information Extraction Webinar for Akbank
https://www.youtube.com/watch?v=K2q1Z71EV14&feature=share
This is our new paper at KDD, proposing a new dataset of US traffic records with several attributes. We also did some data analysis to show its potential for further use in AI and data mining. Hope to see you at KDD '19!
https://www.youtube.com/watch?v=FhWO_uTf2Ho&t=2s
✴️ @AI_Python_EN
Short and Long-term Pattern Discovery Over Large-Scale Geo-Spatiotemporal Data
Authors:
Sobhan Moosavi, Mohammad Hossein Samavatian, Arnab Nandi, Srinivasan Parthasarathy and Rajiv Ramnath
More on https://www.kdd.org/kdd2019/
For those who have a passion for:
1. Artificial Intelligence
2. Machine Learning
3. Deep Learning
4. Data Science
5. Computer vision
6. Image Processing
7. Cognitive Neuroscience
8. Research Papers and Related Courses
https://t.me/DeepLearningML