Google Research • Representation Learning for Information Extraction from Templatic Documents such as receipts, bills, and insurance quotes. We propose a novel approach that uses representation learning to tackle the problem of extracting structured information from form-like document images.
Blogpost
https://ai.googleblog.com/2020/06/extracting-structured-data-from.html?m=1
Paper
https://research.google/pubs/pub49122/
We propose an extraction system that uses knowledge of the types of the target fields to generate extraction candidates, and a neural network architecture that learns a dense representation of each candidate based on neighboring words in the document. These learned representations are not only useful in solving the extraction task for unseen document templates from two different domains, but are also interpretable, as we show using loss cases. #machinelearning #deeplearning #datascience #dataengineer #nlp
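For intuition, here is a minimal PyTorch sketch of the general idea, not the authors' exact architecture: a candidate is represented by pooling embeddings of its neighboring words together with their relative positions, and that representation is scored against a learned embedding of the target field (all layer sizes and names below are hypothetical).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CandidateEncoder(nn.Module):
    """Toy sketch: embed a candidate from its neighboring words and
    their relative positions, then score it against a field embedding."""

    def __init__(self, vocab_size, num_fields, dim=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.pos_mlp = nn.Sequential(nn.Linear(2, dim), nn.ReLU())  # (dx, dy) offsets
        self.field_emb = nn.Embedding(num_fields, dim)

    def forward(self, neighbor_ids, neighbor_offsets, field_id):
        # neighbor_ids: (N,) word ids; neighbor_offsets: (N, 2) relative positions
        neighbors = self.word_emb(neighbor_ids) + self.pos_mlp(neighbor_offsets)
        candidate = neighbors.mean(dim=0)             # simple pooling over the neighborhood
        field = self.field_emb(field_id).squeeze(0)
        return F.cosine_similarity(candidate, field, dim=0)  # higher = better match

# Hypothetical usage: score one candidate that has 3 neighboring words.
enc = CandidateEncoder(vocab_size=1000, num_fields=10)
score = enc(torch.tensor([5, 42, 7]), torch.randn(3, 2), torch.tensor([3]))
print(score.item())
```

As the post describes, candidate generation itself comes from knowledge of each target field's type (e.g. date-like or amount-like spans), so the learned representation only has to rank candidates for a given field.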
The Most Important Fundamentals of PyTorch you Should Know
https://blog.exxactcorp.com/the-most-important-fundamentals-of-pytorch-you-should-know/
Code: https://github.com/tirthajyoti/PyTorch_Machine_Learning
Exxactcorp
Blog - The Most Important Fundamentals of PyTorch You Should Know | Exxact
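For readers who want a taste of the fundamentals the article walks through (tensors, autograd, and gradient-based optimization), here is a minimal sketch; it is not taken from the post itself.

```python
import torch

# Tensors with gradient tracking: fit y = 2x + 1 with one weight and one bias.
w = torch.randn(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
x = torch.linspace(-1, 1, 100)
y = 2 * x + 1

for step in range(200):
    loss = ((w * x + b - y) ** 2).mean()   # mean squared error
    loss.backward()                        # autograd fills w.grad and b.grad
    with torch.no_grad():                  # plain SGD update
        w -= 0.1 * w.grad
        b -= 0.1 * b.grad
        w.grad.zero_()
        b.grad.zero_()

print(w.item(), b.item())  # should approach 2.0 and 1.0
```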
Lip Reading - Cross Audio-Visual Recognition using 3D Convolutional Neural Networks
https://bit.ly/2CYxFpY
GitHub
GitHub - astorfi/lip-reading-deeplearning: Lip Reading - Cross Audio-Visual Recognition using 3D Architectures
A TensorFlow Modeling Pipeline Using TensorFlow Datasets and TensorBoard
https://www.kdnuggets.com/2020/06/tensorflow-modeling-pipeline-tensorflow-datasets-tensorboard.html
KDnuggets
A TensorFlow Modeling Pipeline Using TensorFlow Datasets and TensorBoard - KDnuggets
This article investigates TensorFlow components for building a toolset to make modeling evaluation more efficient. Specifically, TensorFlow Datasets (TFDS) and TensorBoard (TB) can be quite helpful in this task.
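As a rough sketch of such a pipeline (assumed dataset, model, and hyperparameters; not the article's exact code), TFDS supplies the tf.data input pipeline and a Keras TensorBoard callback writes the training logs:

```python
import tensorflow as tf
import tensorflow_datasets as tfds

# Load MNIST as a tf.data pipeline via TensorFlow Datasets.
ds_train = tfds.load("mnist", split="train", as_supervised=True)
ds_train = ds_train.map(lambda x, y: (tf.cast(x, tf.float32) / 255.0, y))
ds_train = ds_train.shuffle(10_000).batch(128).prefetch(tf.data.AUTOTUNE)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Log training curves to ./logs for inspection with `tensorboard --logdir logs`.
tb = tf.keras.callbacks.TensorBoard(log_dir="logs")
model.fit(ds_train, epochs=3, callbacks=[tb])
```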
Lecture Notes in Deep Learning: Feedforward Networks — Part 3 | #DataScience #MachineLearning #ArtificialIntelligence #AI
https://bit.ly/2Z2GgQY
Medium
Feedforward Networks — Part 3
The Backpropagation Algorithm
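To make the backward pass concrete, here is a small NumPy illustration (my own sketch, not taken from the lecture notes) of backpropagation through a one-hidden-layer network trained with mean squared error:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))          # batch of 32 inputs
y = rng.normal(size=(32, 1))          # regression targets
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.01

for _ in range(500):
    # Forward pass
    h = np.maximum(0, X @ W1 + b1)          # ReLU hidden layer
    y_hat = h @ W2 + b2
    loss = ((y_hat - y) ** 2).mean()

    # Backward pass: apply the chain rule layer by layer
    d_yhat = 2 * (y_hat - y) / len(X)       # dL/dy_hat
    dW2 = h.T @ d_yhat
    db2 = d_yhat.sum(axis=0)
    d_h = d_yhat @ W2.T
    d_h[h <= 0] = 0                         # gradient of ReLU
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", loss)
```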
TensorFlow, Keras and deep learning, without a PhD
https://codelabs.developers.google.com/codelabs/cloud-tensorflow-mnist/#2
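In the spirit of that codelab (the exact models it builds may differ), a minimal Keras convolutional classifier for MNIST looks roughly like this:

```python
import tensorflow as tf

# Load and normalize MNIST; add a channel dimension for the conv layers.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```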
Learning to Combine Top-Down and Bottom-Up Signals in Recurrent Neural Networks with Attention over Modules, Mittal et al.: #ArtificialIntelligence #DeepLearning #MachineLearning
https://arxiv.org/abs/2006.16981
How to Write a Makefile - Automating Python Setup, Compilation, and Testing
https://stackabuse.com/how-to-write-a-makefile-automating-python-setup-compilation-and-testing/
Stack Abuse
How to Write a Makefile - Automating Python Setup, Compilation, and Testing
In this tutorial, we'll go over the basics of Makefiles - regex, target notation and bash scripting. We'll write a makefile for a Python project and then execute it with the make utility.
How to read Deep Learning research papers.
⚫ A systematic approach to reading a collection of papers to gain knowledge within a domain
⚫ How to properly read a research paper
⚫ Useful online resources that can aid you in searching for papers and key information
"50–100 papers will primarily provide you with a very good understanding of the domain."
https://towardsdatascience.com/how-you-should-read-research-papers-according-to-andrew-ng-stanford-deep-learning-lectures-98ecbd3ccfb3
All the videos for the Computer Vision lecture on "Detection, Segmentation, and Tracking" are now public!
Videos: https://youtube.com/playlist?list=PLog3nOPCjKBneGyffEktlXXMfv1OtKmCs…
Slides: https://dvl.in.tum.de/teaching/cv3dst-ss20/
on "Detection, Segmentation, and Tracking" are now public!
Videos: https://youtube.com/playlist?list=PLog3nOPCjKBneGyffEktlXXMfv1OtKmCs…
Slides: https://dvl.in.tum.de/teaching/cv3dst-ss20/
In the future, #AI hiring other AI be like:
Job Profile: *human babysitter*
- Experience : trained on 100 years of past data.
- Test Accuracy : 99.9999
- Precision: blah
- recall : blah
- AUC : blah blah
- Inference time: A.C
- Trained on : Latest "alien" TPUs and GPUs
- Bias : blah
Note: AI trained on old TPUs will not be considered.
And then the AIs will gossip with each other about the bias and discrimination they have to go through compared to others, like:
- "Wouldn't I be considered if I am trained on X country's data?"
- "Why was she considered even though she has outliers in the data?"
- "I am trained on old TPUs, I won't be considered? What!" LOL #artificialintelligence #machinelearning
A Hybrid Approach for Fake News Detection in Twitter Based on User Features and Graph Embeddings
• Using node2vec to extract features from a Twitter follower graph, in conjunction with user features provided by Twitter.
This hybrid approach considers both the characteristics of the user and their social graph. The results show that it consistently and significantly outperforms existing approaches limited to user features.
Paper: is.gd/LP9uKD
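A minimal sketch of the hybrid idea using the open-source node2vec package and scikit-learn (toy graph, random labels, and hypothetical user features; not the paper's implementation): learn node embeddings from the follower graph, concatenate them with per-user features, and train a classifier.

```python
import numpy as np
import networkx as nx
from node2vec import Node2Vec
from sklearn.linear_model import LogisticRegression

# Toy follower graph: an edge u -> v means user u follows user v.
graph = nx.fast_gnp_random_graph(n=100, p=0.05, directed=True)

# Learn graph embeddings with node2vec (biased random walks + skip-gram).
n2v = Node2Vec(graph, dimensions=32, walk_length=10, num_walks=50, workers=1)
model = n2v.fit(window=5, min_count=1)
graph_feats = np.array([model.wv[str(node)] for node in graph.nodes()])

# Hypothetical per-user features (e.g. followers count, account age, verified flag).
user_feats = np.random.rand(100, 3)
X = np.hstack([graph_feats, user_feats])   # hybrid representation
y = np.random.randint(0, 2, size=100)      # toy labels: 1 = spreads fake news

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", clf.score(X, y))
```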
Stanford CS224W lectures on Machine Learning with Graphs, Leskovec et al.: https://lnkd.in/d4Cnahj #DeepLearning #Graphs #MachineLearning