“Learning the Depths of Moving People by Watching Frozen People” (http://goo.gle/2x4tEuQ), recipient of a #CVPR2019 Best Paper Honorable Mention Award. Learn more about the paper at
http://goo.gle/2ZuZtJt
✴️ @AI_Python_EN
Rigorously testing machine learning models using meta-learning. We show how a Neural Process-based meta-learning formulation allows us to efficiently search for hard examples.
From DeepMind: Interested in adversarial tests and reinforcement learning? We combine meta-learning in a general probabilistic paradigm to detect failures, helping us build robust algorithms. Includes results on recommender systems and control: http://arxiv.org/abs/1903.11907
✴️ @AI_Python_EN
Slides: "Three Challenging Research Avenues (in language and vision)" from my VQA workshop #cvpr2019 talk.
https://yoavartzi.com/slides/2019_06_17_vqa_workshop.pdf
Includes a quick summary of some of our recent vision+language work and resources
✴️ @AI_Python_EN
"One-shot Voice Conversion by Separating Speaker and Content Representations with Instance Normalization" has been accepted to Interspeech 2019.
By combining a VAE with AdaIN, our model can perform one-shot VC from a single source utterance and a single target-speaker utterance.
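For readers unfamiliar with AdaIN, here is a minimal sketch of the operation itself (shapes and names are illustrative, not the paper's actual model): the source's content features are normalized per channel, then re-scaled and shifted with the per-channel statistics of the reference utterance's features.

```python
import numpy as np

def adain(content, reference, eps=1e-5):
    """Adaptive instance normalization over (channels, time) feature maps."""
    c_mean = content.mean(axis=1, keepdims=True)
    c_std = content.std(axis=1, keepdims=True)
    r_mean = reference.mean(axis=1, keepdims=True)
    r_std = reference.std(axis=1, keepdims=True)
    # normalize the content, then adopt the reference's statistics
    return r_std * (content - c_mean) / (c_std + eps) + r_mean

content = np.random.randn(4, 100)            # source features
reference = 3.0 * np.random.randn(4, 80) + 2.0  # target-speaker features
out = adain(content, reference)
# out keeps the content's temporal structure, but its per-channel
# mean/std now match the reference's.
```

The time axes need not match: only the reference's per-channel statistics are used, which is what makes the one-shot setting possible.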
✴️ @AI_Python_EN
Text similarity
Are these two sentences similar?
1) President greets the press in Chicago
2) Obama speaks in Illinois
— Jaccard
— Cosine
— WMD
#naturallanguageprocessing
https://medium.com/@adriensieg/text-similarities-da019229c894
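As a quick illustration of the first two measures on these exact sentences (WMD would additionally require word embeddings), a minimal sketch:

```python
from collections import Counter
import math

s1 = "President greets the press in Chicago"
s2 = "Obama speaks in Illinois"

def tokens(s):
    return s.lower().split()

def jaccard(a, b):
    sa, sb = set(tokens(a)), set(tokens(b))
    return len(sa & sb) / len(sa | sb)

def cosine(a, b):
    # cosine similarity of bag-of-words count vectors
    ca, cb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb)

print(jaccard(s1, s2))  # 1/9 ≈ 0.111 — only "in" is shared
print(cosine(s1, s2))   # ≈ 0.204
```

Both surface measures score the pair as nearly dissimilar despite the obvious semantic overlap, which is exactly the motivation for embedding-based measures like WMD.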
✴️ @AI_Python_EN
Learn Tensorflow 1: The Hello World of Machine Learning
https://codelabs.developers.google.com/codelabs/tensorflow-lab1-helloworld/#0
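The codelab's first exercise fits y = 2x − 1 with a one-neuron Keras model. As a rough sketch of the same idea without the TensorFlow dependency (this is not the codelab's actual code), plain gradient descent on the same six points recovers the line:

```python
# Data from the "hello world" fit: y = 2x - 1
xs = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
ys = [-3.0, -1.0, 1.0, 3.0, 5.0, 7.0]

w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    # gradients of mean squared error with respect to w and b
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * dw
    b -= lr * db

print(round(w, 2), round(b, 2))  # → 2.0 -1.0
```

This is exactly what the single `Dense(units=1)` layer learns in the codelab, just with the optimizer written out by hand.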
Privacy-Preserving Deep Visual Recognition: An Adversarial Learning Framework and A New Dataset
We have recently introduced PA-HMDB51 (https://github.com/htwang14/PA-HMDB51), the very first human action video dataset with potential privacy leak attributes annotated. This dataset is collected and maintained by the VITA group at the CSE department of Texas A&M University.
The dataset contains 592 videos selected from the HMDB51 dataset [2]. For each video, we provide frame-level annotations of five privacy attributes: skin color, gender, face, nudity, and relationship. All annotations are provided in JSON format. Visualized examples can be found in the attachment.
The dataset aims to support and promote research on protecting visual privacy information in smart camera-based applications. A manuscript [1] introduces the dataset and related algorithms that we have developed for this topic.
We hope you will find this dataset useful,
Haotao Wang,
Texas A&M University
Use graph neural networks on assembly code and memory states to predict program behavior: https://arxiv.org/abs/1906.07181
✴️ @AI_Python_EN
Why a neural fake news generator, like our Grover model, is the best defense against neural fake news. Among other results, Grover detects GPT-2-generated fake news with over 96% accuracy in a zero-shot setting.
https://medium.com/ai2-blog/counteracting-neural-disinformation-with-grover-6cf6690d463b
✴️ @AI_Python_EN
Researchers from Facebook AI and NYU Langone Health propose a new approach to MRI reconstruction that restores a high fidelity image from partially observed measurements in less time and with fewer errors. #CVPR2019
https://ai.facebook.com/blog/accelerating-mri-reconstruction/
✴️ @AI_Python_EN
Best Paper Award at #CVPR2019. Main idea: seeing around corners to detect non-line-of-sight (NLOS) objects using Fermat paths, a new theory of how NLOS photons follow specific geometric paths.
http://imaging.cs.cmu.edu/fermat_paths/assets/cvpr2019.pdf
✴️ @AI_Python_EN
Researchers at Facebook, Princeton, and UC Berkeley have developed a method that uses AI to find and propose the most efficient design for neural networks based on how and where they'll run, such as on mobile processors. #CVPR2019
https://ai.facebook.com/blog/platform-aware-ai-to-design-neural-networks/
✴️ @AI_Python_EN
Released at #CVPR2019, MediaPipe is Google's new framework for media processing pipelines, combining model-based inference via TensorFlow with traditional CV tasks like optical flow, pose tracking, and more. Used in existing projects like Motion Stills.
https://sites.google.com/view/perception-cv4arvr/mediapipe
✴️ @AI_Python_EN
Presenting some work today on how humans and machines perform when doing collaborative visual search at #CVPR2019! A topic of interest for radiologists, surveillance operators and potentially semi-autonomous driving!
✴️ @AI_Python_EN
Check out Scene Representation Networks:
https://youtu.be/6vMEBWD8O20
A new continuous 3D-aware scene representation that reconstructs appearance and geometry from posed images alone, generalizes across scenes for single-shot reconstruction, and naturally handles non-rigid deformation!
https://arxiv.org/abs/1906.01618
#computervision
✴️ @AI_Python_EN
When “ImageNet: A Large-Scale Hierarchical Image Database” was published in 2009, it showed how large-scale datasets could transform neural network algorithms. Now its author and HAI co-director Dr. Fei-Fei Li has won the #CVPR2019 retrospective most impactful paper award. #AI
✴️ @AI_Python_EN