Tensorflow(@CVision)
Learning from Simulated and Unsupervised Images through Adversarial Training (Apple Inc.)
An interesting paper from Apple!
(Submitted for review to a conference on Nov 15, 2016)
✏️Title:
Learning from Simulated and Unsupervised Images through Adversarial Training
✏️Abstract:
With recent progress in graphics, it has become more tractable to train models on #synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using #unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an #adversarial network similar to #Generative Adversarial Networks (#GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.
🔗https://arxiv.org/abs/1612.07828v1
🔗https://arxiv.org/pdf/1612.07828v1.pdf
#unlabeled_data #unsupervised #unsupervised_learning #Generative #Generative_Models
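The abstract's key modification (i) combines an adversarial term with a 'self-regularization' term that keeps the refined image close to the synthetic input, so the simulator's annotations remain valid. A minimal numpy sketch of such a combined refiner objective (function names, the L1 choice of self-regularization, and the weighting are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def refiner_loss(refined, synthetic, disc_real_prob, reg_weight=0.5):
    """Sketch of an S+U-style refiner objective: fool the discriminator
    (adversarial term) while staying close to the synthetic input
    (self-regularization term), so annotations carry over."""
    # Adversarial term: -log D(refined), where disc_real_prob is the
    # discriminator's probability that the refined image is real.
    adv = -np.log(disc_real_prob + 1e-8)
    # Self-regularization: mean L1 distance to the synthetic input.
    self_reg = np.abs(refined - synthetic).mean()
    return adv + reg_weight * self_reg
```

A refined image that drifts far from its synthetic source pays a larger penalty, which is what preserves the simulator's labels during refinement.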
Deep Learning without Backpropagation
[Tutorial: DeepMind's Synthetic Gradients]
https://iamtrask.github.io/2017/03/21/synthetic-gradients/?utm_content=buffer6d5d4&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
✔️Part 1: #Synthetic Gradients Overview
✔️Part 2: Using Synthetic Gradients
✔️Part 3: Generating Synthetic Gradients
✔️Part 4: A Baseline Neural Network
✔️?
✔️Part 6: Synthetic Gradients Based on Layer Output
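The tutorial's core idea can be sketched in a few lines of numpy: a small auxiliary model predicts a layer's gradient directly from that layer's output, so the layer can update without waiting for the full backward pass; the auxiliary model is itself trained whenever the true gradient eventually arrives. This is a simplified illustration (the linear gradient model, the toy loss, and all names are assumptions for the sketch, not DeepMind's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# A layer's output (batch of activations) and the true gradient that
# full backpropagation would eventually deliver.
h = rng.normal(size=(32, 8))
true_grad = 2.0 * h            # e.g. gradient of loss = sum(h**2)

# Synthetic gradient model: a linear map from layer output to a
# gradient estimate, regressed toward the true gradient over time.
M = np.zeros((8, 8))
lr = 0.1
for _ in range(500):
    pred_grad = h @ M                  # decoupled gradient estimate
    err = pred_grad - true_grad        # compared once the true grad is known
    M -= lr * (h.T @ err) / len(h)     # regress M toward the true mapping

# Once trained, the synthetic gradient tracks the real one closely,
# so the layer could update immediately, decoupled from backprop.
res = np.abs(h @ M - true_grad).mean()
```

The decoupling is the point: each layer consumes a locally predicted gradient instead of blocking on the layers above it.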
Fully automatic, easy generation of labeled synthetic Persian and English text datasets.
A Persian and English text dataset generator with selectable fonts and styles, random text or text drawn from a dictionary or a source dataset of your choice, and a configurable number of generated samples.
With this tool you can quickly and easily generate the dataset needed to train deep text-processing networks.
A synthetic data generator for text recognition with latin, arabic and persian text support
https://github.com/amirmgh1375/TextRecognitionDataGenerator
#tutorial #source_code #dataset
#synthetic_data #text_recognition #ctc
#ocr
#dataset_generator
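The appeal of such a generator is that the ground-truth label comes for free: each sample is a text string paired with rendering parameters (font, style, size), which a tool like the linked TextRecognitionDataGenerator then rasterizes into an image. A stdlib-only sketch of that sampling step (all function and parameter names here are illustrative, not the repository's actual API):

```python
import random
import string

def generate_samples(n, wordlist=None, fonts=("FontA", "FontB"), seed=0):
    """Sketch of labeled synthetic-text sampling: each sample pairs a
    ground-truth string with style parameters; a real generator would
    render these into training images for OCR/CTC models."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        if wordlist:
            # Draw from a user-supplied dictionary (e.g. Persian words).
            text = rng.choice(wordlist)
        else:
            # Otherwise fall back to random Latin strings.
            text = "".join(
                rng.choices(string.ascii_lowercase, k=rng.randint(3, 8))
            )
        samples.append({
            "label": text,              # ground truth comes for free
            "font": rng.choice(fonts),  # style variation
            "size": rng.randint(24, 48),
        })
    return samples
```

Because the label is known at generation time, no manual annotation is needed, which is exactly what makes synthetic data attractive for text recognition.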