Tensorflow(@CVision)
Humans actually learn objects without supervision: after seeing an object for a while and learning it, they pick up its label as soon as they hear its name. The best machine-vision models of recent years, especially those introduced after AlexNet in 2012…
In this talk, Yann LeCun described Generative Adversarial Networks as the most important idea in machine learning of the past 20 years: a method that enables models to learn without supervision.
The major advancements in Deep Learning in 2016
🔗https://tryolabs.com/blog/2016/12/06/major-advancements-deep-learning-2016/
Generative Adversarial Nets
https://arxiv.org/pdf/1406.2661v1.pdf
This approach is also well suited to problems with few or insufficient labeled examples.
#autoencoder #unsupervised #unsupervised_learning #Generative #Generative_Models
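To make the adversarial idea above concrete, here is a minimal GAN training step in PyTorch. It is only an illustrative sketch, not code from the paper: the tiny generator/discriminator, latent size, and learning rates are placeholder choices.

```python
import torch
import torch.nn as nn

# Hypothetical toy generator/discriminator for 2-D data (not from the paper).
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):
    batch = real.size(0)

    # Discriminator: push real samples toward 1 and generated samples toward 0.
    fake = G(torch.randn(batch, 16)).detach()
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to fool the discriminator (generated samples -> 1).
    fake = G(torch.randn(batch, 16))
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example: "real" data drawn from a shifted Gaussian.
print(train_step(torch.randn(64, 2) + 3.0))
```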
✏️Title:
#Unsupervised #Representation Learning with #Deep #Convolutional #Generative #Adversarial Networks
✏️Abstract:
In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.
🔗https://arxiv.org/pdf/1511.06434v2.pdf
"Under review as a conference paper at ICLR 2016"
Learning from Simulated and Unsupervised Images through Adversarial Training (Apple Inc.)
An interesting paper from Apple!
(Submitted for review to a conference on Nov 15, 2016)
✏️Title:
Learning from Simulated and Unsupervised Images through Adversarial Training
✏️Abstract:
With recent progress in graphics, it has become more tractable to train models on #synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using #unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an #adversarial network similar to #Generative Adversarial Networks (#GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.
🔗https://arxiv.org/abs/1612.07828v1
🔗https://arxiv.org/pdf/1612.07828v1.pdf
#unlabeled_data #unsupervised #unsupervised_learning #Generative #Generative_Models
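As a rough reading of the abstract, the refiner's objective combines a local adversarial term with a self-regularization term that keeps the refined output close to the synthetic input. The sketch below assumes a patch-wise discriminator logit map and an illustrative weighting `lam`; it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def refiner_loss(refined, synthetic, local_logits, lam=0.5):
    """Sketch of an S+U refiner objective: a local adversarial term (the
    discriminator classifies image patches; `local_logits` is its patch-wise
    logit map for the refined image, where >0 means "real") plus an L1
    self-regularization term. `lam` is an illustrative weight, not the paper's value."""
    adv = F.binary_cross_entropy_with_logits(local_logits, torch.ones_like(local_logits))
    self_reg = F.l1_loss(refined, synthetic)
    return adv + lam * self_reg

# Example with dummy tensors (arbitrary shapes): image batch and patch logits.
refined = torch.rand(4, 1, 35, 55)
synthetic = torch.rand(4, 1, 35, 55)
local_logits = torch.randn(4, 1, 8, 13)
print(refiner_loss(refined, synthetic, local_logits))
```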
Deep Learning 2016: The Year in Review
http://www.deeplearningweekly.com/blog/deep-learning-2016-the-year-in-review
✔️ #Unsupervised and #Reinforcement Learning
✔️ Deep Reinforcement Learning
✔️ #Generative Models
✔️ Continued Openness in AI development
✔️ Partnerships & Acquisitions
✔️ Hardware & Chips
(by Jan Bussieck on December 31, 2016)
In order to understand trends in the field, I find it helpful to think of developments in #deep_learning as being driven by three major frontiers that limit the success of #artificial_intelligence in general and deep learning in particular. Firstly, there is the available #computing power and #infrastructure, such as fast #GPUs, cloud service providers (have you checked out Amazon's new #EC2 P2 instance?) and tools (#Tensorflow, #Torch, #Keras, etc.); secondly, there is the amount and quality of the training data; and thirdly, the algorithms (#CNN, #LSTM, #SGD) using the training data and running on the hardware. Invariably, behind every new development or advancement lies an expansion of one of these frontiers.
...
Generative Adversarial Denoising Autoencoder for #Face Completion
pic: http://www.cc.gatech.edu/~hays/7476/projects/Avery_Wenchen/images/one.png
🔗http://www.cc.gatech.edu/~hays/7476/projects/Avery_Wenchen/
#GAN
#Generative #adversarial #Generative_Models #Autoencoder
The Alien Style of Deep Learning #Generative Design
https://medium.com/intuitionmachine/the-alien-look-of-deep-learning-generative-design-5c5f871f7d10
[Dec 25, 2016, 3min read]
Alien, strange, yet effective designs from deep learning!
Use generative models to get interesting ideas for industrial design, or even home decoration!
The system has also managed to construct an optimized LSTM network (a meta-model application).
The approach lets designers provide specific design goals as input, including functional requirements, material type, fabrication method, performance criteria, and cost constraints.
After generating a variety of designs and searching among the generated candidates, the system outputs the best proposals for the stated requirements (a rough sketch of this generate-and-search loop follows below).
#Alien_Style #GAN
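The generate-and-search loop described above could look roughly like the hypothetical sketch below: sample many candidates from a trained generative model, score them against the user's requirements, and keep the top few. `generator` and `score_fn` are placeholders, not part of any real system.

```python
import torch

def propose_designs(generator, score_fn, n_candidates=1000, top_k=5, z_dim=64):
    """Hypothetical generate-then-search loop: sample candidate designs from a
    trained generative model, score each against the user's requirements
    (performance, material, cost, ...), and return the best ones."""
    z = torch.randn(n_candidates, z_dim)
    candidates = generator(z)                       # (n_candidates, ...) design tensors
    scores = torch.stack([score_fn(c) for c in candidates])
    best = torch.topk(scores, top_k).indices
    return candidates[best], scores[best]

# Toy example: "designs" are 8-D vectors, the score penalizes deviation from a target.
toy_generator = torch.nn.Linear(64, 8)
target = torch.ones(8)
best, scores = propose_designs(toy_generator, lambda d: -torch.norm(d - target))
print(best.shape, scores)
```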
Source code for completing missing regions of an image with the #TensorFlow framework, along with a detailed write-up
Image #Completion with Deep Learning in #TensorFlow
🔗blog post: http://bamos.github.io/2016/08/09/deep-completion/
🔗source code: https://github.com/bamos/dcgan-completion.tensorflow
Related material:
https://t.me/cvision/75
https://arxiv.org/abs/1607.07539
#GAN #Generative_Adversarial #DCGAN
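As I understand the linked blog post and arXiv:1607.07539, completion keeps a pre-trained DCGAN fixed and optimizes the latent code z so that G(z) matches the known pixels (contextual loss) while still scoring as real under D (perceptual loss). A hedged PyTorch sketch, with illustrative hyper-parameters and toy stand-in networks:

```python
import torch

def complete(G, D, corrupted, mask, steps=1000, lam=0.1, lr=0.01, z_dim=100):
    """Optimize z (with G and D frozen) so that G(z) agrees with the known
    pixels and stays on the learned image manifold; hyper-parameters and the
    non-saturating perceptual term are illustrative choices."""
    z = torch.randn(corrupted.size(0), z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        gen = G(z)
        contextual = torch.mean(torch.abs(mask * (gen - corrupted)))
        perceptual = -torch.mean(torch.log(torch.sigmoid(D(gen)) + 1e-8))
        loss = contextual + lam * perceptual
        opt.zero_grad(); loss.backward(); opt.step()
    # Keep the known pixels, fill only the missing region with the generator's output.
    return mask * corrupted + (1 - mask) * G(z).detach()

# Toy example with stand-in networks (a real use would plug in a trained DCGAN).
G = torch.nn.Sequential(torch.nn.Linear(100, 64 * 64), torch.nn.Tanh())
D = torch.nn.Linear(64 * 64, 1)
img = torch.rand(1, 64 * 64)
mask = (torch.rand(1, 64 * 64) > 0.25).float()   # 1 = known pixel, 0 = missing
print(complete(G, D, img, mask, steps=10).shape)
```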
img: http://bit.ly/2oQJgwo
The FaceApp application, which has recently become very popular, uses deep learning, specifically a deep generative convolutional neural network.
More information:
https://techcrunch.com/2017/02/08/faceapp-uses-neural-networks-for-photorealistic-selfie-tweaks/
FaceApp download links:
Android version: http://bit.ly/2pqO4f9
iOS version: http://apple.co/2qejHWq
#deep_learning #generative #convolutional #neural_network
Multi-style #Generative Network for #Real_time Transfer
Paper:
🔗 https://arxiv.org/abs/1703.06953
Code:
🔗 https://github.com/zhanghang1989/MSG-Net
#Style_Transfer #deep_learning #GAN #MSG_Net #CNN
Generating Videos with Scene Dynamics
video: http://bit.ly/2q6THM9
Turning a still image into video.
An AI that, given only a single still image, can output a few-second clip containing motion...
In this approach the network was trained without supervision on two years' worth of video collected from Flickr; the network then learned a mapping from still images to few-second clips.
🔗 http://web.mit.edu/vondrick/tinyvideo/
#generative #adversarial #GAN #deep_learning
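Purely as an illustration of mapping a latent code to a short spatio-temporal clip (the paper itself uses a more elaborate two-stream generator), a toy 3-D transposed-convolution generator might look like this; all sizes are arbitrary.

```python
import torch
import torch.nn as nn

class TinyVideoGenerator(nn.Module):
    """Illustrative only: latent code -> short clip (channels, frames, height, width)."""
    def __init__(self, z_dim=100, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(z_dim, base * 4, (2, 4, 4), 1, 0, bias=False),  # -> 2x4x4
            nn.BatchNorm3d(base * 4), nn.ReLU(True),
            nn.ConvTranspose3d(base * 4, base * 2, 4, 2, 1, bias=False),       # -> 4x8x8
            nn.BatchNorm3d(base * 2), nn.ReLU(True),
            nn.ConvTranspose3d(base * 2, base, 4, 2, 1, bias=False),           # -> 8x16x16
            nn.BatchNorm3d(base), nn.ReLU(True),
            nn.ConvTranspose3d(base, 3, 4, 2, 1, bias=False),                  # -> 16x32x32
            nn.Tanh(),
        )

    def forward(self, z):  # z: (N, z_dim) -> (N, 3, 16, 32, 32) clip
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

print(TinyVideoGenerator()(torch.randn(2, 100)).shape)
```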
#paper
✔️Learning a mapping from image to image:
In this work, conditional adversarial networks (GANs) are trained to find a mapping from an input image to an output image...
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
[UC Berkeley] pic: http://bit.ly/2s2OTsm
🔗abstract:
https://arxiv.org/abs/1703.10593
🔗Paper:
https://arxiv.org/pdf/1703.10593.pdf
🔗Project Page:
https://junyanz.github.io/CycleGAN/
🔗codes:
https://github.com/junyanz/CycleGAN
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
Our goal is to learn a mapping G: X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss.
Related to the paper:
https://t.me/cvision/171
#CycleGAN #GAN #Generative #CNN #Convolutional #deep_learning #adversarial #Generative_Models #Generative
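A compact sketch of the CycleGAN generator objective summarized above: an adversarial term so that G(X) is indistinguishable from Y (and F(Y) from X), plus a cycle-consistency term F(G(x)) ≈ x and G(F(y)) ≈ y. The least-squares adversarial form and λ = 10 follow the paper; the toy networks in the example are placeholders.

```python
import torch
import torch.nn.functional as F

def cycle_gan_generator_loss(G, F_inv, D_Y, D_X, real_x, real_y, lam=10.0):
    """Generator-side CycleGAN objective: least-squares adversarial terms for
    both directions plus L1 cycle-consistency, weighted by `lam`."""
    fake_y, fake_x = G(real_x), F_inv(real_y)
    adv = torch.mean((D_Y(fake_y) - 1) ** 2) + torch.mean((D_X(fake_x) - 1) ** 2)
    cyc = F.l1_loss(F_inv(fake_y), real_x) + F.l1_loss(G(fake_x), real_y)
    return adv + lam * cyc

# Toy example with linear "generators" and "discriminators" on flattened images.
G = torch.nn.Linear(256, 256)
F_inv = torch.nn.Linear(256, 256)
D_X = torch.nn.Linear(256, 1)
D_Y = torch.nn.Linear(256, 1)
x, y = torch.rand(4, 256), torch.rand(4, 256)
print(cycle_gan_generator_loss(G, F_inv, D_Y, D_X, x, y))
```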
Turning horses into zebras!
Creating an image-to-image mapping with AI...
More information:
https://t.me/cvision/214
#CycleGAN #GAN #Generative #CNN #Convolutional #deep_learning #adversarial #generative
A #tutorial on Microsoft's site
Learning Image to Image Translation with CycleGANs
[Published June 12, 2017]
https://www.microsoft.com/reallifecode/2017/06/12/learning-image-image-translation-cyclegans/
Related to https://t.me/cvision/214
#CycleGAN #GAN #Generative #CNN #Convolutional #deep_learning #adversarial #Generative_Models #Generative
#paper #source_code
StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
+ PyTorch Implementation of StarGAN - #CVPR_2018
🔗abstract:
https://arxiv.org/abs/1711.09020
🔗Paper:
https://arxiv.org/pdf/1711.09020.pdf
🔗Code:
https://github.com/yunjey/StarGAN
🎬Video Demo:
https://www.youtube.com/watch?v=EYjdLppmERE
#GAN #stargan #pytorch #generative #adversarial
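The core StarGAN trick, as described in the paper, is to condition a single generator on a target-domain label by spatially replicating the label vector and concatenating it to the image channels. A small sketch (the one-layer "generator" is only a placeholder):

```python
import torch
import torch.nn as nn

def concat_domain_label(img, label):
    """Replicate the target-domain label over the spatial grid and concatenate
    it to the image channels, so one generator can translate to many domains."""
    n, _, h, w = img.shape
    label_map = label.view(n, -1, 1, 1).expand(n, label.size(1), h, w)
    return torch.cat([img, label_map], dim=1)

# Toy single-conv "generator" over image + 5 domain channels.
toy_g = nn.Conv2d(3 + 5, 3, kernel_size=3, padding=1)
img = torch.rand(2, 3, 128, 128)
target_domain = torch.tensor([[1., 0., 0., 0., 0.],
                              [0., 0., 1., 0., 0.]])
out = toy_g(concat_domain_label(img, target_domain))
print(out.shape)  # torch.Size([2, 3, 128, 128])
```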
StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
+ Code
More information:
https://t.me/cvision/646
#GAN #stargan #pytorch #generative #adversarial
#demo #paper #source_code
An online demo of face transformation with generative networks
https://blog.openai.com/glow/
#GAN #GLOW #generative
#paper #source_code #dataset
https://t.me/cvision/668
Code for reproducing results in "Glow: Generative Flow with Invertible 1x1 Convolutions"
🔗abstract:
https://arxiv.org/abs/1807.03039
🔗Paper:
https://arxiv.org/pdf/1807.03039
🔗source code:
https://github.com/openai/glow
🔗Project Page + Online #Demo:
https://blog.openai.com/glow/
#Tensorflow #Horovod #dataset #gan #GLOW #generative
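A minimal sketch of the paper's invertible 1x1 convolution: each pixel's channel vector is multiplied by a learned square matrix W, contributing h·w·log|det W| to the flow's log-determinant (the paper also describes an LU-decomposed parameterization). This is an illustration, not the official TensorFlow code.

```python
import torch
import torch.nn as nn

class Invertible1x1Conv(nn.Module):
    """Channel-mixing 1x1 convolution with an exact inverse and log-determinant."""
    def __init__(self, channels):
        super().__init__()
        # Initialize W as a random rotation so it starts out invertible.
        w, _ = torch.linalg.qr(torch.randn(channels, channels))
        self.W = nn.Parameter(w)

    def forward(self, x):                      # x: (N, C, H, W)
        _, c, h, w = x.shape
        y = torch.einsum('ij,njhw->nihw', self.W, x)
        logdet = h * w * torch.slogdet(self.W)[1]   # h*w*log|det W|
        return y, logdet

    def inverse(self, y):
        return torch.einsum('ij,njhw->nihw', torch.inverse(self.W), y)

layer = Invertible1x1Conv(8)
x = torch.randn(2, 8, 16, 16)
y, logdet = layer(x)
print(torch.allclose(layer.inverse(y), x, atol=1e-5), logdet)
```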