Humans actually learn objects without supervision: after seeing an object for a while and learning it, they pick up its label as soon as they hear its name.
The best machine vision models of recent years, especially those introduced after AlexNet in 2012, are supervised. They perform very well, but they need a large amount of labeled data.
If we could somehow train a model on unlabeled data first, and then, in a short supervised phase with only a small amount of labeled data, teach it the names of the objects it has already learned, it would be a major breakthrough in model training. We could then simply train a model on millions of hours of video, for example from YouTube, and only afterwards start teaching it the names of the objects it has learned. This is the same process we observe in humans! A child sees and learns about different objects from birth, but assigns a name to a previously learned object or category after hearing that name only once or a few times.
The Next Frontier in AI: Unsupervised Learning
#Yann_LeCun
Director of AI Research at Facebook, Professor of Computer Science, New York University
November 18, 2016
https://www.youtube.com/watch?v=IbjF5VjniVE
Abstract
The rapid progress of #AI in the last few years is largely the result of advances in #deep_learning and neural nets, combined with the availability of large datasets and fast GPUs. We now have systems that can #recognize images with an accuracy that rivals that of humans. This will lead to revolutions in several domains such as autonomous transportation and #medical #image understanding. But all of these systems currently use #supervised learning in which the machine is trained with inputs labeled by humans. The challenge of the next several years is to let machines learn from raw, #unlabeled_data, such as #video or #text. This is known as #unsupervised learning. AI systems today do not possess "common sense", which humans and animals acquire by observing the world, acting in it, and understanding the physical constraints of it. Some of us see unsupervised learning as the key towards machines with common sense. Approaches to unsupervised learning will be reviewed. This presentation assumes some familiarity with the basic concepts of deep learning.
In this talk, Yann LeCun calls Generative Adversarial Networks (GANs) the most important idea in machine learning of the past 20 years: a method that enables models to learn without supervision.
The major advancements in Deep Learning in 2016
🔗https://tryolabs.com/blog/2016/12/06/major-advancements-deep-learning-2016/
Generative Adversarial Nets
https://arxiv.org/pdf/1406.2661v1.pdf
This approach is also well suited to problems with too few labeled examples.
#autoencoder #unsupervised #unsupervised_learning #Generative #Generative_Models
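A minimal sketch of the two losses behind the GAN minimax objective from the paper above, in plain Python with toy logits. The function names are mine, and the generator loss is the non-saturating variant; this is an illustration of the objective, not the paper's code:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def d_loss(real_logits, fake_logits):
    # The discriminator maximizes log D(x) + log(1 - D(G(z)));
    # equivalently it minimizes the negated average below.
    real_term = sum(math.log(sigmoid(l)) for l in real_logits)
    fake_term = sum(math.log(1.0 - sigmoid(l)) for l in fake_logits)
    return -(real_term + fake_term) / (len(real_logits) + len(fake_logits))

def g_loss(fake_logits):
    # Non-saturating generator loss: maximize log D(G(z)),
    # i.e. fool the discriminator into calling fakes real.
    return -sum(math.log(sigmoid(l)) for l in fake_logits) / len(fake_logits)
```

A perfect discriminator (high logits on real, low on fake) drives `d_loss` toward zero, while the generator improves by pushing the discriminator's logits on fakes up.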
✏️Title:
#Unsupervised #Representation Learning with #Deep #Convolutional #Generative #Adversarial Networks
✏️abstract:
In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.
🔗https://arxiv.org/pdf/1511.06434v2.pdf
"Under review as a conference paper at ICLR 2016"
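One of the DCGAN architectural constraints is replacing pooling with strided (transposed) convolutions. A small sketch of how the generator's spatial resolution grows from a 4×4 seed to a 64×64 image under the common stride-2, kernel-4, padding-1 setup (the layer count and sizes below are the usual 64×64 configuration, assumed here for illustration):

```python
def deconv_out(size, stride=2, kernel=4, pad=1):
    # Output spatial size of a transposed convolution:
    # out = (in - 1) * stride - 2 * pad + kernel
    return (size - 1) * stride - 2 * pad + kernel

def dcgan_generator_sizes(start=4, layers=4):
    # Each stride-2 transposed conv doubles the feature-map size:
    # 4 -> 8 -> 16 -> 32 -> 64.
    sizes = [start]
    for _ in range(layers):
        sizes.append(deconv_out(sizes[-1]))
    return sizes
```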
Learning from Simulated and Unsupervised Images through Adversarial Training (Apple Inc.)
An interesting paper from Apple!
( Submitted for review to a conference on Nov 15, 2016)
✏️Title:
Learning from Simulated and Unsupervised Images through Adversarial Training
✏️abstract:
With recent progress in graphics, it has become more tractable to train models on #synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using #unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an #adversarial network similar to #Generative Adversarial Networks (#GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.
🔗https://arxiv.org/abs/1612.07828v1
🔗https://arxiv.org/pdf/1612.07828v1.pdf
#unlabeled_data #unsupervised #unsupervised_learning #Generative #Generative_Models
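A toy sketch of the refiner's objective described in the abstract: an adversarial term plus the 'self-regularization' term that keeps refined images close to the synthetic input so the simulator's annotations stay valid. The scalar "pixels" and the weight `lam` are illustrative assumptions, not values from the paper:

```python
import math

def refiner_loss(refined, synthetic, d_real_probs, lam=0.5):
    # Adversarial term: the refiner wants the discriminator to
    # assign high "real" probability to its refined images.
    adv = -sum(math.log(p) for p in d_real_probs) / len(d_real_probs)
    # Self-regularization: L1 distance between refined output and
    # synthetic input, preserving annotations such as gaze direction.
    reg = sum(abs(r - s) for r, s in zip(refined, synthetic)) / len(refined)
    return adv + lam * reg
```

A refiner that fools the discriminator while barely changing the synthetic input scores lower than one that drifts far from it and fails to fool the discriminator.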
Deep Learning 2016: The Year in Review
http://www.deeplearningweekly.com/blog/deep-learning-2016-the-year-in-review
✔️ #Unsupervised and #Reinforcement Learning
✔️ Deep Reinforcement Learning
✔️ #Generative Models
✔️ Continued Openness in AI development
✔️ Partnerships & Acquisitions
✔️ Hardware & Chips
(by Jan Bussieck on December 31, 2016)
In order to understand trends in the field, I find it helpful to think of developments in #deep_learning as being driven by three major frontiers that limit the success of #artificial_intelligence in general and deep learning in particular. Firstly, there is the available #computing power and #infrastructure, such as fast #GPUs, cloud service providers (have you checked out Amazon's new #EC2 P2 instance?) and tools (#Tensorflow, #Torch, #Keras etc.); secondly, there is the amount and quality of the training data; and thirdly, the algorithms (#CNN, #LSTM, #SGD) using the training data and running on the hardware. Invariably, behind every new development or advancement lies an expansion of one of these frontiers.
...
A new #paper from Facebook on unsupervised training of a network for text translation!
Unsupervised Machine Translation Using Monolingual Corpora Only
In earlier approaches to training a recurrent network for translation, we had to show the network many pairs of equivalent sentences in the source and target languages. This paper presents a method that needs no such parallel sentences at all: the network can be trained without supervision to translate from one language to another.
The method takes sentences from different languages and maps them into a shared space. By learning to reconstruct each language from this shared feature space, the model effectively learns to translate from the source language to the target language without supervision or labels.
https://arxiv.org/abs/1711.00043
#deep_learning #unsupervised #NLP #Translator #Translation
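The back-translation idea behind this setup can be caricatured in a few lines: use the current model to translate monolingual text, then treat the (pseudo-translation, original) pair as supervised data for the reverse direction. Here the "model" is just a toy word table; everything below is an illustrative sketch, not the paper's actual training loop (which also uses denoising autoencoding and an adversarial term on the shared space):

```python
# Toy "models": word-for-word dictionaries standing in for the
# learned translation networks.
fr_to_en = {"chat": "cat", "chien": "dog"}
en_to_fr = {v: k for k, v in fr_to_en.items()}

def translate(sentence, table):
    # Word-by-word lookup; unknown words pass through unchanged.
    return [table.get(w, w) for w in sentence]

def back_translation_pairs(mono_fr):
    # Translate monolingual French into (noisy) English with the
    # current model, then treat (English, original French) as a
    # pseudo-parallel pair for training the EN->FR direction --
    # no human translations needed.
    pairs = []
    for sent in mono_fr:
        pseudo_en = translate(sent, fr_to_en)
        pairs.append((pseudo_en, sent))
    return pairs
```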
#tutorial #paper
New DeepMind Unsupervised Image Model Challenges AlexNet
https://medium.com/syncedreview/new-deepmind-unsupervised-image-model-challenges-alexnet-d658ef92ab1e
⚪️Contrastive Predictive Coding (CPC) outperforms the fully-supervised AlexNet model in Top-1 and Top-5 accuracy on ImageNet.
⚪️CPC was introduced by DeepMind in 2018. The unsupervised learning approach uses a powerful autoregressive model to extract representations of high-dimensional data to predict future samples.
⚪️Given 13 labeled images per class, DeepMind’s CPC model outperformed state-of-the-art semi-supervised methods by 10 percent in Top-5 accuracy, and supervised methods by 20 percent.
#Unsupervised #DeepMind
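At the heart of CPC is the InfoNCE loss: given a context representation, the model must pick out the true future sample from a set of negatives. A plain-Python sketch with dot-product scores (the vectors and the scoring function are toy assumptions, not DeepMind's code):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def info_nce(context, positive, negatives):
    # Softmax cross-entropy where the "correct class" is the true
    # future sample and the other classes are negative samples.
    logits = [dot(context, positive)] + [dot(context, n) for n in negatives]
    m = max(logits)                        # subtract max for stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))
```

The loss is low when the context scores its true future sample higher than the negatives, which is exactly what forces the encoder to extract predictive representations.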
#paper #source_code
SimCLR - A Simple Framework for Contrastive Learning of Visual Representations https://arxiv.org/abs/2002.05709
Code (TensorFlow):
https://github.com/google-research/simclr
Paper:
https://arxiv.org/pdf/2002.05709.pdf
#simclr #contrastive_learning #representation_learning #self_supervised_learning #unsupervised_learning #computer_vision #metric_learning
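SimCLR's NT-Xent loss is the same contrastive idea applied to two augmented views of one image, using cosine similarity with a temperature. A toy sketch for a single positive pair (the embeddings, temperature value, and batch layout are illustrative assumptions):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nt_xent(z_i, z_j, negatives, tau=0.5):
    # NT-Xent for one positive pair (z_i, z_j): pull the two augmented
    # views of the same image together, push away other in-batch
    # embeddings, with temperature tau sharpening the softmax.
    logits = [cosine(z_i, z_j) / tau] + [cosine(z_i, n) / tau for n in negatives]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))
```

In the real method this is averaged over all positive pairs in the batch, and the embeddings come from an encoder plus a projection head.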
#tutorial #paper
The SimCLR paper and source code (in TensorFlow) were introduced here earlier...
This method is a new approach to self-supervised and semi-supervised learning that can learn good image representations without needing labeled data!
It also reaches highly competitive accuracy after being fine-tuned on only 1% of ImageNet's labeled data.
Read more in Google's blog post:
Advancing Self-Supervised and Semi-Supervised Learning with SimCLR
https://ai.googleblog.com/2020/04/advancing-self-supervised-and-semi.html
#simclr #contrastive_learning #representation_learning #self_supervised_learning #unsupervised_learning #computer_vision #metric_learning