Tensorflow(@CVision)
Vision Reconstruction.mp4
Reconstructing the film scenes a person has watched by processing activity in the brain's visual areas.
Could dreams one day be recorded with this technology and played back as a film?!
UC Berkeley researchers have succeeded in #decoding and #reconstructing people's dynamic #visual experiences.
The #brain activity recorded while subjects viewed a set of film clips was used to create a computer program that learned to associate visual patterns in the movie with the corresponding brain activity. The brain activity evoked by a second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject. Using the new computer model, researchers were able to decode brain signals generated by the films and then reconstruct those moving images.
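A rough sketch of the decoding idea (hypothetical names, numpy only; not the study's actual code): fit a linear encoding model from video features to voxel responses, then score a large clip library by how well each clip's predicted response matches the observed one, and average the best matches into a reconstruction.

```python
import numpy as np

# Hypothetical sketch of encoding-model decoding; not the authors' code.

def fit_encoding_model(X_train, Y_train, ridge=1.0):
    """Ridge regression mapping video features to voxel responses.

    X_train: (n_samples, n_features) visual/motion features of training movies
    Y_train: (n_samples, n_voxels)  fMRI responses recorded for those movies
    """
    n_features = X_train.shape[1]
    A = X_train.T @ X_train + ridge * np.eye(n_features)
    return np.linalg.solve(A, X_train.T @ Y_train)  # W: (n_features, n_voxels)

def reconstruct(observed, library_features, library_frames, W, top_k=100):
    """Score a large prior library (e.g. random YouTube clips) against one
    observed brain response and average the top-matching clips' frames."""
    predicted = library_features @ W                      # (n_library, n_voxels)
    p = predicted - predicted.mean(axis=1, keepdims=True)
    o = observed - observed.mean()
    scores = (p @ o) / (np.linalg.norm(p, axis=1) * np.linalg.norm(o) + 1e-9)
    best = np.argsort(scores)[-top_k:]                    # best-matching clips
    w = scores[best] / scores[best].sum()                 # normalized weights
    return np.tensordot(w, library_frames[best], axes=1)  # blurry averaged frames
```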
Eventually, practical applications of the technology could include a better understanding of what goes on in the minds of people who cannot communicate verbally, such as stroke victims, coma patients and people with neurodegenerative diseases. It may also lay the groundwork for brain-machine devices that would allow people with cerebral palsy or paralysis, for example, to guide computers with their minds.
The lead author of the study, published in Current Biology on September 22, 2011, is Shinji Nishimoto, a post-doctoral researcher in the laboratory of Professor Jack Gallant, neuroscientist and coauthor of the study. Other coauthors include Thomas Naselaris with UC Berkeley's Helen Wills #Neuroscience Institute, An T. Vu with UC Berkeley's Joint Graduate Group in Bioengineering, and Yuval Benjamini and Professor Bin Yu with the UC Berkeley Department of Statistics.
more:
http://news.berkeley.edu/2011/09/22/brain-movies/
CS231N - Convolutional Neural Networks for Visual Recognition
Enrollment Dates: February 12 to March 24, 2017
Topics include:
- End-to-end models
- Image classification, localization and detection
- Implementation, training and debugging
- Learning algorithms, such as backpropagation
- Long Short-Term Memory (LSTM)
- Recurrent Neural Networks (RNN)
- Supervised and unsupervised learning
(see the code sketch after this post)
http://scpd.stanford.edu/search/publicCourseSearchDetails.do?method=load&courseId=42262144
#course #tutorial #stanford #cnn #Convolutional #Visual_Recognition
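Since this channel is about TensorFlow, here is a minimal sketch of the kind of model the course covers: a small convolutional image classifier in tf.keras (layer sizes are illustrative, not taken from the course).

```python
import tensorflow as tf

# A small convolutional image classifier in the spirit of what CS231n covers.
# Layer sizes are illustrative, not taken from the course materials.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),                 # e.g. CIFAR-10-sized images
    tf.keras.layers.Conv2D(32, 3, activation='relu'),  # learn local visual patterns
    tf.keras.layers.MaxPooling2D(),                    # downsample spatially
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10),                         # class scores (logits)
])

model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)
# model.fit(x_train, y_train, epochs=5) would then train the whole network
# end to end via backpropagation.
```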
Interesting! Upload a photo and then ask the computer questions about it:
Online #Visual_Dialog demo:
http://demo.visualdialog.org/
#CVPR_2017 #NLP #VQA #deep_learning
#Visual_Dialog
(related to https://t.me/cvision/195)
CVPR 2017 paper for this demo:
https://arxiv.org/abs/1611.08669
Code for this demo:
🔗 http://github.com/cloud-cv/visual-chatbot
Torch code for training and evaluating Visual Dialog models:
🔗 http://github.com/batra-mlp-lab/visdial
#encoder_decoder #deep_learning #NLP #VQA
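On the #encoder_decoder tag: Visual Dialog models encode the image, the question, and the dialog history into a single state, then decode or rank candidate answers. A minimal late-fusion sketch in tf.keras (all names and sizes are hypothetical; this is not the linked repos' code):

```python
import tensorflow as tf

# Hypothetical late-fusion encoder-decoder sketch for Visual Dialog.
# Shapes and sizes are made up; not the code from the linked repositories.
img_feat = tf.keras.Input(shape=(4096,))                # precomputed CNN image features
question = tf.keras.Input(shape=(20,), dtype='int32')   # question token ids
history  = tf.keras.Input(shape=(100,), dtype='int32')  # dialog-history token ids

embed = tf.keras.layers.Embedding(10000, 300)           # shared word embeddings
q_enc = tf.keras.layers.LSTM(512)(embed(question))      # encode the question
h_enc = tf.keras.layers.LSTM(512)(embed(history))       # encode the history

# Late fusion: concatenate image, question and history into one dialog state.
state = tf.keras.layers.Concatenate()([img_feat, q_enc, h_enc])
state = tf.keras.layers.Dense(512, activation='tanh')(state)

# Discriminative decoding: rank 100 candidate answer encodings by dot product
# with the dialog state (a generative decoder would emit the answer word by word).
cand_enc = tf.keras.Input(shape=(100, 512))
scores = tf.keras.layers.Lambda(
    lambda t: tf.squeeze(tf.matmul(t[0], t[1][:, :, None]), axis=-1)
)([cand_enc, state])                                    # (batch, 100) answer scores

model = tf.keras.Model([img_feat, question, history, cand_enc], scores)
```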
Videos from the Spring 2017 term of #Stanford University's course on convolutional neural networks and deep learning for computer vision
Lecture Collection | Convolutional Neural Networks for Visual Recognition (Spring 2017)
https://www.youtube.com/playlist?list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv
Related to: https://t.me/cvision/164
#course #tutorial #stanford #cnn #Convolutional #Visual_Recognition