Forwarded from AI, Python, Cognitive Neuroscience (Farzad)
Deep Learning:
http://course.fast.ai
NLP:
http://bit.ly/fastai-nlp
Comp Linear Algebra:
http://github.com/fastai/numerical-linear-algebra
Bias, Ethics, & AI:
http://fast.ai/topics/#ai-in-society
Debunk Pipeline Myth:
http://bit.ly/not-pipeline
AI Needs You:
http://bit.ly/rachel-TEDx
Ethics Center:
http://bit.ly/USF-CADE
❇️ @AI_Python_EN
www.fast.ai
new fast.ai course: A Code-First Introduction to Natural Language Processing
fast.ai's newest course is Code-First Intro to NLP. It covers a blend of traditional NLP techniques, recent deep learning approaches, and urgent ethical issues.
Forwarded from AI, Python, Cognitive Neuroscience (Farzad)
Simple, Scalable Adaptation for Neural Machine Translation
Fine-tuning pre-trained Neural Machine Translation (NMT) models is the dominant approach for adapting to new languages and domains. However, fine-tuning requires adapting and maintaining a separate model for each target task. Researchers from Google propose a simple yet efficient approach for adaptation in #NMT: injecting tiny task-specific adapter layers into a pre-trained model. These lightweight adapters, just a small fraction of the original model size, adapt the model to multiple individual tasks simultaneously.
It can presumably be applied not only to #NMT but also to many other #NLP, #NLU and #NLG tasks.
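A minimal sketch of the adapter idea, assuming a PyTorch-style bottleneck module (hypothetical illustration, not the paper's code):
```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Tiny bottleneck adapter added after a frozen pre-trained sub-layer."""
    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.down = nn.Linear(d_model, bottleneck)   # project down to a tiny dim
        self.up = nn.Linear(bottleneck, d_model)     # project back up

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Residual connection: with near-zero adapter weights the pre-trained
        # representation passes through unchanged.
        return hidden + self.up(torch.relu(self.down(self.norm(hidden))))

# Usage: freeze the base model and train only one small adapter per task.
adapter = Adapter(d_model=512)
x = torch.randn(2, 10, 512)              # (batch, sequence, d_model)
print(adapter(x).shape)                  # torch.Size([2, 10, 512])
```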
Paper: https://arxiv.org/pdf/1909.08478.pdf
#BERT
❇️ @AI_Python_EN
The most important data science libraries in #Python
This chart was compiled from a survey of GitHub and published by ActiveWizards.
@ai_python
Convolutional #NeuralNetworks have become a foundational network architecture for numerous deep learning-based #ComputerVision tasks. Here, Heartbeat contributor Brian Mwangi explores their evolution in this excellent review of the research.
https://bit.ly/32fkz0p
Medium
A Research Guide to Convolution Neural Networks
Examining the advancements of CNN architectures over the past few years
Bayesian Optimization Meets Riemannian Manifolds in Robot Learning
Jaquier et al.: https://lnkd.in/gEv2b5g
#BayesianOptimization #Robotics
#MachineLearning
Slides: https://t.co/X5gKgF11bE
New optimization method: competitive gradient descent (CGD) for training GANs and multi-agent systems. The implicit competitive regularization provided by CGD means we get SOTA results with no explicit gradient penalty, better stability, and no mode collapse.
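For intuition, here is a toy numpy sketch (my own illustration, not from the slides) of the CGD update on a bilinear zero-sum game f(x, y) = x^T A y, where plain simultaneous gradient descent/ascent cycles or diverges while CGD contracts to the equilibrium at the origin:
```python
import numpy as np

# Toy zero-sum game: min_x max_y f(x, y) = x^T A y (equilibrium at x = y = 0).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
eta = 0.2
x, y = rng.standard_normal(3), rng.standard_normal(3)
I = np.eye(3)

for _ in range(200):
    gx, gy = A @ y, A.T @ x              # gradients of f w.r.t. x and y
    # CGD update: each player responds to the other's anticipated move, which
    # introduces the (I + eta^2 * mixed-Hessian product) preconditioner.
    dx = -eta * np.linalg.solve(I + eta**2 * A @ A.T, gx + eta * A @ gy)
    dy = eta * np.linalg.solve(I + eta**2 * A.T @ A, gy - eta * A.T @ gx)
    x, y = x + dx, y + dy

print(np.linalg.norm(x), np.linalg.norm(y))  # both norms shrink toward zero
```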
#AI #DeepLearning
❇️ @AI_Python
✴️ @AI_Python_EN
A new antibody search engine with publication data. #Free online platform for academic scientists!
#openaccess #openscience #phdchat
https://landing.benchsci.com/
Uncertainty Quantification in Deep Learning
https://www.inovex.de/blog/uncertainty-quantification-deep-learning/
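The post itself isn't quoted here, but one widely used uncertainty-quantification technique in deep learning is Monte Carlo dropout: keep dropout active at inference and treat the spread of repeated predictions as a rough uncertainty estimate. A minimal sketch (my own illustration, not taken from the linked article):
```python
import torch
import torch.nn as nn

# Small regression net with dropout between layers.
model = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Sample repeated stochastic forward passes with dropout left on."""
    model.train()                      # train() keeps dropout active; no weights are updated
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)   # predictive mean and spread

x = torch.randn(8, 4)
mean, std = mc_dropout_predict(model, x)
print(mean.shape, std.shape)           # torch.Size([8, 1]) torch.Size([8, 1])
```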
Transformers working for RL! Two simple modifications, moving the layer norm and adding gating, create GTrXL: an incredibly stable and effective architecture for integrating experience through time in RL.
https://arxiv.org/abs/1910.06764
❇️ @AI_Python
✴️ @AI_Python_EN
Using GTrXL, we find large performance gains over an LSTM in reinforcement learning tasks that require memory and integration of experience through time, without compromising on more reactive RL tasks.
This architecture really shines on some continuous control tasks requiring long temporal memory horizons, and compared to previous work it doesn't require any auxiliary losses.
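A rough sketch of the two modifications on a single transformer block: pre-layer-norm plus GRU-style gating in place of the plain residual connection. This is a simplified illustration based on the paper's description, not DeepMind's code:
```python
import torch
import torch.nn as nn

class GRUGate(nn.Module):
    """GRU-style gating used in place of residual additions."""
    def __init__(self, d_model: int, bias_init: float = 2.0):
        super().__init__()
        self.Wr, self.Ur = nn.Linear(d_model, d_model, bias=False), nn.Linear(d_model, d_model, bias=False)
        self.Wz, self.Uz = nn.Linear(d_model, d_model, bias=False), nn.Linear(d_model, d_model, bias=False)
        self.Wg, self.Ug = nn.Linear(d_model, d_model, bias=False), nn.Linear(d_model, d_model, bias=False)
        # Bias the update gate toward the skip path at init, which the paper
        # reports is important for stable RL training.
        self.bg = nn.Parameter(torch.full((d_model,), bias_init))

    def forward(self, x, y):             # x: skip input, y: sub-layer output
        r = torch.sigmoid(self.Wr(y) + self.Ur(x))
        z = torch.sigmoid(self.Wz(y) + self.Uz(x) - self.bg)
        h = torch.tanh(self.Wg(y) + self.Ug(r * x))
        return (1 - z) * x + z * h

class GTrXLBlock(nn.Module):
    """One transformer block with pre-LayerNorm and gated skip connections."""
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate1 = GRUGate(d_model)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.gate2 = GRUGate(d_model)

    def forward(self, x):
        a = self.ln1(x)                               # layer norm moved before the sub-layer
        a, _ = self.attn(a, a, a, need_weights=False)
        x = self.gate1(x, torch.relu(a))              # gated skip instead of x + a
        m = self.mlp(self.ln2(x))
        return self.gate2(x, m)

block = GTrXLBlock()
tokens = torch.randn(2, 16, 256)
print(block(tokens).shape)  # torch.Size([2, 16, 256])
```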
Forwarded from Deep Learning Achievements (InTec)
A learning path in data science for beginners
#Resources #DataScience #Education
#DataScience
❇️ @AI_Python
✴️ @AI_Python_EN
Hot paper:
Detect Online Trolls in Elections
https://arxiv.org/abs/1910.07130
#ArtificialIntelligence #Resources #DataMining #Paper
#AI #ArtificialIntelligence #datamining
❇️ @AI_Python
✴️ @AI_Python_EN
Forwarded from Deep Learning Achievements (InTec)
Training Data
This site both gathers suitable datasets in one place and makes them available, and it also helps you use those same initial models to label more of your data.
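As a generic illustration of that label-more-data-with-an-initial-model loop (a sketch of model-assisted labeling in general, not this site's actual workflow):
```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in: a small hand-labeled seed set and a large unlabeled pool.
X, y = make_classification(n_samples=5_000, n_features=10, random_state=0)
X_seed, y_seed = X[:200], y[:200]        # what a human has labeled so far
X_pool = X[200:]                         # unlabeled pool (labels hidden)

# 1) Train an initial model on the seed labels.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_seed, y_seed)

# 2) Pre-label the pool; route only confident predictions to a human reviewer.
proba = model.predict_proba(X_pool)
confident = proba.max(axis=1) >= 0.9
pseudo_labels = proba.argmax(axis=1)[confident]
print(f"pre-labeled {confident.sum()} of {len(X_pool)} pool examples for review")
```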
Forwarded from Data Experts (javad vhd)
A talk by Arthur Benjamin
on which field people should be trained in:
mathematics or probability!
@Data_Experts
Forwarded from Advertisements Archive 98, 99
🔷 Data is one of the most valuable and important assets of organizations and businesses, and maintaining and producing this asset creates many job opportunities that pave the way to wealth for specialists in this field. If you are interested in becoming a specialist in this area, contact us.
☎️ Consultation and registration:
66075641 - 66075626
Details of the Data Scientist training course ↙️
http://bit.ly/2pwzWmp
Details of the Data Engineer training course ↙️
http://bit.ly/2Mys1NB
🔷 Follow us on our Telegram channel ↙️
https://t.me/joinchat/AAAAAD6fyUw0AYXKLGbkow
🌐www.SCTAE.info
Forwarded from DLeX: AI Python (Deleted Account)
List of accepted papers at the NeurIPS conference:
https://neurips.cc/Conferences/2019/AcceptedPapersInitial
#Conference #Paper #ArtificialIntelligence #Algorithms #Resources
❇️ @AI_Python
✴️ @AI_Python_en
Forwarded from AI, Python, Cognitive Neuroscience (Deleted Account)
Deep Learning:
http://course.fast.ai
NLP:
http://bit.ly/fastai-nlp
Comp Linear Algebra:
http://github.com/fastai/numerical-linear-algebra
Bias, Ethics, & AI:
http://fast.ai/topics/#ai-in-society
Debunk Pipeline Myth:
http://bit.ly/not-pipeline
AI Needs You:
http://bit.ly/rachel-TEDx
Ethics Center:
http://bit.ly/USF-CADE
❇️ @AI_Python_EN
www.fast.ai
new fast.ai course: A Code-First Introduction to Natural Language Processing
fast.ai's newest course is Code-First Intro to NLP. It covers a blend of traditional NLP techniques, recent deep learning approaches, and urgent ethical issues.
Forwarded from RecommenderSystems
2017 Python Data Structures and Algorithms.pdf
11.6 MB