Forwarded from AI, Python, Cognitive Neuroscience (Farzad)
Course 1: A Learning Path to Become a Data Scientist in 2019
Link: https://bit.ly/2HOthei
Course 2: Experiments with Data
Link: https://bit.ly/2HQuQbw
Course 3: Python for Data Science
Link: https://bit.ly/2HOG5RG
Course 4: Twitter Sentiment Analysis
Link: https://bit.ly/2HR8O8A
Course 5: Creating a Time Series Forecast with Python
Link: https://bit.ly/2XniU6r
Course 6: A Path for Learning Deep Learning in 2019
Link: https://bit.ly/2HO1VVJ
Course 7: Loan Prediction Practice Problem
Link: https://bit.ly/2IcynQl
Course 8: Big Mart Sales Problem Using R
Link: https://bit.ly/2JUlZIb
❇️ @AI_Python_EN
Forwarded from AI, Python, Cognitive Neuroscience (Farzad)
The war between ML frameworks has raged on since the rebirth of deep learning. Who is winning? Horace He's data analysis shows clear trends: PyTorch is winning dramatically among researchers, while TensorFlow still dominates industry.
#PyTorch #Tensorflow
https://thegradient.pub/state-of-ml-frameworks-2019-pytorch-dominates-research-tensorflow-dominates-industry/
❇️ @AI_Python_EN
Forwarded from AI, Python, Cognitive Neuroscience (Farzad)
If you're interested in using PyTorch on free Colab TPUs, here are some notebooks to get you started:
https://github.com/pytorch/xla/tree/master/contrib/colab
❇️ @AI_Python_EN
Forwarded from Mohammad Anisi
#Job_Opportunity
Neurtex is hiring a backend developer. For more information, send your résumé to info@neurtex.com.
Forwarded from AI, Python, Cognitive Neuroscience (Farzad)
Forwarded from DLeX: AI Python (Farzad)
GANs_from_Scratch_1:_A_deep_introduction.pdf
1.7 MB
An introductory tutorial for undergraduate students:
"Concepts and Programming of GAN Networks with TensorFlow and PyTorch"
#Python #GAN #TensorFlow #Resources #DeepLearning #Book #PyTorch #Programming #Algorithms
❇️ @AI_Python
🗣 @AI_Python_arXiv
✴️ @AI_Python_EN
Forwarded from AI, Python, Cognitive Neuroscience (Farzad)
Deep Learning:
http://course.fast.ai
NLP:
http://bit.ly/fastai-nlp
Comp Linear Algebra:
http://github.com/fastai/numerical-linear-algebra
Bias, Ethics, & AI:
http://fast.ai/topics/#ai-in-society
Debunk Pipeline Myth:
http://bit.ly/not-pipeline
AI Needs You:
http://bit.ly/rachel-TEDx
Ethics Center:
http://bit.ly/USF-CADE
❇️ @AI_Python_EN
www.fast.ai
new fast.ai course: A Code-First Introduction to Natural Language Processing
fast.ai's newest course is Code-First Intro to NLP. It covers a blend of traditional NLP techniques, recent deep learning approaches, and urgent ethical issues.
Forwarded from AI, Python, Cognitive Neuroscience (Farzad)
Simple, Scalable Adaptation for Neural Machine Translation
Fine-tuning pre-trained Neural Machine Translation (NMT) models is the dominant approach for adapting to new languages and domains. However, fine-tuning requires adapting and maintaining a separate model for each target task. Researchers from Google propose a simple yet efficient approach for adaptation in #NMT. Their approach injects tiny task-specific adapter layers into a pre-trained model. These lightweight adapters, just a small fraction of the original model size, adapt the model to multiple individual tasks simultaneously.
Presumably it can be applied not only to #NMT but to many other #NLP, #NLU, and #NLG tasks.
Paper: https://arxiv.org/pdf/1909.08478.pdf
#BERT
❇️ @AI_Python_EN
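The adapter idea is easy to sketch: a bottleneck that projects the hidden state down, applies a nonlinearity, projects back up, and adds the result to the input via a residual connection. Below is a minimal NumPy illustration (not the paper's code; the dimensions, zero-initialization, and the `adapter` helper are all assumptions for this sketch). Zero-initializing the up-projection makes the adapter start as an identity map, so the frozen base model's behavior is preserved at the start of fine-tuning:

```python
import numpy as np

def adapter(h, W_down, b_down, W_up, b_up):
    """Bottleneck adapter: project down, nonlinearity, project up,
    then add a residual connection back to the input activations."""
    z = np.maximum(0.0, h @ W_down + b_down)  # ReLU bottleneck
    return h + (z @ W_up + b_up)              # residual keeps the base model recoverable

# Toy dimensions: hidden size 512, bottleneck 64 (a small fraction of the layer)
rng = np.random.default_rng(0)
d_model, d_bottleneck = 512, 64
W_down = rng.normal(0, 0.02, (d_model, d_bottleneck))
b_down = np.zeros(d_bottleneck)
W_up = np.zeros((d_bottleneck, d_model))  # zero init: adapter starts as identity
b_up = np.zeros(d_model)

h = rng.normal(size=(10, d_model))  # a batch of hidden states
out = adapter(h, W_down, b_down, W_up, b_up)
print(out.shape)            # (10, 512)
print(np.allclose(out, h))  # True: the zero-initialised adapter is a no-op
```

During adaptation only the adapter weights are trained, one small set per target task, while the shared pre-trained model stays frozen.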
The most important data science libraries in #Python
This chart was compiled from a survey of GitHub and published by ActiveWizards.
@ai_python
Convolutional #NeuralNetworks have become a foundational network architecture for numerous deep learning-based #ComputerVision tasks. Here, Heartbeat contributor Brian Mwangi explores their evolution in this excellent review of the research.
https://bit.ly/32fkz0p
Medium
A Research Guide to Convolution Neural Networks
Examining the advancements of CNN architectures over the past few years
Bayesian Optimization Meets Riemannian Manifolds in Robot Learning
Jaquier et al.: https://lnkd.in/gEv2b5g
#BayesianOptimization #Robotics
#MachineLearning
Slides: https://t.co/X5gKgF11bE. A new optimizer, competitive gradient descent (CGD), for training GANs and multi-agent systems. The implicit competitive regularization from CGD means we get SOTA results with no explicit gradient penalty, better stability, and no mode collapse.
#AI #DeepLearning
❇️ @AI_Python
✴️ @AI_Python_EN
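The intuition behind CGD is that each player's update anticipates the other's. On the classic scalar bilinear game f(x, y) = x*y, plain simultaneous gradient descent-ascent (GDA) spirals outward, while the CGD update converges to the equilibrium. A pure-Python sketch under that assumption (the closed-form scalar update below follows from CGD's local bilinear approximation; it is an illustration, not the authors' implementation):

```python
# Zero-sum bilinear game f(x, y) = x * y: player x minimises f, player y maximises it.

def gda_step(x, y, eta):
    # Naive simultaneous gradient descent-ascent: spirals away from (0, 0).
    return x - eta * y, y + eta * x

def cgd_step(x, y, eta):
    # Scalar CGD update: each player best-responds to the other's anticipated step.
    # The 1 / (1 + eta^2) factor is the implicit competitive regularisation,
    # with no explicit gradient penalty.
    c = eta / (1.0 + eta ** 2)
    return x - c * (y + eta * x), y + c * (x - eta * y)

eta = 0.2
gx = gy = cx = cy = 1.0
for _ in range(500):
    gx, gy = gda_step(gx, gy, eta)
    cx, cy = cgd_step(cx, cy, eta)

print(abs(cx), abs(cy))  # CGD shrinks toward the equilibrium (0, 0)
print(abs(gx), abs(gy))  # GDA blows up on the same game
```

Writing the iterate as a complex number z = x + iy makes the contrast explicit: GDA multiplies z by (1 + i*eta), whose magnitude exceeds 1, while the CGD multiplier has magnitude below 1 for any eta > 0.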
A new antibody search engine with publication data. #Free online platform for academic scientists!
#openaccess #openscience #phdchat
https://landing.benchsci.com/
Uncertainty Quantification in Deep Learning
https://www.inovex.de/blog/uncertainty-quantification-deep-learning/
Transformers working for RL! Two simple modifications, moving the layer norm and adding gating, create GTrXL: an incredibly stable and effective architecture for integrating experience through time in RL.
https://arxiv.org/abs/1910.06764
❇️ @AI_Python
✴️ @AI_Python_EN
By using GTrXL we find large performance gains in reinforcement learning tasks requiring memory and integration of experience through time compared to LSTM, whilst not compromising on more reactive RL tasks.
This architecture really shines on some continuous control tasks requiring long temporal memory horizons, and compared to previous work doesn't require any auxiliary losses.
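The gating modification replaces the transformer's residual connection x + y with a GRU-style gate, biased so the block starts out close to an identity map. A minimal NumPy sketch of that gating layer (a hypothetical illustration of the mechanism, assuming small random weights and a `gru_gating` helper; not the paper's code):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_gating(x, y, params, b_g=2.0):
    """GRU-style gating layer that replaces the residual connection x + y.
    x: skip-path input; y: sublayer output (attention or feed-forward).
    b_g > 0 biases the update gate toward passing x straight through,
    so the block initially behaves like an identity map."""
    Wr, Ur, Wz, Uz, Wg, Ug = params
    r = sigmoid(y @ Wr + x @ Ur)          # reset gate
    z = sigmoid(y @ Wz + x @ Uz - b_g)    # update gate, biased toward the skip path
    h = np.tanh(y @ Wg + (r * x) @ Ug)    # candidate state
    return (1.0 - z) * x + z * h          # convex blend of skip input and candidate

d = 8
rng = np.random.default_rng(0)
params = [rng.normal(0, 0.1, (d, d)) for _ in range(6)]
x = rng.normal(size=(4, d))  # skip-connection input
y = rng.normal(size=(4, d))  # sublayer output
out = gru_gating(x, y, params)
print(out.shape)  # (4, 8)
```

With a large gate bias, the update gate z is driven to zero and the layer simply passes x through, which is what makes early training stable before the gate learns when to admit new information.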
Forwarded from دستاوردهای یادگیری عمیق(InTec)
A path to learning data science for beginners
#Resources #DataScience #Education
❇️ @AI_Python
✴️ @AI_Python_EN
Hot paper
Detect Online Trolls in Elections
https://arxiv.org/abs/1910.07130
#AI #ArtificialIntelligence #DataMining #Resources #Paper
❇️ @AI_Python
✴️ @AI_Python_EN