DLeX: AI Python
Artificial intelligence and programming

Twitter:

https://twitter.com/NaviDDariya

Advertising inquiries and rates: @navidviola
Forwarded from AI, Python, Cognitive Neuroscience (Farzad)
The war between ML frameworks has raged on since the rebirth of deep learning. Who is winning? Horace He's data analysis shows clear trends: PyTorch is winning dramatically among researchers, while TensorFlow still dominates industry.
#PyTorch #TensorFlow

https://thegradient.pub/state-of-ml-frameworks-2019-pytorch-dominates-research-tensorflow-dominates-industry/

❇️ @AI_Python_EN
Forwarded from AI, Python, Cognitive Neuroscience (Farzad)
If you're interested in using PyTorch on free Colab TPUs, here are some notebooks to get you started:

https://github.com/pytorch/xla/tree/master/contrib/colab

❇️ @AI_Python_EN
Forwarded from Mohammad Anisi
#Job_Opportunity

Neurtex is hiring a back-end developer. For more information, send your résumé to info@neurtex.com.
Forwarded from AI, Python, Cognitive Neuroscience (Farzad)
Spoken Language Identification using ConvNets.
https://arxiv.org/abs/1910.04269

❇️ @AI_Python_EN
Forwarded from DLeX: AI Python (Farzad)
GANs_from_Scratch_1:_A_deep_introduction.pdf
1.7 MB
An introductory tutorial for undergraduate students

"Concepts and Programming of GANs with TensorFlow and PyTorch"

#Python #GAN #TensorFlow #Resources #DeepLearning #Book #PyTorch #Programming #Algorithms

❇️ @AI_Python
🗣 @AI_Python_arXiv
✴️ @AI_Python_EN
Forwarded from AI, Python, Cognitive Neuroscience (Farzad)
Simple, Scalable Adaptation for Neural Machine Translation

Fine-tuning pre-trained Neural Machine Translation (NMT) models is the dominant approach for adapting to new languages and domains. However, fine-tuning requires adapting and maintaining a separate model for each target task. Researchers from Google propose a simple yet efficient approach to adaptation in #NMT: injecting tiny task-specific adapter layers into a pre-trained model. These lightweight adapters, with just a small fraction of the original model's size, adapt the model to multiple individual tasks simultaneously.
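The adapter idea can be sketched in a few lines of numpy. This is an illustrative bottleneck design (layer norm, down-projection, ReLU, up-projection, residual), not the paper's actual code; all names and sizes here are made up.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each token vector over the feature dimension.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

class Adapter:
    """Bottleneck adapter inserted after a frozen pre-trained layer.

    Only W_down / W_up are trained per task; the base model stays fixed.
    """
    def __init__(self, d_model, d_bottleneck, rng):
        self.W_down = rng.normal(0.0, 0.02, size=(d_model, d_bottleneck))
        # Zero-init the up-projection so the adapter starts as the identity.
        self.W_up = np.zeros((d_bottleneck, d_model))

    def __call__(self, h):
        z = np.maximum(layer_norm(h) @ self.W_down, 0.0)  # down-project + ReLU
        return h + z @ self.W_up                          # up-project + residual

rng = np.random.default_rng(0)
adapter = Adapter(d_model=16, d_bottleneck=4, rng=rng)
h = rng.normal(size=(3, 16))   # 3 token vectors from the frozen model
out = adapter(h)
```

With a bottleneck of 4 against a model width of 16, the adapter adds only 2 * 4 * 16 = 128 parameters per layer, which is the "small fraction of the original model size" the post refers to.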

Presumably this can be applied not only to #NMT but also to many other #NLP, #NLU, and #NLG tasks.

Paper: https://arxiv.org/pdf/1909.08478.pdf

#BERT

❇️ @AI_Python_EN
The most important data science libraries in #Python

This chart was compiled from an analysis of GitHub and published by ActiveWizards.

@ai_python
Convolutional #NeuralNetworks have become a foundational network architecture for numerous deep learning-based #ComputerVision tasks. Here, Heartbeat contributor Brian Mwangi explores their evolution in this excellent review of the research.
https://bit.ly/32fkz0p
Bayesian Optimization Meets Riemannian Manifolds in Robot Learning

Jaquier et al.: https://lnkd.in/gEv2b5g

#BayesianOptimization #Robotics
#MachineLearning
Slides: https://t.co/X5gKgF11bE A new optimization method: competitive gradient descent (CGD) for training GANs/multi-agent systems. The implicit competitive regularization from CGD means we get SOTA results with no explicit gradient penalty, better stability, and no mode collapse.
#AI #DeepLearning
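For intuition, here is a toy sketch of CGD (my own, not from the slides) on the scalar bilinear game f(x, y) = x*y, where naive gradient descent-ascent famously spirals outward. The closed-form factor eta / (1 + eta**2) is specific to this scalar game, where the mixed second derivative is 1.

```python
# Zero-sum game: x minimizes f(x, y) = x*y, y maximizes it.
eta = 0.2
c = eta / (1.0 + eta**2)  # inverse of (1 + eta^2 * Dxy f * Dyx f) in the scalar case

def cgd_step(x, y):
    # Competitive gradient descent: each player anticipates the other's move.
    dx = -c * (y + eta * x)   # -(...)^-1 (grad_x f + eta * Dxy f * grad_y f)
    dy =  c * (x - eta * y)   #  (...)^-1 (grad_y f - eta * Dyx f * grad_x f)
    return x + dx, y + dy

def gda_step(x, y):
    # Naive simultaneous gradient descent-ascent, for comparison.
    return x - eta * y, y + eta * x

x, y = 1.0, 1.0   # CGD iterate
u, v = 1.0, 1.0   # GDA iterate
for _ in range(300):
    x, y = cgd_step(x, y)   # spirals in toward the (0, 0) equilibrium
    u, v = gda_step(u, v)   # spirals out and diverges
```

The extra anticipation term acts as the implicit regularizer the post mentions: no gradient penalty is added, yet the iterates contract instead of oscillating outward.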

❇️ @AI_Python
✴️ @AI_Python_EN
A new antibody search engine with publication data. #Free online platform for academic scientists!
#openaccess #openscience #phdchat

https://landing.benchsci.com/
Transformers working for RL! Two simple modifications, moving the layer norm and adding gating, create GTrXL: an incredibly stable and effective architecture for integrating experience through time in RL.
https://arxiv.org/abs/1910.06764

❇️ @AI_Python
✴️ @AI_Python_EN
Using GTrXL, we find large performance gains in reinforcement learning tasks requiring memory and integration of experience through time compared to LSTMs, while not compromising on more reactive RL tasks.
This architecture really shines on some continuous control tasks requiring long temporal memory horizons, and compared to previous work doesn't require any auxiliary losses.
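The gating modification replaces the transformer's residual connection with a GRU-style gate. Below is an illustrative numpy sketch of that gate; the weight names and initialization are assumptions based on the paper's description, not its code.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

class GRUGate:
    """GRU-style gate g(x, y) replacing the residual x + y in each GTrXL sublayer."""
    def __init__(self, d, rng, b_g=2.0):
        init = lambda: rng.normal(0.0, 0.02, size=(d, d))
        self.W_r, self.U_r = init(), init()
        self.W_z, self.U_z = init(), init()
        self.W_g, self.U_g = init(), init()
        self.b_g = b_g  # positive bias keeps the gate near the identity early in training

    def __call__(self, x, y):
        # x: stream input (residual branch), y: sublayer output (e.g. attention).
        r = sigmoid(y @ self.W_r + x @ self.U_r)            # reset gate
        z = sigmoid(y @ self.W_z + x @ self.U_z - self.b_g) # update gate
        h = np.tanh(y @ self.W_g + (r * x) @ self.U_g)      # candidate state
        return (1.0 - z) * x + z * h  # convex blend of input and candidate

rng = np.random.default_rng(0)
gate = GRUGate(d=8, rng=rng)
x = rng.normal(size=(5, 8))
y = rng.normal(size=(5, 8))
out = gate(x, y)
```

In the full architecture this gate combines the sublayer input x with the output of the sublayer applied to LayerNorm(x): that reordering is the "move layer-norm" modification, which keeps an ungated identity path through the network.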
35 data science projects

#Resources #DataScience #Education

💠 Link


❇️ @AI_Python
✴️ @AI_Python_EN
All the important changes in convolutional networks up to 2018:

Paper link
A path to learning data science for beginners

#Resources #DataScience #Education

❇️ @AI_Python
✴️ @AI_Python_EN


Training Data

This site both gathers suitable datasets in one place and makes them available, and it helps you use those same initial models to label more data.
Forwarded from Data Experts (javad vhd)
A talk by Arthur Benjamin
on which field people should be trained in:
mathematics or probability!!

@Data_Experts