AI, Python, Cognitive Neuroscience
Transformers: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch

https://huggingface.co/transformers
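
A minimal sketch of loading a pretrained model through the library's Auto classes, assuming a recent transformers version with PyTorch installed; the checkpoint name is just an example:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load a pretrained checkpoint and its matching tokenizer by name.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Tokenize a sentence and run it through the model.
inputs = tokenizer("Transformers works with TensorFlow 2.0 and PyTorch.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden_size)
```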

✴️ @AI_PYTHON_EN
OpenAI’s GPT-2 Text Generator: Wise As a Scholar

https://www.youtube.com/watch?v=0OtZ8dUFxXA

OpenAI's post:
https://openai.com/blog/gpt-2-6-month-follow-up/

✴️ @AI_Python_en
Free ebooks on Deep Learning

PDF and EPUB books on Deep Learning. Make sure you comply with copyright law: use this repository only to get familiar with the content, and purchase a legal copy afterward!

You should also save this link by forwarding the message to your Saved Messages (just long-tap / click the message and type 'Saved Messages' in the dialogue search) or to a group of yours, because the repo might get shut down for copyright violation.

Link: https://github.com/ontiyonke/Free-Deep-Learning-Books/tree/master/book

#library #ebook

❇️ @AI_PYTHON_en
Why is Andrew reading a 30-year-old software engineering paper?

http://worrydream.com/refs/Brooks-NoSilverBullet.pdf

❇️ @AI_Python_en
DYC is a CLI tool that helps you document your #python source code. It keeps you alert to new methods that were added but not documented, and it also supports building a reusable docstring template. Just answer the prompts in your terminal to see the effect on your files.

https://github.com/Zarad1993/dyc
Sharing our #NeurIPS2019 paper on generating graphs (~5K nodes) with Graph Recurrent Attention Networks (GRAN). It scales much better and achieves SOTA performance with very impressive sample quality.
https://arxiv.org/abs/1910.00760
Code: https://github.com/lrjconan/GRAN
#strange

An awesome list of dev-related movies:
https://github.com/aryaminus/dev-movies

In case you don't get enough development at work!
How #Facebook used Mask R-CNN, #PyTorch, and custom hardware integrations like foveated processing to improve Portal’s Smart Camera system.

Link:
https://ai.facebook.com/blog/smart-camera-portal-advances/
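
The blog post covers Facebook's production system. Purely as a point of reference, here is a minimal sketch of running the off-the-shelf pretrained Mask R-CNN that ships with torchvision; the image path and score threshold are placeholders, and this is not Portal's pipeline:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained Mask R-CNN with a ResNet-50 FPN backbone (COCO weights).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open("frame.jpg").convert("RGB")  # placeholder input frame
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]  # one dict per input image

# Keep confident detections; 'masks' holds per-instance segmentation masks.
keep = prediction["scores"] > 0.5
boxes, masks = prediction["boxes"][keep], prediction["masks"][keep]
```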

#CV #DL #Segmentation
Library for scikit-learn parallelization

Operations like grid search, random forests, and others that use the n_jobs parameter in scikit-learn can automatically hand off their parallelism to a Dask cluster.

Link: https://ml.dask.org/joblib.html
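
A minimal sketch following the linked docs; the toy data and parameter grid are placeholders:

```python
import joblib
from dask.distributed import Client
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

client = Client()  # connect to a Dask cluster (starts a local one by default)

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
search = GridSearchCV(
    RandomForestClassifier(),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, 5, None]},
    n_jobs=-1,
)

# Hand scikit-learn's joblib-based parallelism off to the Dask cluster.
with joblib.parallel_backend("dask"):
    search.fit(X, y)
```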

#ML

❇️ @AI_Python_EN
Machine learning datasets: A list of the biggest machine learning datasets from across the web.
https://lnkd.in/e7WZFTw

❇️ @AI_Python_EN
ARTIFICIAL INTELLIGENCE 101
"AI 101: The First World-Class Overview of AI for All."
1) AI 101 CheatSheet: https://lnkd.in/eXY_q_C
2) Curated Open-Source Codes: https://lnkd.in/dWUwH-Z

❇️ @AI_Python_EN
Model interpretation and feature importance are key skills for #datascientists to learn when running #machinelearning models. Here is a snippet from the #Genomics perspective.
a) Feature importance scores highlight parts of the input most predictive for the output. For DNA sequence-based models, these can be visualized as a sequence logo of the input sequence, with letter heights proportional to the feature importance score, which may also be negative (as visualized by letters facing upside down).
b) Perturbation-based approaches perturb each input feature (left) and record the change in model prediction (centre) in the feature importance matrix (right). For DNA sequences, the perturbations correspond to single base substitutions (see the sketch after this list).
c) Backpropagation-based approaches compute the feature importance scores using gradients or augmented gradients such as DeepLIFT (Deep Learning Important FeaTures) for the input features with respect to the model prediction.
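
As an illustration of b), here is a minimal sketch of perturbation-based importance (in-silico mutagenesis) for a DNA sequence model; `predict` is a hypothetical stand-in for any trained model that maps a one-hot encoded sequence to a scalar prediction:

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """One-hot encode a DNA string into a (len(seq), 4) array."""
    mat = np.zeros((len(seq), len(BASES)))
    for i, base in enumerate(seq):
        mat[i, BASES.index(base)] = 1.0
    return mat

def perturbation_importance(predict, seq):
    """Feature importance matrix: change in prediction for every
    single-base substitution relative to the reference sequence."""
    ref_score = predict(one_hot(seq))
    scores = np.zeros((len(seq), len(BASES)))
    for i in range(len(seq)):
        for j, base in enumerate(BASES):
            if seq[i] == base:
                continue  # reference base: change is zero by definition
            mutated = seq[:i] + base + seq[i + 1:]
            scores[i, j] = predict(one_hot(mutated)) - ref_score
    return scores
```
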
Link to this lovely paper:
https://lnkd.in/dfmvP9c

❇️ @AI_Python_EN
This awesome story from ETH Zürich #AI #researchers needs to be told! They used #artificialintelligence to improve the quality of images recorded by a relatively new biomedical imaging method. This paves the way towards more accurate #diagnosis and cost-effective devices. How awesome is that!

An important note on optoacoustic tomography: they used a #machinelearning method to improve optoacoustic imaging. This relatively young #medicalimaging technique can be used for applications such as visualizing blood vessels, studying brain activity, characterizing skin lesions and diagnosing breast cancer. The paper is here:
https://lnkd.in/dtgUq4A

Code: https://lnkd.in/dYy32Vd

#deeplearning

❇️ @AI_Python_en
Is Rectified Adam actually better than standard Adam? I ran 24 experiments to find out. The answer? Meh, not really. Full tutorial w/ #Python code here:
http://pyimg.co/asash
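
For context, a minimal sketch of swapping Rectified Adam in for Adam on a toy Keras model; the RectifiedAdam optimizer from TensorFlow Addons is used here as one available implementation, which may differ from the one used in the tutorial, and the model itself is a placeholder:

```python
import tensorflow as tf
import tensorflow_addons as tfa  # pip install tensorflow-addons

def build_model(optimizer):
    """Same toy MNIST-sized model, compiled with the given optimizer."""
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer=optimizer,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

adam_model = build_model(tf.keras.optimizers.Adam(learning_rate=1e-3))
radam_model = build_model(tfa.optimizers.RectifiedAdam(learning_rate=1e-3))
```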

#DeepLearning #Keras #MachineLearning #ArtificialIntelligence #AI #DataScience

❇️ @AI_Python_en
A new #python package, #imagededup (Image Deduplication), is now available on GitHub 🤩
GitHub:
https://lnkd.in/d8bTvf6
Docs:
https://lnkd.in/dDRpNiU
It allows you to find duplicate images (near and exact) with a variety of #hashing methods and #ConvolutionalNeuralNetworks. Anyone doing applied Computer Vision knows the pain duplicate images can cause, and even in research datasets this can be an issue; see our CIFAR-10 example notebook:

https://lnkd.in/ddB97nf
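
A minimal sketch following the package docs, using perceptual hashing; the image directory path is a placeholder:

```python
from imagededup.methods import PHash

phasher = PHash()

# Hash every image in the directory, then group near/exact duplicates.
encodings = phasher.encode_images(image_dir="path/to/images")
duplicates = phasher.find_duplicates(encoding_map=encodings)

# `duplicates` maps each filename to the list of its duplicate filenames.
```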

❇️ @AI_Python_en
Python has many advantages, but speed is not one of them. Most production code in the enterprise is currently powered by the JVM and .NET. Python has scikit-learn, xgboost and PyTorch, which makes it the de facto standard in AI, but it's still too slow.

Before Kotlin, the JVM didn't have anything as convenient as Python. Now there's Kotlin: concise, intuitive and fast! Kotlin is already the programming language for Android; now it's time to make it the programming language for AI. What's needed is a lightweight and scalable JVM library that implements the fit/transform/predict interface of scikit-learn. I believe it's time to build it, and I believe Kotlin is an ideal language for that. If someone wants to lead this project, come forward and start building this library. I will provide publicity support.

Burkov

❇️ @AI_Python_en