AI, Python, Cognitive Neuroscience
A collection of research papers on decision trees, classification trees, and regression trees with implementations:
https://github.com/benedekrozemberczki/awesome-decision-tree-papers
#BigData #MachineLearning #AI #DataScience #Algorithms #NLProc #Coding #DataScientists

✴️ @AI_Python_EN
François Chollet

This is how you implement a network in Chainer. Chainer, the original eager-first #deeplearning framework, has had this API since launch, in mid-2015. When PyTorch got started, it followed the Chainer template (in fact, the prototype of PyTorch was literally a fork of Chainer).

Nearly every day, I am getting ignorant messages saying, "PyTorch is an original innovation that TensorFlow/Keras copied". This is incorrect. Subclassing is a fairly obvious way to do things in Python, and Chainer had this API first. Many others followed.

I had been looking at adding a Model subclassing API to Keras as early as late 2015 (before the Functional API even existed, and over a year before I was aware of PyTorch), inspired by Chainer. Our first discussions about adding an eager execution mode also predate PyTorch.

By the time #PyTorch came out, I had been looking at its API (which is exactly the Chainer API) for 1.5 years (since the release of Chainer). It wasn't exactly a shock. There was nothing we didn't already know.
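The subclassing pattern the thread is describing can be illustrated without any framework. Below is a hypothetical, framework-free sketch of the Chainer/PyTorch-style API (the `Dense`, `Model`, and `MLP` names are mine): layers are declared as attributes in `__init__` and composed imperatively in a forward method.

```python
class Dense:
    """Toy fully connected layer: y = Wx + b (pure Python, no framework)."""
    def __init__(self, in_dim, out_dim):
        # Fixed small weights so the example is deterministic.
        self.W = [[0.1] * in_dim for _ in range(out_dim)]
        self.b = [0.0] * out_dim

    def __call__(self, x):
        return [sum(w * v for w, v in zip(row, x)) + b
                for row, b in zip(self.W, self.b)]


class Model:
    """Minimal base class standing in for chainer.Chain / torch.nn.Module."""
    def __call__(self, x):
        return self.forward(x)


class MLP(Model):
    """Declare layers in __init__, compose them imperatively in forward()."""
    def __init__(self):
        self.l1 = Dense(4, 3)
        self.l2 = Dense(3, 2)

    def forward(self, x):
        h = [max(0.0, v) for v in self.l1(x)]  # ReLU nonlinearity
        return self.l2(h)


model = MLP()
print(model([1.0, 2.0, 3.0, 4.0]))  # ≈ [0.3, 0.3]
```

Because forward() is ordinary Python executed eagerly, you can branch, loop, and print inside it — which is the appeal of the define-by-run style Chainer introduced.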



To be clear, it's a good thing that API patterns and technical innovations are cross-pollinating among deep learning frameworks. The #Keras API itself has had a pretty big influence on libraries that came after. It's completely fine, and it all benefits end users.

✴️ @AI_Python_EN
Gradient Flow: a new notebook that explains automatic differentiation using eager execution in #TensorFlow. I go over computational graphs, vector-valued functions, gradients, etc.

https://colab.research.google.com/github/zaidalyafeai/Notebooks/blob/master/GradientFlow.ipynb
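The core idea behind automatic differentiation can be sketched without TensorFlow at all. This toy forward-mode example using dual numbers is my own illustration, not code from the notebook: each value carries its derivative alongside it, and the chain rule propagates automatically through ordinary arithmetic.

```python
class Dual:
    """Dual number a + b*eps with eps**2 == 0; 'dot' carries the derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (a + a'e)(b + b'e) = ab + (a'b + ab')e
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__


def derivative(f, x):
    """Evaluate df/dx at x by seeding the dual part with 1."""
    return f(Dual(x, 1.0)).dot


# d/dx (3x^2 + 2x) = 6x + 2, so at x = 2.0 this prints 14.0
print(derivative(lambda x: 3 * x * x + 2 * x, 2.0))
```

TensorFlow's eager `GradientTape` implements reverse mode instead (more efficient for many-inputs/few-outputs functions like losses), but the derivative-propagation idea is the same.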

✴️ @AI_Python_EN
Supervisely: end-to-end web-platform for Deep Learning and Computer Vision

🌎 link

✴️ @AI_Python_EN
Wow, so this actually happened. We are the highest-rated course in Artificial Intelligence and Computer Science (out of more than 1,000 courses) on Class Central, with 170,000 students and counting. Such an honor. #wearehelsinkiuni #ElementsofAI #AIChallenge #AIutmaningen
http://www.elementsofai.com
One of the best #MachineLearning glossaries, by Google

It will definitely come in handy - https://lnkd.in/gNiE9JT

✴️ @AI_Python_EN
In my line of work, I often need to model multiple dependent (outcome) variables simultaneously.

These may be latent variables (each with several indicator, i.e. observed, variables), directly observed variables, or a combination of the two.

Structural Equation Modeling (SEM) is one method able to handle these situations. Observed variables do not have to be continuous and, in fact, can be a mix of data types, including count variables and binary indicators.

Here are some excellent #books on SEM for those who'd like to learn more about this method:

- Principles and Practice of Structural Equation Modeling (Kline)
- Linear Causal Modeling with Structural Equations (Mulaik)
- Structural Equations with Latent Variables (Bollen)
- Handbook of Structural Equation Modeling (Hoyle)

It's not an easy method for most of us to master, however, and new developments are happening at a rapid pace. Structural Equation Modeling: A Multidisciplinary Journal (Routledge) is an excellent resource for experienced SEM modelers.

✴️ @AI_Python_EN
**** Advanced NLP with spaCy ****

Credits - Ines Montani

Link - https://course.spacy.io/

#nlp #spacy #naturallanguageprocessing #machinelearning
#datascience

✴️ @AI_Python_EN
Every day we are bombarded with Artificial Intelligence news. Each new product is advertised as packed with AI. Every business wants to be perceived as an AI wonder house.

To cut through this AI information storm, I've prepared a simplified (and definitely not complete) chart, presented below. What is currently called "Artificial Intelligence" is a type of Machine Learning that uses multi-layer (i.e. deep) #ArtificialNeuralNetworks. Processing the many "hidden" layers of these networks means performing matrix operations, which are done far more efficiently on Graphics Processing Units (GPUs) than on standard CPUs.
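To make the "it's all matrix operations" point concrete, here is a hypothetical pure-Python sketch (the `matmul` function and the example numbers are mine): pushing a batch of inputs through one hidden layer is exactly one matrix product.

```python
def matmul(A, B):
    """Naive matrix product: the core operation behind each network layer."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

# A batch of 2 inputs (rows) through a layer mapping 4 inputs -> 3 units:
X = [[1.0, 2.0, 3.0, 4.0],
     [0.0, 1.0, 0.0, 1.0]]
W = [[0.1, 0.2, 0.3],   # one weight column per hidden unit
     [0.1, 0.2, 0.3],
     [0.1, 0.2, 0.3],
     [0.1, 0.2, 0.3]]
H = matmul(X, W)  # the hidden-layer activations, before any nonlinearity
print(H)          # ≈ [[1.0, 2.0, 3.0], [0.2, 0.4, 0.6]]
```

Every output entry is computed independently of the others, which is exactly why GPUs, with thousands of parallel cores, handle deep networks so much faster than CPUs.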

And since sometimes we have no f… idea what is going on inside these artificial neural networks, we decided to call it #ArtificialIntelligence ;)

✴️ @AI_Python_EN
Geoffrey Hinton Leads Google Brain Representation Similarity Index Research Aiming to Understand Neural Networks

https://medium.com/syncedreview/geoffrey-hinton-leads-google-brain-representation-similarity-index-research-aiming-to-understand-b5d14bf77f49
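The similarity index in question is CKA (centered kernel alignment), used to compare representations across layers and networks. Below is a minimal sketch of the linear variant as I understand it, assuming NumPy is available; this is my own reading of the method, not code from the paper.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices (examples x features)."""
    # Center each feature (column) so similarity ignores mean offsets.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # ||Y^T X||_F^2, normalized by each representation's self-similarity.
    num = np.linalg.norm(Y.T @ X, ord='fro') ** 2
    den = (np.linalg.norm(X.T @ X, ord='fro')
           * np.linalg.norm(Y.T @ Y, ord='fro'))
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))                   # fake layer activations
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))   # a random rotation
print(linear_cka(X, X))       # identical representations -> 1.0
print(linear_cka(X, X @ Q))   # also ~1.0: invariant to orthogonal rotation
```

That rotation invariance is the selling point: two layers can encode the same information in differently rotated coordinate systems, and CKA still scores them as similar.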

✴️ @AI_Python_EN
For anyone interested in non-parametric multivariate data analysis, my PhD thesis is now publicly available on arXiv.
https://arxiv.org/abs/1905.10716v1

✴️ @AI_Python_EN
#Statistics such as correlation, mean and standard deviation (variance) create strong visual images and meaning. Two different #datasets with the same correlation would sort of look the same. Right?

Not so much.

Each of these very different-looking graphs plots a dataset with the same correlation, mean, and SD. This is why plotting data is so important, though oddly it is so rarely (in my experience) done.

https://bit.ly/2oZ29MP
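The classic illustration of this is Anscombe's quartet. The check below uses only the standard library and the published Anscombe values for two of the four datasets: one is roughly linear with noise, the other a clean parabola, yet their summary statistics match.

```python
def mean(v):
    return sum(v) / len(v)

def corr(x, y):
    """Pearson correlation, computed from scratch."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Anscombe's quartet, datasets I and II: same x values, very different shapes.
x  = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

for y in (y1, y2):
    # Both print mean ≈ 7.5 and correlation ≈ 0.816
    print(round(mean(y), 2), round(corr(x, y), 3))
```

Identical numbers, and only a scatterplot reveals that the two datasets have nothing structural in common.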

✴️ @AI_Python_EN
Misconception 4: Explainable #machinelearning is just models of models.

I do like surrogate models. They have several important uses. However, I also REALLY like tools that explain models directly, including:

- ALE: https://lnkd.in/e3mz23V
- ICE: https://lnkd.in/eaQxk_Q
- Friedman's H-stat: https://lnkd.in/emwNcdy
- Partial dependence: https://lnkd.in/ejnkFYN, Section 10.13.2
- Shapley explanations: https://lnkd.in/ewsMxbU

(What did I miss? Any others?)

Moreover, surrogate models & direct explanatory techniques work very well together! See pic below.
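As a concrete illustration of one of these direct techniques, here is a from-scratch partial-dependence sketch. The model function and the tiny dataset are my own toy examples, not code from the linked references: for each grid value, clamp one feature across every row, predict, and average (the pointwise mean of the ICE curves).

```python
def model(row):
    """A stand-in black-box model: f(x0, x1) = x0**2 + 3*x1."""
    x0, x1 = row
    return x0 ** 2 + 3 * x1

# A small "dataset" of observed rows (x0, x1).
data = [(0.0, 1.0), (1.0, 2.0), (2.0, 0.0), (3.0, 1.0)]

def partial_dependence(model, data, feature, grid):
    """Average prediction with one feature clamped to each grid value."""
    pd = []
    for g in grid:
        preds = []
        for row in data:
            row = list(row)
            row[feature] = g          # clamp the chosen feature
            preds.append(model(row))
        pd.append(sum(preds) / len(preds))
    return pd

grid = [0.0, 1.0, 2.0]
print(partial_dependence(model, data, feature=0, grid=grid))  # → [3.0, 4.0, 7.0]
```

The recovered curve is g**2 plus a constant offset (3 times the mean of x1), i.e. the model's marginal effect of the clamped feature — no surrogate model needed.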

Misconception 3: https://lnkd.in/eM3hVyW

Read more/contribute: https://lnkd.in/e8_hciE

#ai #datascience #deeplearning #aiforall #artificialintelligence #datascience #ml #python

✴️ @AI_Python_EN
5 Computer Vision Textbooks

Textbooks are books written by experts, often academics, and are designed to be used as a reference by students and practitioners.

They focus mainly on general methods and theory (math), not on the practical concerns of problems and the application of methods (code).

The top five textbooks on computer vision are as follows (in no particular order):

🔸 Computer Vision: Algorithms and Applications, 2010.
🔸 Computer Vision: Models, Learning, and Inference, 2012.
🔸 Computer Vision: A Modern Approach, 2002.
🔸 Introductory Techniques for 3-D Computer Vision, 1998.
🔸 Multiple View Geometry in Computer Vision, 2004.

Top 3 Computer Vision Programmer Books

Programmer #books are playbooks (e.g. O’Reilly books) written by experts, often developers and engineers, and are designed to be used as a reference by practitioners.


🔸 Learning OpenCV 3, 2017.
🔸 Programming Computer Vision with Python, 2012.
🔸 Practical Computer Vision with SimpleCV, 2012.

#ComputerVision

✴️ @AI_Python_EN
AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence https://arxiv.org/abs/1905.10985 Historically, hand-designed pipelines have ultimately been outperformed by entirely learned ones. Will that be true of creating general AI itself?

Three Pillars are essential for the approach: (1) meta-learning architectures, (2) meta-learning the learning algorithms themselves, and (3) generating effective learning environments.

I argue that either the manual approach or the AI-generating algorithm path could produce general AI first, and both are scientifically worthwhile irrespective of which is the fastest path.

Because both are promising, yet the ML community is currently committed to the manual approach, I argue that our community should increase its research investment in the AI-GA approach. To encourage such research, I describe promising work in each of the Three Pillars.
Because it may be the fastest path to #AGI, and because it is inherently interesting to understand the conditions under which a simple algorithm can produce general AI (as happened on Earth), I argue that AI-GAs should be considered a new grand challenge of computer science research.
✴️ @AI_Python_EN
Aude Oliva (MIT): "there are about 200 papers using ConvNets to model the activity of the primate visual cortex." She is running a challenge to explain fMRI and MEG data: http://algonauts.csail.mit.edu/challenge.html


Alyosha Efros: "A few years ago I gave a talk whose title was 'the revolution will not be supervised'. Yann has been advertising my slogan. Whenever Yann... https://www.redbubble.com/people/perceptron/works/24771996-the-revolution-will-not-be-supervised-white-font-3d?p=t-shirt
✴️ @AI_Python_EN
“An Explicitly Relational Neural Network Architecture” - new work from the DeepMind cognition team takes a step towards reconciling #deeplearning and symbolic #AI
https://arxiv.org/abs/1905.10307
✴️ @AI_Python_EN