AI, Python, Cognitive Neuroscience
Is Math Blocking You from Doing Machine Learning?

#machinelearning

✴️ @AI_Python_EN
❇️ @AI_Python
🗣 @AI_Python_arXiv
"When you first study a field, it seems like you have to memorize a zillion things. You don't. What you need is to identify the 3-5 core principles that govern the field. The million things you thought you had to memorize are various combinations of the core principles." -J. Reed

“1. Multiply things together
2. Add them up
3. Replace negatives with zeros
4. Return to step 1, a hundred times” - Jeremy Howard
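
Those four steps are just stacked linear layers with ReLU activations. A minimal NumPy sketch of the loop, with made-up shapes, random data, and He-scaled weights so the activations stay finite:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))                                    # one toy input example
layers = [rng.normal(size=(8, 8)) * np.sqrt(2 / 8) for _ in range(100)]
biases = [np.zeros(8) for _ in range(100)]

for W, b in zip(layers, biases):
    out = x @ W              # 1. multiply things together
    out = out + b            # 2. add them up
    x = np.maximum(out, 0)   # 3. replace negatives with zeros (ReLU)
                             # 4. return to step 1, a hundred times
print(x)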

#artificialintelligence #deeplearning #machinelearning

✴️ @AI_Python_EN
❇️ @AI_Python
🗣 @AI_Python_arXiv
MIT 6.S191 Introduction to Deep Learning: https://lnkd.in/e2qmSWR

#artificialintelligence #deeplearning #machinelearning #tensorflow

✴️ @AI_Python_EN
❇️ @AI_Python
🗣 @AI_Python_arXiv
If you want to learn the basics of deep learning and TensorFlow, check out MIT's introductory course on deep learning. The complete lecture series is now online, with plenty of code examples for TensorFlow 2.0. The course covers a wide range of topics (computer vision, GANs, NLP, etc.), and there are guest lectures from Google, NVIDIA, and others. The code is designed to run seamlessly on Google Colab, so it is very easy to get started. Check it out! #deeplearning #machinelearning

Article: https://lnkd.in/d7yT6dU
Course page: https://lnkd.in/deaTRDZ
Github code: https://lnkd.in/d-jwcPW

✴️ @AI_Python_EN
❇️ @AI_Python
🗣 @AI_Python_arXiv
Cheat Sheet

Subgradient Descent, Mirror Descent, and Online Learning

By Sebastian Pokutta: https://lnkd.in/eMYrh33

#artificialintelligence #deeplearning #machinelearning

✴️ @AI_Python_EN
❇️ @AI_Python
🗣 @AI_Python_arXiv
image_2019-03-02_18-12-57.png
1 MB
#DataScience Cheatsheet

✴️ @AI_Python_EN
image_2019-03-02_18-14-28.png
1.8 MB
Need a refresher on statistics? Check out this 10-page cheatsheet that covers a semester's worth of introductory statistics, from Bayes' rule to Markov chains. Check it out! #statistics


Github: https://lnkd.in/dgm9GcC

✴️ @AI_Python_EN
image_2019-03-02_18-15-57.png
131.5 KB
Equi-normalization of Neural Networks

Stock et al.: https://lnkd.in/eSCXsr7

#ComputerVision #PatternRecognition #MachineLearning

✴️ @AI_Python_EN
Model predictive control actuates the overhead crane to move objects of different mass. The overhead crane is converted to an inverted pendulum with a sign change in the state space model. Python GEKKO has new capabilities for system identification and state space modeling.

Overhead crane in Python and MATLAB: https://lnkd.in/e_EB23R
Inverted pendulum: https://lnkd.in/eZpgd_3
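
A minimal GEKKO model predictive control sketch, assuming the gekko package is installed (pip install gekko). The first-order plant below is a toy stand-in for illustration, not the crane or pendulum models from the links above:

import numpy as np
from gekko import GEKKO

m = GEKKO(remote=False)
m.time = np.linspace(0, 10, 51)          # prediction horizon

u = m.MV(value=0, lb=-5, ub=5)           # manipulated variable (e.g., drive force)
u.STATUS = 1                             # let the optimizer move u
u.DCOST = 0.1                            # penalize rapid moves

x = m.CV(value=0)                        # controlled variable (e.g., position)
x.STATUS = 1                             # include x in the objective
x.SP = 1.0                               # setpoint
m.options.CV_TYPE = 2                    # squared-error objective

tau = 2.0                                # toy first-order time constant
m.Equation(tau * x.dt() == -x + u)       # illustrative plant, not the crane model

m.options.IMODE = 6                      # MPC mode
m.solve(disp=False)
print(x.value[-1])                       # position at the end of the horizon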

#modelpredictivecontrol #python

✴️ @AI_Python_EN
image_2019-03-02_18-17-59.png
289.3 KB
7 Steps for the Data Science Interview: No. 5 Is a Game Changer (at Least for Me)

Pranav Dar wrote concise guidelines for acing the interview (I'm also hiring; you can see the posting here: https://lnkd.in/f4zQwEw).

Homework (no. 5) is the game changer; this is where you can make the company feel special.

You can read the complete article here:
https://lnkd.in/f92mu9h

#datascience #interviews

✴️ @AI_Python_EN
Robust Re-identification of Manta Rays from Natural Markings by Learning Pose Invariant Embeddings

Moskvyak et al.: https://lnkd.in/eqYaqQD

#ArtificialNeuralNetworks #ComputerVision #PatternRecognition #Technology

✴️ @AI_Python_EN
Counting, division, and taking a logarithm: AI. At least they were honest.

https://qz.com/1563668/lyfts-ipo-filing-highlights-risk-factors-other-companies-dont-mention/

✴️ @AI_Python_EN
"Data Visualization: A practical introduction" A stunning, beautiful, carefully researched, free, online book

https://socviz.co

✴️ @AI_Python_EN
image_2019-03-03_00-47-06.png
806 KB
39 Studies About Human Perception in 30 Minutes, by Kenn Elliott. Awesome article. If you do any visualization or computer vision, these are things you need to know (but most people still don't)!

https://medium.com/@kennelliott/39-studies-about-human-perception-in-30-minutes-4728f9e31a73

✴️ @AI_Python_EN
💡 What are the three types of error in an ML model?

👉 1. Bias - error caused by choosing an algorithm that cannot accurately model the signal in the data, i.e. the model is too general or was incorrectly selected. For example, selecting a simple linear regression to model highly non-linear data would result in error due to bias.

👉 2. Variance - error from an estimator being too specific and learning relationships that are specific to the training set but do not generalize well to new samples. Variance can come from fitting too closely to noise in the data, and models with high variance are extremely sensitive to changing inputs. Example: creating a decision tree that splits the training set until every leaf node contains only one sample (see the sketch after this list).

👉 3. Irreducible error - error caused by noise in the data that cannot be removed through modeling. Example: inaccuracy in data collection causes irreducible error.
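
A minimal scikit-learn sketch of the first two error types, assuming scikit-learn is installed; the data and models are illustrative only:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

# Toy non-linear data with noise (the noise is the irreducible part)
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 6, 200)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=200)
X_train, X_test, y_train, y_test = X[::2], X[1::2], y[::2], y[1::2]

# High bias: a straight line cannot capture the sine-shaped signal
lin = LinearRegression().fit(X_train, y_train)

# High variance: a fully grown tree (one sample per leaf) memorizes noise
tree = DecisionTreeRegressor(min_samples_leaf=1).fit(X_train, y_train)

for name, model in [("linear (high bias)", lin), ("deep tree (high variance)", tree)]:
    print(name,
          "train MSE:", round(mean_squared_error(y_train, model.predict(X_train)), 3),
          "test MSE:", round(mean_squared_error(y_test, model.predict(X_test)), 3))

Neither model's test error can drop below the noise variance (about 0.09 in this toy setup); that floor is the irreducible error.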

#datascience

✴️ @AI_Python_EN
A bird's-eye view of optimization algorithms

By Fabian Pedregosa: https://lnkd.in/d9cXkVZ

#ArtificialIntelligence #NeuralNetworks

✴️ @AI_Python_EN
Brand image can be studied and used in many ways.

One of the simplest is to ask respondents in a consumer survey to rate brands they know according to attributes thought to reflect the intended positionings of the brands.

This is often tracked over time. Often a simple yes/no "pick any" grid is used in the questionnaire, though ratings on 5-point scales are also common. Correspondence analysis (CA) is frequently employed to "map" the brands and attributes in 2-3 dimensions.

CA will also reduce the effect of brand size, though many users do not seem to know this. Distances on the map are also often interpreted very literally, which is a mistake.
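
As a rough illustration of the CA mapping step, here is a minimal NumPy sketch on a hypothetical brand-by-attribute "pick any" count table (the numbers are made up):

import numpy as np

# Hypothetical counts: rows are brands, columns are image attributes
N = np.array([[120,  40,  60],
              [ 30,  90,  50],
              [ 60,  70, 110]], dtype=float)

P = N / N.sum()                     # correspondence matrix
r = P.sum(axis=1)                   # row (brand) masses
c = P.sum(axis=0)                   # column (attribute) masses

# Standardized residuals, which CA decomposes with an SVD
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates for a 2-D brand/attribute map
brand_coords = (U * sv) / np.sqrt(r)[:, None]
attr_coords = (Vt.T * sv) / np.sqrt(c)[:, None]
print(brand_coords[:, :2])
print(attr_coords[:, :2])

The normalization by row and column masses is what reduces the effect of brand size mentioned above.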

Another, less common, approach is to focus on broad dimensions, such as reliability, believed to be important in consumer choice. The questionnaire items are designed to measure these dimensions.

Principal components factor analysis is often used to map the brands within these dimensions. Brand size adjustments can be made, though many using this approach do not do this.

This is a big topic and I've only had room to mention two simple and popular approaches. There are many more ways, some simple and some complex, for example using a priori "factors" or accounting for consumer heterogeneity in perceptions and response styles.

✴️ @AI_Python_EN
Top 10 movies on data science & machine learning to get your data science dose over the weekend. Let us know which one you enjoyed the most! https://bit.ly/2C1Hwcb

✴️ @AI_Python_EN
Neural Task Graphs: Generalizing to Unseen Tasks from a Single Video Demonstration

By Huang et al.: https://lnkd.in/e3vY6pq

#ComputerVision #PatternRecognition #ArtificialIntelligence #MachineLearning #Robotics

✴️ @AI_Python_EN
TensorFlow 2.0 is the best bet for the deep learning community.

Eager execution for easy prototyping and debugging, along with the advantages of tf.function() (see the sketch after this list),

Distribution strategies for distributed training (multi-node and multi-accelerator, including TPU pods, as well as Kubernetes),

Smoother building, training, and validation with tf.keras and premade Estimators,

Smart deployment (TensorFlow Serving (a TensorFlow library allowing models to be served over HTTP/REST), TensorFlow Lite (TensorFlow's lightweight solution for mobile and embedded devices), TensorFlow.js (enables deploying models in JavaScript environments, such as in a web browser or server-side through Node.js), TensorFlow Hub),

Compatible with TF 1.x (there is also a conversion tool that updates TensorFlow 1.x Python code to use TensorFlow 2.0-compatible APIs, or flags cases where code cannot be converted automatically),

Also great for researchers (Model Subclassing API, automatic differentiation, ragged tensors, TensorFlow Probability, Tensor2Tensor).
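
A minimal sketch of eager execution plus tf.function, assuming TensorFlow 2.x is installed:

import tensorflow as tf

# Eager by default: operations run immediately and return real values
x = tf.constant([[1.0, -2.0]])
print(tf.nn.relu(x))

# tf.function traces the Python function into a graph for faster repeated calls
@tf.function
def dense_relu(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

w = tf.random.normal([2, 3])
b = tf.zeros([3])
print(dense_relu(x, w, b))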

For beginners, TensorFlow: https://lnkd.in/fp3AWKk

#tensorflow #research #deeplearning #pyTorch

✴️ @AI_Python_EN