AI, Python, Cognitive Neuroscience
Robust Re-identification of Manta Rays from Natural Markings by Learning Pose Invariant Embeddings

Moskvyak et al.: https://lnkd.in/eqYaqQD

#ArtificialNeuralNetworks #ComputerVision #PatternRecognition #Technology

✴️ @AI_Python_EN
Counting, division, and taking a logarithm: AI. At least they were honest.

https://qz.com/1563668/lyfts-ipo-filing-highlights-risk-factors-other-companies-dont-mention/

✴️ @AI_Python_EN
"Data Visualization: A practical introduction" A stunning, beautiful, carefully researched, free, online book

https://socviz.co

✴️ @AI_Python_EN
"39 Studies About Human Perception in 30 Minutes" by Kennedy Elliott. An awesome article. If you do any visualization or computer vision work, these are things you need to know (but most people still don't!)

https://medium.com/@kennelliott/39-studies-about-human-perception-in-30-minutes-4728f9e31a73

✴️ @AI_Python_EN
💡 What are the three types of error in an ML model?

👉 1. Bias - error caused by choosing an algorithm that cannot accurately model the signal in the data, i.e. the model is too general or was incorrectly selected. For example, selecting a simple linear regression to model highly non-linear data would result in error due to bias.

👉 2. Variance - error from an estimator being too specific and learning relationships that are specific to the training set but do not generalize well to new samples. Variance can come from fitting too closely to noise in the data, and models with high variance are extremely sensitive to changing inputs. Example: creating a decision tree that splits the training set until every leaf node contains only one sample.

👉 3. Irreducible error - error caused by noise in the data that cannot be removed through modeling. Example: inaccuracy in data collection causes irreducible error.
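
A minimal sketch (illustrative, not from the post) contrasting the first two error types on a toy non-linear problem: an underfit linear model shows high bias, while a fully grown decision tree shows high variance (near-zero training error, much higher test error). The sine-shaped data, noise level, and seed are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=500)  # the noise term is the irreducible error

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [
    ("high bias (linear fit to non-linear data)", LinearRegression()),
    ("high variance (tree grown to 1 sample per leaf)", DecisionTreeRegressor(min_samples_leaf=1)),
]
for name, model in models:
    model.fit(X_tr, y_tr)
    train_mse = mean_squared_error(y_tr, model.predict(X_tr))
    test_mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{name}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```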

#datascience

✴️ @AI_Python_EN
A bird's-eye view of optimization algorithms

By Fabian Pedregosa: https://lnkd.in/d9cXkVZ

#ArtificialIntelligence #NeuralNetworks

✴️ @AI_Python_EN
Brand image can be studied and used in many ways.

One of the simplest is to ask respondents in a consumer survey to rate brands they know according to attributes thought to reflect the intended positionings of the brands.

This is often tracked over time. Often a simple yes/no "pick any" grid is used in the questionnaire, though ratings on 5-point scales are also common. Correspondence analysis (CA) is frequently employed to "map" the brands and attributes in 2-3 dimensions.

CA will also reduce the effect of brand size, though many users do not seem to know this. Distances on the map are also often interpreted very literally, which is a mistake.

Another, less common, approach is to focus on broad dimensions, such as reliability, believed to be important in consumer choice. The questionnaire items are designed to measure these dimensions.

Principal components factor analysis is often used to map the brands within these dimensions. Brand size adjustments can be made, though many using this approach do not do this.

This is a big topic and I've only had room to mention two simple and popular approaches. There are many more ways, some simple and some complex, for example using a priori "factors" or accounting for consumer heterogeneity in perceptions and response styles.
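
For readers who want to see the mechanics behind the correspondence analysis "map" mentioned above, here is a minimal, library-free sketch of CA on a hypothetical brand x attribute "pick any" count table. The counts are invented for illustration; R and Python CA packages implement the same idea.

```python
import numpy as np

# Hypothetical counts: rows = brands, columns = image attributes.
N = np.array([[120,  40,  80],
              [ 60,  90,  30],
              [ 30,  20, 100]], dtype=float)

P = N / N.sum()                        # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)    # row (brand) and column (attribute) masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates divide out the masses, which is why CA de-emphasizes
# sheer brand size when placing brands and attributes on the map.
row_coords = (U * sv) / np.sqrt(r)[:, None]
col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]
print(row_coords[:, :2])  # brand positions on the first two dimensions
print(col_coords[:, :2])  # attribute positions on the first two dimensions
```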

✴️ @AI_Python_EN
Top 10 movies on data science & machine learning for you to get a data science dose over the weekend. Let us know which one you enjoyed the most! https://bit.ly/2C1Hwcb

✴️ @AI_Python_EN
Neural Task Graphs: Generalizing to Unseen Tasks from a Single Video Demonstration

By Huang et al.: https://lnkd.in/e3vY6pq

#ComputerVision #PatternRecognition #ArtificialIntelligence #MachineLearning #Robotics

✴️ @AI_Python_EN
TensorFlow 2.0 is the best bet for the deep learning community.

Eager execution for easy prototyping and debugging, along with the tf.function() advantage.

Distribution strategies for distributed training (multi-node and multi-accelerator, including TPU pods, and Kubernetes).

Smoother building, training, and validation with tf.keras and premade Estimators.

Smart deployment: TensorFlow Serving (a TensorFlow library allowing models to be served over HTTP/REST), TensorFlow Lite (TensorFlow's lightweight solution for mobile and embedded devices), TensorFlow.js (enables deploying models in JavaScript environments, such as in a web browser or server side through Node.js), and TensorFlow Hub.
Compatible with TF 1.x (including a conversion tool that updates TensorFlow 1.x Python code to use TensorFlow 2.0-compatible APIs, or flags cases where code cannot be converted automatically).

Also great for researchers (Model Subclassing API, automatic differentiation, ragged tensors, TensorFlow Probability, Tensor2Tensor).

For beginners, TensorFlow: https://lnkd.in/fp3AWKk
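
A minimal sketch of the eager-execution + tf.function + tf.keras workflow listed above, assuming TensorFlow 2.0 (or later) is installed; the toy data and tiny model are invented for illustration.

```python
import numpy as np
import tensorflow as tf

# Eager execution is on by default in TF 2.0: ops run immediately, no Session needed.
x = tf.constant([[1.0, 2.0]])
print(tf.matmul(x, tf.transpose(x)))

# tf.function traces Python code into a graph for speed and deployment.
@tf.function
def squared_error(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

# Smoother building/training/validation with tf.keras.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(256, 4).astype("float32")
y = X.sum(axis=1, keepdims=True)
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.2, verbose=0)
print(squared_error(tf.constant(y), model(tf.constant(X))).numpy())
```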

#tensorflow #research #deeplearning #pyTorch

✴️ @AI_Python_EN
Machine learning with no code? It's all possible thanks to tools like Uber's Ludwig, Azure, and a few others that I'll demonstrate in this video: https://lnkd.in/gagQEfD
Machine Learning with No Code

✴️ @AI_Python_EN
Survey data are sometimes criticized for not being "real data" - just what people say. There are numerous ways to respond to this, for example:

By that logic, social media data and customer correspondence are not "real data" either.

How real are the digital segments we use?

Even when customer records have been thoroughly cleaned, how clean are they, really? Even when "clean enough" they only show us part of the picture, unless we've been hacking our competitors.

Survey professionals, including scholars and government researchers, have long known that survey data are not exact measurements and usually should be interpreted directionally.

Most importantly, many consumer surveys are concerned with attitudes and opinions, which cannot be measured precisely. If someone says they don't like our snack food brand because it's too salty, do we disregard this?

We should also remember that two people can do the same things for the same reasons, the same things for different reasons, different things for the same reasons, different things for different reasons, and that what we do and why we do it is usually not constant by product category or over time.

Survey research can't do everything, but who ever said it could?

✴️ @AI_Python_EN
Have you ever used a Jupyter notebook? If so, you know it is a pleasure to use for interactive programming. If not, you should try it! Or perhaps you are a C++ programmer who thinks Jupyter notebooks are not for you. But wait: imagine our joy when we came across the Xeus-Cling kernel! What does it do?
https://lnkd.in/gKAazmn

✴️ @AI_Python_EN
It seems like controversial papers or talks in the press about AI being unfair, incomplete or biased are fashionable these days.

We came across this paper which suggests that #selfdrivingcars are more likely to hit a black or dark skinned person.

Here is the paper, "Predictive Inequity in Object Detection": https://lnkd.in/ebHbP6f

What do you think?

Are algorithms biased / trained on insufficient data? What can be done to solve this problem?
Photo credit: https://lnkd.in/eEqS39J
#algorithms #deeplearning #ai

✴️ @AI_Python_EN
Launching TensorFlow Lite for Microcontrollers

By Pete Warden: https://lnkd.in/ejcJMVn

#artificialintelligence #deeplearning #microcontrollers #tensorflow

✴️ @AI_Python_EN
OpenAI has created activation atlases (in collaboration with Google researchers), a new technique for visualizing what interactions between neurons can represent.

As AI systems are deployed in increasingly sensitive contexts, having a better understanding of their internal decision-making processes will let us identify weaknesses and investigate failures.

Blog: https://lnkd.in/d4i6xQC
Paper: https://lnkd.in/dGNcd4K
Github: https://lnkd.in/d-2WhfN
Demo: https://lnkd.in/dBiHZv3

#deeplearning #research

✴️ @AI_Python_EN
Alibaba Group's new work on BERT for intent classification and slot filling

https://arxiv.org/abs/1902.10909v1
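
A minimal sketch of the general idea (a shared BERT encoder with an intent head on the pooled [CLS] vector and a per-token slot-tagging head). This is not the paper's code; it assumes PyTorch and the Hugging Face transformers package are installed, and the intent/slot label counts are invented for illustration.

```python
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class JointIntentSlotModel(nn.Module):
    def __init__(self, num_intents=7, num_slot_labels=20):  # hypothetical label counts
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        self.intent_head = nn.Linear(hidden, num_intents)    # sentence-level intent
        self.slot_head = nn.Linear(hidden, num_slot_labels)  # per-token slot tags

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(out.pooler_output)   # (batch, num_intents)
        slot_logits = self.slot_head(out.last_hidden_state)   # (batch, seq_len, num_slot_labels)
        return intent_logits, slot_logits

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
batch = tokenizer(["book a flight to boston tomorrow"], return_tensors="pt")
model = JointIntentSlotModel()
intent_logits, slot_logits = model(batch["input_ids"], batch["attention_mask"])
print(intent_logits.shape, slot_logits.shape)
```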

✴️ @AI_Python_EN
How can AI become biased? 2 papers investigate:

Joy Buolamwini shows that AI has a higher error rate when recognizing darker-skinned female faces: http://bit.ly/2C2pxT9

IBM responds to their paper, explaining how they reduced that error: http://bit.ly/2C82u9n #TechRec

✴️ @AI_Python_EN
Data science is not about memorization, it's about making connections and applying your knowledge.

If you can understand a lot of different materials and subjects and see how they connect, then you'll naturally remember these subjects better. This is simply how the brain is structured - everything is connected to something else.

➡️ More connections = more retention.

Connect new information with long-term memories and then reinforce those connections to create new long-term memories.

And begin to apply your knowledge to create something new and you will not only increase your understanding of the subject, but you'll increase your retention as well because the information will have context.

Ironically, this approach is much more effective than *trying* to memorize material.

👉 So if you're ever wondering "should I memorize this," just reframe your perspective to ask:

• how does this information relate to what I already know?
• how can I apply this to a problem I'm facing?
• what can I create with this new knowledge?
• who can I teach this new subject to?

and the ability to remember what you've learned will take care of itself.

#datascience #learning

✴️ @AI_Python_EN
What is the fastest LSTM implementation?!
(cuDNNLSTM)

"an informed choice of deep learning framework and LSTM implementation may increase training speed by up to 7.2x on standard input sizes from ASR"
https://lnkd.in/ea66qyn
https://lnkd.in/eTFXN6Z
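
A minimal sketch of how this shows up in tf.keras (TF 2.x on a GPU): the fused cuDNN kernel is used only when the LSTM layer keeps its cuDNN-compatible settings (default activations, recurrent_dropout=0, unroll=False, etc.), and changing one of them, as below, falls back to the slower generic kernel. The toy data and timing comparison are illustrative only, not the benchmark from the linked paper.

```python
import time
import numpy as np
import tensorflow as tf

X = np.random.rand(256, 100, 64).astype("float32")  # (batch, timesteps, features)
y = np.random.rand(256, 1).astype("float32")

def build(recurrent_activation):
    # With recurrent_activation="sigmoid" and other defaults, tf.keras can use the fused cuDNN kernel.
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(128, recurrent_activation=recurrent_activation,
                             input_shape=(100, 64)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

for name, act in [("cuDNN-eligible (sigmoid)", "sigmoid"),
                  ("generic kernel (hard_sigmoid)", "hard_sigmoid")]:
    model = build(act)
    start = time.time()
    model.fit(X, y, epochs=2, batch_size=64, verbose=0)
    print(f"{name}: {time.time() - start:.1f}s")
```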

✴️ @AI_Python_EN