AI, Python, Cognitive Neuroscience
Large Pose 3D Face Reconstruction from a Single Image via Direct Volumetric CNN Regression
Article
Code
Online Demo
🗣 @AI_Python_arXiv
✴️ @AI_Python_EN
❇️ @AI_Python
Project Jupyter notebook server running on Home Assistant (Hass.io) on a #Raspberry_Pi, viewed in the iOS app on my Apple iPhone. What a time to be alive!

Based on HackerRank's 2018 Developer Survey, #Javascript, #Java, and #Python stand out as the top three expected programming languages, but what comes next is more important: being language agnostic!

This matters especially in #DataScience and #MachineLearning, where we constantly pit R against Python. With the market expecting language-agnostic developers, it's good to have both languages at your disposal.

The screenshot is from a gender-focused #Kaggle kernel I did some time back: https://lnkd.in/fXCDHjv

AndrewYNg from Landing AI sharing his thoughts on #AI & #MachineLearning.

https://www.swarmapp.com/c/kLTdYT7cXAO

"Godel Machines, Meta-Learning, and LSTMs" - interview with Juergen Schmidhuber

Juergen Schmidhuber is the co-creator of long short-term memory networks (LSTMs), which are used in billions of devices today for speech recognition, translation, and much more. Over 30 years, he has proposed many interesting, out-of-the-box ideas in artificial intelligence, including a formal theory of creativity. This conversation is part of the Artificial Intelligence podcast and the MIT course 6.S099: Artificial General Intelligence. The conversation and lectures are free and open to everyone.

#MachineLearning #AI

https://youtu.be/3FIo6evmweo

High hopes for 2019

#MachineLearning

Deep Latent-Variable Models for Natural Language

Tutorial by Kim et al.: https://lnkd.in/eUHDAnP

#NLP #pytorch #unsupervisedlearning
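
One building block that appears throughout these models is a reparameterized Gaussian latent variable with a KL regularizer. Here is a minimal PyTorch sketch of just that piece (shapes and names are illustrative, not taken from the tutorial):

```python
import torch

def reparameterize(mu, logvar):
    """Draw z ~ N(mu, sigma^2) in a way that keeps gradients flowing."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

def gaussian_kl(mu, logvar):
    """KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions."""
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)

# Placeholder encoder outputs: a batch of 8 sentences, 16 latent dims.
mu, logvar = torch.zeros(8, 16), torch.zeros(8, 16)
z = reparameterize(mu, logvar)            # pass z to the decoder
kl_term = gaussian_kl(mu, logvar).mean()  # add to the reconstruction loss
```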


Right now, there's a better investment than learning data science.

Doing data science.

I don't mean you should ignore learning data science altogether. There are some awesome MOOCs, articles, books, videos, and resources out there, but what I really want to emphasize is that you start doubling down on creating more data science projects.

A great data science project is one where:
- You treat your skill set like an investment portfolio.
- You learn a concept and immediately apply what you just learned.
- You take the time to document, work out examples, and build a toy project.
- You keep doing this hands-on style of learning, and it keeps you thinking about how to improve.

And you don't have to start out with the latest and greatest data science project.

You can just start with an idea and let your curiosity guide you along the way. When you get stuck, figure out how to solve it and keep progressing. All great work begins with something small.

Do a project -> Put it on GitHub -> Share your work -> Get feedback -> Improve & Repeat

If you do this, there’s no telling how much you’re going to improve.

Sooner or later you're going to build practical skill that no other learning resource can teach you. 🙂

#datascience #machinelearning

Why Does Batch Normalization Work?

Blog by Abay Bektursun: https://lnkd.in/eR3cVjm

#BatchNormalization #DeepLearning #MachineLearning
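
Before diving into the blog post, the one-formula version: batch normalization standardizes each feature over the mini-batch and then re-scales it with learned parameters. A minimal NumPy sketch of the training-time forward pass (names and the epsilon value are illustrative, not from the post):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Training-time batch norm over a (batch, features) array."""
    mu = x.mean(axis=0)                     # per-feature mini-batch mean
    var = x.var(axis=0)                     # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # standardize each feature
    return gamma * x_hat + beta             # learned scale and shift

x = np.random.randn(32, 4)                  # toy mini-batch
out = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))  # ~0 and ~1
```

At inference time, running estimates of the mean and variance are used in place of the batch statistics.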

The major advancements in Deep Learning in 2018

Blog by Javier Couto: https://lnkd.in/erT7Uq9

#deeplearning #machinelearning #transferlearning

GAN – LSGAN (How to be a good helper?) – Jonathan Hui – Medium

#deeplearning #machinelearning #transferlearning

Link Review
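
The short version: LSGAN swaps the usual cross-entropy GAN objective for least-squares losses, which also penalize samples that sit far from the decision boundary even when they are classified correctly. A minimal PyTorch sketch of the two losses using the common 0/1 target coding (the networks `D` and `G` are assumed to exist; this is not the article's code):

```python
import torch

def lsgan_d_loss(D, G, real, z):
    """Discriminator: push D(real) toward 1 and D(G(z)) toward 0."""
    d_real = D(real)
    d_fake = D(G(z).detach())            # detach so G gets no gradient here
    return 0.5 * ((d_real - 1).pow(2).mean() + d_fake.pow(2).mean())

def lsgan_g_loss(D, G, z):
    """Generator: push D(G(z)) toward the 'real' target 1."""
    d_fake = D(G(z))
    return 0.5 * (d_fake - 1).pow(2).mean()
```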

Influence Attacks on Machine Learning

https://youtu.be/zgAzCVk3qgQ
Mark Sherman explains how #deeplearning is playing an increasing role in developing new applications and how adversaries can attack machine learning systems in a variety of ways.
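
As one concrete flavor of attack (an evasion attack on a trained model, distinct from the training-time influence attacks of the title), here is a hedged PyTorch sketch of the fast gradient sign method, which nudges an input in the direction that increases the model's loss:

```python
import torch

def fgsm_attack(model, loss_fn, x, y, eps=0.03):
    """Return an adversarially perturbed copy of x (untargeted FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step in the sign of the input gradient to maximally increase the loss.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()     # keep pixels in a valid range
```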

He's making a training set, he's checking it twice, running inference to figure out who's been naughty or nice, Santa Claus is cooommmming to town. – Professor Reza Zadeh

#machinelearning

Interested in understanding neural networks for #NLProc? Looking for reading material for your winter break? Check out our new paper, "Analysis Methods in Neural Language Processing: A Survey", to appear in TACL.
Preprint: https://arxiv.org/abs/1812.08951
Website: https://boknilev.github.io/nlp-analysis-methods


#machinelearning

Our #Bayesian Optimization framework can intelligently trade off experiments with varying cost & fidelity. We achieve strong regret bounds as well as state-of-the-art performance on multiple real-world #datasets! Preprint: https://arxiv.org/abs/1811.00755v1


#machinelearning
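
For context, a plain single-fidelity Bayesian optimization loop looks like the sketch below: a GP surrogate plus an expected-improvement acquisition. The preprint extends this kind of loop so each query can also trade off cost and fidelity; the code here is a generic illustration using scikit-learn, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                        # toy 1-D function to minimize
    return np.sin(3 * x) + 0.1 * x ** 2

def expected_improvement(mu, sigma, best):
    z = (best - mu) / np.maximum(sigma, 1e-9)
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

X = np.random.uniform(-2, 2, size=(3, 1))        # initial design
y = objective(X).ravel()
grid = np.linspace(-2, 2, 400).reshape(-1, 1)    # candidate points

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])                   # evaluate and add the new point
    y = np.append(y, objective(x_next))

print("best x:", X[np.argmin(y)].item(), "best value:", y.min())
```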

Butterflies, only 32 grams each, including two servos, a pair of small batteries, & a laser-made casing. Soon these butterflies achieve superbutterfly intelligence, control the world's nectar supply, & tile the universe with their eggs. https://www.festo.com/group/en/cms/10216.htm

https://twitter.com/Reza_Zadeh/status/1077294105137360896

#machinelearning

A structural transition in physical networks

https://www.nature.com/articles/s41586-018-0726-6

Deep learning made easier with transfer learning http://bit.ly/2PZD7et #AI #DeepLearning #MachineLearning #DataScience
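
The recipe behind that headline is usually: take a network pretrained on a large dataset, freeze it, and train only a new head on your task. A minimal PyTorch/torchvision sketch (assumes a recent torchvision; the 10-class head and learning rate are placeholders):

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Load an ImageNet-pretrained backbone and freeze its weights.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the final classifier with a fresh layer for the new task.
model.fc = nn.Linear(model.fc.in_features, 10)   # 10 target classes (placeholder)

# Only the new head's parameters are optimized.
optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
```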

A deep thread that's worth a read for anyone interested in #deeplearning. (And we just launched a Deep Learning Fundamentals course, by the way: http://bit.ly/2SXnrtT)

Explain More:
Regulations, arguably, should not be based on detailed understanding of how AI systems work (which the regulators can't have in any depth). However, AI systems need to be able to explain decisions in terms that humans can understand, if we are to consider them trustworthy. Not explanations involving specifics of the algorithms, weights in a neural network, etc., but explanations that engage people's theories of mind, explanations at the level of Dennett's intentional stance - in terms of values, goals, plans, and intentions.
Previous computer systems, to be comprehensible and, yes, trustworthy, needed to consistently present behavior that fit people's natural inferences about physical models (e.g., the "desktop"). Anyone old enough to remember programming VCRs? Nerdview is a failure of explanation.
AI systems will need to engage not the mind's physics inference, but its *social* inference (theory of mind). AI systems should behave as minds, and explain their behavior as minds. They must have failure modes predictable and comprehensible like human ones (or physical ones).
The counterargument of "How does a system explain how it decided that a stop sign was there? By listing network weights in a perceptual model?!" is a red herring. If the perceptual system is accurate enough in human terms (i.e., without crazy error modes), it can just say "I saw a stop sign," just like a human would. Full stop. More complex decisions, like swerving to avoid a bicyclist and then hitting a pedestrian, would require more complex explanations involving values, goals, and intentions, as well as perception ("I didn't see the guy").
Saying that we attain trustworthiness of AI systems just based on experimental performance metrics, however rigorous and comprehensive, is utterly misguided. Measurable performance is necessary, but not at all sufficient, for (at least) two reasons.
First, it is virtually impossible for a non-specialist to evaluate the sufficiency of an experimental methodology or the significance of the results. It's very easy to create misleading experiments, even without intending to. So this can only create trust among the credulous.
Second, experimental performance guarantees do nothing to develop trust between an AI system and the humans it interacts with - that trust must develop through the interaction, which must therefore be comprehensible and explainable.
In short, to attain trustworthiness, AI systems must be able to form *relationships* with people. Not necessarily deep relationships, but still they must engage with the human mind's systems of social connection.


Kaggle kernels that Researcher has published:

1. Time Series Analysis - Artificial Neural Networks - https://lnkd.in/f8diQkX

2. Titanic - Data Preprocessing and Visualization - https://lnkd.in/fwrvHr5

3. Everything you can do with Seaborn - https://lnkd.in/fpgQCr8

4. Insights of Kaggle ML and DS Survey - https://lnkd.in/fPyiGyU

5. Time Series Analysis - ARIMA model - https://lnkd.in/fn24ihz

6. Time Series Analysis - LSTM - https://lnkd.in/fuY6DXm

7. Introduction to Regression - Complete Analysis - https://lnkd.in/fM3xsZ2

8. Time Series - Preprocessing to Modelling - https://lnkd.in/fJcar4u

The Kaggle community is one of the best communities for data science.
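
If you want a taste of the time-series kernels (items 5 and 8) before clicking through, fitting an ARIMA model with statsmodels comes down to a few lines. The series and the (p, d, q) order below are placeholders, not what the kernels use:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Toy monthly series: trend plus noise (placeholder data).
idx = pd.date_range("2015-01-01", periods=60, freq="MS")
series = pd.Series(np.linspace(10, 30, 60) + np.random.randn(60), index=idx)

model = ARIMA(series, order=(1, 1, 1))    # (p, d, q) chosen for illustration
fitted = model.fit()
print(fitted.summary())
print(fitted.forecast(steps=12))          # 12-month-ahead forecast
```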

#machinelearning #artificialintelligence #datascience #deeplearning #data
