Amazing. Train a network to classify papers (accept/reject). Then run the network on the paper describing the network, and it classifies the paper as a strong reject. This is why we can't have nice paper classifiers.
https://arxiv.org/abs/1812.08775
@AI_Python_EN
@AI_Python_arXiv
@AI_Python
Names for collections of code in various languages:
A pile of JavaScript
A crystal of Haskell
An undefinedness of C++
A liability of Python
A French grad student of OCaml
An ambition of Rust
A bank of COBOL
A postmodernism of Perl
An accident of C
A Unabomber of Forth
9,216 IBM Power9 CPUs and 27,648 Nvidia Volta GPUs: this #supercomputer performs 200 quadrillion calculations per second, and the #USA tops #China for the world's fastest #computer. #AI #DataScience #DataAnalytics #IoT #BigData
http://bit.ly/2sSORWi
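A quick back-of-the-envelope check on those numbers, under the rough assumption that essentially all of the 200 petaflops comes from the GPUs:

```python
# 200 quadrillion calculations per second = 200 petaflops
total_flops = 200e15
gpus = 27648  # Nvidia Volta GPUs in the machine

per_gpu = total_flops / gpus
print(f"{per_gpu / 1e12:.1f} teraflops per GPU")  # roughly 7.2
```

That lands in the right ballpark for a Volta-class GPU's double-precision throughput, so the headline figure is internally consistent.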
The FEYNMAN technique of learning:
STEP 1 - Pick and study a topic
STEP 2 - Explain the topic to someone unfamiliar with it, as if to a child
STEP 3 - Identify any gaps in your understanding
STEP 4 - Review and Simplify!
- Richard Feynman
An Amoeba-Based Computer Calculated Approximate Solutions to a Very Hard Math Problem
Article by Daniel Oberhaus: https://lnkd.in/eHJRTBS
#biocomputers
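The "very hard math problem" in the article is the traveling salesman problem (the article describes an eight-city instance). For scale, here is a brute-force sketch on a made-up four-city distance matrix; the matrix values are invented for illustration:

```python
from itertools import permutations

# Invented symmetric distance matrix for four cities.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
n = len(dist)

def tour_length(tour):
    # length of the closed tour visiting every city exactly once
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

best = min(permutations(range(n)), key=tour_length)
print(best, tour_length(best))  # shortest closed tour has length 18
```

Exact search over n cities costs O(n!) tours, which is why approximate solvers, amoeba-based or otherwise, are interesting even for modest n.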
The Unreasonable Effectiveness of Recurrent Neural Networks
Blog (2015) by Andrej Karpathy: https://lnkd.in/eNC7BK5
#DeepLearning #NeuralNetworks #RecurrentNeuralNetworks #RNN
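Karpathy's post builds everything on a single recurrence; a minimal sketch of one vanilla-RNN step in numpy (sizes and weight scale are arbitrary here, and the bias term is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 8, 16  # arbitrary sizes for illustration

W_xh = rng.normal(size=(n_hid, n_in)) * 0.1   # input -> hidden
W_hh = rng.normal(size=(n_hid, n_hid)) * 0.1  # hidden -> hidden (the recurrence)

def rnn_step(x, h):
    # h_t = tanh(W_hh @ h_{t-1} + W_xh @ x_t)
    return np.tanh(W_hh @ h + W_xh @ x)

h = np.zeros(n_hid)
for t in range(10):  # unroll over a sequence of random inputs
    h = rnn_step(rng.normal(size=n_in), h)
print(h.shape)  # (16,)
```

The hidden state `h` is the network's only memory of the sequence so far, which is what makes the generated-text results in the post so striking.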
Can Neural Networks Remember?
Slides by Vishal Gupta: https://lnkd.in/e_EUYGv
#RecurrentNeuralNetworks #LongShortTermMemory #LSTM #neuralnetworks
Understanding LSTM Networks
By Christopher Olah: https://lnkd.in/eWJkwp3
#DeepLearning #LSTM #RecurrentNeuralNetworks
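Olah's post walks through the four gates one diagram at a time; a compact numpy sketch of a single LSTM step, with all gate parameters stacked into one matrix (sizes and initialization are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4  # arbitrary sizes for illustration

# Stacked parameters for the four gates: forget, input, output, candidate.
W = rng.normal(size=(4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    z = W @ np.concatenate([x, h]) + b
    f, i, o, g = np.split(z, 4)
    f, i, o, g = sigmoid(f), sigmoid(i), sigmoid(o), np.tanh(g)
    c_new = f * c + i * g        # cell state: gated, long-lived memory
    h_new = o * np.tanh(c_new)   # hidden state exposed to the next layer
    return h_new, c_new

h = c = np.zeros(n_hid)
for t in range(5):
    h, c = lstm_step(rng.normal(size=n_in), h, c)
print(h.shape, c.shape)  # (4,) (4,)
```

The additive update of `c` (rather than repeated matrix multiplication) is the design choice the post highlights as the reason gradients survive long sequences.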
Best of arXiv.org for AI, Machine Learning, and Deep Learning
November 2018
November 2017
July 2018
April 2018
June 2018
September 2018
October 2018
August 2018
#DeepLearning #MachineLearning #AI #ArtificialIntelligence
Wanna see progress of a long-running operation easily in your Jupyter notebook? Use the wonderful tqdm module: https://github.com/tqdm/tqdm#ipython-jupyter-integration. As a bonus, the name is Arabic and Spanish inspired! (via @JupyterProject on Twitter)
Mona Jalal said: tqdm stems from تقدم (taqaddum), which means "progress"
#python
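A minimal example; `tqdm.auto` picks the notebook widget when run inside Jupyter and falls back to a plain text bar elsewhere:

```python
from tqdm.auto import tqdm

total = 0
for i in tqdm(range(100), desc="working"):
    total += i  # stand-in for real work
print(total)  # 4950
```

Wrapping any iterable is enough; `trange(n)` is also available as shorthand for `tqdm(range(n))`.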
Eirikur Agustsson, Research Scientist at Google:
This paper on how to properly interpolate samples from GANs and VAEs has been accepted to ICLR 2019!
Paper: Optimal Transport Maps For Distribution Preserving Operations on Latent Spaces of Generative Models (https://openreview.net/forum?id=BklCusRct7&noteId=BklCusRct7)
TL;DR: Stop using linear interpolation!
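A quick numpy illustration of why linear interpolation misbehaves: samples from a high-dimensional Gaussian prior concentrate on a shell of radius about sqrt(d), but the linear midpoint of two samples lands well inside that shell, so interpolated points follow a different distribution than the one the generator was trained on. Spherical interpolation, sketched below, is one common fix; the paper derives distribution-preserving operators via optimal transport more generally.

```python
import numpy as np

d = 512
rng = np.random.default_rng(0)
z0, z1 = rng.normal(size=d), rng.normal(size=d)

# Samples from N(0, I) have norm close to sqrt(d) ~ 22.6 ...
mid_lerp = 0.5 * (z0 + z1)  # ... but the lerp midpoint shrinks to ~ sqrt(d/2) ~ 16

def slerp(a, b, t):
    # interpolate along the great circle between a and b
    omega = np.arccos(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

mid_slerp = slerp(z0, z1, 0.5)
print(round(np.linalg.norm(z0), 1),
      round(np.linalg.norm(mid_lerp), 1),
      round(np.linalg.norm(mid_slerp), 1))
```

The slerp midpoint stays near the prior's shell, which is the property the paper generalizes beyond midpoints to arbitrary latent-space operations.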
When ML has no common sense
#ML #MachineLearning
AI, Python, Cognitive Neuroscience
Could you also consider taking a look at "fastprogress", our recent replacement for tqdm, which has some nice extra features (see the readme) and avoids some of tqdm's bugs: https://t.co/QflMyWcUTE
Forwarded from AI, Python, Cognitive Neuroscience (Meysam Asgari)
If you like our channel, I invite you to share it with your friends:
Our channel in English: @AI_Python_EN
Our daily arXiv channel: @AI_Python_Arxiv
BTW: Thank you for joining :)
Large Pose 3D Face Reconstruction from a Single Image via Direct Volumetric CNN Regression
Article
Code
Online Demo
ProjectJupyter notebook server running on home_assistant Hassio on a #Raspberry_Pi, viewed in the iOS app on my Apple iPhone. What a time to be alive!
Based on HackerRank's 2018 Developer Skills survey, #Javascript, #Java, and #Python stand out as the top three expected programming languages, but what's next is more important: being language agnostic!
This is very important, especially in #DataScience and #MachineLearning, where we always put R vs. Python; but with the market expecting language-agnostic developers, it's good to have both languages at your disposal. The screenshot is from a gender-focused #Kaggle kernel I did some time back: https://lnkd.in/fXCDHjv
Andrew Ng from LandingAI sharing his thoughts on #AI and #MachineLearning.
https://www.swarmapp.com/c/kLTdYT7cXAO
"Gödel Machines, Meta-Learning, and LSTMs" - an interview with Juergen Schmidhuber
Juergen Schmidhuber is the co-creator of long short-term memory networks (LSTMs), which are used in billions of devices today for speech recognition, translation, and much more. Over 30 years, he has proposed many interesting, out-of-the-box ideas in artificial intelligence, including a formal theory of creativity. This conversation is part of the Artificial Intelligence podcast and the MIT course 6.S099: Artificial General Intelligence. The conversation and lectures are free and open to everyone.
#MachineLearning #AI
https://youtu.be/3FIo6evmweo
Deep Latent-Variable Models for Natural Language
Tutorial by Kim et al.: https://lnkd.in/eUHDAnP
#NLP #pytorch #unsupervisedlearning
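At the core of these models is the reparameterization trick plus an analytic KL term for the Gaussian case; a tiny numpy sketch (dimensions and values arbitrary, not taken from the tutorial itself):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # latent dimension, arbitrary

# Stand-ins for encoder outputs: mean and log-variance of q(z|x).
mu = rng.normal(size=d) * 0.1
log_var = rng.normal(size=d) * 0.1

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
# so gradients can flow through mu and log_var during training.
eps = rng.normal(size=d)
z = mu + np.exp(0.5 * log_var) * eps

# Analytic KL(q(z|x) || N(0, I)) for a diagonal Gaussian.
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
print(z.shape, kl)
```

The KL term is zero exactly when the posterior matches the standard-normal prior (mu = 0, log_var = 0), and positive otherwise; the tutorial covers how this plugs into the ELBO for text models.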