In Nature Machine Intelligence today: Whetstone, a method for turning general Keras neural networks into spiking neural networks: https://www.nature.com/articles/s42256-018-0015-y.epdf?author_access_token=HIFIT_s3XXRdKKF3DTspd9RgN0jAjWel9jnR3ZoTv0P7sMl50Mvxe5hygHWfkIWjiyJe1kEkFLNBiorlpBWGyE5yRNu7SaSa6rWLAwmUPf1dL47QUigBag24erZ3G6Ue-9ZkZNtWzrZVVkxMrGE8eA%3D%3D
@AI_Python_Arxiv | @AI_Python_EN
I always prefer papers that come with source code, so I can use them directly instead of spending a lot of time reimplementing them. I wrote some code tonight to find arXiv NLP papers with GitHub links, and here is a list of papers from last December. I plan to run it over the past year this weekend. Hope it is useful.
https://kaggle.com/shujian/arxiv-nlp-papers-with-github-link
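In case anyone wants to build a similar list themselves, here is a minimal sketch (not the actual script behind the Kaggle dataset) that queries the public arXiv API for recent cs.CL submissions and keeps the ones whose abstracts mention github.com; it uses only the Python standard library.

import urllib.request
import urllib.parse
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def arxiv_nlp_papers_with_github(max_results=100):
    """Return recent cs.CL papers whose abstracts mention a GitHub link."""
    params = urllib.parse.urlencode({
        "search_query": "cat:cs.CL",
        "start": 0,
        "max_results": max_results,
        "sortBy": "submittedDate",
        "sortOrder": "descending",
    })
    url = "http://export.arxiv.org/api/query?" + params
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())

    papers = []
    for entry in feed.findall(ATOM + "entry"):
        title = entry.findtext(ATOM + "title", "").strip()
        abstract = entry.findtext(ATOM + "summary", "").strip()
        link = entry.findtext(ATOM + "id", "").strip()
        if "github.com" in abstract.lower():
            papers.append((title, link))
    return papers

if __name__ == "__main__":
    for title, link in arxiv_nlp_papers_with_github():
        print(title)
        print("  " + link)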
@AI_Python_Arxiv | @AI_Python_EN
To become a data scientist, is it better to be a generalist or a specialist?
The answer: you need to be both.
There is a very broad set of requirements to work as a data scientist, and you need familiarity with all of them to do the job:
* Data loading
* Data manipulation
* Feature engineering
* Model selection
* Model tuning
* Model evaluation
* Coding
* Visualization
* Report creation
* Presenting
* Business acumen
* Etc
What you also need:
π One area of specialization where you bring unique expertise to the team.
Data science is a team sport, and to build an effective team you need complementary skills that lift the team above the sum of its parts.
Each individual should be able to function on their own, but also contribute a unique skill set to the team.
π Agree or disagree?
#datascience #teams #aspiring #DataScientists
@AI_Python | @AI_Python_Arxiv | @AI_Python_EN
The Matrix Cookbook: one of the most useful reference books out there... if you enjoy taking a deeper dive into Machine Learning algorithms
#MachineLearning #DataScience
http://bit.ly/2rjrQLU
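Not from the post, but as a flavor of what the Cookbook catalogues: it is full of identities like d/dx (x^T A x) = (A + A^T) x, which you can sanity-check numerically in a few lines.

import numpy as np

# Matrix Cookbook-style identity: d/dx (x^T A x) = (A + A^T) x
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
x = rng.standard_normal(5)

analytic = (A + A.T) @ x

# Central finite-difference check of the gradient, one coordinate at a time.
eps = 1e-6
numeric = np.array([
    ((x + eps * e) @ A @ (x + eps * e) - (x - eps * e) @ A @ (x - eps * e)) / (2 * eps)
    for e in np.eye(5)
])

print(np.allclose(analytic, numeric, atol=1e-4))  # True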
@AI_Python | @AI_Python_Arxiv | @AI_Python_EN
Which language should you learn to get into Data Science?
Python.
Ranked no. 1 Machine Learning Language on GitHub.
@AI_Python | @AI_Python_Arxiv | @AI_Python_EN
Amazon Comprehend Medical – Natural Language Processing for Healthcare Customers | Amazon Web Services https://amzn.to/2QJLS0W #AI #DeepLearning #MachineLearning #DataScience
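A rough sketch of calling it from Python with boto3 (my example, not from the announcement; it assumes your AWS credentials are configured and the service is available in your region, and the detect_entities call follows the boto3 documentation as I recall it):

import boto3

# Assumes configured AWS credentials and a region where Comprehend Medical is offered.
client = boto3.client("comprehendmedical", region_name="us-east-1")

text = "Patient reports taking 20 mg of Lipitor daily for high cholesterol."
response = client.detect_entities(Text=text)

for entity in response["Entities"]:
    # Each entity carries the matched text, a category (e.g. MEDICATION), and a confidence score.
    print(entity["Text"], entity["Category"], round(entity["Score"], 3))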
@AI_Python | @AI_Python_Arxiv | @AI_Python_EN
HERE ARE SOME OF THE LATEST BREAKTHROUGHS IN THE NLP SPACE
BERT
https://lnkd.in/fR6p4Ut
Sequence Classification with Human Attention
https://lnkd.in/fen6xB8
Phrase-Based & Neural Unsupervised Machine Translation
https://lnkd.in/fE4CfVF
Probing sentence embeddings for linguistic properties
https://lnkd.in/fHpE3KP
SWAG
https://lnkd.in/fgPSxTG
Deep contextualized word representations
https://lnkd.in/ftMAz-g
Meta-Learning for Low-Resource Neural Machine Translation
https://lnkd.in/fYF5Hsx
Linguistically-Informed Self-Attention for Semantic Role Labeling
https://lnkd.in/fkz8usu
A Hierarchical Multi-task Approach for Learning Embeddings from Semantic Tasks
https://lnkd.in/fGYsEcD
Unanswerable Questions for SQuAD
https://lnkd.in/fddKepX
An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling
https://lnkd.in/fa6a8FJ
Universal Language Model Fine-tuning for Text Classification
https://lnkd.in/fnTzYpw
Improving Language Understanding by Generative Pre-Training
https://lnkd.in/fpA73wA
Dissecting Contextual Word Embeddings: Architecture and Representation
https://lnkd.in/fg6ck7w
Original by TOPBOTS https://lnkd.in/f_8R-8e
@AI_Python | @AI_Python_Arxiv | @AI_Python_EN
Nice article on the differences between the symbolic and imperative APIs in TensorFlow 2.0, in particular the differences between Keras's Sequential, Functional, and Subclassing APIs. If you want to build something quickly without much abstraction, go with the Sequential or Functional API (like plugging together LEGO bricks). Otherwise, go with the Subclassing API, where you think about your models in an object-oriented way, which is what my team and I actually prefer. Very insightful article, definitely check it out! #deeplearning #machinelearning
Article: https://lnkd.in/drDN-NS
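If you haven't seen the three styles side by side, here is a minimal sketch (mine, not from the article) of the same small classifier written with the Sequential, Functional, and Subclassing APIs in tf.keras:

import tensorflow as tf

# 1. Sequential: a linear stack of layers, minimal boilerplate.
sequential_model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# 2. Functional: layers are called on tensors, so non-linear topologies are possible.
inputs = tf.keras.Input(shape=(32,))
x = tf.keras.layers.Dense(64, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
functional_model = tf.keras.Model(inputs, outputs)

# 3. Subclassing: the model is a Python class; the forward pass is imperative code.
class Classifier(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(64, activation="relu")
        self.out = tf.keras.layers.Dense(10, activation="softmax")

    def call(self, inputs):
        return self.out(self.hidden(inputs))

subclassed_model = Classifier()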
@AI_Python | @AI_Python_Arxiv | @AI_Python_EN
Excited to announce StanfordNLP, a natural language processing toolkit for 53 languages with easily accessible pretrained models. It allows you to tokenize, tag, lemmatize, and (dependency) parse many languages, and provides a Python interface to CoreNLP.
https://stanfordnlp.github.io/stanfordnlp/
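Getting started looks roughly like the sketch below (based on the project's quickstart; attribute names follow the 0.x documentation and may differ in other versions):

import stanfordnlp

stanfordnlp.download('en')    # download the English models (run once)
nlp = stanfordnlp.Pipeline()  # build the default English pipeline

doc = nlp("Barack Obama was born in Hawaii. He was elected president in 2008.")

# Tokenized, tagged, lemmatized, dependency-parsed output.
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.lemma, word.upos)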
@AI_Python | @AI_Python_Arxiv | @AI_Python_EN
You will never be a data scientist without knowing #Calculus, #Probability and #InformationTheory.
www.interviews.ai
@AI_Python | @AI_Python_Arxiv | @AI_Python_EN
Shuffling large datasets, have you ever tried that?
Here the author presents an algorithm for shuffling datasets that are too large to fit in memory.
You will learn the following (a minimal sketch of the 2-pass idea is included below):
0. Why shuffle in the first place?
1. A 2-pass shuffle algorithm is tested
2. How to deal with oversized piles
3. Parallelization & more
Link to the article: https://lnkd.in/dZ8-tyJ
Gist on #GitHub with a cool visualization of the shuffle: https://lnkd.in/d8iK8fd
#algorithms #github #datasets #deeplearning #machinelearning
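For reference, here is a minimal in-memory illustration of the 2-pass idea (my own sketch, not the author's code): pass 1 deals records at random into a fixed number of piles, pass 2 shuffles each pile and concatenates them. On a real dataset the piles would be files on disk, so only one pile ever needs to fit in memory.

import random

def two_pass_shuffle(records, num_piles=16, seed=0):
    """Shuffle `records` with the 2-pass strategy."""
    rng = random.Random(seed)

    # Pass 1: deal each record into one of `num_piles` piles uniformly at random.
    piles = [[] for _ in range(num_piles)]
    for record in records:
        piles[rng.randrange(num_piles)].append(record)

    # Pass 2: shuffle each pile individually, then stitch the piles back together.
    shuffled = []
    for pile in piles:
        rng.shuffle(pile)
        shuffled.extend(pile)
    return shuffled

print(two_pass_shuffle(list(range(20))))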
@AI_Python | @AI_Python_Arxiv | @AI_Python_EN
Foundations Built for a General Theory of Neural Networks
"Neural networks can be as unpredictable as they are powerful. Now mathematicians are beginning to reveal how a neural networkβs form will influence its function."
Article by Kevin Hartnett: https://lnkd.in/eZa5eyX
#artificialneuralnetworks #artificalintelligence #deeplearning #neuralnetworks #mathematics
@AI_Python | @AI_Python_Arxiv | @AI_Python_EN
"Neural networks can be as unpredictable as they are powerful. Now mathematicians are beginning to reveal how a neural networkβs form will influence its function."
Article by Kevin Hartnett: https://lnkd.in/eZa5eyX
#artificialneuralnetworks #artificalintelligence #deeplearning #neuralnetworks #mathematics
βοΈ @AI_Python
π£ @AI_Python_Arxiv
β΄οΈ @AI_Python_EN
FREE DATA SCIENCE RESOURCE
After long hours of curating data science content during 2018, http://www.claoudml.co/ now lists a wealth of free resources for learning data science.
You can find it at http://www.claoudml.co/
#technology #business #datascience
@AI_Python | @AI_Python_Arxiv | @AI_Python_EN
Lots of data scientists work primarily with Jupyter notebooks. Have you considered writing your courses in Jupyter?
Jupyter notebooks and Python modules (.py files) are both great tools, but they are good at different things. Notebooks are good for telling a story, walking through an analysis to answer a specific set of questions. A collection of .py files is good when creating a modular codebase, like when writing object-oriented code or building a more extensive set of tools. Modules can get unwieldy if all you're doing is a focused analysis with off-the-shelf tools, and notebooks get tricky to navigate once they contain many functions. Notebooks excel at scripts, modules excel at classes and functions. They each have settings where they shine.
For the End-to-End Machine Learning scenarios we've walked through in our courses so far, the processing is complex enough that it doesn't lend itself well to a linear script. For that reason I've opted to go the module route. But I do so hesitantly because I know how many data scientists really like notebooks. I'll plan to use them in future courses that are a better fit for script-like code.
@AI_Python | @AI_Python_Arxiv | @AI_Python_EN
2019 is the year of Artificial Intelligence.
This is the year that we replace one buzzword with another.
This will allow incompetent organizations that have failed to successfully leverage machine learning and data science in the past to make excuses and get another shot at the whole process...
Unfortunately, these companies that have failed to make the proper investment in data science (and failed to hire at the leadership level first) will just be wasting their money and getting burned again.
If you have the option, try to avoid these buzzword-laden hack shops.
If you're looking for a job, look for companies that have been investing in data science for years and building over time... not constantly rebranding their failing departments and products to look like they have something "fresh."
The companies and individuals that make the long-term investments are the ones that will win out in the end.
Invest in yourself and find a company that invests in data science.
#datascience
@AI_Python | @AI_Python_Arxiv | @AI_Python_EN
The Evolved Transformer
Paper by So et al.: https://lnkd.in/eNZ6ije
#artificialintelligence #MachineLearning #NeuralComputing #EvolutionaryComputing #research
@AI_Python | @AI_Python_Arxiv | @AI_Python_EN
Invertible Residual Networks
Paper by Behrmann et al.: https://lnkd.in/dDnrmhr
#MachineLearning #ArtificialIntelligence #ComputerVision #PatternRecognition
@AI_Python | @AI_Python_Arxiv | @AI_Python_EN
This is a super cool resource: Papers With Code now includes 950+ ML tasks, 500+ evaluation tables (including SOTA results) and 8500+ papers with code. Probably the largest collection of NLP tasks I've seen, including 140+ tasks and 100 datasets. https://paperswithcode.com/sota
@AI_Python | @AI_Python_Arxiv | @AI_Python_EN
YOLOv3 still has the best introduction of any paper I've read so far https://pjreddie.com/media/files/papers/YOLOv3.pdf
@AI_Python | @AI_Python_Arxiv | @AI_Python_EN
I have a fully-funded 4Y PhD position in applied #NLP (in the #IR context) available at Delft University of Technology. Get in touch if you are interested!
https://chauff.github.io/
@AI_Python | @AI_Python_arXiv | @AI_Python_EN
I have an opening for a 4-year PhD position in my ERC Consolidator project DREAM ("distributed dynamic representations for dialogue management") at #ILLC in Amsterdam. Deadline 25 Feb 2019. More details: http://www.illc.uva.nl/NewsandEvents/News/Positions/newsitem/10538/
Please spread the word! #NLProc AmsterdamNLP
@AI_Python | @AI_Python_arXiv | @AI_Python_EN