If you enjoy maintaining a well-organized bookmark list for the work you do, e.g. #datascience or #machinelearning, then you can easily share those resources with everyone in the world.
Just export your bookmarks to an HTML file -> use an #HTML-to-#markdown converter -> paste that Markdown text into the README.md of a #Github repo and you are done.
If you have a good collection, I bet your GitHub repo will attract stars in no time.
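If you'd rather script the conversion step, here is a minimal Python sketch (the file names are placeholders and BeautifulSoup is just one option; any off-the-shelf HTML-to-Markdown converter works too):

```python
# Pull every link out of a browser bookmark export and write a
# Markdown bullet list ready to paste into a README.md.
from bs4 import BeautifulSoup  # pip install beautifulsoup4

with open("bookmarks.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "html.parser")

with open("README.md", "w", encoding="utf-8") as out:
    for a in soup.find_all("a", href=True):
        title = a.get_text(strip=True) or a["href"]
        out.write(f"- [{title}]({a['href']})\n")
```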
Here is mine (there are many repetitions but I will clean them up in a few weeks)
https://github.com/tirthajyoti/Data-science-best-resources
🗣 @AI_Python_Arxiv
✴️ @AI_Python_EN
January 20, 2019
Shuffling large datasets, have you ever tried that?
Here the author presents an algorithm for shuffling large datasets.
Here you will learn the following:
0. Why shuffle in the first place?
1. A 2-pass shuffle algorithm is tested
2. How to deal with oversized piles
3. Parallelization & more
Link to article: https://lnkd.in/dZ8-tyJ
Gist on #Github with a cool visualization of the shuffle: https://lnkd.in/d8iK8fd
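To make the 2-pass idea concrete, here is a minimal Python sketch (my illustration, not the author's code; the function name, pile count, and newline-delimited records are all assumptions):

```python
import random
import tempfile

def two_pass_shuffle(input_path, output_path, num_piles=100, seed=None):
    """Shuffle a file too large for memory in two streaming passes."""
    rng = random.Random(seed)
    piles = [tempfile.TemporaryFile(mode="w+") for _ in range(num_piles)]

    # Pass 1: deal each record into a uniformly random pile.
    with open(input_path) as f:
        for line in f:
            piles[rng.randrange(num_piles)].write(line)

    # Pass 2: shuffle each pile in memory, then concatenate.
    # (If a pile is still too big -- an "oversized pile" -- the
    # article discusses handling it, e.g. by recursing.)
    with open(output_path, "w") as out:
        for pile in piles:
            pile.seek(0)
            lines = pile.readlines()
            rng.shuffle(lines)
            out.writelines(lines)
            pile.close()
```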
#algorithms #github #datasets #deeplearning #machinelearning
❇️ @AI_Python
🗣 @AI_Python_Arxiv
✴️ @AI_Python_EN
February 1, 2019
Awesome news for beginners in #MachineLearning and #DeepLearning
We've all come to love Dr. Gilbert Strang's Linear Algebra lectures from MIT. But his books can be expensive for students and are not always easy to get hold of.
Now Stanford University has changed all that by releasing a free book, "Introduction to Applied Linear Algebra," written by Stephen Boyd and Lieven Vandenberghe.
Go get them all on my #Github page; I will create some beginner lectures and #Python & #Julia notebooks there soon.
Root / main folder: https://lnkd.in/de8uepd
1. The 473-page book itself: https://bit.ly/2tjFNdA
2. The lovely 170-page Julia language companion: https://bit.ly/2BxYGy0
3. The exercises book: https://bit.ly/2RZoVTf
4. Course lecture slides: https://bit.ly/2N9TZPC
#linearalgebra #computing #DeepLearning
✴️ @AI_Python_EN
❇️ @AI_Python
🗣 @AI_Python_arXiv
February 14, 2019
NODE - Neural Ordinary Differential Equations
This was recently presented at NeurIPS as a new approach.
The idea?
Instead of specifying a discrete sequence of hidden layers, the authors parameterize the derivative of the hidden state using a neural network. The output of the network is then computed using a black-box differential equation solver.
They also propose CNF - Continuous Normalizing Flows.
Continuous normalizing flows are a generative model that can be trained by maximum likelihood without partitioning or ordering the data dimensions. For training, the authors show how to scalably backpropagate through any ODE solver without access to its internal operations, which allows end-to-end training of ODEs within larger models.
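To get a feel for the hidden-state idea, here is a hedged minimal sketch assuming the authors' torchdiffeq package; the network and the sizes are arbitrary illustrations, not the paper's architecture:

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq

class ODEFunc(nn.Module):
    """Neural network that parameterizes the derivative dh/dt."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                                 nn.Linear(dim, dim))

    def forward(self, t, h):
        return self.net(h)

func = ODEFunc()
h0 = torch.randn(32, 64)        # initial hidden state, batch of 32
t = torch.tensor([0.0, 1.0])    # integration interval
h1 = odeint(func, h0, t)[-1]    # hidden state at t=1, from a black-box solver
```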
Paper: https://lnkd.in/ddMJQAS
#Github: Examples of implementations coming soon to our repository
#neuralnetwork #deeplearning #machinelearning
✴️ @AI_Python_EN
March 29, 2019
Credit Risk Analysis Using #MachineLearning and #DeepLearning Models
Lovely paper by Peter Martey Addo, Dominique Guegan and Bertrand Hassani
Code on #Github (it's in #R)
https://github.com/brainy749/CreditRiskPaper
✴️ @AI_Python_EN
October 31, 2019