#fun
Interesting talk on the software used by Apollo.
https://media.ccc.de/v/34c3-9064-the_ultimate_apollo_guidance_computer_talk#t=3305
#dev
You can even use Chinese in GitHub Codespaces. 😱
Well, this is trivial if you have a Chinese input method on your computer. But what if you are using a company computer and would like to add some Chinese comments just for fun....
#ML
note2self:
From ref 1
> we can take any expected utility maximization problem, and decompose it into an entropy minimization term plus a “make-the-world-look-like-this-specific-model” term.
This view should be combined with ref 2. If the utility is related to the curvature of the discrete state space, we get a connection between entropy + KL divergence and curvature on graphs. (This idea needs to be worked out in depth.)
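The quoted decomposition is quick to verify. Assuming a finite state space and defining the target distribution M(x) ∝ e^{u(x)} (as in ref 1):

```latex
% With M(x) = e^{u(x)}/Z, we have u(x) = \log Z + \log M(x), so
\mathbb{E}_P[u] = \sum_x P(x)\,u(x)
              = \log Z + \sum_x P(x)\log M(x)
              = \log Z - H(P) - D_{\mathrm{KL}}(P \,\|\, M).
% Maximizing expected utility over P is therefore the same as minimizing
% H(P) + D_{KL}(P || M): an entropy-minimization term plus a
% "make-the-world-look-like-M" term.
```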
Refs:
1. Trivial proof but interesting perspective: https://www.lesswrong.com/posts/voLHQgNncnjjgAPH7/utility-maximization-description-length-minimization
2. Samal A, Pharasi HK, Ramaia SJ, Kannan H, Saucan E, Jost J, Chakraborti A (2021). Network geometry and market instability. R. Soc. Open Sci. 8: 201734. http://doi.org/10.1098/rsos.201734
#DataScience
Ah I have always been thinking about writing a book like this.
Just bought the book to educate myself on communication.
https://andrewnc.github.io/blog/everyday_data_science.html
#ML
Haha
Deep Learning Activation Functions using Dance Moves
https://www.reddit.com/r/learnmachinelearning/comments/lvehmi/deep_learning_activation_functions_using_dance/?utm_medium=android_app&utm_source=share
#event
If you are interested in free online AI conferences, Bosch CAI is organizing AI CON 2021.
The event starts tomorrow.
https://www.ubivent.com/start/AI-CON-2021
#ML #Physics
The easiest way to apply constraints to a dynamical system is through Lagrange multipliers, a.k.a. penalties in statistical learning. Penalties don't guarantee any conservation laws, as they are simply penalties, unless the multipliers carry some physical meaning, as in Boltzmann statistics.
This paper explains a simple method to hardcode conservation laws in a Neural Network architecture.
Paper:
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.126.098302
TLDR:
See the attached figure. Basically, the hardcoded conservation is realized using additional layers after the normal neural network predictions.
A quick bite of the paper: https://physics.aps.org/articles/v14/s25
Some thoughts:
I like this paper. When physicists work on a problem, they like to make it dimensionless, and this paper follows that convention. This is extremely important in numerical work: one should always make the equations dimensionless before implementing them in code.
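As a toy illustration of the general idea (my own sketch, not the paper's exact architecture): a correction layer after the raw network prediction can enforce a linear conservation law A @ y = b exactly, by orthogonally projecting the prediction onto the constraint manifold.

```python
import numpy as np

def project_onto_constraint(y_raw, A, b):
    """Return the point closest to y_raw that satisfies A @ y = b exactly."""
    # Orthogonal projection: y = y_raw - A^T (A A^T)^{-1} (A y_raw - b)
    residual = A @ y_raw - b
    correction = A.T @ np.linalg.solve(A @ A.T, residual)
    return y_raw - correction

# Hypothetical toy case: three outputs whose sum is a conserved quantity (1.0).
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])

y_raw = np.array([0.5, 0.4, 0.3])          # raw "network" prediction; sum = 1.2
y = project_onto_constraint(y_raw, A, b)   # corrected prediction; sum = 1.0
```

Because the correction is a projection, the constraint holds exactly no matter how inaccurate the raw prediction is, which is the appeal over a soft penalty term.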
#fun
> Growth in data science interviews plateaued in 2020. Data science interviews only grew by 10% after previously growing by 80% year over year.
> Data engineering specific interviews increased by 40% in the past year.
https://www.interviewquery.com/blog-data-science-interview-report
#ML
I just found an elegant decision tree visualization package for sklearn.
I have been trying to explain decision tree results to many business people. It is very hard. This package makes it much easier to explain the results to a non-technical person.
https://github.com/parrt/dtreeviz
#ML
Simple algorithm, powerful results
https://avinayak.github.io/algorithms/programming/2021/02/19/finding-mona-lisa-in-the-game-of-life.html
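For context: the forward direction of the problem is only a few lines of numpy; the linked post tackles the much harder inverse problem of finding a state whose successor looks like a given image. A minimal sketch of the forward step:

```python
import numpy as np

def life_step(grid):
    """Advance a 2D 0/1 numpy array one Game of Life generation (toroidal edges)."""
    # Count the eight neighbors of every cell via shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A horizontal "blinker" oscillates with period 2.
blinker = np.zeros((5, 5), dtype=int)
blinker[2, 1:4] = 1
next_gen = life_step(blinker)   # vertical blinker
```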
#fun
India is growing so fast
Oh Germany...
Global AI Vibrancy Tool
Who’s leading the global AI race?
https://aiindex.stanford.edu/vibrancy/
#ML
How do we interpret the capacity of neural nets? Naively, we would measure capacity by the number of parameters. Even for the Hopfield network, Hopfield introduced the concept of capacity using entropy, which in turn is related to the number of parameters.
But adding layers to neural nets also introduces regularization. This might be related to the capacity of the nets, but the connection is not clear.
This paper introduces a new perspective using sparse approximation theory, which represents data by encouraging parsimony. The more parameters, the more accurately the model represents the training data, but this causes generalization issues, as similar data points in the test data may be pushed apart [^Murdock2021].
By mapping neural nets to shallow "overcomplete frames", their capacity becomes easier to interpret.
[^Murdock2021]: Murdock C, Lucey S. Reframing Neural Networks: Deep Structure in Overcomplete Representations. arXiv [cs.LG]. 2021. Available: http://arxiv.org/abs/2103.05804
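As a toy illustration of sparse approximation itself (my own sketch, not the paper's construction): represent a signal in an overcomplete dictionary (more atoms than dimensions) while an l1 penalty encourages parsimony, solved here with ISTA (iterative soft-thresholding).

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, y, lam=0.01, n_iter=2000):
    """Minimize 0.5*||D @ x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * D.T @ (D @ x - y), step * lam)
    return x

# Overcomplete dictionary: 4 unit-norm atoms in 2 dimensions.
s = 1 / np.sqrt(2)
D = np.array([[1.0, 0.0, s, -s],
              [0.0, 1.0, s,  s]])
y = np.array([1.0, 0.5])

x = ista(D, y)                          # sparse code for y
residual = np.linalg.norm(D @ x - y)    # small: y is well approximated
```

The l1 penalty drives most coefficients to exactly zero, so among the many exact representations the overcomplete dictionary allows, a parsimonious one is selected.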