#science
The ergodicity problem in economics | Nature Physics
https://www.nature.com/articles/s41567-019-0732-0
I read another paper about hot hand/gamblers' fallacy a while ago and the author of that paper took a similar view. Here is the article:
Surprised by the Hot Hand Fallacy? A Truth in the Law of Small Numbers, by Miller
Nature
The ergodicity problem in economics
Nature Physics - This Perspective argues that ergodicity, a foundational concept in equilibrium statistical physics, is wrongly assumed in much of the quantitative economics...
#machinelearning
https://arxiv.org/abs/2007.04504
Learning Differential Equations that are Easy to Solve
Jacob Kelly, Jesse Bettencourt, Matthew James Johnson, David Duvenaud
Differential equations parameterized by neural networks become expensive to solve numerically as training progresses. We propose a remedy that encourages learned dynamics to be easier to solve. Specifically, we introduce a differentiable surrogate for the time cost of standard numerical solvers, using higher-order derivatives of solution trajectories. These derivatives are efficient to compute with Taylor-mode automatic differentiation. Optimizing this additional objective trades model performance against the time cost of solving the learned dynamics. We demonstrate our approach by training substantially faster, while nearly as accurate, models in supervised classification, density estimation, and time-series modelling tasks.
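The paper's surrogate is built with Taylor-mode automatic differentiation; as a rough intuition for why penalizing higher-order derivatives makes dynamics cheaper to integrate, here is a NumPy toy (my own sketch, not the authors' method) that estimates a second-derivative penalty along an Euler trajectory with finite differences. Stiff, fast-changing dynamics score a much larger penalty than gentle ones, which is exactly what a solver-cost regularizer should detect.

```python
import numpy as np

def euler_trajectory(f, z0, t0, t1, n_steps):
    """Integrate dz/dt = f(z) with fixed-step Euler, returning all states."""
    dt = (t1 - t0) / n_steps
    zs = [z0]
    for _ in range(n_steps):
        zs.append(zs[-1] + dt * f(zs[-1]))
    return np.array(zs), dt

def smoothness_penalty(zs, dt):
    """Mean squared second difference along the trajectory: a crude
    finite-difference stand-in for the paper's higher-order-derivative
    surrogate (which uses Taylor-mode AD instead)."""
    d2 = (zs[2:] - 2.0 * zs[1:-1] + zs[:-2]) / dt**2
    return float(np.mean(d2**2))

# Fast-decaying dynamics bend their trajectory far more per unit time
# than slowly decaying ones, so they incur a larger penalty.
fast = lambda z: -50.0 * z
slow = lambda z: -0.5 * z
zs_fast, dt = euler_trajectory(fast, 1.0, 0.0, 1.0, 1000)
zs_slow, _ = euler_trajectory(slow, 1.0, 0.0, 1.0, 1000)
```

In the paper this penalty is added to the training loss with a weight that trades accuracy against solver time; the toy above only illustrates the measurement, not the training loop.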
covid19-top5-countries-combined.mp4
28.9 MB
#datascience #audiolization
This is the audiolization of the daily new cases for FR, IT, ES, DE, PL between 2020-08-01 and 2020-12-14. I made an audiolization video two years ago. As I am currently under quarantine and the days are getting so boring, I started to think about mapping data points to different representations. We usually talk about visualization because there are so many elements available to represent complicated data. Audiolization, on the other hand, leaves us with very few elements to encode with. But it's a lot of fun working with audio. So I wrote a Python package to map a pandas DataFrame or numpy ndarray to a MIDI representation. Here is the package: https://github.com/emptymalei/audiorepr
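The core of any data-to-sound mapping is rescaling values into pitch. Here is a minimal sketch of that step in plain NumPy; note this is my own illustration of the idea, not the actual audiorepr API, and the note range (C3 to C6) is an arbitrary choice.

```python
import numpy as np

def series_to_midi_notes(values, low=48, high=84):
    """Linearly rescale a 1-D series to integer MIDI note numbers.
    Defaults: 48 = C3, 84 = C6. A toy version of the data-to-pitch
    mapping; the real audiorepr package may map data differently."""
    values = np.asarray(values, dtype=float)
    span = values.max() - values.min()
    if span == 0:
        # A flat series maps to a single mid-range note.
        return np.full(values.shape, (low + high) // 2, dtype=int)
    scaled = (values - values.min()) / span  # normalize to [0, 1]
    return np.round(low + scaled * (high - low)).astype(int)

daily_cases = [120, 340, 90, 560, 430]
notes = series_to_midi_notes(daily_cases)  # lowest count -> 48, highest -> 84
```

Each note number can then be written to a MIDI track with any MIDI library, with time steps mapped to note durations.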
#datascience #career #academia
> I regret quitting astrophysics
https://news.ycombinator.com/item?id=25444069
http://www.marcelhaas.com/index.php/2020/12/16/i-regret-quitting-astrophysics/
Me too. Though not an astrophysicist, I miss academia too.
#tools #writing
https://www.losethevery.com/
> "Very good english" is not very good english. Lose the very.
#datascience
I ran into this hilarious comment on pie charts in a book called The Grammar of Graphics:
> "To prevent bias, give the child the knife and someone else the first choice of slices."
#showerthoughts
As human beings, we read or hear facts about things. These are our priors. Our beliefs are then updated based on observations of data, i.e., the likelihood. Some people abide by their priors; they are the prior-people. Others are more like likelihood-people and easily change their beliefs based on observations.
There is a third type: they combine priors and likelihood. Changing beliefs based on likelihood alone is prone to biases in the data. By combining priors and likelihood, they have a better chance of reaching the right conclusion.
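The third type is just Bayes' rule: posterior ∝ prior × likelihood. A minimal sketch over a discrete set of hypotheses (the coin example is mine, for illustration):

```python
def bayes_update(prior, likelihood):
    """Combine a prior distribution with a likelihood via Bayes' rule:
    posterior is proportional to prior * likelihood, then normalized."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Two hypotheses about a coin: fair (P(H) = 0.5) vs. biased (P(H) = 0.9).
prior = [0.5, 0.5]
# Observed data: 3 heads in a row. Likelihood under each hypothesis:
likelihood = [0.5**3, 0.9**3]
posterior = bayes_update(prior, likelihood)
```

A pure likelihood-person would jump straight to "biased"; the Bayesian posterior still favors "biased" but keeps nonzero weight on "fair", which is what protects against short, biased data runs.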
#data #covid19
The UK government has an official COVID-19 API: https://coronavirus.data.gov.uk/details/developers-guide#structure-metrics
I found this funny typo in the documentation. The first one should be cumCasesByPublishDateRate.
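For reference, queries against this API are built from a filters string and a JSON structure of the metrics you want. Here is a sketch that only constructs the request URL; the endpoint and metric names follow the developers' guide linked above, but double-check them there before relying on this.

```python
from urllib.parse import urlencode

# Endpoint as described in the developers' guide; treat as an assumption.
BASE = "https://api.coronavirus.data.gov.uk/v1/data"

def build_query(area_type, area_name, metrics):
    """Assemble a dashboard API URL from an area filter and a list of
    metric names (the 'structure' parameter maps output keys to metrics)."""
    filters = f"areaType={area_type};areaName={area_name}"
    structure = "{" + ",".join(f'"{m}":"{m}"' for m in metrics) + "}"
    return BASE + "?" + urlencode({"filters": filters, "structure": structure})

url = build_query(
    "nation", "england",
    ["date", "cumCasesByPublishDate", "cumCasesByPublishDateRate"],
)
```

Fetching `url` with any HTTP client then returns JSON with a `data` list of per-date records.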
https://www.nature.com/articles/s41557-020-0544-y
> Here we propose PauliNet, a deep-learning wavefunction ansatz that achieves nearly exact solutions of the electronic Schrödinger equation for molecules with up to 30 electrons
Nature
Deep-neural-network solution of the electronic Schrödinger equation
Nature Chemistry - High-accuracy quantum chemistry methods struggle with a combinatorial explosion of Slater determinants in larger molecular systems, but now a method has been developed that...
#data
Could you prevent a pandemic? A very 2020 video game
https://play.acast.com/s/nature/2020festivespectacular
acast
Could you prevent a pandemic? A very 2020 video game | Nature Podcast on Acast
A video game provides players with insights into pandemic responses, and our annual festive fun. In this episode: 01:02 Balancing responses in a video game pandemic In the strategy video-game Plague Inc: The Cure, players assume the role of an omnipotent…
#neuroscience
Source:
https://science.sciencemag.org/content/370/6523/1410.full
A gatekeeper for learning
> Upon learning a hippocampus-dependent associative task, perirhinal inputs might act as a gate to modulate the excitability of apical dendrites and the impact of the feedback stream on layer 5 pyramidal neurons of the primary somatosensory cortex.
In some sense, perirhinal inputs are like config files for learning.
https://github.com/volotat/DiffMorph
#machinelearning #opensource
Differentiable Morphing
> Image morphing without reference points by applying warp maps and optimizing over them.
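"Optimizing over warp maps" can be boiled down to a one-parameter caricature: pick the displacement that minimizes reconstruction error against the target. This 1-D NumPy toy is my own simplification (DiffMorph optimizes a dense, differentiable warp field with gradients, not a grid search):

```python
import numpy as np

def warp(signal, shift):
    """Apply a constant integer displacement, a degenerate 'warp map'."""
    return np.roll(signal, shift)

def fit_warp(source, target, max_shift=10):
    """Grid-search the displacement minimizing MSE against the target:
    a single-parameter stand-in for optimizing a dense warp map."""
    shifts = range(-max_shift, max_shift + 1)
    return min(
        shifts,
        key=lambda s: float(np.mean((warp(source, s) - target) ** 2)),
    )

src = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
tgt = np.roll(src, 5)       # target is the source shifted by 5 samples
best = fit_warp(src, tgt)   # recovers the true shift
```

In the real repo the "shift" is a per-pixel 2-D displacement field and the search is gradient descent, but the objective, warp the source until it matches the target, is the same.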