#Coding
I found a nice place to practice programming thinking. It is not as comprehensive as HackerRank/LeetCode, but these problems are quite fun.
https://codingcompetitions.withgoogle.com/
#ML
Julia Computing raised a large investment recently. I need to dive deeper into the Julia language.
https://juliacomputing.com/blog/2021/07/series-a/
#DS
This is an interesting report by Anaconda. We can more or less confirm from it that Python is still the king of languages for data science, with SQL following right behind.
Quote from the report:
> Between March 2020 to February 2021, the pandemic economic period, we saw 4.6 billion package downloads, a 48% increase from the previous year.
We have no comparable data for other languages, so no cross-language conclusions can be drawn, but it is interesting to see Python growing so fast.
The roadblocks different data professionals face are quite different. Cloud engineers and MLOps engineers rarely mention skills gaps in their organizations, but data scientists/analysts mention skills gaps (e.g., data engineering, Docker, k8s) a lot. This might be because those organizations don't even have cloud engineers/ops or MLOps staff.
See the next message for the PDF file.
https://www.anaconda.com/state-of-data-science-2021
Anaconda-2021-SODS-Report-Final.pdf
1.5 MB
I have downloaded the file so you don't need to.
https://github.com/soumith/ganhacks
Training GANs can be baffling.
For example, the generator and the discriminator sometimes just don't "learn" at the same scale. Would you try to balance the generator loss and discriminator loss by hand?
Soumith Chintala (@ FAIR) put together this list of tips for training GANs. "Don't balance loss via statistics" is one of Chintala's 17 tips. The list is quite inspiring.
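The "don't balance via statistics" idea can be illustrated with a toy sketch. Everything below (the 1-D linear generator, the single-weight discriminator, the variable names) is my own illustration, not code from the ganhacks repo: each network simply gets its own loss and its own gradient step every iteration, with no `if d_loss < g_loss: skip` heuristics.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy models: generator g(z) = w_g * z + b_g, discriminator D(x) = sigmoid(w_d * x).
w_g, b_g, w_d = 0.1, 0.0, 0.0
lr = 0.05

d_losses, g_losses = [], []
for step in range(200):
    z = rng.normal(size=32)
    real = rng.normal(loc=2.0, size=32)  # target distribution N(2, 1)
    fake = w_g * z + b_g

    # --- Discriminator step: its own loss, its own update ---
    p_real, p_fake = sigmoid(w_d * real), sigmoid(w_d * fake)
    d_loss = -np.mean(np.log(p_real + 1e-8) + np.log(1 - p_fake + 1e-8))
    grad_d = -np.mean((1 - p_real) * real) + np.mean(p_fake * fake)
    w_d -= lr * grad_d

    # --- Generator step: always taken, never skipped based on loss values ---
    p_fake = sigmoid(w_d * fake)
    g_loss = -np.mean(np.log(p_fake + 1e-8))
    grad_wg = -np.mean((1 - p_fake) * w_d * z)
    grad_bg = -np.mean((1 - p_fake) * w_d)
    w_g -= lr * grad_wg
    b_g -= lr * grad_bg

    d_losses.append(d_loss)
    g_losses.append(g_loss)
```

The two losses will generally sit at different scales throughout training; the point of the tip is that this is normal and not something to correct with hand-tuned scheduling.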
#ML
https://thegradient.pub/systems-for-machine-learning/
challenges in data collection, verification, and serving tasks
#science
Nielsen M. Reinventing discovery: The New Era of networked science. Princeton, NJ: Princeton University Press; 2011.
I found this book this morning and skimmed through it. It looks concise yet unique.
The author discusses how the internet is changing the way human beings think as one collective intelligence. I like the chapters on how the data web enables more scientific discoveries.
#ML
https://www.microsoft.com/en-us/research/blog/make-every-feature-binary-a-135b-parameter-sparse-neural-network-for-massively-improved-search-relevance/
Though it is not the core of the model, I noticed that this model (MEB) uses user search behavior on Bing to build the language model. If a user clicks a Bing search result, it is treated as a positive sample for the query; otherwise it is a negative sample.
In self-supervised learning, negative sampling has been shown to be extremely important. The Bing search data naturally labels positive and negative samples. Cool idea.
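A minimal sketch of that labeling scheme (illustrative only; the log format and function name are my own invention, not Microsoft's pipeline):

```python
# Hypothetical click-log format: (query, doc_id, clicked) impressions.
def label_impressions(impressions):
    """Turn raw impressions into (query, doc, label) training triples:
    clicked -> positive (1), shown but not clicked -> negative (0)."""
    return [(q, d, 1 if clicked else 0) for q, d, clicked in impressions]

log = [
    ("hiking boots", "doc_a", True),   # user clicked this result
    ("hiking boots", "doc_b", False),  # shown, but not clicked
]
pairs = label_impressions(log)
# pairs == [("hiking boots", "doc_a", 1), ("hiking boots", "doc_b", 0)]
```

The appeal is that no human annotation is needed: every search session emits both positives and negatives for free.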
#DS
Hullman J, Gelman A. Designing for interactive exploratory data analysis requires theories of graphical inference. Harvard Data Science Review. 2021. doi:10.1162/99608f92.3ab8a587
https://hdsr.mitpress.mit.edu/pub/w075glo6/release/2
Creating visualizations seems like a creative task. At least for entry-level visualization tasks, we follow our hearts and build whatever is needed. However, visualizations are made for different purposes. Some are pure exploration, to get a feel for the data; others are built to validate hypotheses. These are very different things.
Confirming an idea using charts is usually hard. In most cases, we need statistical tests to (dis)prove a hypothesis instead of just looking at charts. Thus, visualizations become a tool to help us formulate a good question.
However, not everyone uses charts as hints only. Many use charts to draw conclusions, and as a result even experienced analysts draw spurious ones. These so-called insights are not solid.
Visual analysis seems to be an adversarial game between humans and visualizations. There are many different models for this process. A crude and probably naive model can be illustrated by the example of analyzing the histogram of a variable.
The histogram looks like a bell. It is symmetric. It is centered at 10 with an FWHM of 2.6. I guess this is a Gaussian distribution with mean 10 and sigma ≈ 1.1 (since FWHM ≈ 2.355σ). This is the posterior p(model | chart).
Imagine a curve like what was just guessed on top of the original curve. Would my guess and the actual curve overlap with each other?
If not, what do we have to adjust? Do we need to introduce another parameter?
Guess the parameter of the new distribution model and compare it with the actual curve again.
The above process is very similar to repeated Bayesian inference, though the actual analysis may be much more complicated, as analysts carry a lot of prior knowledge about the data-generating process.
Through this example, we see that integrating exploration with preliminary model building, as in confirmatory data analysis, may give us more confidence when drawing insights from charts.
On the other hand, including complicated statistical models can lead to misinterpretation, since not everyone is familiar with statistical hypothesis testing. So the complexity has to be balanced.
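The guess-overlay-adjust loop can be sketched in code. This is a toy illustration of my own (synthetic data, numpy only), not anything from the Hullman & Gelman paper:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=10.0, scale=1.1, size=5000)  # synthetic "observed" data

counts, edges = np.histogram(data, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Step 1: read rough parameters off the chart, as an analyst would by eye.
peak = centers[np.argmax(counts)]
half_max = counts.max() / 2
above = centers[counts >= half_max]
fwhm = above.max() - above.min()
sigma = fwhm / 2.355  # FWHM = 2*sqrt(2*ln 2)*sigma for a Gaussian

# Step 2: overlay the guessed model and measure the mismatch.
model = np.exp(-0.5 * ((centers - peak) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
mismatch = np.mean((model - counts) ** 2)

# Step 3: if the mismatch is large, adjust the guess (add a parameter,
# change the distribution family) and repeat -- a loop much like
# iterative Bayesian updating.
```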
#fun
This is cool.
https://github.blog/2021-08-31-request-for-proposals-defining-standardized-github-metrics/
#ML
😂
Jürgen Schmidhuber invented transformers in the 90s.
https://people.idsia.ch/~juergen/fast-weight-programmer-1991-transformer.html
#DS
Cute comics on interactive data visualization
https://hdsr.mitpress.mit.edu/pub/49opxv6v/release/1
#Chinese #visualization
I saw Data Stitches recommended by the TMS channel:
https://datastitches.substack.com/
After following it for a few issues, I find the quality very good; there are often great works featured.
I also recommend the TMS channel itself:
https://t.me/tms_ur_way/1031
It is about time management, productivity, and life.
#ML #self-supervised #representation
Contrastive loss is widely used in representation learning. However, the mechanism behind it is not as straightforward as it seems.
Wang & Isola proposed rewriting the contrastive loss into two terms: alignment and uniformity. Samples in the feature space are normalized to unit vectors, which places them on a hypersphere. The two components of the contrastive loss are
- alignment, which forces the positive samples to be aligned on the hypersphere, and
- uniformity, which distributes the samples uniformly on the hypersphere.
By optimizing these objectives, the samples are distributed on a hypersphere, with similar samples clustered, i.e., pointing in similar directions. Uniformity makes sure the samples use the whole hypersphere so we don't waste "space".
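The two terms can be written down in a few lines of numpy, following the definitions in the Wang & Isola paper with the common defaults α=2 and t=2 (the function and variable names are my own):

```python
import numpy as np

def l2_normalize(x):
    # project embeddings onto the unit hypersphere
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def alignment_loss(z1, z2, alpha=2):
    # z1[i] and z2[i] are the (normalized) embeddings of a positive pair
    return np.mean(np.linalg.norm(z1 - z2, axis=1) ** alpha)

def uniformity_loss(z, t=2):
    # log of the mean pairwise Gaussian potential over distinct pairs;
    # lower values mean the points spread more uniformly on the sphere
    sq = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    i, j = np.triu_indices(len(z), k=1)
    return np.log(np.mean(np.exp(-t * sq[i, j])))
```

Perfectly aligned positives give an alignment loss of exactly 0, while pushing embeddings apart drives the uniformity term down, which matches the intuition in the post: cluster the positives, spread everything else over the whole sphere.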
References:
Wang T, Isola P. Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere. arXiv [cs.LG]. 2020. Available: http://arxiv.org/abs/2005.10242
#ML
The authors investigate the geometry formed by neurons' responses to certain stimuli (tuning curves). Using the stimulus as the latent variable, we can construct a geometry of neural responses. The authors clarify the relations between this geometry and other measures such as mutual information.
The story in this paper may not interest machine learning practitioners, but the method of using the geometry of neural responses to probe the brain is intriguing. We may borrow this method to help us understand the internal mechanisms of neural networks.
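To give a flavor of response-geometry methods, here is a generic representational dissimilarity matrix (RDM) sketch on toy data; this is a standard construction in the field, not the paper's specific analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy tuning data: responses of 50 neurons to 8 stimulus conditions.
responses = rng.normal(size=(8, 50))

# Pairwise Euclidean distances between response vectors form a
# representational dissimilarity matrix (RDM): a symmetric 8x8 matrix
# summarizing the geometry of the responses across stimuli.
rdm = np.linalg.norm(responses[:, None, :] - responses[None, :, :], axis=-1)
```

The same construction applies verbatim to the activations of a hidden layer in an artificial network, which is why this lens could transfer to probing neural networks.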
Kriegeskorte, Nikolaus, and Xue-Xin Wei. 2021. “Neural Tuning and Representational Geometry.” Nature Reviews. Neuroscience, September. https://doi.org/10.1038/s41583-021-00502-3.
#visualization
The Doomsday Datavisualizations - Bulletin of the Atomic Scientists
https://thebulletin.org/doomsday-clock/datavisualizations/
#ML
Phys. Rev. X 11, 031059 (2021) - Statistical Mechanics of Deep Linear Neural Networks: The Backpropagating Kernel Renormalization
https://journals.aps.org/prx/abstract/10.1103/PhysRevX.11.031059