Am Neumarkt 😱
Machine learning and other gibberish
Archives: https://datumorphism.leima.is/amneumarkt/
#Coding

I found a nice place to practice algorithmic thinking. It is not as comprehensive as HackerRank/LeetCode, but the problems are quite fun.

https://codingcompetitions.withgoogle.com/
#ML

Julia Computing recently raised a Series A round. I need to dive deeper into the Julia language.

https://juliacomputing.com/blog/2021/07/series-a/
#DS

This is an interesting report by Anaconda. It more or less confirms that Python is still the king of languages for data science, with SQL following right behind.

Quote from the report:
> Between March 2020 to February 2021, the pandemic economic period, we saw 4.6 billion package downloads, a 48% increase from the previous year.
We have no comparable data for other languages, so no cross-language conclusions can be drawn, but it is interesting to see Python growing so fast.

The roadblocks different data professionals face are quite different. Cloud engineers and MLOps engineers do not mention skills gaps in their organizations that often, but data scientists/analysts mention skills gaps (e.g., data engineering, Docker, k8s) a lot. This might be related to cases where the organization doesn't even have cloud engineers/ops or MLOps roles.


See the next message for the PDF file.

https://www.anaconda.com/state-of-data-science-2021
Anaconda-2021-SODS-Report-Final.pdf (1.5 MB)

I have downloaded the file so you don't need to.
https://github.com/soumith/ganhacks

Training GANs can be baffling.
For example, the generator and the discriminator sometimes just don't "learn" at the same pace. Would you try to balance the generator loss and the discriminator loss by hand?
Soumith Chintala (@ FAIR) put together this list of tips for training GANs. "Don't balance loss via statistics" is one of the 17 tips on the list. The list is quite inspiring; a toy sketch of the anti-pattern is below.
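
A minimal toy loop (PyTorch, 1-D data) to make that tip concrete. The networks, learning rates, and data here are made up for the sketch; only the "don't gate updates on loss statistics" point comes from ganhacks.

```python
import torch
import torch.nn as nn

# Tiny toy generator and discriminator; shapes are arbitrary.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = 3.0 + 0.5 * torch.randn(64, 1)   # toy "real" distribution
    fake = G(torch.randn(64, 8))

    # Discriminator step: one fixed update per iteration.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Anti-pattern the list warns against (do NOT do this):
    #   while d_loss > A: train D
    #   while g_loss > B: train G

    # Generator step: also one fixed update per iteration.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```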
#science

Nielsen M. Reinventing Discovery: The New Era of Networked Science. Princeton, NJ: Princeton University Press; 2011.

I found this book this morning and skimmed through it. It looks concise yet unique.
The author discusses how the internet is changing the way human beings think as one collective intelligence. I like the chapters about how the data web is enabling more scientific discoveries.
#ML

https://www.microsoft.com/en-us/research/blog/make-every-feature-binary-a-135b-parameter-sparse-neural-network-for-massively-improved-search-relevance/

Though it is not the core of the model, I noticed that this model (MEB) is trained on user search behavior on Bing: if a search result on Bing is clicked by the user, it is treated as a positive sample for the query; otherwise, it is a negative sample.

In self-supervised learning, it has been shown that negative sampling is extremely important, and this Bing search dataset naturally labels the positive and negative samples. Cool idea.
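
A tiny sketch of that labeling rule in plain Python. The impression-log schema here is hypothetical; only the rule (clicked = positive, shown-but-unclicked = negative) is from the blog post.

```python
# Hypothetical impression log: (query, result_url, clicked).
impressions = [
    ("gan training tips", "https://example.com/ganhacks", True),
    ("gan training tips", "https://example.com/unrelated", False),
]

# Clicked results become positives, unclicked impressions negatives.
samples = [
    {"query": query, "doc": url, "label": int(clicked)}
    for query, url, clicked in impressions
]
print(samples)
```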
#DS

Hullman J, Gelman A. Designing for interactive exploratory data analysis requires theories of graphical inference. Harvard Data Science Review. 2021. doi:10.1162/99608f92.3ab8a587
https://hdsr.mitpress.mit.edu/pub/w075glo6/release/2


Creating visualizations seems like a creative task: at least for entry-level visualization work, we follow our hearts and build whatever is needed. However, visualizations are made for different purposes. Some are pure exploration, to get a feel for the data; others are built to validate hypotheses. These are very different things.

Confirming an idea using charts is usually hard. In most cases, we need statistical tests to (dis)prove a hypothesis instead of just looking at charts. Thus, visualizations become a tool that helps us formulate good questions.
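
For instance, instead of eyeballing a histogram for normality, one can run a formal test. A sketch with SciPy, on synthetic data:

```python
import numpy as np
from scipy import stats

sample = np.random.default_rng(1).normal(loc=10, scale=1, size=500)
stat, p = stats.shapiro(sample)      # Shapiro-Wilk test of normality
print(f"Shapiro-Wilk p-value: {p:.3f}")  # large p: no evidence against normality
```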

However, not everyone uses charts as hints only; many use them to draw conclusions. As a result, even experienced analysts arrive at spurious conclusions, and these so-called insights are not solid.

Visual analysis seems to be an adversarial game between humans and visualizations. There are many models for this process. A crude (and probably naive) model can be illustrated by analyzing the histogram of a single variable (a code sketch follows below):

- The histogram looks like a bell. It is symmetric and centered at 10 with an FWHM of 2.6. Since FWHM ≈ 2.355σ for a Gaussian, I guess a Gaussian with mean 10 and σ ≈ 1.1. This guess is the posterior p(model | chart).
- Overlay the guessed curve on top of the original one. Do my guess and the actual curve overlap?
- If not, what do we have to adjust? Do we need to introduce another parameter?
- Guess the parameters of the adjusted model and compare it with the actual curve again.

This process is very similar to repeated Bayesian inference, though the actual analysis may be much more complicated, as analysts carry a lot of prior knowledge about the data-generating process.
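
A rough sketch of one round of this loop (NumPy/Matplotlib; the data is synthetic):

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.random.default_rng(0).normal(loc=10, scale=1.1, size=5000)

counts, edges = np.histogram(data, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Step 1: guess parameters from the chart. FWHM ~ 2.355 * sigma for a Gaussian.
fwhm = 2.6
mu, sigma = 10.0, fwhm / 2.355

# Step 2: overlay the guessed density on the histogram and inspect the mismatch.
guess = np.exp(-0.5 * ((centers - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
plt.bar(centers, counts, width=edges[1] - edges[0], alpha=0.4, label="data")
plt.plot(centers, guess, label=f"guess: N({mu}, {sigma:.2f}^2)")
plt.legend()
plt.show()

# Step 3: if the curves do not overlap, adjust the model (e.g., add a skewness
# parameter) and repeat -- the loop sketched above.
```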

Through this example, we see that integrating exploration with preliminary model building, in the spirit of confirmatory data analysis, may give us more confidence in the insights we draw from charts.

On the other hand, bringing in complicated statistical models invites misinterpretation, since not everyone is familiar with statistical hypothesis testing. The complexity has to be balanced.
#中文 #visualization

I came across Data Stitches via a recommendation from the TMS channel:
https://datastitches.substack.com/
After following it for a few issues, I find the quality very high; it often features great work.

I also recommend the TMS channel itself:
https://t.me/tms_ur_way/1031
It is about time management, productivity, and life.
#ML #self-supervised #representation

Contrastive loss is widely used in representation learning. However, the mechanism behind it is not as straightforward as it seems.

Wang & Isola proposed rewriting the contrastive loss into two components: alignment and uniformity. Samples in the feature space are normalized to unit vectors, which places them on a hypersphere. The two components of the contrastive loss are

- alignment, which forces the positive samples to be aligned on the hypersphere, and
- uniformity, which distributes the samples uniformly on the hypersphere.

By optimizing these two objectives, the samples are distributed over the hypersphere, with similar samples clustered together, i.e., pointing in similar directions. Uniformity makes sure the samples use the whole hypersphere, so we don't waste "space".
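
The paper also gives a compact PyTorch form of the two losses; a sketch along those lines, assuming x and y are batches of L2-normalized features with y[i] the positive paired with x[i]:

```python
import torch
import torch.nn.functional as F

def align_loss(x, y, alpha=2):
    # Pull positive pairs together on the hypersphere.
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniform_loss(x, t=2):
    # Push all samples toward a uniform spread over the hypersphere.
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

x = F.normalize(torch.randn(128, 64), dim=1)
y = F.normalize(x + 0.1 * torch.randn(128, 64), dim=1)  # noisy positives
loss = align_loss(x, y) + uniform_loss(x)
```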


References:

Wang T, Isola P. Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere. arXiv [cs.LG]. 2020. Available: http://arxiv.org/abs/2005.10242
#ML

The authors investigate the geometry formed by the responses of neurons to given stimuli (the tuning curves). Using the stimulus as the hidden variable, one can construct a geometry of neuron responses. The authors clarify the relations between this geometry and other measures such as mutual information.

The story itself may not be interesting to machine learning practitioners, but the method of using the geometry of neuron responses to probe the brain is intriguing. We may borrow it to investigate the internal mechanisms of neural networks.
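
As a flavor of what that borrowing could look like: a toy representational-geometry sketch in NumPy, where a layer is characterized by the pairwise dissimilarities of its responses to a stimulus set. The random-projection "network" is purely illustrative, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)
stimuli = rng.normal(size=(20, 50))      # 20 stimuli, 50 input dims
weights = rng.normal(size=(50, 100))
responses = np.tanh(stimuli @ weights)   # responses of 100 "neurons"

# Representational dissimilarity matrix: 1 - correlation between the
# response patterns of each pair of stimuli.
rdm = 1.0 - np.corrcoef(responses)
print(rdm.shape)  # (20, 20)
```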

Kriegeskorte N, Wei X-X. Neural tuning and representational geometry. Nat Rev Neurosci. 2021. doi:10.1038/s41583-021-00502-3