Am Neumarkt 😱
Machine learning and other gibberish
Archives: https://datumorphism.leima.is/amneumarkt/
#ml

In his MinT paper, Hyndman said he confused these two quantities in his previous paper. 😂

MinT is a simple method for making forecasts over a hierarchical structure coherent. Here, coherent means that the lower-level forecasts sum up to the corresponding higher-level forecasts.

For example, our time series may have a structure like sales of coca cola + sales of spirits = sales of beverages. If this relation holds for our forecasts, we have coherent forecasts.

This may sound trivial, but the problem is in fact hard. There are naive methods, such as forecasting only the lower levels (coca cola, spirits) and then using their sum as the higher-level forecast (sales of beverages), but these are usually too simple to be effective.

MinT is a reconciliation method: it takes the base forecasts from the higher and lower levels and combines them into an optimal coherent set of forecasts (optimal in the sense of minimizing the trace of the reconciled forecast error covariance, hence the name Minimum Trace).

https://robjhyndman.com/papers/MinT.pdf
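
A minimal numpy sketch of the reconciliation step, using the toy hierarchy above. The numbers and the diagonal W are made up for illustration; MinT proper estimates W from the in-sample base forecast error covariance:

```python
import numpy as np

# Toy hierarchy: beverages = coca cola + spirits.
# Summing matrix S maps the bottom-level series to all levels
# (rows: [beverages, coca cola, spirits]; columns: bottom series).
S = np.array([
    [1, 1],   # beverages = coca cola + spirits
    [1, 0],   # coca cola
    [0, 1],   # spirits
])

# Hypothetical, incoherent base forecasts for all three series.
y_hat = np.array([105.0, 60.0, 40.0])  # 60 + 40 != 105

# W is the covariance of the base forecast errors; MinT estimates it from
# in-sample residuals. A diagonal W is assumed here just for illustration.
W = np.diag([4.0, 1.0, 1.0])
W_inv = np.linalg.inv(W)

# MinT reconciliation: y_tilde = S (S' W^-1 S)^-1 S' W^-1 y_hat
G = np.linalg.inv(S.T @ W_inv @ S) @ S.T @ W_inv
y_tilde = S @ G @ y_hat

print(y_tilde)                               # coherent forecasts
print(y_tilde[0], y_tilde[1] + y_tilde[2])   # top level equals sum of bottom
```

Because the reconciled forecasts are built as S times a bottom-level vector, coherence holds by construction, whatever W is.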
#ml

https://mlcontests.com/state-of-competitive-machine-learning-2022/

Quotes from the report:

Successful competitors have mostly converged on a common set of tools — Python, PyData, PyTorch, and gradient-boosted decision trees.

Deep learning still has not replaced gradient-boosted decision trees when it comes to tabular data, though it does often seem to add value when ensembled with boosting methods.
Transformers continue to dominate in NLP, and start to compete with convolutional neural nets in computer vision.

Competitions cover a broad range of research areas including computer vision, NLP, tabular data, robotics, time-series analysis, and many others.
Large ensembles remain common among winners, though single-model solutions do win too.

There are several active machine learning competition platforms, as well as dozens of purpose-built websites for individual competitions.
Competitive machine learning continues to grow in popularity, including in academia.

Around 50% of winners are solo winners; 50% of winners are first-time winners; 30% have won more than once before.

Some competitors are able to invest significantly into hardware used to train their solutions, though others who use free hardware like Google Colab are also still able to win competitions.
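
The point above about ensembling deep learning with gradient-boosted trees on tabular data can be sketched with a toy blend; scikit-learn models stand in here for the usual LightGBM/XGBoost + PyTorch stacks, and the dataset is synthetic:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic tabular data standing in for a competition dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A gradient-boosted tree model and a small neural net, trained separately.
gbdt = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

p_gbdt = gbdt.predict_proba(X_te)[:, 1]
p_mlp = mlp.predict_proba(X_te)[:, 1]
p_blend = 0.5 * p_gbdt + 0.5 * p_mlp  # simple equal-weight blend

for name, p in [("gbdt", p_gbdt), ("mlp", p_mlp), ("blend", p_blend)]:
    print(name, roc_auc_score(y_te, p))
```

Winning solutions typically tune the blend weights (or stack with a meta-model) on out-of-fold predictions rather than using a fixed 50/50 average.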
#ml

Pérez J, Barceló P, Marinkovic J. Attention is Turing-Complete. J Mach Learn Res. 2021;22: 1–35. Available: https://jmlr.org/papers/v22/20-302.html
#ml

Yeh, Catherine, Yida Chen, Aoyu Wu, Cynthia Chen, Fernanda Viégas, and Martin Wattenberg. 2023. “AttentionViz: A Global View of Transformer Attention.” ArXiv [Cs.HC]. arXiv. http://arxiv.org/abs/2305.03210.