The Russian search engine Yandex has open-sourced the CatBoost library, which is positioned as a replacement for Yandex's famous MatrixNet. Researchers claim that CatBoost's results are comparable with XGBoost.
https://techcrunch.com/2017/07/18/yandex-open-sources-catboost-a-gradient-boosting-machine-learning-librar/
#opensource #yandex #xgboost #catboost
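For context, a minimal CatBoost usage sketch in Python (the toy data and parameter values here are illustrative, not from the announcement):

from catboost import CatBoostClassifier

# Toy dataset: two numeric features plus one categorical feature (column index 2).
X = [[1.0, 0.5, "red"], [0.3, 1.2, "blue"], [0.8, 0.1, "red"], [0.2, 0.9, "green"]]
y = [1, 0, 1, 0]

# cat_features tells CatBoost which columns to treat as categorical,
# so no manual one-hot encoding is needed.
model = CatBoostClassifier(iterations=100, learning_rate=0.1, cat_features=[2], verbose=False)
model.fit(X, y)
print(model.predict([[0.5, 0.4, "red"]]))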
SGLB: Stochastic Gradient Langevin Boosting
In this paper, the authors introduce Stochastic Gradient Langevin Boosting (SGLB), a powerful and efficient ML framework that can handle a wide range of loss functions and has provable generalization guarantees. The method is based on a special form of the Langevin diffusion equation designed specifically for gradient boosting. This makes it possible to guarantee global convergence, while standard gradient boosting algorithms can guarantee only local optima, which is a problem for multimodal loss functions. To illustrate the advantages of SGLB, the authors apply it to a classification task with the 0-1 loss function, which is known to be multimodal, and to a standard logistic regression task, which is convex.
The algorithm is implemented as part of the CatBoost gradient boosting library and outperforms classic gradient boosting methods.
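For intuition, the generic discretized Langevin dynamics update that SGLB adapts to the boosting setting looks like this (the boosting-specific form with shrinkage is given in the paper; beta is the inverse diffusion temperature):

\theta_{k+1} = \theta_k - \epsilon \nabla L(\theta_k) + \sqrt{2\epsilon/\beta}\,\xi_k, \qquad \xi_k \sim \mathcal{N}(0, I)

The injected Gaussian noise \xi_k lets the iterates escape local optima, which is what underpins the global convergence claim.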
paper: https://arxiv.org/abs/2001.07248
release: https://github.com/catboost/catboost/releases/tag/v0.21
#langevin #boosting #catboost
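A minimal sketch of trying SGLB through CatBoost in Python; the langevin and diffusion_temperature parameter names are assumptions based on how the feature is exposed in recent CatBoost versions, so check the current docs:

from catboost import CatBoostClassifier
from sklearn.datasets import make_classification

# Synthetic binary classification data, purely illustrative.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# langevin=True is assumed to enable the SGLB-style noisy updates;
# diffusion_temperature controls how much Gaussian noise is injected.
model = CatBoostClassifier(iterations=200, learning_rate=0.1,
                           langevin=True, diffusion_temperature=10000,
                           random_seed=0, verbose=False)
model.fit(X, y)
print(model.score(X, y))  # training accuracy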