Channel: HDI Lab
The structural learning group brings together students and researchers from IITP RAS, Skoltech, HSE, MSU, and MIPT. Its main research areas are stochastic algorithms, high-dimensional inference, and high-dimensional probability.

Website: http://strlearn.ru/
presentation_hse_23032021.pdf
3.6 MB
Presentation of today's talk
Dear Colleagues,

Next Tuesday, March 30 (at 18:00), the speaker will be:

Dmytro Perekrestenko (ETH Zurich)

Constructive Universal High-Dimensional Distribution Generation through Deep ReLU Networks

We present an explicit deep neural network construction that transforms uniformly distributed one-dimensional noise into an arbitrarily close approximation of any two-dimensional Lipschitz-continuous target distribution. The key ingredient of our design is a generalization of the "space-filling" property of sawtooth functions discovered in (Bailey & Telgarsky, 2018). We highlight the importance of depth in our neural network construction for driving the Wasserstein distance between the target distribution and the distribution realized by the network to zero. We will also outline an extension of our method to output distributions of arbitrary dimension. Finally, we show that the proposed construction incurs no cost, in terms of Wasserstein error, relative to generating d-dimensional target distributions from d independent random variables.
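
For intuition, here is a minimal NumPy sketch (not the paper's construction) of the space-filling property the abstract refers to: composing the tent map k times yields a sawtooth representable by a ReLU network of depth O(k), and the pair (U, sawtooth_k(U)) built from a single uniform variable approaches the uniform distribution on the unit square as k grows.

```python
import numpy as np

def sawtooth(x, k):
    # k-fold composition of the tent map t(x) = 1 - |2x - 1|;
    # representable by a ReLU network of depth O(k).
    for _ in range(k):
        x = 1.0 - np.abs(2.0 * x - 1.0)
    return x

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)            # one-dimensional uniform noise
pairs = np.column_stack([u, sawtooth(u, 8)])
# As k grows, (U, sawtooth(U, k)) converges to the uniform distribution
# on [0, 1]^2 in Wasserstein distance; hitting a general Lipschitz
# target requires the paper's additional transport step.
print(pairs.mean(axis=0))                # both coordinates close to 0.5
```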

Zoom: https://us02web.zoom.us/j/9252879297

Please forward this message to anyone who might be interested
Slides of yesterday's talk
Dear Colleagues,

Next Tuesday, April 6 (at 18:00), the speaker will be:

Daniil Tiapkin (HSE University)

Improved Complexity Bounds in the Wasserstein Barycenter Problem

In this paper, we focus on computational aspects of the Wasserstein barycenter problem. We provide two algorithms to compute the Wasserstein barycenter of m discrete measures of size n with accuracy $\varepsilon$. The first algorithm, based on mirror prox with a specific norm, matches the complexity of the celebrated accelerated iterative Bregman projections (IBP), namely $\widetilde O(mn^2\sqrt{n}/\varepsilon)$, but without the limitations of (accelerated) IBP, which is numerically unstable when the regularization parameter is small. The second algorithm, based on area-convexity and dual extrapolation, improves the previously best-known convergence rates for the Wasserstein barycenter problem, enjoying $\widetilde O(mn^2/\varepsilon)$ complexity.
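
For reference, a minimal NumPy sketch of the IBP baseline the bounds are compared against (entropic regularization, Benamou et al., 2015), not the talk's mirror-prox or dual-extrapolation algorithms; the instability for small regularization mentioned above shows up as underflow in the Gibbs kernel:

```python
import numpy as np

def ibp_barycenter(P, C, weights, gamma=0.01, n_iter=500):
    # Entropic Wasserstein barycenter of the rows of P (shape (m, n))
    # on a common support with cost matrix C (shape (n, n)).
    K = np.exp(-C / gamma)   # Gibbs kernel; underflows for small gamma
    U = np.ones_like(P)
    V = np.ones_like(P)
    for _ in range(n_iter):
        U = P / (V @ K.T)                    # match input marginals p_i
        q = np.exp(weights @ np.log(U @ K))  # geometric mean = barycenter
        V = q / (U @ K)                      # match the barycenter q
    return q

# Example: barycenter of two Gaussians on a 1-D grid.
x = np.linspace(0, 1, 50)
C = (x[:, None] - x[None, :]) ** 2
P = np.stack([np.exp(-(x - mu) ** 2 / 0.01) for mu in (0.25, 0.75)])
P /= P.sum(axis=1, keepdims=True)
bary = ibp_barycenter(P, C, weights=np.array([0.5, 0.5]))
```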

Zoom: https://us02web.zoom.us/j/9252879297

Please forward this message to anyone who might be interested
Dear colleagues,

We invite everyone interested in modern probability theory in high-dimensional spaces and its applications in ML to take part in the conference https://cs.hse.ru/hdilab/hdpa2021
Registration is open until April 7 (via the Sirius system).
Dear Colleagues,

Next Tuesday, April 13 (at 18:00), the speaker will be:

Alexey Kroshnin (IITP RAS, HSE University)

Robust k-means clustering in metric spaces

In this talk we consider robust median-of-means-based algorithms for the k-means clustering problem in a metric space. The main results are non-asymptotic excess distortion bounds that hold under a two-bounded-moments assumption in a general separable metric space. In the case of a Hilbert space, our bounds have a sub-Gaussian form depending on the probability mass of the lightest cluster of an optimal quantizer. In the special case of clustering in R^d, we prove matching (up to constant factors) non-asymptotic upper and lower bounds on the excess distortion, which depend on the probability mass of the lightest cluster and on the variance. The talk is based on joint work with Yegor Klochkov and Nikita Zhivotovskiy, https://arxiv.org/abs/2002.02339.
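
As a minimal illustration of the median-of-means idea (a generic sketch, not the paper's algorithm or analysis): the distortion of a candidate quantizer is estimated robustly by averaging the point-to-nearest-center cost within random blocks and taking the median of the block means; a quantizer can then be chosen by minimizing this criterion.

```python
import numpy as np

def mom_distortion(X, centers, n_blocks=11, seed=0):
    # Median-of-means estimate of the distortion E[min_j d(X, c_j)^2]:
    # robust under heavy tails, needing only two bounded moments.
    rng = np.random.default_rng(seed)
    cost = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(1)
    blocks = np.array_split(rng.permutation(len(X)), n_blocks)
    return np.median([cost[b].mean() for b in blocks])
```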

Zoom: https://us02web.zoom.us/j/9252879297

Please forward this message to anyone who might be interested
Dear colleagues, applications for the traditional youth summer school "Control, Information and Optimization" are now open! Information and registration are available at: https://ssopt.org
Dear Colleagues,

Next Tuesday, April 20 (at 18:00), the speaker will be:

Quentin Paris (HSE University)

Online learning with exponential weights in metric spaces

This paper addresses the problem of online learning in metric spaces using exponential weights. We extend the analysis of the exponentially weighted average forecaster, traditionally studied in the Euclidean setting, to a more abstract framework. Our results rely on the notion of barycenters, a suitable version of Jensen's inequality, and a synthetic notion of lower curvature bound in metric spaces known as the measure contraction property. We also adapt the online-to-batch conversion principle to apply our results in a statistical learning framework.
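
A minimal sketch of the baseline being generalized, the exponentially weighted average forecaster in the Euclidean setting; in a metric space the weighted mean below would be replaced by a weighted barycenter:

```python
import numpy as np

def ewa_forecaster(expert_preds, loss, eta=0.5):
    # expert_preds: (T, K, d) expert predictions; loss(p, t) -> scalar
    # loss of prediction p at round t.
    T, K, d = expert_preds.shape
    cum_loss = np.zeros(K)
    preds = np.empty((T, d))
    for t in range(T):
        w = np.exp(-eta * (cum_loss - cum_loss.min()))  # stable weights
        w /= w.sum()
        preds[t] = w @ expert_preds[t]  # Euclidean barycenter of experts
        cum_loss += np.array([loss(expert_preds[t, i], t) for i in range(K)])
    return preds
```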

Zoom: https://us02web.zoom.us/j/9252879297

Please forward this message to anyone who might be interested
Forwarded from ФКН НИУ ВШЭ
We invite everyone to the Faculty of Computer Science colloquium on "Geometry in Machine Learning Models"!

The talk will discuss the geometric approach to machine learning models and survey results from recent years, including:
📍 applications of hyperbolic spaces in machine learning and in building recommender models;
📍 construction of interpretable directions in the latent space of generative networks.

📆 Date: Tuesday, April 27, 16:20 – 17:40
🗣 Speaker: Ivan Oseledets, Skoltech

https://cs.hse.ru/announcements/463400550.html
Dear colleagues,

We have launched the 2nd project session "Modern Methods of Information Theory, Optimization and Control", which will take place July 19 – August 8 in Sochi (Sirius). Register and solve the entrance problems!
Dear Colleagues,

Next Tuesday, May 18 (at 18:00), the speaker will be:

Egor Klochkov (Cambridge University)

Fast rates for strongly convex optimization via stability

The sharpest known high probability generalization bounds for uniformly stable algorithms (Feldman, Vondrák, 2018, 2019), (Bousquet, Klochkov, Zhivotovskiy, 2020) contain a generally inevitable sampling error term of order 1/sqrt(n). When applied to excess risk bounds, this leads to suboptimal results in several standard stochastic convex optimization problems. We show that if the so-called Bernstein condition is satisfied, the term O(1/sqrt(n)) can be avoided, and high probability excess risk bounds of order up to O(1/n) are possible via uniform stability. Using this result, we show a high probability excess risk bound with the rate O(log n / n) for strongly convex and Lipschitz losses, valid for any empirical risk minimization method. This resolves a question of Shalev-Shwartz, Shamir, Srebro, and Sridharan (2009). We discuss how O(log n / n) high probability excess risk bounds are possible for projected gradient descent in the case of strongly convex and Lipschitz losses without the usual smoothness assumption. This is joint work with Nikita Zhivotovskiy and Olivier Bousquet.
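
For concreteness, a minimal sketch of the algorithm class in question, projected (sub)gradient descent for a strongly convex, Lipschitz objective over a ball with the classical 1/(mu t) step size; the talk's contribution is the stability-based high probability analysis, not the algorithm itself:

```python
import numpy as np

def projected_subgradient(subgrad, x0, mu, radius, n_steps):
    # Projected subgradient descent for a mu-strongly convex,
    # Lipschitz objective over the Euclidean ball of given radius.
    x = np.asarray(x0, dtype=float)
    for t in range(1, n_steps + 1):
        x = x - subgrad(x) / (mu * t)   # step size 1/(mu * t)
        nrm = np.linalg.norm(x)
        if nrm > radius:                # project back onto the ball
            x *= radius / nrm
    return x
```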

Zoom: https://us02web.zoom.us/j/9252879297

Please forward this message to anyone who might be interested
Dear Colleagues,

Next Tuesday, June 1 (at 18:00), the speaker will be:

Sergey Samsonov (HSE University)

UVIP: Model-Free Approach to Evaluate Reinforcement Learning Algorithms

Policy evaluation is an important instrument for the comparison of different algorithms in Reinforcement Learning (RL). Yet even precise knowledge of the value function V^{\pi} corresponding to a policy \pi does not provide reliable information on how far the policy \pi is from the optimal one. We present a novel model-free upper value iteration procedure (UVIP) that allows us to estimate the suboptimality gap V*(x) - V^{\pi}(x) from above and to construct confidence intervals for V*. Our approach relies on upper bounds on the solution of the Bellman optimality equation obtained via a martingale approach. We provide theoretical guarantees for UVIP under general assumptions and illustrate its performance on a number of benchmark RL problems. The talk is based on our recent work with Denis Belomestny, Ilya Levin, Eric Moulines, Alexey Naumov and Veronika Zorina.
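
To fix ideas about the quantity UVIP controls, here is a tabular sketch only; UVIP itself is model-free and builds the upper bound via a martingale construction rather than from known transition kernels as below:

```python
import numpy as np

def optimal_value(P, R, gamma, n_iter=1000):
    # Value iteration for V*: P has shape (A, S, S), R has shape (S, A).
    V = np.zeros(P.shape[1])
    for _ in range(n_iter):
        V = (R + gamma * np.einsum('asy,y->sa', P, V)).max(axis=1)
    return V

def policy_value(P, R, gamma, pi):
    # Exact V^pi from (I - gamma * P_pi) V = R_pi, deterministic pi: (S,).
    S = R.shape[0]
    P_pi, R_pi = P[pi, np.arange(S), :], R[np.arange(S), pi]
    return np.linalg.solve(np.eye(S) - gamma * P_pi, R_pi)

# gap = optimal_value(P, R, gamma) - policy_value(P, R, gamma, pi)
# is the suboptimality V*(x) - V^pi(x) that UVIP bounds from above.
```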

Zoom: https://us02web.zoom.us/j/9252879297

Please forward this message to anyone who might be interested