Sébastien Bubeck

Sr. Principal Researcher

Machine Learning and Optimization, Microsoft Research, Redmond


Building 99, 3920

Redmond, WA 98052

sebubeck AT microsoft DOT com

Associate Editor for Mathematics of Operations Research.

Associate Editor for Mathematical Statistics and Learning (publisher: European Mathematical Society).

I was on the program committee for NIPS 2012, NIPS 2014, NIPS 2017, NeurIPS 2024 (senior area chair), COLT 2013 (PC and local organizer), COLT 2014 (PC and local organizer), COLT 2015, COLT 2016, COLT 2017, COLT 2018 (co-chair), COLT 2024, ICML 2015, ICML 2016, ICML 2017, SODA 2017, STOC 2024, Random 2017, and ALT 2013.

Steering committee member (elected) for COLT from 2014 to 2017.

I am interested in a variety of topics in theoretical computer science and machine learning.

My best works have centered on online decision making, including solutions to several long-standing problems (minimax rates for multi-armed bandits and linear bandits at COLT 2009/COLT 2012/ALT 2018, best of both worlds for multi-armed bandits at COLT 2011, bandit convex optimization at COLT 2016/STOC 2017, progress on k-server and metrical task systems at STOC 2017/SODA 2018, chasing convex bodies at STOC 2019, multiplayer multi-armed bandits at COLT 2020).
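
For context, the rate in question for the basic K-armed problem can be stated as follows (in my own notation, not quoted from the papers above): over T rounds with rewards r_{t,i} in [0,1], the best achievable worst-case regret is of order sqrt(KT),

    \min_{\text{algorithm}} \; \max_{\text{environment}} \;
    \mathbb{E}\left[ \max_{i \in [K]} \sum_{t=1}^{T} r_{t,i} \;-\; \sum_{t=1}^{T} r_{t,I_t} \right]
    \;=\; \Theta\!\left(\sqrt{KT}\right),

where I_t denotes the arm played at round t.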

I have also worked on convex optimization (the entropic barrier at COLT 2015, a geometric view on acceleration in 2015, optimal distributed rates at ICML 2017/NIPS 2018/NeurIPS 2024/ICML 2024) and on network analysis (influence of the seed in preferential attachment graphs, and dimension estimation in random geometric graphs; work done in 2013/2014 that appeared in Random Structures and Algorithms). Other fun side projects included Langevin diffusion (NIPS 2015), the entropic CLT (International Mathematics Research Notices 2016), smoothed analysis of local search (STOC 2017), adversarial examples in ML (ICML 2024/NeurIPS 2024), and finding critical points of non-convex functions in low dimensions (COLT 2020).
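
To give a flavor of the Langevin diffusion work mentioned above, here is a minimal sketch of the discretized (projected) Langevin iteration; the potential, step size, and projection set below are illustrative choices of mine, not the setting of the NIPS 2015 paper.

    import numpy as np

    def grad_f(x):
        # Illustrative potential f(x) = ||x||^2 / 2 (a standard Gaussian restricted
        # to the unit ball); chosen for simplicity, not taken from the paper.
        return x

    def project_ball(x, radius=1.0):
        n = np.linalg.norm(x)
        return x if n <= radius else x * (radius / n)

    def projected_langevin(steps=10000, dim=2, eta=1e-3, seed=0):
        # One iteration: x <- Proj_K( x - eta * grad f(x) + sqrt(2 * eta) * Gaussian noise )
        rng = np.random.default_rng(seed)
        x = np.zeros(dim)
        samples = []
        for _ in range(steps):
            noise = rng.standard_normal(dim)
            x = project_ball(x - eta * grad_f(x) + np.sqrt(2 * eta) * noise)
            samples.append(x.copy())
        return np.array(samples)

    samples = projected_langevin()
    print(samples.mean(axis=0))  # roughly 0 by symmetry in this toy example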

Multiplayer multi-armed bandits

The non-stochastic version of the cooperative multiplayer multi-armed bandit problem (with collisions) turns out to be surprisingly challenging. In this first paper we obtain an optimal algorithm for the model with announced collisions. The model where collisions are not announced remains wide open (in terms of both upper and lower bounds). In the stochastic case we proved that one can in fact get optimal regret without ANY collisions.
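
For readers unfamiliar with the model, here is a minimal simulator of the collision feedback described above; the arm means, number of players, and the uniformly random play are illustrative choices of mine, not an algorithm from the papers.

    import numpy as np
    from collections import Counter

    def play_round(choices, arm_means, rng):
        # Collision model: a player collects the (Bernoulli) reward of its arm only
        # if it is the unique puller of that arm; colliding players get 0. In the
        # "announced collisions" model each player also observes the collision flag.
        counts = Counter(choices)
        rewards, collisions = [], []
        for arm in choices:
            collided = counts[arm] > 1
            collisions.append(collided)
            rewards.append(0.0 if collided else float(rng.random() < arm_means[arm]))
        return rewards, collisions

    rng = np.random.default_rng(0)
    arm_means = [0.9, 0.8, 0.5, 0.2]  # illustrative Bernoulli means
    num_players = 3
    for t in range(5):
        choices = rng.integers(len(arm_means), size=num_players).tolist()
        rewards, collisions = play_round(choices, arm_means, rng)
        print(t, choices, rewards, collisions)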

Chasing convex bodies

With an extremely fun team of co-authors (Yin Tat Lee, Yuanzhi Li, Mark Sellke) we finally managed to obtain a competitive algorithm for chasing convex bodies (after a couple of years of fruitless attempts); see also this youtube video. We also obtained a rather complete picture of the nested version of the problem. The latter approach was then used by Mark Sellke to obtain the optimal competitive ratio (see also this paper with very similar results).
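
To make the problem concrete, here is a minimal sketch of the cost model with the naive greedy strategy (move to the nearest point of each requested body, here Euclidean balls). Greedy is known not to be competitive in general, and the algorithms in the papers above are very different; the request sequence below is just an illustrative example of mine.

    import numpy as np

    def project_onto_ball(point, center, radius):
        # Euclidean projection onto the ball B(center, radius).
        d = point - center
        n = np.linalg.norm(d)
        return point if n <= radius else center + d * (radius / n)

    def greedy_chase(start, requests):
        # Serve each requested body by moving to its nearest point, and add up the
        # movement cost. This only illustrates the objective; it is NOT the
        # competitive algorithm from the papers above.
        x, cost = np.asarray(start, dtype=float), 0.0
        for center, radius in requests:
            y = project_onto_ball(x, np.asarray(center, dtype=float), radius)
            cost += np.linalg.norm(y - x)
            x = y
        return x, cost

    requests = [((3.0, 0.0), 1.0), ((0.0, 3.0), 1.0), ((-3.0, 0.0), 1.0)]  # toy requests
    final_point, total_cost = greedy_chase((0.0, 0.0), requests)
    print(final_point, total_cost)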

k-server and metrical task systems

With a fantastic team of co-authors (Michael Cohen, James R. Lee, Yin Tat Lee, Aleksander Madry) we improved the state-of-the-art competitive ratios for k-server and metrical task systems using the mirror descent algorithm. To learn more about it I recommend first taking a look at this [youtube video], then these 3 blog posts ([part 1], [part 2], [part 3]), and finally [the k-server paper] itself. The [MTS paper] is finally online too, and it is a great starting point for getting into this line of work.
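
As a very small taste of the technique, here is the textbook entropic mirror descent step on the probability simplex, which is the basic primitive behind this line of work; the papers refine it considerably (weighted entropies, continuous time, tree metrics), so the snippet below is a toy illustration of mine, not their algorithm.

    import numpy as np

    def entropic_mirror_descent_step(p, cost, eta):
        # Mirror descent on the simplex with the negative-entropy mirror map:
        # p_i <- p_i * exp(-eta * cost_i), then renormalize (multiplicative weights).
        w = p * np.exp(-eta * np.asarray(cost, dtype=float))
        return w / w.sum()

    # Toy flavour of metrical task systems on a uniform metric: maintain a
    # distribution over n states and update it against each round's cost vector.
    n, eta = 4, 0.5
    rng = np.random.default_rng(0)
    p = np.full(n, 1.0 / n)
    for t in range(5):
        cost = rng.random(n)  # illustrative service costs for this round
        p = entropic_mirror_descent_step(p, cost, eta)
        print(t, np.round(p, 3))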

Bandit convex optimization

From July 2014 to July 2016, with various co-authors at MSR, we dedicated a lot of energy to bandit convex optimization. The end product is a new efficient algorithm. To learn more about it I recommend first taking a look at this [youtube video], then these 3 blog posts ([part 1], [part 2], [part 3]), and finally [the paper] itself.
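
For background, here is the classical one-point gradient estimator that predates this work and is the simplest way to see how a single function value per round can drive an optimization step. It is NOT the kernel-based algorithm of the paper above, and the loss, step sizes, and constraint set below are illustrative choices of mine.

    import numpy as np

    def one_point_gradient_estimate(f, x, delta, rng):
        # Classical estimator: (d / delta) * f(x + delta * u) * u, with u uniform on
        # the unit sphere, is an unbiased gradient estimate of a smoothed version of f.
        d = x.shape[0]
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        return (d / delta) * f(x + delta * u) * u

    def bandit_gradient_descent(f, x0, steps=2000, eta=0.01, delta=0.1, radius=1.0, seed=0):
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for _ in range(steps):
            g = one_point_gradient_estimate(f, x, delta, rng)
            x = x - eta * g
            n = np.linalg.norm(x)
            if n > radius:  # keep iterates in a Euclidean ball
                x *= radius / n
        return x

    f = lambda z: float(np.sum((z - 0.3) ** 2))  # illustrative convex loss
    print(bandit_gradient_descent(f, np.zeros(2)))  # prints the (noisy) final iterate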

Research interests

  • machine learning

  • theoretical computer science

  • convex optimization

  • multi-armed bandits

  • online algorithms (in particular metrical task systems and k-server)

  • statistical network analysis, random graphs and random matrices

  • applications of information theory to learning, optimization, and probability
