Date : 29/09/2016

14h00 – 18h00

Télécom ParisTech

Amphithéâtre Estaunié

46 rue Barrault 75013 Paris (métro Corvisart)

Plan d’accès / Map

Inscription / Registration

To see the slides, please click on the title of the talk.

- 14h00 : Network of Bandits, **Raphaël Feraud**, Orange Labs
- 14h50 : A Universal Catalyst for First-Order Optimization, **Julien Mairal**, Inria Grenoble
- 15h40 : Coffee break
- 16h10 : Performance estimation of first-order methods for composite convex optimization, **François Glineur**, Université Catholique de Louvain
- 17h00 : Primal-Dual Rates and Certificates, **Martin Jaggi**, EPFL

**Abstracts** :

——–

**Network of Bandits, Raphaël Feraud, Orange Labs**

Distributing machine learning tasks over users' devices offers several advantages for applications: scalability, reduced deployment costs, and privacy.

We propose a basic building block, DISTRIBUTED MEDIAN ELIMINATION, which can be used to distribute the best-arm-identification task in various schemes.

In comparison to MEDIAN ELIMINATION run by a single player, we show a near-optimal speed-up factor in O(α N/K ln² K), where K is the number of actions, N is the number of players, and α ∈ (0, 1/2]. This speed-up factor is reached with a near-optimal communication cost. Experiments illustrate and complete the analysis: compared with MEDIAN ELIMINATION under an unconstrained communication cost, the distributed version shows practical improvements.
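To make the building block concrete, here is a minimal single-player sketch of MEDIAN ELIMINATION (Even-Dar et al.), the routine the abstract distributes across players: each round samples every surviving arm, then eliminates the arms below the median empirical mean. Function names, parameter values, and the sampling schedule are illustrative simplifications, not the authors' code.

```python
import math
import random

def median_elimination(arms, epsilon=0.2, delta=0.1):
    """Return an arm whose mean is within epsilon of the best,
    with probability at least 1 - delta. Each entry of `arms` is a
    zero-argument callable returning a reward in [0, 1]."""
    surviving = list(range(len(arms)))
    eps, dlt = epsilon / 4.0, delta / 2.0
    while len(surviving) > 1:
        # Sample each surviving arm enough times for this round's accuracy.
        n = int((4.0 / eps ** 2) * math.log(3.0 / dlt)) + 1
        means = {a: sum(arms[a]() for _ in range(n)) / n for a in surviving}
        # Eliminate the arms whose empirical mean falls below the median.
        median = sorted(means.values())[len(surviving) // 2]
        surviving = [a for a in surviving if means[a] >= median]
        eps, dlt = 0.75 * eps, dlt / 2.0
    return surviving[0]
```

In the distributed setting of the abstract, the K arms would be partitioned among the N players, each running this elimination on its share before the survivors are merged; the sketch above is only the single-player core.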

——–

**A Universal Catalyst for First-Order Optimization, Julien Mairal, Inria Grenoble**

We introduce a generic scheme for accelerating first-order optimization methods in the sense of Nesterov. Our approach consists of minimizing a convex objective by approximately solving a sequence of well-chosen auxiliary problems, leading to faster convergence. This strategy applies to a large class of algorithms, including gradient descent, block coordinate descent, SAG, SAGA, SDCA, SVRG, Finito/MISO, and their proximal variants. For all of these approaches, we provide acceleration and explicit support for non-strongly convex objectives. In addition to the theoretical speed-up, we also show that acceleration is useful in practice, especially for ill-conditioned problems, where we measure dramatic improvements. This is joint work with Hongzhou Lin and Zaid Harchaoui.
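The structure of the scheme can be sketched as an outer loop: each iteration approximately minimizes a proximal auxiliary problem h_k(x) = f(x) + (κ/2)||x − y_k||² with the base method, then applies a Nesterov-style extrapolation. The sketch below uses plain gradient descent as the inner method on a quadratic; the fixed extrapolation coefficient `beta` and all parameter values are our simplifications, not the paper's precise sequences.

```python
import numpy as np

def catalyst(grad_f, x0, kappa=1.0, n_outer=60, n_inner=200, lr=0.018, beta=0.9):
    """Illustrative Catalyst-style outer loop (assumed parameters):
    approximately solve min_x f(x) + (kappa/2)||x - y_k||^2 with a few
    gradient steps, then extrapolate the iterates."""
    x = x_prev = np.asarray(x0, dtype=float)
    y = x.copy()
    for _ in range(n_outer):
        z = y.copy()
        for _ in range(n_inner):
            # Inner gradient step on the auxiliary problem h_k.
            z = z - lr * (grad_f(z) + kappa * (z - y))
        x_prev, x = x, z
        # Nesterov-style extrapolation of the outer iterates.
        y = x + beta * (x - x_prev)
    return x
```

The added κ-term makes each auxiliary problem strongly convex, which is what lets a slow base method converge quickly on the subproblems even when f itself is ill-conditioned.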

——–

**Performance estimation of first-order methods for composite convex optimization, François Glineur, Université Catholique de Louvain** — abstract

——–

**Primal-Dual Rates and Certificates, Martin Jaggi, EPFL**

We propose an algorithm-independent framework to equip existing optimization methods with primal-dual certificates. Such certificates, and the corresponding convergence-rate guarantees, are important for practitioners to diagnose progress, in particular in machine learning applications. We obtain new primal-dual convergence rates, e.g., for the Lasso as well as many L1-, Elastic-Net-, group-Lasso- and TV-regularized problems. The theory applies to any norm-regularized generalized linear model. Our approach provides efficiently computable duality gaps which are globally defined, without modifying the original problems in the region of interest. If time permits, we will also discuss distributed optimization algorithms that leverage the same duality structure to obtain communication-efficient schemes.

Joint work with Celestine Dünner, Simone Forte, Virginia Smith and Martin Takáč.
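To illustrate the kind of globally defined, efficiently computable duality gap the abstract refers to, here is a standard construction for the Lasso (our own sketch, not the paper's code): the residual is rescaled into a feasible dual point, and the resulting gap upper-bounds the suboptimality of any primal candidate.

```python
import numpy as np

def lasso_duality_gap(A, b, x, lam):
    """Duality gap for the Lasso  min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    The dual problem is  max_u -0.5*||u||^2 - u@b  s.t. ||A.T @ u||_inf <= lam,
    so rescaling the residual r = Ax - b yields a feasible dual point."""
    r = A @ x - b
    primal = 0.5 * r @ r + lam * np.abs(x).sum()
    # Shrink the residual until the dual constraint holds.
    scale = min(1.0, lam / max(np.abs(A.T @ r).max(), 1e-12))
    u = scale * r
    dual = -0.5 * u @ u - u @ b
    return primal - dual  # nonnegative; zero exactly at the optimum
```

Because the gap is computable from the current iterate alone, it serves as a stopping criterion and progress certificate regardless of which algorithm produced the iterate — the algorithm-independence emphasized in the abstract.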

——–

We thank the ANR project Odissee for its support in organising this event.