Restarting the accelerated coordinate descent method with a rough strong convexity estimate
Abstract
We propose new restarting strategies for the accelerated coordinate descent
method. Our main contribution is to show that for a well chosen sequence of restarting times,
the restarted method has a nearly geometric rate of convergence. A major feature of the
method is that it can take advantage of the local quadratic error bound of the objective function
without knowing the actual value of the error bound. We also show that under the more
restrictive assumption that the objective function is strongly convex, any fixed restart period
leads to a geometric rate of convergence. Finally, we illustrate the properties of the algorithm
on a regularized logistic regression problem and on a Lasso problem.
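To make the restarting scheme concrete, the following is a minimal NumPy sketch, not the paper's exact method: an APPROX-style accelerated randomized coordinate descent applied to a smooth least-squares objective, restarted along a hypothetical doubling schedule of restart periods standing in for the "well chosen sequence of restarting times" analyzed in the paper. The function names `accel_cd` and `restarted_accel_cd`, the test problem, and the schedule are all illustrative assumptions.

```python
import numpy as np

def accel_cd(A, b, x0, n_iters, rng):
    """One run of APPROX-style accelerated randomized coordinate descent
    on the smooth objective f(x) = 0.5 * ||A x - b||^2 (illustrative)."""
    n = A.shape[1]
    L = (A ** 2).sum(axis=0)                # coordinate-wise Lipschitz constants
    x, z = x0.copy(), x0.copy()
    theta = 1.0 / n
    for _ in range(n_iters):
        y = (1.0 - theta) * x + theta * z   # momentum interpolation
        i = rng.integers(n)                 # sample a coordinate uniformly
        g = A[:, i] @ (A @ y - b)           # i-th partial derivative at y
        x = y
        x[i] -= g / L[i]                    # short step on coordinate i
        z[i] -= g / (n * theta * L[i])      # long step on the auxiliary sequence
        # theta_{k+1} solves theta^2 = (1 - theta) * theta_k^2
        theta = 0.5 * (np.sqrt(theta ** 4 + 4 * theta ** 2) - theta ** 2)
    return x

def restarted_accel_cd(A, b, x0, restart_periods, rng):
    """Restart the accelerated method after each period in `restart_periods`:
    the momentum is reset (z = x, theta = 1/n) and the run resumes from x."""
    x = x0.copy()
    for period in restart_periods:
        x = accel_cd(A, b, x, period, rng)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 50))
    b = rng.standard_normal(200)
    schedule = [100 * 2 ** j for j in range(6)]  # hypothetical doubling schedule
    x = restarted_accel_cd(A, b, np.zeros(50), schedule, rng)
    print("f(x) =", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```

Resetting the momentum at each restart discards stale extrapolation and, per the abstract, yields a nearly geometric convergence rate for a well chosen schedule even when the strong convexity (or error bound) constant is unknown; a fixed restart period gives a geometric rate under the stronger assumption of strong convexity.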