Semi-stochastic coordinate descent

Jakub Konečný*, Zheng Qu, Peter Richtárik

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

We propose a novel stochastic gradient method, semi-stochastic coordinate descent (S2CD), for the problem of minimizing a strongly convex function represented as the average of a large number of smooth convex functions: f(x) = (1/n)∑_i f_i(x). Our method first performs a deterministic step (computation of the gradient of f at the starting point), followed by a large number of stochastic steps. The process is repeated a few times, with the last stochastic iterate becoming the new starting point at which the deterministic step is taken. The novelty of our method lies in how the stochastic steps are performed. In each such step, we pick a random function f_i and a random coordinate j, both using non-uniform distributions, and update a single coordinate of the decision vector only, based on the computation of the j-th partial derivative of f_i at two different points. Each random step of the method constitutes an unbiased estimate of the gradient of f; moreover, the squared norm of the steps goes to zero in expectation, meaning that the stochastic estimate of the gradient progressively improves. The computational complexity of the method is the sum of two terms: O(n log(1/ε)) evaluations of gradients ∇f_i and O(κ̂ log(1/ε)) evaluations of partial derivatives ∇_j f_i, where κ̂ is a novel condition number.
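For concreteness, here is a minimal sketch of the outer/inner loop pattern the abstract describes, applied to ridge regression. The step size, epoch length, and the sampling distributions q (over functions) and p (over coordinates) below are illustrative stand-ins, not the tuned non-uniform distributions or the condition number κ̂ analysed in the paper; only the structure (a full-gradient snapshot followed by many unbiased single-coordinate updates) follows the abstract.

```python
import numpy as np

def s2cd_ridge(A, b, lam, h=1e-4, outer=30, inner=None, seed=0):
    """Sketch of the semi-stochastic coordinate descent pattern on ridge
    regression, f(x) = (1/n) sum_i [ (a_i.x - b_i)^2 / 2 + (lam/2)||x||^2 ].

    Hypothetical choices (not from the paper): the step size h, the epoch
    length, and the sampling distributions q and p are simple illustrative
    stand-ins for the paper's non-uniform distributions.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    m = inner or 2 * n                 # epoch length (heuristic choice)
    x = np.zeros(d)

    # Illustrative non-uniform sampling: functions by squared row norm,
    # coordinates by squared column norm (plus lam so every p_j > 0).
    q = np.linalg.norm(A, axis=1) ** 2
    q /= q.sum()
    p = np.linalg.norm(A, axis=0) ** 2 + lam
    p /= p.sum()

    def partial(i, j, z):
        # j-th partial derivative of f_i at z for the ridge objective above.
        return A[i, j] * (A[i] @ z - b[i]) + lam * z[j]

    for _ in range(outer):
        y = x.copy()                              # new snapshot point
        g = A.T @ (A @ y - b) / n + lam * y       # deterministic step: full gradient at y
        for _ in range(m):
            i = rng.choice(n, p=q)
            j = rng.choice(d, p=p)
            # The j-th partial of f_i at two points (current x and snapshot y)
            # corrects the stale full gradient; dividing by n*q[i] and p[j]
            # makes the single-coordinate step an unbiased estimate of grad f(x).
            gj = g[j] + (partial(i, j, x) - partial(i, j, y)) / (n * q[i])
            x[j] -= h / p[j] * gj                 # update one coordinate only
    return x

# Tiny usage example on synthetic data.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 20))
    b = rng.standard_normal(200)
    x = s2cd_ridge(A, b, lam=0.1)
    grad = A.T @ (A @ x - b) / len(b) + 0.1 * x
    print("||grad f(x)|| =", np.linalg.norm(grad))
```

Unbiasedness can be checked directly: taking the expectation over i removes the 1/(n·q_i) weighting, and taking it over j removes the 1/p_j weighting, leaving ∇f(y) + ∇f(x) − ∇f(y) = ∇f(x).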

Original language: English (US)
Pages (from-to): 993-1005
Number of pages: 13
Journal: Optimization Methods and Software
Volume: 32
Issue number: 5
State: Published - 3 Sep 2017

Keywords

  • Stochastic gradient
  • coordinate descent
  • empirical risk minimization

ASJC Scopus subject areas

  • Software
  • Control and Optimization
  • Applied Mathematics
