TY - JOUR

T1 - Stochastic three points method for unconstrained smooth minimization

AU - Bergou, El Houcine

AU - Gorbunov, Eduard

AU - Richtarik, Peter

N1 - KAUST Repository Item: Exported on 2020-11-02
Acknowledgements: This author received support from the AgreenSkills+ fellowship programme which has received funding from the EU’s Seventh Framework Programme under grant agreement No FP7-609398 (AgreenSkills+ contract)

PY - 2020/10/1

Y1 - 2020/10/1

N2 - In this paper we consider the unconstrained minimization problem of a smooth function in R^n in a setting where only function evaluations are possible. We design a novel randomized derivative-free algorithm, the stochastic three points (STP) method, and analyze its iteration complexity. At each iteration, STP generates a random search direction according to a certain fixed probability law. Our assumptions on this law are very mild: roughly speaking, all laws which do not concentrate all measure on any halfspace passing through the origin will work. For instance, we allow for the uniform distribution on the sphere and also distributions that concentrate all measure on a positive spanning set. Although our approach is designed to not explicitly use derivatives, it covers some first-order methods. For instance, if the probability law is chosen to be the Dirac distribution concentrated on the sign of the gradient, then STP recovers the signed gradient descent method. If the probability law is the uniform distribution on the coordinates of the gradient, then STP recovers the randomized coordinate descent method. The complexity of STP depends on the probability law via a simple characteristic closely related to the cosine measure, which is used in the analysis of deterministic direct search (DDS) methods. Unlike in DDS, where O(n) (n is the dimension of x) function evaluations must be performed in each iteration in the worst case, our method requires only two new function evaluations per iteration. Consequently, while the complexity of DDS depends quadratically on n, our method depends linearly on n.

AB - In this paper we consider the unconstrained minimization problem of a smooth function in R^n in a setting where only function evaluations are possible. We design a novel randomized derivative-free algorithm, the stochastic three points (STP) method, and analyze its iteration complexity. At each iteration, STP generates a random search direction according to a certain fixed probability law. Our assumptions on this law are very mild: roughly speaking, all laws which do not concentrate all measure on any halfspace passing through the origin will work. For instance, we allow for the uniform distribution on the sphere and also distributions that concentrate all measure on a positive spanning set. Although our approach is designed to not explicitly use derivatives, it covers some first-order methods. For instance, if the probability law is chosen to be the Dirac distribution concentrated on the sign of the gradient, then STP recovers the signed gradient descent method. If the probability law is the uniform distribution on the coordinates of the gradient, then STP recovers the randomized coordinate descent method. The complexity of STP depends on the probability law via a simple characteristic closely related to the cosine measure, which is used in the analysis of deterministic direct search (DDS) methods. Unlike in DDS, where O(n) (n is the dimension of x) function evaluations must be performed in each iteration in the worst case, our method requires only two new function evaluations per iteration. Consequently, while the complexity of DDS depends quadratically on n, our method depends linearly on n.

UR - http://hdl.handle.net/10754/653110

UR - https://epubs.siam.org/doi/10.1137/19M1244378

UR - http://www.scopus.com/inward/record.url?scp=85093507650&partnerID=8YFLogxK

U2 - 10.1137/19M1244378

DO - 10.1137/19M1244378

M3 - Article

SN - 1052-6234

VL - 30

SP - 2726

EP - 2749

JO - SIAM Journal on Optimization

JF - SIAM Journal on Optimization

IS - 4

ER -