A Stochastic Derivative-Free Optimization Method with Importance Sampling: Theory and Learning to Control

Adel Bibi, El Houcine Bergou, Ozan Sener, Bernard Ghanem, Peter Richtarik

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

We consider the problem of unconstrained minimization of a smooth objective function in ℝ^n in a setting where only function evaluations are possible. While importance sampling is one of the most popular techniques used by machine learning practitioners to accelerate the convergence of their models when applicable, there is little existing theory for this acceleration in the derivative-free setting. In this paper, we propose the first derivative-free optimization method with importance sampling and derive new, improved complexity results for non-convex, convex, and strongly convex functions. We conduct extensive experiments on various synthetic and real LIBSVM datasets confirming our theoretical results. We also test our method on a collection of continuous control tasks in MuJoCo environments of varying difficulty. Experiments show that our algorithm is practical for high-dimensional continuous control problems, where importance sampling yields a significant improvement in sample complexity.
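The abstract does not spell out the algorithm, so the following is only a minimal sketch of the general idea it describes: a derivative-free step that samples a search direction by importance sampling rather than uniformly. The coordinate-direction scheme, the per-coordinate smoothness constants L_i used to build the sampling distribution, the quadratic test objective, and the function name dfo_is_step are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dfo_is_step(f, x, alpha, probs, rng):
    """One derivative-free step: sample coordinate i with probability probs[i]
    (importance sampling), evaluate f at x, x + alpha*e_i, and x - alpha*e_i,
    and keep whichever of the three points has the smallest objective value."""
    n = x.size
    i = rng.choice(n, p=probs)
    e = np.zeros(n)
    e[i] = 1.0
    candidates = [x, x + alpha * e, x - alpha * e]
    return min(candidates, key=f)

# Illustrative setup (assumed, not from the paper): a quadratic with very
# different curvature per coordinate, and sampling probabilities proportional
# to assumed per-coordinate smoothness constants L_i.
rng = np.random.default_rng(0)
L = np.array([100.0, 10.0, 1.0, 0.1])      # assumed per-coordinate constants
probs = L / L.sum()                         # importance sampling distribution
f = lambda x: 0.5 * np.dot(L * x, x)        # f(x) = 0.5 * sum_i L_i * x_i^2
x = rng.standard_normal(4)

for t in range(2000):
    x = dfo_is_step(f, x, alpha=0.1 / np.sqrt(t + 1), probs=probs, rng=rng)

print("final objective:", f(x))
```

In this sketch, directions with larger assumed L_i are sampled more often, which is the intuition behind importance sampling here: spend more function evaluations along the directions where the objective changes fastest.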
Original language: English (US)
Title of host publication: Proceedings of the AAAI Conference on Artificial Intelligence
Publisher: Association for the Advancement of Artificial Intelligence (AAAI)
Pages: 3275-3282
Number of pages: 8
State: Published - Apr 3 2020
