Linearly Convergent Frank-Wolfe without Prior Knowledge

Abstract

Structured constraints in machine learning have recently brought the Frank-Wolfe (FW) family of algorithms back into the spotlight. The Away-steps (AFW) and Pairwise (PFW) variants have been shown to converge linearly for polytopic constraints. However, these improved variants suffer from two practical limitations: each iteration requires solving a 1-dimensional minimization problem to determine the step-size, as well as an exact solution to the Frank-Wolfe linear subproblems. In this paper, we propose simple modifications of AFW and PFW that lift both restrictions simultaneously. Our method relies on a sufficient decrease condition to determine the step-size, and only requires evaluation and gradient oracles for the objective, along with an approximate solution to the Frank-Wolfe linear subproblems. Furthermore, the theoretical convergence rates of our methods match those of the exact line-search versions. Benchmarks on different machine learning problems illustrate large practical performance gains for the proposed variants.
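
To make the step-size strategy concrete, here is a minimal sketch of the sufficient decrease (backtracking) idea applied to vanilla Frank-Wolfe, not the paper's AFW/PFW variants. The function names (`fw_backtracking`, `lmo_simplex`) and the backtracking parameters (`tau`, `eta`) are assumptions for this illustration; the acceptance test is the standard quadratic upper-bound condition with an adaptively estimated Lipschitz constant.

```python
import numpy as np

def lmo_simplex(grad):
    """Linear minimization oracle over the probability simplex:
    argmin_{s in simplex} <grad, s> is attained at a vertex."""
    s = np.zeros_like(grad)
    s[np.argmin(grad)] = 1.0
    return s

def fw_backtracking(f, grad_f, lmo, x0, max_iter=200,
                    L0=1.0, tau=2.0, eta=0.9, tol=1e-8):
    """Frank-Wolfe with a backtracking step-size chosen by a sufficient
    decrease condition, instead of exact line search or a fixed schedule.
    Only needs evaluation and gradient oracles on f."""
    x = x0.copy()
    L = L0  # running estimate of the gradient's Lipschitz constant
    for _ in range(max_iter):
        g = grad_f(x)
        s = lmo(g)
        d = s - x                 # FW direction
        gap = -g.dot(d)           # FW duality gap (>= 0)
        if gap < tol:
            break
        L *= eta                  # optimistically shrink the estimate
        while True:
            # minimizer of the quadratic model over gamma in [0, 1]
            gamma = min(gap / (L * d.dot(d)), 1.0)
            x_new = x + gamma * d
            # sufficient decrease: f must lie below its quadratic upper bound
            if f(x_new) <= f(x) - gamma * gap + 0.5 * L * gamma**2 * d.dot(d):
                break
            L *= tau              # backtrack: the estimate was too small
        x = x_new
    return x

# Example usage: project a point b onto the simplex, f(x) = 0.5 * ||x - b||^2
rng = np.random.default_rng(0)
b = rng.normal(size=10)
x_star = fw_backtracking(lambda x: 0.5 * np.sum((x - b) ** 2),
                         lambda x: x - b,
                         lmo_simplex,
                         np.full(10, 0.1))
```

The inner loop accepts a step as soon as the objective sits below the quadratic model built from the current estimate `L`, so no exact 1-dimensional minimization is ever performed; an inexact LMO could be substituted for `lmo_simplex` without changing the structure.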

Date
Dec 14, 2019 14:45
Location
Vancouver, Canada
Geoffrey Négiar
PhD Student

I’m a PhD student at UC Berkeley’s BAIR lab. My research focuses on optimization for machine learning, NLP, and modeling in the presence of uncertainty.