Robust Multi-objective Bilevel Optimization With Applications In Machine Learning
Abstract
We consider the generic min-max form of a multi-objective bilevel nonconvex optimization problem in which (i) there are n objectives at both levels, (ii) the upper-level variable is shared across all objectives at both levels, and (iii) each lower-level variable is associated with exactly one objective at each of the upper and lower levels. Such problems arise in machine learning applications such as representation learning and hyperparameter optimization. We propose a single-loop two-timescale algorithm based on gradient descent-ascent, building upon recent single-objective bilevel optimization schemes. Our theoretical analysis shows that this algorithm converges to a first-order stationary point at a rate of O(√n T^{-2/5}) for a class of nonconvex problems, where n is the number of objectives at each level and T is the number of algorithm iterations.
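As a hedged illustration of the kind of single-loop two-timescale gradient descent-ascent scheme the abstract describes, below is a minimal sketch on a toy min-max problem. The objective, step sizes, and all names here are our own illustrative assumptions, not the paper's algorithm: both variables are updated simultaneously in one loop, with the ascent step size kept larger than the descent step size (the two timescales).

```python
def two_timescale_gda(x0=1.0, lam0=1.0, eta_x=0.05, eta_lam=0.1, T=2000):
    """Single-loop two-timescale gradient descent-ascent (illustrative sketch).

    Toy problem (our assumption, not from the paper):
        min_x max_lam  f(x, lam) = 0.5*x**2 + lam*x - 0.5*lam**2
    which is convex in x, concave in lam, with saddle point (0, 0).
    The ascent step eta_lam is chosen larger than the descent step
    eta_x, mimicking the faster timescale for the inner variable.
    """
    x, lam = x0, lam0
    for _ in range(T):
        grad_x = x + lam    # d f / d x
        grad_lam = x - lam  # d f / d lam
        # Single loop: both variables updated in the same iteration,
        # descent on x, ascent on lam, at different step sizes.
        x = x - eta_x * grad_x
        lam = lam + eta_lam * grad_lam
    return x, lam
```

On this toy instance the iterates spiral into the saddle point (0, 0); the actual algorithm in the paper additionally handles the bilevel structure and the n coupled objectives, which this sketch omits.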