Gosia Lazuka, Andreea Simona Anghel, et al.
SC 2024
We combine two advanced ideas widely used in optimization for machine learning, the shuffling strategy and the momentum technique, to develop a novel method with momentum for finite-sum minimization problems. We establish that our algorithm achieves a state-of-the-art convergence rate for any shuffling strategy under standard assumptions. In particular, if a random shuffling strategy is used, the convergence rate improves further by a fraction of the data size. When the shuffling strategy is fixed, we develop another new algorithm that is similar to existing momentum methods, and prove the same convergence rate for this algorithm under the L-smoothness and bounded-gradient assumptions. We demonstrate our algorithms via numerical simulations on standard datasets and compare them with existing shuffling methods.
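The two ingredients named in this abstract, a per-epoch shuffling of the component gradients and a momentum buffer, can be illustrated with a minimal sketch. This is not the paper's algorithm: the abstract does not specify the update rule, so a standard heavy-ball-style accumulation stands in for it, and the function name, the gradient oracle grad_i(x, i), and the parameters lr and beta are all hypothetical.

    import numpy as np

    def shuffling_momentum_sgd(grad_i, x0, n, epochs=10, lr=0.01, beta=0.9, rng=None):
        # Minimal sketch: minimize (1/n) * sum_i f_i(x) by sweeping over the
        # n component gradients in a shuffled order each epoch, with a
        # heavy-ball-style momentum buffer v (illustrative, not the paper's method).
        if rng is None:
            rng = np.random.default_rng(0)
        x = x0.copy()
        v = np.zeros_like(x0)              # momentum buffer
        for _ in range(epochs):
            perm = rng.permutation(n)      # random shuffling strategy; reusing one
                                           # fixed permutation each epoch models the
                                           # fixed-shuffling variant in the abstract
            for i in perm:
                g = grad_i(x, i)           # gradient of the i-th component at x
                v = beta * v + g           # momentum accumulation
                x = x - lr * v             # parameter update
        return x

    # Usage on a toy least-squares finite sum, f_i(x) = 0.5 * (a_i^T x - b_i)^2:
    rng = np.random.default_rng(1)
    A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
    x_hat = shuffling_momentum_sgd(lambda x, i: (A[i] @ x - b[i]) * A[i],
                                   x0=np.zeros(5), n=100)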
Natalia Martinez Gil, Dhaval Patel, et al.
UAI 2024
Shubhi Asthana, Pawan Chowdhary, et al.
KDD 2021
Pavithra Harsha, Ali Koc, et al.
INFORMS 2021