One way to speed up convergence in a large optimization problem is to introduce a smaller, approximate version of the problem at a coarser scale and to alternate between relaxation steps for the fine-scale and coarse-scale problems. We exhibit such an optimization method for neural networks governed by quite general objective functions. At the coarse scale there is a smaller approximating neural net which, like the original net, is nonlinear and has a nonquadratic objective function. The transitions and information flow from fine to coarse scale and back do not disrupt the optimization, and the user need only specify a partition of the original fine-scale variables. Thus the method can be applied easily to many problems and networks. We show positive experimental results including cost comparisons. © 1991 IEEE
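
The alternation the abstract describes (relax at the fine scale, solve a smaller coarse-scale correction problem defined only by a user-supplied partition of the fine variables, then return to the fine scale) can be sketched in a few lines. The Python below is an illustrative sketch only: the names (two_scale_step, relax, numeric_grad), the use of plain gradient descent as the relaxation step, and the broadcast-within-group prolongation are assumptions for clarity, not the paper's actual formulation.

import numpy as np

def numeric_grad(f, x, eps=1e-6):
    # Central-difference gradient so the sketch needs no autodiff library.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def relax(f, x, steps=20, lr=1e-2):
    # "Relaxation" here is plain gradient descent on the given objective (an assumption).
    for _ in range(steps):
        x = x - lr * numeric_grad(f, x)
    return x

def two_scale_step(f, x, partition, fine_steps=20, coarse_steps=20, lr=1e-2):
    # One alternation: fine-scale relaxation, a small coarse-scale correction
    # problem built from the partition, then fine-scale relaxation again.
    x = relax(f, x, fine_steps, lr)
    groups = [np.flatnonzero(partition == k) for k in np.unique(partition)]

    def prolong(c):
        # Broadcast each coarse correction to every fine variable in its group.
        dx = np.zeros_like(x)
        for k, idx in enumerate(groups):
            dx[idx] = c[k]
        return dx

    # The coarse objective is the original (nonquadratic) objective restricted
    # to group-wise shifts, so it stays nonlinear like the fine-scale problem.
    f_coarse = lambda c: f(x + prolong(c))
    c = relax(f_coarse, np.zeros(len(groups)), coarse_steps, lr)
    x = x + prolong(c)                      # carry the coarse correction back
    return relax(f, x, fine_steps, lr)

# Toy usage: 6 fine variables grouped into 2 coarse blocks, smooth objective.
f = lambda x: float(np.sum((x - np.arange(x.size)) ** 2))
x0 = np.zeros(6)
partition = np.array([0, 0, 0, 1, 1, 1])
x1 = two_scale_step(f, x0, partition)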