Amarachi Blessing Mbakwe, Joy Wu, et al.
NeurIPS 2023
The rate of convergence of net output error is very low when training feedforward neural networks for multiclass problems using the back-propagation algorithm. While back-propagation reduces the Euclidean distance between the actual and desired output vectors, the differences between some of the components of these vectors increase in the first iteration. Furthermore, the magnitudes of subsequent weight changes in each iteration are very small, so that many iterations are required to compensate for the error increase that some components incur in the initial iterations. Our approach is to use a modular network architecture, reducing a K-class problem to a set of K two-class problems, with a separately trained network for each of the simpler problems. Speedups of one order of magnitude have been obtained experimentally, and in some cases convergence was possible using the modular approach but not using a nonmodular network. © 1995 IEEE
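The decomposition described here is a one-vs-rest scheme: each module is trained against a binary target ("class k" vs. "everything else") and the K module outputs are combined at prediction time. A minimal sketch of that idea, assuming plain NumPy single-hidden-layer networks trained by back-propagation on squared error; the class names, hidden size, and learning rate below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class BinaryMLP:
    """One module: a small feedforward net trained to separate
    one class from all other classes."""

    def __init__(self, n_in, n_hidden=8, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)
        self.o = sigmoid(self.h @ self.W2 + self.b2)
        return self.o

    def train_step(self, X, y):
        # Plain back-propagation on squared error for this two-class subproblem.
        o = self.forward(X)
        err = o - y[:, None]
        delta_o = err * o * (1 - o)
        delta_h = (delta_o @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= self.lr * self.h.T @ delta_o / len(X)
        self.b2 -= self.lr * delta_o.mean(axis=0)
        self.W1 -= self.lr * X.T @ delta_h / len(X)
        self.b1 -= self.lr * delta_h.mean(axis=0)

def train_modular(X, labels, n_classes, epochs=200):
    # One independently trained module per class, with one-vs-rest targets.
    modules = [BinaryMLP(X.shape[1], seed=k) for k in range(n_classes)]
    for k, net in enumerate(modules):
        y = (labels == k).astype(float)
        for _ in range(epochs):
            net.train_step(X, y)
    return modules

def predict(modules, X):
    # Combine the K binary outputs; the module with the highest response wins.
    scores = np.hstack([net.forward(X) for net in modules])
    return scores.argmax(axis=1)
```

Because each module optimizes a single output unit, an error reduction for one class cannot increase the error components of another, which is the intuition behind the speedups the abstract reports.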
Zhikun Yuen, Paula Branco, et al.
DSAA 2023
Arnon Amir, Michael Lindenbaum
IEEE Transactions on Pattern Analysis and Machine Intelligence
Hong-Linh Truong, Maja Vukovic, et al.
ICDH 2024