Chi-square Information for Invariant Learning
Prasanna Sattigeri, Soumya Ghosh, et al.
ICML 2020
Many automated machine learning methods, such as those for hyperparameter and neural architecture optimization, are computationally expensive because they involve training many different model configurations. In this work, we present a new method that saves computational budget by terminating poor configurations early in training. In contrast to existing methods, we consider this task as a ranking and transfer learning problem. We qualitatively show that by optimizing a pairwise ranking loss and leveraging learning curves from other datasets, our model is able to effectively rank learning curves without having to observe many or very long learning curves. We further demonstrate that our method can be used to accelerate a neural architecture search by a factor of up to 100 without a significant performance degradation of the discovered architecture. In further experiments, we analyze the quality of ranking, the influence of different model components, as well as the predictive behavior of the model.
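To make the core idea concrete, the sketch below shows what a pairwise ranking loss over partial learning curves might look like. This is not the paper's implementation; `CurveScorer`, `pairwise_ranking_loss`, and the toy labels are hypothetical names introduced here for illustration, and PyTorch is assumed as the framework.

```python
# Minimal sketch (assumed setup, not the authors' code): score partial learning
# curves and train the scorer with a pairwise ranking loss, so that curves
# leading to better final performance receive higher scores.
import torch
import torch.nn as nn

class CurveScorer(nn.Module):
    """Scores a partial learning curve; a higher score means the configuration
    is predicted to reach better final accuracy."""
    def __init__(self, curve_len: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(curve_len, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, curves: torch.Tensor) -> torch.Tensor:
        # curves: (batch, curve_len) of observed validation accuracies
        return self.net(curves).squeeze(-1)

def pairwise_ranking_loss(score_a: torch.Tensor,
                          score_b: torch.Tensor,
                          a_better: torch.Tensor) -> torch.Tensor:
    """Logistic pairwise loss: push score_a above score_b when configuration A
    ends up better than configuration B (a_better = 1), and vice versa."""
    sign = 2.0 * a_better - 1.0  # map {0, 1} labels to {-1, +1}
    return nn.functional.softplus(-sign * (score_a - score_b)).mean()

# Toy usage with random curves standing in for real partial learning curves.
torch.manual_seed(0)
scorer = CurveScorer(curve_len=10)
curves_a = torch.rand(16, 10)  # first 10 epochs of config A's validation accuracy
curves_b = torch.rand(16, 10)  # first 10 epochs of config B's validation accuracy
# Stand-in label: in practice this would come from the configurations' final accuracies.
a_better = (curves_a[:, -1] > curves_b[:, -1]).float()

loss = pairwise_ranking_loss(scorer(curves_a), scorer(curves_b), a_better)
loss.backward()
print(f"pairwise ranking loss: {loss.item():.4f}")
```

Under this kind of setup, early termination would amount to scoring each partially trained configuration and stopping those whose score ranks below its competitors; transfer would amount to pre-training the scorer on learning curves collected from other datasets.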