Publication: KDD 2022 (conference paper)
The Good, the Bad, and the Outliers: A Testing Framework for Decision Optimization Model Learning
Abstract
Mathematical decision-optimization (DO) models provide decision support in a wide range of scenarios. Often, hard-to-model constraints and objectives are learned from data. Learning, however, can give rise to DO models that fail to capture the real system, leading to poor recommendations. We introduce an open-source framework designed for large-scale testing and solution quality analysis of DO model learning algorithms. Our framework produces multiple optimization problems at random, feeds them to the user's algorithm and collects its predicted optima. By comparing predictions against the ground truth, our framework delivers a comprehensive prediction profile of the algorithm. Thus, it provides a playground for researchers and data scientists to develop, test, and tune their DO model learning algorithms. Our contributions include: (1) an open-source testing framework implementation, (2) a novel way to generate DO ground truth, and (3) a first-of-its-kind, generic, cloud-distributed Ray and Rayvens architecture. We demonstrate the use of our testing framework on two open-source DO model learning algorithms.
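The testing loop the abstract describes — generate random optimization problems whose optima are known by construction, feed sampled data to a user-supplied DO model learning algorithm, and score its predicted optimum against the ground truth — can be sketched roughly as follows. This is a minimal illustrative sketch, not the framework's actual API: every name here (`generate_problem`, `prediction_profile`, the toy learner) is a hypothetical stand-in, the convex quadratic is only loosely analogous to the paper's ground-truth generation method, and the loop runs serially where the real framework distributes evaluations over Ray and Rayvens.

```python
"""Illustrative sketch of the testing loop described in the abstract.
All names are hypothetical; see the paper's open-source repository for
the real API."""
import numpy as np


def generate_problem(dim=3, seed=None):
    """Random convex problem: min ||x - x_star||^2 over [0, 1]^dim.
    The optimum x_star is known by construction, giving exact ground truth."""
    rng = np.random.default_rng(seed)
    x_star = rng.uniform(0.0, 1.0, size=dim)
    objective = lambda x: float(np.sum((x - x_star) ** 2))
    return objective, x_star


def sample_data(objective, dim, n=200, seed=None):
    """Draw (x, f(x)) training pairs for the learner."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n, dim))
    y = np.array([objective(x) for x in X])
    return X, y


def nearest_sample_learner(X, y):
    """Toy 'DO model learning algorithm': return the best sampled point.
    A real algorithm would learn constraints/objectives and optimize."""
    return X[int(np.argmin(y))]


def prediction_profile(learner, n_problems=50, dim=3):
    """Run the learner on many random problems and aggregate quality stats."""
    gaps = []
    for seed in range(n_problems):
        objective, x_star = generate_problem(dim, seed=seed)
        X, y = sample_data(objective, dim, seed=seed)
        x_hat = learner(X, y)
        # Optimality gap: predicted objective minus true optimum (0 here).
        gaps.append(objective(x_hat) - objective(x_star))
    gaps = np.array(gaps)
    return {"mean_gap": float(gaps.mean()), "worst_gap": float(gaps.max())}


if __name__ == "__main__":
    print(prediction_profile(nearest_sample_learner))
```

The returned statistics stand in for the "prediction profile" the abstract mentions; the actual framework presumably reports a richer analysis, and each problem evaluation is independent, which is what makes the workload amenable to cloud distribution with Ray.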