Publication
NeurIPS 2024
Workshop paper
MESS+: Energy-Optimal Inferencing in Language Model Zoos with Service Level Guarantees
Abstract
Open-weight large language model zoos allow users to quickly integrate state-of-the-art models into systems. Despite this increased accessibility, selecting the most appropriate model for a given task still largely relies on public benchmark leaderboards and educated guesses. This can be unsatisfactory for both inference service providers and end users: providers prioritize cost efficiency, while end users prioritize the output quality of their inference requests. In commercial settings, these two priorities are often brought together in Service Level Agreements (SLAs). We present MESS+, an online stochastic optimization algorithm for energy-optimal model selection in a model zoo that operates on a per-inference-request basis. For a given SLA that requires high accuracy, MESS+ is up to 2.5× more energy efficient than randomly selecting an LLM from the zoo while maintaining the SLA's quality constraints.
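To make the problem setting concrete, the sketch below shows one common way per-request, constraint-aware model selection can be framed as online stochastic optimization: a drift-plus-penalty (Lyapunov) rule that trades off estimated energy against a virtual queue tracking accumulated SLA violation. This is an illustrative sketch only, not the MESS+ algorithm from the paper; the model names, accuracy/energy estimates, SLA target, and trade-off weight are all hypothetical placeholders.

```python
# Illustrative sketch only -- NOT the MESS+ algorithm from the paper.
# It shows the general shape of per-request, constraint-aware model
# selection via a drift-plus-penalty rule, a standard technique in
# online stochastic optimization. All names and numbers are assumptions.

from dataclasses import dataclass


@dataclass
class ModelStats:
    name: str
    est_accuracy: float   # estimated probability the model answers the request correctly
    est_energy_j: float   # estimated energy per request, in joules


# Hypothetical model zoo with assumed per-model estimates.
ZOO = [
    ModelStats("small-1b", est_accuracy=0.72, est_energy_j=15.0),
    ModelStats("medium-7b", est_accuracy=0.84, est_energy_j=90.0),
    ModelStats("large-70b", est_accuracy=0.93, est_energy_j=600.0),
]

SLA_ACCURACY = 0.85   # long-run accuracy required by the SLA (assumed)
V = 50.0              # energy-vs-constraint trade-off weight (assumed)

queue = 0.0           # virtual queue tracking accumulated SLA violation


def select_model() -> ModelStats:
    """Pick the model minimizing weighted energy plus the queue-scaled accuracy gap."""
    return min(
        ZOO,
        key=lambda m: V * m.est_energy_j + queue * (SLA_ACCURACY - m.est_accuracy),
    )


def update_queue(observed_correct: bool) -> None:
    """Grow the queue when observed quality falls short of the SLA target, shrink otherwise."""
    global queue
    queue = max(0.0, queue + SLA_ACCURACY - (1.0 if observed_correct else 0.0))
```

Under this kind of rule, a small, cheap model is preferred while the virtual queue is low, and larger models are selected as the queue grows, which is how energy savings can be obtained without persistently violating the quality constraint.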