Publication
ISCAS 2023
Conference paper
Acceleration of Decision-Tree Ensemble Models on the IBM Telum Processor
Abstract
This paper presents a tensor-based algorithm that leverages a hardware accelerator for inference of decision-tree-based machine learning models. The algorithm has been integrated into a public software library and is demonstrated on an IBM z16 server, using the Telum processor with the Integrated Accelerator for AI. We describe the architecture and implementation of the algorithm and present experimental results that demonstrate its superior runtime performance compared with popular CPU-based machine learning inference implementations.
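To make the general idea of tensor-based decision-tree inference concrete, the sketch below shows one well-known way to cast tree traversal as dense matrix operations (the GEMM-style tree compilation used in libraries such as Hummingbird). This is only an illustrative example of the technique class; it is not the paper's algorithm, and all matrix names (A, B, C, D, E) and the toy tree are assumptions made for demonstration.

```python
import numpy as np

# Toy tree (illustrative only):
#   node n0: x[0] < 0.5 ? go left : go right
#     left  -> leaf L0 (value 1.0)
#     right -> node n1: x[1] < 0.3 ? go left : go right
#       left  -> leaf L1 (value 2.0)
#       right -> leaf L2 (value 3.0)

# A: (features x internal nodes), A[i, j] = 1 if node j tests feature i
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])

# B: split thresholds, one per internal node
B = np.array([0.5, 0.3])

# C: (internal nodes x leaves), +1 if the node is an ancestor of the leaf
# via its left branch, -1 via its right branch, 0 if not an ancestor
C = np.array([[ 1.0, -1.0, -1.0],
              [ 0.0,  1.0, -1.0]])

# D: per leaf, the number of left-branch ancestors (count of +1s in its column)
D = np.array([1.0, 1.0, 0.0])

# E: (leaves x outputs), the value stored at each leaf
E = np.array([[1.0],
              [2.0],
              [3.0]])

def predict(X):
    """Evaluate the toy tree on a batch X of shape (n_samples, n_features)
    using only matrix multiplications and elementwise comparisons."""
    # Step 1: evaluate every split condition for every sample in one GEMM
    T = (X @ A) < B                 # (n_samples x internal nodes), boolean
    # Step 2: a leaf is reached iff all of its ancestor conditions match its path
    P = (T.astype(np.float64) @ C) == D   # one-hot leaf indicator per sample
    # Step 3: gather the leaf values with a final GEMM
    return P.astype(np.float64) @ E

X = np.array([[0.2, 0.9],   # left  -> L0 -> 1.0
              [0.8, 0.1],   # right, left  -> L1 -> 2.0
              [0.8, 0.7]])  # right, right -> L2 -> 3.0
print(predict(X).ravel())   # [1. 2. 3.]
```

Formulating inference this way replaces branchy, pointer-chasing traversal with a few dense matrix products, which is the kind of workload a matrix-oriented AI accelerator can execute efficiently; an ensemble is handled by batching the per-tree matrices. How the paper's algorithm actually maps trees onto the Telum Integrated Accelerator for AI is described in the paper itself.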