AAAI 2025
Conference paper
On the Expressiveness and Length Generalization of Selective State-Space Models on Regular Languages
Abstract
Selective state-space models (SSMs) are an emerging alternative to the Transformer, offering the unique advantage of parallel training and sequential inference. While these models have shown promising performance on a variety of tasks, their formal expressiveness and length generalization properties remain insufficiently explored. In this work, we provide insight into the workings of selective SSMs by analyzing their expressiveness and length generalization performance on regular language tasks, i.e., finite-state automaton (FSA) emulation. We address the limitations of modern SSM-based architectures by introducing the Selective Dense State-Space Model (SD-SSM), the first selective SSM that exhibits perfect length generalization on a set of various regular language tasks using a single layer. It utilizes a dictionary of dense transition matrices, a softmax selection mechanism that creates a convex combination of the dictionary matrices at each time step, and a readout consisting of layer normalization followed by a linear map. We then examine the expressiveness of variants of diagonal selective SSMs by considering their empirical performance on commutative and non-commutative automata. We explain most of the experimental results with theoretical considerations.
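For intuition, here is a minimal, self-contained PyTorch sketch of the layer the abstract describes: a dictionary of dense transition matrices combined convexly via a per-step softmax over input-dependent logits, followed by a layer-normalization-plus-linear readout. The class name, the dimension names, the initialization scheme, and the all-ones initial state are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn as nn


class SDSSMSketch(nn.Module):
    """Illustrative sketch of a selective dense SSM layer (assumed names/shapes)."""

    def __init__(self, d_in: int, d_state: int, d_out: int, n_mats: int):
        super().__init__()
        self.d_state = d_state
        # Dictionary of n_mats dense d_state x d_state transition matrices.
        self.dictionary = nn.Parameter(
            torch.randn(n_mats, d_state, d_state) / d_state ** 0.5
        )
        # Maps each input token to selection logits over the dictionary.
        self.selector = nn.Linear(d_in, n_mats)
        # Readout: layer normalization followed by a linear map.
        self.norm = nn.LayerNorm(d_state)
        self.readout = nn.Linear(d_state, d_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_in)
        B, T, _ = x.shape
        # Softmax weights give a convex combination of dictionary matrices
        # at each time step, so each A_t lies in their convex hull.
        alphas = torch.softmax(self.selector(x), dim=-1)            # (B, T, K)
        A = torch.einsum("btk,kij->btij", alphas, self.dictionary)  # (B, T, N, N)
        # Fixed all-ones initial state (an assumption for this sketch).
        h = x.new_ones(B, self.d_state)
        ys = []
        for t in range(T):  # sequential recurrence: h_t = A_t @ h_{t-1}
            h = torch.einsum("bij,bj->bi", A[:, t], h)
            ys.append(self.readout(self.norm(h)))
        return torch.stack(ys, dim=1)                               # (B, T, d_out)
```

As a quick usage check, `SDSSMSketch(d_in=8, d_state=16, d_out=5, n_mats=4)(torch.randn(2, 10, 8))` returns a `(2, 10, 5)` tensor of per-step readouts. Because each transition is a dense (rather than diagonal) matrix, successive transitions need not commute, which is what lets a single such layer emulate non-commutative automata.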