M. J. F. Gales
IEEE Transactions on Speech and Audio Processing, 1999
There is normally a simple choice made in the form of the covariance matrix to be used with continuous-density HMMs. Either a diagonal covariance matrix is used, with the underlying assumption that the elements of the feature vector are independent, or a full or block-diagonal matrix is used, where all or some of the correlations are explicitly modeled. Unfortunately, full or block-diagonal covariance matrices dramatically increase the number of parameters per Gaussian component, limiting the number of components that can be robustly estimated. This paper introduces a new form of covariance matrix which allows a few "full" covariance matrices to be shared over many distributions, whilst each distribution maintains its own "diagonal" covariance matrix. In contrast to other schemes which have hypothesized a similar form, this technique fits within the standard maximum-likelihood criterion used for training HMMs. The new form of covariance matrix is evaluated on a large-vocabulary speech-recognition task. In initial experiments, the performance of the standard system was matched using approximately half the number of parameters. Moreover, a 10% reduction in word error rate compared to a standard system can be achieved with less than a 1% increase in the number of parameters and little increase in recognition time. © 1999 IEEE.
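The parameterization the abstract describes is a covariance of the form Sigma = A diag(v) A^T, where the "full" transform A is shared across many Gaussian components and each component keeps only its own diagonal variances v. Below is a minimal NumPy sketch of evaluating a Gaussian log-density under that shared-transform form; the function and variable names (semitied_gaussian_logpdf, diag_var) are hypothetical, and the paper's maximum-likelihood estimation of the shared transform within EM training is not shown here.

```python
import numpy as np

def semitied_gaussian_logpdf(x, mean, diag_var, A):
    """Log-density under N(mean, Sigma) with Sigma = A @ diag(diag_var) @ A.T.

    A is a 'full' transform shared by many components (it supplies the
    correlations); diag_var holds this component's own diagonal variances.
    """
    A_inv = np.linalg.inv(A)
    z = A_inv @ (x - mean)                # decorrelate with the shared transform
    _, logabsdet_A = np.linalg.slogdet(A) # log|det A|
    log_det = 2.0 * logabsdet_A + np.sum(np.log(diag_var))  # log det Sigma
    quad = np.sum(z * z / diag_var)       # Mahalanobis term, diagonal in z-space
    d = x.shape[0]
    return -0.5 * (d * np.log(2.0 * np.pi) + log_det + quad)

# Usage: many components share one transform A; each keeps its own
# mean and diagonal variances.
rng = np.random.default_rng(0)
d = 4
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))  # shared "full" transform
means = rng.standard_normal((8, d))                # 8 component means
diag_vars = rng.uniform(0.5, 2.0, size=(8, d))     # per-component diagonals
x = rng.standard_normal(d)
scores = [semitied_gaussian_logpdf(x, m, v, A)
          for m, v in zip(means, diag_vars)]
```

The parameter saving follows directly from this form: M components of dimension d cost M·d(d+1)/2 covariance parameters with full matrices, but only M·d variances plus d² per shared transform here, which is consistent with the abstract's claim of matching baseline performance with roughly half the parameters.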