Publication
SODA 2017
Conference paper
Low-rank PSD approximation in input-sparsity time
Abstract
We give algorithms for approximation by low-rank positive semidefinite (PSD) matrices. For a symmetric input matrix A ∈ ℝ^{n×n}, target rank k, and error parameter ε > 0, one algorithm finds with constant probability a PSD matrix Ỹ of rank k such that ‖A − Ỹ‖²_F ≤ (1 + ε)‖A − A_{k,+}‖²_F, where A_{k,+} denotes the best rank-k PSD approximation to A, and the norm is Frobenius. The algorithm takes time O(nnz(A) log n) + n·poly((log n)k/ε) + poly(k/ε), where nnz(A) denotes the number of nonzero entries of A, and poly(k/ε) denotes a polynomial in k/ε. (There are two different polynomials in the time bound.) Here the output matrix Ỹ has the form CUC^⊤, where the O(k/ε) columns of C are columns of A. In contrast to prior work, we do not require the input matrix A to be PSD, our output has rank k (not larger), and our running time is O(nnz(A) log n) provided this is larger than n·poly((log n)k/ε). We give a similar algorithm that is faster and simpler, but whose rank-k PSD output does not involve columns of A, and does not require A to be symmetric. We give similar algorithms for best rank-k approximation subject to the constraint of symmetry. We also show that there are asymmetric input matrices that cannot have good symmetric column-selected approximations.
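As a point of reference for the guarantee above, the baseline A_{k,+} (the best rank-k PSD approximation to a symmetric A in Frobenius norm) can be computed exactly, though not in input-sparsity time, by eigendecomposition: keep the k largest eigenvalues, clipped at zero. The sketch below is an illustrative NumPy computation of this classical baseline, not the paper's algorithm; the function name `best_rank_k_psd` is our own.

```python
import numpy as np

def best_rank_k_psd(A, k):
    """Best rank-k PSD approximation (Frobenius norm) to symmetric A.

    Classical O(n^3) baseline via full eigendecomposition -- the
    quantity A_{k,+} that the paper's (1 + eps) guarantee compares
    against, not the input-sparsity-time algorithm itself.
    """
    w, V = np.linalg.eigh(A)           # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]      # indices of the k largest eigenvalues
    w_top = np.clip(w[idx], 0.0, None) # zero out negative eigenvalues -> PSD
    return (V[:, idx] * w_top) @ V[:, idx].T

# Example: for A = diag(3, 2, -1), the best rank-1 PSD approximation
# keeps only the largest eigenvalue, giving diag(3, 0, 0).
A = np.diag([3.0, 2.0, -1.0])
Y = best_rank_k_psd(A, 1)
```

Note that clipping at zero matters: unlike unconstrained low-rank approximation, a negative eigenvalue is never kept, since any PSD matrix must have nonnegative eigenvalues.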