Publication
STOC 2011
Conference paper

Subspace embeddings for the L1-norm with applications

Abstract

We show there is a distribution over linear mappings R : ℓ1n → ℓ1O(d log d), such that with arbitrarily large constant probability, for any fixed d-dimensional subspace L, for all x ∈ L we have ∥x∥1 ≤ ∥Rx∥1 ≤ O(d log d)∥x∥1. This provides the first analogue of the ubiquitous subspace Johnson-Lindenstrauss embedding for the ℓ1-norm. Importantly, the target dimension and distortion are independent of the ambient dimension n. We give several applications of this result. First, we give a faster algorithm for computing well-conditioned bases. Our algorithm is simple, avoiding the linear programming machinery required by previous algorithms. We also give faster algorithms for least absolute deviation regression and ℓ1-norm best fit hyperplane problems, as well as the first single-pass streaming algorithms with low space for these problems. These results are motivated by practical problems in image analysis, spam detection, and statistics, where the ℓ1-norm is used in studies where outliers may be safely and effectively ignored. This is because the ℓ1-norm is more robust to outliers than the ℓ2-norm. © 2011 ACM.
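The embedding property above can be inspected empirically. The following is a minimal sketch, not the paper's construction: it assumes a dense matrix of i.i.d. standard Cauchy entries (a family used in known ℓ1 sketching constructions), and the target dimension constant and 1/r scaling are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 10_000, 5            # ambient dimension n, subspace dimension d
r = int(d * np.log(d) * 8)  # target dimension O(d log d); constant 8 is illustrative

# Dense sketch with i.i.d. standard Cauchy entries (assumed construction);
# the 1/r scaling is an illustrative normalization.
R = rng.standard_cauchy((r, n)) / r

# A fixed d-dimensional subspace L, represented by a random basis U.
U = rng.standard_normal((n, d))

# Empirically inspect the distortion ||Rx||_1 / ||x||_1 for random x in L.
ratios = []
for _ in range(20):
    x = U @ rng.standard_normal(d)
    ratios.append(np.abs(R @ x).sum() / np.abs(x).sum())

print(min(ratios), max(ratios))
```

The observed ratios stay within a bounded range across vectors drawn from the same subspace, which is the qualitative behavior the theorem guarantees (with appropriate constants) for every x ∈ L simultaneously.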

