Sparse features for PCA-like linear regression
Christos Boutsidis, Petros Drineas, et al.
NeurIPS 2011
We study dimensionality reduction for k-means clustering. Dimensionality reduction encompasses two approaches: (1) feature selection and (2) feature extraction. A feature selection algorithm for k-means clustering selects a small subset of the input features and then applies k-means clustering on the selected features. A feature extraction algorithm for k-means clustering constructs a small set of new artificial features and then applies k-means clustering on the constructed features. Despite the significance of k-means clustering and the wealth of heuristic methods addressing it, no provably accurate feature selection methods for k-means clustering are known. On the other hand, two provably accurate feature extraction methods for k-means clustering are known in the literature: one based on random projections and the other based on the singular value decomposition (SVD). This paper makes further progress toward a better understanding of dimensionality reduction for k-means clustering. Namely, we present the first provably accurate feature selection method for k-means clustering and, in addition, two feature extraction methods. The first feature extraction method is based on random projections and improves upon existing results in terms of time complexity and the number of features that need to be extracted. The second feature extraction method is based on fast approximate SVD factorizations and also improves upon existing results in terms of time complexity. The proposed algorithms are randomized and provide constant-factor approximation guarantees with respect to the optimal k-means objective value.
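To make the two dimensionality-reduction routes in the abstract concrete, here is a minimal sketch in Python. It is not the paper's algorithms: the data, dimensions (n, d), number of clusters k, and number of reduced features r are illustrative assumptions, the random-projection and SVD steps are generic textbook constructions, and the feature-selection step uses a simplified deterministic top-r pick by leverage score rather than the randomized sampling the paper analyzes.

```python
# Illustrative sketch only: generic random-projection, SVD, and leverage-score
# constructions on toy data, not the paper's provably accurate algorithms.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 500))    # n = 1000 points, d = 500 features (toy data)
k, r = 5, 50                            # k clusters, r reduced features (r >= k)

# 1) Feature extraction via random projection: multiply by a random matrix
#    of scaled +/-1 entries, mapping d input features to r new features.
R = rng.choice([-1.0, 1.0], size=(A.shape[1], r)) / np.sqrt(r)
A_rp = A @ R
labels_rp = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(A_rp)

# 2) Feature extraction via the SVD: project the data onto its top-r right
#    singular vectors (an exact SVD here; the paper uses fast approximate ones).
_, _, Vt = np.linalg.svd(A, full_matrices=False)
A_svd = A @ Vt[:r].T
labels_svd = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(A_svd)

# 3) Feature selection (simplified): keep r of the original features, here
#    ranked by leverage scores computed from the top-k right singular vectors.
lev = np.sum(Vt[:k] ** 2, axis=0)       # leverage score of each original feature
cols = np.argsort(lev)[::-1][:r]
A_fs = A[:, cols]
labels_fs = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(A_fs)
```

In all three cases k-means is run on an n-by-r matrix instead of the original n-by-d matrix; the point of the paper is to bound how much the resulting clustering cost can exceed the optimal cost on the full data.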