Topological Data Analysis on Noisy Quantum Computers
Ismail Akhalwaya, Shashanka Ubaru, et al.
ICLR 2024
In this paper, we present a novel semi-supervised dimensionality reduction technique that addresses the inefficient learning and costly computation involved in coping with high-dimensional data. Our method, named dual subspace projections (DSP), embeds high-dimensional data in an optimal low-dimensional space that is learned from a few user-supplied constraints together with the structure of the input data. The method projects data into two different subspaces: the kernel space and the original input space. Each projection is designed to enforce one type of constraint, and the projections in the two subspaces interact to satisfy the constraints maximally while preserving the intrinsic data structure. Compared with existing techniques, our method has the following advantages: (1) it benefits from constraints even when only a few are available; (2) it is robust and free from overfitting; and (3) it handles nonlinearly separable data while learning a linear data transformation. Consequently, the method generalizes easily to new data points and is efficient on large datasets. An empirical study on real data validates our claims, showing that significant improvements in learning accuracy can be obtained after DSP-based dimensionality reduction is applied to high-dimensional data.
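The abstract's core idea, learning a linear projection guided by a few pairwise constraints, can be illustrated with a minimal sketch. This is not the authors' DSP method (which couples kernel-space and input-space projections); it is a generic constraint-guided projection: pull must-link pairs together, push cannot-link pairs apart, via a generalized eigenproblem. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def constrained_projection(X, must_link, cannot_link, n_components=2, reg=1e-3):
    """Learn a linear map W that spreads cannot-link pairs apart while
    keeping must-link pairs close. Sketch only, not the DSP algorithm."""
    d = X.shape[1]
    S_ml = np.zeros((d, d))  # scatter of must-link difference vectors
    S_cl = np.zeros((d, d))  # scatter of cannot-link difference vectors
    for i, j in must_link:
        diff = (X[i] - X[j])[:, None]
        S_ml += diff @ diff.T
    for i, j in cannot_link:
        diff = (X[i] - X[j])[:, None]
        S_cl += diff @ diff.T
    # Solve S_cl w = lambda (S_ml + reg I) w and keep the top eigenvectors:
    # directions that separate cannot-link pairs relative to must-link spread.
    A = np.linalg.solve(S_ml + reg * np.eye(d), S_cl)
    eigvals, eigvecs = np.linalg.eig(A)
    order = np.argsort(-eigvals.real)
    return eigvecs[:, order[:n_components]].real

# Toy usage: two noisy clusters in 5-D, a handful of constraints.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
must_link = [(0, 1), (2, 3), (20, 21)]      # same-cluster pairs
cannot_link = [(0, 20), (1, 25), (5, 30)]   # cross-cluster pairs
W = constrained_projection(X, must_link, cannot_link)
Z = X @ W  # embedded data, shape (40, 2)
```

Because the learned transformation is linear, new points embed by a single matrix product, which is the "easily generalized to new data points" property the abstract claims.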
Tim Erdmann, Stefan Zecevic, et al.
ACS Spring 2024
Seung Gu Kang, Jeff Weber, et al.
ACS Fall 2023
Hong-Linh Truong, Maja Vukovic, et al.
ICDH 2024