Abstract
For Big Data analytics, working in low-dimensional spaces is beneficial for computational performance. Instead of projecting the data onto a single low-dimensional space, we examine, both analytically and empirically, how the 'learning utility' of the original dataset is affected when several very low-dimensional random projections are combined. The proposed embedding exhibits many favorable traits compared to existing low-dimensional methods, such as low runtime and equivalent or better embedding quality.
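The abstract does not specify how the projections are combined; the following minimal sketch is only one plausible reading, assuming independent Gaussian projection matrices, an illustrative target dimension `r`, projection count `k`, and combination by simple concatenation of the low-dimensional representations. None of these choices are stated by the authors.

```python
# Minimal sketch (not the authors' exact construction): combine several very
# low-dimensional Gaussian random projections of a dataset. The values of r,
# k, and the concatenation-based combination are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n, d = 1000, 500   # samples, original dimensionality (placeholder values)
r, k = 5, 10       # very low target dimension, number of projections

X = rng.standard_normal((n, d))   # placeholder data

# Each projection uses an independent Gaussian matrix scaled by 1/sqrt(r),
# the usual Johnson-Lindenstrauss-style normalisation.
projections = [rng.standard_normal((d, r)) / np.sqrt(r) for _ in range(k)]

# One plausible way to "combine" the projections: embed the data with each
# matrix and concatenate the resulting low-dimensional representations.
Z = np.hstack([X @ R for R in projections])   # shape (n, k * r)

print(Z.shape)
```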