A System for Watermarking Relational Databases
Rakesh Agrawal, Peter J. Haas, et al.
SIGMOD 2003

Compressed Linear Algebra for Large-Scale Machine Learning
Ahmed Elgohary, Matthias Boehm, et al.
VLDB 2016
Large-scale machine learning (ML) algorithms are often iterative, using repeated read-only data access and I/O-bound matrix-vector multiplications. It is therefore crucial for performance that the data fit into single-node or distributed main memory, enabling fast matrix-vector operations. General-purpose compression struggles to achieve both good compression ratios and fast decompression for block-wise uncompressed operations. We therefore introduce Compressed Linear Algebra (CLA) for lossless matrix compression. CLA encodes matrices with lightweight, value-based compression techniques and executes linear algebra operations directly on the compressed representations. We contribute effective column compression schemes, cache-conscious operations, and an efficient sampling-based compression algorithm. Our experiments show good compression ratios and operations performance close to the uncompressed case, which enables fitting larger datasets into available memory and thereby yields significant end-to-end performance improvements.
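
To make the compressed-execution idea concrete, here is a minimal sketch of a matrix-vector product computed directly on a toy, value-based column encoding (each column stored as its distinct values plus their row indices). This is only an illustration in the spirit of CLA's lightweight column encodings, under simplifying assumptions; the names `compress_columns` and `matvec_compressed` are invented for this sketch and are not the paper's actual schemes or API.

```python
import numpy as np

def compress_columns(X):
    """Encode each column as a map {distinct value -> list of row indices}.

    A toy stand-in for CLA's value-based column compression
    (structure and names here are illustrative, not the paper's).
    """
    cols = []
    for j in range(X.shape[1]):
        groups = {}
        for i, val in enumerate(X[:, j]):
            groups.setdefault(val, []).append(i)
        cols.append(groups)
    return cols, X.shape[0]

def matvec_compressed(cols, nrows, v):
    """Compute y = X @ v without decompressing: each distinct value is
    multiplied by v[j] once, then scattered to its rows, so repeated
    values in a column cost a single multiplication."""
    y = np.zeros(nrows)
    for j, groups in enumerate(cols):
        for val, rows in groups.items():
            y[rows] += val * v[j]
    return y

# Usage: the compressed product matches the uncompressed one.
X = np.array([[1.0, 0.0], [1.0, 2.0], [1.0, 2.0]])
v = np.array([3.0, 4.0])
cols, nrows = compress_columns(X)
assert np.allclose(matvec_compressed(cols, nrows, v), X @ v)
```

The payoff of this design is that work scales with the number of distinct values per column rather than the number of rows, which is why highly repetitive (low-cardinality) columns compress well and still support fast operations.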

MCDB-R: Risk Analysis in the Database
Subi Arumugam, Ravi Jampani, et al.
VLDB 2010

Resource Elasticity for Large-Scale Machine Learning
Botong Huang, Matthias Boehm, et al.
SIGMOD 2015

SPOOF: Sum-Product Optimization and Operator Fusion for Large-Scale Machine Learning
Tarek Elgamal, Shangyu Luo, et al.
CIDR 2017