DyVEDeep: Dynamic Variable Effort Deep Neural Networks. Sanjay Ganapathy, Swagath Venkataramani, et al. ACM TECS, 2020.
Hybrid 8-bit Floating Point (HFP8) Training and Inference for Deep Neural Networks. Xiao Sun, Jungwook Choi, et al. NeurIPS 2019.
Memory and Interconnect Optimizations for Peta-Scale Deep Learning Systems. Swagath Venkataramani, Vijayalakshmi Srinivasan, et al. HiPC 2019.
Performance-driven Programming of Multi-TFLOP Deep Learning Accelerators. Swagath Venkataramani, Jungwook Choi, et al. IISWC 2019.
DeepTools: Compiler and Execution Runtime Extensions for RaPiD AI Accelerator. Swagath Venkataramani, Jungwook Choi, et al. IEEE Micro, 2019.
Dynamic Spike Bundling for Energy-Efficient Spiking Neural Networks. Sarada Krithivasan, Sanchari Sen, et al. ISLPED 2019.
BiScaled-DNN: Quantizing Long-tailed Datastructures with Two Scale Factors for Deep Neural Networks. Shubham Jain, Swagath Venkataramani, et al. DAC 2019.
SparCE: Sparsity Aware General-Purpose Core Extensions to Accelerate Deep Neural Networks. Sanchari Sen, Shubham Jain, et al. IEEE TC, 2019.
A Compiler for Deep Neural Network Accelerators to Generate Optimized Code for a Wide Range of Data Parameters from a Hand-crafted Computation Kernel. Eri Ogawa, Kazuaki Ishizaki, et al. COOL CHIPS 2019.
Data Subsetting: A Data-Centric Approach to Approximate Computing. Younghoon Kim, Swagath Venkataramani, et al. DATE 2019.
US11551054 (09 Jan 2023): System-aware Selective Quantization for Performance Optimized Distributed Deep Learning.
US11429524 (29 Aug 2022): Optimized Hierarchical Scratchpads for Enhanced Artificial Intelligence Accelerator Core Utilization.
Kaoutar El Maghraoui, Principal Research Scientist and Manager, AIU Spyre Model Enablement, AI Hardware Center