A framework for automated performance bottleneck detection
I.-Hsin Chung, Guojing Cong, et al.
IPDPS 2008
We propose algorithms and techniques to accelerate the training of deep neural networks for action recognition on a cluster of GPUs. The convergence analysis of our algorithm shows that it is possible to reduce communication cost and at the same time minimize the number of iterations needed for convergence. We customize the Adam optimizer for our distributed algorithm to improve efficiency. In addition, we employ transfer learning to further reduce training time while improving validation accuracy. For the UCF101 and HMDB51 datasets, the validation accuracies achieved are 93.1% and 67.9%, respectively. With an additional end-to-end trained temporal stream, the validation accuracies achieved for UCF101 and HMDB51 are 93.47% and 81.24%, respectively. To the best of our knowledge, these are the highest accuracies achieved with a two-stream ResNet approach that involves neither computationally expensive 3D convolutions nor pretraining on much larger datasets.
Changnian Han, Peng Zhang, et al.
Journal of Computational Physics
Guojing Cong, David A. Bader
PDCS 2005
Yasushi Negishi, Hiroki Murata, et al.
ISSTA 2012