Accelerating sparse convolution with column vector-wise sparsity

Y. Tan, K. Han, K. Zhao, X. Yu, Z. Du, Y. Chen, Y. Wang, J. Yao
Advances in Neural Information Processing Systems, 2022. proceedings.neurips.cc
Abstract
Weight sparsity is a promising approach to reducing the model size and computation cost of convolutional neural networks (CNNs). Nevertheless, non-zero weights often distribute randomly in sparse CNN models, making it difficult to obtain actual speedup on common hardware (e.g., GPUs) over their dense counterparts. Existing acceleration solutions either require hardware modifications to support irregular memory access or rely on a partially structured sparsity pattern, and neither is capable of achieving fruitful speedup on convolution layers. In this work, we propose an algorithm-software co-designed sparse convolution based on a novel out-vector-wise (OVW) sparse pattern. Building on the insight that vertical vector integrity can preserve continuous memory access in IM2COL, the OVW pattern treats each vertical vector as an entirety. To reduce the error caused by sparsity, we propose an equivalent transformation process, i.e., clustering-based channel permutation, to gather similar rows together. Experimental evaluations demonstrate that our method achieves speedups over both the SOTA solution and the dense convolution of ResNet50 on NVIDIA V100 at 75% sparsity, with only negligible accuracy loss. Moreover, compared to the SOTA solution, which achieves speedups only on data with 60% sparsity or more, our method begins to obtain speedups at only 10% sparsity.
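
The abstract does not spell out how the OVW pattern is applied, so the following NumPy sketch is only a rough, assumed illustration of the core idea: after an IM2COL-style flattening of the weights into a 2-D matrix, pruning decisions are made on whole vertical vectors (groups of consecutive rows within a column) rather than on individual elements, so surviving weights stay contiguous in memory. The vector length, the L1-norm scoring, and the global threshold are illustrative assumptions rather than the authors' exact procedure, and the clustering-based channel permutation step is omitted here.

import numpy as np

def ovw_prune(weight_2d, vector_len=4, sparsity=0.75):
    """Illustrative out-vector-wise (OVW) style pruning (assumed details).

    weight_2d : 2-D weight matrix after IM2COL-style flattening,
                shape (K, C*R*S) with K output channels.
    Vertical vectors of `vector_len` consecutive rows within each column
    are kept or zeroed as a whole, so kept weights remain contiguous.
    """
    K, N = weight_2d.shape
    assert K % vector_len == 0, "K must be divisible by the vector length"
    # Group consecutive rows into vertical vectors: shape (K // V, V, N).
    vecs = weight_2d.reshape(K // vector_len, vector_len, N)
    # Score each vertical vector by its L1 norm (one score per group/column).
    scores = np.abs(vecs).sum(axis=1)                 # shape (K // V, N)
    # Keep the top (1 - sparsity) fraction of vectors, zero out the rest.
    k_keep = max(1, int(round(scores.size * (1.0 - sparsity))))
    threshold = np.partition(scores.ravel(), -k_keep)[-k_keep]
    mask = (scores >= threshold)[:, None, :]          # broadcast over vector dim
    return (vecs * mask).reshape(K, N)

# Example: prune a 64 x 576 flattened weight matrix to ~75% sparsity.
w = np.random.randn(64, 576).astype(np.float32)
w_sparse = ovw_prune(w, vector_len=4, sparsity=0.75)
print("element sparsity:", float((w_sparse == 0).mean()))

In an actual GPU kernel, only the kept vectors and their column indices would be stored so the GEMM can skip the zeroed vectors entirely; the sketch above merely produces the masked dense matrix to show what the sparsity pattern looks like.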