Optimization-based Structural Pruning for Large Language Models without Back-Propagation

Y. Gao, Z. Liu, W. Zhang, B. Du, G.-S. Xia. arXiv preprint arXiv:2406.10576, 2024.
Compared to moderately sized neural network models, structural weight pruning of Large Language Models (LLMs) poses a novel challenge for the efficiency of pruning algorithms, due to the heavy computation and memory demands of LLMs. Recent efficient LLM pruning methods typically operate in the post-training phase without expensive weight finetuning; however, their pruning criteria often rely on heuristically designed metrics, potentially leading to suboptimal performance. We instead propose a novel optimization-based structural pruning that learns the pruning masks in a probabilistic space directly by optimizing the loss of the pruned model. To preserve efficiency, our method 1) works in the post-training phase and 2) eliminates back-propagation through the LLM itself during the optimization (i.e., it only requires forward passes of the LLM). We achieve this by learning an underlying Bernoulli distribution for sampling binary pruning masks, where we decouple the Bernoulli parameters from the LLM loss, thus facilitating efficient optimization via a policy gradient estimator without back-propagation. As a result, our method is able to 1) operate at the structural granularities of channels, heads, and layers, 2) support global and heterogeneous pruning (i.e., it automatically determines different redundancy for different layers), and 3) optionally use a metric-based method as the initialization of our Bernoulli distributions. Extensive experiments on LLaMA, LLaMA-2, and Vicuna using the C4 and WikiText2 datasets demonstrate that our method runs for 2.7 hours with around 35 GB of memory for the 13B models on a single A100 GPU, and that our pruned models outperform the state of the art with respect to perplexity. Code will be released.
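To make the forward-only optimization concrete, the sketch below illustrates one way a REINFORCE-style policy gradient can update Bernoulli mask parameters using only forward evaluations of the pruned model. This is a minimal illustration under stated assumptions, not the authors' implementation: the `forward_loss` helper, the model interface `model(batch, prune_masks=...)`, and the simple mean baseline and thresholding rule are all hypothetical placeholders, and the paper's exact variance reduction and sparsity control are not specified in the abstract.

```python
# Minimal sketch (not the authors' code) of learning Bernoulli pruning masks with
# a REINFORCE-style policy gradient, using only forward passes of the model.
import torch


def forward_loss(model, masks, batch):
    """Hypothetical helper: run the model with the given binary unit masks
    (channels/heads/layers) applied and return the scalar LM loss (forward only)."""
    with torch.no_grad():
        return model(batch, prune_masks=masks).loss  # assumed model interface


def prune_with_policy_gradient(model, calib_loader, num_units,
                               steps=100, lr=1e-2, samples_per_step=4):
    # Bernoulli parameters (logits), one per prunable unit; no autograd needed.
    logits = torch.zeros(num_units)

    for _, batch in zip(range(steps), calib_loader):
        probs = torch.sigmoid(logits)            # keep-probabilities
        losses, masks_drawn = [], []

        # Sample several masks and evaluate the pruned model (forward pass only).
        for _ in range(samples_per_step):
            masks = torch.bernoulli(probs)
            losses.append(forward_loss(model, masks, batch))
            masks_drawn.append(masks)

        baseline = sum(losses) / len(losses)      # variance-reducing baseline

        # REINFORCE estimator: grad_logits E[L] ≈ mean[(L - b) * (m - sigmoid(logits))],
        # since grad_logits log p(m) = m - sigmoid(logits) for a Bernoulli policy.
        grad_estimate = torch.zeros_like(logits)
        for loss, masks in zip(losses, masks_drawn):
            grad_estimate += (loss - baseline) * (masks - probs)
        grad_estimate /= samples_per_step

        # Gradient step on the Bernoulli parameters; in practice a global sparsity
        # constraint or penalty would also steer the expected keep-ratio.
        logits -= lr * grad_estimate

    # Final deterministic masks: keep units whose keep-probability is at least 0.5.
    return (torch.sigmoid(logits) >= 0.5).float()
```

Because the gradient estimate is built only from sampled losses and sampled masks, the LLM is evaluated purely in inference mode, which is what keeps the memory footprint near that of a forward pass rather than full back-propagation.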