Reassessing Layer Pruning in LLMs: New Insights and Methods (2024)

Yao Lu^1, Hao Cheng, Yujie Fang^1, Zeyu Wang^1, Jiaheng Wei^2, Dongwei Xu^1, Qi Xuan^1, Xiaoniu Yang^1, Zhaowei Zhu

^1 Zhejiang University of Technology   ^2 HKUST-GZ

yaolu.zjut@gmail.com. Yao Lu and Hao Cheng contributed equally. Corresponding author: Qi Xuan (xuanqi@zjut.edu.cn).

Abstract

Although large language models (LLMs) have achieved remarkable success across various domains, their considerable scale necessitates substantial computational resources, posing significant challenges for deployment in resource-constrained environments. Layer pruning, a simple yet effective compression method, removes layers of a model directly, reducing computational overhead. However, what are the best practices for layer pruning in LLMs? Are sophisticated layer selection metrics truly effective? Does the LoRA (Low-Rank Adaptation) family, widely regarded as a leading method for fine-tuning pruned models, truly meet expectations when applied to post-pruning fine-tuning? To answer these questions, we dedicate thousands of GPU hours to benchmarking layer pruning in LLMs and gaining insights across multiple dimensions. Our results demonstrate that a simple approach, i.e., pruning the final 25% of layers followed by fine-tuning the lm_head and the last three remaining layers, yields remarkably strong performance. Following this guide, we prune Llama-3.1-8B-It and obtain a model that outperforms many popular LLMs of similar size, such as ChatGLM2-6B, Vicuna-7B-v1.5, Qwen1.5-7B and Baichuan2-7B. We release the optimal model weights on Hugging Face (https://huggingface.co/YaoLuzjut/Llama-3.1-6.3B-It-Alpaca and https://huggingface.co/YaoLuzjut/Llama-3.1-6.3B-It-Dolly), and the code is available on GitHub (https://github.com/yaolu-zjut/Navigation-LLM-layer-pruning).

1 Introduction

In recent years, large language models (LLMs) have achieved unprecedented success in many fields, such as text generation (Achiam et al., 2023; Touvron et al., 2023), semantic analysis (Deng et al., 2023; Zhang et al., 2023b) and machine translation (Zhang et al., 2023a; Wang et al., 2023). However, these achievements come with massive resource consumption, posing significant challenges for deployment on resource-constrained devices. To address these challenges, numerous techniques have been developed to create more efficient LLMs, including pruning (Ma et al., 2023a; Sun et al., 2023), knowledge distillation (Xu et al., 2024; Gu et al., 2024), quantization (Lin et al., 2024; Liu et al., 2023), low-rank factorization (Saha et al., 2023; Zhao et al., 2024a), and system-level inference acceleration (Shah et al., 2024; Lee et al., 2024).

Among these methods, pruning has emerged as a promising solution to mitigate the resource demands of LLMs. By selectively removing redundant patterns, such as parameters (Sun et al., 2023), attention heads (Ma et al., 2023a) and layers (Men et al., 2024), pruning aims to slim down the model while maintaining its original performance as much as possible. Among different types of pruning, layer pruning (Kim et al., 2024; Siddiqui et al., 2024) has garnered particular interest because it directly reduces the model's depth, thereby decreasing both computational complexity and memory usage. Additionally, thanks to the regular structure of existing LLMs such as Llama (Dubey et al., 2024), whose transformer blocks share exactly the same input and output dimensions, layer pruning becomes a straightforward and simple solution. Therefore, in this paper, we focus on layer pruning. Unlike existing studies (Men et al., 2024; Yang et al., 2024b; Chen et al., 2024; Zhong et al., 2024; Liu et al., 2024b) that aim to propose various sophisticated pruning methods, we take a step back and focus on the following questions:

[Figure 1: key insights for LLM layer pruning and performance of the pruned Llama-3.1-6.3B-It models.]

Q1. Layer Selection: Are fancy metrics essential for identifying redundant layers to prune?

Q2. Fine-Tuning: Is the LoRA family the best choice for post-pruning fine-tuning?

Q3. Pruning Strategy: Will iterative pruning outperform one-shot pruning?

To answer the aforementioned questions, we spent thousands of GPU hours benchmarking layer pruning, conducting extensive experiments across 7 layer selection metrics, 4 state-of-the-art open-source LLMs, 6 fine-tuning methods, and 5 pruning strategies on 10 common datasets. From these efforts, we have distilled a practical list of key insights for LLM layer pruning, summarized in Figure 1:

1) Reverse-order pruning is simple yet effective, i.e., simply pruning the last several layers performs better than many complex pruning metrics (Kim et al., 2024; Men et al., 2024).

2) LoRA performs worse than expected, i.e., LoRA, the most commonly used fine-tuning method in existing pruning approaches (Sun et al., 2023; Ma et al., 2023b; Kim et al., 2024; Men et al., 2024), is not the best choice for post-pruning performance recovery. In contrast, freezing the other layers and fine-tuning only the last few remaining layers and the lm_head, also known as partial-layer fine-tuning, achieves higher accuracy while reducing training time. This result is specific to layer pruning, since LoRA and partial-layer fine-tuning perform similarly in full-model fine-tuning (Table 3).

3) Iterative pruning offers no benefit, i.e., considering both training costs and performance gains, iterative pruning, where layers are removed step by step, fails to beat one-shot pruning, where a single cut is made.

In addition to the above practices, we also conduct sensitivity analyses on the number of calibration samples, the choice of Supervised Fine-Tuning (SFT) datasets, and various pruning rates for LLM layer pruning. We find that the number of calibration samples affects the performance of data-driven pruning methods, highlighting the importance of considering performance stability as a key criterion when evaluating the quality of pruning metrics. Similarly, we discover that fine-tuning with different SFT datasets significantly impacts the performance of pruned models. This suggests the need for further exploration of the most suitable datasets for fine-tuning. Finally, we apply our insights and practices to prune Llama-3.1-8B-Instruct (Dubey et al., 2024), obtaining Llama-3.1-6.3B-It-Alpaca and Llama-3.1-6.3B-It-Dolly, as shown in Figure 1. These pruned models require significantly fewer training tokens but outperform several popular community LLMs of similar size, such as ChatGLM2-6B (GLM et al., 2024), Vicuna-7B-v1.5 (Zheng et al., 2024), Qwen1.5-7B (Yang et al., 2024a) and Baichuan2-7B (Baichuan, 2023). We hope our work will help guide future efforts in LLM layer pruning and inform best practices for deploying LLMs in real-world applications. In a nutshell, we make the following contributions:

  • Comprehensive Benchmarking: We conduct an extensive evaluation of layer selection metrics, fine-tuning methods, and pruning strategies, providing practical insights into effective pruning techniques based on thousands of GPU hours across multiple datasets.

  • Novel Best Practices: We identify reverse-order as a simple and effective layer selection metric, find that partial-layer fine-tuning outperforms LoRA-based techniques, and demonstrate that one-shot pruning is as effective as iterative pruning while reducing training costs.

  • Optimized Pruned LLMs: We release Llama-3.1-6.3B-It-Alpaca and Llama-3.1-6.3B-It-Dolly, which are obtained through direct pruning of Llama-3.1-8B-Instruct. Our pruned models require up to $10^{6}\times$ fewer training tokens compared to training from scratch, while still comparing favorably to various popular community LLMs of similar size, such as ChatGLM2-6B (GLM et al., 2024), Vicuna-7B-v1.5 (Zheng et al., 2024), Qwen1.5-7B (Yang et al., 2024a) and Baichuan2-7B (Baichuan, 2023).

2 Related Work

LLM Layer Pruning. LLM layer pruning is a technique used to reduce the number of layers in LLMs, aiming to lower computational costs without significantly degrading performance. Specifically, it evaluates the contribution of each layer to the model's overall performance, using criteria such as gradients, activation values, parameter weights, or the layer's influence on the loss function. Layers that contribute the least are then pruned to reduce complexity. For example, LaCo (Yang et al., 2024b) achieves rapid model size reduction by folding subsequent layers into the previous layer, effectively preserving the model structure. Similarly, MKA (Liu et al., 2024b) uses manifold learning and the Normalized Pairwise Information Bottleneck measure (Tishby et al., 2000) to identify the most similar layers for merging. ShortGPT (Men et al., 2024) uses Block Influence (BI) to measure the importance of each layer in LLMs and removes layers with low BI scores. Kim et al. (2024) utilize Magnitude, Taylor and Perplexity (PPL) to evaluate the significance of each layer.

Differences from Traditional Layer Pruning. Unlike traditional Deep Neural Networks (DNNs) (Szegedy et al., 2014; Simonyan & Zisserman, 2015; He et al., 2015; Dosovitskiy et al., 2021; Liu et al., 2021), typically trained for a single, specific task, LLMs are designed to handle a wide range of tasks and are structured with billions of parameters. These differences in model scale and task complexity fundamentally alter the challenges associated with layer pruning. For example, in traditional DNN layer pruning (Chen & Zhao, 2018; Wang et al., 2019; Lu et al., 2022; Tang et al., 2023; Guenter & Sideris, 2024), assessing the importance of each layer is relatively straightforward, as it is tied to a single task. In contrast, the parameters of LLMs are optimized across diverse tasks, complicating the evaluation of layer importance. Furthermore, traditional DNN pruning commonly involves full-parameter fine-tuning after pruning, while LLMs often employ Parameter-Efficient Fine-Tuning (PEFT) techniques (Hu et al., 2021; Meng et al., 2024; Zhao et al., 2024b; Dettmers et al., 2024) such as Low-Rank Adaptation (LoRA) (Hu et al., 2021) to accommodate their massive parameter space. Consequently, traditional DNN pruning methods may not adequately address the unique challenges posed by LLMs, highlighting the need for specialized pruning strategies.

Exploration of LLM Pruning. Although recent research focuses on developing sophisticated pruning methods (Kim et al., 2024; Ma et al., 2023a; Men et al., 2024; Liu et al., 2024c;b; Yang et al., 2024b; Zhong et al., 2024), few studies (Jaiswal et al., 2023; Williams & Aletras, 2024; Muralidharan et al., 2024) take a step back and revisit existing LLM pruning techniques. For example, Jaiswal et al. (2023) re-evaluate the effectiveness of existing state-of-the-art pruning methods with PPL. Williams & Aletras (2024) systematically investigate how the calibration dataset impacts the effectiveness of model compression methods. Muralidharan et al. (2024) develop a set of practical best practices for LLMs that combine layer, width, attention and MLP pruning with knowledge distillation-based retraining. However, these methods either do not consider layer pruning or lack a comprehensive comparison. In contrast, we systematically validate different layer selection metrics, fine-tuning techniques, and pruning strategies to provide a thorough evaluation.

3 Background and Notation

3.1 Problem Formulation for Layer Pruning

An LLM $\mathcal{M}$ consists of multiple Transformer layers $L=\{l_{1},l_{2},\cdots,l_{n}\}$, each containing a pair of multi-head attention and feed-forward network modules:

$$\mathcal{M}=l_{1}\circ l_{2}\circ\cdots\circ l_{n},\qquad(1)$$

Layer pruning aims to find a subset of layers $L^{\prime}\subseteq L$ such that the pruned model $\mathcal{M}^{\prime}$ maintains acceptable performance while reducing the model's complexity, which can be formalized as:

$$\text{Minimize}\;\;\mathcal{C}(\mathcal{M}^{\prime}),\quad\text{s.t.}\;\;P(\mathcal{M}^{\prime})\geq\alpha\times P(\mathcal{M}),\quad L^{\prime}\subseteq L,\qquad(2)$$

where $\mathcal{C}(\mathcal{M}^{\prime})$ denotes the complexity of the pruned model, which can be quantified in terms of the number of parameters, FLOPs, or inference time, etc., $\alpha$ is a hyperparameter (e.g., $\alpha=0.9$) that defines the acceptable performance degradation, and $P(\cdot)$ represents the performance on given tasks. Numerous methods have proposed various metrics to identify and prune unimportant layers. Herein, we include 7 popular metrics:

Random Selection. For the random selection baseline, we randomly select several layers to prune.

Reverse-order. This metric (Men et al., 2024) posits that importance is inversely proportional to the sequence order. It assigns lower importance scores to the deeper layers and prunes them.
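As a concrete illustration, the sketch below removes the last several decoder layers of a Hugging Face Llama-style checkpoint. The attribute path `model.model.layers` and the config field `num_hidden_layers` follow the Transformers convention for Llama models; they are assumptions about the loading code, not the paper's released implementation.

```python
import torch
from transformers import AutoModelForCausalLM

def prune_last_layers(model, num_to_prune: int):
    """Reverse-order pruning: drop the last `num_to_prune` decoder layers."""
    layers = model.model.layers                       # ModuleList of transformer blocks
    keep = len(layers) - num_to_prune
    model.model.layers = torch.nn.ModuleList(layers[:keep])
    model.config.num_hidden_layers = keep             # keep the config consistent
    return model

if __name__ == "__main__":
    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-3.1-8B-Instruct", torch_dtype=torch.bfloat16
    )
    model = prune_last_layers(model, num_to_prune=8)  # 8 of 32 layers = 25% pruning ratio
    print(model.config.num_hidden_layers)             # 24 layers remain
```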

Magnitude. It was first introduced by Li et al. (2016) and subsequently adopted by Kim et al. (2024), which assumes that weights exhibiting smaller magnitudes are deemed less informative. Following Kim et al. (2024), we compute $I_{\text{Magnitude}}^{n}=\sum_{k}\|W_{k}^{n}\|_{p}$, where $W_{k}^{n}$ denotes the weight matrix of operation $k$ within the $n$-th transformer layer. In this paper, we uniformly set $p=\{1,2\}$. As a result, we term these methods Magnitude-l1 and Magnitude-l2.
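A minimal sketch of this score is shown below; it sums the $\ell_p$ norm of every linear weight inside each transformer layer, again assuming the Llama-style `model.model.layers` layout.

```python
import torch

def magnitude_scores(model, p: int = 1):
    """I_Magnitude^n = sum_k ||W_k^n||_p over the linear operations of layer n."""
    scores = []
    for layer in model.model.layers:
        score = 0.0
        for module in layer.modules():
            if isinstance(module, torch.nn.Linear):
                score += module.weight.detach().float().norm(p=p).item()
        scores.append(score)
    return scores  # layers with the smallest scores are pruned first
```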

Taylor. For a given calibration dataset $D$, the significance of removing weight parameters is indicated by the change in training loss: $\mathcal{L}:=|\mathcal{L}(W_{k}^{n},D)-\mathcal{L}(W_{k}^{n}=0,D)|\approx|\frac{\partial\mathcal{L}(D)}{\partial W_{k}^{n}}W_{k}^{n}|$. Following Ma et al. (2023a); Kim et al. (2024), we omit the second-order derivatives in this assessment. We then define the Taylor score of the $n$-th transformer layer as $I_{\text{Taylor}}^{n}=\sum_{k}|\frac{\partial\mathcal{L}(D)}{\partial W_{k}^{n}}W_{k}^{n}|$.
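The sketch below computes this first-order score from a handful of calibration batches; it assumes a Hugging Face causal LM whose forward pass returns a `.loss` when `labels` are supplied, and the batch construction is elided.

```python
import torch

def taylor_scores(model, calib_batches, device="cuda"):
    """I_Taylor^n = sum_k |dL/dW_k^n * W_k^n|, accumulated over calibration batches."""
    model.to(device).train()
    model.zero_grad()
    for input_ids in calib_batches:                    # token-id tensors from the calibration set
        input_ids = input_ids.to(device)
        loss = model(input_ids=input_ids, labels=input_ids).loss
        loss.backward()                                # accumulate gradients
    scores = []
    for layer in model.model.layers:
        score = 0.0
        for param in layer.parameters():
            if param.grad is not None:
                score += (param.grad * param).abs().sum().item()
        scores.append(score)
    model.zero_grad()
    return scores
```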

PPL. Following Kim et al. (2024), we remove a single layer and assess its impact on the perplexity of the pruned model using the calibration dataset $D$. We then prune those layers that lead to a smaller degradation of the PPL.

BI. Men et al. (2024) introduce a metric called Block Influence (BI) as an effective indicator of layer importance. Specifically, the BI score of the $i$-th layer can be calculated as follows:

$$\mathrm{BI}_{i}=1-\mathbb{E}_{X,t}\,\frac{X_{i,t}^{T}X_{i+1,t}}{\|X_{i,t}\|_{2}\,\|X_{i+1,t}\|_{2}},\qquad(3)$$

where $X_{i}$ denotes the input of the $i$-th layer and $X_{i,t}$ is the $t$-th row of $X_{i}$.
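Eq. (3) can be estimated from the hidden states returned by a Transformers forward pass, as in the hedged sketch below; `hidden_states[i]` is taken to be the input of layer $i$, which matches the Transformers convention for decoder-only models.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def block_influence(model, calib_batches, device="cuda"):
    """BI_i = 1 - E_{X,t} cos(X_{i,t}, X_{i+1,t}); low BI marks a pruning candidate."""
    model.to(device).eval()
    num_layers = model.config.num_hidden_layers
    sims = [[] for _ in range(num_layers)]
    for input_ids in calib_batches:                    # token-id tensors from the calibration set
        out = model(input_ids=input_ids.to(device), output_hidden_states=True)
        h = out.hidden_states                          # h[i]: input of layer i, h[i+1]: its output
        for i in range(num_layers):
            cos = F.cosine_similarity(h[i], h[i + 1], dim=-1)   # per-token cosine similarity
            sims[i].append(cos.mean().item())
    return [1.0 - sum(s) / len(s) for s in sims]
```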

3.2 Evaluation and Datasets

To assess the performance of the model, we follow the evaluation of Ma et al. (2023a) and perform zero-shot task classification on 8 common sense reasoning datasets using the lm-evaluation-harness (Gao et al., 2023) package: MMLU (Hendrycks et al., 2021), CMMLU (Li et al., 2023), PIQA (Bisk et al., 2020), HellaSwag (Zellers et al., 2019), WinoGrande (Sakaguchi et al., 2021), ARC-easy (Clark et al., 2018), ARC-challenge (Clark et al., 2018) and OpenbookQA (Mihaylov et al., 2018). Additionally, we evaluate the model using perplexity on the WikiText2 (Merity et al., 2016) and Penn Treebank (PTB) (Marcus et al., 1993) datasets. For the PPL metric, we follow Ma et al. (2023a); Muralidharan et al. (2024) and use WikiText2 for calculation. Following Ma et al. (2023a), we randomly select 10 samples from BookCorpus (Zhu et al., 2015) to compute Taylor and BI, truncating each sample to a sequence length of 128. Unless otherwise specified, we utilize Alpaca-cleaned (Taori et al., 2023) with LoRA to recover the performance. Uniformly, we set the number of training epochs to 2 and the batch size to 64. All experiments are conducted on 2 NVIDIA A100 GPUs with 40 GB of memory and 4 NVIDIA RTX A5000 GPUs with 24 GB of memory.
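For reference, the perplexity evaluation can be reproduced with a short script along the following lines; the `wikitext-2-raw-v1` name is the standard Hugging Face dataset config, while the fixed 2048-token chunking is a simplification of the harness's striding logic.

```python
import math
import torch
from datasets import load_dataset

@torch.no_grad()
def wikitext2_perplexity(model, tokenizer, seq_len=2048, device="cuda"):
    """Average the causal LM loss over fixed-length chunks of the WikiText2 test split."""
    test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
    ids = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids.to(device)
    model.to(device).eval()
    total_nll, total_tokens = 0.0, 0
    for start in range(0, ids.size(1) - seq_len, seq_len):
        chunk = ids[:, start:start + seq_len]
        loss = model(input_ids=chunk, labels=chunk).loss   # mean negative log-likelihood
        total_nll += loss.item() * seq_len
        total_tokens += seq_len
    return math.exp(total_nll / total_tokens)
```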

4 An Empirical Exploration of LLM Layer Pruning

This paper aims to contribute to the community the best practices of layer pruning so that practitioners can prune an LLM to an affordable size and desired performance with minimal exploration effort. Specifically, we expand on three aspects. First, we explore which metric is most effective for identifying unimportant layers, helping researchers make informed choices. Then, we investigate which fine-tuning method most effectively restores model performance after pruning. Finally, we delve deeper into various pruning strategies and ask whether iterative pruning will outperform one-shot pruning.

Table 1: Zero-shot performance of pruned models under different layer selection metrics (25% pruning ratio; fine-tuned with LoRA on Alpaca-cleaned).

| Model | Metric | PIQA | HellaSwag | OpenbookQA | ARC-e | ARC-c | MMLU | CMMLU | WinoGrande | Avg Acc |
| Vicuna-7B-v1.5 | Dense | 0.7720±0.0098 | 0.5642±0.0049 | 0.3300±0.0210 | 0.7555±0.0088 | 0.4326±0.0145 | 0.4858±0.0040 | 0.3518±0.0044 | 0.6953±0.0129 | 0.5484 |
| | Reverse-order | 0.7171±0.0105 | 0.5005±0.0050 | 0.2608±0.0198 | 0.6221±0.0099 | 0.3848±0.0142 | 0.4737±0.0041 | 0.3417±0.0044 | 0.6267±0.0136 | 0.4909 |
| | Random | 0.5223±0.0117 | 0.2607±0.0044 | 0.1380±0.0154 | 0.2614±0.0090 | 0.2176±0.0121 | 0.2295±0.0035 | 0.2500±0.0040 | 0.4672±0.0140 | 0.2933 |
| | PPL | 0.7361±0.0103 | 0.4734±0.0050 | 0.2760±0.0200 | 0.6705±0.0096 | 0.3456±0.0139 | 0.2943±0.0038 | 0.2569±0.0041 | 0.5896±0.0138 | 0.4553 |
| | Magnitude-l1 | 0.5299±0.0116 | 0.2586±0.0044 | 0.1440±0.0157 | 0.2609±0.0090 | 0.2253±0.0122 | 0.2297±0.0035 | 0.2514±0.0040 | 0.4893±0.0140 | 0.2986 |
| | Magnitude-l2 | 0.5256±0.0117 | 0.2578±0.0044 | 0.1340±0.0152 | 0.2622±0.0090 | 0.2108±0.0119 | 0.2295±0.0035 | 0.2515±0.0040 | 0.4838±0.0140 | 0.2944 |
| | BI | 0.6910±0.0108 | 0.3987±0.0049 | 0.2100±0.0182 | 0.5829±0.0101 | 0.2654±0.0129 | 0.2389±0.0036 | 0.2513±0.0040 | 0.5036±0.0141 | 0.3927 |
| | Taylor | 0.5250±0.0117 | 0.2581±0.0044 | 0.1360±0.0153 | 0.2584±0.0090 | 0.2048±0.0118 | 0.2318±0.0036 | 0.2526±0.0040 | 0.4972±0.0141 | 0.2955 |
| Qwen1.5-7B | Dense | 0.7845±0.0096 | 0.5785±0.0049 | 0.3160±0.0208 | 0.7125±0.0093 | 0.4053±0.0143 | 0.5967±0.0039 | 0.7277±0.0039 | 0.6575±0.0133 | 0.5973 |
| | Reverse-order | 0.6942±0.0107 | 0.4444±0.0050 | 0.2280±0.0188 | 0.5143±0.0103 | 0.3302±0.0137 | 0.5101±0.0041 | 0.7171±0.0040 | 0.5912±0.0138 | 0.5037 |
| | Random | 0.5408±0.0116 | 0.2682±0.0044 | 0.1240±0.0148 | 0.2630±0.0090 | 0.2039±0.0118 | 0.2366±0.0076 | 0.2457±0.0040 | 0.4807±0.0140 | 0.2954 |
| | PPL | 0.7089±0.0106 | 0.4195±0.0049 | 0.2240±0.0187 | 0.5960±0.0101 | 0.2944±0.0133 | 0.2457±0.0036 | 0.2552±0.0041 | 0.5185±0.0140 | 0.4078 |
| | Magnitude-l1 | 0.6578±0.0111 | 0.3989±0.0049 | 0.2040±0.0180 | 0.5244±0.0102 | 0.2901±0.0133 | 0.2574±0.0037 | 0.2541±0.0041 | 0.5249±0.0140 | 0.3890 |
| | Magnitude-l2 | 0.5903±0.0115 | 0.3657±0.0048 | 0.1640±0.0166 | 0.4630±0.0102 | 0.2381±0.0124 | 0.2502±0.0037 | 0.2513±0.0040 | 0.5312±0.0140 | 0.3567 |
| | BI | 0.7220±0.0105 | 0.4190±0.0049 | 0.2440±0.0192 | 0.5972±0.0101 | 0.2671±0.0129 | 0.2456±0.0036 | 0.2536±0.0040 | 0.5383±0.0140 | 0.4190 |
| | Taylor | 0.6970±0.0107 | 0.4284±0.0049 | 0.2060±0.0181 | 0.5160±0.0103 | 0.3140±0.0136 | 0.5231±0.0041 | 0.6079±0.0043 | 0.6046±0.0137 | 0.4871 |
| Gemma2-2B-It | Dense | 0.7867±0.0096 | 0.5367±0.0050 | 0.3560±0.0214 | 0.8085±0.0081 | 0.5111±0.0146 | 0.5687±0.0039 | 0.4499±0.0045 | 0.6961±0.0129 | 0.5892 |
| | Reverse-order | 0.7029±0.0107 | 0.4529±0.0050 | 0.2660±0.0198 | 0.6343±0.0099 | 0.3763±0.0142 | 0.5261±0.0040 | 0.4117±0.0045 | 0.6551±0.0134 | 0.5032 |
| | Random | 0.7307±0.0104 | 0.4462±0.0050 | 0.2860±0.0202 | 0.6852±0.0095 | 0.3422±0.0139 | 0.3452±0.0040 | 0.2893±0.0042 | 0.5833±0.0139 | 0.4635 |
| | PPL | 0.7454±0.0102 | 0.4611±0.0050 | 0.2940±0.0204 | 0.7008±0.0094 | 0.3609±0.0140 | 0.3503±0.0040 | 0.2838±0.0042 | 0.5825±0.0139 | 0.4724 |
| | Magnitude-l1 | 0.7481±0.0101 | 0.4530±0.0050 | 0.3040±0.0206 | 0.7239±0.0092 | 0.3729±0.0141 | 0.2703±0.0037 | 0.2514±0.0040 | 0.5596±0.0140 | 0.4604 |
| | Magnitude-l2 | 0.7225±0.0104 | 0.4245±0.0049 | 0.2380±0.0191 | 0.6561±0.0097 | 0.3038±0.0134 | 0.2413±0.0036 | 0.2258±0.0041 | 0.5493±0.0140 | 0.4202 |
| | BI | 0.6921±0.0108 | 0.4272±0.0049 | 0.2700±0.0199 | 0.6511±0.0098 | 0.3703±0.0141 | 0.4968±0.0040 | 0.3851±0.0045 | 0.6661±0.0133 | 0.4948 |
| | Taylor | 0.7002±0.0107 | 0.4541±0.0050 | 0.3020±0.0206 | 0.6359±0.0099 | 0.3695±0.0141 | 0.5431±0.0040 | 0.4048±0.0045 | 0.6488±0.0134 | 0.5073 |
| Llama-3.1-8B-It | Dense | 0.8003±0.0093 | 0.5910±0.0049 | 0.3380±0.0212 | 0.8182±0.0079 | 0.5179±0.0146 | 0.6790±0.0038 | 0.5552±0.0045 | 0.7395±0.0123 | 0.6299 |
| | Reverse-order | 0.7002±0.0107 | 0.4010±0.0049 | 0.2940±0.0204 | 0.6170±0.0100 | 0.3985±0.0143 | 0.6342±0.0039 | 0.5449±0.0045 | 0.6243±0.0136 | 0.5268 |
| | Random | 0.5653±0.0116 | 0.2886±0.0045 | 0.1400±0.0155 | 0.3169±0.0095 | 0.1860±0.0114 | 0.2275±0.0035 | 0.2559±0.0041 | 0.5075±0.0141 | 0.3110 |
| | PPL | 0.7628±0.0099 | 0.4931±0.0050 | 0.2640±0.0197 | 0.7290±0.0091 | 0.3805±0.0142 | 0.3367±0.0040 | 0.2724±0.0041 | 0.5793±0.0139 | 0.4772 |
| | Magnitude-l1 | 0.5408±0.0116 | 0.2634±0.0044 | 0.1360±0.0153 | 0.2845±0.0093 | 0.2014±0.0117 | 0.2504±0.0037 | 0.2503±0.0040 | 0.4878±0.0140 | 0.3018 |
| | Magnitude-l2 | 0.5413±0.0116 | 0.2638±0.0044 | 0.1340±0.0152 | 0.2841±0.0093 | 0.2014±0.0117 | 0.2498±0.0036 | 0.2504±0.0040 | 0.4870±0.0140 | 0.3015 |
| | BI | 0.7176±0.0105 | 0.4196±0.0049 | 0.2020±0.0180 | 0.6107±0.0100 | 0.2841±0.0132 | 0.2417±0.0036 | 0.2494±0.0040 | 0.5391±0.0140 | 0.4080 |
| | Taylor | 0.7138±0.0105 | 0.4964±0.0050 | 0.2740±0.0200 | 0.6848±0.0095 | 0.4181±0.0144 | 0.2861±0.0038 | 0.2504±0.0040 | 0.7135±0.0127 | 0.4796 |

4.1 Are fancy metrics essential for identifying redundant layers to prune?

The first question is to find the most “redundant” layers to prune. As discussed in Section 3.1, there are various metrics for layer selection, which can be as straightforward as reverse-order, or as complicated as BI. However, does a complicated metric always contribute to a better performance? Probably not. We find that a simple metric, i.e., reverse-order, is competitive among these metrics.

Specifically, we conduct comprehensive experiments on Vicuna-7B-v1.5 (Zheng et al., 2024), Qwen1.5-7B (Yang et al., 2024a), Gemma2-2B-Instruct (Team, 2024) and Llama-3.1-8B-Instruct (Dubey et al., 2024). We uniformly prune 8 layers (a 25% pruning ratio) for Vicuna-7B-v1.5, Qwen1.5-7B and Llama-3.1-8B-Instruct, and 6 layers for Gemma2-2B-Instruct. Experiments with a 50% pruning ratio (12 layers for Gemma2-2B-Instruct and 16 layers for the others) are provided in Table A. In the fine-tuning stage, we use LoRA with a rank $d$ of 8, a batch size of 64, and the AdamW optimizer. The learning rate is set to $1\times10^{-5}$ with 100 warmup steps.
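In the peft/transformers ecosystem, this fine-tuning configuration could be sketched roughly as follows; the target module names, LoRA alpha, dropout, and the per-device batch/accumulation split are assumptions not specified in the paper, and the SFT data pipeline is elided.

```python
from peft import LoraConfig, get_peft_model
from transformers import Trainer, TrainingArguments

lora_config = LoraConfig(
    r=8,                                    # LoRA rank d = 8, as in this section
    lora_alpha=16,                          # assumed scaling factor
    lora_dropout=0.05,                      # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # Llama attention projections
    task_type="CAUSAL_LM",
)

def lora_recover(pruned_model, train_dataset, data_collator):
    model = get_peft_model(pruned_model, lora_config)
    args = TrainingArguments(
        output_dir="lora-pruned",
        num_train_epochs=2,
        per_device_train_batch_size=8,
        gradient_accumulation_steps=8,      # effective batch size 64
        learning_rate=1e-5,
        warmup_steps=100,
        bf16=True,
    )
    Trainer(model=model, args=args, train_dataset=train_dataset,
            data_collator=data_collator).train()
    return model.merge_and_unload()         # fold the LoRA updates back into the base weights
```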

Results. As shown in Table 1, we find that the reverse-order metric delivers stable and superior results across various models under the 25% pruning rate, making it a reliable choice for pruning. On average, it outperforms the second-best PPL metric by 5.30% across four models. The result also holds for the 50% pruning rate, as shown in Table A. We hope our insights can help researchers make informed choices when selecting the most suitable pruning metrics for their specific models.

4.2 Is the LoRA family the best choice for post-pruning fine-tuning?

In previous studies (Kim et al., 2024; Men et al., 2024), LoRA is often used to restore the performance of pruned models. This raises a question: Is the LoRA family the best choice for post-pruning fine-tuning? To answer this question, we further use QLoRA (Dettmers et al., 2024) and partial-layer fine-tuning techniques to conduct experiments. We briefly introduce these methods as follows:

LoRA Fine-tuning. LoRA is one of the best-performing parameter-efficient fine-tuning paradigms; it updates dense model layers using pluggable low-rank matrices (Mao et al., 2024). Specifically, for a pre-trained weight matrix $W_{0}$, LoRA constrains its update by representing the latter with a low-rank decomposition $W_{0}+\Delta W=W_{0}+BA$. At the beginning of training, $A$ is initialized with a random Gaussian initialization, while $B$ is initialized to zero. During training, $W_{0}$ is frozen and does not receive gradient updates, while $A$ and $B$ contain trainable parameters. The forward pass can then be formalized as:

$$W_{0}x+\Delta Wx=W_{0}x+BAx.\qquad(4)$$
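A minimal PyTorch module implementing Eq. (4) is sketched below (no dropout or bias handling); the Kaiming initialization of $A$ and the $\alpha/r$ scaling follow the common LoRA recipe rather than anything specified in this paper.

```python
import math
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W0 plus a trainable low-rank update BA, as in Eq. (4)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)                    # W0 is frozen
        self.A = nn.Parameter(torch.empty(r, base.in_features))
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # B = 0, so the update starts at zero
        nn.init.kaiming_uniform_(self.A, a=math.sqrt(5))          # random init for A
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling  # W0 x + B A x
```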

QLoRA Fine-tuning. QLoRA builds on LoRA by incorporating quantization techniques to further reduce memory usage while maintaining, or even enhancing, performance.

Partial-layer Fine-tuning. Compared to LoRA and QLoRA, which inject trainable low-rank factorization matrices into each layer, partial-layer fine-tuning simply freezes the weights of some layers while updating only the specified layers, saving computing resources and time (Shen et al., 2021; Ngesthi et al., 2021; Peng & Wang, 2020). Following the common practice of previous studies (Khan & Fang, 2023), we choose to fine-tune only the later layers that are closer to the output, while keeping the earlier layers, which capture more general features, frozen. Specifically, we use two different fine-tuning strategies: one is to fine-tune only the model head (lm_head only), and the other is to fine-tune the lm_head plus the last layer (lm_head + last layer), the last two layers (lm_head + last two layers), or the last three layers (lm_head + last three layers).
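Under the Llama-style module layout assumed earlier, the freezing step could be implemented roughly as follows; note that this does not directly apply to models with tied input/output embeddings such as Gemma2-2B-Instruct, as discussed later in this section.

```python
def freeze_for_partial_layer_ft(model, num_last_layers: int = 3):
    """Freeze all weights except lm_head and the last `num_last_layers` decoder layers."""
    for param in model.parameters():
        param.requires_grad_(False)
    for param in model.lm_head.parameters():              # output head stays trainable
        param.requires_grad_(True)
    for layer in model.model.layers[-num_last_layers:]:   # last few remaining layers
        for param in layer.parameters():
            param.requires_grad_(True)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"trainable parameters: {trainable / 1e6:.2f}M")
    return model
```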

Table 2: Zero-shot performance of reverse-order-pruned models under different fine-tuning methods.

| Model | Method | Layer | PIQA | HellaSwag | OpenbookQA | ARC-e | ARC-c | MMLU | CMMLU | WinoGrande | Avg Acc |
| Vicuna-7B-v1.5 | LoRA | - | 0.7171±0.0105 | 0.5005±0.0050 | 0.2608±0.0198 | 0.6221±0.0099 | 0.3848±0.0142 | 0.4737±0.0041 | 0.3417±0.0044 | 0.6267±0.0136 | 0.4909 |
| | QLoRA | - | 0.6649±0.0110 | 0.4057±0.0049 | 0.2700±0.0199 | 0.5345±0.0102 | 0.3439±0.0139 | 0.4809±0.0041 | 0.3473±0.0044 | 0.6014±0.0138 | 0.4561 |
| | Partial-layer | lm_head only | 0.7057±0.0106 | 0.4865±0.0050 | 0.2880±0.0203 | 0.6301±0.0099 | 0.4010±0.0143 | 0.4819±0.0041 | 0.3520±0.0044 | 0.6156±0.0137 | 0.4951 |
| | | lm_head + last layer | 0.7155±0.0105 | 0.5054±0.0050 | 0.2900±0.0203 | 0.6511±0.0098 | 0.4113±0.0144 | 0.4831±0.0041 | 0.3538±0.0044 | 0.6283±0.0136 | 0.5048 |
| | | lm_head + last two layers | 0.7214±0.0105 | 0.5060±0.0050 | 0.3020±0.0206 | 0.6532±0.0098 | 0.4002±0.0143 | 0.4858±0.0041 | 0.3530±0.0044 | 0.6267±0.0136 | 0.5060 |
| | | lm_head + last three layers | 0.7247±0.0104 | 0.5103±0.0050 | 0.2960±0.0204 | 0.6528±0.0098 | 0.3985±0.0143 | 0.4870±0.0040 | 0.3544±0.0044 | 0.6219±0.0136 | 0.5057 |
| Qwen1.5-7B | LoRA | - | 0.6942±0.0107 | 0.4444±0.0050 | 0.2280±0.0188 | 0.5143±0.0103 | 0.3302±0.0137 | 0.5101±0.0041 | 0.7171±0.0040 | 0.5912±0.0138 | 0.5037 |
| | QLoRA | - | 0.6697±0.0110 | 0.4028±0.0049 | 0.2400±0.0191 | 0.4760±0.0102 | 0.2969±0.0134 | 0.4797±0.0041 | 0.6914±0.0041 | 0.5825±0.0139 | 0.4799 |
| | Partial-layer | lm_head only | 0.7149±0.0105 | 0.4735±0.0050 | 0.2460±0.0193 | 0.5497±0.0102 | 0.3524±0.0140 | 0.5467±0.0040 | 0.7276±0.0039 | 0.5967±0.0138 | 0.5259 |
| | | lm_head + last layer | 0.7220±0.0105 | 0.4850±0.0050 | 0.2440±0.0192 | 0.5690±0.0102 | 0.3549±0.0140 | 0.5719±0.0040 | 0.7283±0.0039 | 0.6275±0.0136 | 0.5378 |
| | | lm_head + last two layers | 0.7214±0.0105 | 0.4915±0.0050 | 0.2540±0.0195 | 0.5783±0.0101 | 0.3584±0.0140 | 0.5734±0.0040 | 0.7275±0.0039 | 0.6298±0.0136 | 0.5418 |
| | | lm_head + last three layers | 0.7296±0.0104 | 0.4974±0.0050 | 0.2520±0.0194 | 0.5808±0.0101 | 0.3618±0.0140 | 0.5795±0.0040 | 0.7272±0.0040 | 0.6275±0.0136 | 0.5445 |
| Llama-3.1-8B-It | LoRA | - | 0.7002±0.0107 | 0.4010±0.0049 | 0.2940±0.0204 | 0.6170±0.0100 | 0.3985±0.0143 | 0.6342±0.0039 | 0.5449±0.0045 | 0.6243±0.0136 | 0.5268 |
| | QLoRA | - | 0.6980±0.0107 | 0.3975±0.0049 | 0.3000±0.0205 | 0.6183±0.0100 | 0.3840±0.0142 | 0.6032±0.0039 | 0.5090±0.0045 | 0.6267±0.0136 | 0.5171 |
| | Partial-layer | lm_head only | 0.7334±0.0103 | 0.4896±0.0050 | 0.2860±0.0202 | 0.7012±0.0094 | 0.4411±0.0145 | 0.6122±0.0040 | 0.5442±0.0045 | 0.6717±0.0132 | 0.5599 |
| | | lm_head + last layer | 0.7350±0.0103 | 0.5107±0.0050 | 0.2940±0.0204 | 0.7193±0.0092 | 0.4531±0.0145 | 0.6630±0.0038 | 0.5526±0.0045 | 0.6582±0.0133 | 0.5732 |
| | | lm_head + last two layers | 0.7361±0.0103 | 0.5204±0.0050 | 0.3080±0.0207 | 0.7151±0.0093 | 0.4633±0.0146 | 0.6588±0.0038 | 0.5543±0.0045 | 0.6567±0.0133 | 0.5766 |
| | | lm_head + last three layers | 0.7383±0.0103 | 0.5323±0.0050 | 0.3080±0.0207 | 0.7260±0.0092 | 0.4684±0.0146 | 0.6567±0.0038 | 0.5515±0.0045 | 0.6646±0.0133 | 0.5807 |

Table 3: LoRA vs. partial-layer fine-tuning on the full (unpruned) Llama-3.1-8B-It.

| Method | PIQA | HellaSwag | OpenbookQA | ARC-e | ARC-c | MMLU | CMMLU | WinoGrande | Avg Acc |
| Dense | 0.8003±0.0093 | 0.5910±0.0049 | 0.3380±0.0212 | 0.8182±0.0079 | 0.5179±0.0146 | 0.6790±0.0038 | 0.5552±0.0045 | 0.7395±0.0123 | 0.6299 |
| lm_head + last three layers | 0.7998±0.0093 | 0.6057±0.0049 | 0.3520±0.0214 | 0.8186±0.0079 | 0.5316±0.0146 | 0.6784±0.0038 | 0.5522±0.0045 | 0.7316±0.0125 | 0.6337 |
| LoRA | 0.8047±0.0092 | 0.6007±0.0049 | 0.3500±0.0214 | 0.8287±0.0077 | 0.5316±0.0146 | 0.6764±0.0038 | 0.5530±0.0045 | 0.7380±0.0124 | 0.6354 |

Table 4: Training costs of different fine-tuning methods on the pruned Llama-3.1-8B-Instruct (8 layers removed in reverse order).

| Method | Trainable parameters | GPU memory | Training time (2 epochs) |
| LoRA | 15.73M | 45.83G | 10440.30s |
| QLoRA | 15.73M | 14.26G | 17249.01s |
| lm_head only | 525.34M | 39.82G | 6952.92s |
| lm_head + last layer | 743.45M | 42.12G | 7296.76s |
| lm_head + last two layers | 961.56M | 44.41G | 7616.83s |
| lm_head + last three layers | 1179.68M | 48.02G | 7931.36s |

In view of the superiority of the reverse-order metric in Section 4.1, we use it for pruning here. For the Vicuna-7B-v1.5, Qwen1.5-7B, and Llama-3.1-8B-Instruct models, we prune 8 layers. For the Gemma2-2B-Instruct model, we prune 6 layers. Subsequently, we utilize LoRA, QLoRA and partial-layer fine-tuning methods to restore performance. We provide more results of fine-tuning with the Taylor metric in Table B. In particular, because Gemma2-2B-Instruct employs weight tying (Press & Wolf, 2016) to share the weights between the embedding layer and the softmax layer (lm_head), we exclude partial-layer fine-tuning for Gemma2-2B-Instruct. For fine-tuning with LoRA and partial-layer methods, we utilize the AdamW optimizer, while for QLoRA, we opt for the paged_adamw_8bit optimizer. All other hyperparameter settings are the same as in Section 4.1.

Results. As shown in Table 2 and Table B, we find that fine-tuning with QLoRA slightly hurts the performance of pruned models compared to LoRA. Excitingly, the effect of partial-layer fine-tuning is significantly better than LoRA, providing a viable new direction for fine-tuning models after pruning. In the ablation study, we compare the performance of LoRA with partial-layer fine-tuning for the full model in Table 3, which shows that partial-layer fine-tuning and LoRA perform similarly. This suggests that the conventional insights for full-model fine-tuning do not hold after pruning, i.e., the structural changes and parameter reduction of the model enable partial-layer fine-tuning to adapt more effectively to the new parameter distribution and fully leverage the potential benefits of pruning. When considering fine-tuning methods for LLMs, in addition to performance, the training cost is also a significant factor to take into account. Therefore, we compare the training cost of these fine-tuning methods, including training time, GPU memory and trainable parameters. Specifically, we conduct experiments on 2 idle NVIDIA A100 GPUs using the pruned Llama-3.1-8B-Instruct model (with 8 layers removed in reverse order). Table 4 shows the comparison among these fine-tuning methods. We find that compared to LoRA, partial-layer fine-tuning involves more trainable parameters but maintains comparable GPU usage and achieves faster training. Additionally, partial-layer fine-tuning outperforms LoRA in effectiveness. In contrast, although QLoRA consumes less GPU memory, it has a much longer training time and yields poorer performance. In summary, we conclude that partial-layer fine-tuning is an effective approach to restoring the performance of pruned models when sufficient memory is available.

4.3 Will iterative pruning outperform one-shot pruning?

In this subsection, we provide insights into the optimal pruning strategy for LLMs. Although Muralidharan et al. (2024) have explored pruning strategies and concluded that iterative pruning offers no benefit, their study focuses on utilizing knowledge distillation (Hinton, 2015) for performance recovery. In contrast, this paper concentrates on layer pruning with LoRA and partial-layer fine-tuning, thereby broadening the scope of pruning strategies evaluated. We briefly introduce one-shot pruning and iterative pruning below:

One-shot Pruning. One-shot pruning scores the layers once and then prunes the model to the target pruning ratio.

Iterative Pruning. Iterative pruning repeats the score-prune-update cycle until the target pruning ratio is reached.

Specifically, we select Llama-3.1-8B-Instruct and Gemma2-2B-Instruct as the base models. For one-shot pruning, we prune 8 layers from Llama-3.1-8B-Instruct and 6 layers from Gemma2-2B-Instruct in a single step, guided by the reverse-order and Taylor metrics. For iterative pruning with LoRA, we begin by scoring all layers using these metrics. Subsequently, we set the pruning step to 1 and 4 for Llama-3.1-8B-Instruct, and 1 and 3 for Gemma2-2B-Instruct. After each pruning step, we fine-tune the model with LoRA and merge the LoRA weights back into the fine-tuned model. This score-prune-fine-tune-merge cycle is repeated until a total of 8 layers are pruned for Llama-3.1-8B-Instruct and 6 layers for Gemma2-2B-Instruct. For iterative pruning with partial-layer fine-tuning, we fine-tune the model using partial-layer fine-tuning (lm_head + last three layers) after each pruning step, and then repeat the score-prune-fine-tune cycle. To avoid the fine-tuned layers being pruned completely, we set the pruning step size to 1. All hyperparameter settings are the same as in Section 4.1. Experiments with iterative pruning of more layers are provided in Table C.
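The iterative procedure reduces to the loop sketched below (one-shot pruning is the special case where the step equals the total number of layers to remove); `score_fn` and `finetune_fn` are placeholders for the scoring metrics and the LoRA or partial-layer fine-tuning routines sketched earlier, not functions from the released code.

```python
import torch

def iterative_prune(model, total_to_prune: int, step: int, score_fn, finetune_fn):
    """Repeat the score-prune-fine-tune cycle until `total_to_prune` layers are removed."""
    pruned = 0
    while pruned < total_to_prune:
        k = min(step, total_to_prune - pruned)
        scores = score_fn(model)                        # lower score = less important layer
        victims = set(sorted(range(len(scores)), key=scores.__getitem__)[:k])
        kept = [l for i, l in enumerate(model.model.layers) if i not in victims]
        model.model.layers = torch.nn.ModuleList(kept)
        model.config.num_hidden_layers = len(kept)
        model = finetune_fn(model)                      # recover performance after each cut
        pruned += k
    return model
```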

Results. By comparing the results of iterative and one-shot pruning in Table 5 and Table C, we find that, unlike traditional CNN pruning, which often yields significant performance improvements through iterative pruning (Tan & Motani, 2020; He & Xiao, 2023), the iterative approach for LLMs may not provide the same benefits and can even lead to performance degradation. We believe this is because too much training causes the model to suffer from catastrophic forgetting (Zhai et al., 2024; Liu et al., 2024a). Figure B visualizes the representational similarity of different pruning strategies. From this, we observe that different pruning strategies yield significantly different representations, highlighting the impact of each strategy on the model's learned features. Besides, iterative pruning requires more computational overhead than one-shot pruning, which is not cost-effective given the limited performance gains.

Table 5: Comparison of one-shot and iterative pruning.

| Fine-tuning Method | Model | Metric | Iteration steps | PIQA | HellaSwag | OpenbookQA | ARC-e | ARC-c | MMLU | CMMLU | WinoGrande | Avg Acc |
| LoRA | Llama-3.1-8B-It | Reverse-order | one-shot | 0.7002±0.0107 | 0.4010±0.0049 | 0.2940±0.0204 | 0.6170±0.0100 | 0.3985±0.0143 | 0.6342±0.0039 | 0.5449±0.0045 | 0.6243±0.0136 | 0.5268 |
| | | | 1:4:8 | 0.7176±0.0105 | 0.4538±0.0050 | 0.2920±0.0204 | 0.6705±0.0096 | 0.4121±0.0144 | 0.6374±0.0039 | 0.5439±0.0045 | 0.6369±0.0135 | 0.5455 |
| | | | 1:1:8 | 0.7160±0.0105 | 0.4470±0.0050 | 0.2860±0.0202 | 0.6637±0.0097 | 0.4061±0.0144 | 0.6440±0.0039 | 0.5425±0.0045 | 0.6448±0.0135 | 0.5438 |
| | | Taylor | one-shot | 0.7138±0.0105 | 0.4964±0.0050 | 0.2740±0.0200 | 0.6848±0.0095 | 0.4181±0.0144 | 0.2861±0.0038 | 0.2504±0.0040 | 0.7135±0.0127 | 0.4796 |
| | | | 1:4:8 | 0.7149±0.0105 | 0.4991±0.0050 | 0.2480±0.0193 | 0.7071±0.0093 | 0.3951±0.0143 | 0.4676±0.0041 | 0.3480±0.0044 | 0.6709±0.0132 | 0.5063 |
| | | | 1:1:8 | 0.6921±0.0108 | 0.4728±0.0050 | 0.2140±0.0184 | 0.6675±0.0097 | 0.3891±0.0142 | 0.4576±0.0041 | 0.3511±0.0044 | 0.6519±0.0134 | 0.4870 |
| | Gemma2-2B-It | Reverse-order | one-shot | 0.7029±0.0107 | 0.4529±0.0050 | 0.2660±0.0198 | 0.6343±0.0099 | 0.3763±0.0142 | 0.5261±0.0040 | 0.4117±0.0045 | 0.6551±0.0134 | 0.5032 |
| | | | 1:3:6 | 0.6953±0.0107 | 0.4523±0.0050 | 0.2900±0.0203 | 0.6397±0.0099 | 0.3729±0.0141 | 0.5418±0.0040 | 0.4013±0.0045 | 0.6496±0.0134 | 0.5054 |
| | | | 1:1:6 | 0.7067±0.0106 | 0.4476±0.0050 | 0.2660±0.0198 | 0.6305±0.0099 | 0.3746±0.0141 | 0.5143±0.0040 | 0.4066±0.0045 | 0.6559±0.0134 | 0.5003 |
| | | Taylor | one-shot | 0.7002±0.0107 | 0.4541±0.0050 | 0.3020±0.0206 | 0.6359±0.0099 | 0.3695±0.0141 | 0.5431±0.0040 | 0.4048±0.0045 | 0.6488±0.0134 | 0.5073 |
| | | | 1:3:6 | 0.7057±0.0106 | 0.4473±0.0050 | 0.2380±0.0191 | 0.6553±0.0098 | 0.3490±0.0139 | 0.3697±0.0040 | 0.2884±0.0042 | 0.5927±0.0138 | 0.4558 |
| | | | 1:1:6 | 0.7236±0.0104 | 0.4544±0.0050 | 0.2860±0.0202 | 0.6574±0.0097 | 0.3490±0.0139 | 0.4763±0.0041 | 0.3801±0.0045 | 0.6306±0.0136 | 0.4947 |
| Partial-layer | Llama-3.1-8B-It | Reverse-order | one-shot | 0.7383±0.0103 | 0.5323±0.0050 | 0.3080±0.0207 | 0.7260±0.0092 | 0.4684±0.0146 | 0.6567±0.0038 | 0.5515±0.0045 | 0.6646±0.0133 | 0.5807 |
| | | | 1:1:8 | 0.7432±0.0102 | 0.5357±0.0050 | 0.2980±0.0205 | 0.7496±0.0089 | 0.4590±0.0146 | 0.6539±0.0038 | 0.5558±0.0045 | 0.6922±0.0130 | 0.5859 |
| | | Taylor | one-shot | 0.7345±0.0103 | 0.5290±0.0050 | 0.3020±0.0206 | 0.7399±0.0090 | 0.4360±0.0145 | 0.6277±0.0039 | 0.4763±0.0046 | 0.7151±0.0127 | 0.5701 |
| | | | 1:1:8 | 0.6300±0.0113 | 0.3553±0.0048 | 0.1760±0.0170 | 0.5177±0.0103 | 0.2756±0.0131 | 0.2611±0.0037 | 0.2557±0.0041 | 0.5312±0.0140 | 0.3753 |

Table 6: Effect of the number of calibration samples on the BI and Taylor metrics (pruned Llama-3.1-8B-Instruct).

| # Calibration Samples | WikiText2 PPL (BI) | WikiText2 PPL (Taylor) | PTB PPL (BI) | PTB PPL (Taylor) | Avg Acc (BI) | Avg Acc (Taylor) |
| 1 | 51.06 | 65.43 | 90.97 | 94.35 | 0.40 | 0.36 |
| 5 | 43.54 | 65.43 | 79.34 | 94.35 | 0.43 | 0.36 |
| 10 | 53.53 | 65.43 | 101.64 | 94.35 | 0.41 | 0.36 |
| 30 | 50.03 | 55.42 | 88.02 | 77.63 | 0.42 | 0.55 |
| 50 | 59.73 | 55.42 | 103.19 | 77.63 | 0.41 | 0.55 |

5 Sensitivity Analysis

In this section, we conduct sensitivity analyses on the number of calibration samples, the choice of SFT dataset and various pruning rates for LLM layer pruning.

The effect of the number of calibration samples on LLM layer pruning. It is worth noting that some data-driven layer pruning methods, such as BI and Taylor, rely upon calibration samples to generate layer activations. Therefore, we explore the effect of the number of calibration samples on pruning. Specifically, we calculate the BI and Taylor metrics using 1, 5, 10, 30, and 50 calibration samples, prune 8 layers based on these metrics, fine-tune the pruned Llama-3.1-8B-Instruct models using LoRA, and evaluate their performance through the lm-evaluation-harness package. For ease of comparison, we report the average accuracy on 8 datasets in the main text; for more details, see Table D. Besides, we report the model perplexity on the WikiText2 and Penn Treebank test sets. As shown in Table 6, we observe that the number of calibration samples does affect both the perplexity and the zero-shot performance of the pruned models, which suggests that for data-driven pruning methods, performance stability should also be considered a key criterion when evaluating the quality of a pruning technique.

Table 7: Effect of the SFT dataset used for post-pruning fine-tuning (Llama-3.1-8B-Instruct, reverse-order pruning, lm_head + last three layers).

| Dataset | PIQA | HellaSwag | OpenbookQA | ARC-e | ARC-c | MMLU | CMMLU | WinoGrande | Avg Acc |
| Dolly-15k | 0.7709±0.0098 | 0.5541±0.0050 | 0.3000±0.0205 | 0.7424±0.0090 | 0.4838±0.0146 | 0.6753±0.0038 | 0.5522±0.0045 | 0.7032±0.0128 | 0.5977 |
| Alpaca-cleaned | 0.7383±0.0103 | 0.5323±0.0050 | 0.3080±0.0207 | 0.7260±0.0092 | 0.4684±0.0146 | 0.6567±0.0038 | 0.5515±0.0045 | 0.6646±0.0133 | 0.5807 |
| MMLU | 0.6012±0.0114 | 0.2714±0.0044 | 0.1700±0.0168 | 0.3430±0.0097 | 0.2457±0.0126 | 0.5888±0.0040 | 0.5266±0.0045 | 0.5856±0.0138 | 0.4165 |

[Figure 2: effect of different pruning rates on the performance of Llama-3.1-8B-Instruct.]

The effect of SFT datasets on LLM layer pruning. In the previous sections, we uniformly utilize Alpaca-cleaned (Taori et al., 2023) to fine-tune the pruned models. Herein, we aim to assess how fine-tuning a pruned model using different SFT datasets affects its performance. Specifically, we conduct experiments using the reverse-order metric to remove 8 layers from Llama-3.1-8B-Instruct and fine-tune the pruned model using lm_head + last three layers on the MMLU training set (Hendrycks et al., 2021) and Dolly-15k (Conover et al., 2023). We set the maximum sequence length to 512 for MMLU and 1024 for Dolly-15k. From Table 7, we observe that among these datasets, Dolly-15k achieves the best results, followed by Alpaca-cleaned. This demonstrates that fine-tuning with different SFT datasets has a significant impact on the performance of pruned models and suggests further exploration of the most suitable datasets for fine-tuning pruned models.

The effect of different pruning rates on LLM layer pruning. We investigate the impact of pruning the LLM at various pruning rates in Figure 2. Specifically, we conduct one-shot pruning on Llama-3.1-8B-Instruct using the reverse-order and Taylor metrics and evaluate the effect on the model's performance after LoRA fine-tuning. All hyperparameter settings remain consistent with those in Section 4.1. As shown in Figure 2, we observe that as the number of pruned layers increases, the performance of the model on all datasets tends to decrease and eventually converges. However, certain datasets, especially MMLU, CMMLU, and ARC-c, are highly sensitive to layer changes and degrade faster than others. Besides, after removing about 16 layers, the model is severely damaged, so we set the maximum pruning rate in this paper to 16 layers.

Table 8: Comparison with popular community LLMs of similar size and with other layer-pruning methods.

| Baseline | # Parameters (Training Tokens) | PIQA | HellaSwag | OpenbookQA | ARC-e | ARC-c | MMLU | CMMLU | WinoGrande | Avg Acc |
| Vicuna-7B-v1.5 | 6.74B (370M) | 0.7720±0.0098 | 0.5642±0.0049 | 0.3300±0.0210 | 0.7555±0.0088 | 0.4326±0.0145 | 0.4858±0.0040 | 0.3518±0.0044 | 0.6953±0.0129 | 0.5484 |
| ChatGLM2-6B | 6.24B (1.4T) | 0.5403±0.0116 | 0.2589±0.0044 | 0.1420±0.0156 | 0.2597±0.0090 | 0.2005±0.0117 | 0.2431±0.0036 | 0.2537±0.0040 | 0.5288±0.0140 | 0.3034 |
| Baichuan2-7B | 7.51B (2.6T) | 0.7666±0.0099 | 0.5363±0.0050 | 0.3020±0.0206 | 0.7475±0.0089 | 0.4206±0.0144 | 0.5024±0.0040 | 0.5220±0.0045 | 0.6819±0.0131 | 0.5599 |
| Qwen1.5-7B | 7.72B (18T) | 0.7845±0.0096 | 0.5785±0.0049 | 0.3160±0.0208 | 0.7125±0.0093 | 0.4053±0.0143 | 0.5967±0.0039 | 0.7277±0.0039 | 0.6575±0.0133 | 0.5973 |
| LLaMA3-8B | 8.03B (15T+) | 0.7965±0.0094 | 0.6014±0.0049 | 0.3480±0.0213 | 0.8005±0.0082 | 0.4983±0.0146 | 0.6212±0.0038 | 0.4752±0.0045 | 0.7332±0.0124 | 0.6093 |
| Gemma2-7B | 8.54B (6T) | 0.8025±0.0093 | 0.6039±0.0049 | 0.3300±0.0210 | 0.8110±0.0080 | 0.5009±0.0146 | 0.6143±0.0039 | 0.4430±0.0045 | 0.7435±0.0123 | 0.6061 |
| Llama-3.1-8B-It | 8.03B (15T+) | 0.8003±0.0093 | 0.5910±0.0049 | 0.3380±0.0212 | 0.8182±0.0079 | 0.5179±0.0146 | 0.6790±0.0038 | 0.5552±0.0045 | 0.7395±0.0123 | 0.6299 |
| ShortGPT (BI) | 6.29B (12.74M) | 0.7176±0.0105 | 0.4196±0.0049 | 0.2020±0.0180 | 0.6107±0.0100 | 0.2841±0.0132 | 0.2417±0.0036 | 0.2494±0.0040 | 0.5391±0.0140 | 0.4080 |
| Shortened LLaMA (PPL) | 6.29B (12.74M) | 0.7628±0.0099 | 0.4931±0.0050 | 0.2640±0.0197 | 0.7290±0.0091 | 0.3805±0.0142 | 0.3367±0.0040 | 0.2724±0.0041 | 0.5793±0.0139 | 0.4772 |
| Shortened LLaMA (Taylor) | 6.29B (12.74M) | 0.7138±0.0105 | 0.4964±0.0050 | 0.2740±0.0200 | 0.6848±0.0095 | 0.4181±0.0144 | 0.2861±0.0038 | 0.2504±0.0040 | 0.7135±0.0127 | 0.4796 |
| Llama-3.1-6.3B-It-Alpaca | 6.29B (12.74M) | 0.7383±0.0103 | 0.5323±0.0050 | 0.3080±0.0207 | 0.7260±0.0092 | 0.4684±0.0146 | 0.6567±0.0038 | 0.5515±0.0045 | 0.6646±0.0133 | 0.5807 |
| Llama-3.1-6.3B-It-Dolly | 6.29B (14.96M) | 0.7709±0.0098 | 0.5541±0.0050 | 0.3000±0.0205 | 0.7424±0.0090 | 0.4838±0.0146 | 0.6753±0.0038 | 0.5522±0.0045 | 0.7032±0.0128 | 0.5977 |

Table 9: Statistics of Llama-3.1-6.3B-It.

| Model | # Params | # MACs | Memory | Latency |
| Llama-3.1-6.3B-It-Alpaca / Llama-3.1-6.3B-It-Dolly | 6.29B | 368.65G | 23984 MiB | 210.35s |

6 Obtaining the Best Pruned Models

In Section 4 and Section 5, we have gained valuable, non-trivial practices and insights on LLM layer pruning through systematic experiments. Herein, we use these practices and insights to obtain the Llama-3.1-6.3B-It model and compare its performance against multiple baselines: (1) the original Llama-3.1-8B-It model, (2) a set of similarly sized community models, and (3) a set of pruned models obtained by state-of-the-art LLM layer pruning methods (all prune 8 layers and fine-tune on Alpaca-cleaned).

Specifically, Llama-3.1-6.3B-It is obtained by pruning 8 layers of Llama-3.1-8B-It using the reverse-order metric. Note that, in contrast to the community models trained from scratch on trillions of tokens (except for Vicuna-7B-v1.5), Llama-3.1-6.3B-It is fine-tuned solely on Alpaca-cleaned (12.74M tokens) or Dolly-15k (14.96M tokens). For ease of distinction, we refer to the resulting models as “Llama-3.1-6.3B-It-Alpaca” and “Llama-3.1-6.3B-It-Dolly”, respectively. From Table 8, we find that both Llama-3.1-6.3B-It-Alpaca and Llama-3.1-6.3B-It-Dolly outperform ChatGLM2-6B (GLM et al., 2024), Vicuna-7B-v1.5 (Zheng et al., 2024) and Baichuan2-7B (Baichuan, 2023), and partially exceed LLaMA3-8B (AI@Meta, 2024) and Gemma2-7B (Team et al., 2024) (e.g., on MMLU), while using significantly fewer training tokens. Notably, Llama-3.1-6.3B-It-Dolly also outperforms Qwen1.5-7B (Yang et al., 2024a). Besides, we compare our models to pruned models obtained by other LLM layer pruning methods: experimental results show that ours are nearly 19% better than ShortGPT (Men et al., 2024) and more than 10% better than Shortened LLaMA (Kim et al., 2024). Table 9 presents the statistics of Llama-3.1-6.3B-It, including parameter count, MACs, memory requirements and latency. Following Ma et al. (2023a), the statistical evaluation is conducted in inference mode, where the model is fed a sentence consisting of 64 tokens. Latency is measured on the WikiText2 test set on a single NVIDIA RTX A100 GPU. We also present generation results of Llama-3.1-6.3B-It-Alpaca, Llama-3.1-6.3B-It-Dolly and Llama-3.1-8B-It in Table E.
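As a concrete illustration of this recipe, the sketch below prunes the final eight decoder blocks of Llama-3.1-8B-It in reverse order and then marks only the lm_head and the last three remaining blocks as trainable. It assumes the standard Hugging Face Llama implementation (model.model.layers, model.lm_head) and is an illustrative sketch, not the authors' released code.

```python
# Hypothetical sketch of the recipe described above (not the authors' exact code):
# reverse-order pruning drops the final 8 transformer blocks of Llama-3.1-8B-It,
# and partial-layer fine-tuning then trains only the lm_head and the last three
# remaining blocks.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct", torch_dtype=torch.bfloat16
)

# Reverse-order pruning: remove the last 8 of the 32 decoder blocks.
num_pruned = 8
blocks = model.model.layers
model.model.layers = torch.nn.ModuleList(blocks[: len(blocks) - num_pruned])
model.config.num_hidden_layers = len(model.model.layers)

# Partial-layer fine-tuning: freeze everything, then unfreeze lm_head + last 3 blocks.
for p in model.parameters():
    p.requires_grad = False
for p in model.lm_head.parameters():
    p.requires_grad = True
for block in model.model.layers[-3:]:
    for p in block.parameters():
        p.requires_grad = True

print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B parameters after pruning")
```

The resulting roughly 6.3B-parameter model would then be fine-tuned on Alpaca-cleaned or Dolly-15k with a standard SFT loop.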

7 Conclusion

In this paper, we revisit LLM layer pruning, focusing on pruning metrics, fine-tuning methods and pruning strategies. From these efforts, we distill a practical list of best practices for LLM layer pruning. Guided by these practices and insights, we prune Llama-3.1-8B-Instruct to obtain Llama-3.1-6.3B-It-Alpaca and Llama-3.1-6.3B-It-Dolly. Our pruned models require far fewer training tokens than models trained from scratch, yet still perform favorably against various popular community LLMs of similar size. We hope our work will help inform best practices for deploying LLMs in real-world applications.

Limitations and Future Work. In Section 5, we find that the choice of SFT dataset does affect the performance of pruned models. Therefore, we will explore which SFT datasets are more suitable for fine-tuning pruned models in future work. Additionally, in this paper we focus primarily on layer pruning because pruning layers in LLMs is straightforward: the input and output dimensions of each transformer block are identical. However, we plan to further investigate weight pruning (Sun et al., 2023; Frantar & Alistarh, 2023) and width pruning (Xia et al., 2023; Ma et al., 2023b) in future experiments.

8 Reproducibility Statement

The authors have made great efforts to ensure the reproducibility of the empirical results reported in this paper. First, the experimental settings, evaluation metrics, and datasets are described in detail in Section 3.2. Second, the code to reproduce the results is available at https://github.com/yaolu-zjut/Navigation-LLM-layer-pruning, and the optimal model weights can be found at https://huggingface.co/YaoLuzjut/Llama-3.1-6.3B-It-Alpaca and https://huggingface.co/YaoLuzjut/Llama-3.1-6.3B-It-Dolly.
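As a minimal usage sketch, the released checkpoints load with the standard transformers APIs; the prompt and generation settings below are illustrative assumptions rather than recommended settings.

```python
# Minimal usage sketch for the released weights (standard transformers loading;
# the prompt and generation settings are illustrative assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YaoLuzjut/Llama-3.1-6.3B-It-Dolly"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "What's great about the holiday season?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```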

9 Ethics statement

In this paper, we carefully consider ethical concerns related to our research and ensure that all methodologies and experimental designs adhere to ethical standards. Our study focuses on layer pruning to enhance the efficiency of LLMs and reduce computational resource requirements, thereby promoting sustainable AI development. Furthermore, all models and datasets used in our research are sourced from publicly available and accessible origins, ensuring no infringement on intellectual property or personal privacy.

References

  • Achiam etal. (2023)Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, FlorenciaLeoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, etal.Gpt-4 technical report.arXiv preprint arXiv:2303.08774, 2023.
  • AI@Meta (2024) AI@Meta. Llama 3 model card. 2024. URL https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md.
  • Baichuan (2023) Baichuan. Baichuan 2: Open large-scale language models. arXiv preprint arXiv:2309.10305, 2023. URL https://arxiv.org/abs/2309.10305.
  • Bisk etal. (2020)Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, etal.Piqa: Reasoning about physical commonsense in natural language.In Proceedings of the AAAI conference on artificial intelligence, volume34, pp. 7432–7439, 2020.
  • Chen & Zhao (2018)Shi Chen and QiZhao.Shallowing deep networks: Layer-wise pruning based on feature representations.IEEE transactions on pattern analysis and machine intelligence, 41(12):3048–3056, 2018.
  • Chen etal. (2024)Xiaodong Chen, Yuxuan Hu, and Jing Zhang.Compressing large language models by streamlining the unimportant layer.arXiv preprint arXiv:2403.19135, 2024.
  • Clark etal. (2018)Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord.Think you have solved question answering? try arc, the ai2 reasoning challenge.arXiv preprint arXiv:1803.05457, 2018.
  • Conover etal. (2023)Mike Conover, Matt Hayes, Ankit Mathur, Jianwei Xie, Jun Wan, Sam Shah, Ali Ghodsi, Patrick Wendell, Matei Zaharia, and Reynold Xin.Free dolly: Introducing the world’s first truly open instruction-tuned llm, 2023.URL https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm.
  • Deng etal. (2023)Xiang Deng, Vasilisa Bashlovkina, Feng Han, Simon Baumgartner, and Michael Bendersky.Llms to the moon? reddit market sentiment analysis with large language models.In Companion Proceedings of the ACM Web Conference 2023, pp. 1014–1019, 2023.
  • Dettmers etal. (2024)Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer.Qlora: Efficient finetuning of quantized llms.Advances in Neural Information Processing Systems, 36, 2024.
  • Dosovitskiy etal. (2021)Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby.An image is worth 16x16 words: Transformers for image recognition at scale, 2021.URL https://arxiv.org/abs/2010.11929.
  • Dubey etal. (2024)Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, etal.The llama 3 herd of models.arXiv preprint arXiv:2407.21783, 2024.
  • Frantar & Alistarh (2023) Elias Frantar and Dan Alistarh. Sparsegpt: Massive language models can be accurately pruned in one-shot. In International Conference on Machine Learning, pp. 10323–10337. PMLR, 2023.
  • Gao etal. (2023)Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain LeNoac’h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou.A framework for few-shot language model evaluation, 12 2023.URL https://zenodo.org/records/10256836.
  • GLM et al. (2024) Team GLM, Aohan Zeng, Bin Xu, Bowen Wang, Chenhui Zhang, Da Yin, Diego Rojas, Guanyu Feng, Hanlin Zhao, Hanyu Lai, Hao Yu, Hongning Wang, Jiadai Sun, Jiajie Zhang, Jiale Cheng, Jiayi Gui, Jie Tang, Jing Zhang, Juanzi Li, Lei Zhao, Lindong Wu, Lucen Zhong, Mingdao Liu, Minlie Huang, Peng Zhang, Qinkai Zheng, Rui Lu, Shuaiqi Duan, Shudan Zhang, Shulin Cao, Shuxun Yang, Weng Lam Tam, Wenyi Zhao, Xiao Liu, Xiao Xia, Xiaohan Zhang, Xiaotao Gu, Xin Lv, Xinghan Liu, Xinyi Liu, Xinyue Yang, Xixuan Song, Xunkai Zhang, Yifan An, Yifan Xu, Yilin Niu, Yuantao Yang, Yueyan Li, Yushi Bai, Yuxiao Dong, Zehan Qi, Zhaoyu Wang, Zhen Yang, Zhengxiao Du, Zhenyu Hou, and Zihan Wang. Chatglm: A family of large language models from glm-130b to glm-4 all tools, 2024.
  • Gu etal. (2024)Yuxian Gu, LiDong, Furu Wei, and Minlie Huang.Minillm: Knowledge distillation of large language models.In The Twelfth International Conference on Learning Representations, 2024.
  • Guenter & Sideris (2024)Valentin FrankIngmar Guenter and Athanasios Sideris.Concurrent training and layer pruning of deep neural networks.arXiv preprint arXiv:2406.04549, 2024.
  • He etal. (2015)Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.Deep residual learning for image recognition, 2015.URL https://arxiv.org/abs/1512.03385.
  • He & Xiao (2023)Yang He and Lingao Xiao.Structured pruning for deep convolutional neural networks: A survey.IEEE transactions on pattern analysis and machine intelligence, 2023.
  • Hendrycks etal. (2021)Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt.Measuring massive multitask language understanding.Proceedings of the International Conference on Learning Representations (ICLR), 2021.
  • Hinton (2015)Geoffrey Hinton.Distilling the knowledge in a neural network.arXiv preprint arXiv:1503.02531, 2015.
  • Hu etal. (2021)EdwardJ Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, LuWang, and Weizhu Chen.Lora: Low-rank adaptation of large language models.arXiv preprint arXiv:2106.09685, 2021.
  • Jaiswal etal. (2023)Ajay Jaiswal, Zhe Gan, Xianzhi Du, Bowen Zhang, Zhangyang Wang, and Yinfei Yang.Compressing llms: The truth is rarely pure and never simple.arXiv preprint arXiv:2310.01382, 2023.
  • Khan & Fang (2023)MuhammadOsama Khan and YiFang.Revisiting fine-tuning strategies for self-supervised medical imaging analysis.arXiv preprint arXiv:2307.10915, 2023.
  • Kim et al. (2024) Bo-Kyeong Kim, Geonmin Kim, Tae-Ho Kim, Thibault Castells, Shinkook Choi, Junho Shin, and Hyoung-Kyu Song. Shortened llama: A simple depth pruning for large language models. arXiv preprint arXiv:2402.02834, 2024.
  • Lee etal. (2024)Jungi Lee, Wonbeom Lee, and Jaewoong Sim.Tender: Accelerating large language models via tensor decomposition and runtime requantization.arXiv preprint arXiv:2406.12930, 2024.
  • Li etal. (2016)Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and HansPeter Graf.Pruning filters for efficient convnets.arXiv preprint arXiv:1608.08710, 2016.
  • Li etal. (2023)Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin.Cmmlu: Measuring massive multitask language understanding in chinese.arXiv preprint arXiv:2306.09212, 2023.
  • Lin etal. (2024)JiLin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, and Song Han.Awq: Activation-aware weight quantization for on-device llm compression and acceleration.Proceedings of Machine Learning and Systems, 6:87–100, 2024.
  • Liu etal. (2024a)Chengyuan Liu, Shihang Wang, Yangyang Kang, Lizhi Qing, Fubang Zhao, Changlong Sun, Kun Kuang, and Fei Wu.More than catastrophic forgetting: Integrating general capabilities for domain-specific llms.arXiv preprint arXiv:2405.17830, 2024a.
  • Liu etal. (2024b)Deyuan Liu, Zhanyue Qin, Hairu Wang, Zhao Yang, Zecheng Wang, Fangying Rong, Qingbin Liu, Yanchao Hao, XiChen, Cunhang Fan, etal.Pruning via merging: Compressing llms via manifold alignment based layer merging.arXiv preprint arXiv:2406.16330, 2024b.
  • Liu etal. (2024c)Songwei Liu, Chao Zeng, Lianqiang Li, Chenqian Yan, Lean Fu, Xing Mei, and Fangmin Chen.Foldgpt: Simple and effective large language model compression scheme.arXiv preprint arXiv:2407.00928, 2024c.
  • Liu etal. (2021)ZeLiu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo.Swin transformer: Hierarchical vision transformer using shifted windows, 2021.URL https://arxiv.org/abs/2103.14030.
  • Liu etal. (2023)Zechun Liu, Barlas Oguz, Changsheng Zhao, Ernie Chang, Pierre Stock, Yashar Mehdad, Yangyang Shi, Raghuraman Krishnamoorthi, and Vikas Chandra.Llm-qat: Data-free quantization aware training for large language models.arXiv preprint arXiv:2305.17888, 2023.
  • Lu etal. (2022)Yao Lu, Wen Yang, Yunzhe Zhang, Zuohui Chen, Jinyin Chen, QiXuan, Zhen Wang, and Xiaoniu Yang.Understanding the dynamics of dnns using graph modularity.In European Conference on Computer Vision, pp. 225–242. Springer, 2022.
  • Ma et al. (2023a) Xinyin Ma, Gongfan Fang, and Xinchao Wang. Llm-pruner: On the structural pruning of large language models. Advances in Neural Information Processing Systems, 36:21702–21720, 2023a.
  • Ma et al. (2023b) Xinyin Ma, Gongfan Fang, and Xinchao Wang. Llm-pruner: On the structural pruning of large language models. In Advances in Neural Information Processing Systems, 2023b.
  • Mao etal. (2024)Yuren Mao, Yuhang Ge, Yijiang Fan, Wenyi Xu, YuMi, Zhonghao Hu, and Yunjun Gao.A survey on lora of large language models.arXiv preprint arXiv:2407.11046, 2024.
  • Marcus etal. (1993)Mitch Marcus, Beatrice Santorini, and MaryAnn Marcinkiewicz.Building a large annotated corpus of english: The penn treebank.Computational linguistics, 19(2):313–330, 1993.
  • Men et al. (2024) Xin Men, Mingyu Xu, Qingyu Zhang, Bingning Wang, Hongyu Lin, Yaojie Lu, Xianpei Han, and Weipeng Chen. Shortgpt: Layers in large language models are more redundant than you expect. arXiv preprint arXiv:2403.03853, 2024.
  • Meng etal. (2024)Fanxu Meng, Zhaohui Wang, and Muhan Zhang.Pissa: Principal singular values and singular vectors adaptation of large language models, 2024.URL https://arxiv.org/abs/2404.02948.
  • Merity etal. (2016)Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher.Pointer sentinel mixture models.arXiv preprint arXiv:1609.07843, 2016.
  • Mihaylov etal. (2018)Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal.Can a suit of armor conduct electricity? a new dataset for open book question answering.arXiv preprint arXiv:1809.02789, 2018.
  • Muralidharan etal. (2024)Saurav Muralidharan, SharathTuruvekere Sreenivas, Raviraj Joshi, Marcin Chochowski, Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro, Jan Kautz, and Pavlo Molchanov.Compact language models via pruning and knowledge distillation.arXiv preprint arXiv:2407.14679, 2024.
  • Ngesthi etal. (2021)StephanyOctaviani Ngesthi, Iwan Setyawan, and IvannaK Timotius.The effect of partial fine tuning on alexnet for skin lesions classification.In 2021 13th International Conference on Information Technology and Electrical Engineering (ICITEE), pp. 147–152. IEEE, 2021.
  • Peng & Wang (2020)Peng Peng and Jiugen Wang.How to fine-tune deep neural networks in few-shot learning?arXiv preprint arXiv:2012.00204, 2020.
  • Press & Wolf (2016)Ofir Press and Lior Wolf.Using the output embedding to improve language models.arXiv preprint arXiv:1608.05859, 2016.
  • Saha etal. (2023)Rajarshi Saha, Varun Srivastava, and Mert Pilanci.Matrix compression via randomized low rank and low precision factorization.Advances in Neural Information Processing Systems, 36, 2023.
  • Sakaguchi etal. (2021)Keisuke Sakaguchi, RonanLe Bras, Chandra Bhagavatula, and Yejin Choi.Winogrande: An adversarial winograd schema challenge at scale.Communications of the ACM, 64(9):99–106, 2021.
  • Shah etal. (2024)Jay Shah, Ganesh Bikshandi, Ying Zhang, Vijay Thakkar, Pradeep Ramani, and Tri Dao.Flashattention-3: Fast and accurate attention with asynchrony and low-precision.arXiv preprint arXiv:2407.08608, 2024.
  • Shen etal. (2021)Zhiqiang Shen, Zechun Liu, Jie Qin, Marios Savvides, and Kwang-Ting Cheng.Partial is better than all: Revisiting fine-tuning strategy for few-shot learning.In Proceedings of the AAAI conference on artificial intelligence, volume35, pp. 9594–9602, 2021.
  • Siddiqui etal. (2024)ShoaibAhmed Siddiqui, Xin Dong, Greg Heinrich, Thomas Breuel, Jan Kautz, David Krueger, and Pavlo Molchanov.A deeper look at depth pruning of llms.arXiv preprint arXiv:2407.16286, 2024.
  • Simonyan & Zisserman (2015)Karen Simonyan and Andrew Zisserman.Very deep convolutional networks for large-scale image recognition, 2015.URL https://arxiv.org/abs/1409.1556.
  • Sun et al. (2023) Mingjie Sun, Zhuang Liu, Anna Bair, and J. Zico Kolter. A simple and effective pruning approach for large language models. arXiv preprint arXiv:2306.11695, 2023.
  • Szegedy etal. (2014)Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich.Going deeper with convolutions, 2014.URL https://arxiv.org/abs/1409.4842.
  • Tan & Motani (2020)Chong MinJohn Tan and Mehul Motani.Dropnet: Reducing neural network complexity via iterative pruning.In International Conference on Machine Learning, pp. 9356–9366. PMLR, 2020.
  • Tang etal. (2023)Hui Tang, Yao Lu, and QiXuan.Sr-init: An interpretable layer pruning method.In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1–5. IEEE, 2023.
  • Taori etal. (2023)Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and TatsunoriB Hashimoto.Stanford alpaca: An instruction-following llama model, 2023.
  • Team (2024)Gemma Team.Gemma.2024.doi: 10.34740/KAGGLE/M/3301.URL https://www.kaggle.com/m/3301.
  • Team et al. (2024) Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024.
  • Tishby etal. (2000)Naftali Tishby, FernandoC Pereira, and William Bialek.The information bottleneck method.arXiv preprint physics/0004057, 2000.
  • Touvron etal. (2023)Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, etal.Llama: Open and efficient foundation language models.arXiv preprint arXiv:2302.13971, 2023.
  • Wang etal. (2023)Longyue Wang, Chenyang Lyu, Tianbo Ji, Zhirui Zhang, Dian Yu, Shuming Shi, and Zhaopeng Tu.Document-level machine translation with large language models.arXiv preprint arXiv:2304.02210, 2023.
  • Wang etal. (2019)Wenxiao Wang, Shuai Zhao, Minghao Chen, Jinming Hu, Deng Cai, and Haifeng Liu.Dbp: Discrimination based block-level pruning for deep model acceleration.arXiv preprint arXiv:1912.10178, 2019.
  • Williams & Aletras (2024)Miles Williams and Nikolaos Aletras.On the impact of calibration data in post-training quantization and pruning.In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 10100–10118, 2024.
  • Xia et al. (2023) Mengzhou Xia, Tianyu Gao, Zhiyuan Zeng, and Danqi Chen. Sheared llama: Accelerating language model pre-training via structured pruning. arXiv preprint arXiv:2310.06694, 2023.
  • Xu etal. (2024)Xiaohan Xu, Ming Li, Chongyang Tao, Tao Shen, Reynold Cheng, Jinyang Li, Can Xu, Dacheng Tao, and Tianyi Zhou.A survey on knowledge distillation of large language models.arXiv preprint arXiv:2402.13116, 2024.
  • Yang et al. (2024a) An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang, Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zhihao Fan. Qwen2 technical report. arXiv preprint arXiv:2407.10671, 2024a.
  • Yang etal. (2024b)Yifei Yang, Zouying Cao, and Hai Zhao.Laco: Large language model pruning via layer collapse.arXiv preprint arXiv:2402.11187, 2024b.
  • Zellers etal. (2019)Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi.Hellaswag: Can a machine really finish your sentence?arXiv preprint arXiv:1905.07830, 2019.
  • Zhai etal. (2024)Yuexiang Zhai, Shengbang Tong, Xiao Li, MuCai, Qing Qu, YongJae Lee, and YiMa.Investigating the catastrophic forgetting in multimodal large language model fine-tuning.In Conference on Parsimony and Learning, pp. 202–227. PMLR, 2024.
  • Zhang etal. (2023a)Biao Zhang, Barry Haddow, and Alexandra Birch.Prompting large language model for machine translation: A case study.In International Conference on Machine Learning, pp. 41092–41110. PMLR, 2023a.
  • Zhang etal. (2023b)Boyu Zhang, Hongyang Yang, Tianyu Zhou, Muhammad AliBabar, and Xiao-Yang Liu.Enhancing financial sentiment analysis via retrieval augmented large language models.In Proceedings of the fourth ACM international conference on AI in finance, pp. 349–356, 2023b.
  • Zhao etal. (2024a)Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, and Yuandong Tian.Galore: Memory-efficient llm training by gradient low-rank projection.arXiv preprint arXiv:2403.03507, 2024a.
  • Zhao etal. (2024b)Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, and Yuandong Tian.Galore: Memory-efficient llm training by gradient low-rank projection, 2024b.URL https://arxiv.org/abs/2403.03507.
  • Zheng et al. (2024) Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36, 2024.
  • Zhong etal. (2024)Longguang Zhong, Fanqi Wan, Ruijun Chen, Xiaojun Quan, and Liangzhi Li.Blockpruner: Fine-grained pruning for large language models.arXiv preprint arXiv:2406.10594, 2024.
  • Zhu etal. (2015)Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler.Aligning books and movies: Towards story-like visual explanations by watching movies and reading books.In Proceedings of the IEEE international conference on computer vision, pp. 19–27, 2015.

Appendix A Supplementary Material of Reassessing Layer Pruning in LLMs: New Insights and Methods

Model | Metric | PIQA | HellaSwag | OpenbookQA | ARC-e | ARC-c | MMLU | CMMLU | WinoGrande | Avg Acc
Vicuna-7B-v1.5 | Dense | 0.7720±0.0098 | 0.5642±0.0049 | 0.3300±0.0210 | 0.7555±0.0088 | 0.4326±0.0145 | 0.4858±0.0040 | 0.3518±0.0044 | 0.6953±0.0129 | 0.5484
Reverse-order | 0.5642±0.0116 | 0.2919±0.0045 | 0.1700±0.0168 | 0.3258±0.0096 | 0.2645±0.0129 | 0.4372±0.0041 | 0.3069±0.0043 | 0.5872±0.0138 | 0.3685
Random | 0.5773±0.0115 | 0.3083±0.0046 | 0.1560±0.0162 | 0.3775±0.0099 | 0.2176±0.0121 | 0.2650±0.0037 | 0.2542±0.0041 | 0.5067±0.0141 | 0.3328
PPL | 0.6572±0.0111 | 0.3524±0.0048 | 0.1940±0.0177 | 0.4971±0.0103 | 0.2406±0.0125 | 0.2361±0.0036 | 0.2510±0.0040 | 0.5328±0.0140 | 0.3702
Magnitude-l1 | 0.5239±0.0117 | 0.2585±0.0044 | 0.1400±0.0155 | 0.2635±0.0090 | 0.2184±0.0121 | 0.2295±0.0035 | 0.2527±0.0040 | 0.4893±0.0140 | 0.2970
Magnitude-l2 | 0.5245±0.0117 | 0.2590±0.0044 | 0.1300±0.0151 | 0.2656±0.0091 | 0.2210±0.0121 | 0.2293±0.0035 | 0.2512±0.0040 | 0.4791±0.0140 | 0.2950
BI | 0.5250±0.0117 | 0.2598±0.0044 | 0.1440±0.0157 | 0.2740±0.0092 | 0.1928±0.0115 | 0.2296±0.0035 | 0.2476±0.0040 | 0.4988±0.0141 | 0.2965
Taylor | 0.5283±0.0116 | 0.2585±0.0044 | 0.1300±0.0151 | 0.2572±0.0090 | 0.2167±0.0120 | 0.2614±0.0037 | 0.2513±0.0040 | 0.4901±0.0140 | 0.2992
Qwen1.5-7B | Dense | 0.7845±0.0096 | 0.5785±0.0049 | 0.3160±0.0208 | 0.7125±0.0093 | 0.4053±0.0143 | 0.5967±0.0039 | 0.7277±0.0039 | 0.6575±0.0133 | 0.5973
Reverse-order | 0.5783±0.0115 | 0.3100±0.0046 | 0.1640±0.0166 | 0.3047±0.0094 | 0.2363±0.0124 | 0.2507±0.0037 | 0.2564±0.0041 | 0.5391±0.0140 | 0.3299
Random | 0.6409±0.0112 | 0.3268±0.0047 | 0.1940±0.0177 | 0.4617±0.0102 | 0.2261±0.0122 | 0.2321±0.0036 | 0.2529±0.0040 | 0.5083±0.0141 | 0.3553
PPL | 0.6529±0.0111 | 0.3233±0.0047 | 0.1700±0.0168 | 0.4360±0.0102 | 0.2099±0.0119 | 0.2297±0.0035 | 0.2541±0.0041 | 0.5225±0.0140 | 0.3498
Magnitude-l1 | 0.5452±0.0116 | 0.2690±0.0044 | 0.1280±0.0150 | 0.2837±0.0092 | 0.1962±0.0116 | 0.2548±0.0037 | 0.2479±0.0040 | 0.4862±0.0140 | 0.3013
Magnitude-l2 | 0.5348±0.0116 | 0.2651±0.0044 | 0.1520±0.0161 | 0.2858±0.0093 | 0.1843±0.0113 | 0.2659±0.0037 | 0.2519±0.0040 | 0.5059±0.0141 | 0.3057
BI | 0.6001±0.0114 | 0.2905±0.0045 | 0.1880±0.0175 | 0.4099±0.0101 | 0.2090±0.0119 | 0.2420±0.0036 | 0.2472±0.0040 | 0.4901±0.0140 | 0.3346
Taylor | 0.5223±0.0117 | 0.2540±0.0043 | 0.1460±0.0158 | 0.2403±0.0088 | 0.2176±0.0121 | 0.2393±0.0036 | 0.2478±0.0040 | 0.4854±0.0140 | 0.2941
Gemma2-2B-It | Dense | 0.7867±0.0096 | 0.5367±0.0050 | 0.3560±0.0214 | 0.8085±0.0081 | 0.5111±0.0146 | 0.5687±0.0039 | 0.4499±0.0045 | 0.6961±0.0129 | 0.5892
Reverse-order | 0.6050±0.0114 | 0.3049±0.0046 | 0.1900±0.0176 | 0.3817±0.0100 | 0.2491±0.0126 | 0.2327±0.0036 | 0.2527±0.0040 | 0.5580±0.0140 | 0.3468
Random | 0.6741±0.0109 | 0.3441±0.0047 | 0.2180±0.0185 | 0.5446±0.0102 | 0.2696±0.0130 | 0.2307±0.0036 | 0.2540±0.0041 | 0.5335±0.0140 | 0.3836
PPL | 0.6621±0.0110 | 0.3505±0.0048 | 0.2380±0.0191 | 0.5585±0.0102 | 0.2526±0.0127 | 0.2328±0.0036 | 0.2526±0.0040 | 0.5280±0.0140 | 0.3844
Magnitude-l1 | 0.6649±0.0110 | 0.3358±0.0047 | 0.1960±0.0178 | 0.5564±0.0102 | 0.2355±0.0124 | 0.2307±0.0035 | 0.2516±0.0040 | 0.5264±0.0140 | 0.3747
Magnitude-l2 | 0.6159±0.0113 | 0.2956±0.0046 | 0.1720±0.0169 | 0.4301±0.0102 | 0.2073±0.0118 | 0.2319±0.0036 | 0.2501±0.0040 | 0.5178±0.0140 | 0.3401
BI | 0.6376±0.0112 | 0.3310±0.0047 | 0.2140±0.0184 | 0.4891±0.0103 | 0.2406±0.0125 | 0.2397±0.0036 | 0.2532±0.0040 | 0.5667±0.0139 | 0.3715
Taylor | 0.6088±0.0114 | 0.3142±0.0046 | 0.1880±0.0175 | 0.4049±0.0101 | 0.2739±0.0130 | 0.2297±0.0035 | 0.2508±0.0040 | 0.5817±0.0139 | 0.3565
Llama-3.1-8B-It | Dense | 0.8003±0.0093 | 0.5910±0.0049 | 0.3380±0.0212 | 0.8182±0.0079 | 0.5179±0.0146 | 0.6790±0.0038 | 0.5552±0.0045 | 0.7395±0.0123 | 0.6299
Reverse-order | 0.6376±0.0112 | 0.3163±0.0046 | 0.1960±0.0178 | 0.4019±0.0101 | 0.3106±0.0135 | 0.2502±0.0036 | 0.2482±0.0040 | 0.6101±0.0137 | 0.3714
Random | 0.5588±0.0116 | 0.2730±0.0044 | 0.1280±0.0150 | 0.2826±0.0093 | 0.1903±0.0115 | 0.2406±0.0036 | 0.2555±0.0041 | 0.5020±0.0141 | 0.3039
PPL | 0.6643±0.0110 | 0.3548±0.0048 | 0.1960±0.0178 | 0.4718±0.0102 | 0.2483±0.0126 | 0.2394±0.0036 | 0.2446±0.0040 | 0.5454±0.0140 | 0.3706
Magnitude-l1 | 0.5316±0.0116 | 0.2576±0.0044 | 0.1360±0.0153 | 0.2572±0.0090 | 0.1980±0.0116 | 0.2344±0.0036 | 0.2526±0.0040 | 0.4933±0.0141 | 0.2951
Magnitude-l2 | 0.5316±0.0116 | 0.2576±0.0044 | 0.1360±0.0153 | 0.2572±0.0090 | 0.1980±0.0116 | 0.2344±0.0036 | 0.2526±0.0040 | 0.4933±0.0141 | 0.2951
BI | 0.5773±0.0115 | 0.2878±0.0045 | 0.1520±0.0161 | 0.3674±0.0099 | 0.1706±0.0110 | 0.2342±0.0036 | 0.2466±0.0040 | 0.5036±0.0141 | 0.3174
Taylor | 0.6088±0.0114 | 0.3288±0.0047 | 0.1660±0.0167 | 0.4318±0.0102 | 0.2790±0.0131 | 0.2310±0.0036 | 0.2534±0.0041 | 0.6093±0.0137 | 0.3635


Model | Method | Layer | PIQA | HellaSwag | OpenbookQA | ARC-e | ARC-c | MMLU | CMMLU | WinoGrande | Avg Acc
Llama-3.1-8B-It | LoRA | - | 0.7138±0.0105 | 0.4964±0.0050 | 0.2740±0.0200 | 0.6848±0.0095 | 0.4181±0.0144 | 0.2861±0.0038 | 0.2504±0.0040 | 0.7135±0.0127 | 0.4796
QLoRA | - | 0.6496±0.0111 | 0.3260±0.0047 | 0.1820±0.0173 | 0.4520±0.0102 | 0.2969±0.0134 | 0.3425±0.0040 | 0.2627±0.0041 | 0.5793±0.0139 | 0.3864
Partial-layer | lm_head only | 0.6752±0.0109 | 0.3685±0.0048 | 0.2100±0.0182 | 0.5349±0.0102 | 0.3276±0.0137 | 0.4315±0.0041 | 0.3373±0.0044 | 0.6795±0.0109 | 0.4456
lm_head+last layer | 0.7029±0.0107 | 0.4676±0.0050 | 0.2140±0.0184 | 0.6393±0.0099 | 0.3763±0.0142 | 0.5682±0.0041 | 0.4483±0.0046 | 0.6748±0.0132 | 0.5114
lm_head+last two layers | 0.7252±0.0104 | 0.5173±0.0050 | 0.2800±0.0201 | 0.7104±0.0093 | 0.4232±0.0144 | 0.6058±0.0040 | 0.4659±0.0046 | 0.7040±0.0128 | 0.5540
lm_head+last three layers | 0.7345±0.0103 | 0.5290±0.0050 | 0.3020±0.0206 | 0.7399±0.0090 | 0.4360±0.0145 | 0.6277±0.0039 | 0.4763±0.0046 | 0.7151±0.0127 | 0.5701

Fine-tuning Method | Model | Method | Iteration steps | PIQA | HellaSwag | OpenbookQA | ARC-e | ARC-c | MMLU | CMMLU | WinoGrande | Avg Acc
LoRA | Llama-3.1-8B-It | Reverse-order | one-shot | 0.6376±0.0112 | 0.3163±0.0046 | 0.1960±0.0178 | 0.4019±0.0101 | 0.3106±0.0135 | 0.2502±0.0036 | 0.2482±0.0040 | 0.6101±0.0137 | 0.3714
1:8:16 | 0.6376±0.0112 | 0.3160±0.0046 | 0.1980±0.0178 | 0.3990±0.0100 | 0.3106±0.0135 | 0.2526±0.0037 | 0.2504±0.0040 | 0.6046±0.0137 | 0.3711
1:1:16 | 0.6333±0.0112 | 0.3259±0.0047 | 0.2020±0.0180 | 0.4146±0.0101 | 0.2961±0.0133 | 0.2426±0.0036 | 0.2690±0.0041 | 0.5912±0.0138 | 0.3718
Taylor | one-shot | 0.6088±0.0114 | 0.3288±0.0047 | 0.1660±0.0167 | 0.4318±0.0102 | 0.2790±0.0131 | 0.2310±0.0036 | 0.2534±0.0041 | 0.6093±0.0137 | 0.3635
1:8:16 | 0.6230±0.0113 | 0.3516±0.0048 | 0.1480±0.0159 | 0.4604±0.0102 | 0.2355±0.0124 | 0.2541±0.0037 | 0.2546±0.0041 | 0.5312±0.0140 | 0.3573
1:1:16 | 0.5430±0.0116 | 0.2692±0.0044 | 0.1580±0.0163 | 0.2921±0.0093 | 0.1937±0.0115 | 0.2334±0.0036 | 0.2481±0.0040 | 0.5091±0.0141 | 0.3058
Gemma2-2B-It | Reverse-order | one-shot | 0.6050±0.0114 | 0.3049±0.0046 | 0.1900±0.0176 | 0.3817±0.0100 | 0.2491±0.0126 | 0.2327±0.0036 | 0.2527±0.0040 | 0.5580±0.0140 | 0.3468
1:6:12 | 0.6007±0.0114 | 0.3076±0.0046 | 0.1900±0.0176 | 0.3994±0.0101 | 0.2483±0.0126 | 0.2429±0.0036 | 0.2495±0.0040 | 0.5478±0.0140 | 0.3483
1:1:12 | 0.6023±0.0114 | 0.3173±0.0046 | 0.1720±0.0169 | 0.3897±0.0100 | 0.2449±0.0126 | 0.2531±0.0037 | 0.2481±0.0040 | 0.5387±0.0140 | 0.3458
Taylor | one-shot | 0.6088±0.0114 | 0.3142±0.0046 | 0.1880±0.0175 | 0.4049±0.0101 | 0.2739±0.0130 | 0.2297±0.0035 | 0.2508±0.0040 | 0.5817±0.0139 | 0.3565
1:6:12 | 0.5909±0.0115 | 0.2806±0.0045 | 0.1380±0.0154 | 0.3834±0.0100 | 0.2150±0.0120 | 0.2295±0.0035 | 0.2523±0.0040 | 0.5059±0.0141 | 0.3245
1:1:12 | 0.6502±0.0111 | 0.3456±0.0047 | 0.1860±0.0174 | 0.4790±0.0103 | 0.2483±0.0126 | 0.2314±0.0036 | 0.2578±0.0041 | 0.5525±0.0140 | 0.3689
Partial-layer | Llama-3.1-8B-It | Reverse-order | one-shot | 0.6578±0.0111 | 0.4137±0.0049 | 0.2200±0.0185 | 0.5707±0.0102 | 0.3294±0.0137 | 0.3854±0.0040 | 0.3190±0.0043 | 0.6504±0.0134 | 0.4433
1:1:16 | 0.6774±0.0109 | 0.4164±0.0049 | 0.2200±0.0185 | 0.5863±0.0101 | 0.3362±0.0138 | 0.4170±0.0041 | 0.3460±0.0044 | 0.6385±0.0135 | 0.4547
Taylor | one-shot | 0.6649±0.0110 | 0.3985±0.0049 | 0.2100±0.0182 | 0.5581±0.0102 | 0.3251±0.0137 | 0.3054±0.0039 | 0.2876±0.0042 | 0.6212±0.0136 | 0.4214
1:1:16 | 0.5876±0.0115 | 0.2813±0.0045 | 0.1300±0.0151 | 0.3986±0.0100 | 0.1980±0.0116 | 0.2508±0.0037 | 0.2502±0.0040 | 0.4957±0.0141 | 0.3240


Model | Metric | Calibration Samples | Removed Layers | PIQA | HellaSwag | OpenbookQA | ARC-e | ARC-c | MMLU | CMMLU | WinoGrande | Avg Acc
Llama-3.1-8B-Instruct | BI | 1 | 2,3,5,6,7,8,11,12 | 0.7029±0.0107 | 0.4167±0.0049 | 0.2060±0.0181 | 0.6136±0.0100 | 0.2739±0.0130 | 0.2362±0.0036 | 0.2512±0.0040 | 0.5225±0.0140 | 0.40
5 | 3,4,5,8,9,10,13,19 | 0.7236±0.0104 | 0.4400±0.0050 | 0.2420±0.0192 | 0.6730±0.0096 | 0.3311±0.0138 | 0.2524±0.0037 | 0.2553±0.0041 | 0.5485±0.0140 | 0.43
10 | 2,3,4,5,6,7,8,9 | 0.7176±0.0105 | 0.4196±0.0049 | 0.2020±0.0180 | 0.6107±0.0100 | 0.2841±0.0132 | 0.2417±0.0036 | 0.2494±0.0040 | 0.5391±0.0140 | 0.41
30 | 2,3,4,10,11,12,13,14 | 0.7209±0.0105 | 0.4328±0.0049 | 0.2040±0.0180 | 0.6414±0.0098 | 0.3259±0.0137 | 0.2500±0.0036 | 0.2576±0.0041 | 0.5517±0.0140 | 0.42
50 | 2,3,4,5,6,7,10,13 | 0.7100±0.0106 | 0.4091±0.0049 | 0.2180±0.0185 | 0.6221±0.0099 | 0.2875±0.0132 | 0.2492±0.0036 | 0.2529±0.0040 | 0.5462±0.0140 | 0.41
Taylor | 1 | 27, 26, 25, 24, 28, 23, 29, 22 | 0.6088±0.0114 | 0.3288±0.0047 | 0.1660±0.0167 | 0.4318±0.0102 | 0.2790±0.0131 | 0.2310±0.0036 | 0.2534±0.0041 | 0.6093±0.0137 | 0.36
5 | 24, 26, 25, 28, 27, 23, 29, 22 | 0.6088±0.0114 | 0.3288±0.0047 | 0.1660±0.0167 | 0.4318±0.0102 | 0.2790±0.0131 | 0.2310±0.0036 | 0.2534±0.0041 | 0.6093±0.0137 | 0.36
10 | 24, 26, 25, 28, 27, 23, 29, 22 | 0.6088±0.0114 | 0.3288±0.0047 | 0.1660±0.0167 | 0.4318±0.0102 | 0.2790±0.0131 | 0.2310±0.0036 | 0.2534±0.0041 | 0.6093±0.0137 | 0.36
30 | 24, 23, 25, 26, 22, 27, 28, 20 | 0.7280±0.0104 | 0.4985±0.0050 | 0.2460±0.0193 | 0.6961±0.0094 | 0.4130±0.0144 | 0.6611±0.0038 | 0.4915±0.0046 | 0.7032±0.0128 | 0.55
50 | 24, 23, 25, 26, 22, 27, 28, 20 | 0.7280±0.0104 | 0.4985±0.0050 | 0.2460±0.0193 | 0.6961±0.0094 | 0.4130±0.0144 | 0.6611±0.0038 | 0.4915±0.0046 | 0.7032±0.0128 | 0.55
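For reference, the BI rows above rank blocks with a Block Influence score; a minimal sketch of one common formulation (1 minus the mean cosine similarity between a block's input and output hidden states, following ShortGPT) is shown below. How calibration samples are collected and aggregated is left as an assumption.

```python
# Hedged sketch of a Block Influence (BI)-style score, as used in the BI rows above.
# A block that barely changes its input gets a low score and is pruned first.
import torch
import torch.nn.functional as F

def block_influence(hidden_in: torch.Tensor, hidden_out: torch.Tensor) -> float:
    """hidden_in, hidden_out: [num_tokens, hidden_dim] activations for one block."""
    cos = F.cosine_similarity(hidden_in, hidden_out, dim=-1)
    return float(1.0 - cos.mean())
```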

Model | Generated Sentences
Llama-3.1-6.3B-It-Alpaca | To be or not to be, that is the question that has been asked thousands of times throughout history. but have you ever wondered what it would be like to not exist? what would it feel like to not have consciousness or awareness of any kind? would it be like being asleep or a combed out hair? no, it would be something else entirely. it would be like not being here at all, like being a tree or a rock, a piece of furniture or a piece of trash. nothingness is not something we can really comprehend because we can’t experience it. we can’t even imagine it because we can’t imagine.
It fills me with such pride and joy knowing that I have been able to help shaped the lives of countless students over the years. I am proud of the numerous accomplishments and achievements of my students, and I am proud of the role I have played in shaping them into the people they are today. As a teacher, I believe it is my duty to provide an optimal learning environment that fosters growth and development of my students. I believe in making learning fun and engaging, and I am dedicated to pushing the bounds of innovation in the classroom. I am dedicated to ensuring that my students leave my classroom with the knowledge and skills they need …
What’s great about the holiday season? Christmas lights, Christmas shows, Christmas presents, Christmas holiday traditions. But what’s not great about the holiday season? crowds, stress, Santa Claus, Christmas holiday stress, Christmas holiday stressors. It’s important to remember to do things that help you relax during the holiday season, such as taking time for yourself, engaging in relaxation techniques, practicing mindfulness, engaging in physical activity, practicing gratitude, practicing self-care, engaging in activities that bring you joy, and spending time with loved ones. These are all important components of stressors prevention during the holiday season. Here are some tips to help you. …
Llama-3.1-6.3B-It-Dolly | To be or not to be, that is the question - Hamlet’s famous soliloquy - could easily apply to my current state of mind. I’ve been struggling to decision whether to go back to school or not. On one hand, I’m really interesting in learning more about the subject matter I’ve been passionate about for years, and I think pursuing a degree would be a great way to formalize my knowledge. On the Hamlet scale, this option would be ”to be”: me myself, fully engaged in learning and growing as a person. On the other hand, I’ve been married for almost 20 years …
It fills me with such pride and joy to see my little human growing up. yesterday was a special day in our household because we celebrated one of the major milestone in life. my little one cut his first teeth. Yes you heard it right he is now a big boy and his first teeth are now visible. It has been a long journey since we started noticed his teething symptoms and we have been through lot of sleepless nights and teething tantrums but it all pay off now. I can see the relief in his little face when he finally gets to experience the world without the constant pain in his little bottom teeth. As a …
What’s great about the holiday season is spending time with loved ones and making new traditions. But let’s not forget about the stress and pressure that comes with it. We’re here to help you beat stress during the holiday season with some tips from our #Ask a therapist series with licensed therapist, Dr. Dezympfer. ### tip #1: Prioritize Self-Cark. With all the commotion during the holiday season, it can be easy to forget to take care of yourself. Make sure to schedule time for self-Cark. whether that is taking a yoga class, taking a few minutes each day to meditate, …
Llama-3.1-8B-It | To be or not to be, that is the question. How does one decide whether a being is alive or not? The issue of what constitutes life has puzzled philosophers and scientists alike for centuries. A being is alive if it meets the criteria of life, which are: Metabolism: The ability to carry out basic functions such as respiration, photosynthesis, and nutrient uptake.Homeostasis: The ability to maintain a stable internal environment, despite changes in the external environment.Growth and Development: The ability to increase in size and complexity over time.Reproduction: The ability to produce offspring, either sexually or asex
It fills me with such pride and joy to see my friends’ passions ignite into flame and to watch them soar to great heights! It was an absolutely honour to have been able to share this experience with you and to bear witness to your hard work and dedication! It was an absolute pleasure to get to know you and share our experiences, but I know that this is just the beginning of an amazing journey! I feel so grateful to have you in my life, and I am constantly in awe of your love, kindness, and generosity! It’s been such an incredible journey watching you discover and grow, and I feel so lucky to have…
What’s great about the holiday season? In addition to being able to spend time with loved ones and get some much-needed R&R, many people enjoy the idea of giving back to others. Whether it’s volunteering, donating to charity, or participating in a Secret Santa gift exchange, the holiday season can be a time of kindness and generosity. But have you ever thought about how you might be able to combine your love of cooking and giving back this holiday season? If so, you might be interested in hosting a charity-themed potluck dinner or bake sale. Here are a few ideas to get you started: Host a potluck dinner to…
