Pruning Large Language Models with Semi-Structural Adaptive Sparse Training
Paper: [arXiv:2407.20584](https://arxiv.org/abs/2407.20584)
This repo contains a 2:4 sparse version of the LLaMA2-7B model, trained with methods from the AAAI 2025 paper *Pruning Large Language Models with Semi-Structural Adaptive Sparse Training*.
It has the same structure as LLaMA2-7B, but the weights of the linear layers conform to a 2:4 sparse pattern (at most 2 nonzero values in every group of 4 consecutive weights).
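
A minimal sketch of how one might check the 2:4 pattern on the linear layers, assuming the standard `transformers` loading API; the repo id `"your-org/sparse-llama2-7b"` is a placeholder, not the actual repository name:

```python
import torch
from transformers import AutoModelForCausalLM

# Placeholder repo id; substitute the actual model repository name.
model = AutoModelForCausalLM.from_pretrained(
    "your-org/sparse-llama2-7b", torch_dtype=torch.float16
)

def is_2_4_sparse(weight: torch.Tensor) -> bool:
    """True if every group of 4 consecutive weights has at most 2 nonzeros."""
    blocks = weight.reshape(-1, 4)  # LLaMA2-7B linear dimensions are divisible by 4
    return bool(((blocks != 0).sum(dim=1) <= 2).all())

# Report whether each linear layer's weight matrix follows the 2:4 pattern.
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear):
        print(f"{name}: 2:4 pattern = {is_2_4_sparse(module.weight.data)}")
```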
Base model: meta-llama/Llama-2-7b-hf
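
A basic usage sketch, assuming the model loads through the standard `transformers` causal-LM interface; again, `"your-org/sparse-llama2-7b"` is a placeholder repo id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/sparse-llama2-7b"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

# Generate a short continuation to confirm the checkpoint works end to end.
inputs = tokenizer("Semi-structured 2:4 sparsity allows", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```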