A Comprehensive Study on Post-Training Quantization for Large Language Models

Abstract

Post-training quantization (PTQ) has recently been shown to be a promising method for reducing the memory consumption and/or compute cost of large language models (LLMs). However, a comprehensive study of the effects of different quantization schemes, model families, PTQ methods, quantization bit precisions, etc., is still missing. In this work, we provide an extensive study of these components across tens of thousands of zero-shot experiments. Our results show that (1) fine-grained quantization and PTQ methods (rather than naive round-to-nearest quantization) are necessary to achieve good accuracy, and (2) higher bit precision (e.g., 5 bits) with coarse-grained quantization is more powerful than lower bit precision (e.g., 4 bits) with very fine-grained quantization (whose effective bit precision is similar to 5 bits). We also present recommendations on how to utilize quantization for LLMs of different sizes, and suggest future opportunities and system work that remain unresolved.
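To make the terminology concrete, below is a minimal NumPy sketch (not the paper's implementation) of symmetric round-to-nearest (RTN) quantization with coarse-grained (per-row) versus fine-grained (per-group) scaling, together with the "effective bit precision" accounting the abstract alludes to: if each group stores one FP16 scale, the overhead is scale_bits / group_size bits per weight, so 4-bit weights with group size 16 cost roughly 5 effective bits. The function names (rtn_quantize, effective_bits) and the one-FP16-scale-per-group assumption are illustrative, not taken from the paper.

```python
import numpy as np

def rtn_quantize(w, bits=4, group_size=None):
    """Symmetric round-to-nearest (RTN) weight quantization.

    group_size=None  -> one scale per output row (coarse-grained);
    small group_size -> one scale per contiguous group (fine-grained).
    Returns dequantized weights so the quantization error can be inspected.
    """
    w = np.asarray(w, dtype=np.float32)
    shape = w.shape
    groups = w.reshape(shape[0], -1) if group_size is None else w.reshape(-1, group_size)
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(groups).max(axis=1, keepdims=True) / qmax  # per-row or per-group scale
    scale = np.where(scale == 0.0, 1.0, scale)                # avoid division by zero
    q = np.clip(np.round(groups / scale), -qmax - 1, qmax)    # round-to-nearest integers
    return (q * scale).reshape(shape)

def effective_bits(bits, group_size, scale_bits=16):
    """Storage per weight: the integer value plus an amortized FP16 scale (assumption)."""
    return bits + scale_bits / group_size

w = np.random.randn(8, 256).astype(np.float32)
for bits, gs in [(5, 256), (4, 32), (4, 16)]:
    err = np.abs(w - rtn_quantize(w, bits, gs)).mean()
    print(f"{bits}-bit, group size {gs}: "
          f"~{effective_bits(bits, gs):.2f} effective bits, mean abs error {err:.4f}")
```

Running the sketch shows the trade-off discussed in the abstract: shrinking the group size lowers the RTN error but raises the effective bits per weight toward the next integer precision.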

Publication
In arXiv
Cheng Li
Senior Software Engineer

My work focuses on optimizing the training and inference of deep learning models, particularly LLMs/LMMs.
