LoRA


LoRA: Low-Rank Adaptation of Large Language Models

This repo contains the source code of the Python package loralib and several examples of how to integrate it with PyTorch models, such as those in Hugging Face.
We only support PyTorch for now.
See our paper for a detailed description of LoRA.

LoRA: Low-Rank Adaptation of Large Language Models
Edward J. Hu*, Yelong Shen*, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen
Paper: https://arxiv.org/abs/2106.09685
Video explainer: https://www.youtube.com/watch?v=DhRoTONcyZE

Update 2/2023: LoRA is now supported by the State-of-the-art Parameter-Efficient Fine-Tuning (PEFT) library by Hugging Face.

LoRA reduces the number of trainable parameters by learning pairs of rank-decomposition matrices while freezing the original weights.
This vastly reduces the storage requirement for large language models adapted to specific tasks and enables efficient task-switching during deployment, all without introducing inference latency.
LoRA also outperforms several other adaptation methods, including adapter, prefix-tuning, and fine-tuning.
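
For intuition, the core idea can be sketched in a few lines of plain PyTorch. This is a conceptual sketch only, not the loralib implementation: the pretrained weight W stays frozen, and a trainable rank-r pair B, A is added so the effective weight becomes W + (alpha/r) * B A.
import torch
import torch.nn as nn

class ToyLoRALinear(nn.Module):
    # Conceptual sketch only; use loralib.Linear in practice
    # (it additionally handles dropout, weight merging, etc.)
    def __init__(self, in_features, out_features, r=16, lora_alpha=16):
        super().__init__()
        # Frozen pretrained weight W
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        nn.init.kaiming_uniform_(self.weight)
        # Trainable rank-decomposition pair: A is random, B starts at zero,
        # so the update B @ A is zero at initialization
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.02)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = lora_alpha / r

    def forward(self, x):
        # Frozen path plus the scaled low-rank update
        return x @ self.weight.T + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling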

We obtain results comparable or superior to full fine-tuning on the GLUE benchmark using RoBERTa (Liu et al., 2019) base and large and DeBERTa (He et al., 2020) XXL 1.5B, while only training and storing a fraction of the parameters. The RoBERTa and DeBERTa LoRA checkpoints are available for download (see the repository for links).

| | RoBERTa base (Fine-tune) | RoBERTa base (LoRA) | DeBERTa XXL (Fine-tune) | DeBERTa XXL (LoRA) |
|---|---|---|---|---|
| # of Trainable Params. | 125M | 0.8M | 1.5B | 4.7M |
| MNLI (m-Acc/mm-Acc) | 87.6 | 87.5±.3/86.9±.3 | 91.7/91.9 | 91.9±.1/91.9±.2 |
| SST2 (Acc) | 94.8 | 95.1±.2 | 97.2 | 96.9±.2 |
| MRPC (Acc) | 90.2 | 89.7±.7 | 92.0 | 92.6±.6 |
| CoLA (Matthew's Corr) | 63.6 | 63.4±1.2 | 72.0 | 72.4±1.1 |
| QNLI (Acc) | 92.8 | 93.3±.3 | 96.0 | 96.0±.1 |
| QQP (Acc) | 91.9 | 90.8±.1 | 92.7 | 92.9±.1 |
| RTE (Acc) | 78.7 | 86.6±.7 | 93.9 | 94.9±.4 |
| STSB (Pearson/Spearman Corr) | 91.2 | 91.5±.2/91.3±.2 | 92.9/92.6 | 93.0±.2/92.9±.3 |
| Average | 86.40 | 87.24 | 91.06 | 91.32 |

Note: You still need the original pre-trained checkpoint from Hugging Face to use the LoRA checkpoints.

Fine-tuning numbers are taken from Liu et al. (2019) and He et al. (2020). We include confidence intervals on results from our experiments. Please follow the instructions in examples/NLU/ to reproduce our results.

On GPT-2, LoRA compares favorably to both full finetuning and other efficient tuning methods, such as adapter (Houlsby et al., 2019) and prefix tuning (Li and Liang, 2021). We evaluated on E2E NLG Challenge, DART, and WebNLG:

| Method | # of Trainable Params | E2E (BLEU) | DART (BLEU) | WebNLG (BLEU-U/S/A) |
|---|---|---|---|---|
| GPT-2 M (Fine-Tune) | 354.92M | 68.2 | 46.0 | 30.4/63.2/47.6 |
| GPT-2 M (Adapter) | 0.37M | 66.3 | 42.4 | 45.1/54.5/50.2 |
| GPT-2 M (Prefix) | 0.35M | 69.7 | 45.7 | 44.1/63.1/54.4 |
| GPT-2 M (LoRA) | 0.35M | 70.4±.1 | 47.1±.2 | 46.7±.4/62.1±.2/55.3±.2 |
| GPT-2 L (Fine-Tune) | 774.03M | 68.5 | 46.5 | 41.7/64.6/54.2 |
| GPT-2 L (Adapter) | 0.88M | 69.1±.1 | 45.7±.1 | 49.8±.0/61.1±.0/56.0±.0 |
| GPT-2 L (Prefix) | 0.77M | 70.3 | 46.5 | 47.0/64.2/56.4 |
| GPT-2 L (LoRA) | 0.77M | 70.4±.1 | 47.5±.1 | 48.4±.3/64.0±.3/57.0±.1 |

Non-LoRA baselines, except for adapter on GPT-2 large, are taken from Li and Liang (2021). We include confidence intervals on results from our experiments.

Download the GPT-2 LoRA checkpoints:

  • GPT-2 Medium E2E (1.5 MB)
  • GPT-2 Medium DART (1.5 MB)
  • GPT-2 Medium WebNLG (1.5 MB)
  • GPT-2 Large E2E (2.3 MB)
  • GPT-2 Large DART (2.3 MB)
  • GPT-2 Large WebNLG (2.3 MB)

Please follow the instructions in examples/NLG/ to reproduce our result.

Repository Overview

(The initial release of this repo has been archived in the branch "snapshot-9-15-2021")

There are several directories in this repo:

  • loralib/ contains the source code for the package loralib, which needs to be installed to run the examples we provide;
  • examples/NLG/ contains an example implementation of LoRA in GPT-2 using our package, which can be used to reproduce the result in our paper;
  • examples/NLU/ contains an example implementation of LoRA in RoBERTa and DeBERTa using our package, which produces competitive results on the GLUE benchmark;
  • See how we use loralib in GPT-2, RoBERTa, and DeBERTa v2

Quickstart

  1. Installing loralib is simply
pip install loralib
# Alternatively
# pip install git+https://github.com/microsoft/LoRA
  2. You can choose to adapt some layers by replacing them with counterparts implemented in loralib. We only support nn.Linear, nn.Embedding, and nn.Conv2d for now. We also support a MergedLinear for cases where a single nn.Linear represents more than one layer, such as in some implementations of the attention qkv projection (see Additional Notes for more).
# ===== Before =====
# layer = nn.Linear(in_features, out_features)

# ===== After ======
import loralib as lora
# Add a pair of low-rank adaptation matrices with rank r=16
layer = lora.Linear(in_features, out_features, r=16)
  3. Before the training loop begins, mark only LoRA parameters as trainable.
import loralib as lora
model = BigModel()
# This sets requires_grad to False for all parameters without the string "lora_" in their names
lora.mark_only_lora_as_trainable(model)
# Training loop
for batch in dataloader:
   ...
  4. When saving a checkpoint, generate a state_dict that only contains LoRA parameters.
# ===== Before =====
# torch.save(model.state_dict(), checkpoint_path)
# ===== After =====
torch.save(lora.lora_state_dict(model), checkpoint_path)
  5. When loading a checkpoint using load_state_dict, be sure to set strict=False.
# Load the pretrained checkpoint first
model.load_state_dict(torch.load('ckpt_pretrained.pt'), strict=False)
# Then load the LoRA checkpoint
model.load_state_dict(torch.load('ckpt_lora.pt'), strict=False)

Now training can proceed as usual.
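
Putting the steps together, a minimal end-to-end sketch might look like the following; the toy model, optimizer settings, and random data are placeholders, not part of loralib.
import torch
import torch.nn as nn
import loralib as lora

# Toy model with one layer replaced by its LoRA counterpart (dimensions are arbitrary)
model = nn.Sequential(
    lora.Linear(128, 256, r=16),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Freeze everything except the LoRA matrices (unadapted layers are frozen as well)
lora.mark_only_lora_as_trainable(model)
optimizer = torch.optim.AdamW([p for p in model.parameters() if p.requires_grad], lr=1e-3)

# Placeholder training loop on random data
for step in range(100):
    x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Save only the small set of LoRA parameters
torch.save(lora.lora_state_dict(model), 'ckpt_lora.pt')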

Additional Notes

  1. While we focus on a simple yet effective setup in our examples, namely adapting only the q and v projections in a Transformer, LoRA can be applied to any subset of pre-trained weights. We encourage you to explore different configurations, such as adapting the embedding layer by replacing nn.Embedding with lora.Embedding and/or adapting the MLP layers. It's very likely that the optimal configuration varies for different model architectures and tasks.
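For example, a small sketch of such a configuration (the vocabulary size and model dimension below are placeholders):
import loralib as lora

vocab_size, d_model = 50257, 768
# ===== Before =====
# tok_emb = nn.Embedding(vocab_size, d_model)
# mlp_fc  = nn.Linear(d_model, 4 * d_model)
# ===== After =====
# Adapt the token embedding and an MLP projection in addition to (or instead of) q and v
tok_emb = lora.Embedding(vocab_size, d_model, r=8)
mlp_fc = lora.Linear(d_model, 4 * d_model, r=8)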

  2. Some Transformer implementations use a single nn.Linear for the projection matrices for query, key, and value. If one wishes to constrain the rank of the updates to the individual matrices, one has to either break it up into three separate matrices or use lora.MergedLinear. Make sure to modify the checkpoint accordingly if you choose to break up the layer.

# ===== Before =====
# qkv_proj = nn.Linear(d_model, 3*d_model)
# ===== After =====
# Break it up (remember to modify the pretrained checkpoint accordingly)
q_proj = lora.Linear(d_model, d_model, r=8)
k_proj = nn.Linear(d_model, d_model)
v_proj = lora.Linear(d_model, d_model, r=8)
# Alternatively, use lora.MergedLinear (recommended)
qkv_proj = lora.MergedLinear(d_model, 3*d_model, r=8, enable_lora=[True, False, True])
  3. Training bias vectors in tandem with LoRA might be a cost-efficient way to squeeze out extra task performance (if you tune the learning rate carefully). While we did not study its effect thoroughly in our paper, we make it easy to try in lora. You can mark some biases as trainable by passing "all" or "lora_only" to bias= when calling mark_only_lora_as_trainable. Remember to pass the corresponding bias= argument to lora_state_dict when saving a checkpoint.
# ===== Before =====
# lora.mark_only_lora_as_trainable(model) # Not training any bias vectors
# ===== After =====
# Training all bias vectors associated with modules we apply LoRA to 
lora.mark_only_lora_as_trainable(model, bias='lora_only')
# Alternatively, we can train *all* bias vectors in the model, including LayerNorm biases
lora.mark_only_lora_as_trainable(model, bias='all')
# When saving a checkpoint, use the same bias= ('all' or 'lora_only')
torch.save(lora.lora_state_dict(model, bias='all'), checkpoint_path)
  4. Calling model.eval() will trigger the merging of LoRA parameters with the corresponding pretrained ones, which eliminates additional latency for subsequent forward passes. Calling model.train() again will undo the merge. This can be disabled by passing merge_weights=False to LoRA layers.
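For example, a short sketch of how merging interacts with train/eval mode (assuming the default merging behavior described above):
import loralib as lora

layer = lora.Linear(1024, 1024, r=16)

layer.eval()   # merges the low-rank update into the frozen weight; no extra inference latency
layer.train()  # un-merges so the LoRA matrices can continue training

# Opt out of merging entirely when constructing the layer
layer_no_merge = lora.Linear(1024, 1024, r=16, merge_weights=False)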

Contact

Please contact us or post an issue if you have any questions.

For questions related to the package loralib:

  • Edward Hu (edward@edwardjhu.com)
  • Phillip Wallis (phwallis@microsoft.com)
  • Weizhu Chen (wzchen@microsoft.com)

The GPT-2 example:

  • Phillip Wallis (phwallis@microsoft.com)
  • Yelong Shen (yeshe@microsoft.com)

The RoBERTa/DeBERTa example:

  • Lu Wang (luw@microsoft.com)

Acknowledgements

We thank in alphabetical order Jianfeng Gao, Jade Huang, Jiayuan Huang, Lisa Xiang Li, Xiaodong Liu, Yabin Liu, Benjamin Van Durme, Luis Vargas, Haoran Wei, Peter Welinder, and Greg Yang for providing valuable feedback.

Citation

@inproceedings{
hu2022lora,
title={Lo{RA}: Low-Rank Adaptation of Large Language Models},
author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=nZeVKeeFYf9}
}

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct.
For more information see the Code of Conduct FAQ or
contact opencode@microsoft.com with any additional questions or comments.

Download the Source Code

Clone the repository from the command line:

git clone https://github.com/microsoft/LoRA.git
