Infoscience
 
conference paper not in proceedings

Beyond fine-tuning: LoRA modules boost near-OOD detection and LLM security

Salimbeni, Etienne • Craighero, Francesco • Khasanova, Renata • Vasic, Milos • Vandergheynst, Pierre
March 4, 2024
ICLR 2024 Workshop on Secure and Trustworthy Large Language Models

Under resource constraints, LLMs are usually fine-tuned with additional knowledge using Parameter-Efficient Fine-Tuning (PEFT), most commonly with Low-Rank Adaptation (LoRA) modules. LoRA injects a small set of trainable low-rank matrices that adapt the LLM to a new task while the base weights remain frozen; at deployment, the LoRA weights are merged into the LLM weights to speed up inference. In this work, we show how to exploit the embeddings of the unmerged LoRA modules to boost the performance of Out-of-Distribution (OOD) detectors, especially in the more challenging near-OOD scenarios. We further demonstrate that improved OOD detection also helps characterize wrong predictions in downstream tasks, a fundamental step toward more reliable LLMs. Moreover, we present a use case in which the sensitivity of LoRA modules and OOD detection are combined to alert stakeholders to new model updates. This scenario is particularly important when LLMs are outsourced: test functions should be run as soon as the model version changes, so that prompts in downstream applications can be adapted. To validate our method, we ran experiments on Multiple-Choice Question Answering datasets, focusing on the medical domain as the fine-tuning task. Our results motivate keeping LoRA modules even after deployment, since they provide strong features for OOD detection on fine-tuning tasks and can be employed to improve the security of LLMs.
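The mechanics the abstract describes — a frozen base weight plus a low-rank update that can either be merged into the base weight for fast inference or kept separate so its features remain accessible — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Mahalanobis-distance detector is a common OOD baseline used here as an illustrative stand-in, and all variable names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # feature dimension and LoRA rank (r << d)

W = rng.standard_normal((d, d))         # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01  # trainable down-projection
B = rng.standard_normal((d, r)) * 0.01  # trainable up-projection

def lora_forward(x):
    """Unmerged LoRA: base output plus the low-rank update B @ (A @ x)."""
    return W @ x + B @ (A @ x)

def merged_forward(x):
    """At deployment the update is folded into the base weight: (W + BA) x."""
    return (W + B @ A) @ x

x = rng.standard_normal(d)
assert np.allclose(lora_forward(x), merged_forward(x))  # merging preserves outputs

# Illustrative OOD score on feature vectors: Mahalanobis distance to the
# in-distribution statistics (a standard near-OOD baseline detector).
feats_id = rng.standard_normal((500, d))  # stand-in for in-distribution LoRA features
mu = feats_id.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(feats_id, rowvar=False) + 1e-6 * np.eye(d))

def ood_score(f):
    """Larger score = farther from the in-distribution feature statistics."""
    diff = f - mu
    return float(diff @ cov_inv @ diff)
```

The key point the paper exploits is that the merged and unmerged forms are functionally identical, so keeping the LoRA factors unmerged after deployment costs nothing in correctness while exposing intermediate features that a detector like the one above can score.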

Details
Type
conference paper not in proceedings
Author(s)
Salimbeni, Etienne
Craighero, Francesco  
Khasanova, Renata
Vasic, Milos
Vandergheynst, Pierre  
Date Issued

2024-03-04

Subjects

Parameter Efficient Fine-Tuning • Large Language Models • Low-Rank Adaptation • Out-Of-Distribution Detection
URL

fulltext

https://openreview.net/pdf?id=H7Q5hHcvZE
Editorial or Peer reviewed

REVIEWED

Written at

EPFL

EPFL units
LTS2  
Event name
ICLR 2024 Workshop on Secure and Trustworthy Large Language Models
Event place
Vienna, Austria
Event date
May 11, 2024

Relation

IsIdenticalTo

https://infoscience.epfl.ch/record/310348?ln=fr
Available on Infoscience
May 31, 2024
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/208157
Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.