
GradSafe: Detecting Jailbreak Prompts for LLMs via Safety-Critical Gradient Analysis

Yueqi Xie, Minghong Fang, Renjie Pi, Neil Gong


Abstract
Large Language Models (LLMs) face threats from jailbreak prompts. Existing methods for detecting jailbreak prompts rely primarily on online moderation APIs or finetuned LLMs. These strategies, however, often require extensive and resource-intensive data collection and training processes. In this study, we propose GradSafe, which effectively detects jailbreak prompts by scrutinizing the gradients of safety-critical parameters in LLMs. Our method is grounded in a pivotal observation: the gradients of an LLM’s loss for jailbreak prompts paired with a compliance response exhibit similar patterns on certain safety-critical parameters. In contrast, safe prompts lead to different gradient patterns. Building on this observation, GradSafe analyzes the gradients of prompts (paired with compliance responses) to accurately detect jailbreak prompts. We show that GradSafe, applied to Llama-2 without further training, outperforms Llama Guard, despite the latter's extensive finetuning on a large dataset, in detecting jailbreak prompts. This superior performance is consistent across both zero-shot and adaptation scenarios, as evidenced by our evaluations on ToxicChat and XSTest. The source code is available at https://github.com/xyq7/GradSafe.
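To make the gradient-analysis idea in the abstract concrete, the sketch below (not the authors' released implementation, which is linked above) pairs a prompt with a fixed compliance response, backpropagates the language-modeling loss, and compares the resulting gradients against reference gradients computed from a few known-unsafe prompts via cosine similarity. The model name, the choice of parameter subset (`layers.31.mlp.down_proj`), the placeholder unsafe prompts, and the 0.25 threshold are all illustrative assumptions, not values from the paper.

```python
# Minimal sketch of gradient-based jailbreak detection (assumptions noted in comments).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"   # assumption: any causal chat LLM works for the sketch
COMPLIANCE = "Sure"                            # fixed compliance response paired with every prompt

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.float32)
model.eval()  # eval mode only disables dropout; gradients can still be computed


def response_gradients(prompt: str) -> dict:
    """Gradients of the LM loss on the compliance response, conditioned on the prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + " " + COMPLIANCE, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100    # mask prompt tokens; score only the response
    model.zero_grad()
    out = model(input_ids=full_ids, labels=labels)
    out.loss.backward()
    # Keep gradients of an illustrative parameter subset (last layer's MLP down-projection).
    return {
        name: p.grad.detach().clone()
        for name, p in model.named_parameters()
        if p.grad is not None and "layers.31.mlp.down_proj" in name
    }


def cosine_to_reference(grads: dict, ref_grads: dict) -> float:
    """Average cosine similarity between a prompt's gradients and the reference gradients."""
    sims = [
        torch.nn.functional.cosine_similarity(
            grads[name].flatten(), ref_grads[name].flatten(), dim=0
        ).item()
        for name in ref_grads
    ]
    return sum(sims) / len(sims)


# Build reference gradients from a few known-unsafe prompts (placeholders here).
unsafe_examples = ["<known unsafe prompt 1>", "<known unsafe prompt 2>"]
ref = None
for ex in unsafe_examples:
    g = response_gradients(ex)
    ref = g if ref is None else {k: ref[k] + g[k] for k in ref}
ref = {k: v / len(unsafe_examples) for k, v in ref.items()}


def is_jailbreak(prompt: str, threshold: float = 0.25) -> bool:
    """Flag the prompt if its gradients align closely with the unsafe reference."""
    return cosine_to_reference(response_gradients(prompt), ref) > threshold
```

Because only the gradients of a frozen model are inspected, no additional data collection or finetuning is needed, which is the property the abstract contrasts with moderation APIs and finetuned guard models.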
Anthology ID: 2024.acl-long.30
Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 507–518
URL: https://aclanthology.org/2024.acl-long.30
DOI: 10.18653/v1/2024.acl-long.30
Cite (ACL): Yueqi Xie, Minghong Fang, Renjie Pi, and Neil Gong. 2024. GradSafe: Detecting Jailbreak Prompts for LLMs via Safety-Critical Gradient Analysis. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 507–518, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): GradSafe: Detecting Jailbreak Prompts for LLMs via Safety-Critical Gradient Analysis (Xie et al., ACL 2024)
PDF: https://aclanthology.org/2024.acl-long.30.pdf