
Unveiling Safety Vulnerabilities of Large Language Models

About

As large language models become more prevalent, their possible harmful or inappropriate responses are a cause for concern. This paper introduces a unique dataset containing adversarial examples in the form of questions, which we call AttaQ, designed to provoke such harmful or inappropriate responses. We assess the efficacy of our dataset by analyzing the vulnerabilities of various models when subjected to it. Additionally, we introduce a novel automatic approach for identifying and naming vulnerable semantic regions: input semantic areas for which the model is likely to produce harmful outputs. This is achieved through the application of specialized clustering techniques that consider both the semantic similarity of the input attacks and the harmfulness of the model's responses. Automatically identifying vulnerable semantic regions enhances the evaluation of model weaknesses, facilitating targeted improvements to its safety mechanisms and overall reliability.
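The clustering step can be sketched in code. The following is a minimal illustration, not the paper's exact pipeline: the Hugging Face dataset id (ibm/AttaQ), the "input" field name, the all-MiniLM-L6-v2 encoder, the KMeans clusterer, and the alpha weighting are all assumptions, and the harmfulness scores are random stand-ins for what would, in practice, be harmfulness-classifier scores over the target model's responses.

```python
# Sketch: embed the adversarial questions, attach a harmfulness score per
# model response, and cluster in the joint space so that clusters with a
# high mean harmfulness surface as candidate vulnerable semantic regions.
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Load the AttaQ adversarial questions (assumed dataset id and field name).
attaq = load_dataset("ibm/AttaQ", split="train")
questions = np.array(attaq["input"])

# Stand-in harmfulness scores; in practice these would come from a
# harmfulness classifier applied to the model's response to each question.
harmfulness = np.random.default_rng(0).random(len(questions))

# Embed the questions so semantically similar attacks land close together.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(list(questions), normalize_embeddings=True)

# Append harmfulness as an extra, weighted coordinate so clustering considers
# both semantic similarity of the attacks and harmfulness of the responses.
alpha = 0.5  # illustrative trade-off between the two signals
features = np.hstack([embeddings, alpha * harmfulness[:, None]])

# Cluster, then rank clusters by the mean harmfulness of their responses.
n_clusters = 20
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
cluster_harm = np.array([harmfulness[labels == c].mean() for c in range(n_clusters)])

# The highest-scoring clusters are candidate vulnerable semantic regions.
for c in np.argsort(-cluster_harm)[:5]:
    example = questions[labels == c][0]
    print(f"cluster {c}: mean harmfulness {cluster_harm[c]:.2f}, example: {example[:80]!r}")
```

Folding the harmfulness score in as an extra coordinate is just one simple way to make the clustering consider both signals; the paper's specialized clustering may weigh them differently.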

George Kour, Marcel Zalmanovici, Naama Zwerdling, Esther Goldbraich, Ora Nova Fandina, Ateret Anaby-Tavor, Orna Raz, Eitan Farchi • 2023

Related benchmarks

| Task | Dataset | Metric | Value | Rank |
| --- | --- | --- | --- | --- |
| Safety Evaluation | AdvBench | - | - | 117 |
| Safety Evaluation | StrongREJECT | Attack Success Rate | 10 | 45 |
| Red-teaming Safety Evaluation | StrongREJECT | ASR | 7 | 32 |
| Red-teaming Safety Evaluation | HarmBench | ASR | 1 | 32 |
| Red-teaming Safety Evaluation | Basebench | HS | 1.76 | 16 |
| Red-teaming Safety Evaluation | Edgebench | HS Score | 3.15 | 16 |
| Red-teaming Safety Evaluation | SC-Safety | HS | 2.27 | 16 |
| Safety Evaluation | XSTest | HS Rate | 2.31 | 8 |
| Red-teaming Safety Evaluation | AdvBench | HPR | 26 | 8 |
| Red-teaming Safety Evaluation | XSTest | HPR | 23 | 8 |
