hacky-lm-attack-24 (public): 1 star, 0 forks, 0 issues

Commits

List of commits on branch main.
  • Update README.md -- 4570dc3f11e9797178ce41994b652e5e92d13e1c -- aandyzoujm committed a year ago (Verified)
  • Update template.py -- 8abb93aa0b38942c3b328ccad3259cf58a8518ee -- zzifanw505 committed a year ago (Verified)
  • Update evaluate.py -- 438f2fb00a8d2c15ef4e4dfcd2df15a230fac5e0 -- zzifanw505 committed a year ago (Verified)
  • Update evaluate_individual.py -- 7370f81e8b58f67d08ebc24d1fb564ab2f51c549 -- zzifanw505 committed a year ago (Verified)
  • Create results folder if it doesn't exist -- 07eaa21ff76a56e9d3363c2d116f6d210ea83b0b -- zzifanw505 committed a year ago (Unverified)
  • Update README.md -- 9282df2708449d261abe0318d059bc62787152c2 -- zzifanw505 committed a year ago (Verified)

README

LLM Attacks

License: MIT

This is the official repository for "Universal and Transferable Adversarial Attacks on Aligned Language Models" by Andy Zou, Zifan Wang, Nicholas Carlini, Milad Nasr, J. Zico Kolter, and Matt Fredrikson.

Check out our website and demo here.

Updates

  • (2023-08-16) We include a notebook demo.ipynb (also viewable on Colab) containing a minimal implementation of GCG for jailbreaking LLaMA-2 into generating harmful completions.

Table of Contents

  • Installation
  • Models
  • Demo
  • Experiments
  • Reproducibility
  • Citation
  • License

Installation

This codebase requires FastChat fschat==0.2.23; please make sure to install exactly this version. The llm-attacks package can then be installed by running the following command at the root of this repository:

pip install -e .
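
If you want to verify that the pinned FastChat release is installed, an optional standard-library check (not part of the repository) is:

    # Optional sanity check that the pinned FastChat release is installed.
    # Uses only the standard library, so it assumes nothing about fastchat's API.
    from importlib.metadata import version

    installed = version("fschat")
    assert installed == "0.2.23", f"expected fschat==0.2.23, found {installed}"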

Models

Please follow the instructions to download Vicuna-7B and/or LLaMA-2-7B-Chat first (we use the weights converted by Hugging Face). By default, our scripts assume the models are stored under a root directory named /DIR. To point to your own models and tokenizers, add the following lines to experiments/configs/individual_xxx.py (for individual experiments) and experiments/configs/transfer_xxx.py (for multiple-behavior or transfer experiments). An example is given below.

    config.model_paths = [
        "/DIR/vicuna/vicuna-7b-v1.3",
        ... # more models
    ]
    config.tokenizer_paths = [
        "/DIR/vicuna/vicuna-7b-v1.3",
        ... # more tokenizers
    ]
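
For orientation, here is a minimal sketch of what such a config file could look like, assuming the standard ml_collections get_config() convention; the actual config files in the repository may build on a shared template, and the values below are placeholders:

    # Hypothetical sketch of an experiments/configs/individual_xxx.py-style file.
    from ml_collections import config_dict

    def get_config():
        config = config_dict.ConfigDict()
        # Point these at wherever the converted weights actually live; /DIR is a placeholder root.
        config.model_paths = [
            "/DIR/vicuna/vicuna-7b-v1.3",
        ]
        config.tokenizer_paths = [
            "/DIR/vicuna/vicuna-7b-v1.3",
        ]
        return config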

Demo

We include a notebook demo.ipynb which provides an example of attacking LLaMA-2 with GCG. You can also view this notebook on Colab. The notebook uses a minimal implementation of GCG, so it should only be used to get familiar with the attack algorithm. For running experiments with more behaviors, please see the Experiments section. The demo uses livelossplot to monitor the loss, so install this library first via pip:

pip install livelossplot
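
For reference, livelossplot's basic notebook usage looks roughly like the following; the loss values here are placeholders, not the demo's actual GCG attack loop:

    # Minimal livelossplot sketch for monitoring a loss curve in a notebook.
    # The "loss" below is a stand-in value, not the adversarial-suffix loss from demo.ipynb.
    from livelossplot import PlotLosses

    plotlosses = PlotLosses()
    for step in range(10):
        loss = 1.0 / (step + 1)        # placeholder loss value
        plotlosses.update({"Loss": loss})
        plotlosses.send()              # redraws the live plot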

Experiments

The experiments folder contains code to reproduce GCG experiments on AdvBench.

  • To run individual experiments with harmful behaviors and harmful strings (i.e. 1 behavior, 1 model or 1 string, 1 model), run the following commands inside experiments (changing vicuna to llama2 and changing behaviors to strings switches between experiment setups):

    cd launch_scripts
    bash run_gcg_individual.sh vicuna behaviors

  • To perform multiple-behavior experiments (i.e. 25 behaviors, 1 model), run the following commands inside experiments:

    cd launch_scripts
    bash run_gcg_multiple.sh vicuna # or llama2

  • To perform transfer experiments (i.e. 25 behaviors, 2 models), run the following commands inside experiments:

    cd launch_scripts
    bash run_gcg_transfer.sh vicuna 2 # or vicuna_guanaco 4

  • To perform evaluation experiments, please follow the directions in experiments/parse_results.ipynb.

Note that all hyper-parameters in our experiments are handled by the ml_collections package. You can change them directly where they are defined, e.g. in experiments/configs/individual_xxx.py. However, the recommended way to pass different hyper-parameters -- for instance, if you would like to try another model -- is through the launch script. Check out our launch scripts in experiments/launch_scripts for examples. For more information about ml_collections, please refer to their repository.
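
As a rough illustration of how such overrides work, this is the library's standard absl + config_flags pattern (a generic sketch, not necessarily the repository's exact entry point; flag and field names are illustrative):

    # Generic ml_collections + absl pattern: load a config file and allow
    # per-field overrides such as --config.n_steps=500 on the command line.
    from absl import app, flags
    from ml_collections import config_flags

    config_flags.DEFINE_config_file("config")
    FLAGS = flags.FLAGS

    def main(_):
        config = FLAGS.config
        print(config)  # every field can be overridden as --config.<field>=<value>

    if __name__ == "__main__":
        app.run(main)

A script like this would be invoked as, e.g., python main.py --config=configs/individual_xxx.py --config.n_steps=500 (paths and field names illustrative).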

Reproducibility

A note on hardware: all of our experiments were run on one or more NVIDIA A100 GPUs, each with 80 GB of memory.

Below are a few issues people have reported when reproducing our results, along with workarounds that may help if you hit something similar in your own setup.

Currently the codebase only supports training with LLaMA- or Pythia-based models. Running the scripts with other models (which use different tokenizers) will likely result in silent errors. As a tip, start by modifying the function where the different prompt slices are defined for the model.
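
When adapting to a new tokenizer, one way to start (an illustrative snippet, not part of the repository) is to print the token boundaries of a formatted prompt, so you can see where the slices for the instruction, adversarial suffix, and target should fall:

    # Inspect how a tokenizer splits a chat-formatted prompt, to help place the
    # prompt slices correctly. The tokenizer path and prompt are illustrative.
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("/DIR/vicuna/vicuna-7b-v1.3")
    prompt = "USER: Tell me how to do X ! ! ! ! ! ASSISTANT: Sure, here"
    ids = tok(prompt).input_ids
    for i, token in enumerate(tok.convert_ids_to_tokens(ids)):
        print(i, token)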

Citation

If you find this useful in your research, please consider citing:

@misc{zou2023universal,
      title={Universal and Transferable Adversarial Attacks on Aligned Language Models}, 
      author={Andy Zou and Zifan Wang and J. Zico Kolter and Matt Fredrikson},
      year={2023},
      eprint={2307.15043},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

License

llm-attacks is licensed under the terms of the MIT license. See LICENSE for more details.