Welcome to ARES 2.0
Overview
ARES 2.0 (Adversarial Robustness Evaluation for Safety) is a Python library designed for research in adversarial machine learning. Its primary goal is to evaluate how well image classification and object detection models withstand adversarial attacks. Additionally, ARES 2.0 offers strategies for improving model robustness through robust (adversarial) training.
Features
- Built with PyTorch: ARES 2.0 is developed on the PyTorch framework, which provides flexibility and ease of use.
- Supports Multiple Attacks: The library can simulate various attacks on image classification models to evaluate their robustness.
- Object Detection Attack Capability: ARES 2.0 allows for adversarial attacks on object detection models as well.
- Robust Training and Checkpoints: It provides methods for robust training to enhance resistance to attacks and includes a variety of pre-trained models, or checkpoints.
- Distributed Training and Testing: The library supports distributed computing, so training and evaluation can run across multiple devices for greater speed.
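To make the idea of an adversarial attack concrete, here is a minimal FGSM-style sketch. It deliberately does not use the ARES API: it is a toy logistic classifier in NumPy with a hand-derived input gradient, purely to illustrate what "attacking a model" means.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM on a logistic classifier p = sigmoid(w.x + b).

    The gradient of the cross-entropy loss w.r.t. the input x is
    (p - y) * w, so the attack moves x by eps along its sign.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy example: a confidently classified point becomes less confident.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, -1.0])                    # clean logit = 3.0
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.5)
print(np.dot(w, x), np.dot(w, x_adv))        # → 3.0 1.5
```

The adversarial point's logit (1.5) is strictly lower than the clean one (3.0), i.e. a small, bounded input perturbation measurably degrades the model's confidence — the effect ARES quantifies at scale on real image models.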
Installation
To get started with ARES 2.0, users may set up a dedicated environment:
- Create a virtual environment using Conda, which is recommended but optional:
conda create -n ares python==3.10.9
conda activate ares
- Clone the ARES 2.0 repository and install the necessary packages:
git clone https://github.com/thu-ml/ares2.0
cd ares2.0
pip install -r requirements.txt
mim install mmengine==0.8.4
mim install mmcv==2.0.0
mim install mmdet==3.1.0
pip install -v -e .
Getting Started
ARES 2.0 is organized into different modules to help you get started quickly:
- Image Classification: For evaluating how well image classification models handle adversarial attacks.
- Object Detection: For assessing the robustness of object detection models.
- Robust Training: Techniques and methodologies for hardening your models against attacks.
Each module has its own detailed guide in its respective directory.
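As a conceptual sketch of what robust (adversarial) training does, the loop below trains a toy logistic classifier on FGSM-perturbed inputs at every step. The model, data, and hyperparameters are invented for illustration only and do not reflect the ARES implementation, which operates on deep PyTorch models:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.1, lr=0.5, steps=200, seed=0):
    """Adversarial training for a logistic classifier: at each step,
    perturb the batch with one FGSM step, then take a gradient step
    on the perturbed batch (a one-layer stand-in for PGD-AT)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1]) * 0.01
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        # FGSM: the input gradient of cross-entropy is (p - y) * w.
        X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])
        # Standard logistic-regression gradients, on the perturbed batch.
        p_adv = sigmoid(X_adv @ w + b)
        grad_w = X_adv.T @ (p_adv - y) / len(y)
        grad_b = np.mean(p_adv - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Linearly separable toy data whose margin exceeds eps.
X = np.array([[1.5, 0.2], [2.0, -0.3], [-1.5, 0.1], [-2.0, 0.4]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w, b = adversarial_train(X, y)
acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y > 0.5))
print(acc)  # clean accuracy on the toy set
```

Because every update is computed on attacked inputs, the learned boundary has to classify points even after they are pushed eps toward the wrong side, which is the core intuition behind the robust-training methods this module provides.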
Documentation
For comprehensive guides and detailed API documentation, you can visit the official documentation site. It contains everything from basic tutorials to intricate strategies for attacking and defending machine learning models.
Citation
If ARES 2.0 is beneficial to your research or projects, you are encouraged to cite the accompanying paper, which provides in-depth insights into the adversarial robustness techniques developed within this library:
@inproceedings{dong2020benchmarking,
title={Benchmarking Adversarial Robustness on Image Classification},
author={Dong, Yinpeng and Fu, Qi-An and Yang, Xiao and Pang, Tianyu and Su, Hang and Xiao, Zihao and Zhu, Jun},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
pages={321--331},
year={2020}
}
ARES 2.0 is a powerful resource for researchers looking to delve into the intricacies of adversarial machine learning with robust tools and comprehensive documentation.