Introduction to the Adapter Project
Overview
The Adapter project is a library for parameter-efficient and modular transfer learning. It serves as an add-on to Hugging Face's popular Transformers library, integrating more than ten adapter methods into more than twenty state-of-the-art Transformer models. The goal of this integration is to minimize the coding required for training and inference, making it easier for researchers and developers to apply these methods to natural language processing (NLP) tasks.
Features
The Adapter library offers a unified interface that facilitates efficient fine-tuning and modular transfer learning. Some of the notable features include:
- Support for full-precision or quantized training methods, such as Q-LoRA, Q-Bottleneck Adapters, and Q-PrefixTuning.
- The ability to merge adapters via task arithmetic or compose multiple adapters using composition blocks (see the sketch after this list).
- Flexibility in configuring adapters to suit a variety of research needs in parameter-efficient transfer learning.
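As an illustrative sketch of adapter merging (not taken verbatim from the project's documentation), two trained LoRA adapters could be combined into a weighted merge roughly as follows. The adapter names are hypothetical, and the exact signature of average_adapter should be verified against the current adapters API reference:
import adapters
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("roberta-base")
adapters.init(model)

# Hypothetical adapter names; assume both have already been trained with a LoRA config.
model.add_adapter("task_a", config="lora")
model.add_adapter("task_b", config="lora")

# Merge the two adapters into a new one as a weighted combination (task-arithmetic style);
# average_adapter and its arguments are an assumption to be checked against the documentation.
model.average_adapter("merged_tasks", ["task_a", "task_b"], weights=[0.5, 0.5])
model.set_active_adapters("merged_tasks")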
Installation
Installing the Adapter library is straightforward. It supports Python 3.8 and above and PyTorch 1.10 and above. Here’s how you can get started:
- Install PyTorch following the official guidelines at pytorch.org.
- Install the Adapter library from PyPI using the command:
pip install -U adapters
- Alternatively, clone the repository and install it from the source:
git clone https://github.com/adapter-hub/adapters.git
cd adapters
pip install .
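To confirm that the installation succeeded, a quick version check can be run (assuming, as is standard, that the package exposes a __version__ attribute):
python -c "import adapters; print(adapters.__version__)"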
Quick Start Guide
Loading Pre-trained Adapters
One of the library’s key features is loading pre-trained adapters from the Hub. Here is a simple example:
from adapters import AutoAdapterModel
from transformers import AutoTokenizer

# Load a base model with adapter support, plus its tokenizer.
model = AutoAdapterModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Download a pre-trained sentiment adapter from the Hugging Face Hub and activate it.
model.load_adapter("AdapterHub/roberta-base-pf-imdb", source="hf", set_active=True)
print(model(**tokenizer("This works great!", return_tensors="pt")).logits)
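The AdapterHub/roberta-base-pf-imdb adapter is downloaded together with a matching classification head, so the printed logits correspond to sentiment predictions for the input sentence.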
Modifying Existing Models
The Adapter library can also attach adapters to existing Transformers model classes, so that existing training setups can be reused with minimal changes:
import adapters
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("t5-base")

# Enable adapter support on a standard Transformers model.
adapters.init(model)

# Add a LoRA adapter and set it up for training (freezing the base model weights).
model.add_adapter("my_lora_adapter", config="lora")
model.train_adapter("my_lora_adapter")

# Your regular training loop...
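After training, the adapter weights can be stored separately from the base model. As a brief sketch, save_adapter writes only the adapter parameters to a directory (the path below is just an example):
# Save only the adapter weights (typically a few megabytes) rather than the full model.
model.save_adapter("./saved_adapters/my_lora_adapter", "my_lora_adapter")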
Customizing Adapter Configurations
The library provides configuration classes that can be adjusted and combined to meet specific requirements:
from adapters import ConfigUnion, PrefixTuningConfig, ParBnConfig, AutoAdapterModel

model = AutoAdapterModel.from_pretrained("microsoft/deberta-v3-base")

# Combine prefix tuning with a parallel bottleneck adapter in a single configuration.
adapter_config = ConfigUnion(
    PrefixTuningConfig(prefix_length=20),
    ParBnConfig(reduction_factor=4),
)
model.add_adapter("my_adapter", config=adapter_config, set_active=True)
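ConfigUnion is also the mechanism behind combined methods shipped with the library, such as the Mix-and-Match adapter (MAMConfig) and UniPELT (UniPELTConfig); the exact class names should be checked against the current API reference.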
Composing Multiple Adapters
Users can build more complex setups by composing multiple adapters, for example by running two adapters in parallel on the same input:
from adapters import AdapterSetup, AutoAdapterModel
import adapters.composition as ac
from transformers import AutoTokenizer

model = AutoAdapterModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Load two pre-trained adapters: question classification (TREC) and sentiment (IMDB).
qc = model.load_adapter("AdapterHub/roberta-base-pf-trec")
sent = model.load_adapter("AdapterHub/roberta-base-pf-imdb")

# Run both adapters in parallel on the same input within this context.
with AdapterSetup(ac.Parallel(qc, sent)):
    print(model(**tokenizer("What is AdapterHub?", return_tensors="pt")))
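Besides Parallel, other composition blocks are available. As an illustrative sketch (combining these two particular adapters sequentially serves no practical purpose beyond demonstration), Stack chains adapters so that one is applied after the other in every layer:
# Activate a sequential composition: qc is applied first, then sent, in every layer.
model.set_active_adapters(ac.Stack(qc, sent))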
Additional Resources
To fully leverage the Adapter library, several resources are available:
- Colab notebook tutorials: These are practical guides that introduce the main concepts of transformers and adapters.
- Documentation: Comprehensive materials are available on how to train and employ adapters.
- Pre-trained adapters library: Explore a wide array of pre-trained adapter modules and contribute your own.
- Examples folder: Contains scripts adapted for training with adapters.
Supported Methods and Models
The Adapter library integrates methods such as bottleneck adapters, AdapterFusion, Prefix Tuning, LoRA, Compacter, and (IA)³. It supports all PyTorch models listed on the Model Overview page of its documentation.
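As a quick orientation (a sketch based on the library's configuration classes; consult the API reference for the full list and default values), each method is selected through a corresponding config object passed to add_adapter:
from adapters import SeqBnConfig, PrefixTuningConfig, LoRAConfig, CompacterConfig

# Each adapter method corresponds to a configuration class; any of these can be passed
# to model.add_adapter("name", config=...).
bottleneck = SeqBnConfig(reduction_factor=16)  # classic sequential bottleneck adapter
prefix = PrefixTuningConfig(prefix_length=30)
lora = LoRAConfig(r=8, alpha=16)
compacter = CompacterConfig()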
Contribution and Development
For those interested in contributing to or developing the Adapter project, detailed contribution and development guides are provided.
Academic References
When using the Adapter library in academic work, it is recommended to cite the project's library papers, which describe the underlying technology and how the project has evolved.
In summary, the Adapter library provides a customizable and efficient environment for modular, parameter-efficient transfer learning across a wide range of NLP tasks.