detoxify
Detoxify provides toxic comment classification built on PyTorch Lightning and Hugging Face Transformers, with pretrained models for multilingual and unbiased toxicity detection. The library identifies toxic content across multiple languages while reducing identity-based bias, supporting researchers and content moderators. It also includes tooling to train these models on your own datasets.