
detoxify

Reliable Toxic Comment Classification with PyTorch Lightning and Transformers

Product Description

Detoxify provides accurate toxic comment classification built on PyTorch Lightning and Transformers, with models for multilingual and unbiased detection. The library identifies toxic content across multiple languages while minimizing identity-based biases, supporting researchers and content moderators alike. Learn how to train and deploy these models on diverse datasets to improve online safety.
Project Details