DiG

The Gated Linear Attention Approach for Scalable and Efficient Diffusion Models

Product Description

The DiG model uses Gated Linear Attention to make visual content generation with diffusion models more scalable and efficient. It trains roughly 2.5× faster than a comparable Diffusion Transformer and markedly reduces GPU memory use, with the advantage growing as computational complexity increases. Deeper DiG variants also show consistent reductions in FID score, underscoring the architecture's efficiency among current diffusion models.
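
For intuition, the sketch below shows the recurrent form of gated linear attention that DiG-style models build on: a per-dimension gate decays a running key-value state, so each token is processed in linear time rather than the quadratic time of softmax attention. This is a minimal illustrative sketch, not DiG's actual implementation; the function name, tensor shapes, and gate parameterization are assumptions for demonstration.

```python
import torch

def gated_linear_attention(q, k, v, alpha):
    """Recurrent form of gated linear attention (illustrative sketch).

    q, k:  (batch, seq_len, d_k)
    v:     (batch, seq_len, d_v)
    alpha: (batch, seq_len, d_k) per-dimension forget gates in (0, 1)

    Maintains an O(d_k * d_v) state per sequence, giving O(seq_len)
    compute instead of softmax attention's O(seq_len^2).
    """
    b, t, d_k = q.shape
    d_v = v.shape[-1]
    state = q.new_zeros(b, d_k, d_v)  # running gated key-value summary
    outputs = []
    for i in range(t):
        # Decay the state with the gate, then add the new key-value outer product.
        state = alpha[:, i].unsqueeze(-1) * state \
            + k[:, i].unsqueeze(-1) * v[:, i].unsqueeze(1)
        # Read out the current token: query times accumulated state.
        outputs.append(torch.einsum('bk,bkv->bv', q[:, i], state))
    return torch.stack(outputs, dim=1)  # (batch, seq_len, d_v)
```

In practice, implementations of this family use chunk-parallel kernels rather than this token-by-token loop, which is what makes the training-speed and memory gains described above attainable on GPUs.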
Project Details