# Pre-trained Models

## CDial-GPT
This project provides the LCCC and MMChat Chinese dialogue datasets along with pretrained models built on the Chinese GPT architecture. It includes code for pretraining and fine-tuning with HuggingFace's Transformers library, enabling robust Chinese dialogue generation, and its ongoing updates and resources support sentiment analysis and natural language generation.
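For orientation, here is a minimal sketch of loading one of the pretrained checkpoints with Transformers. The Hub ID `thu-coai/CDial-GPT_LCCC-large` and the BertTokenizer pairing are assumptions; confirm both against the project README, which lists the exact checkpoint names.

```python
# Hedged sketch: loading a CDial-GPT checkpoint via HuggingFace Transformers.
# The Hub ID "thu-coai/CDial-GPT_LCCC-large" is an assumption; the project
# README lists the exact checkpoint names and their tokenizer pairing.
from transformers import BertTokenizer, OpenAIGPTLMHeadModel

tokenizer = BertTokenizer.from_pretrained("thu-coai/CDial-GPT_LCCC-large")
model = OpenAIGPTLMHeadModel.from_pretrained("thu-coai/CDial-GPT_LCCC-large")

# Encode a single utterance ("Hello, how have you been lately?") and sample a
# short reply. Real multi-turn use joins dialogue turns with the separators
# described in the README.
input_ids = tokenizer.encode("你好，最近怎么样？", return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=32, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```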
## CodeGeeX2
CodeGeeX2 is a multilingual code generation model built on the ChatGLM2 framework. With 6 billion parameters, it improves markedly on its predecessor across languages including Python, C++, and Java. Key features include faster inference, sequence lengths of up to 8192 tokens, and deployment on as little as 6GB of GPU memory. The updated CodeGeeX plugin supports over 100 languages and offers contextual and cross-file completion. Model weights are open for academic research, with an option for commercial use.
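As a sketch of typical usage, the snippet below loads the model through Transformers and steers it with a language-tag prompt, following the pattern the project documents. The Hub ID `THUDM/codegeex2-6b` and the half-precision GPU setup are assumptions to verify against the README.

```python
# Hedged sketch: prompting CodeGeeX2 through HuggingFace Transformers.
# Assumes the "THUDM/codegeex2-6b" Hub checkpoint and a CUDA GPU with
# roughly 6GB of free memory for half-precision inference.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True)
model = model.half().cuda().eval()

# CodeGeeX2 is steered with a "# language: <name>" tag at the top of the prompt.
prompt = "# language: Python\n# write a bubble sort function\n"
inputs = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_length=256, top_k=1)
print(tokenizer.decode(outputs[0]))
```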
## Fast-SRGAN
This project delivers real-time super-resolution of low-resolution video using an SRGAN-inspired architecture with pixel-shuffle upsampling. It processes video at up to 720p and 30fps on MPS devices, ships a pretrained model for image inference, and documents custom training through editable CLI configurations. Contributions for model improvements and new features are welcome.
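To illustrate the pixel-shuffle idea such generators rely on (this is a generic PyTorch example, not the repository's own code), a convolution first expands channels by r², then `nn.PixelShuffle` rearranges those channels into an r-times-larger feature map.

```python
# Illustrative sketch of SRGAN-style pixel-shuffle upsampling; a generic
# example, not Fast-SRGAN's actual implementation.
import torch
import torch.nn as nn

class UpsampleBlock(nn.Module):
    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        # Expand channels by scale**2 so PixelShuffle can trade them for resolution.
        self.conv = nn.Conv2d(channels, channels * scale**2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)  # (C*r^2, H, W) -> (C, H*r, W*r)
        self.act = nn.PReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.shuffle(self.conv(x)))

# Two 2x blocks upscale a 180x320 feature map 4x, toward a 720p-sized output.
x = torch.randn(1, 64, 180, 320)
up = nn.Sequential(UpsampleBlock(64), UpsampleBlock(64))
print(up(x).shape)  # torch.Size([1, 64, 720, 1280])
```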
## CharacterGLM-6B
CharacterGLM-6B advances conversational AI through realistic character attributes and behaviors, emphasizing consistency, human-likeness, and engagement. Developed by Lingxin Intelligence and Tsinghua University's CoAI Lab on top of ChatGLM2, it creates AI personas with detailed personalities. The model is intended for academic research into complex dialogue and ethical AI development; demos and technical documentation showcase its capabilities.
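Because the model builds on ChatGLM2, inference plausibly follows the ChatGLM chat interface. The sketch below assumes a `thu-coai/CharacterGLM-6B` Hub checkpoint and the base ChatGLM2-style `chat` call; the project's own documentation defines how a character profile is actually supplied.

```python
# Hedged sketch: ChatGLM2-style inference applied to CharacterGLM-6B.
# The Hub ID "thu-coai/CharacterGLM-6B" and the chat() call shape are
# assumptions; CharacterGLM additionally conditions on a character profile,
# whose exact format is specified in the project's documentation.
from transformers import AutoModel, AutoTokenizer

model_id = "thu-coai/CharacterGLM-6B"  # assumed Hub name
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).half().cuda().eval()

# Query: "Hello, please introduce yourself."
response, history = model.chat(tokenizer, "你好，请介绍一下你自己。", history=[])
print(response)
```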