# Fine-tuning

## PhoGPT
PhoGPT is an open-source generative model series for Vietnamese, comprising a 4-billion-parameter base model and a chat-focused variant. Trained on a large Vietnamese corpus and fine-tuned on comprehensive conversational data, PhoGPT performs strongly on Vietnamese language tasks. Technical details and implementation guidelines are provided, along with notes on limitations in complex reasoning and safe question answering.
## llama-recipes
The llama-recipes repository provides scripts and tutorials for getting started with Meta's Llama models, including the Llama 3.2 Vision and Text models. It includes local, cloud, and on-premises implementation guides. The repository supports multimodal inference and model fine-tuning, accommodating developers interested in integrating advanced language models without unnecessary complexity.
## YiVal
YiVal provides an advanced solution for automating prompt and configuration optimization in GenAI applications. The tool simplifies prompt adjustment, improving performance and reducing latency without manual intervention. It uses data-driven insights to refine model parameters, addressing challenges such as prompt creation, fine-tuning complexity, scalability, and data drift, helping applications achieve better results at lower cost. Refer to the quickstart guide for seamless integration.
## self-llm
The 'self-llm' project provides a step-by-step tutorial on using open-source large language models, tailored for beginners in China. Focusing on Linux platforms, it covers environment setup, local deployment, and fine-tuning techniques such as LoRA and P-Tuning for models like LLaMA and ChatGLM. The resource aims to help students and researchers apply these tools effectively in their studies and projects, and welcomes community contributions to this collaborative open-source initiative.
## LLamaTuner
LLamaTuner is a sophisticated toolkit providing efficient and flexible solutions for fine-tuning large language models such as Llama3, Phi3, and Mistral on different GPU setups. It supports both single and multi-node configurations by using features like FlashAttention and Triton kernels to enhance training throughput. The toolkit's compatibility with DeepSpeed enables the use of ZeRO optimization techniques for efficient training. LLamaTuner also offers broad support for various models, datasets, and training methods, making it versatile for open-source and customized data formats. It is well-suited for continuous pre-training, instruction fine-tuning, and chat interactions.
## Yi-1.5
Yi-1.5 is the enhanced version of Yi, improving coding, math, reasoning, and instruction-following while retaining strong language comprehension. Available in 34B, 9B, and 6B sizes, it is open-source and can be fine-tuned or deployed using Python and Ollama. Platforms like Hugging Face and ModelScope support its deployment. It can also be tried interactively online and fine-tuned via LLaMA-Factory and Swift, all under the Apache 2.0 license.
## lora
Learn how Low-rank Adaptation (LoRA) speeds up the fine-tuning of Stable Diffusion models, improving efficiency and shrinking checkpoint size for easy sharing. The technique is compatible with diffusers, includes inpainting support, and can surpass traditional fine-tuning in performance. Discover integrated pipelines for enhancing CLIP, Unet, and token outputs, along with straightforward checkpoint merging. Delve into project updates, the web demo on Huggingface Spaces, and detailed features to understand its role in text-to-image diffusion fine-tuning.
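The appeal of LoRA comes down to simple arithmetic: instead of updating a full weight matrix, it trains two small low-rank factors. A minimal sketch of the parameter savings (dimensions and rank are illustrative, not taken from this repository):

```python
# Illustrative LoRA arithmetic: instead of updating a full d_out x d_in
# weight matrix W, LoRA learns two low-rank factors B (d_out x r) and
# A (r x d_in), so the effective weight becomes W + (alpha / r) * (B @ A).

def lora_param_counts(d_out: int, d_in: int, r: int) -> tuple[int, int]:
    """Return (full fine-tune params, LoRA params) for one linear layer."""
    full = d_out * d_in
    lora = r * (d_out + d_in)
    return full, lora

# Example: a 4096x4096 projection with rank r=8.
full, lora = lora_param_counts(4096, 4096, 8)
print(full, lora)  # 16777216 65536
print(f"trainable fraction: {lora / full:.4%}")
```

The same arithmetic explains why LoRA checkpoints are megabytes rather than gigabytes: only the factors are saved and shared.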
## stanford_alpaca
The Stanford Alpaca project delivers an instruction-following model fine-tuned from LLaMA on a dataset of 52K unique instructions. The initiative supports research by releasing the dataset and a flexible training framework under non-commercial licensing terms. The model invites exploration of its capabilities while urging caution about current limitations, and aims to improve safety and ethical standards. Resources cover data generation techniques, fine-tuning methods, and evaluation tools, fostering deeper engagement with large language models.
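The released data follows a simple instruction/input/output JSON schema that is rendered into a fixed prompt template at training time. A minimal sketch of that formatting (the example record is hypothetical):

```python
# Sketch of Alpaca-style instruction records: the 52K dataset stores JSON
# objects with "instruction", "input", and "output" fields, rendered into
# a prompt template like the one below before fine-tuning.

def format_alpaca_prompt(example: dict) -> str:
    if example.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            "### Response:"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
        "### Response:"
    )

record = {"instruction": "Translate to French.", "input": "Good morning.",
          "output": "Bonjour."}
print(format_alpaca_prompt(record))
```

During training, the model learns to continue the prompt with the record's `output` field.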
## DB-GPT-Hub
This project utilizes Large Language Models (LLMs) to improve Text-to-SQL parsing efficiency and accuracy. It achieves notable accuracy improvements in complex database queries by employing Supervised Fine-Tuning (SFT) and using datasets like Spider. With support for various models, including CodeLlama and Baichuan2, it minimizes hardware demands through QLoRA. A valuable resource for developers, the initiative offers comprehensive instructions for data preprocessing, model training, prediction, and evaluation.
## swift
SWIFT delivers a scalable framework for the training and deployment of over 350 language models and 100 multimodal models. It includes a comprehensive library of adapters for integrating advanced techniques like NEFTune and LoRA+, allowing for seamless workflow integration without proprietary scripts. With a Gradio web interface and abundant documentation, SWIFT enhances accessibility in deep learning, benefiting both beginners and professionals by improving model training efficiency.
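NEFTune, one of the techniques mentioned above, simply perturbs token embeddings with bounded uniform noise during fine-tuning. A toy, framework-free sketch of the idea (dimensions and alpha are illustrative; SWIFT wires this into the actual training loop):

```python
import math
import random

# Minimal sketch of NEFTune: during fine-tuning, add uniform noise to the
# token embeddings, scaled by alpha / sqrt(seq_len * embed_dim), which
# regularizes instruction tuning at negligible cost.

def neftune_noise(embeddings: list[list[float]], alpha: float = 5.0) -> list[list[float]]:
    seq_len, dim = len(embeddings), len(embeddings[0])
    scale = alpha / math.sqrt(seq_len * dim)
    return [[x + random.uniform(-scale, scale) for x in row]
            for row in embeddings]

# A 4-token sequence with 8-dim embeddings, all zeros for clarity.
noisy = neftune_noise([[0.0] * 8 for _ in range(4)], alpha=5.0)
```

The noise is applied only during training; inference uses the clean embeddings.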
## LLM-FineTuning-Large-Language-Models
Discover detailed methodologies and practical techniques for fine-tuning large language models (LLMs), supported by comprehensive notebooks and video guides. This resource covers techniques such as 4-bit quantization, direct preference optimization, and custom dataset tuning for models like LLaMA, Mistral, and Mixtral. It also demonstrates the integration of tools like LangChain and the use of APIs, alongside advanced concepts including RoPE embeddings and validation log perplexity, providing diverse applications for AI project enhancement.
## dl-for-emo-tts
The project investigates various deep learning techniques to enhance emotional expression in Text-to-Speech systems. Focusing on Tacotron and DCTTS models, it explores fine-tuning strategies using datasets such as RAVDESS and EMOV-DB to augment speech naturalness and emotional depth. The research involves optimizing model parameters, applying novel training methodologies, and utilizing transfer learning in low-resource settings. The repository offers insights into utilizing neural networks for generating emotionally nuanced speech, along with practical implementations and evaluations of diverse methods.
## fashion-clip
FashionCLIP uses contrastive learning to improve image-text model performance for fashion. Fine-tuned with over 700K data pairs, it excels in capturing fashion specifics. FashionCLIP 2.0 boosts performance further with updated checkpoints, aiding in tasks like retrieval and parsing. Available on HuggingFace, it supports scalable, sustainable applications with low environmental impact.
## trlx
The framework provides robust solutions for fine-tuning large-scale language models using reinforcement learning, compatible with models such as GPT-NeoX and FLAN-T5, up to 20 billion parameters. It employs Hugging Face's Accelerate and NVIDIA's NeMo for efficient distributed training, incorporating innovative algorithms like Proximal Policy Optimization and Implicit Language Q-Learning. Comprehensive documentation and practical examples facilitate effective training using reward mechanisms and support human-in-the-loop projects, ensuring scalable and optimized reinforcement learning outcomes.
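At the core of PPO is a clipped surrogate objective that keeps each policy update inside a trust region. A scalar toy version of that objective (trlX applies it per token across batches; the names here are illustrative, not trlX's API):

```python
# Toy illustration of PPO's clipped surrogate loss, the policy-gradient
# step at the heart of RLHF fine-tuning.

def ppo_clip_loss(ratio: float, advantage: float, eps: float = 0.2) -> float:
    """Negative clipped surrogate; ratio = pi_new(a|s) / pi_old(a|s)."""
    clipped = max(min(ratio, 1 + eps), 1 - eps)
    return -min(ratio * advantage, clipped * advantage)

# A large ratio gains nothing beyond the clip boundary when advantage > 0:
print(ppo_clip_loss(1.5, 1.0))  # -1.2 (clipped at 1 + eps)
print(ppo_clip_loss(1.1, 1.0))  # -1.1 (within the trust region)
```

The clip prevents a single reward signal from dragging the policy far from the reference model in one step.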
## ludwig
Ludwig provides a low-code environment to create tailored AI models such as LLMs and neural networks with ease. The framework uses a declarative YAML-based configuration, supporting features like multi-task and multi-modality learning. Designed for scalable efficiency, it includes tools like automatic batch size selection and distributed training options like DDP and DeepSpeed. With hyperparameter tuning and model explainability, users have detailed control, along with a modular and extensible structure for different model architectures. Ready for production, Ludwig integrates Docker, supports Kubernetes with Ray, and offers model exports to Torchscript and Triton.
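Ludwig's declarative style can be illustrated with a minimal config sketch; the feature names and dataset below are hypothetical:

```yaml
# Minimal Ludwig config: declare inputs and outputs, let defaults do the rest.
input_features:
  - name: review_text
    type: text
output_features:
  - name: sentiment
    type: category
trainer:
  epochs: 5
```

Training then reduces to a single command such as `ludwig train --config config.yaml --dataset reviews.csv`; encoders, preprocessing, and training options can all be overridden in the same file.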
## Play-with-LLMs
Explore effective methods for training and evaluating large language models through practical examples involving RAG, Agent, and Chain applications. Understand the use of Mixtral-8x7B and Llama3-8B models with techniques such as CoT and ReAct agents, transformers, and adaptations for specific languages such as Chinese. The material offers comprehensive insights into pretraining, fine-tuning, and RLHF, supported by practical case studies. Ideal for those interested in model quantization and deployment on platforms such as Huggingface.
## Taiwan-LLM
The Llama-3-Taiwan-70B is a 70-billion-parameter model optimized for Traditional Mandarin and English NLP tasks. Its training, supported by NVIDIA's advanced systems, spans diverse domains such as law, medicine, and electronics. Built on the Llama-3 architecture, the model excels in language comprehension, generation, and multi-turn conversation. With backing from partners such as NVIDIA and Chang Gung Memorial Hospital, it is a robust choice for multilingual NLP applications. An online demo and extensive training materials are available.
## Multimodal-GPT
Discover the innovative integration of visual and language inputs in chatbot development with the OpenFlamingo framework. This approach boosts chatbot efficiency by utilizing a range of visual instruction datasets and language data. Features include efficient tuning via LoRA and support for various data types. Access technical details and explore applications in fields like visual reasoning and dialogue systems. Engage with a community focused on the future of AI and multi-modal technology.
## chatgpt-finetune-ui
chatgpt-finetune-ui provides a Python-based web UI for fine-tuning GPT-3.5-turbo, streamlining the adjustment process. Installation is straightforward via the OpenAI and Streamlit packages, and the app launches with a single Streamlit command, with flexible server options for deployment. It suits developers looking for an intuitive platform to refine models; an experimental demo offers a practical view of its functionality.
## DeSRA
DeSRA provides methods to detect and remove artifacts from GAN-based super-resolution models, using minimal image data for fine-tuning. It facilitates real-world super-resolution by releasing datasets, detection code, and models such as Real-ESRGAN, LDL, and SwinIR. Built on SegFormer and Python, it evaluates artifact detection with IoU, precision, and recall metrics. Resources include pre-trained models and download options.
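The evaluation metrics mentioned above are standard pixel-level mask comparisons. A minimal, framework-free sketch (flat 0/1 lists stand in for predicted and ground-truth artifact masks):

```python
# Pixel-level artifact-detection metrics: IoU, precision, and recall
# between a predicted binary mask and the ground-truth artifact mask.

def mask_metrics(pred: list[int], gt: list[int]) -> dict[str, float]:
    tp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 1)
    fp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 0)
    fn = sum(1 for p, g in zip(pred, gt) if p == 0 and g == 1)
    return {
        "iou": tp / (tp + fp + fn),
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
    }

print(mask_metrics([1, 1, 0, 0], [1, 0, 1, 0]))
# iou = 1/3, precision = 0.5, recall = 0.5
```

In practice the masks come from comparing the SR output against a reference, but the scoring reduces to exactly these counts.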
## Multi-LLM-Agent
α-UMi introduces an innovative, open-source method for tool learning by allowing small LLMs to effectively collaborate, surpassing the performance of larger closed-source LLMs. The system categorizes capabilities into planner, caller, and summarizer roles, each enhancing tool interaction and user response formation. With adaptable prompt design, the Multi-LLM agent employs a distinctive two-stage Global-to-Local Progressive Fine-tuning (GLPFT) for enhanced training. Renowned for its operational flexibility and advanced collaboration, α-UMi provides processed data for easy adoption, showcasing outstanding performance in both static and real-time evaluations.
## metavoice-src
MetaVoice-1B is a robust 1.2 billion parameter model for text-to-speech, emphasizing emotional speech rhythm and tone. It features zero-shot voice cloning for American and British accents and supports cross-lingual cloning with minimal data through fine-tuning. The model is optimized for swift inference and can be deployed on both local and cloud platforms. It is accessible via various interfaces including a web UI, Colab demo, and Hugging Face, and is available under the Apache 2.0 license for wide-reaching use without restrictions.
## ML-Bench
This framework evaluates large language models and machine learning agents using repository-level code, featuring ML-LLM-Bench and ML-Agent-Bench. Key functionalities include environment setup scripts, data preparation tools, model fine-tuning recipes, and API calling guides. It supports assessing open-source models, aiding in training and testing dataset preparation, and offering a Docker environment for streamlined operation.
## mlx-vlm
MLX-VLM provides tools to run inference and fine-tune vision-language models on Apple silicon Macs. It supports efficient interaction through a command-line interface and a Gradio chat UI, and is compatible with models like Idefics 2 and Phi3-Vision. With features such as multi-image chat and model adaptation via LoRA and QLoRA, MLX-VLM enables comprehensive image analysis. Installation is straightforward via pip.
## pytorch-openai-transformer-lm
This PyTorch project replicates OpenAI's finetuned transformer language model, adhering to the original TensorFlow setup. It efficiently applies pre-trained weights via a modified Adam optimizer, enhancing NLP tasks with fixed weight decay and scheduled learning rates. Users can generate hidden states and build a full language model or a classifier using LMHead and ClfHead. Fine-tuning on tasks like ROCStories delivers strong accuracy, highlighting its utility in natural language understanding.
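The "fixed weight decay" refers to decoupling the decay term from the Adam update and applying it directly to the weights. A scalar sketch of one such step (hyperparameters are illustrative, not the project's defaults):

```python
import math

# One Adam step with decoupled ("fixed") weight decay: the decay is
# subtracted from the weight directly, AdamW-style, rather than being
# folded into the gradient.

def adamw_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=0.01):
    m = b1 * m + (1 - b1) * g          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g      # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)          # bias correction at step t
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps) - lr * wd * w
    return w, m, v

w, m, v = adamw_step(w=1.0, g=0.5, m=0.0, v=0.0, t=1)
```

Coupling decay into the gradient instead would distort the adaptive scaling, which is why the decoupled form is preferred for transformer fine-tuning.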
## mae_st
The PyTorch implementation of 'Masked Autoencoders As Spatiotemporal Learners' enhances video processing with pre-trained checkpoints for Kinetics series, interactive visualization demos, and fine-tuning options. Built on the modified MAE repository for PyTorch 1.8.1+, it allows examination of outputs with varied mask rates and includes comprehensive pre-training guidelines, making it a valuable resource for researchers and developers in video analysis.
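The masking itself is simple: uniformly sample a small subset of space-time patches to keep, and train the model to reconstruct the rest. A toy sketch (the patch count assumes an 8x14x14 token grid, purely for illustration):

```python
import random

# Toy version of the random masking at the heart of masked autoencoders:
# at a 90% mask rate, only 10% of space-time patches are visible to the
# encoder; the decoder reconstructs the masked remainder.

def random_masking(num_patches: int, mask_ratio: float) -> tuple[list[int], list[int]]:
    """Return (kept patch indices, masked patch indices)."""
    num_keep = int(num_patches * (1 - mask_ratio))
    shuffled = random.sample(range(num_patches), num_patches)
    return sorted(shuffled[:num_keep]), sorted(shuffled[num_keep:])

kept, masked = random_masking(num_patches=1568, mask_ratio=0.9)
print(len(kept), len(masked))  # 156 1412
```

High mask rates are what make video pre-training tractable: the encoder only ever sees the small kept subset.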
## Awesome-Tool-Learning
Discover a comprehensive collection of tool learning and AI resources, featuring surveys, fine-tuning methods, and in-context learning improvements. Uncover practical applications like Auto-GPT and LangChain. Regular updates make it a valuable resource for developers and researchers focused on advancing AI tool integration.