# LLaMA

## xTuring
xTuring provides an intuitive interface for fine-tuning open-source LLMs such as Mistral, LLaMA, and GPT-J. It supports customized model management while preserving data privacy, handles data ingestion and scalable GPU usage, and employs memory-efficient methods such as INT4 quantization and LoRA to lower hardware costs. Users can explore different tuning techniques, evaluate models against defined metrics, and take advantage of newer library features such as LLaMA 2 integration and CPU inference.
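A minimal fine-tuning loop, following the pattern shown in xTuring's documentation (the dataset path is illustrative, and the `llama_lora_int4` model key assumes the INT4 + LoRA variant is available in your installed version):

```python
from xturing.datasets import InstructionDataset
from xturing.models import BaseModel

# Load an instruction dataset from a local directory (path is illustrative).
dataset = InstructionDataset("./alpaca_data")

# "llama_lora_int4" pairs LoRA adapters with INT4 quantization to cut
# memory use; plain "llama_lora" is the non-quantized alternative.
model = BaseModel.create("llama_lora_int4")

# Fine-tune on the dataset, then generate from the tuned model.
model.finetune(dataset=dataset)
output = model.generate(texts=["What does INT4 quantization trade off?"])
print(output)
```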
## LLMCompiler
LLMCompiler speeds up LLM applications by orchestrating parallel function calls: it identifies which tasks can execute concurrently, lowering latency and cost while improving accuracy. It works with both open-source and proprietary models such as LLaMA and GPT, integrates with frameworks like LangGraph, and supports endpoints including Azure and Friendli, making it straightforward to build custom benchmarks for complex problem-solving.
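The underlying idea can be illustrated with a plain `asyncio` sketch (a conceptual illustration, not LLMCompiler's actual API): calls that do not depend on each other's results are dispatched concurrently rather than one at a time.

```python
import asyncio

# Stand-ins for tools a planner might schedule; both are illustrative.
async def search(query: str) -> str:
    await asyncio.sleep(1)  # simulate network latency
    return f"results for {query!r}"

async def calculate(expr: str) -> str:
    await asyncio.sleep(1)
    return f"{expr} = 42"

async def main() -> None:
    # Sequential execution would take ~2s; running the two independent
    # calls concurrently finishes in ~1s. Identifying such independent
    # calls and batching them is the essence of parallel function calling.
    results = await asyncio.gather(
        search("LLM serving latency"),
        calculate("6 * 7"),
    )
    print(results)

asyncio.run(main())
```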
## OpenGPTAndBeyond
OpenGPTAndBeyond documents the development of open-source large language models aiming to match and surpass ChatGPT-level performance. It covers base and specialized models, training methodologies, inference techniques, and multi-modal integration, and keeps track of evaluation benchmarks, multilingual capabilities, and advances in efficient tuning and low-cost inference, along with tooling, external knowledge integration, and safety protocols.
## GPT4Tools
GPT4Tools enables interactive image understanding and editing through self-instructed language models based on Vicuna (LLaMA). Trained on a 71K instruction-following dataset, it teaches the model to invoke visual foundation models autonomously, supports user-defined tool teaching via LoRA fine-tuning, and allows conversational interaction with images. Accepted at NeurIPS 2023, the project offers demos and resources to explore.
## dalai
dalai runs LLaMA and Alpaca models on Linux, macOS, and Windows with modest hardware. It ships a web app and an API, and supports model sizes from 7B to 65B.
## Linly
Linly advances Chinese language modeling by extending LLaMA and Falcon foundations with bilingual data. It introduces Linly-ChatFlow, fine-tuned through comprehensive instruction-tuning protocols, and open-sources robust Linly-OpenLLaMA models (3B, 7B, 13B) for diverse applications, supporting full-parameter training and various CUDA deployment strategies.
## Platypus
Platypus enhances transformer models such as LLaMA and LLaMA-2 using LoRA and PEFT, with a focus on efficient, affordable fine-tuning. Fine-tuned models are published on HuggingFace for seamless integration, and recent updates add improved data processing plus scripts for easy model setup and tuning. The project also documents data refinement techniques that keep training data accurate and free of duplicates, with detailed CLI guidelines for local deployment.
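Because the fine-tuned checkpoints live on HuggingFace, loading one requires only standard `transformers` calls; the model id below is one of the published Platypus2 checkpoints (verify the exact id and size on the hub before use):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# One of the published checkpoints; swap in another size if needed.
model_id = "garage-bAInd/Platypus2-13B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain LoRA fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```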
## MGM
MGM is a dual-encoder framework for large language models from 2B to 34B parameters, specialized in image comprehension and generation. Built on LLaVA, the open-source project provides detailed resources for training, setup, and evaluation, with demos on Hugging Face Spaces, support for datasets such as COCO and GQA, and ongoing model releases and performance evaluations.
## ChatGenTitle
ChatGenTitle generates paper titles using LLaMA models fine-tuned on large-scale arXiv metadata. The project offers open-source models, online trials, and straightforward deployment across AI research fields, and its HuggingFace integration keeps the models easy to access and apply in scientific contexts.
## llama-classification
llama-classification is a text classification framework for LLaMA covering direct, channel, and pure-generation approaches. The repository documents NVIDIA GPU setup for processing the ag_news dataset, focusing on conditional-probability scoring and calibration methods to improve prediction accuracy. Issues and pull requests are welcome, making it a practical starting point for researchers and developers working on LLaMA-based classification.
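The direct approach scores each candidate label by the conditional log-probability the model assigns to the label tokens given the input. A minimal sketch of that idea (using GPT-2 as a stand-in so the example is self-contained; the repository itself targets LLaMA weights, and the prompt format here is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 stands in for LLaMA so the sketch runs anywhere.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

LABELS = ["World", "Sports", "Business", "Sci/Tech"]  # ag_news classes

def label_logprob(text: str, label: str) -> float:
    """Sum of log-probabilities of the label tokens given the prompt."""
    prompt = f"Article: {text}\nCategory:"
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    label_ids = tokenizer(" " + label, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, label_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # logits[t] predicts token t+1, so the label tokens are predicted
    # by positions prompt_len-1 .. end-1.
    logprobs = logits.log_softmax(-1)[0, prompt_ids.size(1) - 1 : -1]
    return logprobs.gather(1, label_ids[0].unsqueeze(1)).sum().item()

text = "Stocks rallied after the central bank held rates steady."
print(max(LABELS, key=lambda label: label_logprob(text, label)))
```

The channel approach flips the conditional, scoring the probability of the text given the label instead, which tends to be more robust to label imbalance.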
## cog-llama-template
This guide walks through serving LLaMA and LLaMA2 models with Cog: setting up model weights, converting them to transformers-compatible formats, and tensorizing them for faster loading, then deploying to Replicate for cloud inference. The template primarily supports the 7B, 13B, and 70B models, which are intended for research use only. It relies on NVIDIA GPUs and Docker for efficient processing, optionally integrates Exllama, and notes the non-commercial licensing terms.
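In Cog, the serving contract is a `predict.py` exposing a `Predictor` class; a skeletal version (with model loading stubbed out, since the real template deserializes tensorized LLaMA weights in `setup`) looks like this:

```python
from cog import BasePredictor, Input

class Predictor(BasePredictor):
    def setup(self) -> None:
        # The real template loads tensorized LLaMA weights here,
        # once per container start; stubbed for illustration.
        self.ready = True

    def predict(
        self,
        prompt: str = Input(description="Text prompt for the model"),
        max_tokens: int = Input(description="Tokens to generate", default=128),
    ) -> str:
        # A real implementation would run generation; we echo instead.
        return f"(would generate {max_tokens} tokens for: {prompt})"
```

Running `cog predict -i prompt="hello"` builds the Docker image and exercises the predictor locally before any push to Replicate.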
## safe-rlhf
safe-rlhf is an open-source framework for language model training that emphasizes safety and alignment through Safe RLHF methods. It supports leading pre-trained models, extensive datasets, and customizable training pipelines; multi-scale safety metrics and thorough evaluation help researchers optimize models while reducing risk. Developed by the PKU-Alignment team at Peking University.
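Schematically, Safe RLHF decouples human preference into a reward model $R$ (helpfulness) and a cost model $C$ (harmfulness), then trains the policy $\pi_\theta$ as a constrained optimization solved via a Lagrangian (notation simplified from the project's paper):

```latex
\max_{\theta}\ \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot\mid x)}\big[R(y,x)\big]
\quad \text{s.t.} \quad
\mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot\mid x)}\big[C(y,x)\big] \le 0
\qquad \Longrightarrow \qquad
\min_{\lambda \ge 0}\ \max_{\theta}\
\mathbb{E}\big[R(y,x)\big] - \lambda\,\mathbb{E}\big[C(y,x)\big]
```

The multiplier $\lambda$ is updated alongside the policy, tightening the safety constraint whenever the expected cost turns positive.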
## llama-tokenizer-js
llama-tokenizer-js is a dependency-free JavaScript tokenizer with TypeScript support for counting tokens of LLaMA models in the browser or Node. Packaged as a single file and built on byte-pair encoding, it runs tokenization entirely client-side, matching LLaMA's tokenization precisely without the latency of a server round-trip, which makes it well suited to web applications.
## ppl.llm.serving
ppl.llm.serving provides a scalable solution for serving large language models over gRPC on the PPL.NN platform. It covers model export and configuration for x86_64 and arm64 systems with CUDA, and supports inference, benchmarking, and client-server interaction. The project targets Linux and requires GCC, CMake, and CUDA to build.
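Client-server interaction follows the usual gRPC pattern; the sketch below is hypothetical (the module, service, and message names are placeholders, since the actual definitions come from the proto files shipped with the repository):

```python
import grpc

# Hypothetical modules generated by protoc from the repo's proto files;
# the real names differ.
import llm_pb2
import llm_pb2_grpc

# Address and port are placeholders; use what the server config specifies.
channel = grpc.insecure_channel("localhost:23333")
stub = llm_pb2_grpc.LLMServiceStub(channel)

request = llm_pb2.GenerateRequest(prompt="Hello", max_tokens=64)
# Assuming a server-streaming RPC that yields tokens as they are produced.
for response in stub.Generate(request):
    print(response.text, end="", flush=True)
```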