ESFT: Expert-Specialized Fine-Tuning
ESFT (Expert-Specialized Fine-Tuning) improves the performance and efficiency of fine-tuning Large Language Models (LLMs) built on the Mixture-of-Experts (MoE) architecture. By tuning only the experts most relevant to a downstream task and keeping the rest of the model frozen, it cuts training compute and storage requirements while adapting the model well to diverse datasets, making it a good fit for teams that need efficient, specialized LLM deployment. The work was accepted at EMNLP 2024, and the open-source training code lets you apply ESFT to your own models and data for effective customization at reduced computational cost.
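The core idea can be illustrated with a minimal sketch: estimate each expert's relevance on task data (here via average gate probabilities, one possible metric) and enable gradients only for the top-scoring experts, freezing everything else. The `ToyMoELayer`, the relevance metric, and the `keep_ratio` threshold below are simplified, hypothetical stand-ins, not the repository's actual implementation.

```python
# Sketch of the ESFT idea: score experts on task data, then fine-tune only the
# most relevant ones. The toy layer and relevance metric are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyMoELayer(nn.Module):
    """A toy MoE feed-forward layer: a router plus a list of expert MLPs."""

    def __init__(self, d_model=32, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):
        # x: (tokens, d_model); gate: (tokens, n_experts)
        gate = F.softmax(self.router(x), dim=-1)
        weights, idx = gate.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * self.experts[e](x[mask])
        return out, gate


def expert_relevance(layer, task_tokens):
    """Average gate probability per expert over task tokens
    (one possible relevance score; other metrics are possible)."""
    with torch.no_grad():
        _, gate = layer(task_tokens)
    return gate.mean(dim=0)  # shape: (n_experts,)


def freeze_all_but_relevant(layer, relevance, keep_ratio=0.25):
    """Freeze the router and all experts except the top-scoring fraction."""
    n_keep = max(1, int(keep_ratio * len(layer.experts)))
    keep = set(relevance.topk(n_keep).indices.tolist())
    for p in layer.parameters():
        p.requires_grad = False
    for e in keep:
        for p in layer.experts[e].parameters():
            p.requires_grad = True
    return keep


if __name__ == "__main__":
    torch.manual_seed(0)
    layer = ToyMoELayer()
    task_tokens = torch.randn(256, 32)  # stand-in for task activations
    rel = expert_relevance(layer, task_tokens)
    kept = freeze_all_but_relevant(layer, rel)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"kept experts: {sorted(kept)}  trainable params: {trainable}/{total}")
```

Because only a small fraction of expert parameters receives gradients, the optimizer state and the task-specific checkpoint shrink accordingly, which is where the reduced compute and storage demands come from.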