LongLoRA
LongLoRA extends the context window of pretrained language models through efficient fine-tuning, combining shifted sparse attention (S²-Attn) with LoRA while remaining compatible with Flash-Attention. It supports models from 7B to 70B parameters and context lengths up to 100k tokens, ships with the open-sourced LongAlpaca-12k instruction-following dataset, and further reduces memory usage through optional QLoRA fine-tuning. This expands a model's capacity for long-document tasks while keeping compute and memory requirements modest.
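The core trick behind S²-Attn is that during fine-tuning tokens attend only within fixed-size groups, and in half of the attention heads the sequence is rolled by half a group so information still flows across group boundaries. The sketch below illustrates that pattern in PyTorch; the function name, tensor shapes, and `group_size` parameter are illustrative assumptions rather than LongLoRA's actual API, and causal masking at the shifted boundary is omitted for brevity.

```python
# Minimal sketch of shifted sparse attention (S2-Attn) as used in LongLoRA.
# Illustrative only; not the repository's actual implementation or API.
import torch
import torch.nn.functional as F

def shifted_sparse_attention(q, k, v, group_size):
    """q, k, v: (batch, num_heads, seq_len, head_dim); seq_len % group_size == 0."""
    bsz, n_heads, seq_len, head_dim = q.shape
    half = n_heads // 2

    def shift(x, amount):
        # Roll tokens in the second half of the heads so their attention
        # groups straddle the group boundaries of the first half.
        x = x.clone()
        x[:, half:] = torch.roll(x[:, half:], shifts=amount, dims=2)
        return x

    q, k, v = (shift(t, -(group_size // 2)) for t in (q, k, v))

    def to_groups(x):
        # (batch * num_groups, num_heads, group_size, head_dim): attention is
        # then computed independently within each group.
        return (x.reshape(bsz, n_heads, seq_len // group_size, group_size, head_dim)
                 .transpose(1, 2)
                 .reshape(-1, n_heads, group_size, head_dim))

    out = F.scaled_dot_product_attention(*(to_groups(t) for t in (q, k, v)))

    # Undo the grouping, then undo the half-head shift.
    out = (out.reshape(bsz, seq_len // group_size, n_heads, group_size, head_dim)
              .transpose(1, 2)
              .reshape(bsz, n_heads, seq_len, head_dim))
    return shift(out, group_size // 2)

q = k = v = torch.randn(1, 8, 1024, 64)
out = shifted_sparse_attention(q, k, v, group_size=256)  # (1, 8, 1024, 64)
```

Because the shift is just a `torch.roll` plus a reshape, it adds no parameters and no meaningful overhead, which is why the pattern stays compatible with optimized kernels such as Flash-Attention; at inference time the model can fall back to standard full attention.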