finetuned-qlora-falcon7b-medical
This project fine-tunes the Falcon-7B language model with QLoRA on a specialized mental health dataset derived from FAQs and healthcare blogs, with all patient-doctor dialogues anonymized so they stay realistic without exposing personal data. By loading a sharded model checkpoint, fine-tuning runs efficiently on both NVIDIA A100 and T4 GPUs, reaching a training loss of 0.031 after 320 steps. The resulting model powers a chatbot that offers non-judgmental mental health support as a complement to, not a replacement for, professional care. A Gradio demo is available for further exploration, bringing recent advances in AI to mental health support with greater empathy and understanding.
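A minimal sketch of the QLoRA setup described above, using the Hugging Face `transformers` and `peft` libraries. The LoRA hyperparameters (`r`, `alpha`, dropout, target modules) are illustrative assumptions, not the project's exact values; only the 320-step budget and the Falcon-7B base model come from the summary. Imports are deferred inside the builder so the sketch can be read without a GPU or the heavyweight libraries installed.

```python
# Assumed hyperparameters for illustration; only max_steps (320) and the
# base model are stated in the README above.
QLORA_CONFIG = {
    "base_model": "tiiuae/falcon-7b",  # sharded Falcon-7B checkpoint
    "lora_r": 16,
    "lora_alpha": 32,
    "lora_dropout": 0.05,
    "max_steps": 320,  # training loss reached 0.031 here per the README
}


def build_peft_model(cfg=QLORA_CONFIG):
    """Load Falcon-7B in 4-bit NF4 precision and attach LoRA adapters."""
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,                      # QLoRA: 4-bit base weights
        bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=torch.bfloat16,  # use float16 on T4 GPUs
    )
    model = AutoModelForCausalLM.from_pretrained(
        cfg["base_model"], quantization_config=bnb_config, device_map="auto"
    )
    model = prepare_model_for_kbit_training(model)

    lora_config = LoraConfig(
        r=cfg["lora_r"],
        lora_alpha=cfg["lora_alpha"],
        lora_dropout=cfg["lora_dropout"],
        target_modules=["query_key_value"],  # Falcon's fused attention projection
        task_type="CAUSAL_LM",
    )
    # Only the small LoRA adapter matrices are trained; the 4-bit base is frozen.
    return get_peft_model(model, lora_config)
```

Because the base weights stay frozen in 4-bit precision and only the adapter matrices are optimized, this is what makes tuning a 7B-parameter model feasible on a single T4.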