This repository demonstrates efficient fine-tuning of large language models such as Llama 2 and DeepSeek R1 using QLoRA with Unsloth. It enables memory-efficient training on consumer GPUs via 4-bit quantization and LoRA adapters, and covers instruction tuning, chain-of-thought (CoT) medical reasoning, and deployment to the Hugging Face Hub.
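To see why LoRA adapters make consumer-GPU training feasible, here is a back-of-the-envelope sketch (not code from this repository) of the trainable-parameter savings: instead of updating a full weight matrix W of shape (d_out, d_in), LoRA trains two low-rank factors of rank r, shrinking the trainable count from d_out*d_in to r*(d_in + d_out). The 4096 dimension below is illustrative, chosen to match a Llama-2-7B-sized attention projection.

```python
# Back-of-the-envelope: why LoRA adapters are memory-efficient.
# Full fine-tuning updates every entry of a (d_out, d_in) weight matrix;
# LoRA instead trains factors A (r x d_in) and B (d_out x r), so the
# trainable count drops from d_out*d_in to r*(d_in + d_out).

def full_trainable_params(d_in: int, d_out: int) -> int:
    """Trainable parameters when fine-tuning the full weight matrix."""
    return d_in * d_out

def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters for one LoRA-adapted weight matrix."""
    return rank * (d_in + d_out)

# Example: one 4096x4096 projection, LoRA rank 16.
d = 4096
full = full_trainable_params(d, d)      # 16,777,216 params
lora = lora_trainable_params(d, d, 16)  # 131,072 params
print(f"full={full} lora={lora} ratio={full // lora}x")  # ~128x fewer
```

Combined with 4-bit quantization of the frozen base weights (the "Q" in QLoRA), this is what lets a 7B-parameter model fit on a single consumer GPU.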