Found 6 repositories (showing 6). All center on LoRA fine-tuning of MPT; a minimal sketch of the shared recipe follows the list.
- leehanchung: Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA
- iwalton3: Patch for MPT-7B that allows using and training a LoRA
- mikeybellissimo: A repo for finetuning MPT using LoRA. It is currently configured to work with the Alpaca dataset from Stanford but can easily be adapted to use another.
- interactivetech: Testing MPT-7B finetuning using LoRA
- leehanchung: (no description available)
- interactivetech: Simple example of training an MPT-30B model (single GPU and DDP) using LoRA and Int8 training