A repo for finetuning MPT using LoRA. It is currently configured to work with the Stanford Alpaca dataset, but can easily be adapted to use another dataset.
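Since the repo targets the Alpaca dataset, the core data step is rendering each record into a single training string. The sketch below uses the standard Alpaca prompt templates (from the Stanford Alpaca release); the `format_example` helper and its exact wiring into this repo's training script are illustrative assumptions, not this repo's verified code.

```python
# Standard Alpaca prompt templates (two variants: with and without an
# `input` field). NOTE: `format_example` is a hypothetical helper for
# illustration; the repo's actual preprocessing may differ.

PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)

PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def format_example(example: dict) -> str:
    """Render one Alpaca record (instruction/input/output) into a single
    training string: the prompt followed by the target response."""
    if example.get("input"):
        prompt = PROMPT_WITH_INPUT.format(
            instruction=example["instruction"], input=example["input"]
        )
    else:
        prompt = PROMPT_NO_INPUT.format(instruction=example["instruction"])
    return prompt + example["output"]
```

Adapting the repo to a different instruction dataset largely reduces to mapping that dataset's fields onto `instruction`, `input`, and `output` before this formatting step.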
Stars: 18 · Forks: 7 · Watchers: 18 · Open Issues: 8
25 commits · Latest commit (b280ee5): Now allows for the dataset format of MPT-7B-Instruct's finetuning dataset, dolly-hhrlhf.
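The latest commit adds support for dolly-hhrlhf, the dataset used to finetune MPT-7B-Instruct. A minimal adapter sketch is below, assuming that dataset exposes `prompt` and `response` columns; the `dolly_to_alpaca` helper is hypothetical and the column names should be verified against the actual dataset before use.

```python
# Hypothetical adapter: map one dolly-hhrlhf record onto the Alpaca-style
# instruction/input/output schema (dolly-hhrlhf has no separate input
# field, so `input` is left empty). Column names `prompt`/`response`
# are an assumption to check against the real dataset.

def dolly_to_alpaca(example: dict) -> dict:
    """Convert a dolly-hhrlhf record into Alpaca's record schema."""
    return {
        "instruction": example["prompt"].strip(),
        "input": "",
        "output": example["response"].strip(),
    }
```

With an adapter like this applied per record (e.g. via a dataset `map`), the rest of the Alpaca-oriented pipeline can stay unchanged.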