Found 66 repositories (showing 30)
tatsu-lab
Code and documentation to train Stanford's Alpaca models, and generate the data.
gururise
Alpaca dataset from Stanford, cleaned and curated
jankais3r
Run LLaMA (and Stanford-Alpaca) inference on Apple Silicon GPUs.
declare-lab
This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as Flan-T5.
dropreg
The aim of this repository is to use LLaMA to reproduce and enhance Stanford Alpaca.
official-elinas
Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models
clcarwin
Train LLaMA with LoRA on a single RTX 4090 and merge the LoRA weights to work as Stanford Alpaca.
lxe
LLaMA tuning with the Stanford Alpaca dataset using DeepSpeed and Transformers.
DominguesM
Finetuning Stanford Alpaca (LLaMA) with Brazilian Portuguese data
l294265421
Multi-turn Alpaca is an extension of Stanford Alpaca that supports multi-turn dialogue (multi-turn dialogue version of Alpaca).
Emplocity
The OWCA dataset is a Polish translation of the Stanford instruction dataset for fine-tuning the Alpaca model.
mikeybellissimo
A repo for finetuning MPT using LoRA. It is currently configured to work with the Alpaca dataset from Stanford but can easily be adapted to use another.
bacoco
https://github.com/tatsu-lab/stanford_alpaca
snowolf
This is a sample showing how to run stanford_alpaca on Amazon SageMaker, for demo use only.
vaguenebula
An experiment to see if ChatGPT can improve the output of the Stanford Alpaca dataset.
Nelsonlin0321
No description available
ryan-air
This project provides code and a Colaboratory notebook that facilitate fine-tuning of a 3B-parameter Alpaca model originally developed at Stanford University. The model was adapted with LoRA, using Hugging Face's PEFT library, to run with fewer computational resources and training parameters.
Rafaelmdcarneiro
Train LLaMA with LoRA on an NVIDIA RTX 4090 and merge the LoRA weights to work as Stanford Alpaca.
ryan-air
This project provides code and a Colaboratory notebook that facilitate fine-tuning of a 350M-parameter Alpaca model originally developed at Stanford University. The model was adapted with LoRA, using Hugging Face's PEFT library, to run with fewer computational resources and training parameters.
RogerDeng
No description available
RaccoonOnion
Fine-tuning the Llama 2 model on Stanford Alpaca datasets.
yuki-2025
Reproduce renowned LLM papers, e.g., Stanford Alpaca, LLaVA...
yu-jeffy
Stanford Alpaca LLM Training Data, modified with prompts and training data from educational sources
hon9kon9ize
Generate a Cantonese instruction dataset with Gemini Pro, using Stanford's Alpaca prompts, for fine-tuning LLMs.
This repo contains the code and data used to fine-tune Microsoft's phi-1_5 model. The fine-tune adds roughly 1M tokens. The datasets used are a combination of 200k Stanford Alpaca examples, 27k Python code-exercise examples, and the GSM8K dataset.
kesperinc
No description available
447428054
No description available
minpeter
No description available
niruhsa
Custom Stanford LLaMA/Alpaca training & fine-tuning.
tosiyuki
Generate Stanford Alpaca-like data using Gemma.
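
Several of the entries above (e.g., clcarwin, lxe, ryan-air, Rafaelmdcarneiro) describe the same workflow: fine-tuning a LLaMA-family model on Alpaca-style instruction data with LoRA via Hugging Face PEFT, then merging the adapter weights back into the base model. The following is a minimal sketch of that workflow, not any single repository's exact script; the base checkpoint name, dataset ID, hyperparameters, and output paths are illustrative assumptions.

    # Sketch: LoRA fine-tuning on Alpaca-style data with Transformers + PEFT,
    # then merging the adapter into the base model. All names/paths are assumptions.
    import torch
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                              TrainingArguments, DataCollatorForLanguageModeling)
    from peft import LoraConfig, PeftModel, get_peft_model

    BASE = "huggyllama/llama-7b"  # assumed base checkpoint

    tokenizer = AutoTokenizer.from_pretrained(BASE)
    tokenizer.pad_token = tokenizer.eos_token

    model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
    lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                          target_modules=["q_proj", "v_proj"],
                          task_type="CAUSAL_LM")
    model = get_peft_model(model, lora_cfg)

    # Alpaca-style records have instruction / input / output fields.
    data = load_dataset("tatsu-lab/alpaca", split="train")

    def to_tokens(ex):
        prompt = f"### Instruction:\n{ex['instruction']}\n\n"
        if ex.get("input"):
            prompt += f"### Input:\n{ex['input']}\n\n"
        prompt += f"### Response:\n{ex['output']}"
        return tokenizer(prompt, truncation=True, max_length=512)

    tokenized = data.map(to_tokens, remove_columns=data.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="alpaca-lora",
                               per_device_train_batch_size=4,
                               gradient_accumulation_steps=8,
                               num_train_epochs=3, learning_rate=2e-4,
                               fp16=True, logging_steps=10),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    model.save_pretrained("alpaca-lora")  # saves only the LoRA adapter

    # Merge the LoRA weights so the result behaves like a plain Alpaca checkpoint.
    base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
    merged = PeftModel.from_pretrained(base, "alpaca-lora").merge_and_unload()
    merged.save_pretrained("alpaca-lora-merged")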