Found 31 repositories (showing 30)
jackaduma
A full pipeline to finetune Vicuna LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the Vicuna architecture. Basically ChatGPT but with Vicuna
jackaduma
A full pipeline to finetune ChatGLM LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the ChatGLM architecture. Basically ChatGPT but with ChatGLM
Baijiong-Lin
PyTorch reimplementation of LoRA, with support for nn.MultiheadAttention in OpenCLIP
jackaduma
A full pipeline to finetune Alpaca LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the Alpaca architecture. Basically ChatGPT but with Alpaca
s-du
Realtime diffusion (LCM-LoRA) from screen capture or webcam, for architecture, using torch and Pyside6
LoRA-Safe TorchCompile node for ComfyUI
loretoparisi
Export alpaca-lora to Torch checkpoint and HuggingFace
GURPREETKAURJETHRA
Efficiently fine-tune Llama 3 with PyTorch FSDP and Q-Lora
rjojjr
A simple, convenient CLI wrapper for supervised fine-tuning (SFT) of LLMs with JSONL (or plain text) data, using LoRA, Transformers, and Torch.
dagrende
A torch controlled by a LoRa radio module
Yuan-ManX
PyTorch implementation of LoRA.
Onkarsus13
A standard LoRA implementation for Linear, Conv1D, Conv2D, and Conv3D layers in PyTorch
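For reference, the low-rank update that implementations like the ones above apply to a linear layer can be sketched in plain Python. This is a minimal illustration of the standard LoRA formulation (y = Wx + (alpha/r)·BAx, with W frozen), not code from any listed repository; the matrix shapes and helper names are this sketch's own:

```python
# Minimal LoRA-style linear forward pass in pure Python (illustrative sketch).
# y = W x + (alpha / r) * B (A x); W is the frozen pretrained weight,
# A (r x d_in) and B (d_out x r) are the only trainable matrices.

def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def lora_linear(W, A, B, x, alpha=16, r=2):
    base = matvec(W, x)              # frozen pretrained path
    delta = matvec(B, matvec(A, x))  # low-rank update: x -> A x -> B A x
    scale = alpha / r                # standard LoRA scaling factor
    return [b + scale * d for b, d in zip(base, delta)]
```

Because B is typically initialized to zero, the adapted layer starts out exactly equal to the frozen base layer, and training only moves the low-rank delta.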
Yuan-ManX
PyTorch implementation of MusicGen LoRA.
shangshang-wang
LoRA-based RL dev in Torchforge
shangshang-wang
LoRA-based RL dev in Torchtitan
anshulsc
No description available
lusknchars
No description available
AkshatPal2007
No description available
parthsalke
This repository contains a Jupyter Notebook implementing **Low-Rank Adaptation (LoRA)** for fine-tuning large language models efficiently. LoRA reduces the number of trainable parameters, making adaptation feasible even on resource-constrained hardware.
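The parameter reduction described above is easy to quantify. A back-of-the-envelope comparison for a single hypothetical 4096x4096 weight matrix at rank r = 8 (sizes chosen for illustration, not taken from the repository):

```python
# Trainable-parameter comparison for one weight matrix (hypothetical sizes).
d_out, d_in, r = 4096, 4096, 8

full_params = d_out * d_in        # full fine-tuning updates all of W
lora_params = r * (d_in + d_out)  # LoRA trains A (r x d_in) and B (d_out x r)

print(full_params)                 # 16777216
print(lora_params)                 # 65536
print(full_params // lora_params)  # 256x fewer trainable parameters
```

This per-matrix ratio is why LoRA fine-tuning fits on resource-constrained hardware where full fine-tuning does not.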
burakmemisss
A LoRA implementation for tuning a pre-trained model.
HarshTomar1234
Pure PyTorch implementations of LoRA and QLoRA for memory-efficient fine-tuning of large language models and vision transformers.
arjuntheprogrammer
Full pipeline to finetune Alpaca LLM with LoRA and RLHF on consumer hardware.
An end-to-end conversational AI that learns from a user’s WhatsApp chat data to generate responses mimicking their talk/text style. Built a custom BPE tokenizer from scratch, trained a GPT-like transformer model for next-token prediction, and fine-tuned a pretrained GPT-2 using both full fine-tuning and LoRA-based parameter-efficient methods.
weihuakuang
Replication of the official LoRA in Jittor, aligned with PyTorch. Compatible with recent CUDA, torch, and Jittor versions.
This repository contains a PyTorch implementation of a Convolutional Neural Network (CNN) for classifying the MNIST dataset. The project explores different fine-tuning techniques, including LoRA (Low-Rank Adaptation), DoRA (Weight-Decomposed Low-Rank Adaptation), and QLoRA (Quantized Low-Rank Adaptation), to improve model performance and efficiency.
GeoffreyWang1117
30+ layer contribution metrics from 7 theoretical categories for PyTorch model compression. Bridges for Torch-Pruning and PEFT/LoRA.
The-CarL
A weekend-sized GPT implementation in pure PyTorch — tokenizer, multi-head attention, training, generation, LoRA, and ablation studies. 12 modules, ~5K lines, zero dependencies beyond torch.
yash-solankii
A transformer fine-tuning toolkit demonstrating both Supervised Fine-Tuning (SFT) and Parameter-Efficient Fine-Tuning (PEFT) (using LoRA). The project includes notebooks for training, applying PEFT methods, and testing the models via Hugging Face, Torch, and datasets integrations.