Found 23 repositories (showing 23)
hkproj
LORA: Low-Rank Adaptation of Large Language Models implemented using PyTorch
YYZhang2025
Implements a Multi-Modality-LLM and fine-tunes the model using LoRA. Depends only on PyTorch, no other "fancy" libraries.
FakeJackJia
Built a modular pipeline for LLM alignment with SFT (Prompt-tuning, P-tuning, Prefix-tuning, and LoRA), Reward Modeling (RM), and PPO-based RLHF using PyTorch.
0xafraidoftime
Efficient fine-tuning of large language models using Unsloth, LoRA (Low-Rank Adaptation), and 4-bit quantization techniques with PyTorch compilation optimizations.
SiddharthUchil
LORA: Low-Rank Adaptation of Large Language Models implemented using PyTorch
Adam-Researchh
Fine-tune embedding models with LoRA on Apple Silicon using MLX. 6-8x faster than PyTorch.
Fine-tuning large models using Multi-GPU techniques including DDP, FSDP, Model Parallelism, DeepSpeed, and LoRA with PyTorch and Hugging Face Accelerate.
heytanix
A PyTorch-based Jupyter notebook for fine-tuning GPT-2 using LoRA (Low-Rank Adaptation) and full model training on the Reddit-TIFU dataset, with memory optimization techniques for efficient training.
HimadeepRagiri
🚀 An AI-powered code documentation tool using a LoRA fine-tuned GPT-2 model. Automatically generates docstrings for Python code. Includes Gradio web interface, efficient training pipeline, and programmatic access. Built with PyTorch and CodeSearchNet data.
ArjunJagdale
This project implements Low-Rank Adaptation (LoRA) manually in PyTorch, injecting it into a BERT model for sentiment classification on the **SST-2 dataset** (GLUE benchmark). It demonstrates parameter-efficient fine-tuning using only ~0.5% of BERT’s weights.
parthsalke
This repository contains a Jupyter Notebook implementing **Low-Rank Adaptation (LoRA)** for fine-tuning large language models efficiently. LoRA reduces the number of trainable parameters, making adaptation feasible even on resource-constrained hardware.
melanieyes
LORA: Low-Rank Adaptation of Large Language Models implemented using PyTorch
Donald8585
Fine-tuned PaliGemma vision-language model using LoRA for image captioning | PyTorch, PEFT, Transformers
neelThummar
LoRA fine-tuning of a 4-bit quantized Gemma-2B model to generate Yoda-style responses using a custom PyTorch data pipeline.
mokhtarian2020
This project fine-tunes the LLaMA 3.1 model using LoRA adapters on an Alpaca-style dataset. It leverages PyTorch, Transformers, and PEFT for efficient, GPU-accelerated instruction tuning.
This project implements LoRA (Low-Rank Adaptation) from scratch using only PyTorch — no external LoRA libraries. It fine-tunes GPT-2 (124M parameters) on Shakespeare's works, teaching the model to generate Shakespearean text by training only ~0.15% of the total parameters.
Built a generative AI pipeline using Python, PyTorch & Google Colab to generate high-resolution images from text prompts using SDXL-1.0 with Hyper-SD LoRA. Covers prompt engineering, diffusion model architecture & image quality evaluation.
adhyatm12024-svg
This notebook fine-tunes the Donut OCR-free document understanding model using LoRA for the Amazon ML Challenge dataset. It includes custom dataset processing, model configuration, and training with PyTorch Lightning, Transformers, and Weights & Biases logging, aiming for efficient document parsing.
SharmaShivam9
This notebook fine-tunes the Donut OCR-free document understanding model using LoRA for the Amazon ML Challenge dataset. It includes custom dataset processing, model configuration, and training with PyTorch Lightning, Transformers, and Weights & Biases logging, aiming for efficient document parsing.
shobhitsinha04
This project focuses on building, pretraining, and fine-tuning a transformer-based Large Language Model (LLM) from scratch using modern NLP techniques. Leveraging PyTorch and TensorFlow, it explores model construction, tokenizer development, and LoRA-based fine-tuning for instruction-based tasks.
This project demonstrates efficient fine-tuning of OpenAI’s Whisper-small model using LoRA (Low-Rank Adaptation) for English speech-to-text transcription. It leverages the Common Voice dataset, Hugging Face Transformers, PEFT, and PyTorch to enable memory-efficient training and prompt-guided inference for accurate, customizable ASR performance.
choosechart
This project demonstrates how to fine-tune a pre-trained transformer model on a CPU using the Hugging Face `transformers`, `peft`, and `datasets` libraries, along with PyTorch. It leverages Parameter-Efficient Fine-Tuning (PEFT) with LoRA (Low-Rank Adaptation) to optimize the fine-tuning process for resource-constrained environments.
MIHIRY
A PyTorch implementation of a tree-aware transformer model for database query execution plan ranking. Uses LoRA (Low-Rank Adaptation) transfer learning with a two-phase training pipeline: cost prediction pre-training followed by pairwise ranking fine-tuning. Built on the Spider dataset with 35K+ plan variants across 134 schemas.
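Most of the repositories above apply the same core technique: freezing a pretrained weight matrix W and learning a low-rank update (alpha/r)·BA alongside it. As a rough illustration of that idea (a minimal sketch, not taken from any of the listed projects; the class name, rank, and init choices here are illustrative assumptions), a LoRA-wrapped linear layer in PyTorch might look like:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where A is (r, in) and B is (out, r)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # A gets a small random init; B starts at zero so the wrapped layer
        # initially behaves exactly like the frozen base layer.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# A 768x768 projection (BERT/GPT-2 hidden size) at rank 8 trains only
# 2 * 8 * 768 = 12,288 of its ~590K parameters (~2%).
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
```

Projects like the "from scratch" GPT-2 and BERT entries above typically inject a wrapper of this shape into the attention projections only, which is how they reach trainable-parameter fractions as low as ~0.15-0.5%.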