Found 35 repositories (showing 30)
dvgodoy
Official repository of my book "A Hands-On Guide to Fine-Tuning LLMs with PyTorch and Hugging Face"
No description available
manindersingh120996
This repository will contain code for fine-tuning an LLM for a specific use case.
mazurkin
Bootstrap for the book "A Hands-On Guide to Fine-Tuning LLMs with PyTorch and Hugging Face" (https://github.com/dvgodoy/FineTuningLLMs)
Anurich
This project covers the complete pipeline for building an end-to-end LLM fine-tuning system with PEFT, SFT, and DPO. The project is also explained in a Medium article: https://medium.com/dev-genius/how-to-harness-peft-sftt-and-dpo-to-fine-tune-llms-394e9cd0b150.
yiyichanmyae
Fine-tuning LLMs with the Ludwig framework
Pragyan10
No description available
Anurag-cod
No description available
inkri
FineTuningLLM
AminVilan
FineTuningLLM-WordpressQA
DebGB
No description available
AmlanSamanta
No description available
Emarku
Finetuning LLMs in practice from DeepLearning.AI
AdityaSinghDevs
My first proper experiment with fine-tuning LLMs, applying quantization, LoRA/QLoRA, and more.
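The 4-bit quantization these projects rely on can be illustrated with a minimal numpy sketch of symmetric absmax quantization (illustrative only, not code from any repository listed here):

```python
import numpy as np

def quantize_4bit(w):
    """Symmetric absmax quantization to 4-bit signed integers (-7..7)."""
    scale = np.abs(w).max() / 7.0          # map the largest magnitude to 7
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return q.astype(np.float32) * scale

w = np.array([0.21, -0.98, 0.45, 0.02], dtype=np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize(q, s)
print(q)                                   # integer codes in [-7, 7]
print(np.max(np.abs(w - w_hat)))           # reconstruction error, at most ~scale/2
```

Libraries such as bitsandbytes implement a more sophisticated variant (NF4, per-block scales), but the core idea of storing low-bit integers plus a scale is the same.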
NafisehVahdian
My learning journey in LLM fine-tuning: documenting every experiment, dataset, and skill I build along the way.
koushikc7
NLP Research Project
santhoshkavi123
No description available
hkumar00
Collection of scripts for fine-tuning on consumer-grade hardware or the Google Colab free tier.
20127304-AQ
This project leverages LLaMA-based models fine-tuned using QLoRA on Amazon Books Reviews (2023) to predict book prices based on content summaries. Includes data cleaning, prompt engineering, quality filtering, and low-rank adaptation techniques. Built with Hugging Face, Gradio, and integrated tools for reproducible LLM workflows.
tanhaoran1
No description available
Harris-giki
This repository demonstrates efficient fine-tuning of large language models like Llama 2 and DeepSeek R1 using QLoRA and Unsloth. It enables memory-efficient training on consumer GPUs with 4-bit quantization and LoRA adapters. Covers instruction tuning, medical reasoning (CoT), and HF deployment.
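The LoRA adapters mentioned in this entry replace the full weight update with a low-rank product B·A, so only a small fraction of the parameters is trained. A minimal numpy sketch of the idea (illustrative, not this repository's code; dimensions and scaling are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 64, 64, 4           # layer dims and LoRA rank (r << d, k)
alpha = 8                     # LoRA scaling factor

W = rng.normal(size=(d, k))          # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01   # trainable down-projection, small init
B = np.zeros((d, r))                 # trainable up-projection, zero init

def forward(x, W, A, B):
    # base path + low-rank adapter path, scaled by alpha / r
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(2, k))
y0 = forward(x, W, A, B)
assert np.allclose(y0, x @ W.T)      # B = 0, so the adapter is a no-op at init

# Only A and B are trained: r*(d+k) parameters vs d*k for full fine-tuning
print(r * (d + k), "adapter params vs", d * k, "full params")  # 512 vs 4096
```

Zero-initializing B means training starts exactly from the pretrained model; frameworks like Hugging Face PEFT use the same convention.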
pjmreddy
This repo contains projects related to fine-tuning Large Language Models.
OzdemirOrcun
No description available
Arkajit-Datta
FineTune LLMs
Roja0125
No description available
pappachenlipin
No description available
devadigapratham
No description available
Gihan007
reading
kubernetism
Finetuning Large Language Models
Vedansh1857
No description available