Found 193 repositories (showing 30)
DLYuanGod
TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones
glidea
Learning LLM by doing
keith2018
Tiny C++ LLM inference implementation from scratch
isaacperez
A tiny version of GPT fully implemented in Python with zero dependencies
NotShrirang
🎈 A series of lightweight GPT models featuring TinyGPT Base (~51M params) and TinyGPT2 (~95M params). Fast, creative text generation trained on whimsical stories.
kyegomez
Simple Implementation of TinyGPTV in super simple Zeta lego blocks
Manav-Rachna-AIML-Club
No description available
mytechnotalent
A pure Rust GPT implementation from scratch.
jessiepathfinder
A lightweight deep language model
lohar-animesh-27112001
TinyGPT - 9M Parameters
sirohikartik
TinyStories version of GPT with a custom inference engine
Ach113
implementation of decoder-only GPT model for text generation
adammikulis
tinygpt allows users to train small (~2GB) language models based on GPT-2
Leeseungjun315
Local GPU-powered console AI chatbot built with Ollama and Rich
spikedoanz
nanoGPT in tinygrad
iamaryav
A tiny GPT model
kimathi-phil
A lightweight AI text generator based on distilgpt2
Kingfish404
A tiny and fast GPT implement with JAX
big-tinsu
No description available
hemantvirmani
My own super super tiny GPT
Louay-Yahyaoui
Tiny GPT model for simple text generation experiments.
acganesh
No description available
rotemgoren
No description available
Krish2002
In this repo, I have implemented a tiny GPT model (including the attention layer and decoder blocks) from scratch.
amanvars
No description available
liuxiangfeng
No description available
SWHL
Curated study version; the blog depends on this repository, so it must not be deleted
pocketive
Inference for GPT.c models in microcontrollers.
saikiranakula-amzn
A minimal implementation of a GPT-style language model built from scratch using PyTorch. This project demonstrates the core concepts of transformer architecture and language modeling.
CephasTechOrg
TinyChatGPT is a from-scratch, decoder-only Transformer (GPT-style) chatbot project built to learn the full training pipeline end-to-end. It starts small on CPU (laptop-friendly) and scales to GPU/cloud later using the same architecture and codebase.