Found 525 repositories (showing 30)
VietnamAIHub
The project includes: 1. Building a Vietnamese instructions dataset (high-quality, large, and diverse). 2. LLM training, fine-tuning, evaluation & testing on open-source language models: Bloomz, T5, UL2, LLaMA (1 & 2), OpenLLaMA, GPT-J, Pythia, etc. 3. Applications and user interface (UI)
A low-cost, simple live-streaming solution based on Live2D, TTS text-to-speech, and large-model chat
Imboyeong
An all-in-one learning hub platform built on comparative LLM research | Based on accuracy analysis of GPT-4o · Gemini 2.0 Flash · Claude 4.5 Sonnet, the platform helps students choose and use the model best suited to their purpose.
amitkedia007
This dissertation assesses the effectiveness of LLMs such as FinBERT and GPT-2 in detecting fraudulent activity in financial reports and statements. The repo provides code for applying LLMs, traditional machine learning, and deep learning models to the labelled dataset.
dubermandeer
High-performance C++ execution engine for LLM red-teaming and prompt engineering. Deploys dynamic jailbreak payloads, bypasses alignment guardrails, and runs autonomous, uncensored conversational logic locally for free.
katsumiok
AskIt: Unified programming interface for programming with LLMs (GPT-3.5, GPT-4, Gemini, Claude, Cohere, Llama 2)
arafkarsh
Java 23 and Spring Boot 3.4.1 examples using Deeplearning4j & LangChain4j for generative AI with the ChatGPT LLM, RAG, and other open-source LLMs. Sentiment analysis, application-context-based chatbots, and custom data handling. LLMs: GPT-3.5/4o, Gemini Pro 1.5, Claude 3, Llama 3.1, Phi-3, Gemma 2, Falcon 3, Qwen 2.5, Mistral Nemo, Wizard Math.
SAP AI Core OpenAI-compatible LLM proxy (gpt-5, claude sonnet 4.6, claude opus 4.6, gemini-2.5-pro)
NJX-njx
🔬 The most atomic GPT-2 implementation in 265 lines of pure Python & CUDA. A bilingual "Rosetta Stone" for understanding LLM internals from scratch. No dependencies, just math and kernels.
rachittshah
Multi-LLM deliberation council — MCP server + Claude Code skill. GPT-5, Gemini 2.5, Claude as peers. Vote, debate, synthesize, critique, MAV protocols.
StarLight1212
AI community tutorial, including: LoRA/QLoRA LLM fine-tuning, training GPT-2 from scratch, generative model architectures, content-safety and control implementation, model distillation techniques, DreamBooth techniques, transfer learning, etc., with real projects for practice!
RachidNichan
The first specialized Large Language Model (LLM) for the Tamazight language (Tifinagh script). Based on GPT-2 architecture and fine-tuned on IRCAM datasets.
No description available
ethz-coss
This GitHub repository hosts the source code and data for the research paper titled "LLM Voting: Human Choices and AI Collective Decision Making," which investigates the voting behaviors of Large Language Models (LLMs), specifically GPT-4 and LLaMA-2.
nafew-azim
RL-driven framework that composes modular DSPy pipelines and teleprompters to improve LLM reasoning (experiments use GPT-2).
XiaomingX
🚀 Project mission: bridging the gap between algorithm theory and engineering practice. This project is a full-stack deep learning and reinforcement learning laboratory designed for Chinese-speaking developers. Through modern PyTorch reimplementations of cutting-edge algorithms such as GPT-2, RLHF, MuZero, and Alignment (GRPO, Weak-to-Strong), it aims to provide a "what you see is what you get" baseline for learning and research. Core differentiating value — Full-stack rewrite: a clean break from unmaintained TensorFlow 1.x / JAX legacy code, fully embracing the PyTorch 2.x ecosystem. Theory-to-practice loop: every line of core logic carries detailed Chinese comments that map directly to the mathematical formulas in the papers. Forward-looking alignment: among the first to integrate key LLM alignment algorithms such as GRPO (DeepSeek) and Weak-to-Strong (OpenAI).
CactusQ
TensorRT-LLM: Quantization and Benchmark on GPT-2
jesvijonathan
This machine learning project detects AI/LLM-generated content and provides in-depth analysis. The trained model can detect text generated by GPT-4, 3.5, 2, Zero, Bard, and other LLMs with high accuracy using newer, efficient detection algorithms.
Mysticbirdie
Multi-tier benchmark: Cultural grounding + Triad Engine eliminates LLM hallucination across Claude 4.6, GPT-5.2, Mistral 7B, Gemini 2.5 Pro. Raw 15-58% → 95-100% accuracy on 222 adversarial QA pairs (Ancient Rome 110 CE). Novel topological paradox detection (F1=0.939, zero-shot). Model-agnostic, in production.
ianmkim
An experiment annotating protein sequences using pretrained LLMs GPT-3 and OPT-2.7B
Refinath
A research-oriented multi-agent trading simulator using open LLMs. Specialized agents perform news sentiment analysis (DistilBERT), technical analysis via moving-average crossovers, and trade decision generation with GPT-2, coordinated by an execution simulator and orchestrator.
wiskojo
Evaluate LLMs like GPT-4, GPT-3.5, and LLAMA-2-70b-chat on their ability to respond to an increasing number of system prompt constraints. Includes code, data, and results.
In this tutorial, we’ll walk you through the steps to fine-tune an LLM using the Hugging Face transformers library, which provides easy-to-use tools for working with models like GPT, BERT, and others. We’ll also provide a code demo for fine-tuning GPT-2 (a smaller predecessor of GPT-3) on a custom text dataset.
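The objective behind the fine-tuning demo above (next-token cross-entropy on a causal LM) can be sketched in plain PyTorch with a toy stand-in model; a real run would instead load pretrained weights (e.g. transformers' GPT2LMHeadModel), and all sizes and data below are illustrative:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a causal LM: embedding -> linear head over the vocab.
# A pretrained GPT-2 would replace this module; the loss is the same.
vocab_size, dim = 16, 8
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# A fake "custom text dataset": one tokenised sequence of ids.
ids = torch.tensor([[1, 4, 2, 7, 3, 9, 5, 0]])

losses = []
for step in range(50):
    inputs, targets = ids[:, :-1], ids[:, 1:]   # shift: predict the next token
    logits = model(inputs)                      # (batch, seq, vocab)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

On this single memorisable sequence the loss drops steadily, which is the overfit-on-a-tiny-batch sanity check often recommended before a full fine-tuning run.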
Madhur-Chotia
This repo contains LLM and NLP applications, starting from how tokenisers are applied and how embeddings work, covering the concepts of Masked Language Modelling (MLM), and practical applications of BERT, GPT, and T5 on real-world use cases such as question answering with BERT, text generation with GPT-2, and product reviews with T5.
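The tokeniser-and-embedding basics that the repo above starts from can be sketched in a few lines of plain Python (the vocabulary and vectors here are toy values, not from any real model):

```python
corpus = "the cat sat on the mat"

# 1. Tokenise: split text into tokens (real tokenisers use subwords/BPE).
tokens = corpus.split()

# 2. Build a vocabulary mapping each unique token to an integer id.
vocab = {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}

# 3. Encode the text as a sequence of ids.
ids = [vocab[tok] for tok in tokens]

# 4. Embedding lookup: each id indexes a row of a (vocab_size x dim) table.
#    Here the table is filled with toy values; a model learns these rows.
dim = 4
embedding_table = [[float(i * dim + j) for j in range(dim)] for i in range(len(vocab))]
embeddings = [embedding_table[i] for i in ids]
```

Note that repeated tokens ("the") map to the same id and therefore the same embedding row, which is exactly how an `nn.Embedding` layer behaves.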
kristofferv98
Agentic framework for dynamic function calling across the latest LLMs (gpt-4o, gemini-2.0-flash, Groq models, and Anthropic models). Converts Python functions into provider-specific schemas for autonomous tool use. Features a unified API, JSON schema generation, and integrated tool-execution handling.
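The function-to-schema conversion described above can be sketched with the standard library; this hypothetical helper (not the repository's actual API) reads a function's signature and emits an OpenAI-style tool schema:

```python
import inspect
import typing

# Illustrative subset of Python annotation -> JSON Schema type mappings.
_JSON_TYPES = {int: "integer", float: "number", str: "string", bool: "boolean"}

def function_to_tool_schema(fn):
    """Build an OpenAI-style tool schema from a function's signature.

    Hypothetical sketch: required parameters are those without defaults,
    and unannotated parameters fall back to "string".
    """
    sig = inspect.signature(fn)
    hints = typing.get_type_hints(fn)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        properties[name] = {"type": _JSON_TYPES.get(hints.get(name, str), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": (fn.__doc__ or "").strip(),
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        },
    }

def get_weather(city: str, units: str = "metric") -> str:
    """Return the current weather for a city."""
    return f"Weather for {city} in {units}"

schema = function_to_tool_schema(get_weather)
```

A provider-agnostic framework would then map this one intermediate form onto each vendor's tool-definition format.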
ducanhho2296
This project applies fine-tuning techniques to LLMs such as LLaMA 2 and GPT to develop a specialized Q&A chatbot for improving customer service.
Justin-Yuan
Playing around with different small-scale LLMs, such as GPT-2, the Phi models, the SmolLM models, and tiny Llama models
zsh-6534
No description available
melvinprince
No description available
DanielT504
A fine-tuned GPT-2 LLM for Python-to-JavaScript code translation