Found 77 repositories (showing 30)
run-llama
A library of data loaders for LLMs made by the community -- to be used with LlamaIndex and/or LangChain
akx
Tool to download models from Huggingface Hub and convert them to GGML/GGUF for llama.cpp
IIIIIllllIIIIIlllll
An all-in-one llama.cpp bundle, built for my own AI MAX+ 395 machine but usable on other devices as well. Open an ISSUE if you run into problems (replies may be slow), or join QQ group 829631748.
ntropy-network
This repository benchmarks the Ntropy API against different large language models (OpenAI ChatGPT and fine-tuned LLaMA models). It also contains an easy-to-use wrapper for applying the LLMs to transaction enrichment. The LLaMA adapters were open-sourced and are available on the Hugging Face Hub.
jbulger82
An upgraded llama.cpp GUI (https://github.com/ggml-org): a local-first, multi-agent command center for llama.cpp and cloud models, with RAG, MCP tools, browser automation, voice, and multi-provider orchestration. Demo here: https://llamahub.netlify.app/
A Full-Stack Web App with LLamaIndex and Django to query, summarize, and group documents
Bronwin87
Higher-level implementations for LLamaSharp
dirmacs
End-to-end llama.cpp toolkit in Rust. API client, HuggingFace Hub, server orchestration.
Trained and evaluated traditional ML models, fine-tuned Dolphin 2.9.4 based on the Llama 3.1 (8B) model, and processed Bangla text to classify sentiments. (bnlp, nltk, bnlp_toolkit, banglanltk, huggingface_hub, transformers, torch)
sohomx
A Jupyter notebook that demonstrates how to fine-tune and export a Qwen2-VL-2B-Instruct model using the LLaMA Factory library. It covers setting up the environment, training the model with LoRA, merging the adapters, and uploading the resulting model to the Hugging Face Hub.
LlamaMC
Modern website.
ankitjawla
No description available
tooniez
Jupyter notebook to run a FastAPI server with Llama 2 model integration using Google Colab's free T4 GPU.
MettaMazza
On-device AI hub for Android — HuggingFace model browser, llama.cpp inference, multi-platform bridges
erdoganhalit
A notebook study that improves the RAG accuracy of a Llama Hub pack for documents with embedded tables
KevKibe
An implementation of fine-tuning the Llama-2 model with the QLoRA (Quantized LoRA) framework, using a specific Llama version and dataset both sourced from the Hugging Face Hub.
⚙️ Fine-tune 🦙 Llama 3.1, Phi-3, and other models on a custom dataset using 🕴️ Unsloth, and save them to the Hugging Face Hub
Daidanny008
Recognizes images of LaTeX into LaTeX code (terribly); a spinoff of https://github.com/patchy631/ai-engineering-hub/blob/main/LaTeX-OCR-with-Llama/README.md
This project demonstrates how to fine-tune a pretrained LLaMA 2 model using Hugging Face Transformers and PEFT (LoRA) techniques in Google Colab. The base model, aboonaji/llama2finetune-v2, was loaded from Hugging Face Hub and fine-tuned on a medical text dataset (wiki_medical_terms_llam2_format).
belviskhoremk
Talk to Your PDF is a lightweight AI-powered application that lets you ask questions about, and hold conversations with, the content of any PDF file. It uses LLaMA 3, LangChain, and the Hugging Face Hub to extract insights and provide contextual answers from your documents.
BrianDLawrence
No description available
Theotypus
No description available
SigmaBoy2213
No description available
ReneDrengen
No description available
Dsniels
No description available
mthomas46
No description available
xiscoding
No description available
msftwarelab
Llama-Hub-WordLift-Graphql-Connector
AaronSosaRamos
No description available