Found 167 repositories (showing 30)
mozilla-ai
Distribute and run LLMs with a single file.
ad-si
User friendly CLI tool for AI tasks. Stop thinking about LLMs and prompts, start getting results!
asg017
A SQLite extension for generating text embeddings from remote APIs (OpenAI, Nomic, Ollama, llamafile...)
Mozilla-Ocho
No description available
iverly
Distribute and run llamafile/LLMs with a single docker image.
brainqub3
A repo to help you get your Llamafile up and running quickly
themaximalist
AI Toolkit for Node.js (LLM, Image Generation, Embeddings, Vector Search)
disler
Local LLMs in One Line Of Code (thanks to llamafile)
tluyben
Simple llamafile setup with docker
heaversm
Run CrewAI agent workflows on local LLM models with Llamafile and Ollama
Wannabeasmartguy
Probably one of the lightest native RAG + Agent apps out there. Experience the power of Agent-powered models and Agent-driven knowledge bases in one click, without complex configuration.
alybun
Create an IRC chat bot powered by AI, using llamafile, in minutes.
jacob-ebey
Chat with LLMs locally utilizing llamafile as the underlying model executor.
simonw
Access llamafile localhost models via LLM
Abdoulaye-Sayouti
An offline and secure Retrieval-Augmented Generation (RAG) system designed for efficient processing of diverse content types with minimal computational overhead. This system uses only open-source tools such as LangChain, FAISS, Docling, Llamafile, Mistral Nemo, Streamlit, and Hugging Face Transformers.
Mozilla-Ocho
No description available
rabilrbl
A simple GitHub Actions script to build a llamafile and upload it to Hugging Face
shadowcz007
No description available
HyperMink
Scalable AI inference server for CPU and GPU with Node.js | Utilizes llama.cpp and parts of the llamafile C/C++ core under the hood.
DjagbleyEmmanuel
This GUI aims to simplify the process of converting GGUF files to llamafile format by providing an intuitive and convenient way for users to interact with the underlying conversion script.
leighklotz
Helpful scripts for llamafiles
A llamafile application starter
metaskills
Serverless AI Inference with Gemma 2 using Mozilla's llamafile on AWS Lambda
mddunlap924
This repository demonstrates LLM execution on CPUs using packages like llamafile, emphasizing low-latency, high-throughput, and cost-effective benefits for inference and serving.
fabiomatricardi
A quantized LLM and API webserver in a single executable file
maragudk
Scripts to create llamafiles.
veronika20
No description available
lmorchard
No description available
themaximalist
API Proxy for AI models, rate limiting, management and more!
aifoundry-org
No description available