Found 18 repositories (showing 18)
run-llama
Knowledge Agents and Management in the Cloud
JossueE
Local LLM for Robots is an intelligent agent that integrates a local Large Language Model (e.g., LLaMA.cpp) with tool-calling to control and query robot functions using natural language and Socket.IO. It is optimized to run on embedded hardware, such as industrial PCs or lightweight laptops running Linux, without relying on cloud services.
ebi-shirinbegi
Easy setup for running LLaMA 3.1 locally on your PC. Download, install, and chat with a powerful language model without relying on cloud services.
SolomonAureus
A modular, MCP-native AI orchestrator that bridges your browser, local files, and cloud services (GitHub, Slack) into a unified, intelligent workflow powered by Llama-3.
conda-forge
A conda-smithy repository for llama-cloud-services.
cezarmaldini
No description available
iriano380
No description available
tapasyamohan
Fashion visual search engine using LLaMA-3.1 intent analysis, semantic embeddings, and Azure cloud services
pulirahul4
Language Model: Meta LLaMA 3.1, Python, Flask, and potentially other cloud-based services for scaling and integration
mudit14224
This project provides a comprehensive solution for extracting structured data from documents using Llama Cloud's AI services and managing the extracted data in a SQLite database.
soaringDistributions
Built with Llama. For developer workstations, rather than as a dependency (i.e. boot this from USB; don't install it in a VM, WSL, a cloud VPS, etc.). More running services and large files by default.
WieczorekAdrian
AI CLI — a simple command-line interface for interacting with a local LLaMA model, allowing you to chat with an AI model without relying on cloud services. Perfect for testing, experiments, and integration with other projects.
FawazAhmed02
LLaMail is a smart email assistant that uses LLaMA's local LLM to classify emails into categories like Work, Personal, and Urgent. It helps prioritize tasks and manage your inbox efficiently without relying on cloud services.
AaryaMehta2506
A fully local AI chatbot built with Streamlit and Ollama. Runs offline using small open-source models like Phi-3 Mini or LLaMA 3. Perfect for data science, NLP, and AI experimentation — no API keys or cloud services required.
Anilkumarvreddy
Ollama is a tool that allows you to run powerful Large Language Models (LLMs) locally on your computer. It makes it easy to download, run, and interact with models like LLaMA, Mistral, and other open-source AI models without relying on cloud services.
This project is a local website summarizer that uses the LLaMA 3 language model via Ollama. The model runs entirely on your machine using Ollama's local API, making it a private and efficient way to analyze website content without relying on external cloud services.
jay-paul1530
JarvisAIChatbot is a locally hosted AI assistant built with LLaMA 3.1 and Nomic's embedding model. It offers fast, context-aware responses with a user-friendly frontend. Designed for natural language understanding, Q&A, and task automation, it showcases practical LLM integration without relying on cloud services.
nirawadea
This repository implements a serverless AI-powered blog generation system using AWS managed services. The application allows users to submit a blog topic via an HTTP API, generates blog content using AWS Bedrock (Meta LLaMA 3), and stores the output in S3. The project demonstrates how to integrate Generative AI with serverless cloud architecture.
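Several of the listed projects (Anilkumarvreddy, AaryaMehta2506, jay-paul1530) share the same pattern: prompting a locally running model through Ollama's HTTP API instead of a cloud service. A minimal sketch of that pattern, assuming an Ollama server on its default port and a pulled `llama3` model (both assumptions, not details from any listed repository):

```python
import json
import urllib.request

# Ollama's default local endpoint for non-chat completions.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Construct the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for a single JSON response instead of a
    stream of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the response text."""
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming reply carries the generated text under "response".
        return json.loads(resp.read())["response"]
```

Calling `generate("llama3", "Summarize this page.")` requires `ollama serve` running locally with the model already pulled; no API key or cloud service is involved, which is the selling point most of these repositories describe.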