Found 20 repositories (showing 20)
Ava-AgentOne
Ollama with Intel iGPU acceleration via IPEX-LLM — Docker container for Unraid
malcolmchanhaoxian
No description available
Mingqi2
智博AI (Zhibo AI) - A museum agent based on Intel IPEX-LLM / a multimodal question-answering agent for the Dunhuang Museum
leeroopedia
Align LLMs with human preferences using DPO on Intel GPUs with IPEX-LLM 4-bit quantization and LoRA adapters
juan-OY
Run Qwen2.5-Omni-7B with ipex-llm on Intel platform
andrewjswan
Home Assistant Add-on: Ollama Portable on Intel GPU with IPEX-LLM
jlau78
Study: LLM setup using the intelanalytics IPEX-LLM library to run LLMs on an Intel Arc GPU
leeroopedia
No description available
leeroopedia
No description available
NathanielIskandar
Integration of Meta Llama 3, a language model by Meta AI, with Intel IPEX-LLM. The project uses Intel's AI acceleration technologies to optimize large language model performance for efficient, high-performance inference.
leeroopedia
Full-precision LoRA fine-tuning of LLMs on Intel GPUs using IPEX-LLM with bf16 base weights and optional DeepSpeed ZeRO-3
Michael-C-Buckley
Intel's patched Ollama for IPEX-LLM, packaged for Nix
No description available
leeroopedia
No description available
KiwiHana
Run moonlight-16B-A3B-instruct with Intel's ipex-llm transformers API in Python
laialbus
Bridge enabling Intel IPEX optimizations with HuggingFace Accelerate for memory-efficient LLM inference on CPU
SichengStevenLi
The place where I post my experiences working with Intel IPEX-LLM to speed up local LLMs.
Ac1dBomb
Intel-based setup using an IPEX-LLM/llama.cpp server and a Gradio Lite web interface to control a Blender extension via Python.
cr8ivecodesmith
This repository contains a custom wrapper to use the ipex-llm on an Intel ARC laptop (e.g., Lunar Lake).
tonyeatsm
Personal experiments using the Intel AI technology stack: OpenVINO, IPEX-LLM, DL Streamer, and oneAPI, with support for CPUs and GPUs (Intel integrated and discrete GPUs).
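Many of the repositories above rely on IPEX-LLM's low-bit (e.g. 4-bit) weight quantization to fit LLMs on Intel GPUs. As a rough sketch of the underlying idea only: the toy code below shows symmetric 4-bit quantization with a single shared scale. It is not IPEX-LLM's actual implementation, which uses per-group scales and packed storage.

```python
# Toy illustration of symmetric 4-bit weight quantization, the idea
# behind IPEX-LLM's low-bit model loading. NOT the real kernel:
# production code uses per-group scales and packs two values per byte.

def quantize_int4(weights):
    """Map floats to integers in [-7, 7] using one shared scale."""
    scale = max(abs(w) for w in weights) / 7.0 or 1.0  # avoid scale == 0
    q = [max(-7, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate float weights from the 4-bit integers."""
    return [v * scale for v in q]

weights = [0.12, -0.98, 0.45, 0.0, 0.7]
q, scale = quantize_int4(weights)
approx = dequantize_int4(q, scale)
# Each integer fits in 4 bits (sign + 3 magnitude bits), and the
# per-weight reconstruction error is bounded by scale / 2.
```

The memory saving is the point: storing 4-bit integers plus one scale instead of 16- or 32-bit floats cuts weight storage roughly 4-8x, at the cost of the bounded rounding error shown above.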