Found 52 repositories (showing 30)
intel
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, DeepSpeed, Axolotl, etc.
intel
Accelerate LLMs with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using ipex-llm
celesrenata
Kubernetes on an Intel 185H with SR-IOV GPU passthrough to the cluster, used with various projects. Now with working Intel SR-IOV passthrough to KubeVirt!
yaosenJ
IPEX-LLM is a PyTorch library for running LLMs with very low latency on Intel CPUs and GPUs (e.g., local PCs with integrated GPUs, or discrete GPUs such as Arc, Flex, and Max).
digitalscream
Docker image providing FastChat (web UI and API) for Intel Arc GPUs
Ava-AgentOne
Ollama with Intel iGPU acceleration via IPEX-LLM — Docker container for Unraid
malcolmchanhaoxian
No description available
biyuehuang
Including Ollama, vLLM, xFT, IPEX-LLM, and TensorRT-LLM
Mingqi2
Zhibo AI (智博AI): a museum agent built on Intel IPEX-LLM / a multimodal question-answering agent for the Dunhuang Museum
XtromAI
No description available
dogdogpp
Inference acceleration of Qwen2-1.5B and a simple RAG pipeline (LangChain) with ipex_llm and OpenVINO, plus a performance comparison
rkilchmn
openedai-whisper-openvino
Goldlionren
This project provides an OpenAI-compatible API wrapper for local inference using llama-gemma3-cli.exe, enabling smooth integration with Open WebUI to interact with Gemma 3 models such as Gemma 3 27B.
leeroopedia
Align LLMs with human preferences using DPO on Intel GPUs with IPEX-LLM 4-bit quantization and LoRA adapters
juan-OY
Run Qwen2.5-Omni-7B with ipex-llm on Intel platforms
andrewjswan
Home Assistant Add-on: Ollama Portable on Intel GPU with IPEX-LLM
futursolo
Collection of AI Containers - Prebuilt and Ready-to-Use
Jasonzzt
IPEX-LLM on Modelscope
MingxuZh
No description available
vajraudham
No description available
JoonHyoungLee-Seoul
No description available
lirc572
No description available
wallacezq
Some multimodal LLM examples accelerated with the ipex-llm backend
jlau78
Study: LLM setup using the intelanalytics IPEX-LLM library to run LLMs on an Intel Arc GPU
KiwiHana
Run moonlight-16B-A3B-instruct with Intel ipex-llm-transformers in Python
yk-svg
No description available
Cyberavater
No description available
Michael-C-Buckley
Intel's patched Ollama for IPEX-LLM, packaged for Nix
jackm97
No description available
yxchia98
No description available