Found 53 repositories (showing 30)
huggingface
Distilled variant of Whisper for speech recognition: 6× faster, 50% smaller, within 1% word error rate.
Mohamad-Hussein
Desktop application for Linux and Windows that uses Distil-Whisper models from Hugging Face to enable real-time, offline speech-to-text dictation.
gregmeldrum
A real-time speech-to-image generator using Distil-Whisper and Stable Diffusion SDXL Turbo.
Codeblockz
No description available
gorkemkaramolla
Faster Whisper with Speaker Diarization
fpaupier
Distil-Whisper on the web.
sh-aidev
No description available
PieterjanCriel
AWS Step Function workflow to create a transcription with a Distil-Whisper Model
inferless
Distilled model that is 49% smaller and 6.3× faster while maintaining comparable accuracy, especially on long-form transcription. (GPU: T4; collections: HF Transformers)
yuran986
Knowledge distillation–based compression of OpenAI Whisper for Chinese ASR, achieving 2.89× inference speedup with only 2 decoder layers while improving CER.
Elias1986a
Push-to-talk voice-to-text for macOS — offline AI powered by Whisper, Distil-Whisper, and Moonshine
Siris2314
Summarize YouTube videos in one go using Mixtral.
innerNULL
Simpler Distil-Whisper
nico-byte
The Whisper Web Transcription Server is a Python-based real-time speech-to-text system built on OpenAI's Whisper models. It leverages Distil-Whisper to transcribe audio input as it arrives.
Adityarajsingh2904
No description available
UBC-NLP
uDistil-Whisper: Label-Free Data Filtering for Knowledge Distillation in Low-Data Regimes (NAACL 2025)
ptparkr
Lightning-fast audio transcription (6x speed) with batch processing, Obsidian integration, and optimized real-time performance. Powered by faster-whisper and Distil-Whisper models.
prathamc25
Accelerating OpenAI's Whisper model using Speculative Decoding with Tiny, Base and Distil draft models. Includes benchmarks vs. Beam Search.
GrimFandango42
Free, local, open-source push-to-talk transcription for Windows. Hold Ctrl+Shift, speak, release — text injected anywhere. GPU-accelerated with Distil-Whisper. No cloud, no subscription.
inferless
Distilled Whisper model that is 6× faster, 49% smaller, and performs within 1% WER on out-of-distribution evaluation sets. (GPU: T4; collections: HF Transformers)
Emmastyle
No description available
Satviknrn157
No description available
esnvidia
Fork of incredibly-fast-whisper.
AshishJaiswal25
Alchemy — An async-first universal data ingestion & parsing platform for GenAI apps. Transform documents, images, audio, video & web pages into structured, LLM-ready output. Built with Docling, Qwen2-VL, Distil-Whisper & Crawl4AI. Features batch processing, SSE streaming, semantic chunking & a job queue.
sujitvasanth
faster-whisper with noise cancellation and Distil-Whisper.
nehat005
No description available
jjroberts88
Efficiently convert your local audio recordings to text using the power of Distil-Whisper.
clamsproject
CLAMS wrapper app for distil-whisper from HuggingFace
ogrnz
Add subtitles to your videos using distil-whisper