Version: 1.8.4+pkg-368f9852 (8.7 MB)
License: MIT
Confinement: strict
Base: core24
Summary: Port of OpenAI's Whisper model in C/C++
Distribution-specific information

This is NOT an official distribution of whisper.cpp. Please file any issue
regarding the usage of this snap to the snap's own issue tracker:

https://gitlab.com/brlin/whisper.cpp-snap/-/issues

Refer to the upstream project website for more information about this application:

https://github.com/ggerganov/whisper.cpp

The following commands are provided by this snap:

- `whisper-cpp.cli`: Corresponds to the `whisper-cli` command, the main CLI interface for whisper.cpp.
- `whisper-cpp.download-ggml-model`: Upstream utility script to download GGML models for whisper.cpp.
- `whisper-cpp.download-vad-model`: Upstream utility script to download VAD models for whisper.cpp.

You may run the following command in a terminal to set up the upstream-preferred `whisper-cli` command:

    sudo snap alias whisper-cpp.cli whisper-cli

This snap only supports Vulkan-based GPU inference at the moment. For CPU
inference, this snap is currently built against the x86_64 baseline ABI,
which is very slow.
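As a concrete illustration of the commands above, a typical first run might look like the following sketch. The model name `base.en` and the audio file path are illustrative; the `-m` (model) and `-f` (input file) flags follow upstream whisper.cpp conventions.

```shell
# One-time setup: register the upstream-preferred alias (requires sudo).
sudo snap alias whisper-cpp.cli whisper-cli

# Download a GGML model with the bundled upstream helper script
# (model name is illustrative).
whisper-cpp.download-ggml-model base.en

# Transcribe a 16 kHz WAV file with the downloaded model
# (file path is illustrative).
whisper-cli -m ggml-base.en.bin -f samples/jfk.wav
```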
Upstream information (some features are not applicable to the snap)
High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisper) automatic speech recognition (ASR) model:
- Plain C/C++ implementation without dependencies
- Apple Silicon first-class citizen - optimized via ARM NEON, Accelerate framework, Metal and [Core ML](https://github.com/ggerganov/whisper.cpp#core-ml-support)
- AVX intrinsics support for x86 architectures
- VSX intrinsics support for POWER architectures
- Mixed F16 / F32 precision
- [4-bit and 5-bit integer quantization support](https://github.com/ggerganov/whisper.cpp#quantization)
- Zero memory allocations at runtime
- Support for CPU-only inference
- [Efficient GPU support for NVIDIA](https://github.com/ggerganov/whisper.cpp#nvidia-gpu-support-via-cublas)
- [OpenVINO Support](https://github.com/ggerganov/whisper.cpp#openvino-support)
- [C-style API](https://github.com/ggerganov/whisper.cpp/blob/master/whisper.h)
Update History
- 1.8.3+pkg-5298 (932) → 1.8.4+pkg-368f (985), 31 Mar 2026, 02:49 UTC
- 1.8.2+pkg-cdc4 (914) → 1.8.3+pkg-5298 (932), 25 Jan 2026, 15:45 UTC