Whisper

Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.
Approach
A Transformer sequence-to-sequence model is trained on various speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection. All of these tasks are jointly represented as a sequence of tokens to be predicted by the decoder, allowing for a single model to replace many different stages of a traditional speech processing pipeline. The multitask training format uses a set of special tokens that serve as task specifiers or classification targets.
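As a concrete illustration, these task specifiers are exposed through the package's decoding options; a minimal sketch, assuming the whisper Python package from the Setup section below is installed:

import whisper

# the chosen task and language become special tokens that prefix the decoder's input,
# so a single model can be steered to transcribe or to translate speech into English
transcribe_opts = whisper.DecodingOptions(task="transcribe", language="en")
translate_opts = whisper.DecodingOptions(task="translate", language="ja")
print(transcribe_opts)
print(translate_opts)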
Setup
We used Python 3.9.9 and PyTorch 1.10.1 to train and test our models, but the codebase is expected to be compatible with Python 3.7 or later and recent PyTorch versions. The codebase also depends on a few Python packages, most notably HuggingFace Transformers for their fast tokenizer implementation and ffmpeg-python for reading audio files. The following command will pull and install the latest commit from this repository, along with its Python dependencies:
pip install git+https://github.com/openai/whisper.git
To update the package to the latest version of this repository, please run:
pip install --upgrade --no-deps --force-reinstall git+https://github.com/openai/whisper.git
It also requires the command-line tool ffmpeg to be installed on your system, which is available from most package managers:
# on Ubuntu or Debian
sudo apt update && sudo apt install ffmpeg

# on Arch Linux
sudo pacman -S ffmpeg

# on MacOS using Homebrew (https://brew.sh/)
brew install ffmpeg

# on Windows using Chocolatey (https://chocolatey.org/)
choco install ffmpeg

# on Windows using Scoop (https://scoop.sh/)
scoop install ffmpeg
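If you want to confirm that both the whisper command-line entry point and ffmpeg ended up on your PATH, printing their help/version output is a quick check:

# should print the CLI options for whisper
whisper --help
# should print the installed ffmpeg version
ffmpeg -version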
You may need Rust installed as well, in case tokenizers does not provide a pre-built wheel for your platform. If you see installation errors during the pip install command above, please follow the Getting started page to install the Rust development environment. Additionally, you may need to configure the PATH environment variable, e.g.
export PATH="$HOME/.cargo/bin:$PATH"
If the installation fails with No module named 'setuptools_rust', you need to install setuptools_rust, e.g. by running:
pip install setuptools-rust
Available models and languages

There are five model sizes, four with English-only versions, offering speed and accuracy tradeoffs. Below are the names of the available models and their approximate memory requirements and relative speed.
Size    Parameters  English-only model  Multilingual model  Required VRAM  Relative speed
tiny    39 M        tiny.en             tiny                ~1 GB          ~32x
base    74 M        base.en             base                ~1 GB          ~16x
small   244 M       small.en            small               ~2 GB          ~6x
medium  769 M       medium.en           medium              ~5 GB          ~2x
large   1550 M      N/A                 large               ~10 GB         1x
For English-only applications, the .en models tend to perform better, especially for the tiny.en and base.en models. We observed that the difference becomes less significant for the small.en and medium.en models.
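For reference, the checkpoints in the table above can be loaded by name from Python; a minimal sketch, assuming the package's high-level load_model/transcribe API and a placeholder audio file:

import whisper

# load one of the checkpoints listed above; the ".en" variants are English-only
model = whisper.load_model("base.en")

# transcribe a local audio file (placeholder filename) and print the recognized text
result = model.transcribe("audio.mp3")
print(result["text"])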
Whisper's performance varies widely depending on the language. The figure below shows a WER breakdown by languages of the Fleurs dataset, using the large-v2 model. More WER and BLEU scores corresponding to the other models and datasets can be found in Appendix D of the paper.
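To see which language the model detects for a given recording before transcribing, a minimal sketch using the package's lower-level helpers (load_audio, pad_or_trim, log_mel_spectrogram, detect_language; the filename is a placeholder):

import whisper

model = whisper.load_model("base")

# load the audio, pad/trim it to the model's 30-second window, and compute a log-Mel spectrogram
audio = whisper.load_audio("audio.mp3")
audio = whisper.pad_or_trim(audio)
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# spoken language identification: returns per-language probabilities
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")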
Command-line usage
The following command will transcribe speech in audio files, using the medium model:
whisper audio.flac audio.mp3 audio.wav --model medium
The default setting (which selects the small model) works well for transcribing English. To transcribe an audio file containing non-English speech, you can specify the language using the --language option:
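# for example (illustrative command; japanese.wav is a placeholder filename)
whisper japanese.wav --language Japanese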