Introducing Whisper

We’ve trained and are open-sourcing a neural net called Whisper that approaches human-level robustness and accuracy on English speech recognition.

Read Paper

View Code

View Model Card

Whisper is an automatic speech recognition (ASR) system trained on 680,000 hours of multilingual and multitask supervised data collected from the web. We show that using such a large and diverse dataset leads to improved robustness to accents, background noise, and technical language. Moreover, it enables transcription in multiple languages, as well as translation from those languages into English. We are open-sourcing models and inference code to serve as a foundation for building useful applications and for further research on robust speech processing.
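For readers who want to try this right away, a minimal sketch of transcription with the open-source Python package is shown below. It assumes the `openai-whisper` package is installed and that a local file named `audio.mp3` exists (the file name is only an example).

```python
# Minimal sketch: transcribe a local audio file with the open-source package.
# Assumes `pip install openai-whisper` and a local "audio.mp3" (example name).
import whisper

# Load one of the released model sizes (e.g. "base"); larger checkpoints
# are slower but more accurate.
model = whisper.load_model("base")

# Run transcription; the result includes the full text and segment timestamps.
result = model.transcribe("audio.mp3")
print(result["text"])
```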

The Whisper architecture is a simple end-to-end approach, implemented as an encoder-decoder Transformer. Input audio is split into 30-second chunks, converted into a log-Mel spectrogram, and then passed into an encoder. A decoder is trained to predict the corresponding text caption, intermixed with special tokens that direct the single model to perform tasks such as language identification, phrase-level timestamps, multilingual speech transcription, and to-English speech translation.
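The same pipeline is visible in the package's lower-level API. The sketch below (again assuming `openai-whisper` and an example `audio.mp3`) processes a single 30-second window: pad or trim the audio, compute the log-Mel spectrogram, detect the language, then decode text.

```python
# Sketch of one 30-second window through the Whisper pipeline.
# Assumes `pip install openai-whisper` and a local "audio.mp3" (example name).
import whisper

model = whisper.load_model("base")

# Load the audio and pad/trim it to a single 30-second window.
audio = whisper.load_audio("audio.mp3")
audio = whisper.pad_or_trim(audio)

# Convert the window into a log-Mel spectrogram on the model's device.
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# The decoder's special tokens drive tasks such as language identification...
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")

# ...and transcription of the window.
options = whisper.DecodingOptions()
result = whisper.decode(model, mel, options)
print(result.text)
```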

Other existing approaches frequently use smaller, more closely paired audio-text training datasets, or use broad but unsupervised audio pretraining. Because Whisper was trained on a large and diverse dataset and was not fine-tuned to any specific one, it does not beat models that specialize in LibriSpeech performance, a famously competitive benchmark in speech recognition. However, when we measure Whisper’s zero-shot performance across many diverse datasets, we find it is much more robust and makes 50% fewer errors than those models.

About a third of Whisper’s audio dataset is non-English, and it is alternately given the task of transcribing in the original language or translating to English. We find this approach is particularly effective at learning speech-to-text translation and outperforms the supervised SOTA on CoVoST2 to-English translation zero-shot.
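As an illustration of the translation task, a hedged sketch follows, assuming the same package: the `task` option selects to-English translation instead of same-language transcription, and the file name is again only an example.

```python
# Sketch: translate non-English speech directly into English text.
# Assumes `pip install openai-whisper` and a local "french_audio.mp3" (example name).
import whisper

model = whisper.load_model("medium")

# task="translate" asks the model to output English regardless of the spoken
# language; the default task, "transcribe", keeps the original language.
result = model.transcribe("french_audio.mp3", task="translate")
print(result["text"])
```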

We hope Whisper’s high accuracy and ease of use will allow developers to add voice interfaces to a much wider set of applications. Check out the paper, model card, and code to learn more details and to try out Whisper.
