The following notes have little to do with whisper_server, but might be of interest if you want to run Whisper directly.
Follow the setup instructions at [Whisper on GitHub](https://github.com/openai/whisper): install Python, then:

```shell
pip install -U openai-whisper
```

Install ffmpeg, e.g. on Debian/Ubuntu:

```shell
sudo apt update && sudo apt install ffmpeg
```

or on Windows (with Chocolatey):

```shell
choco install ffmpeg
```

Alternatively, build and run in Docker (the second command runs under WSL, with audio via WSLg):

```shell
docker build -t nalbion/openai-whisper .
docker run --rm -e "PULSE_SERVER=/mnt/wslg/PulseServer" -v /mnt/wslg/:/mnt/wslg/ nalbion/openai-whisper
```
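The WSL variant above relies on WSLg's PulseAudio server, which WSLg exposes as a Unix socket on the host. A quick sanity check before starting the container, assuming a default WSLg installation:

```shell
# WSLg exposes its PulseAudio server as a Unix socket at
# /mnt/wslg/PulseServer; the docker run command above maps it into the
# container for microphone capture. (Path assumes a default WSLg setup.)
if [ -S /mnt/wslg/PulseServer ]; then
  MSG="WSLg audio socket found"
else
  MSG="no WSLg audio socket - microphone capture from the container will not work"
fi
echo "$MSG"
```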
whisper.cpp can also be run in the browser via WebAssembly, although the online examples are not particularly impressive on a laptop. Disk and memory requirements per model:
| Model  | Disk   | Mem     |
|--------|--------|---------|
| tiny   | 75 MB  | ~125 MB |
| base   | 142 MB | ~210 MB |
| small  | 466 MB | ~600 MB |
| medium | 1.5 GB | ~1.7 GB |
| large  | 2.9 GB | ~3.3 GB |
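For reference, whisper.cpp distributes these models as single `ggml-<model>.bin` files (fetched by its `models/download-ggml-model.sh` script); the file name is the model name from the table, with an optional `.en` suffix for the English-only variants. A small sketch of the naming:

```shell
# Compose a whisper.cpp model file name; "base.en" is an example value
# (any model from the table, optionally with an .en suffix, works).
MODEL=base.en
MODEL_FILE="ggml-${MODEL}.bin"
echo "$MODEL_FILE"   # prints ggml-base.en.bin
```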
Choose a model from the table above and run (LANG is `.en` for the English-only models; leave it empty for the multilingual ones):

```shell
MODEL=base
LANG=".en"
docker build -f Dockerfile.cpp -t nalbion/whisper-cpp-$MODEL --build-arg MODEL=$MODEL$LANG .
```

or on Windows:

```shell
set MODEL=base& set LANG=.en& docker build -f Dockerfile.cpp -t nalbion/whisper-cpp-%MODEL% --build-arg MODEL=%MODEL%%LANG% .
```
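Either way, the image tag and the `MODEL` build-arg are just the two variables concatenated. A quick sketch of what they expand to, using `base` and `.en` as example values:

```shell
# Example values; pick MODEL from the table above and leave LANG empty
# for the multilingual models.
MODEL=base
LANG=".en"
TAG="nalbion/whisper-cpp-${MODEL}"
BUILD_ARG="${MODEL}${LANG}"
echo "$TAG"        # prints nalbion/whisper-cpp-base
echo "$BUILD_ARG"  # prints base.en
```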
Run the resulting image (audio via `/dev/snd`):

```shell
docker run --rm -v /dev/snd:/dev/snd nalbion/whisper-cpp-base
```

or on WSL:

```shell
docker run --rm -e "PULSE_SERVER=/mnt/wslg/PulseServer" -v /mnt/wslg/:/mnt/wslg/ nalbion/whisper-cpp-base
```

or for an interactive shell:

```shell
docker run --rm -it -v "$(pwd)/build:/artifacts" --entrypoint /bin/bash nalbion/whisper-cpp-base
```

or from Windows:

```shell
docker run --rm -it -v %cd%/build:/artifacts --entrypoint /bin/bash nalbion/whisper-cpp-base
```
From the interactive shell you can copy the wasm files out through the mounted `/artifacts` volume:

```shell
cp -r bin/stream.wasm/* bin/libstream.worker.js /artifacts/wasm
```

Note that browsers will not load the wasm demo from `file://`; serve the copied files over HTTP instead (e.g. with `python3 -m http.server`).