Sean McNamara edited this page Jul 25, 2020 · 24 revisions

librespot supports various audio backends. Multiple backends can be enabled at compile time by enabling the corresponding cargo feature. By default, only Rodio is enabled.

See the Compiling page for information on building specific backends into librespot.

Usage

--backend is used to select the audio playback engine.

The following backends are currently available:

  • Rodio (Default)
    • Uses ALSA on Linux, so the ALSA development packages are required to build.
    • macOS and Windows should work out of the box.
  • ALSA --backend alsa
    • Linux build dependencies: libasound2-dev on Debian and alsa-lib-devel on Fedora
  • PortAudio --backend portaudio
  • PulseAudio --backend pulseaudio
  • JACK --backend jackaudio
  • SDL --backend sdl
  • Pipe --backend pipe
    • Always included during compilation; requires a separate program to play back the audio.
  • GStreamer --backend gstreamer

--device is used to select a specific audio device. Some backends will list the available audio devices when passed --device ?.

Most backends don't require specific setup and try to have sane defaults.

ALSA

The right device needs to be passed to librespot via the --device flag. ALSA's hw:X,Y naming is used, where X is the card number and Y is the device number. These can be determined by:

~/librespot$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC3234 Analog [ALC3234 Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 3: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 7: HDMI 1 [HDMI 1]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 8: HDMI 2 [HDMI 2]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 9: HDMI 3 [HDMI 3]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 10: HDMI 4 [HDMI 4]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

# To determine the right mixer name for your card, try
~/librespot$  amixer --device "hw:0" --card "0" # Results truncated here 
Simple mixer control 'Master',0
  Capabilities: pvolume pvolume-joined pswitch pswitch-joined
  Playback channels: Mono
  Limits: Playback 0 - 87
  Mono: Playback 64 [74%] [-17.25dB] [on]
Simple mixer control 'Headphone',0
  Capabilities: pswitch
  Playback channels: Front Left - Front Right
  Mono:
  Front Left: Playback [off]
  Front Right: Playback [off]
Simple mixer control 'PCM',0
  Capabilities: pvolume
  Playback channels: Front Left - Front Right
  Limits: Playback 0 - 255
  Mono:
  Front Left: Playback 2 [1%] [-50.60dB]
  Front Right: Playback 2 [1%] [-50.60dB]

Thus, in this example the settings would be:

./target/release/librespot [] --backend alsa \
        --device="hw:0,0" \
        --mixer="alsa" --mixer-card="hw:0" --mixer-name="Master"

NOTE: You might want to turn on --linear-volume when using a hardware mixer with librespot.
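The hw:X,Y strings above can also be derived from aplay -l output programmatically. Here is a minimal sketch; the parse_hw helper is hypothetical, and the heredoc stands in for a real aplay -l run on your system:

```shell
# Turn `aplay -l` style output into hw:CARD,DEV device strings.
# (parse_hw is a hypothetical helper, not part of librespot.)
parse_hw() {
  awk '/^card [0-9]+:.*device [0-9]+:/ {
    card = $2; sub(":", "", card)          # strip trailing colon from card no.
    for (i = 1; i <= NF; i++)
      if ($i == "device") { dev = $(i+1); sub(":", "", dev) }
    print "hw:" card "," dev
  }'
}

# Feed a captured sample; on a real system use:  aplay -l | parse_hw
parse_hw <<'EOF'
card 0: PCH [HDA Intel PCH], device 0: ALC3234 Analog [ALC3234 Analog]
card 0: PCH [HDA Intel PCH], device 3: HDMI 0 [HDMI 0]
EOF
# prints:
# hw:0,0
# hw:0,3
```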

Pipe

The pipe backend requires very minimal setup, provided you have an audio player that can accept and process the output correctly.

The pipe outputs raw stereo 16-bit PCM at 44.1 kHz. This is true for any bitrate (quality) setting.
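Since the format is fixed, the data rate is easy to compute. A quick sketch; the playback command shown in the comment is an assumption (aplay from alsa-utils reading the pipe backend's stdout) and should be adjusted for your player:

```shell
# Raw PCM data rate: 44100 frames/s × 2 channels × 2 bytes/sample.
bytes_per_second=$(( 44100 * 2 * 2 ))
echo "$bytes_per_second"   # 176400 (about 172 KiB/s)

# One (assumed) way to play the stream back with ALSA's aplay:
# librespot --backend pipe | aplay -f S16_LE -r 44100 -c 2
```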

GStreamer

The GStreamer backend sends raw PCM audio to any arbitrary GStreamer pipeline. You can define the pipeline using the device option to librespot.

If you specify the GStreamer backend but omit the device option, the default device will be ! audioconvert ! autoaudiosink. This basically tells GStreamer to determine what your "default" audio device is for your platform, and play the sound back normally using that. As of this writing, that usually means:

  • The default DirectSound playback device on Windows;
  • The default playback device on macOS;
  • PulseAudio on Linux if it's available, otherwise ALSA; failing that, it will keep trying audio backends based on which plugins you have installed, and eventually give up if it can't find one that works.

This behavior is subject to change as GStreamer makes changes to the behavior of their "autoaudiosink" code.

Specifying the GStreamer device option

To customize the back end of the GStreamer pipeline, you need to learn a bit about pipeline syntax; the GStreamer documentation on pipeline descriptions has some great examples. This behavior is not unique to librespot; we are using the same pipeline parsing engine as gst-launch-1.0.

Very important: the first non-whitespace character of your device parameter must be an exclamation mark !. Prepending any amount of whitespace is fine, because GStreamer ignores whitespace in pipeline syntax, but the first real character must be !.

After that first !, the first element you declare will receive 16-bit signed integer, interleaved PCM audio samples with a sample rate of 44100 Hz and two channels. In most cases, sticking an ! audioconvert or -- at worst -- ! audioconvert ! audioresample at the start of your pipeline will cause GStreamer to perform any necessary format conversion so that your downstream pipeline elements can use the data being passed in.

You can test your custom pipeline code independent of librespot using gst-launch (usually gst-launch-1.0 on most systems) as follows:

gst-launch-1.0 audiotestsrc ! <your pipeline backend>

Your pipeline backend can contain any arbitrary number of elements that do processing, as well as any sink supported by GStreamer.

The pipeline "preamble" that is hard-coded into librespot is the following:

appsrc caps="audio/x-raw,format=S16LE,layout=interleaved,channels=2,rate=44100" block=true max-bytes=4096 name=appsrc0

Basically, you're going to get audio in the format listed in the caps parameter above to the "left-most" element in your custom device pipeline.
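For a feel of the buffering involved, the max-bytes=4096 ceiling in the preamble above corresponds to only a few milliseconds of audio at these caps. This is back-of-the-envelope arithmetic from the caps string, not a claim about librespot internals:

```shell
# 4 bytes per frame (2 channels × 16-bit samples), 44100 frames/s.
frames=$(( 4096 / 4 ))            # 1024 frames fit in the appsrc buffer
ms=$(( frames * 1000 / 44100 ))   # duration in whole milliseconds
echo "${ms} ms"   # 23 ms
```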

Here are some full examples of invoking librespot with GStreamer.

  1. Send audio out to DirectSound on Windows

From a cmd shell: librespot.exe -u 12345 -p password -n kitchen --backend gstreamer --device "! audioconvert ! directsoundsink"

  2. Change the pitch of the audio to 90% of its original pitch and send audio out to PulseAudio on Linux

From a bash/zsh shell: ./librespot -u 12345 -p password -n kitchen --backend gstreamer --device '! audioconvert ! pitch pitch=0.9 ! audioconvert ! pulsesink'

You should note that in both Bash and Windows CMD, I put quotes around the pipeline content. The reason is that librespot expects the entire device to be read in as a single argument to librespot, so if you have any spaces in the pipeline that get interpreted by the shell as argument separators, it won't work.

Here is an INCORRECT example that does not properly quote the device argument:

  1. INCORRECT: ./librespot -u 12345 -p password -n kitchen --backend gstreamer --device ! audioconvert ! alsasink
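The effect of quoting is easy to demonstrate with the shell's own argument counter. This sketch uses set -- to simulate how the device value would be split before librespot ever sees it:

```shell
# Unquoted: the shell splits the pipeline into separate words.
set -- ! audioconvert ! alsasink
echo "$#"   # 4 -- four separate arguments, not one device value

# Quoted: the whole pipeline arrives as a single argument, as intended.
set -- '! audioconvert ! alsasink'
echo "$#"   # 1
```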

Why GStreamer?

If you just want to play audio out to a sound backend that is already supported by the existing librespot backends, there is no reason to use GStreamer. In fact, GStreamer might introduce additional latency that is absent from some of the more "direct" audio backends like the ALSA backend.

The librespot GStreamer backend exists for those who want to do processing on the audio that comes out of librespot, or who want to pipe it to sound backends that are not supported natively by other librespot backends.

A few obvious examples of processing include:

  • Changing the pitch, tempo or speed of the audio.
  • Equalizers.
  • Visualization (yes, GStreamer supports visualization plugins).
  • Digital limiting, normalization, AGC, adding audio watermarks, mixing in audio from other sources, etc.

A few obvious examples of sinks include:

  • Piping the audio to a novel audio backend that isn't supported well, or at all, otherwise; for example, PipeWire.
  • Customizing the audio device or the properties (like sample rate, etc.) for the audio backend of your choice. This could be really useful if you want to change the sample rate to the "native" sample rate of your hardware; some ALSA devices either don't support resampling, or don't support the sample rate you're trying to play back, for instance.

To truly understand the capabilities of audio processing and sinks in GStreamer, take a look at the GStreamer plugins list.

This is just a start, though. There are many proprietary, as well as system-specific, GStreamer plugins that have been written over the years. A lot of smartphones and embedded chipsets have GStreamer elements for processing and/or output, and some of these are "non-standard" and wouldn't work with other librespot backends.
