Merge pull request #92 from linuxserver/readme_hwaccel
Roxedus committed Feb 13, 2024
2 parents f5bab02 + d52c4bf commit dfea278
Showing 2 changed files with 45 additions and 26 deletions.
README.md: 47 changes (34 additions, 13 deletions)
@@ -72,35 +72,55 @@ Webui can be found at `http://<your-ip>:8096`

Emby has very complete and verbose documentation located [here](https://github.com/MediaBrowser/Wiki/wiki).

-Hardware acceleration users for Intel Quicksync and AMD VAAPI will need to mount their /dev/dri video device inside of the container by passing the following command when running or creating the container:
+### Hardware Acceleration Enhancements

-```--device=/dev/dri:/dev/dri```
+This section lists the enhancements we have made for hardware acceleration in this image specifically.

-We will automatically ensure the abc user inside of the container has the proper permissions to access this device.

-Hardware acceleration users for Nvidia will need to install the container runtime provided by Nvidia on their host, instructions can be found here:

-https://github.com/NVIDIA/nvidia-docker

-We automatically add the necessary environment variable that will utilise all the features available on a GPU on the host. Once nvidia-docker is installed on your host you will need to re/create the docker container with the nvidia container runtime `--runtime=nvidia` and add an environment variable `-e NVIDIA_VISIBLE_DEVICES=all` (can also be set to a specific gpu's UUID, this can be discovered by running `nvidia-smi --query-gpu=gpu_name,gpu_uuid --format=csv` ). NVIDIA automatically mounts the GPU and drivers from your host into the emby docker.
+#### OpenMAX (Raspberry Pi)

-### OpenMAX (Raspberry Pi)
+Hardware acceleration users for Raspberry Pi MMAL/OpenMAX will need to mount their `/dev/vcsm` and `/dev/vchiq` video devices inside of the container and their system OpenMAX libs by passing the following options when running or creating the container:

-Hardware acceleration users for Raspberry Pi OpenMAX will need to mount their /dev/vchiq video device inside of the container and their system OpenMax libs by passing the following options when running or creating the container:
```
+--device=/dev/vcsm:/dev/vcsm
--device=/dev/vchiq:/dev/vchiq
-v /opt/vc/lib:/opt/vc/lib
```
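
For illustration only, here is how those mounts might slot into a full `docker run` command; a minimal sketch in which the container name, port, config path and image tag are placeholder values, not part of this change:

```
docker run -d \
  --name=emby \
  -p 8096:8096 \
  -v /path/to/config:/config \
  --device=/dev/vcsm:/dev/vcsm \
  --device=/dev/vchiq:/dev/vchiq \
  -v /opt/vc/lib:/opt/vc/lib \
  lscr.io/linuxserver/emby:latest
```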

-### V4L2 (Raspberry Pi)
+#### V4L2 (Raspberry Pi)

+Hardware acceleration users for Raspberry Pi V4L2 will need to mount their `/dev/video1X` devices inside of the container by passing the following options when running or creating the container:

-Hardware acceleration users for Raspberry Pi V4L2 will need to mount their /dev/video1X devices inside of the container by passing the following options when running or creating the container:
```
--device=/dev/video10:/dev/video10
--device=/dev/video11:/dev/video11
--device=/dev/video12:/dev/video12
```
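
The same pattern applies to the V4L2 nodes; a minimal sketch, again with placeholder values, that first checks the codec devices actually exist on the host:

```
# confirm the V4L2 codec nodes are present on the host
ls -l /dev/video1*

docker run -d \
  --name=emby \
  -p 8096:8096 \
  -v /path/to/config:/config \
  --device=/dev/video10:/dev/video10 \
  --device=/dev/video11:/dev/video11 \
  --device=/dev/video12:/dev/video12 \
  lscr.io/linuxserver/emby:latest
```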

+### Hardware Acceleration

+Many desktop applications will need access to a GPU to function properly and even some desktop environments have compositor effects that will not function without a GPU. This is not a hard requirement and all base images will function without a video device mounted into the container.

+#### Intel/ATI/AMD

+To leverage hardware acceleration you will need to mount the /dev/dri video device inside of the container.

+```text
+--device=/dev/dri:/dev/dri
+```

+We will automatically ensure the abc user inside of the container has the proper permissions to access this device.
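
As a sketch of the above (placeholder values again), you can confirm the device exists on the host and then pass it through:

```
# the card and render nodes should be visible on the host
ls -l /dev/dri

docker run -d \
  --name=emby \
  -p 8096:8096 \
  -v /path/to/config:/config \
  --device=/dev/dri:/dev/dri \
  lscr.io/linuxserver/emby:latest
```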

+#### Nvidia

+Hardware acceleration users for Nvidia will need to install the container runtime provided by Nvidia on their host; instructions can be found here:
+https://github.com/NVIDIA/nvidia-docker

+We automatically add the necessary environment variable that will utilise all the features available on a GPU on the host. Once nvidia-docker is installed on your host you will need to re/create the docker container with the nvidia container runtime `--runtime=nvidia` and add an environment variable `-e NVIDIA_VISIBLE_DEVICES=all` (this can also be set to a specific GPU's UUID, which can be discovered by running `nvidia-smi --query-gpu=gpu_name,gpu_uuid --format=csv`). NVIDIA automatically mounts the GPU and drivers from your host into the container.
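
A sketch combining the runtime and environment variable described above; it assumes nvidia-docker is already installed, and the `nvidia-smi` query is only needed if you want to expose one specific card:

```
# list GPU names and UUIDs if you want to target a specific card
nvidia-smi --query-gpu=gpu_name,gpu_uuid --format=csv

docker run -d \
  --name=emby \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -p 8096:8096 \
  -v /path/to/config:/config \
  lscr.io/linuxserver/emby:latest
```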

+#### Arm Devices

+Best effort is made to install tools to allow mounting /dev/dri on Arm devices. In most cases, if /dev/dri exists on the host it should just work. If running a Raspberry Pi 4, be sure to enable `dtoverlay=vc4-fkms-v3d` in your usercfg.txt.
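
A rough sketch only: on Ubuntu-based Raspberry Pi 4 images the file is commonly `/boot/firmware/usercfg.txt`, but that path is an assumption here, so check where your distribution keeps usercfg.txt before appending:

```
# enable the overlay, then reboot for it to take effect
echo "dtoverlay=vc4-fkms-v3d" | sudo tee -a /boot/firmware/usercfg.txt
sudo reboot
```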

## Usage

To help you get started creating a container from this image you can either use docker-compose or the docker cli.
@@ -339,6 +359,7 @@ Once registered you can define the dockerfile to use with `-f Dockerfile.aarch64`
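
A hypothetical invocation of that aarch64 build might look like the following; the output tag is an arbitrary local name:

```
docker build \
  -f Dockerfile.aarch64 \
  -t linuxserver/emby:aarch64-local .
```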

## Versions

+* **12.02.24:** - Use universal hardware acceleration blurb
* **19.01.24:** - Fix tonemapping so it's done with hw acceleration.
* **06.07.23:** - Deprecate armhf. As announced [here](https://www.linuxserver.io/blog/a-farewell-to-arm-hf)
* **08.06.23:** - Fix package extraction so it doesn't change /tmp perms.
readme-vars.yml: 24 changes (11 additions, 13 deletions)
@@ -53,36 +53,34 @@ app_setup_block: |
Emby has very complete and verbose documentation located [here](https://github.com/MediaBrowser/Wiki/wiki).
-Hardware acceleration users for Intel Quicksync and AMD VAAPI will need to mount their /dev/dri video device inside of the container by passing the following command when running or creating the container:
+### Hardware Acceleration Enhancements
-```--device=/dev/dri:/dev/dri```
+This section lists the enhancements we have made for hardware acceleration in this image specifically.
-We will automatically ensure the abc user inside of the container has the proper permissions to access this device.
+#### OpenMAX (Raspberry Pi)
-Hardware acceleration users for Nvidia will need to install the container runtime provided by Nvidia on their host, instructions can be found here:
+Hardware acceleration users for Raspberry Pi MMAL/OpenMAX will need to mount their `/dev/vcsm` and `/dev/vchiq` video devices inside of the container and their system OpenMAX libs by passing the following options when running or creating the container:
-https://github.com/NVIDIA/nvidia-docker
-We automatically add the necessary environment variable that will utilise all the features available on a GPU on the host. Once nvidia-docker is installed on your host you will need to re/create the docker container with the nvidia container runtime `--runtime=nvidia` and add an environment variable `-e NVIDIA_VISIBLE_DEVICES=all` (can also be set to a specific gpu's UUID, this can be discovered by running `nvidia-smi --query-gpu=gpu_name,gpu_uuid --format=csv` ). NVIDIA automatically mounts the GPU and drivers from your host into the emby docker.
-### OpenMAX (Raspberry Pi)
-Hardware acceleration users for Raspberry Pi OpenMAX will need to mount their /dev/vchiq video device inside of the container and their system OpenMax libs by passing the following options when running or creating the container:
```
+--device=/dev/vcsm:/dev/vcsm
--device=/dev/vchiq:/dev/vchiq
-v /opt/vc/lib:/opt/vc/lib
```
-### V4L2 (Raspberry Pi)
+#### V4L2 (Raspberry Pi)
+Hardware acceleration users for Raspberry Pi V4L2 will need to mount their `/dev/video1X` devices inside of the container by passing the following options when running or creating the container:
-Hardware acceleration users for Raspberry Pi V4L2 will need to mount their /dev/video1X devices inside of the container by passing the following options when running or creating the container:
```
--device=/dev/video10:/dev/video10
--device=/dev/video11:/dev/video11
--device=/dev/video12:/dev/video12
```
+readme_hwaccel: true
# changelog
changelogs:
- {date: "12.02.24:", desc: "Use universal hardware acceleration blurb"}
- {date: "19.01.24:", desc: "Fix tonemapping so it's done with hw acceleration."}
- {date: "06.07.23:", desc: "Deprecate armhf. As announced [here](https://www.linuxserver.io/blog/a-farewell-to-arm-hf)"}
- {date: "08.06.23:", desc: "Fix package extraction so it doesn't change /tmp perms."}
