# BKM: RTP Streaming under Kubernetes
RTP streaming (from an IP camera) requires opening multiple UDP ports. This presents a challenge under Kubernetes, where a `Service` must explicitly declare any open ports. To make things worse, if we use `POD` replication, we need to make sure that each `POD` instance operates on a different range of ports.

Under Kubernetes, we can declare a UDP port in multiple ways. We can declare the UDP port directly at the `POD` level:
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 1
  template:
    spec:
      containers:
      - ports:
        - containerPort: 32546
          protocol: UDP
          hostPort: 32546
```
We can also define a `Service` and declare the port at the `Service` level:
```yaml
apiVersion: v1
kind: Service
spec:
  ports:
  - port: 32546
    protocol: UDP
```
There are a few drawbacks with the above approaches:

- **Multiple Ports**: As of Kubernetes v1.16, there is no support for declaring a range of ports. With RTP, we need to open multiple UDP ports (each serving a video or audio stream). We can work around this issue by using `m4` to generate the deployment script:
  ```m4
  apiVersion: v1
  kind: Service
  spec:
    ports:
  forloop(`PORT',32546,32550,`dnl
    - port: defn(`PORT')
      protocol: UDP
      name: `udp'defn(`PORT')
  ')dnl
  ```

  See Also: forloop
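  For reference, this is the `Service` the template above expands to (e.g., via `m4 service.yaml.m4 > service.yaml`, where the file name is hypothetical and `forloop` is defined as in the GNU m4 examples):

  ```yaml
  apiVersion: v1
  kind: Service
  spec:
    ports:
    - port: 32546
      protocol: UDP
      name: udp32546
    - port: 32547
      protocol: UDP
      name: udp32547
    - port: 32548
      protocol: UDP
      name: udp32548
    - port: 32549
      protocol: UDP
      name: udp32549
    - port: 32550
      protocol: UDP
      name: udp32550
  ```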
- **Replica Conflict**: If we plan to use `replicas`, declaring ports at the `Service` or `POD` level is not a good idea. The declared ports will conflict when replicated `PODs` are scheduled on the same node. The workaround is to explicitly control which replicated instance uses which port (range), as sketched below, which defeats the purpose of using `replicas`.
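A minimal sketch of that workaround, using hypothetical names: each stream gets its own single-replica `Deployment` pinned to a distinct UDP port (range), repeated for every instance:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rtp-receiver-0   # hypothetical; a second Deployment would pin a different port
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rtp-receiver-0
  template:
    metadata:
      labels:
        app: rtp-receiver-0
    spec:
      containers:
      - name: receiver
        image: ...
        ports:
        # Explicitly pin this instance's UDP port on the host.
        - containerPort: 32546
          protocol: UDP
          hostPort: 32546
```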
So the only viable (and, as it happens, the most reliable) solution is to use the host network.

Declaring a `POD` to run on the host network exposes its containers to the host network. There is no need to additionally declare any UDP ports. As long as the RTP receiving code can handle port conflicts, we can use `replicas` to run multiple `POD` instances on the same node:
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 4
  template:
    spec:
      enableServiceLinks: false
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: ...
        image: ...
```
where the `hostNetwork` setting exposes the `POD` to the host network. The `dnsPolicy` setting is critical here: it allows the `POD` to still resolve cluster DNS names and thus communicate back to any `PODs` or `Services` within the cluster network.
See Also: analytics.yaml.m4
Under Kubernetes, container instances exposed to the host network share the host's `localhost` network. We need to make sure that no container process binds locally to the `localhost` network; with `replicas`, such bindings may conflict with each other or cause unexpected behavior.
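If the receiving process takes an explicit bind address, one option is to feed it the node IP through the Kubernetes downward API, so that it binds to the host interface rather than `localhost`. A minimal sketch, assuming the receiver reads a hypothetical `BIND_ADDRESS` environment variable:

```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 4
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: ...
        image: ...
        env:
        # status.hostIP resolves to the node's IP at runtime (downward API).
        - name: BIND_ADDRESS   # hypothetical knob of the RTP receiver
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
```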