

cc-proxy

cc-proxy works alongside the Clear Containers runtime and shim to provide a VM-based OCI runtime solution.

cc-proxy is a daemon offering access to the agent to both the runtime and shim processes. Prior to Clear Containers 3.0.10, a single cc-proxy instance per host served all VMs; since 3.0.10, one proxy instance is launched per virtual machine for improved isolation.

High-level Architecture Diagram

  • The agent interface consists of:
    • A control channel on which the agent API is delivered.
    • An I/O channel onto which the stdin/stdout/stderr streams of the processes running inside the VM are multiplexed.
  • cc-proxy's main role is to:
    • Arbitrate access to the agent control channel between all the instances of the OCI runtimes and cc-shim.
    • Route the I/O streams between the various shim instances and the agent.

cc-proxy itself has an API to set up the route to the hypervisor/agent and to forward agent commands. This API is implemented as a small JSON RPC protocol over an AF_UNIX socket located at: ${localstatesdir}/run/clear-containers/proxy.sock
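
As a rough illustration, the Go sketch below dials that socket and writes a JSON payload. It is only a sketch: the actual frame format, command names, and field names are defined by the api package, and the "hello" command and request struct used here are hypothetical placeholders.

// Illustrative only: dials the proxy socket and writes a JSON payload.
// The real framing and command names are defined by the api package.
package main

import (
	"encoding/json"
	"log"
	"net"
)

// request mirrors the general shape of a JSON RPC command: a command
// name plus command-specific data. Field names here are illustrative.
type request struct {
	Command string          `json:"id"`
	Data    json.RawMessage `json:"data,omitempty"`
}

func main() {
	// Default install location; substitute your ${localstatesdir} path.
	conn, err := net.Dial("unix", "/var/run/clear-containers/proxy.sock")
	if err != nil {
		log.Fatalf("cannot connect to proxy: %v", err)
	}
	defer conn.Close()

	payload, err := json.Marshal(request{Command: "hello"})
	if err != nil {
		log.Fatalf("marshal failed: %v", err)
	}
	// The real protocol wraps this payload in a frame header; see the
	// api package documentation for the authoritative encoding.
	if _, err := conn.Write(payload); err != nil {
		log.Fatalf("write failed: %v", err)
	}
}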

Protocol

The protocol used to interact with the proxy is described in the documentation of the api package.

systemd integration

When compiling in the presence of the systemd pkg-config file, two systemd unit files are created and installed.

  • cc-proxy.service: the usual service unit file
  • cc-proxy.socket: the socket activation unit

The proxy doesn't have to run all the time, just when a Clear Container is running. Socket activation can be used to start the proxy when a client connects to the socket for the first time.

After having run make install, socket activation is enabled with:

sudo systemctl enable cc-proxy.socket
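
For context, the sketch below shows how a Go daemon can consume a systemd-activated socket, falling back to binding the socket itself when started directly. It uses the go-systemd activation package and is not cc-proxy's actual implementation; the socket path is just the default install location.

// Hedged sketch of supporting both socket activation and direct startup.
package main

import (
	"log"
	"net"

	"github.com/coreos/go-systemd/v22/activation"
)

func proxyListener() (net.Listener, error) {
	// When started via cc-proxy.socket, systemd passes in the
	// already-bound socket; Listeners() returns it as a net.Listener.
	listeners, err := activation.Listeners()
	if err == nil && len(listeners) == 1 {
		return listeners[0], nil
	}
	// Otherwise create the socket ourselves.
	return net.Listen("unix", "/var/run/clear-containers/proxy.sock")
}

func main() {
	l, err := proxyListener()
	if err != nil {
		log.Fatalf("cannot listen: %v", err)
	}
	defer l.Close()
	log.Printf("listening on %s", l.Addr())
}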

The proxy can output log messages on stderr, which are automatically handled by systemd and can be viewed with:

journalctl -u cc-proxy -f

SELinux

To verify that SELinux is enforced, check the output of sestatus:

$ sestatus 
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          error (Permission denied)
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      30

If the SELinux status is enabled and the current mode is enforcing, you will need to build and install the cc-proxy SELinux policy.

Run the following commands as root:

cd selinux/
dnf install selinux-policy-devel rpm-build
make 
restorecon -R -v /run/cc-oci-runtime/proxy.sock
semodule -X 300 -i cc-proxy.pp.bz2
systemctl start cc-proxy.socket

Detailed information can be found in selinux/README.md.

Debugging

cc-proxy uses logrus for its log messages.

Logging verbosity can be configured through the -log command line parameter; try the -h option for more details.

$ sudo ./cc-proxy --log info

Additionally, the CC_PROXY_LOG_LEVEL environment variable can be used to set the log level. The -log command line parameter takes precedence over the environment variable.

$ sudo CC_PROXY_LOG_LEVEL=debug ./cc-proxy
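
That precedence rule can be pictured with the following Go sketch using logrus; it is not cc-proxy's actual code, and the "warn" default is only an assumption for illustration.

// Sketch of the precedence rule: the -log flag overrides CC_PROXY_LOG_LEVEL.
package main

import (
	"flag"
	"os"

	"github.com/sirupsen/logrus"
)

func main() {
	logFlag := flag.String("log", "", "log level: debug, info, warn, error")
	flag.Parse()

	// Start from the environment variable, then let the flag override it.
	levelName := os.Getenv("CC_PROXY_LOG_LEVEL")
	if *logFlag != "" {
		levelName = *logFlag
	}
	if levelName == "" {
		levelName = "warn" // assumed default, for illustration only
	}

	level, err := logrus.ParseLevel(levelName)
	if err != nil {
		logrus.Fatalf("invalid log level %q: %v", levelName, err)
	}
	logrus.SetLevel(level)
	logrus.Debug("debug logging enabled")
}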

The log level defines how verbose logging will be:

  • Level "info" will show the important events happening at the proxy interface and during the lifetime of a pod.
  • Level "debug" will dump the raw data going over the I/O channel and display the VM console logs. With clear VM images, this will show agent's stdout and stderr.