# Deployment & Implementation Guide
The quick start instructions in the Boulder README, using a docker-compose config, are only suitable for test environments. If you want to run a more production-like Boulder, you'll need to make some changes. In particular, `docker-compose up` runs a number of mock backends, like `ct-test-srv`, that are not needed in production. Also, if one component in the docker config dies, the whole thing dies. You'll want separate systemd units for each component, and you should ensure that failed units get restarted appropriately.
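As a starting point, a per-component systemd unit might look like the sketch below. The paths, user name, and config filename are illustrative assumptions, not Boulder defaults; adjust them to your installation.

```ini
# /etc/systemd/system/boulder-wfe2.service (hypothetical paths and user)
[Unit]
Description=Boulder WFE2 (ACME front end)
After=network-online.target
Wants=network-online.target

[Service]
User=boulder
ExecStart=/opt/boulder/bin/boulder-wfe2 --config /etc/boulder/wfe2.json
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
```

`Restart=on-failure` covers the "failed units get restarted" requirement; you'd create one such unit per component.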
The only components that need inbound access from the Internet are `ocsp-responder`, `boulder-wfe`, and `boulder-wfe2`. Additionally, `boulder-publisher` and `boulder-va` need outbound access to the Internet, to send certificates to CT logs and to perform validation requests, respectively.
The other components should be firewalled off. Additionally, each component exports a debug port offering metrics and profiling endpoints. The profiling endpoints create a potential for denial of service, since they can drive up resource usage. You should ensure that the publicly accessible hosts expose only the necessary ports, and those should be configured behind a reverse proxy like Nginx, Apache, or Caddy. In particular, a reverse proxy is the best way to provide HTTPS termination for the ACME services (`boulder-wfe` and `boulder-wfe2`).
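As one way to do HTTPS termination, a minimal Nginx server block is sketched below. The server name, certificate paths, and upstream address are assumptions; in particular, the WFE's listen address comes from your own config.

```nginx
# Hypothetical reverse proxy terminating TLS for boulder-wfe2.
server {
    listen 443 ssl;
    server_name acme.example.com;

    ssl_certificate     /etc/nginx/tls/acme.example.com.pem;
    ssl_certificate_key /etc/nginx/tls/acme.example.com.key;

    location / {
        # Assumed plain-HTTP listen address of boulder-wfe2.
        proxy_pass http://127.0.0.1:4001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```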
Boulder uses gRPC between components. See the Component Model for a description of what talks to what.
Boulder components use mutual TLS, issued from a special-purpose CA. In the testing environment this is provided by minica and checked into `test/grpc-creds`. If you want to generate your own CA and sign certificates for it, chdir to `test/grpc-creds`, remove `minica*`, and run `./generate.sh`.
The CA, cert, and key for each component are configured in the `tls` section of its JSON config file.
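For example, the `tls` section might look like the fragment below. The file paths are assumptions; check your component's config and the test configs in the repo for the exact field names used by your Boulder version.

```json
"tls": {
  "caCertFile": "/etc/boulder/grpc-creds/ca.pem",
  "certFile": "/etc/boulder/grpc-creds/sa.boulder/cert.pem",
  "keyFile": "/etc/boulder/grpc-creds/sa.boulder/key.pem"
}
```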
Boulder components that offer gRPC servers have a `clientNames` section in their config. This specifies a list of SANs, at least one of which must be present in the client certificate presented by any connecting client. This provides a layer of access control.
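A server-side gRPC config fragment might look like the following sketch; the listen address and the SAN values are illustrative assumptions based on the hostname-per-component convention in the test environment.

```json
"grpc": {
  "address": ":9095",
  "clientNames": [
    "wfe.boulder",
    "ra.boulder"
  ]
}
```

A client whose certificate contains none of the listed SANs will be rejected at connection time.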
Boulder components that act as gRPC clients are configured with a single `serverAddress` per service. That address should be a hostname; it can resolve to multiple IP addresses, in which case requests will round-robin across those addresses. Note that the gRPC server must offer a TLS certificate that contains the hostname in the `serverAddress` config.
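On the client side, a per-service fragment might look like this sketch; the service name, hostname, port, and timeout value are assumptions for illustration.

```json
"raService": {
  "serverAddress": "ra.boulder:9094",
  "timeout": "15s"
}
```

If `ra.boulder` resolves to several IPs, the client load-balances across them, and each backend must present a certificate valid for `ra.boulder`.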
The Boulder CA must be configured with at least one RSA issuing intermediate and one ECDSA issuing intermediate.
A list of components needing database access, and example permission grants, is in `test/sa_db_users.sql`. You'll need to customize this a bit; for instance, unless you are running all your components on `localhost`, you'll want to change the hostname for the grants.
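As an illustration of the kind of customization needed, the sketch below grants access from a hypothetical internal subnet instead of `localhost`. The user name, database name, and network are assumptions; base your real grants on `test/sa_db_users.sql`.

```sql
-- Hypothetical grant: allow the SA user to connect from 10.0.1.0/24
-- instead of only from localhost.
CREATE USER IF NOT EXISTS 'sa'@'10.0.1.%';
GRANT SELECT, INSERT, UPDATE ON boulder_sa.* TO 'sa'@'10.0.1.%';
```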
All log messages defined by the Baseline Requirements as auditable events, as well as others defined by the Boulder team, are marked with the string `[AUDIT]` at the beginning of their message payload. These messages should be specially filtered and retained per your audit requirements.
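A coarse way to separate audit events is a fixed-string match on the marker. The log lines below are made up for illustration; real Boulder log lines have a different shape.

```shell
# Keep only audit-tagged messages; the sample lines are hypothetical.
printf '%s\n' \
  'boulder-ra: [AUDIT] Certificate request - successful' \
  'boulder-wfe2: Served HTTP request' \
  | grep -F '[AUDIT]'
```

In production you'd put an equivalent match in your log pipeline (rsyslog, journald filters, or your log shipper) and route the matches to long-term retention.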
The test environment uses alternate, test-only ports for validating the HTTP and TLS-ALPN challenges. You'll want to change these in `boulder-va`'s config to 80 and 443, respectively.
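As an illustration only, the relevant fragment of `boulder-va`'s config might look like the following. The field names here are assumptions and vary by Boulder version; check the VA config struct in the source for the exact names.

```json
"va": {
  "portConfig": {
    "httpPort": 80,
    "tlsPort": 443
  }
}
```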
Also, the VA in the test environment talks to a mocked-out Google Safe Browsing endpoint; you'll want to point that at the live Google Safe Browsing endpoint. Alternatively, you can keep running the mock server from `test/gsb-test-srv` if you never want to block domains based on GSB results.
The VA config has a list of DNS resolvers (`dnsResolvers`) to use when looking up validation hostnames, TXT records, and CAA records. You'll want to change this to your own set of resolvers. Note that DNS is completely critical to certificate validation, so you should ensure these resolvers are controlled by you and have good security settings. For instance, it may be tempting to set these to public resolvers like 8.8.8.8, but this is unwise because (a) you don't control that resolver, and (b) the network path between your Boulder instance and that resolver may be unsafe.
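The `dnsResolvers` setting is a list of address:port pairs; a fragment might look like the sketch below, where the addresses are placeholder examples standing in for recursive resolvers you operate yourself.

```json
"dnsResolvers": [
  "10.0.0.53:53",
  "10.0.1.53:53"
]
```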