use upstream coredns chart instead of fork #1743

Draft · wants to merge 1 commit into master from coredns/dependency

Conversation

@abaguas abaguas commented Oct 1, 2024

CoreDNS is a core component of the K8GB application. Until now we installed it from a fork of the official Helm chart, carrying the diff shown below. The fork was necessary because Kubernetes did not support Services of type LoadBalancer with both UDP and TCP ports, and because running K8GB on a non-privileged port reduces the attack surface.

Since version 1.26, Kubernetes supports Services of type LoadBalancer with mixed protocols.
The coredns Helm chart has also evolved over the years, and with this contribution we are able to use it without opening privileged ports.
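
For illustration (not part of this PR), this is the kind of mixed-protocol Service that Kubernetes now accepts; the port entries mirror the diff below, while the name and selector are placeholders:

  apiVersion: v1
  kind: Service
  metadata:
    name: k8gb-coredns                  # placeholder name
  spec:
    type: LoadBalancer
    selector:
      app.kubernetes.io/name: coredns   # placeholder selector
    ports:
    - {port: 53, targetPort: 5353, protocol: UDP, name: udp-5353}
    - {port: 53, targetPort: 5353, protocol: TCP, name: tcp-5353}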

Generated configuration

The configuration generated by the upstream chart differs only in the following ways:

  • Ports exposed by the coredns service (adds the TCP port)
<   - {port: 53, targetPort: 5353, protocol: UDP, name: udp-5353}
---
>   - {"name":"udp-5353","port":53,"protocol":"UDP","targetPort":5353}
>   - {"name":"tcp-5353","port":53,"protocol":"TCP","targetPort":5353}
  • Ports exposed by the coredns container (allows prometheus metrics scraping)
<         - {containerPort: 5353, protocol: UDP, name: udp-5353}
<         - {containerPort: 5353, protocol: TCP, name: tcp-5353}
---
>         - {"containerPort":5353,"name":"udp-5353","protocol":"UDP"}
>         - {"containerPort":5353,"name":"tcp-5353","protocol":"TCP"}
>         - {"containerPort":9153,"name":"tcp-9153","protocol":"TCP"}
  • coredns container security context (to disable the NET_BIND_SERVICE capability; values sketch after the diff)
>         securityContext:
>           capabilities:
>             add: []
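
For reference, the security context diff above can be produced with a values override along these lines (a sketch: the upstream coredns chart adds NET_BIND_SERVICE by default, and an empty add list drops it):

  coredns:
    securityContext:
      capabilities:
        add: []   # nothing to add; port 5353 is non-privileged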

Abusing the servers block

Since the service ports and the Prometheus port configuration are taken from the servers block, we need to configure it as follows:

  servers:
  - port: 5353
    servicePort: 53
    plugins:
    - name: prometheus
      parameters: 0.0.0.0:9153

This opens exactly the ports we need on the CoreDNS pod and Service, and nothing more.
In the coredns chart this configuration would also be used to render the ConfigMap with the served zones, but we create that config ourselves (by setting coredns.deployment.skipConfig=true) so we can reuse Helm values; see the sketch below.
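
To make the split concrete: only the listener port and the prometheus line below come from the servers block above; the zone-serving plugins are rendered by K8GB itself. A minimal sketch of such a Corefile:

  .:5353 {
      prometheus 0.0.0.0:9153
      # zone-serving plugins rendered by K8GB go here
  }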

User configuration

There are no changes to the user configuration.

Conclusion

With this change we are now able to use the upstream chart instead of a fork. Unfortunately there is some configuration overhead on our side, but we gain a lot: we can profit from all the features the coredns community offers, expose TCP port 53 publicly (fixes #1741), and expose the metrics port on the container, which allows Prometheus metrics scraping (a quick check is sketched below).
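
A quick way to sanity-check the newly exposed metrics port (the deployment name is assumed to match the k8gb-coredns Service referenced in the upgrade step below):

kubectl -n k8gb port-forward deploy/k8gb-coredns 9153:9153 &
curl -s http://localhost:9153/metrics | grep coredns_dns_requests_total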


Others

k3d services

By enabling k3d's load balancer we enable routing to Kubernetes Services. This means we can use Services of type LoadBalancer or ClusterIP; we no longer need to rely on NodePort Services, or on making sure a pod is running on the node exposing the port. CoreDNS can be queried exactly the same way as it was before:

➜  ~ dig -p 5054 @localhost localtargets-roundrobin.cloud.example.com +short
172.19.0.11
172.19.0.12
➜  ~ dig -p 5054 @localhost localtargets-roundrobin.cloud.example.com +tcp +short
172.19.0.11
172.19.0.12
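
For reference, roughly the k3d configuration behind the queries above; the 5054:53 mapping matches the dig commands, the rest of the snippet is a sketch:

  apiVersion: k3d.io/v1alpha5
  kind: Simple
  ports:
  - port: 5054:53/udp    # host port 5054 -> load balancer port 53 (UDP)
    nodeFilters:
    - loadbalancer
  - port: 5054:53/tcp    # same mapping for TCP queries
    nodeFilters:
    - loadbalancer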

In a follow-up PR I will try to extend this setup to expose the Istio ingress gateway and the NGINX ingress controller.

upgrade testing

When testing the upgrade path, the CoreDNS Service created by the forked chart has to be deleted first so the upstream chart can recreate it:

kubectl -n k8gb delete svc k8gb-coredns --ignore-not-found

creating namespaces with kubectl apply

While reading the upgrade testing logs I saw errors from creating namespaces with kubectl create because the namespaces already existed. By using kubectl apply these errors are no longer thrown.
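
The idempotent pattern, sketched here with an illustrative namespace name, pipes a client-side dry run into apply so the command succeeds whether or not the namespace exists:

kubectl create namespace test-gslb --dry-run=client -o yaml | kubectl apply -f -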

istio version pinning

While updating the Makefile I noticed the Istio version of the ingress controller was not pinned, so I pinned it.
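
The pin boils down to something like the following; the version number and target name are illustrative, not the exact Makefile contents:

ISTIO_VERSION ?= 1.23.2

deploy-istio:
	curl -L https://istio.io/downloadIstio | ISTIO_VERSION=$(ISTIO_VERSION) sh -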


k0da commented Oct 1, 2024

@abaguas This is cool. It's just that the upstream chart never worked with ServiceType=LoadBalancer: it had a conflict because mixed-protocol support wasn't there, and I never tested this feature gate.

abaguas commented Oct 8, 2024

> @abaguas This is cool. It's just that the upstream chart never worked with ServiceType=LoadBalancer: it had a conflict because mixed-protocol support wasn't there, and I never tested this feature gate.

Thank you @k0da! I just tested it in my cluster and I can have a LoadBalancer Service with both UDP and TCP ports.

abaguas commented Nov 11, 2024

This PR was getting slightly out of control with many little changes. I decided to split it into two PRs: one for the chart change (#1776) and another for the e2e testing changes (coming soon).

Successfully merging this pull request may close these issues.

CoreDNS AWS NLB health check not getting healthy