Support daemonset to use nodelocal dns #173

Open · wants to merge 4 commits into master

Conversation

LeoQuote commented Sep 13, 2024

Why is this pull request needed and what does it do?

This PR introduces a daemonset deployment mode, which allows CoreDNS to be deployed on every node. With some additional configuration, all pods can then use the DNS server on their local node instead of the cluster-internal service, minimizing DNS latency.
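As an illustration of that additional configuration (it is not something this chart does for you), pods can be pointed at the node-local resolver by setting the kubelet's cluster DNS to the virtual IP used in the example values further down. A minimal sketch, assuming that VIP (10.9.1.53) and a kubelet configured via a KubeletConfiguration file:

```yaml
# Minimal kubelet configuration sketch (assumes the Cilium frontend VIP
# 10.9.1.53 from the example values below). Pods with dnsPolicy: ClusterFirst
# will then send DNS queries to the node-local CoreDNS via the redirected VIP.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
  - "10.9.1.53"        # virtual IP that Cilium redirects to the local CoreDNS pod
clusterDomain: cluster.local
```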

To make it simple to redirect traffic to the local pod, I have embedded a Cilium Local Redirect Policy in the Helm chart. Through this Custom Resource (CR), traffic sent to a virtual IP is automatically forwarded to the CoreDNS instance on the current node. If a cluster does not use Cilium, this feature can be skipped and the redirection handled by other means.
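For reference, the embedded CR follows Cilium's documented address-matcher form. The sketch below is illustrative rather than the chart's exact rendered output; the namespace, selector label, and port names are assumptions matching the example values further down:

```yaml
# Illustrative CiliumLocalRedirectPolicy: redirect traffic sent to the virtual IP
# 10.9.1.53 on port 53 to the CoreDNS pod running on the same node.
# Assumes the node-local CoreDNS pods carry the label k8s-app: nodelocal-dns
# (see k8sAppLabelOverride in the example values) and expose named ports dns/dns-tcp.
apiVersion: cilium.io/v2
kind: CiliumLocalRedirectPolicy
metadata:
  name: nodelocal-dns
  namespace: kube-system
spec:
  redirectFrontend:
    addressMatcher:
      ip: "10.9.1.53"
      toPorts:
        - port: "53"
          name: dns
          protocol: UDP
        - port: "53"
          name: dns-tcp
          protocol: TCP
  redirectBackend:
    localEndpointSelector:
      matchLabels:
        k8s-app: nodelocal-dns
    toPorts:
      - port: "53"
        name: dns
        protocol: UDP
      - port: "53"
        name: dns-tcp
        protocol: TCP
```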

Additionally, in previous implementations, many resources were tied to .Values.deployment.enabled. I have removed these bindings to make these resources more independent and compatible with the daemonset. However, this change could potentially be breaking. If you need me to add related documentation, please let me know.

Which issues (if any) are related?

fix #86
provides a solution for kubernetes/dns#594

Checklist:

  • I have bumped the chart version according to versioning.
  • I have updated the chart changelog with all the changes that come with this pull request according to changelog.
  • Any new values are backwards compatible and/or have sensible defaults.
  • I have signed off all my commits as required by DCO.

Changes are automatically published when merged to main. They are not published on branches.

Note on DCO

If the DCO action in the integration test fails, one or more of your commits are not signed off. Please click on the Details link next to the DCO action for instructions on how to resolve this.

LeoQuote (Author) commented Sep 13, 2024

Here's an example of Helm values if you want to use this mode.

Notable settings:

  1. Change k8sAppLabelOverride to prevent confusion with the cluster CoreDNS.
  2. Configure the forward plugin to forward DNS queries to the cluster CoreDNS:
     - name: forward
       # replace this IP with your cluster CoreDNS service IP
       parameters: . 172.23.0.53
       configBlock: |-
         force_tcp
  3. Disable the deployment to avoid duplication.
  4. Enable the Cilium local redirect policy to redirect requests to local pods.

Full values:


resources:
  # no memory limit set, to avoid OOM kills caused by a limit
  requests:
    cpu: 50m
    memory: 128Mi

serviceType: "ClusterIP"

k8sAppLabelOverride: "nodelocal-dns"

# enable monitoring
prometheus:
  service:
    enabled: true
  monitor:
    enabled: true

service:
  clusterIP: "None"

rbac:
  create: false

serviceAccount:
  create: false

# isClusterService specifies whether the chart should be deployed as a cluster-service or a normal k8s app.
isClusterService: true

# Optional priority class to be used for the coredns pods. Used for autoscaler if autoscaler.priorityClassName not set.
priorityClassName: "system-node-critical"

# Default zone is what Kubernetes recommends:
# https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns-configmap-options
servers:
- zones:
  - zone: .
  port: 53
  plugins:
  - name: errors
  # Serves a /health endpoint on :8080, required for livenessProbe
  - name: health
    configBlock: |-
      lameduck 5s
  # Serves a /ready endpoint on :8181, required for readinessProbe
  - name: ready
  # Serves a /metrics endpoint on :9153, required for serviceMonitor
  - name: prometheus
    parameters: 0.0.0.0:9153
  - name: forward
    # replace this IP with your cluster CoreDNS service IP
    parameters: . 172.23.0.53
    configBlock: |-
      force_tcp
  - name: cache
    parameters: 30
  - name: loop
  - name: reload
  - name: loadbalance

# To use the livenessProbe, the health plugin needs to be enabled in CoreDNS' server config
livenessProbe:
  enabled: true
  initialDelaySeconds: 60
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 5
  successThreshold: 1
# To use the readinessProbe, the ready plugin needs to be enabled in CoreDNS' server config
readinessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 5
  successThreshold: 1


# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
# nodeSelector:
#   douban.com/reserved: sa
# tolerations:
#   - operator: Exists

tolerations: 
- key: node-role.kubernetes.io/master
  effect: NoSchedule


## Configure a cluster-proportional-autoscaler for CoreDNS
# See https://github.com/kubernetes-incubator/cluster-proportional-autoscaler
autoscaler:
  # Enable the cluster-proportional-autoscaler
  enabled: false

deployment:
  enabled: false

daemonset:
  enabled: true

cilium:
  localRedirectPolicy:
    enabled: true
    frontend:
      # a virtual IP that is not occupied in your network
      ip: "10.9.1.53"

hagaibarel (Collaborator) commented Sep 15, 2024

Hi,

The idea of using CoreDNS as a nodelocal DNS was thoroughly discussed in:

and rejected due to various limitations; see the threads in the linked issues.

I don't see any new reason to add support for it now; the right solution is to use a proper NodeLocal DNSCache.

LeoQuote (Author) commented

First, I would like to break the points of contention down into two questions: whether CoreDNS should be deployed as a DaemonSet at all, and whether this project should support that deployment method.

Kubernetes administrators want a local DNS service to act as a DNS cache, which is exactly what the DaemonSet deployment mode provides. I believe CoreDNS is capable of providing a local DNS service; see the source code of node-cache, which is essentially a streamlined version of CoreDNS with a reduced plugin set. I have also summarized the differences between node-cache and CoreDNS in kubernetes/dns#594 (comment).

Even if administrators, for various reasons, prefer node-cache or any other DNS server derived from CoreDNS, they can still use the CoreDNS Helm chart to manage the related manifests, since the configuration files are identical.
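For example, assuming the chart's standard image values, swapping in node-cache could be as simple as overriding the image while keeping the Corefile-style configuration; the tag below is only an example, and any extra command-line flags node-cache needs are not covered by this snippet:

```yaml
# Hypothetical values override: run node-cache instead of stock CoreDNS through
# the same chart. The repository is the upstream node-cache image; the tag is an
# example, and additional node-cache flags are not shown here.
image:
  repository: registry.k8s.io/dns/k8s-dns-node-cache
  tag: "1.23.1"
```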

LeoQuote (Author) commented

Lastly, I don't believe this will add much burden for the chart maintainers, since it only deploys CoreDNS to every node and follows the same configuration style as the existing chart.

LeoQuote (Author) commented Nov 26, 2024

For anyone who wants to use this version of the chart, it is available at https://artifacthub.io/packages/helm/douban/coredns; the source code is at https://github.com/douban/charts/tree/master/charts/coredns.

cc @stenreijers @gecube

Successfully merging this pull request may close these issues:

  • CoreDNS currently only supports Deployment mode, but would you consider supporting Daemonset mode as well? (#86)