How do I enable NodeLocal DNS cache on EKS?

Posted 2025-01-22 10:24:28

I am trying to implement NodeLocal DNSCache on my EKS cluster.
I have deployed the manifest (from the master branch) referenced at https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/

I need help with a few things here

  1. The Kubernetes official doc says that, if using kube-proxy in IPVS mode, the --cluster-dns flag to kubelet needs to be modified to use the IP that NodeLocal DNSCache is listening on. Otherwise, there is no need to modify the value of the --cluster-dns flag, since NodeLocal DNSCache listens on both the kube-dns service IP as well as the node-local address.

How can I find out which mode my kube-proxy is running in on EKS?

  2. How can I verify if the DNS requests are going to nodeLocalDns?


Answered by 拒绝两难, 2025-01-29 10:24:28


To enable node-local-dns-cache on EKS just run the following 2 commands:

  1. helm repo add deliveryhero https://charts.deliveryhero.io/
  2. helm upgrade --install node-local-dns-cache deliveryhero/node-local-dns
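
(Optional) If your cluster's kube-dns service doesn't use the chart's default ClusterIP of 172.20.0.10 (explained further down), here is a hedged sketch of how you could check it and override the chart's config.dnsServer value; confirm the value name against the chart's documentation before relying on it:

# Print the ClusterIP of kube-dns; on EKS this is normally 172.20.0.10
kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'

# Only needed if the IP printed above differs from 172.20.0.10
helm upgrade --install node-local-dns-cache deliveryhero/node-local-dns \
  --set config.dnsServer=<your-kube-dns-cluster-ip>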

The above automagically accomplishes both installation and configuration:

  1. It installs a daemonset of coredns running in cache mode.
    (Cache Mode: Valid DNS lookups will be cached for 30 seconds, up
    to a capacity of 9984 entries. Invalid DNS names will be cached for 5 seconds.) (permalink source) See the Corefile sketch just after this list for where those numbers come from.
  2. It effectively reconfigures the traffic flow to utilize the newly installed DNS cache.
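
For reference, the 30-second / 9984-entry / 5-second numbers come from the CoreDNS cache plugin. A rough sketch of the relevant Corefile stanza, based on the upstream NodeLocal DNSCache template rather than the chart's exact rendered output, so treat it as illustrative only:

# Illustrative Corefile fragment (upstream node-cache template).
# __PILLAR__CLUSTER__DNS__ is substituted at startup with the upstream
# kube-dns address.
cluster.local:53 {
    errors
    cache {
        success 9984 30
        denial 9984 5
    }
    bind 169.254.20.10 172.20.0.10
    forward . __PILLAR__CLUSTER__DNS__ {
        force_tcp
    }
    prometheus :9253
}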

How does it work? (Short Answer)
JFM (Just F'n Magic), clever hacks, and a solid installation UX (user experience), thanks to the engineers at Delivery Hero.

How and why does it work? (Long Answer)
Well, hello there, weary traveler! You say you don't trust solutions powered by computer wizard sourcery that automagically work? Well, that's fine; we can go over the science of how it works so you can trust it.

  • Under Normal Circumstances:
    • A pod's /etc/resolv.conf will have the ClusterIP of the kube-dns service in the kube-system namespace.
    • That file can be configured by passing flags to kubelet.
    • Usually you'd need to update kubelet's config so that /etc/resolv.conf points to a newly configured DNS cache, and that'd be problematic / relatively hard, because normally you can't easily tweak kubelet's flags using kubectl.
  • An explanation of how this works does exist in the official Kubernetes docs. Unfortunately it's a bit hard to understand, so I'll paraphrase it so it's easier to grok:
    • Kubernetes Distributions that use iptables can make use of a clever hack that vaguely reminds me of a MITM (man in the middle) Attack where a Gratuitous ARP is used to intercept traffic. In this case the automagic solution effectively installs the node local DNS cache as a friendly MITM utility service.
    • The default helm value config.dnsServer=172.20.0.10 matches the default ClusterIP of EKS's kube-dns service.
    • The daemonset has hostNetwork: true and a securityContext with the NET_ADMIN capability, which allows it to bind IP addresses on the EC2 worker nodes.
    • The way the clever hack works is that the EKS Node Local DNS Cache binds 172.20.0.10, which is the same IP as the kube-dns service. Because it's doing a node-local, privileged bind of the same IP, it's able to do 3 fancy things:
      1. Intercept traffic destined for the kube-dns service, and route it to its node-local-dns-cache.
      2. Still allow itself to route to the kube-dns service, and use it as an upstream DNS server.
      3. Because the IP doesn't change, you don't need to reconfigure kubelet to use it, and uninstalling effectively removes the MITM interception, so DNS traffic goes back to using the kube-dns service like before.
  • Usually no flags/helm value overrides need to be passed in the above helm install command, because the helm chart's default values assume that EKS's common configuration defaults are used. (EKS requires legacy iptables, defaults to iptables mode for kube-proxy, and 172.20.0.10 is the default ClusterIP of kube-dns on EKS clusters.) (In theory the helm chart could work on non-EKS distros if helm values were tweaked and the prerequisite assumptions were valid or accounted for.)
  • Note: The daemonset will run the node-local-dns-cache on EC2-backed worker nodes. Fargate nodes don't support daemonsets, so their DNS traffic will still go directly to the original kube-dns service.
  • Now that you know EKS's implementation of Node Local DNS Cache is basically a friendly MITM utility service, the architecture diagram linked from the official docs will make a lot more sense. (Note: I think 169.254.20.10 is either part of a workaround implementation detail that allows kube-dns to be used as an upstream DNS server, or it's there to support non-iptables-based implementations like IPVS.) A few commands to see these mechanics (and the kube-proxy mode) for yourself are sketched below.
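
To see these mechanics (and the kube-proxy mode from Q1) for yourself, here is a rough sketch. The ConfigMap name, daemonset name, and interface name are assumptions based on common EKS/upstream defaults, so adjust them to whatever your cluster actually shows:

# 1. Which mode is kube-proxy running in? Either of these usually tells you
#    (the ConfigMap name is the usual EKS default and may differ):
kubectl -n kube-system get configmap kube-proxy-config -o yaml | grep -i mode
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=-1 | grep -i proxier

# 2. Confirm the cache daemonset runs with host networking and NET_ADMIN
#    (name/namespace depend on your helm install; check `kubectl get ds -A`):
kubectl get ds node-local-dns-cache -o yaml | grep -E 'hostNetwork|NET_ADMIN'

# 3. On a worker node, node-cache binds the kube-dns ClusterIP on a dummy
#    interface; "nodelocaldns" is the upstream default interface name:
kubectl debug node/<node-name> -it --image=docker.io/nicolaka/netshoot -- \
  ip addr show nodelocaldns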

Q2: How can I verify if the DNS requests are going to nodeLocalDns?
A2: Because the implementation details of their solution are based on a clever hack, it's hard to use normal methods to verify that Node Local DNS Cache is being used.
The following should be sufficient in terms of verification:

# Run from Laptop
# (The 1st command makes a utility pod run in the background)
# (3rd command runs `time dig zombo.com` from the 
# perspective of the pod running in the background)
kubectl run -it netshoot --image docker.io/nicolaka/netshoot -- /bin/bash -c "sleep 1000000000" &

kubectl get pod

kubectl exec -it netshoot -- time dig zombo.com
# ^-- Think of this as a quick and dirty benchmarking tool
#     You can run it before and after
#     then compare performance.
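
One caveat with the check above: because the cache binds the very same 172.20.0.10 IP, the SERVER line in dig's output looks identical with or without the cache, so latency is the quick-and-dirty signal (plain CoreDNS caches too, so for a definitive answer use the interface/daemonset checks sketched earlier). A small follow-up, with the daemonset name again being an assumption:

# Run the same lookup twice; with a warm node-local cache the second
# "Query time" should drop to roughly 0 msec
kubectl exec -it netshoot -- sh -c 'dig zombo.com | grep "Query time"; dig zombo.com | grep "Query time"'

# Confirm a cache pod is actually Running on the same node as netshoot
kubectl get pod netshoot -o wide
kubectl get pods -A -o wide | grep node-local-dns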

Update: If you need/want to optimize CoreDNS, you should also turn on CoreDNS autoscaling (configure a node-proportional autoscaler, so that CoreDNS replicas scale up based on node count; IDK why this isn't an EKS default, but you can make it a default for your clusters.)

EKS CoreDNS add-on configuration override (apply it with the commands shown after the JSON below):

{
  "autoScaling": {
    "enabled": true,
    "minReplicas": 2,
    "maxReplicas": 100
  },
  "affinity": {
    "nodeAffinity": {
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          {
            "matchExpressions": [
              {
                "key": "kubernetes.io/os",
                "operator": "In",
                "values": ["linux"]
              },
              {
                "key": "kubernetes.io/arch",
                "operator": "In",
                "values": ["amd64", "arm64"]
              }
            ]
          }
        ]
      }
    },
    "podAntiAffinity": {
      "requiredDuringSchedulingIgnoredDuringExecution": [
        {
          "labelSelector": {
            "matchExpressions": [
              {
                "key": "k8s-app",
                "operator": "In",
                "values": ["kube-dns"]
              }
            ]
          },
          "topologyKey": "kubernetes.io/hostname"
        }
      ]
    }
  }
}
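
To apply a configuration override like the JSON above to the EKS-managed CoreDNS add-on, something along these lines should work; the add-on version, cluster name, and file name are placeholders:

# Check what configuration schema your coredns add-on version accepts
aws eks describe-addon-configuration --addon-name coredns \
  --addon-version <addon-version> --query configurationSchema --output text

# Apply the JSON above (saved locally) as the add-on's configuration values
aws eks update-addon --cluster-name <cluster-name> --addon-name coredns \
  --configuration-values file://coredns-addon-values.json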