"No cluster found for name: mvp-eks."

I'm getting this error when trying to deploy my node group:

│ Error: error creating EKS Node Group (mvp-eks:mvp-node-group): ResourceNotFoundException: No cluster found for name: mvp-eks.
│ {
│   RespMetadata: {
│     StatusCode: 404,
│     RequestID: "160a391e-85f1-48b1-af8d-519afece70d3"
│   },
│   Message_: "No cluster found for name: mvp-eks."
│ }
│
│   with aws_eks_node_group.nodes_eks,
│   on main.tf line 275, in resource "aws_eks_node_group" "nodes_eks":
│  275: resource "aws_eks_node_group" "nodes_eks" {
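
In case it helps narrow things down, the same lookup Terraform performs can be reproduced with the CLI (the region here is a placeholder, since my provider block isn't shown):

aws eks describe-cluster --name mvp-eks --region us-east-1 --query cluster.status

I mention it because a cluster that's active in one region would still 404 from a provider pointed at another region or account.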

It's strange because if I go to my AWS console and check for my EKS cluster "mvp-eks", it's there and in an active status. I'd imagine the configuration in one of those two resource blocks is wrong? Here's my VPC code, which consists of public subnets for the NAT gateways and a bastion host, as well as private subnets where I want my EKS nodes to be deployed:

resource "aws_vpc" "vpc" {
  cidr_block = "10.1.0.0/16"

  tags = {
    Name = "${var.name}-vpc"
  }
}

resource "aws_subnet" "public_subnet" {
  count                   = length(var.azs)
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = var.public_cidrs[count.index]
  availability_zone       = var.azs[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "${var.name}-public-subnet-${count.index + 1}"
  }
}

resource "aws_subnet" "private_subnet" {
  count                   = length(var.azs)
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = var.private_cidrs[count.index]
  availability_zone       = var.azs[count.index]
  map_public_ip_on_launch = false

  tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"           = "1"
  }
}

resource "aws_internet_gateway" "internet_gateway" {
  vpc_id = aws_vpc.vpc.id

  tags = {
    Name = "${var.name}-internet-gateway"
  }
}

resource "aws_route_table" "public_rt" {
  vpc_id = aws_vpc.vpc.id

  tags = {
    Name = "${var.name}-public-rt"
  }
}

resource "aws_route" "default_route" {
  route_table_id         = aws_route_table.public_rt.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.internet_gateway.id
}

resource "aws_route_table_association" "public_assoc" {
  count          = length(var.public_cidrs)
  subnet_id      = aws_subnet.public_subnet[count.index].id
  route_table_id = aws_route_table.public_rt.id
}

resource "aws_eip" "nat_eip" {
  count      = length(var.public_cidrs)
  vpc        = true
  depends_on = [aws_internet_gateway.internet_gateway]

  tags = {
    Name = "${var.name}-nat-eip-${count.index + 1}"
  }
}

resource "aws_nat_gateway" "nat_gateway" {
  count         = length(var.public_cidrs)
  allocation_id = aws_eip.nat_eip[count.index].id
  subnet_id     = aws_subnet.public_subnet[count.index].id
  depends_on    = [aws_internet_gateway.internet_gateway]

  tags = {
    Name = "${var.name}-NAT-gateway-${count.index + 1}"
  }
}
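
(I don't think it's the cause of the 404, since the node group call goes to the AWS API rather than through the VPC, but for completeness: the snippet above creates the NAT gateways without the private route tables that send the private subnets' traffic through them. The piece I'd add looks roughly like this, assuming the public and private subnet counts match so the per-AZ indexes line up:)

resource "aws_route_table" "private_rt" {
  count  = length(var.private_cidrs)
  vpc_id = aws_vpc.vpc.id

  # Route all outbound traffic from each private subnet through the
  # NAT gateway in the matching AZ.
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat_gateway[count.index].id
  }

  tags = {
    Name = "${var.name}-private-rt-${count.index + 1}"
  }
}

resource "aws_route_table_association" "private_assoc" {
  count          = length(var.private_cidrs)
  subnet_id      = aws_subnet.private_subnet[count.index].id
  route_table_id = aws_route_table.private_rt[count.index].id
}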

Here's my EKS cluster and node group:

resource "aws_iam_role" "eks_cluster" {
  name = "${var.name}-eks-cluster-role"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "amazon_eks_cluster_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster.name
}

resource "aws_eks_cluster" "eks" {
  name     = var.cluster_name
  role_arn = aws_iam_role.eks_cluster.arn

  ## k8s Version
  version = var.k8s_version

  vpc_config {
    endpoint_private_access = true
    endpoint_public_access  = false
    subnet_ids              = [
      aws_subnet.private_subnet[0].id,
      aws_subnet.private_subnet[1].id,
      aws_subnet.private_subnet[2].id,
    ]
  }
  depends_on = [
    aws_iam_role_policy_attachment.amazon_eks_cluster_policy
  ]
}

resource "aws_iam_role" "nodes_eks" {
  name               = "role-node-group-eks"
  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      }, 
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "amazon_eks_worker_node_policy_eks" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.nodes_eks.name
}

resource "aws_iam_role_policy_attachment" "amazon_eks_cni_policy_eks" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.nodes_eks.name
}

resource "aws_iam_role_policy_attachment" "amazon_ec2_container_registry_read_only" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.nodes_eks.name
}


resource "aws_eks_node_group" "nodes_eks" {
  cluster_name    = var.cluster_name
  node_group_name = "${var.name}-node-group"
  node_role_arn   = aws_iam_role.nodes_eks.arn
  subnet_ids      = [
    aws_subnet.private_subnet[0].id,
    aws_subnet.private_subnet[1].id,
    aws_subnet.private_subnet[2].id,
  ]
  remote_access {
    ec2_ssh_key = aws_key_pair.bastion_auth.id
  }

  scaling_config {
    desired_size = 3
    max_size     = 6
    min_size     = 3
  }

  ami_type       = "AL2_x86_64"
  capacity_type  = "ON_DEMAND"
  disk_size      = 20
  instance_types = [var.instance_type]
  labels = {
    role = "nodes-group-1"
  }

  version = var.k8s_version

  depends_on = [
    aws_iam_role_policy_attachment.amazon_eks_worker_node_policy_eks,
    aws_iam_role_policy_attachment.amazon_eks_cni_policy_eks,
    aws_iam_role_policy_attachment.amazon_ec2_container_registry_read_only,
  ]
}

Since it can't find my cluster, even though it has been created and has an active status, is it somehow a connection issue? Or am I doing something incorrect elsewhere?
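
One thing that stands out re-reading the node group block: cluster_name is set to var.cluster_name rather than to an attribute of the aws_eks_cluster resource, so Terraform has no implicit dependency between the node group and the cluster and may create the node group before (or independently of) the cluster it's supposed to join. A minimal sketch of the change, with the unchanged arguments elided:

resource "aws_eks_node_group" "nodes_eks" {
  # Referencing the cluster resource instead of the raw variable makes
  # Terraform create the cluster first and guarantees the names match.
  cluster_name    = aws_eks_cluster.eks.name
  node_group_name = "${var.name}-node-group"
  node_role_arn   = aws_iam_role.nodes_eks.arn
  # ... remaining arguments as above ...
}

The same idea would apply to version = var.k8s_version, which could reference aws_eks_cluster.eks.version instead.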
