Amagaeru's IT Blog

Don't try too hard. A blog that tries just hard enough.

Creating an Amazon EKS Cluster with the eksctl Command

In this post I'll get a feel for Kubernetes by using Amazon EKS.

What is Amazon EKS?

A managed service that lets you run Kubernetes on AWS without having to install, operate, and maintain your own Kubernetes control plane or nodes.

In other words, the master (control plane) side is managed, and you operate the worker nodes.
Since you rarely change the master's configuration anyway, having it managed is a big help (*´ω`*)

What we'll do

  • Install kubectl and eksctl in AWS CloudShell
  • Create an EKS cluster from AWS CloudShell with the eksctl command
  • Deploy a sample application

Environment

CloudShell ⇒ EKS

Hands-on!

1. Preparation
 1-1. Install kubectl

$ curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.26.2/2023-03-17/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 45.8M  100 45.8M    0     0  15.9M      0  0:00:02  0:00:02 --:--:-- 15.9M

$ curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.26.2/2023-03-17/bin/linux/amd64/kubectl.sha256
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    73  100    73    0     0    178      0 --:--:-- --:--:-- --:--:--   178

$ sha256sum -c kubectl.sha256
kubectl: OK
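As an aside, the `sha256sum -c` step above simply recomputes the file's hash locally and compares it against the published one. A minimal local sketch of the same mechanism (the `demo.txt` file name is made up for illustration, not part of the tutorial):

```shell
# Demonstrate how "sha256sum -c" verification works, using a throwaway file.
cd "$(mktemp -d)"
echo 'hello' > demo.txt
sha256sum demo.txt > demo.txt.sha256   # same "<hash>  <filename>" format as kubectl.sha256
sha256sum -c demo.txt.sha256           # prints "demo.txt: OK" on success
```

If the file were tampered with, `sha256sum -c` would print `FAILED` and exit nonzero, which is exactly what protects the kubectl download above.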

$ chmod +x ./kubectl

$ mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin

$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc

$ kubectl version --short --client
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.6-eks-48e63af
Kustomize Version: v4.5.7


 1-2. Install eksctl

$ ARCH=amd64

$ PLATFORM=$(uname -s)_$ARCH

$ curl -sLO "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$PLATFORM.tar.gz"

$ curl -sL "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_checksums.txt" | grep $PLATFORM | sha256sum --check
eksctl_Linux_amd64.tar.gz: OK

$ tar -xzf eksctl_$PLATFORM.tar.gz -C /tmp && rm eksctl_$PLATFORM.tar.gz

$ sudo mv /tmp/eksctl /usr/local/bin

$ eksctl version
0.143.0
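The install above always pulls the latest release, so the version you get may differ from 0.143.0. If you want a reproducible setup, one option is to pin the release tag in the download URL; a sketch, where the `v0.143.0` tag and URL layout are assumptions based on the GitHub releases convention used above:

```shell
# Build a pinned download URL instead of "latest".
# EKSCTL_VERSION is an assumed tag name; check the releases page for real tags.
EKSCTL_VERSION="v0.143.0"
ARCH=amd64
PLATFORM=$(uname -s)_$ARCH
URL="https://github.com/weaveworks/eksctl/releases/download/${EKSCTL_VERSION}/eksctl_${PLATFORM}.tar.gz"
echo "$URL"   # then: curl -sLO "$URL" and verify the checksum as before
```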


2. Create the EKS Cluster
 2-1. Run the following command
  ※ Takes about 10 minutes

$ eksctl create cluster --name my-cluster --region ap-northeast-1
2023-05-30 07:08:19 []  eksctl version 0.143.0
2023-05-30 07:08:19 []  using region ap-northeast-1
2023-05-30 07:08:19 []  setting availability zones to [ap-northeast-1d ap-northeast-1c ap-northeast-1a]
2023-05-30 07:08:19 []  subnets for ap-northeast-1d - public:192.168.0.0/19 private:192.168.96.0/19
2023-05-30 07:08:19 []  subnets for ap-northeast-1c - public:192.168.32.0/19 private:192.168.128.0/19
2023-05-30 07:08:19 []  subnets for ap-northeast-1a - public:192.168.64.0/19 private:192.168.160.0/19
2023-05-30 07:08:19 []  nodegroup "ng-b29f31c0" will use "" [AmazonLinux2/1.25]
2023-05-30 07:08:19 []  using Kubernetes version 1.25
2023-05-30 07:08:19 []  creating EKS cluster "my-cluster" in "ap-northeast-1" region with managed nodes
2023-05-30 07:08:19 []  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2023-05-30 07:08:19 []  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=ap-northeast-1 --cluster=my-cluster'
2023-05-30 07:08:19 []  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "my-cluster" in "ap-northeast-1"
2023-05-30 07:08:19 []  CloudWatch logging will not be enabled for cluster "my-cluster" in "ap-northeast-1"
2023-05-30 07:08:19 []  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=ap-northeast-1 --cluster=my-cluster'
2023-05-30 07:08:19 []  
2 sequential tasks: { create cluster control plane "my-cluster", 
    2 sequential sub-tasks: { 
        wait for control plane to become ready,
        create managed nodegroup "ng-b29f31c0",
    } 
}
2023-05-30 07:08:19 []  building cluster stack "eksctl-my-cluster-cluster"
2023-05-30 07:08:19 []  deploying stack "eksctl-my-cluster-cluster"
2023-05-30 07:08:49 []  waiting for CloudFormation stack "eksctl-my-cluster-cluster"
2023-05-30 07:09:19 []  waiting for CloudFormation stack "eksctl-my-cluster-cluster"
2023-05-30 07:10:20 []  waiting for CloudFormation stack "eksctl-my-cluster-cluster"
2023-05-30 07:11:20 []  waiting for CloudFormation stack "eksctl-my-cluster-cluster"
2023-05-30 07:12:20 []  waiting for CloudFormation stack "eksctl-my-cluster-cluster"
2023-05-30 07:13:20 []  waiting for CloudFormation stack "eksctl-my-cluster-cluster"
2023-05-30 07:14:20 []  waiting for CloudFormation stack "eksctl-my-cluster-cluster"
2023-05-30 07:15:20 []  waiting for CloudFormation stack "eksctl-my-cluster-cluster"
2023-05-30 07:16:20 []  waiting for CloudFormation stack "eksctl-my-cluster-cluster"
2023-05-30 07:17:20 []  waiting for CloudFormation stack "eksctl-my-cluster-cluster"
2023-05-30 07:18:20 []  waiting for CloudFormation stack "eksctl-my-cluster-cluster"
2023-05-30 07:20:21 []  building managed nodegroup stack "eksctl-my-cluster-nodegroup-ng-b29f31c0"
2023-05-30 07:20:21 []  deploying stack "eksctl-my-cluster-nodegroup-ng-b29f31c0"
2023-05-30 07:20:21 []  waiting for CloudFormation stack "eksctl-my-cluster-nodegroup-ng-b29f31c0"
2023-05-30 07:20:51 []  waiting for CloudFormation stack "eksctl-my-cluster-nodegroup-ng-b29f31c0"
2023-05-30 07:21:49 []  waiting for CloudFormation stack "eksctl-my-cluster-nodegroup-ng-b29f31c0"
2023-05-30 07:22:55 []  waiting for CloudFormation stack "eksctl-my-cluster-nodegroup-ng-b29f31c0"
2023-05-30 07:24:06 []  waiting for CloudFormation stack "eksctl-my-cluster-nodegroup-ng-b29f31c0"
2023-05-30 07:24:06 []  waiting for the control plane to become ready
2023-05-30 07:24:08 []  saved kubeconfig as "/home/cloudshell-user/.kube/config"
2023-05-30 07:24:08 []  no tasks
2023-05-30 07:24:08 []  all EKS cluster resources for "my-cluster" have been created
2023-05-30 07:24:08 []  nodegroup "ng-b29f31c0" has 2 node(s)
2023-05-30 07:24:08 []  node "ip-192-168-23-106.ap-northeast-1.compute.internal" is ready
2023-05-30 07:24:08 []  node "ip-192-168-63-108.ap-northeast-1.compute.internal" is ready
2023-05-30 07:24:08 []  waiting for at least 2 node(s) to become ready in "ng-b29f31c0"
2023-05-30 07:24:08 []  nodegroup "ng-b29f31c0" has 2 node(s)
2023-05-30 07:24:08 []  node "ip-192-168-23-106.ap-northeast-1.compute.internal" is ready
2023-05-30 07:24:08 []  node "ip-192-168-63-108.ap-northeast-1.compute.internal" is ready
2023-05-30 07:24:12 []  kubectl command should work with "/home/cloudshell-user/.kube/config", try 'kubectl get nodes'
2023-05-30 07:24:12 []  EKS cluster "my-cluster" in "ap-northeast-1" region is ready
$ 
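For anything beyond a throwaway cluster, eksctl also accepts a declarative config file instead of CLI flags. A minimal sketch equivalent to the command above, with field values taken from the log output (anything more would be an assumption):

```yaml
# cluster.yaml -- declarative equivalent of:
#   eksctl create cluster --name my-cluster --region ap-northeast-1
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: ap-northeast-1
```

This would be created with `eksctl create cluster -f cluster.yaml`, relying on the same defaults (an initial managed nodegroup) that the log shows.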


 2-2. Verify from the CLI

$ kubectl get nodes -o wide
NAME                                                STATUS   ROLES    AGE     VERSION               INTERNAL-IP      EXTERNAL-IP      OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
ip-192-168-23-106.ap-northeast-1.compute.internal   Ready    <none>   3m19s   v1.25.9-eks-0a21954   192.168.23.106   35.77.21.19      Amazon Linux 2   5.10.178-162.673.amzn2.x86_64   containerd://1.6.19
ip-192-168-63-108.ap-northeast-1.compute.internal   Ready    <none>   3m19s   v1.25.9-eks-0a21954   192.168.63.108   18.181.218.117   Amazon Linux 2   5.10.178-162.673.amzn2.x86_64   containerd://1.6.19

$ kubectl get pods -A -o wide
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE     IP               NODE                                                NOMINATED NODE   READINESS GATES
kube-system   aws-node-gvqvl             1/1     Running   0          3m35s   192.168.63.108   ip-192-168-63-108.ap-northeast-1.compute.internal   <none>           <none>
kube-system   aws-node-rlfzp             1/1     Running   0          3m35s   192.168.23.106   ip-192-168-23-106.ap-northeast-1.compute.internal   <none>           <none>
kube-system   coredns-7dbf6bcd5b-58c4x   1/1     Running   0          10m     192.168.63.161   ip-192-168-63-108.ap-northeast-1.compute.internal   <none>           <none>
kube-system   coredns-7dbf6bcd5b-cnpwk   1/1     Running   0          10m     192.168.36.201   ip-192-168-63-108.ap-northeast-1.compute.internal   <none>           <none>
kube-system   kube-proxy-p6b8t           1/1     Running   0          3m35s   192.168.63.108   ip-192-168-63-108.ap-northeast-1.compute.internal   <none>           <none>
kube-system   kube-proxy-r5t5r           1/1     Running   0          3m35s   192.168.23.106   ip-192-168-23-106.ap-northeast-1.compute.internal   <none>           <none>
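eksctl saved the kubeconfig to `~/.kube/config`, but if it ever goes missing in a later CloudShell session, it can be regenerated with the AWS CLI. A hedged sketch (assumes the AWS CLI with credentials, which CloudShell provides; run it manually when needed):

```shell
# Helper to regenerate the kubeconfig for the tutorial cluster.
regen_kubeconfig() {
  aws eks update-kubeconfig --name my-cluster --region ap-northeast-1
}
# Run manually when kubectl can no longer reach the cluster:
#   regen_kubeconfig
```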


 2-3. Check the web console
  EKS > [Clusters]

 ※ There it is, created (*´ω`*)

 ※ The nodes were created too. m5.large is huge...

3. Deploy a Sample Application
 3-1. Run the following commands

$ kubectl create namespace eks-sample-app
namespace/eks-sample-app created

$ vi eks-sample-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eks-sample-linux-deployment
  namespace: eks-sample-app
  labels:
    app: eks-sample-linux-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: eks-sample-linux-app
  template:
    metadata:
      labels:
        app: eks-sample-linux-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
                - arm64
      containers:
      - name: nginx
        image: public.ecr.aws/nginx/nginx:1.21
        ports:
        - name: http
          containerPort: 80
        imagePullPolicy: IfNotPresent
      nodeSelector:
        kubernetes.io/os: linux

$ ls | grep eks-sample
eks-sample-deployment.yaml

$ kubectl apply -f eks-sample-deployment.yaml
deployment.apps/eks-sample-linux-deployment created
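Note that `kubectl apply` returns before the pods are actually running. If you want to block until all replicas are up, `kubectl rollout status` can be used; a sketch to run against the live cluster:

```shell
# Wait for all 3 replicas of the sample deployment to become available.
# Requires a working kubeconfig; invoke manually against the cluster.
wait_for_sample() {
  kubectl rollout status deployment/eks-sample-linux-deployment \
    -n eks-sample-app --timeout=120s
}
# Run manually: wait_for_sample
```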

$ vi eks-sample-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: eks-sample-linux-service
  namespace: eks-sample-app
  labels:
    app: eks-sample-linux-app
spec:
  selector:
    app: eks-sample-linux-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

$ ls | grep eks-sample
eks-sample-deployment.yaml
eks-sample-service.yaml

$ kubectl apply -f eks-sample-service.yaml
service/eks-sample-linux-service created

$ kubectl get all -n eks-sample-app
NAME                                               READY   STATUS    RESTARTS   AGE
pod/eks-sample-linux-deployment-7f646d456c-9z2nz   1/1     Running   0          2m21s
pod/eks-sample-linux-deployment-7f646d456c-hrmnw   1/1     Running   0          2m21s
pod/eks-sample-linux-deployment-7f646d456c-ndcwt   1/1     Running   0          2m21s

NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/eks-sample-linux-service   ClusterIP   10.100.123.250   <none>        80/TCP    22s

NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/eks-sample-linux-deployment   3/3     3            3           2m21s

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/eks-sample-linux-deployment-7f646d456c   3         3         3       2m21s
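The Service is type ClusterIP, so it is only reachable from inside the cluster; that's why the next step curls it from inside a pod. An alternative sketch that avoids exec'ing into a pod is `kubectl port-forward` (define the helper, then run it manually; assumes the kubeconfig from earlier):

```shell
# Forward local port 8080 to the sample Service and curl it from CloudShell.
port_forward_check() {
  kubectl -n eks-sample-app port-forward service/eks-sample-linux-service 8080:80 &
  pf_pid=$!
  sleep 2                              # give the tunnel a moment to come up
  curl -s http://localhost:8080 | grep -q 'Welcome to nginx!' && echo OK
  kill "$pf_pid"
}
# Run manually: port_forward_check
```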

$ kubectl exec -it pod/eks-sample-linux-deployment-7f646d456c-9z2nz -n eks-sample-app -- /bin/bash

root@eks-sample-linux-deployment-7f646d456c-9z2nz:/# curl eks-sample-linux-service
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>

root@eks-sample-linux-deployment-7f646d456c-9z2nz:/# exit

$


4. Cleanup
 4-1. Delete the sample application

$ kubectl delete namespace eks-sample-app
namespace "eks-sample-app" deleted


 4-2. Delete the EKS cluster

$ eksctl delete cluster --name my-cluster --region ap-northeast-1
2023-05-30 07:36:12 []  deleting EKS cluster "my-cluster"
2023-05-30 07:36:13 []  will drain 0 unmanaged nodegroup(s) in cluster "my-cluster"
2023-05-30 07:36:13 []  starting parallel draining, max in-flight of 1
2023-05-30 07:36:13 []  deleted 0 Fargate profile(s)
2023-05-30 07:36:13 []  kubeconfig has been updated
2023-05-30 07:36:13 []  cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
2023-05-30 07:36:14 []  
2 sequential tasks: { delete nodegroup "ng-b29f31c0", delete cluster control plane "my-cluster" [async] 
}
2023-05-30 07:36:14 []  will delete stack "eksctl-my-cluster-nodegroup-ng-b29f31c0"
2023-05-30 07:36:14 []  waiting for stack "eksctl-my-cluster-nodegroup-ng-b29f31c0" to get deleted
2023-05-30 07:36:14 []  waiting for CloudFormation stack "eksctl-my-cluster-nodegroup-ng-b29f31c0"
2023-05-30 07:36:45 []  waiting for CloudFormation stack "eksctl-my-cluster-nodegroup-ng-b29f31c0"
2023-05-30 07:37:22 []  waiting for CloudFormation stack "eksctl-my-cluster-nodegroup-ng-b29f31c0"
2023-05-30 07:39:19 []  waiting for CloudFormation stack "eksctl-my-cluster-nodegroup-ng-b29f31c0"
2023-05-30 07:41:07 []  waiting for CloudFormation stack "eksctl-my-cluster-nodegroup-ng-b29f31c0"
2023-05-30 07:41:42 []  waiting for CloudFormation stack "eksctl-my-cluster-nodegroup-ng-b29f31c0"
2023-05-30 07:42:21 []  waiting for CloudFormation stack "eksctl-my-cluster-nodegroup-ng-b29f31c0"
2023-05-30 07:43:22 []  waiting for CloudFormation stack "eksctl-my-cluster-nodegroup-ng-b29f31c0"
2023-05-30 07:43:22 []  will delete stack "eksctl-my-cluster-cluster"
2023-05-30 07:43:22 []  all cluster resources were deleted
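The control-plane stack is deleted asynchronously (note the `[async]` in the task list above), so resources can linger for a few minutes after this message. A hedged way to double-check with the AWS CLI (run manually; assumes CloudShell credentials):

```shell
# List any CloudFormation stacks from this tutorial that still exist.
check_stacks() {
  aws cloudformation list-stacks \
    --region ap-northeast-1 \
    --stack-status-filter DELETE_IN_PROGRESS CREATE_COMPLETE \
    --query "StackSummaries[?starts_with(StackName, 'eksctl-my-cluster')].[StackName,StackStatus]" \
    --output table
}
# Run manually: check_stacks  (an empty table means everything is gone)
```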




Thoughts

One command creates everything, nodes included, so it's really easy (*´ω`*)