Trying Out the Kubernetes Cluster API
What Is the Kubernetes Cluster API
Cluster API is a Kubernetes sub-project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters.
Cluster API uses Kubernetes-style APIs and patterns to automate cluster lifecycle management for platform operators. The supporting infrastructure (such as virtual machines, networks, load balancers, and VPCs) and the Kubernetes cluster itself are all defined the same way that workloads are deployed and managed. This enables consistent and repeatable cluster deployments across a wide variety of infrastructure environments.
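In concrete terms, once a management cluster exists (we build one below), workload clusters show up as ordinary Kubernetes objects that you inspect with standard tooling. A minimal sketch, assuming a management cluster is already initialized and using the hypothetical cluster name my-cluster:
# Clusters are regular API objects, listed and described like any other resource
kubectl get clusters
kubectl describe cluster my-cluster   # "my-cluster" is a hypothetical example name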
Installing Kind
Visit https://github.com/kubernetes-sigs/kind/releases to find the latest Kind release and its corresponding node images.
cd /tmp
curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.18.0/kind-linux-amd64
chmod +x ./kind
sudo mv kind /usr/local/bin/
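As a quick sanity check, confirm the binary is on the PATH (the exact output depends on the release you downloaded):
# Should print the installed kind version
kind version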
Increasing ulimit and inotify Limits
Note for Linux users: when using Docker (CAPD), you may need to increase ulimit and inotify limits.
refer: https://cluster-api.sigs.k8s.io/user/troubleshooting.html#cluster-api-with-docker----too-many-open-files
refer: https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files
sudo sysctl fs.inotify.max_user_watches=1048576
sudo sysctl fs.inotify.max_user_instances=8192
sudo vi /etc/sysctl.conf
--- add
fs.inotify.max_user_watches = 1048576
fs.inotify.max_user_instances = 8192
---
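To apply the values from /etc/sysctl.conf and verify them without rebooting, one option is:
# Reload /etc/sysctl.conf, then print the effective values
sudo sysctl -p
sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances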
Creating the Kind Cluster
Run the following command to create a kind cluster configuration file that allows the Docker provider (CAPD) to access Docker on the host:
cat > kind-cluster-with-extramounts.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
EOF
kind create cluster --config kind-cluster-with-extramounts.yaml
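Once kind finishes, it is worth confirming the cluster answers. Kind names the kubeconfig context kind-<cluster-name>, so for the default cluster name:
# List kind clusters and check API server connectivity
kind get clusters
kubectl cluster-info --context kind-kind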
Installing the clusterctl CLI Tool
Install the clusterctl binary on Linux with curl:
cd /tmp
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.4.1/clusterctl-linux-amd64 -o clusterctl
sudo install -o root -g root -m 0755 clusterctl /usr/local/bin/clusterctl
clusterctl version
--- output
clusterctl version: &version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"39d87e91080088327c738c43f39e46a7f557d03b", GitTreeState:"clean", BuildDate:"2023-04-04T17:31:43Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
---
Initializing the Management Cluster
Now that clusterctl is installed and all prerequisites are met, let's turn the kind cluster into a management cluster by running clusterctl init.
The command takes a list of providers to install as input. On first execution, clusterctl init automatically adds the cluster-api core provider to the list and, unless others are specified, also adds the kubeadm bootstrap and kubeadm control plane providers.
This example uses the Docker provider.
# Enable the experimental Cluster topology feature.
export CLUSTER_TOPOLOGY=true

# Initialize the management cluster
clusterctl init --infrastructure docker
Example output:
Fetching providers
Installing cert-manager Version="v1.11.0"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v1.4.1" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v1.4.1" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v1.4.1" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-docker" Version="v1.4.1" TargetNamespace="capd-system"Your management cluster has been initialized successfully!You can now create your first workload cluster by running the following:clusterctl generate cluster [name] --kubernetes-version [version] | kubectl apply -f -
Creating a Workload Cluster
This example again uses Docker. The configuration required for the Docker provider:
# The list of service CIDR, default ["10.128.0.0/12"]
export SERVICE_CIDR=["10.96.0.0/12"]

# The list of pod CIDR, default ["192.168.0.0/16"]
export POD_CIDR=["192.168.0.0/16"]

# The service domain, default "cluster.local"
export SERVICE_DOMAIN="k8scloud.local"

# It is also possible but not recommended to disable the per-default enabled Pod Security Standard
export POD_SECURITY_STANDARD_ENABLED="false"
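If you want to see which variables the template actually consumes before generating anything, clusterctl can list them (a hedged aside; the --list-variables flag is available in recent clusterctl releases):
# Print the variables required by the development flavor template
clusterctl generate cluster capi-quickstart --flavor development --list-variables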
Generate the cluster configuration; we'll name the cluster capi-quickstart.
clusterctl generate cluster capi-quickstart --flavor development \
  --kubernetes-version v1.27.0 \
  --control-plane-machine-count=1 \
  --worker-machine-count=1 \
  > capi-quickstart.yaml
Run the following command to apply the cluster manifest:
kubectl apply -f capi-quickstart.yaml
Example output:
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
cluster.cluster.x-k8s.io/capi-quickstart created
Accessing the Workload Cluster
The cluster will now start provisioning. You can check its status with:
kubectl get cluster
--- output
NAME PHASE AGE VERSION
capi-quickstart Provisioned 10s v1.27.0
---
You can also get an "at a glance" view of the cluster and its resources by running:
clusterctl describe cluster capi-quickstart
--- output
NAME                                                             READY  SEVERITY  REASON                           SINCE  MESSAGE
Cluster/capi-quickstart                                          False  Warning   ScalingUp                        26s    Scaling up control plane to 1 replicas (actual 0)
├─ClusterInfrastructure - DockerCluster/capi-quickstart-cq94k    True                                              22s
├─ControlPlane - KubeadmControlPlane/capi-quickstart-d86xg       False  Warning   ScalingUp                        26s    Scaling up control plane to 1 replicas (actual 0)
│ └─Machine/capi-quickstart-d86xg-mg7kq                          False  Info      WaitingForBootstrapData          20s    1 of 2 completed
└─Workers
  └─MachineDeployment/capi-quickstart-md-0-dbhbw                 False  Warning   WaitingForAvailableMachines      26s    Minimum availability requires 1 replicas, current 0 available
    └─Machine/capi-quickstart-md-0-dbhbw-85cd6498cx9bbwg-f5nqx   False  Info      WaitingForControlPlaneAvailable  22s    0 of 2 completed
---
To verify that the first control plane is up, run:
kubectl get kubeadmcontrolplane
--- output
NAME                    CLUSTER           INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE   VERSION
capi-quickstart-d86xg   capi-quickstart                                        1                  1         1             55s   v1.27.0
---
The control plane won't become Ready until we install a CNI in the next step.
Once the first control plane node is up and running, we can retrieve the workload cluster kubeconfig.
clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig
# Point the kubeconfig to the exposed port of the load balancer, rather than the inaccessible container IP.
sed -i -e "s/server:.*/server: https:\/\/$(docker port capi-quickstart-lb 6443/tcp | sed "s/0.0.0.0/127.0.0.1/")/g" ./capi-quickstart.kubeconfig
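At this point the kubeconfig should reach the workload cluster's API server through the load balancer. A quick sanity check:
# The nodes will show NotReady until a CNI is installed (next step)
kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes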
Deploying a CNI Solution
Here we use Calico as the example:
kubectl --kubeconfig=./capi-quickstart.kubeconfig \
  apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml
Example output:
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
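Before checking the nodes, you can wait for the calico-node daemonset created above to finish rolling out:
# Blocks until a calico-node pod is ready on every node
kubectl --kubeconfig=./capi-quickstart.kubeconfig -n kube-system rollout status daemonset/calico-node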
After a while, our nodes should be running and in the Ready state. Let's check the status with kubectl get nodes:
kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
Example output:
NAME STATUS ROLES AGE VERSION
capi-quickstart-d86xg-mg7kq Ready control-plane 4m59s v1.27.0
capi-quickstart-md-0-dbhbw-85cd6498cx9bbwg-f5nqx Ready <none> 4m9s v1.27.0
Installing MetalLB
export KUBECONFIG=./capi-quickstart.kubeconfig
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/namespace.yaml
kubectl create secret generic \
  -n metallb-system memberlist \
  --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/metallb.yaml
MetalLB needs a range of addresses on the same network as the cluster nodes. Find the subnet of the Docker network that kind created:
docker inspect kind | jq '.[0].IPAM.Config[0].Subnet' -r
--- output
172.18.0.0/16
---
Pick an unused range from that subnet and configure MetalLB to hand out addresses from it:
export KUBECONFIG=./capi-quickstart.kubeconfig
kubectl apply -f - <<-EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: my-ip-space
      protocol: layer2
      addresses:
      - 172.18.0.230-172.18.0.250
EOF
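Before relying on MetalLB, check that its controller and speaker pods are healthy:
# All pods in metallb-system should reach Running
kubectl -n metallb-system get pods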
Deploying an nginx Example
export KUBECONFIG=./capi-quickstart.kubeconfig
kubectl create deployment nginx-deploy --image=nginx
kubectl expose deployment nginx-deploy --name=nginx-lb --port=80 --target-port=80 --type=LoadBalancer
export KUBECONFIG=./capi-quickstart.kubeconfig
MetalLB_IP=$(kubectl get svc nginx-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s $MetalLB_IP | grep "Thank you"
--- output
<p><em>Thank you for using nginx.</em></p>
---
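You can also inspect the Service directly; its EXTERNAL-IP should fall inside the 172.18.0.230-172.18.0.250 pool configured above:
# EXTERNAL-IP is assigned by MetalLB from the address pool
kubectl get svc nginx-lb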
Cleaning Up
Delete the workload cluster:
export KUBECONFIG=~/.kube/config
kubectl delete cluster capi-quickstart
Important: to ensure the correct cleanup of the infrastructure, you must always delete the Cluster object. Deleting the entire cluster template with kubectl delete -f capi-quickstart.yaml can leave behind pending resources that require manual cleanup.
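As a hedged final check before tearing down the management cluster, confirm the workload cluster is gone, both as an API object and as Docker containers:
# Neither command should list anything named capi-quickstart
kubectl get clusters
docker ps --filter "name=capi-quickstart"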
Delete the management cluster:
kind delete cluster
Done!
(Reference) Contents of capi-quickstart.yaml
$ cat capi-quickstart.yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
  name: quick-start
  namespace: default
spec:
  controlPlane:
    machineInfrastructure:
      ref:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerMachineTemplate
        name: quick-start-control-plane
    ref:
      apiVersion: controlplane.cluster.x-k8s.io/v1beta1
      kind: KubeadmControlPlaneTemplate
      name: quick-start-control-plane
  infrastructure:
    ref:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerClusterTemplate
      name: quick-start-cluster
  patches:
  - definitions:
    - jsonPatches:
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/clusterConfiguration/imageRepository
        valueFrom:
          variable: imageRepository
      selector:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlaneTemplate
        matchResources:
          controlPlane: true
    description: Sets the imageRepository used for the KubeadmControlPlane.
    enabledIf: '{{ ne .imageRepository "" }}'
    name: imageRepository
  - definitions:
    - jsonPatches:
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/initConfiguration/nodeRegistration/kubeletExtraArgs/cgroup-driver
        value: cgroupfs
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/joinConfiguration/nodeRegistration/kubeletExtraArgs/cgroup-driver
        value: cgroupfs
      selector:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlaneTemplate
        matchResources:
          controlPlane: true
    description: |
      Sets the cgroupDriver to cgroupfs if a Kubernetes version < v1.24 is referenced.
      This is required because kind and the node images do not support the default
      systemd cgroupDriver for kubernetes < v1.24.
    enabledIf: '{{ semverCompare "<= v1.23" .builtin.controlPlane.version }}'
    name: cgroupDriver-controlPlane
  - definitions:
    - jsonPatches:
      - op: add
        path: /spec/template/spec/joinConfiguration/nodeRegistration/kubeletExtraArgs/cgroup-driver
        value: cgroupfs
      selector:
        apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
        kind: KubeadmConfigTemplate
        matchResources:
          machineDeploymentClass:
            names:
            - default-worker
    description: |
      Sets the cgroupDriver to cgroupfs if a Kubernetes version < v1.24 is referenced.
      This is required because kind and the node images do not support the default
      systemd cgroupDriver for kubernetes < v1.24.
    enabledIf: '{{ semverCompare "<= v1.23" .builtin.machineDeployment.version }}'
    name: cgroupDriver-machineDeployment
  - definitions:
    - jsonPatches:
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/clusterConfiguration/etcd
        valueFrom:
          template: |
            local:
              imageTag: {{ .etcdImageTag }}
      selector:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlaneTemplate
        matchResources:
          controlPlane: true
    description: Sets tag to use for the etcd image in the KubeadmControlPlane.
    name: etcdImageTag
  - definitions:
    - jsonPatches:
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/clusterConfiguration/dns
        valueFrom:
          template: |
            imageTag: {{ .coreDNSImageTag }}
      selector:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlaneTemplate
        matchResources:
          controlPlane: true
    description: Sets tag to use for the etcd image in the KubeadmControlPlane.
    name: coreDNSImageTag
  - definitions:
    - jsonPatches:
      - op: add
        path: /spec/template/spec/customImage
        valueFrom:
          template: |
            kindest/node:{{ .builtin.machineDeployment.version | replace "+" "_" }}
      selector:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerMachineTemplate
        matchResources:
          machineDeploymentClass:
            names:
            - default-worker
    - jsonPatches:
      - op: add
        path: /spec/template/spec/customImage
        valueFrom:
          template: |
            kindest/node:{{ .builtin.controlPlane.version | replace "+" "_" }}
      selector:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerMachineTemplate
        matchResources:
          controlPlane: true
    description: Sets the container image that is used for running dockerMachines
      for the controlPlane and default-worker machineDeployments.
    name: customImage
  - definitions:
    - jsonPatches:
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/clusterConfiguration/apiServer/extraArgs
        value:
          admission-control-config-file: /etc/kubernetes/kube-apiserver-admission-pss.yaml
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/clusterConfiguration/apiServer/extraVolumes
        value:
        - hostPath: /etc/kubernetes/kube-apiserver-admission-pss.yaml
          mountPath: /etc/kubernetes/kube-apiserver-admission-pss.yaml
          name: admission-pss
          pathType: File
          readOnly: true
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/files
        valueFrom:
          template: |
            - content: |
                apiVersion: apiserver.config.k8s.io/v1
                kind: AdmissionConfiguration
                plugins:
                - name: PodSecurity
                  configuration:
                    apiVersion: pod-security.admission.config.k8s.io/v1{{ if semverCompare "< v1.25" .builtin.controlPlane.version }}beta1{{ end }}
                    kind: PodSecurityConfiguration
                    defaults:
                      enforce: "{{ .podSecurityStandard.enforce }}"
                      enforce-version: "latest"
                      audit: "{{ .podSecurityStandard.audit }}"
                      audit-version: "latest"
                      warn: "{{ .podSecurityStandard.warn }}"
                      warn-version: "latest"
                    exemptions:
                      usernames: []
                      runtimeClasses: []
                      namespaces: [kube-system]
              path: /etc/kubernetes/kube-apiserver-admission-pss.yaml
      selector:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlaneTemplate
        matchResources:
          controlPlane: true
    description: Adds an admission configuration for PodSecurity to the kube-apiserver.
    enabledIf: '{{ .podSecurityStandard.enabled }}'
    name: podSecurityStandard
  variables:
  - name: imageRepository
    required: true
    schema:
      openAPIV3Schema:
        default: ""
        description: imageRepository sets the container registry to pull images from.
          If empty, nothing will be set and the from of kubeadm will be used.
        example: registry.k8s.io
        type: string
  - name: etcdImageTag
    required: true
    schema:
      openAPIV3Schema:
        default: ""
        description: etcdImageTag sets the tag for the etcd image.
        example: 3.5.3-0
        type: string
  - name: coreDNSImageTag
    required: true
    schema:
      openAPIV3Schema:
        default: ""
        description: coreDNSImageTag sets the tag for the coreDNS image.
        example: v1.8.5
        type: string
  - name: podSecurityStandard
    required: false
    schema:
      openAPIV3Schema:
        properties:
          audit:
            default: restricted
            description: audit sets the level for the audit PodSecurityConfiguration
              mode. One of privileged, baseline, restricted.
            type: string
          enabled:
            default: true
            description: enabled enables the patches to enable Pod Security Standard
              via AdmissionConfiguration.
            type: boolean
          enforce:
            default: baseline
            description: enforce sets the level for the enforce PodSecurityConfiguration
              mode. One of privileged, baseline, restricted.
            type: string
          warn:
            default: restricted
            description: warn sets the level for the warn PodSecurityConfiguration
              mode. One of privileged, baseline, restricted.
            type: string
        type: object
  workers:
    machineDeployments:
    - class: default-worker
      template:
        bootstrap:
          ref:
            apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
            kind: KubeadmConfigTemplate
            name: quick-start-default-worker-bootstraptemplate
        infrastructure:
          ref:
            apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
            kind: DockerMachineTemplate
            name: quick-start-default-worker-machinetemplate
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerClusterTemplate
metadata:
  name: quick-start-cluster
  namespace: default
spec:
  template:
    spec: {}
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
metadata:
  name: quick-start-control-plane
  namespace: default
spec:
  template:
    spec:
      kubeadmConfigSpec:
        clusterConfiguration:
          apiServer:
            certSANs:
            - localhost
            - 127.0.0.1
            - 0.0.0.0
            - host.docker.internal
          controllerManager:
            extraArgs:
              enable-hostpath-provisioner: "true"
        initConfiguration:
          nodeRegistration:
            criSocket: unix:///var/run/containerd/containerd.sock
            kubeletExtraArgs:
              eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
        joinConfiguration:
          nodeRegistration:
            criSocket: unix:///var/run/containerd/containerd.sock
            kubeletExtraArgs:
              eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
  name: quick-start-control-plane
  namespace: default
spec:
  template:
    spec:
      extraMounts:
      - containerPath: /var/run/docker.sock
        hostPath: /var/run/docker.sock
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
  name: quick-start-default-worker-machinetemplate
  namespace: default
spec:
  template:
    spec:
      extraMounts:
      - containerPath: /var/run/docker.sock
        hostPath: /var/run/docker.sock
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: quick-start-default-worker-bootstraptemplate
  namespace: default
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          criSocket: unix:///var/run/containerd/containerd.sock
          kubeletExtraArgs:
            eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: capi-quickstart
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    serviceDomain: cluster.local
    services:
      cidrBlocks:
      - 10.128.0.0/12
  topology:
    class: quick-start
    controlPlane:
      metadata: {}
      replicas: 1
    variables:
    - name: imageRepository
      value: ""
    - name: etcdImageTag
      value: ""
    - name: coreDNSImageTag
      value: ""
    - name: podSecurityStandard
      value:
        audit: restricted
        enabled: true
        enforce: baseline
        warn: restricted
    version: v1.27.0
    workers:
      machineDeployments:
      - class: default-worker
        name: md-0
        replicas: 1
References
refer: https://cluster-api.sigs.k8s.io/user/quick-start.html
refer: https://oracle.github.io/cluster-api-provider-oci/gs/gs.html