
MongoDB Cluster Deployment on Kubernetes

1. Preparing Resources

1.1 NFS Preparation

An NFS server is assumed to be installed and fully operational; this article only uses NFS and does not cover its setup. If you need to know how to configure NFS, see the separate article on that topic, which explains it in detail.
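Before going on, it is worth a quick sanity check that the NFS export is reachable and writable from the Kubernetes nodes. A minimal sketch, assuming the NFS server address and export path used in the PV below (replace them with your own values):

showmount -e 172.16.8.186

# Optionally mount the export and confirm it is writable
mount -t nfs 172.16.8.186:/opt/pvmongo /mnt
touch /mnt/testfile && rm /mnt/testfile
umount /mnt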

1.2 Images

We use a relatively recent MongoDB image from docker.io, mongo:4.4.9.

docker pull mongo:4.4.9
docker pull ibmcom/nfs-client-provisioner-ppc64le

If you run on an internal network and do not want image pulls to fail because of network issues, you can push the images to your own registry first; in practice this works well. Note that the provisioner Deployment in section 3.3 references nfs-subdir-external-provisioner:v4.0.2, so make sure the provisioner image you mirror matches the image name and tag the Deployment actually uses.

docker tag mongo:4.4.9 172.16.0.111:2180/base/mongo:4.4.9
docker push 172.16.0.111:2180/base/mongo:4.4.9
docker tag ibmcom/nfs-client-provisioner-ppc64le:latest 172.16.0.111:2180/base/nfs-client-provisioner-ppc64le:latest
docker push 172.16.0.111:2180/base/nfs-client-provisioner-ppc64le:latest

You also need to generate a Secret; Harbor is used as the registry here. Use the following command to create the Secret in the mongo namespace. It will be referenced when creating the StatefulSet; if you do not need it, remember to remove the corresponding entries from mongo-statefulset.yaml.

Note: the namespace itself is created in the next step.

kubectl  -n mongo create secret docker-registry harbor-secret --docker-server=172.16.0.111:2180 --docker-username=admin --docker-password=Harbor9999

1.3 Creating the Namespace

kubectl create ns mongo

2. Verifying NFS Storage

2.1 Creating the NFS PV File

Create the file mongo-nfs-pv.yaml with the following content.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvmongo
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 172.16.8.186
    path: "/opt/pvmongo"

2.2 Creating the NFS PVC File

Create mongo-nfs-pvc.yaml with the following content.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvcmongo
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 30Gi

2.3 Creating the PV and PVC

Note that the PVC is created in a namespace; replace it with the namespace you need when running the command.

kubectl create -f mongo-nfs-pv.yaml
kubectl -n mongo create -f mongo-nfs-pvc.yaml # note the namespace here

2.4 Verifying the Result

[root@svc-04 mongo]# kubectl  get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
pvmongo       30Gi       RWX            Retain           Bound    default/pvcmongo                              6m29s
[root@svc-04 mongo]# kubectl get pvc 
NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvcmongo   Bound    pvmongo   30Gi       RWX                           5m32s

If the output above looks normal, NFS is connected to Kubernetes and can be used.
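Binding alone only proves that the claim matched the volume; a throwaway pod that writes to the PVC confirms the NFS mount end to end. A minimal sketch, assuming the pvcmongo claim created above (the file name nfs-test-pod.yaml is just an example):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
  namespace: mongo
spec:
  restartPolicy: Never
  containers:
    - name: nfs-test
      image: busybox
      command: ["sh", "-c", "echo ok > /data/nfs-test.txt && cat /data/nfs-test.txt"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvcmongo

Create it with kubectl -n mongo create -f nfs-test-pod.yaml; kubectl -n mongo logs nfs-test should print ok. Delete the pod afterwards with kubectl -n mongo delete pod nfs-test.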

3. Deploying nfs-provisioner

To let MongoDB automatically create expandable storage, an nfs-provisioner must be deployed.

3.1 Generating the RBAC File

Generate the RBAC file mongo-nfs-rbac.yaml with the following content. Pay attention to the comments in the file.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # Replace with the namespace where the provisioner is deployed
  namespace: mongo
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # Replace with the namespace where the provisioner is deployed
    namespace: mongo
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # Replace with the namespace where the provisioner is deployed
  namespace: mongo
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # Replace with the namespace where the provisioner is deployed
  namespace: mongo
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # Replace with the namespace where the provisioner is deployed
    namespace: mongo
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

3.2 Creating the RBAC Resources

This assumes the mongo-nfs-rbac.yaml file is in the current directory.

kubectl create -f mongo-nfs-rbac.yaml

3.3 Creating the Provisioner Deployment File

Create the provisioner Deployment file mongo-nfs-provisioner-deployment.yaml with the following content.

Note: several values in this file must be adjusted; the places to change are marked with comments.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  # Replace with the namespace where the provisioner is deployed
  namespace: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-client-provisioner
      # Image pull Secret for the local private registry.
      # If you pull directly from the public registry instead of your own, remove this.
      imagePullSecrets:
        - name: harbor-secret
      containers:
        - name: nfs-client-provisioner
          # Change to your own image address
          image: 172.16.0.111:2180/base/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              # IP address of the NFS server
              value: 172.16.0.111
            - name: NFS_PATH
              # Export path on the NFS server
              value: /opt/pvmongo
      volumes:
        - name: nfs-client-root
          nfs:
            # IP address of the NFS server
            server: 172.16.0.111
            # Export path on the NFS server
            path: /opt/pvmongo

3.4 Deploying the Provisioner

Run the following command to deploy the provisioner.

kubectl create -f mongo-nfs-provisioner-deployment.yaml

Verify the deployment:

[root@svc-04 mongo]# kubectl  -n mongo get pods 
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7b6645d7d8-n9qjw   1/1     Running   0          84m
[root@svc-04 mongo]# kubectl  -n mongo get deploy
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           94m

3.5 Creating the StorageClass

Create the StorageClass YAML file mongo-nfs-provisioner-storageclass.yaml with the following content.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
# The provisioner name can be changed, but it must match the
# PROVISIONER_NAME environment variable in the Deployment above
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"

Run the following command to create the StorageClass:

kubectl create -f mongo-nfs-provisioner-storageclass.yaml

Verify the created StorageClass:

[root@svc-04 mongo]# kubectl get storageclass | grep nfs
managed-nfs-storage   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                 88m
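To confirm that dynamic provisioning actually works before the StatefulSet depends on it, you can create a small test claim against the new StorageClass; it should reach Bound on its own, backed by a PV the provisioner creates. A sketch (the name test-claim is arbitrary):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: mongo
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 1Gi

kubectl -n mongo get pvc test-claim should report Bound; delete the claim once verified.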

4. Deploying the MongoDB Cluster

4.1 Creating the RBAC

This RBAC binding gives the namespace's default ServiceAccount the view role, so the MongoDB pods can look up cluster information about each other. Generate the file mongo-rbac.yaml with the following content.

# On Kubernetes 1.22 and later, use apiVersion: rbac.authorization.k8s.io/v1
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: mongo-default-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: default
    # Replace with the namespace where MongoDB is deployed
    namespace: mongo

Run the following command to create it:

kubectl create -f mongo-rbac.yaml

4.2 Creating the Service File

Generate the Service file mongo-service.yaml with the following content.

apiVersion: v1
kind: Service
metadata:
  name: mongo-svc
  namespace: mongo
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    role: mongo

Run the following commands to create the Service and check it:

kubectl  create -f mongo-service.yaml 
kubectl  -n mongo get svc

4.3 Creating the Authentication File

The authentication file (keyfile) is used for MongoDB internal cluster authentication. Run the following command to create it.

openssl rand -base64 741 > authentication

Example output:

[root@svc-04 mongo]# openssl rand -base64 741 > authentication
[root@svc-04 mongo]# cat authentication 
KIps7AU0ueqcQ+ErcMBPGXGVQxPO2hHrN4T5FmOqpafc8zn9m5SQz+feXiRqkjfq
vzgsFfXLtPzj1uW8BdC9H5CxUotxzk6p3/MlpFD0xt18gLjBCtN2QdYPuDU+zc3T
xEli7dnJDS2cqyHc5CNtxmYnh17/cOCe23TUGdc1TwExlSXX7ITE6NIj8LMNCh7L
srjLqmN0hTybnQKfDYuiqJoiZAEXsXHpArP9hp1npTDHFZkMQpBTxCmAoBRBfbUR
JuxXAj5TjNZHsGowId9zemOIFwgfqide9HSGjX585Sls7Ue8lYxpiCdL81Nyaw0d
GUpJV9LJHZDhlpkIM1esk0ydHZjJPYThpAyDMm61vBkIgxRlej811Ckguss3dsSo
oik8lHPPHteqLHkEuaQ8nHfF9PuTP/IS6wxnyneRBkn69uhs7nkUNEFw2Fk4zSwu
+s3pRGCJxPQyHnHauE1fenrAcFWl/rBBLiiJpqp3pZh4Ff+fqbTLBp94pt5ygryg
7nIaasfqEEBpkSQVCYndkiXSKhOiOWNlWVGMnbuV+mcbXES/V6g9UPXwQAhcEqDa
OrXYCjUxMmf/+upW+tyGP5ZX+K52yoS2zfR96vPBDgzPRenAkNdhxh6xbSk+df23
sxB2BskDZMzdMb5aruCBRsJDYxrGHf30oO3or+YXPmya5eWZBGzN6FSxvGLbD8tr
L5801PmUcTsqWPz6KmWzFGlPeNqufzrhWkx6uRRdfdjU7gSX8WIvH8pjDWSGzSG0
ezSQ2TsSh2Kg1jOr2WZMjHi/fe6EC9IE6wE3nVZzMwdv1RHE7kJt9f4Wa09mxc+z
+f5V4Ux+L13rhA78DeCTRmWRGU5gKlPiH0iR5gceY60jW16SatPOHPz+fh9bCNuy
1LTkOzCw3k6WWRw14OxMXQJ+LFbzP1XDpydOPDPmz0imrsTqpSOzJ4QRPYcZJeMd
M4SK1rxMpneOhx3dKCBcY9zRYAtx

Create a Kubernetes Secret from the generated file.

kubectl -n mongo create secret  generic shared-bootstrap-data --from-file=internal-auth-mongodb-keyfile=authentication

Note: the authentication file is in the current directory, so no path prefix is given.

Result:

[root@svc-04 mongo]# kubectl -n mongo create secret  generic shared-bootstrap-data --from-file=internal-auth-mongodb-keyfile=authentication 
secret/shared-bootstrap-data created
[root@svc-04 mongo]# kubectl  -n mongo get secret
NAME                    TYPE                                  DATA   AGE
shared-bootstrap-data   Opaque                                1      69s

The authentication file can be deleted once the Secret has been created.
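Before deleting it, you can optionally check that the key material stored in the Secret is byte-for-byte identical to the local file; diff printing nothing means they match:

kubectl -n mongo get secret shared-bootstrap-data \
  -o jsonpath='{.data.internal-auth-mongodb-keyfile}' | base64 -d | diff - authentication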

5. Creating the StatefulSet

5.1 Creating the StatefulSet File

Generate the StatefulSet file mongo-statefulset.yaml with the following content. (The defaultMode of 256 on the keyfile volume is decimal for octal 0400; mongod refuses keyfiles that are group- or world-readable.)

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: mongo
spec:
  serviceName: mongo-svc
  selector:
    matchLabels:
      role: mongo
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: dev
        replicaset: MainRepSet
    spec:
      terminationGracePeriodSeconds: 10
      volumes:
        - name: secrets-volume
          secret:
            secretName: shared-bootstrap-data
            defaultMode: 256
      imagePullSecrets:
        - name: harbor-secret
      containers:
        - name: mongo-container
          image: 172.16.0.107:2180/base/mongo:4.4.9
          command:
            - "mongod"
            - "--bind_ip"
            - "0.0.0.0"
            - "--replSet"
            - "MainRepSet"
            - "--auth"
            - "--clusterAuthMode"
            - "keyFile"
            - "--keyFile"
            - "/etc/secrets-volume/internal-auth-mongodb-keyfile"
            - "--setParameter"
            - "authenticationMechanisms=SCRAM-SHA-1"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: secrets-volume
              readOnly: true
              mountPath: /etc/secrets-volume
            - name: pvcmongo
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: pvcmongo
      spec:
        accessModes: [ "ReadWriteMany" ]
        storageClassName: managed-nfs-storage
        resources:
          requests:
            storage: 10Gi

Run the following command to create it:

kubectl create -f mongo-statefulset.yaml

Check the result:

[root@svc-04 mongo]# kubectl -n mongo get statefulset
NAME    READY   AGE
mongo   3/3     100m
[root@svc-04 mongo]# kubectl -n mongo get pods
NAME                                      READY   STATUS    RESTARTS   AGE
mongo-0                                   1/1     Running   0          51m
mongo-1                                   1/1     Running   0          51m
mongo-2                                   1/1     Running   0          50m
nfs-client-provisioner-7b6645d7d8-n9qjw   1/1     Running   0          105m

With that, the MongoDB cluster has been created.
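You can also check that the volumeClaimTemplates produced one dynamically provisioned claim per replica; the claims are named <template-name>-<pod-name>, i.e. pvcmongo-mongo-0 through pvcmongo-mongo-2, and each should be Bound:

kubectl -n mongo get pvc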


6. Connecting to and Configuring the Cluster

6.1 Getting the Cluster Hostnames

kubectl -n mongo exec -it mongo-0  -- hostname -f
kubectl -n mongo exec -it mongo-1  -- hostname -f
kubectl -n mongo exec -it mongo-2  -- hostname -f

For example, the hostname obtained is: mongo-0.mongo-svc.mongo.svc.cluster.local

Use the hostnames you obtained to build the following mongo shell command; it will be used in the next step.

rs.initiate({
  _id: "MainRepSet",
  version: 1,
  members: [
    { _id: 0, host: "mongo-0.mongo-svc.mongo.svc.cluster.local:27017" },
    { _id: 1, host: "mongo-1.mongo-svc.mongo.svc.cluster.local:27017" },
    { _id: 2, host: "mongo-2.mongo-svc.mongo.svc.cluster.local:27017" }
  ]
});

6.2 Initializing the MongoDB Cluster

Connect to the cluster with the following command:

kubectl -n mongo exec -it mongo-0  -- mongo

The result looks like this:

[root@svc-04 mongo]# kubectl -n mongo exec -it mongo-0  -- mongo
MongoDB shell version v4.4.9
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("fb3ce999-3c70-48a3-bdbb-18283ceb2d9e") }
MongoDB server version: 4.4.9
MainRepSet:PRIMARY> 

Paste the command generated in section 6.1 at this prompt and run it to complete the cluster initialization.

After it completes, check the cluster status.

rs.status();

The output looks like this:

MainRepSet:PRIMARY> rs.status();
{
  "set" : "MainRepSet",
  "date" : ISODate("2021-10-16T11:39:26.267Z"),
  "myState" : 1,
  "term" : NumberLong(1),
  "syncSourceHost" : "",
  "syncSourceId" : -1,
  "heartbeatIntervalMillis" : NumberLong(2000),
  "majorityVoteCount" : 2,
  "writeMajorityCount" : 2,
  "votingMembersCount" : 3,
  "writableVotingMembersCount" : 3,
  "optimes" : {
    "lastCommittedOpTime" : { "ts" : Timestamp(1634384358, 1), "t" : NumberLong(1) },
    "lastCommittedWallTime" : ISODate("2021-10-16T11:39:18.188Z"),
    "readConcernMajorityOpTime" : { "ts" : Timestamp(1634384358, 1), "t" : NumberLong(1) },
    "readConcernMajorityWallTime" : ISODate("2021-10-16T11:39:18.188Z"),
    "appliedOpTime" : { "ts" : Timestamp(1634384358, 1), "t" : NumberLong(1) },
    "durableOpTime" : { "ts" : Timestamp(1634384358, 1), "t" : NumberLong(1) },
    "lastAppliedWallTime" : ISODate("2021-10-16T11:39:18.188Z"),
    "lastDurableWallTime" : ISODate("2021-10-16T11:39:18.188Z")
  },
  "lastStableRecoveryTimestamp" : Timestamp(1634384338, 1),
  "electionCandidateMetrics" : {
    "lastElectionReason" : "electionTimeout",
    "lastElectionDate" : ISODate("2021-10-16T10:42:57.796Z"),
    "electionTerm" : NumberLong(1),
    "lastCommittedOpTimeAtElection" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) },
    "lastSeenOpTimeAtElection" : { "ts" : Timestamp(1634380967, 1), "t" : NumberLong(-1) },
    "numVotesNeeded" : 2,
    "priorityAtElection" : 1,
    "electionTimeoutMillis" : NumberLong(10000),
    "numCatchUpOps" : NumberLong(0),
    "newTermStartDate" : ISODate("2021-10-16T10:42:58.080Z"),
    "wMajorityWriteAvailabilityDate" : ISODate("2021-10-16T10:42:58.800Z")
  },
  "members" : [
    {
      "_id" : 0,
      "name" : "mongo-0.mongo-svc.mongo.svc.cluster.local:27017",
      "health" : 1,
      "state" : 1,
      "stateStr" : "PRIMARY",
      "uptime" : 3695,
      "optime" : { "ts" : Timestamp(1634384358, 1), "t" : NumberLong(1) },
      "optimeDate" : ISODate("2021-10-16T11:39:18Z"),
      "syncSourceHost" : "",
      "syncSourceId" : -1,
      "infoMessage" : "",
      "electionTime" : Timestamp(1634380977, 1),
      "electionDate" : ISODate("2021-10-16T10:42:57Z"),
      "configVersion" : 1,
      "configTerm" : 1,
      "self" : true,
      "lastHeartbeatMessage" : ""
    },
    {
      "_id" : 1,
      "name" : "mongo-1.mongo-svc.mongo.svc.cluster.local:27017",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 3399,
      "optime" : { "ts" : Timestamp(1634384358, 1), "t" : NumberLong(1) },
      "optimeDurable" : { "ts" : Timestamp(1634384358, 1), "t" : NumberLong(1) },
      "optimeDate" : ISODate("2021-10-16T11:39:18Z"),
      "optimeDurableDate" : ISODate("2021-10-16T11:39:18Z"),
      "lastHeartbeat" : ISODate("2021-10-16T11:39:25.509Z"),
      "lastHeartbeatRecv" : ISODate("2021-10-16T11:39:25.990Z"),
      "pingMs" : NumberLong(0),
      "lastHeartbeatMessage" : "",
      "syncSourceHost" : "mongo-0.mongo-svc.mongo.svc.cluster.local:27017",
      "syncSourceId" : 0,
      "infoMessage" : "",
      "configVersion" : 1,
      "configTerm" : 1
    },
    {
      "_id" : 2,
      "name" : "mongo-2.mongo-svc.mongo.svc.cluster.local:27017",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 3399,
      "optime" : { "ts" : Timestamp(1634384358, 1), "t" : NumberLong(1) },
      "optimeDurable" : { "ts" : Timestamp(1634384358, 1), "t" : NumberLong(1) },
      "optimeDate" : ISODate("2021-10-16T11:39:18Z"),
      "optimeDurableDate" : ISODate("2021-10-16T11:39:18Z"),
      "lastHeartbeat" : ISODate("2021-10-16T11:39:25.542Z"),
      "lastHeartbeatRecv" : ISODate("2021-10-16T11:39:26.213Z"),
      "pingMs" : NumberLong(0),
      "lastHeartbeatMessage" : "",
      "syncSourceHost" : "mongo-0.mongo-svc.mongo.svc.cluster.local:27017",
      "syncSourceId" : 0,
      "infoMessage" : "",
      "configVersion" : 1,
      "configTerm" : 1
    }
  ],
  "ok" : 1,
  "$clusterTime" : {
    "clusterTime" : Timestamp(1634384358, 1),
    "signature" : {
      "hash" : BinData(0,"9aHMf5irWqkvcz9HcjajwjrTiIM="),
      "keyId" : NumberLong("7019612849714495491")
    }
  },
  "operationTime" : Timestamp(1634384358, 1)
}

6.3 Creating a User and Testing

Still in the same mongo shell session, create the admin user first (the localhost exception allows this before any users exist), then authenticate as that user and run a quick write/read test.

db.getSiblingDB("admin").createUser({
  user: "main_admin",
  pwd: "KkkKkkPassword",
  roles: [ { role: "root", db: "admin" } ]
});
db.getSiblingDB('admin').auth("main_admin", "KkkKkkPassword");
use test
db.testcoll.insert({a:1})
db.testcoll.find()