Author: 老Z (Lao Z), a cloud-native enthusiast currently focused on cloud-native operations, and a KubeSphere Ambassador.

RocketMQ, part of the Spring Cloud Alibaba family, is message middleware typical of distributed architectures: it uses asynchronous communication and a publish-subscribe message transmission model.

Many projects built on Spring Cloud choose RocketMQ as their message middleware.

The common RocketMQ deployment modes are:

  • Single Master mode
  • Multi Master mode without Slaves
  • Multi Master Multi Slave mode with asynchronous replication
  • Multi Master Multi Slave mode with synchronous double-write

For details on more deployment options, see the official documentation.

This article focuses on deploying the single-Master mode and the multi-Master multi-Slave (asynchronous replication) mode on a K8s cluster.

Single Master Mode

This deployment mode is risky: it runs only one NameServer and one Broker, so a Broker restart or outage makes the whole service unavailable. It is not recommended for production and should only be used in development and test environments.

The deployment follows the images, startup commands, and custom configuration of the containerized deployment used in the official rocketmq-docker project; a sketch of that approach follows.
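
For reference, the upstream quick start boils down to two containers. A minimal sketch, assuming the apache/rocketmq:4.9.4 image used later in this article (the ports are RocketMQ's defaults):

# Start a NameServer (sketch)
docker run -d --name rmqnamesrv -p 9876:9876 apache/rocketmq:4.9.4 sh mqnamesrv
# Start a Broker registered against it
docker run -d --name rmqbroker --link rmqnamesrv:namesrv \
  -e "NAMESRV_ADDR=namesrv:9876" \
  -p 10909:10909 -p 10911:10911 -p 10912:10912 \
  apache/rocketmq:4.9.4 sh mqbroker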

Multi Master Multi Slave Mode with Asynchronous Replication

Each Master is paired with one Slave, forming multiple Master-Slave pairs. HA uses asynchronous replication, with a short (millisecond-level) message lag between master and standby. The pros and cons of this mode:

  • Pros: even if a disk fails, very few messages are lost and real-time message delivery is unaffected; when a Master goes down, consumers can still consume from the Slave, the failover is transparent to applications and needs no manual intervention, and performance is almost identical to the multi-Master mode;
  • Cons: if a Master goes down with a damaged disk, a small number of messages are lost.

The multi-Master multi-Slave (asynchronous replication) mode is suitable for production; the deployment uses the official RocketMQ Operator.

Building Offline Images

This step is optional and applies to offline intranet environments. If you do not set up an intranet registry, keep the default values for the container image parameters in the resource manifests that follow.

This article covers how to build the offline images for deploying RocketMQ in both single-Master mode and multi-Master multi-Slave (asynchronous replication) mode.

  • Single-Master mode directly uses the images from the containerized deployment described in the official RocketMQ documentation.

  • For multi-Master multi-Slave (asynchronous replication) mode, the offline images are built and packaged with the image-building tools that ship with RocketMQ Operator. Many packages must be downloaded from overseas networks during the build; with restricted access, the default success rate is low, so you may need multiple attempts or special measures (you know what I mean).

    Alternatively, you can pull the existing images from Docker Hub manually the traditional way and push them to the private registry.

Perform the following operations on a server that can reach both the internet and the intranet Harbor registry.

Create Projects in Harbor

I like to keep the namespaces of intranet offline images consistent with the images' default namespaces, so create two projects in Harbor: apache and apacherocketmq. You can create the projects manually in the Harbor management UI, or create them automatically from the command line as follows.

curl -u "admin:Harbor12345" -X POST -H "Content-Type: application/json" https://registry.zdevops.com.cn/api/v2.0/projects -d '{ "project_name": "apache", "public": true}'
curl -u "admin:Harbor12345" -X POST -H "Content-Type: application/json" https://registry.zdevops.com.cn/api/v2.0/projects -d '{ "project_name": "apacherocketmq", "public": true}'
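
To confirm the projects were created, the same API can be queried (a quick check; in Harbor's v2.0 API the name parameter is a fuzzy match):

curl -u "admin:Harbor12345" -X GET "https://registry.zdevops.com.cn/api/v2.0/projects?name=apache"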

Install Go 1.16

Building the custom RocketMQ Operator images requires a Go environment, so install and configure it first.

Download the latest release in the Go 1.16 series:

cd /opt/
wget https://golang.google.cn/dl/go1.16.15.linux-amd64.tar.gz

Extract the archive to the target directory:

tar zxvf go1.16.15.linux-amd64.tar.gz -C /usr/local/

Configure the environment variables:

cat >> /etc/profile.d/go.sh << 'EOF'
# go environment
export GOROOT=/usr/local/go
export GOPATH=/srv/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
EOF

GOPATH is the working directory where the code lives; set it according to your own preferences. Note that EOF is quoted above so that $GOROOT and $GOPATH are written literally and only expanded when the profile is sourced.

Configure Go:

go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct

Verify:

source /etc/profile.d/go.sh
go version

Get RocketMQ Operator

Clone the rocketmq-operator code from the official Apache GitHub repository.

cd /srv
git clone -b 0.3.0 https://github.com/apache/rocketmq-operator.git

Build the RocketMQ Operator Image

Edit the Dockerfile:

cd /srv/rocketmq-operator
vi Dockerfile

Notice: the image build process needs to access overseas package sources and image registries, which can be restricted from within China, so switch to domestic package sources and registries ahead of time.

This step is optional; if your access is unrestricted, no changes are needed.

Required changes:

# Line 10 (point the Go module proxy at a domestic mirror to speed up access)
# Before
RUN go mod download
# After
RUN go env -w GOPROXY=https://goproxy.cn,direct && go mod download
# Line 25 (switch the package source to a domestic mirror)
# Before
RUN apk add --no-cache bash gettext nmap-ncat openssl busybox-extras
# After
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g' /etc/apk/repositories && \
    apk add --no-cache bash gettext nmap-ncat openssl busybox-extras

Optional changes:

# The RocketMQ version installed defaults to 4.9.4; change it to pin a specific version
# Line 28: adjust 4.9.4 as needed
ENV ROCKETMQ_VERSION 4.9.4

Build the image:

yum install gcc
cd /srv/rocketmq-operator
go mod tidy
IMAGE_URL=registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0
make docker-build IMG=${IMAGE_URL}

Verify that the image was built:

docker images | grep rocketmq-operator

Push the image:

make docker-push IMG=${IMAGE_URL}

Clean up the temporary image:

docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0

Build the RocketMQ Broker Image

Edit the Dockerfile (optional):

cd /srv/rocketmq-operator/images/broker/alpine
vi Dockerfile

This step is optional; it mainly speeds up package installation. If your access is unrestricted, no changes are needed.

# Line 20 (switch the package source to a domestic mirror)
# Before
RUN apk add --no-cache bash gettext nmap-ncat openssl busybox-extras
# After
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g' /etc/apk/repositories && \
    apk add --no-cache bash gettext nmap-ncat openssl busybox-extras

Edit the image build script:

# Point the image repository address at the intranet registry
sed -i 's#apacherocketmq#registry.zdevops.com.cn/apacherocketmq#g' build-broker-image.sh

Build and push the image:

./build-broker-image.sh 4.9.4

Verify that the image was built:

docker images | grep rocketmq-broker

Clean up the temporary image:

docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0

Build the RocketMQ Name Server Image

Edit the Dockerfile (optional):

cd /srv/rocketmq-operator/images/namesrv/alpine
vi Dockerfile

This step is optional; it mainly speeds up package installation. If your access is unrestricted, no changes are needed.

# Line 20 (switch the package source to a domestic mirror)
# Before
RUN apk add --no-cache bash gettext nmap-ncat openssl busybox-extras
# After
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g' /etc/apk/repositories && \
    apk add --no-cache bash gettext nmap-ncat openssl busybox-extras

Edit the image build script:

# Point the image repository address at the intranet registry
sed -i 's#apacherocketmq#registry.zdevops.com.cn/apacherocketmq#g' build-namesrv-image.sh

Build and push the image:

./build-namesrv-image.sh 4.9.4

Verify that the image was built:

docker images | grep rocketmq-nameserver

Clean up the temporary image:

docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.9.4-alpine-operator-0.3.0

Building Offline Images from the Official Prebuilt Images

The offline image building approach used above for the multi-Master multi-Slave (asynchronous replication) deployment is better suited to local modification and customization. If you simply want to download the official images unmodified and push them to a local registry, follow the approach below.

Download the images:

docker pull apache/rocketmq-operator:0.3.0
docker pull apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0
docker pull apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0

Notice: the latest images in the official repository are 4.5.0, published two years ago.

Re-tag the images:

docker tag apache/rocketmq-operator:0.3.0 registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0
docker tag apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0 registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0
docker tag apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0 registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0

Push them to the private registry:

docker push registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0
docker push registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0
docker push registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0

Clean up the temporary images:

docker rmi apache/rocketmq-operator:0.3.0
docker rmi apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0
docker rmi apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0
docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0
docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0
docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0

Build the RocketMQ Console Image

This article pulls the official image directly as the local offline image; if you need to modify it and rebuild, refer to the official Dockerfile used by RocketMQ Console and build it yourself.

Download the image:

docker pull apacherocketmq/rocketmq-console:2.0.0

Re-tag the image:

docker tag apacherocketmq/rocketmq-console:2.0.0 registry.zdevops.com.cn/apacherocketmq/rocketmq-console:2.0.0

Push it to the private registry:

docker push registry.zdevops.com.cn/apacherocketmq/rocketmq-console:2.0.0

Clean up the temporary images:

docker rmi apacherocketmq/rocketmq-console:2.0.0
docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-console:2.0.0

Prepare the Offline Images for the Single-Master RocketMQ Deployment

The images used by the single-Master RocketMQ deployment differ from those used by RocketMQ Operator in the cluster deployment. To build their offline versions, pull directly from the official registry, re-tag, and push to the local registry.

The specific differences are:

  • The single-Master scheme uses images under the apache namespace on Docker Hub, with one image name shared by nameserver and broker; RocketMQ Operator uses images under the apacherocketmq namespace, with separate image names for nameserver and broker.

  • The management-tool images also differ: the single-Master scheme uses the rocketmq-dashboard image under the apacherocketmq namespace, while RocketMQ Operator uses the rocketmq-console image under the same namespace.

The offline image build process is as follows:

Download the images:

docker pull apache/rocketmq:4.9.4
docker pull apacherocketmq/rocketmq-dashboard:1.0.0

Re-tag the images:

docker tag apache/rocketmq:4.9.4 registry.zdevops.com.cn/apache/rocketmq:4.9.4
docker tag apacherocketmq/rocketmq-dashboard:1.0.0 registry.zdevops.com.cn/apacherocketmq/rocketmq-dashboard:1.0.0

Push them to the private registry:

docker push registry.zdevops.com.cn/apache/rocketmq:4.9.4
docker push registry.zdevops.com.cn/apacherocketmq/rocketmq-dashboard:1.0.0

Clean up the temporary images:

docker rmi apache/rocketmq:4.9.4
docker rmi apacherocketmq/rocketmq-dashboard:1.0.0
docker rmi registry.zdevops.com.cn/apache/rocketmq:4.9.4
docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-dashboard:1.0.0

Deploying in Single Master Mode

Deployment Approach

Based on the components a RocketMQ service uses, the following resources need to be deployed:

  • Broker StatefulSet
  • NameServer StatefulSet
  • NameServer Cluster Service: cluster-internal service
  • Dashboard Deployment
  • Dashboard External Service: external access for dashboard administration
  • ConfigMap: custom Broker configuration file

Resource Manifests

Referring to the containerized startup example configuration from the Apache rocketmq-docker project on GitHub, write resource manifests suited to K8s.

Notice: skills, habits, and environments all differ; what follows is just one simple approach that I use, not necessarily the best one. Write configurations that fit your actual situation.

rocketmq-cm.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: rocketmq-broker-config
  namespace: zdevops
data:
  BROKER_MEM: ' -Xms2g -Xmx2g -Xmn1g '
  broker-common.conf: |-
    brokerClusterName = DefaultCluster
    brokerName = broker-0
    brokerId = 0
    deleteWhen = 04
    fileReservedTime = 48
    brokerRole = ASYNC_MASTER
    flushDiskType = ASYNC_FLUSH

rocketmq-name-service-sts.yaml

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: rocketmq-name-service
  namespace: zdevops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rocketmq-name-service
      name_service_cr: rocketmq-name-service
  template:
    metadata:
      labels:
        app: rocketmq-name-service
        name_service_cr: rocketmq-name-service
    spec:
      volumes:
        - name: host-time
          hostPath:
            path: /etc/localtime
            type: ''
      containers:
        - name: rocketmq-name-service
          image: 'registry.zdevops.com.cn/apache/rocketmq:4.9.4'
          command:
            - /bin/sh
          args:
            - mqnamesrv
          ports:
            - name: tcp-9876
              containerPort: 9876
              protocol: TCP
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 250m
              memory: 512Mi
          volumeMounts:
            - name: rocketmq-namesrv-storage
              mountPath: /home/rocketmq/logs
              subPath: logs
            - name: host-time
              readOnly: true
              mountPath: /etc/localtime
          imagePullPolicy: Always
  volumeClaimTemplates:
    - kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: rocketmq-namesrv-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: glusterfs
        volumeMode: Filesystem
  serviceName: ''
---
kind: Service
apiVersion: v1
metadata:
  name: rocketmq-name-server-service
  namespace: zdevops
spec:
  ports:
    - name: tcp-9876
      protocol: TCP
      port: 9876
      targetPort: 9876
  selector:
    name_service_cr: rocketmq-name-service
  type: ClusterIP

rocketmq-broker-sts.yaml

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: rocketmq-broker-0-master
  namespace: zdevops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rocketmq-broker
      broker_cr: rocketmq-broker
  template:
    metadata:
      labels:
        app: rocketmq-broker
        broker_cr: rocketmq-broker
    spec:
      volumes:
        - name: rocketmq-broker-config
          configMap:
            name: rocketmq-broker-config
            items:
              - key: broker-common.conf
                path: broker-common.conf
            defaultMode: 420
        - name: host-time
          hostPath:
            path: /etc/localtime
            type: ''
      containers:
        - name: rocketmq-broker
          image: 'registry.zdevops.com.cn/apache/rocketmq:4.9.4'
          command:
            - /bin/sh
          args:
            - mqbroker
            - "-c"
            - /home/rocketmq/conf/broker-common.conf
          ports:
            - name: tcp-vip-10909
              containerPort: 10909
              protocol: TCP
            - name: tcp-main-10911
              containerPort: 10911
              protocol: TCP
            - name: tcp-ha-10912
              containerPort: 10912
              protocol: TCP
          env:
            - name: NAMESRV_ADDR
              value: 'rocketmq-name-server-service.zdevops:9876'
            - name: BROKER_MEM
              valueFrom:
                configMapKeyRef:
                  name: rocketmq-broker-config
                  key: BROKER_MEM
          resources:
            limits:
              cpu: 500m
              memory: 12Gi
            requests:
              cpu: 250m
              memory: 2Gi
          volumeMounts:
            - name: host-time
              readOnly: true
              mountPath: /etc/localtime
            - name: rocketmq-broker-storage
              mountPath: /home/rocketmq/logs
              subPath: logs/broker-0-master
            - name: rocketmq-broker-storage
              mountPath: /home/rocketmq/store
              subPath: store/broker-0-master
            - name: rocketmq-broker-config
              mountPath: /home/rocketmq/conf/broker-common.conf
              subPath: broker-common.conf
          imagePullPolicy: Always
  volumeClaimTemplates:
    - kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: rocketmq-broker-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 8Gi
        storageClassName: glusterfs
        volumeMode: Filesystem
  serviceName: ''

rocketmq-dashboard.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  name: rocketmq-dashboard
  namespace: zdevops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rocketmq-dashboard
  template:
    metadata:
      labels:
        app: rocketmq-dashboard
    spec:
      containers:
        - name: rocketmq-dashboard
          image: 'registry.zdevops.com.cn/apacherocketmq/rocketmq-dashboard:1.0.0'
          ports:
            - name: http-8080
              containerPort: 8080
              protocol: TCP
          env:
            - name: JAVA_OPTS
              value: >-
                -Drocketmq.namesrv.addr=rocketmq-name-server-service.zdevops:9876
                -Dcom.rocketmq.sendMessageWithVIPChannel=false
          resources:
            limits:
              cpu: 500m
              memory: 2Gi
            requests:
              cpu: 50m
              memory: 512Mi
          imagePullPolicy: Always
---
kind: Service
apiVersion: v1
metadata:
  name: rocketmq-dashboard-service
  namespace: zdevops
spec:
  ports:
    - name: http-8080
      protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 31080
  selector:
    app: rocketmq-dashboard
  type: NodePort

GitOps

This step is optional. I am used to writing or editing resource manifests on a personal development server, committing them to a Git server (GitLab, Gitee, GitHub, etc.), then pulling and applying them from the Git server on the K8s nodes. This versions the manifests and implements simple operations GitOps.

For convenience of demonstration, all the K8s resource manifests in this series live in a single k8s-yaml repository; in real work, each application should have its own Git repository, which makes version control of its configuration easier.

In practice you can skip this step and write and apply the manifests directly on a K8s node, or follow my approach and implement simple GitOps.

Operate on the personal ops/development server:

# Create the rocketmq/single directory in the existing code repository
mkdir -p rocketmq/single
# Write the resource manifests
vi rocketmq/single/rocketmq-cm.yaml
vi rocketmq/single/rocketmq-name-service-sts.yaml
vi rocketmq/single/rocketmq-broker-sts.yaml
vi rocketmq/single/rocketmq-dashboard.yaml
# Commit to Git
git add rocketmq
git commit -am 'Add RocketMQ single-master resource manifests'
git push

Deploy the Resources

Operate on a K8s cluster master node, or on a separate ops management server.

Update the local copy of the manifest repository:

cd /srv/k8s-yaml
git pull

Deploy the resources (step by step; choose one of the two approaches)

Test environments use step-by-step deployment so that the correctness of each resource manifest can be verified.

cd /srv/k8s-yaml
kubectl apply -f rocketmq/single/rocketmq-cm.yaml
kubectl apply -f rocketmq/single/rocketmq-name-service-sts.yaml
kubectl apply -f rocketmq/single/rocketmq-broker-sts.yaml
kubectl apply -f rocketmq/single/rocketmq-dashboard.yaml

Deploy the resources (one shot; choose one of the two approaches)

In practice, you can apply the whole directory for one-shot automated deployment; in formal development and production environments, use the directory approach for fast deployment.

kubectl apply -f rocketmq/single/
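
Optionally, a client-side dry run first catches manifest mistakes without touching the cluster:

kubectl apply -f rocketmq/single/ --dry-run=client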

Verify

ConfigMap:

$ kubectl get cm -n zdevops
NAME                     DATA   AGE
kube-root-ca.crt         1      17d
rocketmq-broker-config   2      22s

StatefulSet:

$ kubectl get sts -o wide -n zdevops
NAME                       READY   AGE   CONTAINERS              IMAGES
rocketmq-broker-0-master   1/1     11s   rocketmq-broker         registry.zdevops.com.cn/apache/rocketmq:4.9.4
rocketmq-name-service      1/1     12s   rocketmq-name-service   registry.zdevops.com.cn/apache/rocketmq:4.9.4

Deployment:

$ kubectl get deploy -o wide -n zdevops
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS           IMAGES                                                            SELECTOR
rocketmq-dashboard   1/1     1            1           31s   rocketmq-dashboard   registry.zdevops.com.cn/apacherocketmq/rocketmq-dashboard:1.0.0   app=rocketmq-dashboard

Pods:

$ kubectl get pods -o wide -n zdevops
NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE              NOMINATED NODE   READINESS GATES
rocketmq-broker-0-master-0           1/1     Running   0          77s   10.233.116.103   ks-k8s-master-2   <none>           <none>
rocketmq-dashboard-b5dbb9d88-cwhqc   1/1     Running   0          3s    10.233.87.115    ks-k8s-master-1   <none>           <none>
rocketmq-name-service-0              1/1     Running   0          78s   10.233.116.102   ks-k8s-master-2   <none>           <none>

Service:

$ kubectl get svc -o wide -n zdevops
NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE     SELECTOR
rocketmq-dashboard-service     NodePort    10.233.5.237    <none>        8080:31080/TCP   74s     app=rocketmq-dashboard
rocketmq-name-server-service   ClusterIP   10.233.3.61     <none>        9876/TCP         2m29s   name_service_cr=rocketmq-name-service

Open IP:31080 of any K8s cluster node in a browser to see the RocketMQ console's management interface.
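
If NodePort access is blocked on your network, port-forwarding the Service is a quick alternative check (run from any machine with kubectl access):

kubectl port-forward svc/rocketmq-dashboard-service 8080:8080 -n zdevops
# then open http://localhost:8080 in a browser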

Clean Up the Resources

To uninstall RocketMQ, or to clean up a failed installation before reinstalling, use the following process on the K8s cluster.

Clean up the StatefulSets:

kubectl delete sts rocketmq-broker-0-master -n zdevops
kubectl delete sts rocketmq-name-service -n zdevops

Clean up the Deployment:

kubectl delete deployments rocketmq-dashboard -n zdevops

Clean up the ConfigMap:

kubectl delete cm rocketmq-broker-config -n zdevops

Clean up the Services:

kubectl delete svc rocketmq-name-server-service -n zdevops
kubectl delete svc rocketmq-dashboard-service -n zdevops 

Clean up the persistent volume claims:

kubectl delete pvc rocketmq-namesrv-storage-rocketmq-name-service-0 -n zdevops
kubectl delete pvc rocketmq-broker-storage-rocketmq-broker-0-master-0 -n zdevops

You can also clean up with the resource manifests, which is simpler and faster (persistent volume claims are not cleaned up automatically and must be deleted manually).

$ kubectl delete -f rocketmq/single/
statefulset.apps "rocketmq-broker-0-master" deleted
configmap "rocketmq-broker-config" deleted
deployment.apps "rocketmq-dashboard" deleted
service "rocketmq-dashboard-service" deleted
statefulset.apps "rocketmq-name-service" deleted
service "rocketmq-name-server-service" deleted

Deploying in Multi Master Multi Slave (Asynchronous Replication) Mode

Deployment Approach

Deploying RocketMQ in multi-Master multi-Slave (asynchronous replication) mode with the official RocketMQ Operator is fast and convenient, and scaling out is easy as well.

The default configuration deploys 1 Master and 1 corresponding Slave; after deployment you can scale out Masters and Slaves as needed, as sketched below.
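
For illustration, the topology is driven by two fields in the example Broker CR; the fragment below reflects the defaults as I read them (the comments are my interpretation):

# example/rocketmq_v1alpha1_rocketmq_cluster.yaml (fragment)
spec:
  size: 1            # number of broker groups, i.e. Masters
  replicaPerGroup: 1 # number of Slaves per broker group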

Get RocketMQ Operator

# specify the version when cloning
cd /srv 
git clone -b 0.3.0 https://github.com/apache/rocketmq-operator.git

Prepare the Resource Manifests

The resource manifests demonstrated here directly modify the rocketmq-operator defaults. For production, derive a standard set of configuration files suited to your environment from the defaults and keep them in a Git repository.

Add or modify the namespace in the deploy resource manifests:

cd /srv/rocketmq-operator
sed -i 'N;8 a \  namespace: zdevops' deploy/crds/rocketmq.apache.org_brokers.yaml
sed -i 'N;8 a \  namespace: zdevops' deploy/crds/rocketmq.apache.org_consoles.yaml
sed -i 'N;8 a \  namespace: zdevops' deploy/crds/rocketmq.apache.org_nameservices.yaml
sed -i 'N;8 a \  namespace: zdevops' deploy/crds/rocketmq.apache.org_topictransfers.yaml
sed -i 'N;18 a \  namespace: zdevops' deploy/operator.yaml
sed -i 'N;18 a \  namespace: zdevops' deploy/role_binding.yaml
sed -i 's/namespace: default/namespace: zdevops/g' deploy/role_binding.yaml
sed -i 'N;18 a \  namespace: zdevops' deploy/service_account.yaml
sed -i 'N;20 a \  namespace: zdevops' deploy/role.yaml

Remember: this step can only be run once; if it fails, remove the changes and run it again.

After it completes, be sure to check that the result is as expected: grep -r zdevops deploy/*

Modify the namespace in the example resource manifests:

sed -i 's/namespace: default/namespace: zdevops/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml
sed -i 'N;18 a \  namespace: zdevops' example/rocketmq_v1alpha1_cluster_service.yaml

Point the image addresses at the intranet registry:

sed -i 's#apache/rocketmq-operator:0.3.0#registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0#g' deploy/operator.yaml
sed -i 's#apacherocketmq#registry.zdevops.com.cn/apacherocketmq#g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml 

Change the RocketMQ version (optional):

sed -i 's/4.5.0/4.9.4/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml

Notice: the default example manifests deploy RocketMQ cluster version 4.5.0; adjust to your needs.

Change the NameService network mode (optional):

sed -i 's/hostNetwork: true/hostNetwork: false/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml
sed -i 's/dnsPolicy: ClusterFirstWithHostNet/dnsPolicy: ClusterFirst/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml

Notice: the official example enables hostNetwork by default, which suits serving applications both inside and outside the K8s cluster at the same time; adjust to your actual needs.

I personally prefer to disable hostNetwork rather than share the deployment with external applications; if sharing is needed, I would rather deploy RocketMQ independently outside the cluster.

Change storageClassName to glusterfs:

sed -i 's/storageClassName: rocketmq-storage/storageClassName: glusterfs/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml
sed -i 's/storageMode: EmptyDir/storageMode: StorageClass/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml

Notice: in the demo environment, the storageClassName for GlusterFS storage is glusterfs; adjust to your actual environment.

Change nameServers to the DNS name form:

sed -i 's/nameServers: ""/nameServers: "name-server-service.zdevops:9876"/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml

Notice: name-server-service.zdevops is the NameServer Service name combined with the project (namespace) name.

The default configuration uses the pod ip:port form; once a Pod IP changes, the Console can no longer manage the cluster, and the Console does not update the configuration on its own. Leaving the field empty may also produce arbitrary configuration, so be sure to change it in advance.
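
After the change, the relevant fragment of the Broker spec should read as follows (shown for reference):

# example/rocketmq_v1alpha1_rocketmq_cluster.yaml (fragment)
nameServers: "name-server-service.zdevops:9876"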

Change the NodePort for external access to the RocketMQ Console:

sed -i 's/nodePort: 30000/nodePort: 31080/g' example/rocketmq_v1alpha1_cluster_service.yaml

Notice: the official example defaults to port 30000; adjust to your needs.

Modify the Service configuration for the RocketMQ NameServer and Console:

sed -i '32,46s/^#//g' example/rocketmq_v1alpha1_cluster_service.yaml
sed -i 's/nodePort: 30001/nodePort: 31081/g' example/rocketmq_v1alpha1_cluster_service.yaml
sed -i 's/namespace: default/namespace: zdevops/g' example/rocketmq_v1alpha1_cluster_service.yaml

The NameServer uses NodePort by default; if it is consumed purely inside the K8s cluster, you can change it to ClusterIP, for example as below.
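
A one-liner along these lines would make the switch (a sketch; the same file also defines the Console Service, so check what the pattern matches in your copy before running it):

sed -i 's/type: NodePort/type: ClusterIP/' example/rocketmq_v1alpha1_cluster_service.yaml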

GitOps

For production use, it is recommended to consolidate the manifests modified above into a standalone set: remove the superfluous files from the rocketmq-operator project, form a set of resource manifests that fits your own business needs, and keep it under Git version control.

The single-Master deployment section already walked through this workflow in detail, so it is not repeated here.

Deploy RocketMQ Operator (Automatic)

The automatic deployment method described in the official docs suits environments with internet access: the process downloads the controller-gen and kustomize binaries, along with a pile of Go dependencies.

It is not suitable for offline intranet environments, so it is only mentioned briefly here; this article relies on the manual deployment described next.

Deploy RocketMQ Operator:

make deploy

Deploy RocketMQ Operator (Manual)

Deploy RocketMQ Operator:

kubectl create -f deploy/crds/rocketmq.apache.org_brokers.yaml
kubectl create -f deploy/crds/rocketmq.apache.org_nameservices.yaml
kubectl create -f deploy/crds/rocketmq.apache.org_consoles.yaml
kubectl create -f deploy/crds/rocketmq.apache.org_topictransfers.yaml
kubectl create -f deploy/service_account.yaml
kubectl create -f deploy/role.yaml
kubectl create -f deploy/role_binding.yaml
kubectl create -f deploy/operator.yaml

Verify the CRDs:

$ kubectl get crd | grep rocketmq.apache.org
brokers.rocketmq.apache.org                           2022-11-09T02:54:52Z
consoles.rocketmq.apache.org                          2022-11-09T02:54:54Z
nameservices.rocketmq.apache.org                      2022-11-09T02:54:53Z
topictransfers.rocketmq.apache.org                    2022-11-09T02:54:54Z

Verify RocketMQ Operator:

$ kubectl get deploy -n zdevops -o wide
NAME                READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                                                           SELECTOR
rocketmq-operator   1/1     1            1           6m46s   manager      registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0   name=rocketmq-operator
$ kubectl get pods -n zdevops -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP              NODE              NOMINATED NODE   READINESS GATES
rocketmq-operator-7cc6b48796-htpk8   1/1     Running   0          2m28s   10.233.116.70   ks-k8s-master-2   <none>           <none>

Deploy the RocketMQ Cluster

Create the Services:

$ kubectl apply -f example/rocketmq_v1alpha1_cluster_service.yaml
service/console-service created
service/name-server-service created

Create the cluster:

$ kubectl apply -f example/rocketmq_v1alpha1_rocketmq_cluster.yaml
configmap/broker-config created
broker.rocketmq.apache.org/broker created
nameservice.rocketmq.apache.org/name-service created
console.rocketmq.apache.org/console created

Verify

StatefulSet:

$ kubectl get sts -o wide -n zdevops
NAME                 READY   AGE   CONTAINERS     IMAGES
broker-0-master      1/1     27s   broker         registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
broker-0-replica-1   1/1     27s   broker         registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
name-service         1/1     27s   name-service   registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.9.4-alpine-operator-0.3.0

Deployment:

$ kubectl get deploy -o wide -n zdevops
NAME                READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                                                           SELECTOR
console             1/1     1            1           52s     console      registry.zdevops.com.cn/apacherocketmq/rocketmq-console:2.0.0    app=rocketmq-console
rocketmq-operator   1/1     1            1           4h43m   manager      registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0   name=rocketmq-operator

Pod:

$ kubectl get pods -o wide -n zdevops
NAME                                 READY   STATUS    RESTARTS      AGE     IP              NODE              NOMINATED NODE   READINESS GATES
broker-0-master-0                    1/1     Running   0             47s     10.233.87.24    ks-k8s-master-1   <none>           <none>
broker-0-replica-1-0                 1/1     Running   0             17s     10.233.117.28   ks-k8s-master-0   <none>           <none>
console-8d685798f-5pwct              1/1     Running   0             116s    10.233.116.84   ks-k8s-master-2   <none>           <none>
name-service-0                       1/1     Running   0             96s     10.233.116.85   ks-k8s-master-2   <none>           <none>
rocketmq-operator-7cc6b48796-htpk8   1/1     Running   2 (98s ago)   4h39m   10.233.116.70   ks-k8s-master-2   <none>           <none>

Services:

$ kubectl get svc -o wide -n zdevops
NAME                                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE   SELECTOR
console-service                                          NodePort    10.233.38.15    <none>        8080:31080/TCP   21m   app=rocketmq-console
name-server-service                                      NodePort    10.233.56.238   <none>        9876:31081/TCP   21m   name_service_cr=name-service   

Open IP:31080 of any K8s cluster node in a browser to see the RocketMQ console's management interface.

Clean Up the Resources

Clean up the RocketMQ cluster

If the cluster deployment fails, or it needs to be redeployed, delete resources in the following order.

kubectl delete -f example/rocketmq_v1alpha1_rocketmq_cluster.yaml
kubectl delete -f example/rocketmq_v1alpha1_cluster_service.yaml

Clean up RocketMQ Operator

kubectl delete -f deploy/crds/rocketmq.apache.org_brokers.yaml
kubectl delete -f deploy/crds/rocketmq.apache.org_nameservices.yaml
kubectl delete -f deploy/crds/rocketmq.apache.org_consoles.yaml
kubectl delete -f deploy/crds/rocketmq.apache.org_topictransfers.yaml
kubectl delete -f deploy/service_account.yaml
kubectl delete -f deploy/role.yaml
kubectl delete -f deploy/role_binding.yaml
kubectl delete -f deploy/operator.yaml

Clean up the persistent volumes

The storage volumes related to the Broker and NameServer must be found and deleted manually.

# Find the persistent volume claims
$ kubectl get pvc -n zdevops
NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
broker-storage-broker-0-master-0      Bound    pvc-6a78b573-d72a-47ca-9012-5bc888dfcb0f   8Gi        RWO            glusterfs      3m54s
broker-storage-broker-0-replica-1-0   Bound    pvc-4f096942-505d-4e34-ac7f-b871b9f33df3   8Gi        RWO            glusterfs      3m54s
namesrv-storage-name-service-0        Bound    pvc-2c45a77e-3ca1-4eab-bb57-8374aa9068d3   1Gi        RWO            glusterfs      3m54s
# Delete the persistent volume claims
kubectl delete pvc namesrv-storage-name-service-0 -n zdevops
kubectl delete pvc broker-storage-broker-0-master-0 -n zdevops
kubectl delete pvc broker-storage-broker-0-replica-1-0 -n zdevops

Scaling Out the NameServers

If the current name service cluster size does not meet your needs, you can simply use RocketMQ-Operator to scale the name service cluster up or down.

Scaling the name service requires writing and applying a separate resource manifest; refer to the official Name Server Cluster Scale example and adapt it to the rocketmq-operator configuration of your actual environment.

Notice: do not change the replica count directly on the deployed resources; direct edits do not take effect and will be reverted by the Operator.

Write the NameServer scaling resource manifest, rocketmq_v1alpha1_nameservice_cr.yaml:

apiVersion: rocketmq.apache.org/v1alpha1
kind: NameService
metadata:
  name: name-service
  namespace: zdevops
spec:
  size: 2
  nameServiceImage: registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.9.4-alpine-operator-0.3.0
  imagePullPolicy: Always
  hostNetwork: false
  dnsPolicy: ClusterFirst
  resources:
    requests:
      memory: "512Mi"
      cpu: "250m"
    limits:
      memory: "1024Mi"
      cpu: "500m"
  storageMode: StorageClass
  hostPath: /data/rocketmq/nameserver
  volumeClaimTemplates:
    - metadata:
        name: namesrv-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: glusterfs
        resources:
          requests:
            storage: 1Gi

Apply the scale-out:

kubectl apply -f rocketmq/cluster/rocketmq_v1alpha1_nameservice_cr.yaml

Verify the StatefulSet:

$ kubectl get sts name-service -o wide -n zdevops
NAME           READY   AGE   CONTAINERS     IMAGES
name-service   2/2     16m   name-service   registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.9.4-alpine-operator-0.3.0

Verify the Pods:

$ kubectl get pods -o wide -n zdevops
NAME                                 READY   STATUS    RESTARTS   AGE    IP               NODE              NOMINATED NODE   READINESS GATES
broker-0-master-0                    1/1     Running   0          18m    10.233.87.117    ks-k8s-master-1   <none>           <none>
broker-0-replica-1-0                 1/1     Running   0          43s    10.233.117.99    ks-k8s-master-0   <none>           <none>
console-8d685798f-hnmvg              1/1     Running   0          18m    10.233.116.113   ks-k8s-master-2   <none>           <none>
name-service-0                       1/1     Running   0          18m    10.233.116.114   ks-k8s-master-2   <none>           <none>
name-service-1                       1/1     Running   0          110s   10.233.87.120    ks-k8s-master-1   <none>           <none>
rocketmq-operator-6db8ccc685-5hkk8   1/1     Running   0          18m    10.233.116.112   ks-k8s-master-2   <none>           <none>

Special Notes

Be very cautious about scaling the NameServers. In actual testing, scaling the NameServers caused every existing Broker Master except Broker-0's Master, plus all of the Slaves, to be rebuilt. According to the official documentation, the Operator notifies all Brokers to update their name service list parameters so that they can register with the new NameServer Service.

Meanwhile, under the allowRestart: true policy, the Brokers are updated gradually, so the update is not perceived by producer or consumer clients; in theory, business traffic is therefore unaffected (not verified in practice).
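
One way to watch the rebuild converge is to query the cluster from inside a NameServer pod with the stock mqadmin tool (a sketch, assuming the image ships the standard RocketMQ bin/ tools):

kubectl -n zdevops exec -it name-service-0 -- sh mqadmin clusterList -n localhost:9876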

However, after all the Broker Masters and Slaves were rebuilt, the cluster node information was unstable when checking cluster status: sometimes 3 nodes were visible, sometimes 4.

Therefore, in production it is best to set the NameServer replica count to 2 or 3 at the initial deployment, and avoid scaling later unless you can handle all the consequences of doing so.

Scaling Out the Brokers

Usually, as the business grows, the existing Broker cluster size may no longer meet your needs. You can simply use RocketMQ-Operator to upgrade and scale out the Broker cluster.

Scaling Brokers requires writing and applying a separate resource manifest; refer to the official Broker Cluster Scale example and adapt it to the rocketmq-operator configuration of your actual environment.

Write the Broker scaling resource manifest, rocketmq_v1alpha1_broker_cr.yaml:

apiVersion: rocketmq.apache.org/v1alpha1
kind: Broker
metadata:
  name: broker
  namespace: zdevops
spec:
  size: 2
  nameServers: "name-server-service.zdevops:9876"
  replicaPerGroup: 1
  brokerImage: registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
  imagePullPolicy: Always
  resources:
    requests:
      memory: "2048Mi"
      cpu: "250m"
    limits:
      memory: "12288Mi"
      cpu: "500m"
  allowRestart: true
  storageMode: StorageClass
  hostPath: /data/rocketmq/broker
  # scalePodName is [Broker name]-[broker group number]-master-0
  scalePodName: broker-0-master-0
  env:
    - name: BROKER_MEM
      valueFrom:
        configMapKeyRef:
          name: broker-config
          key: BROKER_MEM
  volumes:
    - name: broker-config
      configMap:
        name: broker-config
        items:
          - key: broker-common.conf
            path: broker-common.conf
  volumeClaimTemplates:
    - metadata:
        name: broker-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: glusterfs
        resources:
          requests:
            storage: 8Gi

Notice: pay attention to the key field scalePodName: broker-0-master-0.

It selects the source Broker pod, from which old metadata such as topic and subscription information is transferred to the newly created Brokers.

Apply the Broker scale-out:

kubectl apply -f rocketmq/cluster/rocketmq_v1alpha1_broker_cr.yaml

Notice: on success, a new Broker Pod group is deployed, and before starting the new Broker the Operator copies the metadata from the source Broker Pod into the newly created Broker Pod, so the new Broker reloads the existing topic and subscription information.
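
To spot-check that the metadata transfer worked, you can list the topics visible through the NameServers once the new broker group is up (again a sketch based on the stock mqadmin tool):

kubectl -n zdevops exec -it broker-1-master-0 -- sh mqadmin topicList -n name-server-service.zdevops:9876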

Verify the StatefulSets:

$ kubectl get sts  -o wide -n zdevops
NAME                 READY   AGE   CONTAINERS     IMAGES
broker-0-master      1/1     43m   broker         registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
broker-0-replica-1   1/1     43m   broker         registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
broker-1-master      1/1     27s   broker         registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
broker-1-replica-1   1/1     27s   broker         registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
name-service         2/2     43m   name-service   registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.9.4-alpine-operator-0.3.0

Verify the Pods:

$ kubectl get pods -o wide -n zdevops
NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE              NOMINATED NODE   READINESS GATES
broker-0-master-0                    1/1     Running   0          44m   10.233.87.117    ks-k8s-master-1   <none>           <none>
broker-0-replica-1-0                 1/1     Running   0          26m   10.233.117.99    ks-k8s-master-0   <none>           <none>
broker-1-master-0                    1/1     Running   0          72s   10.233.116.117   ks-k8s-master-2   <none>           <none>
broker-1-replica-1-0                 1/1     Running   0          72s   10.233.117.100   ks-k8s-master-0   <none>           <none>
console-8d685798f-hnmvg              1/1     Running   0          44m   10.233.116.113   ks-k8s-master-2   <none>           <none>
name-service-0                       1/1     Running   0          44m   10.233.116.114   ks-k8s-master-2   <none>           <none>
name-service-1                       1/1     Running   0          27m   10.233.87.120    ks-k8s-master-1   <none>           <none>
rocketmq-operator-6db8ccc685-5hkk8   1/1     Running   0          44m   10.233.116.112   ks-k8s-master-2   <none>           <none>

Verify in the KubeSphere console and in the RocketMQ console that the new broker group is present and running.

FAQ

gcc build tools not installed

Error message:

[root@zdevops-master rocketmq-operator]# make docker-build IMG=${IMAGE_URL}
/data/k8s-yaml/rocketmq-operator/bin/controller-gen rbac:roleName=rocketmq-operator crd:generateEmbeddedObjectMeta=true webhook paths="./..." output:dir=deploy output:crd:artifacts:config=deploy/crds
head -n 14 deploy/role_binding.yaml > deploy/role.yaml.bak
cat deploy/role.yaml >> deploy/role.yaml.bak
rm deploy/role.yaml && mv deploy/role.yaml.bak deploy/role.yaml
/data/k8s-yaml/rocketmq-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
/usr/local/go/src/net/cgo_linux.go:12:8: no such package located
Error: not all generators ran successfully
run `controller-gen object:headerFile=hack/boilerplate.go.txt paths=./... -w` to see all available markers, or `controller-gen object:headerFile=hack/boilerplate.go.txt paths=./... -h` for usage
make: *** [generate] Error 1

Solution:

$ yum install gcc

go mod errors

Error message:

# error when running make docker-build IMG=${IMAGE_URL}
go: creating new go.mod: module tmp
Downloading sigs.k8s.io/controller-tools/cmd/controller-gen@v0.7.0
go get: added sigs.k8s.io/controller-tools v0.7.0
/data/build/rocketmq-operator/bin/controller-gen rbac:roleName=rocketmq-operator crd:generateEmbeddedObjectMeta=true webhook paths="./..." output:dir=deploy output:crd:artifacts:config=deploy/crds
Error: err: exit status 1: stderr: go: github.com/google/uuid@v1.1.2: missing go.sum entry; to add it:
        go mod download github.com/google/uuid
Usage:
  controller-gen [flags]
......
output rules (optionally as output:<generator>:...)
+output:artifacts[:code=<string>],config=<string>  package  outputs artifacts to different locations, depending on whether they're package-associated or not.   
+output:dir=<string>                               package  outputs each artifact to the given directory, regardless of if it's package-associated or not.      
+output:none                                       package  skips outputting anything.                                                                          
+output:stdout                                     package  outputs everything to standard-out, with no separation.                                             
run `controller-gen rbac:roleName=rocketmq-operator crd:generateEmbeddedObjectMeta=true webhook paths=./... output:dir=deploy output:crd:artifacts:config=deploy/crds -w` to see all available markers, or `controller-gen rbac:roleName=rocketmq-operator crd:generateEmbeddedObjectMeta=true webhook paths=./... output:dir=deploy output:crd:artifacts:config=deploy/crds -h` for usage
make: *** [manifests] Error 1

Solution:

go mod tidy

Conclusion

This article is only an introductory walkthrough of deploying RocketMQ on the K8s platform in single-Master mode and in multi-Master multi-Slave (asynchronous replication) mode.

In production you will still need to optimize the setup for your actual environment, for example adjusting the number of Brokers in the cluster, the allocation of Masters and Slaves, performance tuning, and configuration tuning.
