Author: Lao Z, O&M architect at the Shandong branch of China Telecom Digital Intelligence Technology Co., Ltd. and a cloud native enthusiast, currently focused on cloud native operations; his cloud native stack covers Kubernetes, KubeSphere, DevOps, OpenStack, Ansible, and more.

KubeKey is an open-source, lightweight tool for deploying K8s clusters.

It provides a flexible, fast, and convenient way to install Kubernetes/K3s only, or to install K8s/K3s together with KubeSphere and other cloud native add-ons. It is also an effective tool for scaling and upgrading clusters.

KubeKey v2.1.0 introduced the concepts of manifest and artifact, giving users a solution for deploying K8s clusters offline.

A manifest is a text file that describes the current K8s cluster information and defines what the artifact should contain.

Previously, users had to prepare the deployment tooling, image tarballs, and other related binaries themselves, and every user needed a different K8s version and a different set of images. Now with KubeKey, a user only needs to define, in a manifest file, what the cluster environment to be deployed offline requires, then export an artifact file from that manifest to finish the preparation. For the offline deployment itself, only KubeKey and the artifact are needed to quickly and easily set up an image registry and a K8s cluster in the target environment.

KubeKey can generate a manifest file in two ways:

  • Generate the manifest file from an existing running cluster as the source. This is the officially recommended approach; see the offline deployment documentation on the KubeSphere website for details, and the one-command sketch after this list.
  • Write the manifest file by hand, based on the template file.
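
For reference, method 1 comes down to a single command. A minimal sketch, assuming kk has already been downloaded onto a node that can reach the source cluster through a working kubeconfig; it writes manifest-sample.yaml to the current directory:

# Generate a manifest from the running cluster that kubeconfig points at
$ export KKZONE=cn
$ ./kk create manifest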

The advantage of the first method is that it reproduces the running environment 1:1, but it requires a cluster to be deployed in advance, which is less flexible, and not everyone has that option.

Therefore, following the official offline documentation, this article takes the hand-written manifest approach to install and deploy in an offline environment.

What this article covers

  • Level: beginner
  • Understand the concepts of manifest and artifact
  • Learn how to write a manifest
  • Build an artifact from a manifest
  • Deploy KubeSphere and Kubernetes offline

Demo server configuration

| Hostname | IP | CPU (cores) | Memory (GB) | System Disk (GB) | Data Disk (GB) | Purpose |
| --- | --- | --- | --- | --- | --- | --- |
| zdevops-master | 192.168.9.9 | 2 | 4 | 40 | 200 | Ansible ops control node |
| ks-k8s-master-0 | 192.168.9.91 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker/Ceph |
| ks-k8s-master-1 | 192.168.9.92 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker/Ceph |
| ks-k8s-master-2 | 192.168.9.93 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker/Ceph |
| es-node-0 | 192.168.9.95 | 2 | 8 | 40 | 200 | ElasticSearch |
| es-node-1 | 192.168.9.96 | 2 | 8 | 40 | 200 | ElasticSearch |
| es-node-2 | 192.168.9.97 | 2 | 8 | 40 | 200 | ElasticSearch |
| harbor | 192.168.9.89 | 2 | 8 | 40 | 200 | Harbor |
| Total (8 hosts) |  | 22 | 84 | 320 | 2200 |  |

Software versions in the demo environment

  • OS: CentOS-7.9-x86_64

  • KubeSphere: 3.3.0

  • Kubernetes: 1.24.1

  • KubeKey: v2.2.1

  • Ansible: 2.8.20

  • Harbor: 2.5.1

Building the offline deployment resources

Download KubeKey

# Run on the zdevops-master ops/dev server

# Use the CN download zone (for when access to GitHub is restricted)
$ export KKZONE=cn

# Download KubeKey
$ mkdir /data/kubekey
$ cd /data/kubekey/
$ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.1 sh -

Get the manifest template

See github.com/kubesphere/… for the template.

It provides two reference examples, a simple one and a full one; the simple one is sufficient here.

Get the ks-installer images-list

$ wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/images-list.txt

The image list in that file uses Docker Hub and the public repositories where the other components are hosted. For users in China, it is recommended to uniformly change the prefix to registry.cn-beijing.aliyuncs.com/kubesphereio.

The complete, modified image list is shown in the manifest file below.
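
For reference, a sketch of the prefix rewrite with sed, assuming each image entry in images-list.txt ends with name:tag after a registry/namespace prefix (adjust the pattern if your copy of the list differs):

# Skip comment (##...) and blank lines, then replace everything up to the
# last "/" with the Aliyun prefix
$ sed -E -e '/^(#|$)/b' \
    -e 's#^.*/#registry.cn-beijing.aliyuncs.com/kubesphereio/#' \
    images-list.txt > images-list-cn.txt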

Note that only busybox is kept from the example-images; the rest are not used in this article.

Get the OS dependency packages

$ wget https://github.com/kubesphere/kubekey/releases/download/v2.2.1/centos7-rpms-amd64.iso

Place this ISO file in the /data/kubekey directory on the server used to build the offline resources.
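
For example, assuming the ISO was downloaded to the current working directory:

$ mv centos7-rpms-amd64.iso /data/kubekey/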

Generate the manifest file

Based on the files and information above, generate the final manifest.yaml.

Name it ks-v3.3.0-manifest.yaml.

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    osImage: CentOS Linux 7 (Core)
    repository:
      iso:
        localPath: "/data/kubekey/centos7-rpms-amd64.iso"
        url:
  kubernetesDistributions:
  - type: kubernetes
    version: v1.24.1
  components:
    helm: 
      version: v3.6.3
    cni: 
      version: v0.9.1
    etcd: 
      version: v3.4.13
    containerRuntimes:
    - type: containerd
      version: 1.6.4
    crictl: 
      version: v1.24.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.4.1
    docker-compose:
      version: v2.2.2
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:2.10.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:2.10.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v1.1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.9.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/iptables-manager:v1.9.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/edgeservice:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.3.0-2.319.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/inbound-agent:4.10-2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2ioperator:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2irun:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2i-binary:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-6-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-4-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-36-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-35-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-34-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-27-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd:v2.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd-applicationset:v0.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/dex:v2.30.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:6.2.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.5.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:8.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.8.22
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.13.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.38.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.38
  - registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/scope:1.13.0
  registry:
    auths: {}

Notes on the manifest modifications

  • Enable the harbor and docker-compose components, used later when building a Harbor registry with KubeKey and pushing images to it.
  • The image list in a manifest created by default pulls from docker.io; the prefix has been replaced with registry.cn-beijing.aliyuncs.com/kubesphereio.
  • If the exported artifact needs to include OS dependency files (such as conntrack and chrony), you can set .repository.iso.url in the operatingSystems element to the download URL of the matching dependency ISO, or, as done here, set localPath to the local path of a pre-downloaded ISO and leave the url field empty.
  • You can download the ISO files from github.com/kubesphere/…

Export the artifact

$ export KKZONE=cn

$ ./kk artifact export -m ks-v3.3.0-manifest.yaml -o kubesphere-v3.3.0-artifact.tar.gz

About the artifact

  • An artifact is a tgz package, exported according to the contents of the specified manifest file, that contains the image tarballs and the related binaries.
  • An artifact can be specified in the KubeKey commands for initializing an image registry, creating a cluster, adding nodes, and upgrading a cluster; KubeKey automatically unpacks it and uses the unpacked files directly while executing the command, as sketched below.
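
For reference, a sketch of how the artifact is passed to those commands with -a, using the file names from this article (the create cluster form is the one actually used later in this guide):

$ ./kk init registry -f config-sample.yaml -a kubesphere-v3.3.0-artifact.tar.gz
$ ./kk create cluster -f config-sample.yaml -a kubesphere-v3.3.0-artifact.tar.gz
$ ./kk add nodes -f config-sample.yaml -a kubesphere-v3.3.0-artifact.tar.gz
$ ./kk upgrade -f config-sample.yaml -a kubesphere-v3.3.0-artifact.tar.gz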

Export KubeKey

$ tar zcvf kubekey-v2.2.1.tar.gz kk kubekey-v2.2.1-linux-amd64.tar.gz

K8s server initialization

This section performs the initial configuration of the K8s servers in the offline environment.

Ansible hosts configuration

[k8s]
ks-k8s-master-0 ansible_ssh_host=192.168.9.91  host_name=ks-k8s-master-0
ks-k8s-master-1 ansible_ssh_host=192.168.9.92  host_name=ks-k8s-master-1
ks-k8s-master-2 ansible_ssh_host=192.168.9.93  host_name=ks-k8s-master-2
[es]
es-node-0 ansible_ssh_host=192.168.9.95 host_name=es-node-0
es-node-1 ansible_ssh_host=192.168.9.96 host_name=es-node-1
es-node-2 ansible_ssh_host=192.168.9.97 host_name=es-node-2
harbor ansible_ssh_host=192.168.9.89 host_name=harbor
[servers:children]
k8s
es
[servers:vars]
ansible_connection=paramiko
ansible_ssh_user=root
ansible_ssh_pass=F@ywwpTj4bJtYwzpwCqD
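
One small Ansible tip (an optional aside, not part of the original flow): the SSH password above sits in the inventory in plain text. ansible-vault's encrypt_string subcommand can produce an encrypted value to keep in a group_vars YAML file instead (vaulted values cannot live in an INI inventory):

# Generates a !vault-tagged block for ansible_ssh_pass; paste it into
# e.g. inventories/dev/group_vars/servers.yml and remove the plaintext entry
$ ansible-vault encrypt_string 'F@ywwpTj4bJtYwzpwCqD' --name ansible_ssh_pass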

Verify server connectivity

# Use ansible to verify connectivity to all servers

$ cd /data/ansible/ansible-zdevops/inventories/dev/
$ source /opt/ansible2.8/bin/activate
$ ansible -m ping all

Initialize the server configuration

# Use ansible-playbook to apply the initial server configuration

$ ansible-playbook ../../playbooks/init-base.yaml -l k8s

Mount the data disks

  • Mount the first data disk
# Use ansible-playbook to initialize the host data disks
# Note: -e data_disk_path="/data" sets the mount point, used to store Docker container data

$ ansible-playbook ../../playbooks/init-disk.yaml -e data_disk_path="/data" -l k8s
  • Verify the mount
# Use ansible to verify that the data disk has been formatted and mounted
$ ansible harbor -m shell -a 'df -h'

# Use ansible to verify that the data disk is set up to mount automatically

$ ansible harbor -m shell -a 'tail -1  /etc/fstab'

Install the K8s system dependencies

# Use ansible-playbook to install the kubernetes system dependency packages
# The playbook includes a switch for enabling GlusterFS storage, on by default; set the parameter to false if it is not needed

$ ansible-playbook ../../playbooks/deploy-kubesphere.yaml -e k8s_storage_glusterfs=false -l k8s

Install the cluster offline

Transfer the offline deployment resources to the deployment node

Transfer the following offline deployment resources to the /data/kubekey directory on the deployment node (usually the first master node); see the transfer sketch after this list.

  • KubeKey: kubekey-v2.2.1.tar.gz
  • Artifact: kubesphere-v3.3.0-artifact.tar.gz
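
A minimal sketch of the transfer, assuming SSH access from the build server to the first master node (ks-k8s-master-0, 192.168.9.91):

# Make sure the target directory exists, then copy both files
$ ssh root@192.168.9.91 "mkdir -p /data/kubekey"
$ scp kubekey-v2.2.1.tar.gz kubesphere-v3.3.0-artifact.tar.gz root@192.168.9.91:/data/kubekey/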

Then unpack KubeKey:

$ cd /data/kubekey
$ tar xvf kubekey-v2.2.1.tar.gz

Create the offline cluster configuration file

  • Create the configuration file
$ ./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.24.1 -f config-sample.yaml
  • Modify the configuration file
$ vim config-sample.yaml

Notes on the modifications

  • Adjust the node information to match the actual offline environment.
  • Add the registry details for your environment.
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: ks-k8s-master-0, address: 192.168.9.91, internalAddress: 192.168.9.91, user: root, password: "F@ywwpTj4bJtYwzpwCqD"}
  - {name: ks-k8s-master-1, address: 192.168.9.92, internalAddress: 192.168.9.92, user: root, password: "F@ywwpTj4bJtYwzpwCqD"}
  - {name: ks-k8s-master-2, address: 192.168.9.93, internalAddress: 192.168.9.93, user: root, password: "F@ywwpTj4bJtYwzpwCqD"}
  roleGroups:
    etcd:
    - ks-k8s-master-0
    - ks-k8s-master-1
    - ks-k8s-master-2
    control-plane: 
    - ks-k8s-master-0
    - ks-k8s-master-1
    - ks-k8s-master-2
    worker:
    - ks-k8s-master-0
    - ks-k8s-master-1
    - ks-k8s-master-2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    internalLoadbalancer: haproxy
    domain: lb.zdevops.com.cn
    address: ""
    port: 6443
  kubernetes:
    version: v1.24.1
    clusterName: zdevops.com.cn
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: "harbor"
    auths:
      "registry.zdevops.com.cn":
         username: admin
         password: Harbor12345
    privateRegistry: "registry.zdevops.com.cn"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []
# The remaining content is left unchanged and omitted here

Create projects in Harbor

This article uses a Harbor registry deployed in advance to host the images; for the deployment process, see my earlier post 基于 KubeSphere 玩转 K8s - Harbor 安装手记 (Harbor installation notes).

Alternatively, you can have the kk tool deploy Harbor automatically; see the official offline deployment documentation for details.

  • Download the project-creation script template
$ curl -O https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/create_project_harbor.sh
  • Modify the script for your environment
#!/usr/bin/env bash

# Harbor registry address
url="https://registry.zdevops.com.cn"

# Harbor user
user="admin"

# Harbor user password
passwd="Harbor12345"

# Projects to create. Normally only kubesphereio is needed; two extras are listed here to show that the variable is extensible.
harbor_projects=(library
    kubesphereio
    kubesphere
)
for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}"
done
  • Run the script to create the projects
$ sh create_project_harbor.sh
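
To confirm the projects exist, the Harbor v2 projects API can be queried with the same credentials (a quick check; add -k to curl if the registry uses a self-signed certificate):

# Should return a JSON array containing the kubesphereio project
$ curl -s -u "admin:Harbor12345" "https://registry.zdevops.com.cn/api/v2.0/projects?name=kubesphereio"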

Push the offline images to the Harbor registry

Push the previously prepared offline images to the Harbor registry. This step is optional, since the images are pushed again when the cluster is created, but doing it first is recommended to improve the first-time success rate of the deployment.

$ ./kk artifact image push -f config-sample.yaml -a  kubesphere-v3.3.0-artifact.tar.gz

Create the cluster and install the OS dependencies

$ ./kk create cluster -f config-sample.yaml -a kubesphere-v3.3.0-artifact.tar.gz --with-packages

Parameter description

  • config-sample.yaml: the cluster configuration file for the offline environment.
  • kubesphere-v3.3.0-artifact.tar.gz: the exported artifact tarball containing the images and binaries.
  • --with-packages: required if the OS dependencies are to be installed.

Check the cluster status

$ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

When the installation completes successfully, you will see output like the following:

**************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Console: http://192.168.9.91:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.
#####################################################
https://kubesphere.io             2022-06-30 14:30:19
#####################################################
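
Besides tailing the installer log, two standard kubectl checks confirm the cluster itself is healthy (run on any control-plane node):

# Every node should report Ready
$ kubectl get nodes -o wide

# Lists anything that is not yet Running or Completed
$ kubectl get pods -A | grep -vE 'Running|Completed'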

Log in to the web console

Access the KubeSphere web console at http://{IP}:30880 with the default account and password admin/P@88w0rd, then continue with further configuration there.


Summary

Thank you for reading this far. By now you should have picked up the following skills:

  • Understanding the concepts of manifest and artifact
  • Knowing where to obtain manifest and image resources
  • Writing a manifest by hand
  • Building an artifact from a manifest
  • Deploying KubeSphere and Kubernetes offline
  • Creating Harbor registry projects automatically
  • A few handy Ansible techniques

So far we have completed a minimal deployment of KubeSphere and a K8s cluster. This is only the beginning, though; there is plenty more configuration and many usage techniques still to cover. Stay tuned…
