Preface

These days Kubernetes (k8s) is close to a must-learn skill for every backend developer. To learn it at the lowest possible cost, I run three virtual machine nodes on my own computer and install Kubernetes 1.26 on them, arranged as one master and two worker nodes, which is enough for basic learning. The rest of this article walks through the whole installation step by step.

Installing the virtual machine image

The virtualization software on my machine is VMware Fusion 13.0.0, and the guest OS is Ubuntu 22.04; the image download link is: ubuntu-22.04.2-live-server-amd64.iso. Tip: you can copy the download link into a download manager such as Xunlei (Thunder), which is usually faster than downloading in the browser.

  • Network setup

    After the virtualization software is installed, create a dedicated network for the VM instances. The point is to keep all three nodes on the same subnet so they can reach one another, which avoids network problems later on;

    After the network is created, attach it to each node during the node installation.

    The three nodes end up with the following IPs:

    192.168.56.135 k8s.master1

    192.168.56.134 k8s.node1

    192.168.56.133 k8s.node2
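
    These lines are in /etc/hosts format; optionally, a sketch that appends them on every node so the hostnames resolve (assuming the IPs above match your actual assignments):

    cat <<EOF | sudo tee -a /etc/hosts
    192.168.56.135 k8s.master1
    192.168.56.134 k8s.node1
    192.168.56.133 k8s.node2
    EOF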

  • CPU, memory, and disk configuration

    During node installation, adjust the CPU, memory, and disk settings:

    Configure 2 CPU cores and 4 GB of memory.
    Configure a 40 GB disk.

Adjusting system settings

After Ubuntu 22.04 is installed, perform the following checks and changes:

  • Check the network

    After each node comes up, use the ping command to verify the following (see the sketch after this list):

    1. baidu.com is reachable;

    2. the host machine is reachable;

    3. the other nodes on the same subnet are reachable;
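
    A minimal sketch of those three checks, run on each node (assuming the host is reachable at 192.168.56.1 on this host-only network; adjust the peer IPs to your own assignments):

    for target in baidu.com 192.168.56.1 192.168.56.134 192.168.56.133; do
      ping -c 3 "$target" || echo "FAILED: $target"
    done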

  • Check the time zone

    If the time zone is wrong, it can be changed with the following command:

    sudo tzselect
    

    Follow the interactive prompts to make a selection;
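
    Note that tzselect only prints the TZ value to use; it does not change the system-wide setting. A persistent alternative, assuming the Asia/Shanghai zone used throughout this setup, is timedatectl:

    sudo timedatectl set-timezone Asia/Shanghai
    timedatectl    # verify that "Time zone" now shows Asia/Shanghai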

  • Configure a domestic apt mirror for Ubuntu

    Since we will be installing k8s on this Ubuntu 22.04 system, configure a mirror hosted inside China to avoid firewall trouble;

    • Back up the default sources:

      sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
      sudo rm -rf /etc/apt/sources.list
      
    • Configure the domestic mirror:

      sudo vi /etc/apt/sources.list
      

      The content is as follows:

      deb http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse
      deb-src http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse
      deb http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse
      deb-src http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse
      deb http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse
      deb-src http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse
      deb http://mirrors.aliyun.com/ubuntu/ jammy-proposed main restricted universe multiverse
      deb-src http://mirrors.aliyun.com/ubuntu/ jammy-proposed main restricted universe multiverse
      deb http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse
      deb-src http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse
      
    • Update

      After editing, run the following commands for the changes to take effect:

      sudo apt-get update
      sudo apt-get upgrade
      
  • Disable SELinux

    Ubuntu does not ship with this module by default; on CentOS, SELinux does need to be disabled;

  • Disable swap

    To disable it temporarily:

    sudo swapoff -a
    

    To disable it permanently:

    sudo vi /etc/fstab
    

    Comment out the last line and reboot the system for it to take effect:

    #/swap.img    none   swap   sw    0    0
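
    Alternatively, a non-interactive sketch that comments out the entry and verifies the result (assuming the swap entry is the /swap.img line shown above):

    sudo sed -i 's|^/swap.img|#/swap.img|' /etc/fstab
    sudo swapoff -a
    free -h    # the Swap row should now show 0B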
    
  • Adjust kernel parameters:

    sudo tee /etc/modules-load.d/containerd.conf <<EOF
    overlay
    br_netfilter
    EOF

    sudo modprobe overlay
    sudo modprobe br_netfilter

    sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    

    Run the following command for the settings above to take effect:

    sudo sysctl --system
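
    Optionally verify that the modules are loaded and the sysctl values are active; a quick sketch:

    lsmod | grep -E 'overlay|br_netfilter'
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward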
    

Installing Docker Engine

The official documentation is here: Install Docker Engine on Ubuntu;

  • Uninstall old versions

    sudo apt-get remove docker docker-engine docker.io containerd runc
    
  • Update apt

    sudo apt-get update
    sudo apt-get install \
        ca-certificates \
        curl \
        gnupg \
        lsb-release
    
  • Add Docker's official GPG key

    sudo mkdir -m 0755 -p /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    

    This step may print a warning, which does no harm;

  • Set up the repository

    echo \
     "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
     $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    
  • Update the apt index

    sudo apt-get update
    
  • Install Docker Engine

    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
    
  • Configure an Aliyun registry mirror

    This requires logging in to Aliyun: cr.console.aliyun.com/cn-hangzhou…

    sudo mkdir -p /etc/docker
    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
     "registry-mirrors": ["阿里云镜像源链接"]
    }
    EOF
    

    If you would rather not log in, you can use the University of Science and Technology of China mirror instead: docker.mirrors.ustc.edu.cn

  • Enable docker to start on boot

    sudo systemctl enable docker.service
    
  • Restart Docker Engine

    sudo systemctl restart docker.service
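
    To confirm that Docker is healthy and the mirror was picked up, a quick sketch (the URL shown will be whichever mirror you configured):

    sudo docker info | grep -A 1 'Registry Mirrors'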
    

Adjusting the containerd configuration

Adjusting this configuration is the key step; otherwise firewall problems will derail the k8s installation, for example kubeadm init failing, or nodes stuck in the NotReady state after kubeadm join;

  • Back up the default configuration

    sudo mv /etc/containerd/config.toml /etc/containerd/config.toml.bak
    
  • Edit the configuration

    sudo vi /etc/containerd/config.toml
    

    The configuration content is below; compared with the containerd defaults, the two settings that matter are sandbox_image, pointed at a domestic mirror of the pause image, and SystemdCgroup, set to true:

    disabled_plugins = []
    imports = []
    oom_score = 0
    plugin_dir = ""
    required_plugins = []
    root = "/var/lib/containerd"
    state = "/run/containerd"
    temp = ""
    version = 2

    [cgroup]
      path = ""

    [debug]
      address = ""
      format = ""
      gid = 0
      level = ""
      uid = 0

    [grpc]
      address = "/run/containerd/containerd.sock"
      gid = 0
      max_recv_message_size = 16777216
      max_send_message_size = 16777216
      tcp_address = ""
      tcp_tls_ca = ""
      tcp_tls_cert = ""
      tcp_tls_key = ""
      uid = 0

    [metrics]
      address = ""
      grpc_histogram = false

    [plugins]

      [plugins."io.containerd.gc.v1.scheduler"]
        deletion_threshold = 0
        mutation_threshold = 100
        pause_threshold = 0.02
        schedule_delay = "0s"
        startup_delay = "100ms"

      [plugins."io.containerd.grpc.v1.cri"]
        device_ownership_from_security_context = false
        disable_apparmor = false
        disable_cgroup = false
        disable_hugetlb_controller = true
        disable_proc_mount = false
        disable_tcp_service = true
        enable_selinux = false
        enable_tls_streaming = false
        enable_unprivileged_icmp = false
        enable_unprivileged_ports = false
        ignore_image_defined_volumes = false
        max_concurrent_downloads = 3
        max_container_log_line_size = 16384
        netns_mounts_under_state_dir = false
        restrict_oom_score_adj = false
        sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
        selinux_category_range = 1024
        stats_collect_period = 10
        stream_idle_timeout = "4h0m0s"
        stream_server_address = "127.0.0.1"
        stream_server_port = "0"
        systemd_cgroup = false
        tolerate_missing_hugetlb_controller = true
        unset_seccomp_profile = ""

        [plugins."io.containerd.grpc.v1.cri".cni]
          bin_dir = "/opt/cni/bin"
          conf_dir = "/etc/cni/net.d"
          conf_template = ""
          ip_pref = ""
          max_conf_num = 1

        [plugins."io.containerd.grpc.v1.cri".containerd]
          default_runtime_name = "runc"
          disable_snapshot_annotations = true
          discard_unpacked_layers = false
          ignore_rdt_not_enabled_errors = false
          no_pivot = false
          snapshotter = "overlayfs"

          [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
            base_runtime_spec = ""
            cni_conf_dir = ""
            cni_max_conf_num = 0
            container_annotations = []
            pod_annotations = []
            privileged_without_host_devices = false
            runtime_engine = ""
            runtime_path = ""
            runtime_root = ""
            runtime_type = ""

            [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

            [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
              base_runtime_spec = ""
              cni_conf_dir = ""
              cni_max_conf_num = 0
              container_annotations = []
              pod_annotations = []
              privileged_without_host_devices = false
              runtime_engine = ""
              runtime_path = ""
              runtime_root = ""
              runtime_type = "io.containerd.runc.v2"

              [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
                BinaryName = ""
                CriuImagePath = ""
                CriuPath = ""
                CriuWorkPath = ""
                IoGid = 0
                IoUid = 0
                NoNewKeyring = false
                NoPivotRoot = false
                Root = ""
                ShimCgroup = ""
                SystemdCgroup = true

          [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
            base_runtime_spec = ""
            cni_conf_dir = ""
            cni_max_conf_num = 0
            container_annotations = []
            pod_annotations = []
            privileged_without_host_devices = false
            runtime_engine = ""
            runtime_path = ""
            runtime_root = ""
            runtime_type = ""

            [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

        [plugins."io.containerd.grpc.v1.cri".image_decryption]
          key_model = "node"

        [plugins."io.containerd.grpc.v1.cri".registry]
          config_path = ""

          [plugins."io.containerd.grpc.v1.cri".registry.auths]

          [plugins."io.containerd.grpc.v1.cri".registry.configs]

          [plugins."io.containerd.grpc.v1.cri".registry.headers]

          [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

        [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
          tls_cert_file = ""
          tls_key_file = ""

      [plugins."io.containerd.internal.v1.opt"]
        path = "/opt/containerd"

      [plugins."io.containerd.internal.v1.restart"]
        interval = "10s"

      [plugins."io.containerd.internal.v1.tracing"]
        sampling_ratio = 1.0
        service_name = "containerd"

      [plugins."io.containerd.metadata.v1.bolt"]
        content_sharing_policy = "shared"

      [plugins."io.containerd.monitor.v1.cgroups"]
        no_prometheus = false

      [plugins."io.containerd.runtime.v1.linux"]
        no_shim = false
        runtime = "runc"
        runtime_root = ""
        shim = "containerd-shim"
        shim_debug = false

      [plugins."io.containerd.runtime.v2.task"]
        platforms = ["linux/amd64"]
        sched_core = false

      [plugins."io.containerd.service.v1.diff-service"]
        default = ["walking"]

      [plugins."io.containerd.service.v1.tasks-service"]
        rdt_config_file = ""

      [plugins."io.containerd.snapshotter.v1.aufs"]
        root_path = ""

      [plugins."io.containerd.snapshotter.v1.btrfs"]
        root_path = ""

      [plugins."io.containerd.snapshotter.v1.devmapper"]
        async_remove = false
        base_image_size = ""
        discard_blocks = false
        fs_options = ""
        fs_type = ""
        pool_name = ""
        root_path = ""

      [plugins."io.containerd.snapshotter.v1.native"]
        root_path = ""

      [plugins."io.containerd.snapshotter.v1.overlayfs"]
        root_path = ""
        upperdir_label = false

      [plugins."io.containerd.snapshotter.v1.zfs"]
        root_path = ""

      [plugins."io.containerd.tracing.processor.v1.otlp"]
        endpoint = ""
        insecure = false
        protocol = ""

    [proxy_plugins]

    [stream_processors]

      [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
        accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
        args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
        env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
        path = "ctd-decoder"
        returns = "application/vnd.oci.image.layer.v1.tar"

      [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
        accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
        args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
        env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
        path = "ctd-decoder"
        returns = "application/vnd.oci.image.layer.v1.tar+gzip"

    [timeouts]
      "io.containerd.timeout.bolt.open" = "0s"
      "io.containerd.timeout.shim.cleanup" = "5s"
      "io.containerd.timeout.shim.load" = "5s"
      "io.containerd.timeout.shim.shutdown" = "3s"
      "io.containerd.timeout.task.state" = "2s"

    [ttrpc]
      address = ""
      gid = 0
      uid = 0
    
  • Restart the containerd service

    sudo systemctl enable containerd
    sudo systemctl daemon-reload && sudo systemctl restart containerd
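
    A quick sanity check that the new configuration is active; this sketch greps for the two settings that matter most here:

    sudo containerd config dump | grep -E 'SystemdCgroup|sandbox_image'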
    

Installing the k8s components

  • Add the Aliyun apt source for k8s

    curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

    sudo apt-add-repository "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main"

    sudo apt-get update
    
  • Install the k8s components

    sudo apt update
    sudo apt install -y kubelet=1.26.1-00 kubeadm=1.26.1-00 kubectl=1.26.1-00
    sudo apt-mark hold kubelet kubeadm kubectl
    

    You can check the available kubelet versions with the apt-cache madison kubelet command; the same command works for the other components, just substitute the component name;
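
    For example, to confirm that the 1.26.1-00 build is actually available before pinning it:

    apt-cache madison kubelet | grep 1.26.1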

Downloading the k8s images

  • List the required k8s images

    sudo kubeadm config images list --kubernetes-version=v1.26.1
    
  • Pull the images

    sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.26.1
    sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.26.1
    sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.26.1
    sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.26.1
    sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
    sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
    sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3
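
    Note that the cluster's container runtime here is containerd, not Docker, so images pulled with docker pull land in Docker's own store and are not visible to the kubelet; a sketch that pre-pulls the images straight through containerd via kubeadm instead:

    sudo kubeadm config images pull \
      --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
      --kubernetes-version v1.26.1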
    

Initializing the master

  • Generate the default kubeadm configuration file

    sudo kubeadm config print init-defaults > kubeadm.yaml
    

    Edit the default configuration:

    sudo vi kubeadm.yaml
    

    Five changes in total:

    1. Set localAPIEndpoint.advertiseAddress to the master's IP;

    2. Set nodeRegistration.name to the current node's name;

    3. Set imageRepository to the domestic mirror: registry.cn-hangzhou.aliyuncs.com/google_containers

    4. Added networking.podSubnet; this range must not clash with networking.serviceSubnet, nor with the node network 192.168.56.0/24, so I used 192.168.66.0/24;

    5. Set kubernetesVersion to 1.26.1 to match the installed components and the images pulled earlier.

    The modified file looks like this:

    apiVersion: kubeadm.k8s.io/v1beta3
    bootstrapTokens:
    - groups:
      - system:bootstrappers:kubeadm:default-node-token
      token: abcdef.0123456789abcdef
      ttl: 24h0m0s
      usages:
      - signing
      - authentication
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 192.168.56.135
      bindPort: 6443
    nodeRegistration:
      criSocket: unix:///var/run/containerd/containerd.sock
      imagePullPolicy: IfNotPresent
      name: k8s-master1
      taints: null
    ---
    apiServer:
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns: {}
    etcd:
      local:
       dataDir: /var/lib/etcd
    imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
    kind: ClusterConfiguration
    kubernetesVersion: 1.26.1
    networking:
      dnsDomain: cluster.local
      serviceSubnet: 10.96.0.0/12
      podSubnet: 192.168.66.0/24
    scheduler: {}
    
  • Run the initialization

    sudo kubeadm init --config kubeadm.yaml
    

    If anything goes wrong during init, use journalctl -u kubelet to inspect the error logs; after a failure, reset with the command below before retrying, otherwise the next init will fail with port conflicts:

    sudo kubeadm reset
    

    After a successful init, follow the printed instructions and run:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    Then, switching to the root user, run:

    export KUBECONFIG=/etc/kubernetes/admin.conf
    

    Do not rush to kubeadm join the other nodes yet; as root you can already see the node list with:

    kubectl get nodes
    

    At this point the node is still in the NotReady state; next we need to install a network plugin;

Installing the calico network plugin on the master

  • Download the calico.yaml manifest

    curl https://docs.tigera.io/archive/v3.25/manifests/calico.yaml -O
    

    If the download fails, you can fetch it in a browser from the link above; this is the manifest for calico 3.25;

    After downloading, one of the settings needs to be changed:

    • Find the following entry:

      - name: CLUSTER_TYPE
        value: "k8s,bgp"
      
    • Add the following right below that entry:

      - name: IP_AUTODETECTION_METHOD
        value: "interface=ens.*"
      
  • Install the plugin

    sudo kubectl apply -f calico.yaml
    

    To uninstall the plugin, use:

    sudo kubectl delete -f calico.yaml
    

Once the calico network plugin is installed, the master node will gradually move to the Ready state; if it stays NotReady, try rebooting the master node;
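
To watch the transition instead of polling by hand, a sketch:

kubectl get pods -n kube-system -w    # the calico and coredns pods should all reach Running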

Joining the two worker nodes

After kubeadm init succeeds on the master, it prints a kubeadm join command for adding worker nodes. Once the calico network plugin is installed on the master, run that kubeadm join command on each of the two worker nodes:

sudo kubeadm join --token ......

If we lose the command the master generated, we can have the master regenerate the kubeadm join command:

sudo kubeadm token create --print-join-command

After running kubeadm join on the workers, go back to the master and check whether the workers gradually turn Ready:

sudo kubectl get nodes

If a worker stays NotReady for a long time, check the pods:

sudo kubectl get pods -n kube-system

To inspect the details and events of a suspect pod, use the following command (see also the logs sketch below):

kubectl describe pod -n kube-system [pod-name]
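
describe shows a pod's events; for the container logs themselves (same [pod-name] placeholder), a sketch:

kubectl logs -n kube-system [pod-name]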

Once all the worker nodes have turned Ready, we can install the Dashboard;

Installing the Dashboard

  • Prepare the manifest

    If you can get past the firewall, follow the documentation on GitHub: github.com/kubernetes/…; I chose version 2.7.0;

    Otherwise, follow the steps below on the master node:

    sudo vi recommended.yaml
    

    The file content is as follows:

    # Copyright 2017 The Kubernetes Authors.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #   http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    apiVersion: v1
    kind: Namespace
    metadata:
      name: kubernetes-dashboard
    
    ---
    
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
       k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    
    ---
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
       k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      type: NodePort
      ports:
       - port: 443
        targetPort: 8443
      selector:
       k8s-app: kubernetes-dashboard
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
       k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-certs
      namespace: kubernetes-dashboard
    type: Opaque
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
       k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-csrf
      namespace: kubernetes-dashboard
    type: Opaque
    data:
      csrf: ""
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
       k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-key-holder
      namespace: kubernetes-dashboard
    type: Opaque
    
    ---
    
    kind: ConfigMap
    apiVersion: v1
    metadata:
      labels:
       k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-settings
      namespace: kubernetes-dashboard
    
    ---
    
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
       k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    rules:
     # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
      - apiGroups: [""]
       resources: ["secrets"]
       resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
       verbs: ["get", "update", "delete"]
      # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
      - apiGroups: [""]
       resources: ["configmaps"]
       resourceNames: ["kubernetes-dashboard-settings"]
       verbs: ["get", "update"]
      # Allow Dashboard to get metrics.
      - apiGroups: [""]
       resources: ["services"]
       resourceNames: ["heapster", "dashboard-metrics-scraper"]
       verbs: ["proxy"]
      - apiGroups: [""]
       resources: ["services/proxy"]
       resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
       verbs: ["get"]
    
    ---
    
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
       k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
    rules:
     # Allow Metrics Scraper to get metrics from the Metrics server
      - apiGroups: ["metrics.k8s.io"]
       resources: ["pods", "nodes"]
       verbs: ["get", "list", "watch"]
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
       k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubernetes-dashboard
    subjects:
      - kind: ServiceAccount
       name: kubernetes-dashboard
       namespace: kubernetes-dashboard
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: kubernetes-dashboard
    subjects:
      - kind: ServiceAccount
       name: kubernetes-dashboard
       namespace: kubernetes-dashboard
    
    ---
    
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
       k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
       matchLabels:
        k8s-app: kubernetes-dashboard
      template:
       metadata:
        labels:
         k8s-app: kubernetes-dashboard
       spec:
        securityContext:
         seccompProfile:
          type: RuntimeDefault
        containers:
         - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.6.1
          imagePullPolicy: Always
          ports:
           - containerPort: 8443
            protocol: TCP
          args:
           - --auto-generate-certificates
           - --namespace=kubernetes-dashboard
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
          volumeMounts:
           - name: kubernetes-dashboard-certs
            mountPath: /certs
           # Create on-disk volume to store exec logs
           - mountPath: /tmp
            name: tmp-volume
          livenessProbe:
           httpGet:
            scheme: HTTPS
            path: /
            port: 8443
           initialDelaySeconds: 30
           timeoutSeconds: 30
          securityContext:
           allowPrivilegeEscalation: false
           readOnlyRootFilesystem: true
           runAsUser: 1001
           runAsGroup: 2001
        volumes:
         - name: kubernetes-dashboard-certs
          secret:
           secretName: kubernetes-dashboard-certs
         - name: tmp-volume
          emptyDir: {}
        serviceAccountName: kubernetes-dashboard
        nodeSelector:
         "kubernetes.io/os": linux
       # Comment the following tolerations if Dashboard must not be deployed on master
        tolerations:
         - key: node-role.kubernetes.io/master
          effect: NoSchedule
    
    ---
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
       k8s-app: dashboard-metrics-scraper
      name: dashboard-metrics-scraper
      namespace: kubernetes-dashboard
    spec:
      ports:
       - port: 8000
        targetPort: 8000
      selector:
       k8s-app: dashboard-metrics-scraper
    
    ---
    
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
       k8s-app: dashboard-metrics-scraper
      name: dashboard-metrics-scraper
      namespace: kubernetes-dashboard
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
       matchLabels:
        k8s-app: dashboard-metrics-scraper
      template:
       metadata:
        labels:
         k8s-app: dashboard-metrics-scraper
       spec:
        securityContext:
         seccompProfile:
          type: RuntimeDefault
        containers:
         - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
           - containerPort: 8000
            protocol: TCP
          livenessProbe:
           httpGet:
            scheme: HTTP
            path: /
            port: 8000
           initialDelaySeconds: 30
           timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
           name: tmp-volume
          securityContext:
           allowPrivilegeEscalation: false
           readOnlyRootFilesystem: true
           runAsUser: 1001
           runAsGroup: 2001
        serviceAccountName: kubernetes-dashboard
        nodeSelector:
         "kubernetes.io/os": linux
       # Comment the following tolerations if Dashboard must not be deployed on master
        tolerations:
         - key: node-role.kubernetes.io/master
          effect: NoSchedule
        volumes:
         - name: tmp-volume
          emptyDir: {}
    
  • Install

    Install the Dashboard by running:

    sudo kubectl apply -f recommended.yaml
    

    Now wait patiently until kubectl get pods -A shows everything in the Running state:

    kubernetes-dashboard dashboard-metrics-scraper-7bc864c59-tdxdd 1/1 Running 0 5m32s

    kubernetes-dashboard kubernetes-dashboard-6ff574dd47-p55zl 1/1 Running 0 5m32s
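
    You can also watch just the dashboard namespace until both pods reach Running; a sketch:

    kubectl get pods -n kubernetes-dashboard -w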

  • Check the service port

    sudo kubectl get svc -n kubernetes-dashboard
    

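    The NodePort on the kubernetes-dashboard Service is the one to use from the host's browser; a sketch that extracts it directly (the jsonpath follows the Service spec defined above):

    sudo kubectl get svc kubernetes-dashboard -n kubernetes-dashboard \
      -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'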

  • Access the Dashboard from a browser

    Once we know the port from the command above, we can access the Dashboard from the host machine's browser; if Chrome warns that the site is untrusted, click "proceed anyway";

    Choose to sign in with a Token.

  • Generate a Token

    On the master node, create the admin-user with the following manifest:

    sudo vi dash.yaml
    

    The manifest content is as follows:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin-user
      namespace: kubernetes-dashboard
    

    Next, create the admin-user with the following command:

    sudo kubectl apply -f dash.yaml
    

    The output looks like this:

    serviceaccount/admin-user created

    clusterrolebinding.rbac.authorization.k8s.io/admin-user created

    This means the admin-user account was created successfully; next, generate a Token for that user:

    kubectl -n kubernetes-dashboard create token admin-user
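
    By default this token is short-lived (one hour); for a lab setup you can request a longer lifetime with the --duration flag, e.g.:

    kubectl -n kubernetes-dashboard create token admin-user --duration=24h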
    

    Copy the generated Token, paste it into the corresponding input box in the browser, and click the sign-in button.


With that, we have completed the k8s installation on Ubuntu 22.04.
