
Setting Up a Kubernetes Cluster with kubeadm

小明 · 2024-08-08 12:01:02

1. Prerequisites (servers: 192.168.255.11, 192.168.255.12, 192.168.255.13)

Server name   IP address       Network adapter   Description
Host machine  127.0.0.1        (host)            Not part of the cluster environment
k8s-node-01   192.168.255.11   NAT mode          CentOS 7, Kubernetes v1.25.10
k8s-node-02   192.168.255.12   NAT mode          CentOS 7, Kubernetes v1.25.10
k8s-node-03   192.168.255.13   NAT mode          CentOS 7, Kubernetes v1.25.10

# Time synchronization
sudo yum install -y ntpdate
sudo ntpdate time.windows.com

# Set the hostname (skip if already configured)
sudo hostnamectl set-hostname k8s-node-01

Write the configured hostnames into the /etc/hosts file on each server. For example:

# This server's IP  hostname
192.168.255.11  k8s-node-01
# Node server IP  hostname
192.168.255.12  k8s-node-02
# Node server IP  hostname
192.168.255.13  k8s-node-03
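On a real node these entries would be appended to /etc/hosts with `sudo tee -a`. A minimal sketch of that step, writing to a temporary file here so it runs unprivileged:

```shell
# HOSTS_FILE stands in for /etc/hosts; on a node, append with sudo tee -a.
HOSTS_FILE="$(mktemp)"

cat <<'EOF' >> "$HOSTS_FILE"
192.168.255.11  k8s-node-01
192.168.255.12  k8s-node-02
192.168.255.13  k8s-node-03
EOF

# Quick sanity check: all three node entries are present.
grep -c 'k8s-node-' "$HOSTS_FILE"   # prints 3
```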

1.1. Swap, SELinux, and firewalld

Deploying a Kubernetes cluster requires disabling the swap partition and SELinux and dealing with the firewall, for both stability and security. Disabling swap is mandatory: the kubelet's memory accounting and QoS guarantees assume swap is off, and by default the kubelet refuses to start while swap is enabled; with swap active, performance degrades unpredictably and containers may be killed unexpectedly.

The firewall must also be addressed: cluster components communicate over the network on a set of well-known ports (for example 6443 for the API server and 10250 for the kubelet), and those ports must be reachable between nodes. This guide simply disables firewalld.

SELinux (Security-Enhanced Linux) is a mandatory access control (MAC) mechanism that restricts what processes and users may access. If SELinux is not configured appropriately, it can prevent the container runtime from communicating with the Kubernetes API server, or block containers from accessing their storage volumes.

sudo swapoff -a
# Keep swap disabled across reboots.
sudo sed -i '/swap/s/^/#/' /etc/fstab

sudo setenforce 0
# Keep SELinux permissive across reboots.
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

sudo systemctl disable --now firewalld

# Reboot for all changes to take effect
sudo systemctl reboot
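The sed invocation above comments out every /etc/fstab line that mentions swap. Its effect can be sketched against a temporary copy of a sample fstab (the device paths are illustrative):

```shell
# Sample fstab standing in for /etc/fstab.
FSTAB="$(mktemp)"
cat <<'EOF' > "$FSTAB"
/dev/mapper/centos-root /     xfs   defaults 0 0
/dev/mapper/centos-swap swap  swap  defaults 0 0
EOF

# Same substitution as above: prefix '#' to any line containing "swap".
sed -i '/swap/s/^/#/' "$FSTAB"

grep '^#' "$FSTAB"   # only the swap line is now commented out
```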

1.2. Forward IPv4 and let iptables see bridged traffic

Run the following commands:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
​
sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
​
# Apply the sysctl parameters without rebooting
sudo sysctl --system

Confirm that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward system variables are set to 1 in your sysctl configuration by running:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
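It can also be worth checking the drop-in file itself before relying on it. A small sketch that parses each key out of the file (it reads a local copy here; point CONF at /etc/sysctl.d/k8s.conf on a real node):

```shell
# CONF is a local copy of the drop-in; use /etc/sysctl.d/k8s.conf on a node.
CONF="$(mktemp)"
cat <<'EOF' > "$CONF"
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Extract the value of each key ("key = value" -> field 3 for awk).
for key in net.bridge.bridge-nf-call-iptables \
           net.bridge.bridge-nf-call-ip6tables \
           net.ipv4.ip_forward; do
  val="$(awk -v k="$key" '$1 == k {print $3}' "$CONF")"
  echo "$key = $val"
done
```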

1.3. Install a container runtime (CRI)

Note: Kubernetes deprecated its Docker Engine integration (dockershim) in v1.20 and removed it in v1.24, so this guide uses containerd as the container runtime.

containerd can be installed on the CentOS 7 virtual machines as follows:

1.3.1. Install containerd

From the containerd GitHub releases page, download containerd-<VERSION>-<OS>-<ARCH>.tar.gz, verify its sha256sum, and extract it into /usr/local:

sudo wget https://github.com/containerd/containerd/releases/download/v1.6.21/containerd-1.6.21-linux-amd64.tar.gz
sudo tar Cxzvf /usr/local containerd-1.6.21-linux-amd64.tar.gz
# Console output
bin/
bin/containerd-shim-runc-v2
bin/containerd-shim
bin/ctr
bin/containerd-shim-runc-v1
bin/containerd
bin/containerd-stress
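The sha256sum check mentioned above works like this: each release also publishes a matching `.sha256sum` file, and `sha256sum -c` verifies the download against it. The mechanics, shown on a local file so nothing needs downloading:

```shell
# Stand-in for the downloaded tarball.
f="$(mktemp)"
echo "demo payload" > "$f"

# Record its checksum, as the published .sha256sum file would.
sha256sum "$f" > "$f.sha256sum"

# Verification: prints "<path>: OK" and exits 0 when the file is intact.
sha256sum -c "$f.sha256sum"
```

For the real artifact, the equivalent is `sha256sum -c containerd-1.6.21-linux-amd64.tar.gz.sha256sum`, run in the download directory.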

If containerd is to be managed by systemd, also download the containerd.service unit file and place it in /usr/local/lib/systemd/system/:

sudo wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/

Start containerd:

sudo systemctl daemon-reload
sudo systemctl enable --now containerd

1.3.2. Install runc

From the runc GitHub releases page, download runc.<ARCH>, verify its sha256sum, and install it as /usr/local/sbin/runc:

sudo wget https://github.com/opencontainers/runc/releases/download/v1.1.6/runc.amd64
sudo install -m 755 runc.amd64 /usr/local/sbin/runc

1.3.3. Install the CNI plugins

From the CNI plugins GitHub releases page, download cni-plugins-<OS>-<ARCH>-<VERSION>.tgz, verify its sha256sum, and extract it into /opt/cni/bin:

sudo wget https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.2.0.tgz
# Console output
./
./macvlan
./static
./vlan
./portmap
./host-local
./vrf
./bridge
./tuning
./firewall
./host-device
./sbr
./loopback
./dhcp
./ptp
./ipvlan
./bandwidth

1.3.4. The containerd configuration file

Generate the default configuration with containerd config default > /etc/containerd/config.toml (run sudo mkdir -p /etc/containerd first if the directory does not exist).

# The complete config.toml follows; adjust the settings below to your needs.
​
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2
​
[cgroup]
  path = ""
​
[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0
​
[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_ca = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0
​
[metrics]
  address = ""
  grpc_histogram = false
​
[plugins]
​
  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"
​
  [plugins."io.containerd.grpc.v1.cri"]
    device_ownership_from_security_context = false
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    enable_unprivileged_icmp = false
    enable_unprivileged_ports = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    # Use the Aliyun (China) mirror of the pause image
    sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""
​
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      ip_pref = ""
      max_conf_num = 1
​
    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      ignore_rdt_not_enabled_errors = false
      no_pivot = false
      snapshotter = "overlayfs"
​
      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""
​
        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
​
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
​
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          cni_conf_dir = ""
          cni_max_conf_num = 0
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_path = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"
​
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = false
​
      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""
​
        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
​
    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"
​
    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""
​
      [plugins."io.containerd.grpc.v1.cri".registry.auths]
​
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
​
      [plugins."io.containerd.grpc.v1.cri".registry.headers]
​
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        # Configure a registry mirror (image accelerator) for China
          endpoint = ["<mirror-accelerator-address>", "https://registry-1.docker.io"]
​
    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""
​
  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"
​
  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"
​
  [plugins."io.containerd.internal.v1.tracing"]
    sampling_ratio = 1.0
    service_name = "containerd"
​
  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"
​
  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false
​
  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false
​
  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
    sched_core = false
​
  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]
​
  [plugins."io.containerd.service.v1.tasks-service"]
    rdt_config_file = ""
​
  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""
​
  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""
​
  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    discard_blocks = false
    fs_options = ""
    fs_type = ""
    pool_name = ""
    root_path = ""
​
  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""
​
  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""
    upperdir_label = false
​
  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""
​
  [plugins."io.containerd.tracing.processor.v1.otlp"]
    endpoint = ""
    insecure = false
    protocol = ""
​
[proxy_plugins]
​
[stream_processors]
​
  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"
​
  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"
​
[timeouts]
  "io.containerd.timeout.bolt.open" = "0s"
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"
​
[ttrpc]
  address = ""
  gid = 0
  uid = 0
​

Whenever /etc/containerd/config.toml changes, restart the containerd service:

sudo systemctl restart containerd

1. If sandbox_image in the configuration above is left at its default, the pause image may not be pullable from mainland China and the cluster deployment can fail.

2. That completes the containerd installation; the official containerd getting-started documentation covers the same steps.
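Rather than maintaining the whole file by hand, the one change that matters here can be patched into the generated default config with sed. A sketch against a sample fragment (the default pause image shown is illustrative; on a real node, target /etc/containerd/config.toml with sudo and restart containerd afterwards):

```shell
# Sample fragment standing in for /etc/containerd/config.toml.
CFG="$(mktemp)"
echo '    sandbox_image = "registry.k8s.io/pause:3.6"' > "$CFG"

# Point sandbox_image at the Aliyun mirror, whatever its previous value.
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' "$CFG"

grep sandbox_image "$CFG"
```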

2. Deploy the Kubernetes cluster

# Run this step on every server.
# Add the Kubernetes YUM repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes] 
name=Kubernetes 
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/ 
enabled=1 
gpgcheck=1 
repo_gpgcheck=1 
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg 
EOF
​
# Install kubeadm, kubectl, and kubelet
sudo yum install -y kubeadm-1.25.10 kubectl-1.25.10 kubelet-1.25.10
​
# Enable and start kubelet
sudo systemctl enable --now kubelet

2.1. Initialize the cluster on the master node (server: 192.168.255.11)

Print the default init configuration and save it as kube-init.yaml with kubeadm config print init-defaults > kube-init.yaml.

# The kubeadm init defaults are shown below
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.255.11 
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-node-01
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
# Image registry mirror for China
imageRepository: registry.aliyuncs.com/google_containers 
kind: ClusterConfiguration
# Pin the Kubernetes version
kubernetesVersion: 1.25.10
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# CentOS 7 runs cgroup v1 and this guide leaves containerd on the cgroupfs
# driver (SystemdCgroup = false above), so the kubelet must match. Since v1.22
# kubeadm defaults the kubelet to the systemd driver, so this setting is only
# needed when the runtime uses cgroupfs.
cgroupDriver: cgroupfs
# Create the cluster
sudo kubeadm init --config kube-init.yaml

# When "Your Kubernetes control-plane has initialized successfully!" appears,
# the control plane is up. Then run the commands printed on the console:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

2.2. Join node servers to the cluster (servers: 192.168.255.12, 192.168.255.13)

# In the output from the successful master-node init, find the join command
# below and run it on each node server to add the node to the cluster.
kubeadm join 192.168.255.11:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:802e00c3f7ff35583cf621c33b42780d1ba34c0c864cd5660f4f0b884346eec3
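If the printed join command has been lost, running `kubeadm token create --print-join-command` on the master regenerates it. The --discovery-token-ca-cert-hash value itself is just the SHA-256 of the cluster CA's public key in DER form; the openssl pipeline below shows the computation, using a throwaway self-signed certificate in place of /etc/kubernetes/pki/ca.crt so it can run anywhere:

```shell
# Throwaway self-signed cert standing in for /etc/kubernetes/pki/ca.crt.
CA_CRT="$(mktemp)"; CA_KEY="$(mktemp)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
        -keyout "$CA_KEY" -out "$CA_CRT" 2>/dev/null

# Extract the public key, DER-encode it, and hash it.
HASH="$(openssl x509 -pubkey -noout -in "$CA_CRT" \
        | openssl pkey -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | awk '{print $NF}')"

echo "sha256:$HASH"   # the form expected by --discovery-token-ca-cert-hash
```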

2.3. Install the CNI plugin (server: 192.168.255.11)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# The following command shows the current state of the Kubernetes cluster
kubectl get nodes
# Console output
NAME          STATUS   ROLES           AGE     VERSION
k8s-node-01   Ready    control-plane   3h31m   v1.25.10
k8s-node-02   Ready    <none>          141m    v1.25.10
k8s-node-03   Ready    <none>          141m    v1.25.10
This completes the kubeadm-based Kubernetes cluster setup. The content throughout follows the official Kubernetes deployment documentation.