Company R&D Resource Cluster Environment Setup

Resource List

Type       | Hostname    | IP              | Host IP       | Purpose                    | Notes
VM         | dns1        | 192.168.127.253 | 192.168.1.235 | DNS node 1                 |
VM         | dns2        | 192.168.127.254 | 192.168.1.33  | DNS node 2                 | To be migrated to another host
VM         | ceph-mgr1   | 192.168.126.1   | 192.168.1.38  | Ceph manager node          |
PC         | ceph-mon1   | 192.168.127.80  |               | Ceph monitor node          | Also hosts OSDs
PC         | ceph-mon2   | 192.168.127.81  |               | Ceph monitor node          | Also hosts OSDs
PC         | ceph-mon3   | 192.168.127.82  |               | Ceph monitor node          | Also hosts OSDs
Virtual IP |             | 192.168.127.1   |               | Load-balancer floating IP  |
VM         | k8s-lb1     | 192.168.127.2   | 192.168.1.38  | Load-balancer node 1       |
VM         | k8s-lb2     | 192.168.127.3   | 192.168.1.33  | Load-balancer node 2       | To be migrated to another host
VM         | k8s-master1 | 192.168.127.11  | 192.168.1.38  | Control-plane node         |
VM         | k8s-master2 | 192.168.127.12  | 192.168.1.33  | Control-plane node         |
VM         | k8s-master3 | 192.168.127.13  | 192.168.1.235 | Control-plane node         |
Physical   | k8s-worker1 | 192.168.127.21  |               | Worker node                |
Physical   | k8s-worker2 | 192.168.127.22  |               | Worker node                |
Physical   | k8s-worker3 | 192.168.127.23  |               | Worker node                |

Base Operating System

  • Virtual machines run on VMware ESXi 6.5+

    ESXi license key: 4F6FX-2W197-8ZKZ9-Y31ZM-1C3LZ

  • Both virtual and physical machines use CentOS-7-x86_64-Minimal-2009.iso

    After installation, perform the following steps:

# Raise the open-file limit
cat >> /etc/security/limits.conf << EOF
* soft nofile 1048576
* hard nofile 1048576
EOF
# Tune kernel parameters
cat >> /etc/sysctl.conf << EOF
fs.file-max = 10485760
vm.max_map_count = 262144
kernel.pid_max = 4194303
vm.swappiness = 0
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_mem = 786432 2097152 3145728
net.ipv4.tcp_rmem = 4096 4096 16777216
net.ipv4.tcp_wmem = 4096 4096 16777216
net.ipv4.tcp_max_orphans = 131072
EOF
sysctl -p
# Disable the firewall
systemctl disable firewalld --now
# Disable NetworkManager
systemctl disable NetworkManager --now
# Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# Disable swap
swapoff -a
sed -i '/swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Reclaim the swap LV and grow the root filesystem
lvremove /dev/mapper/centos-swap
lvextend -l +100%FREE /dev/mapper/centos-root
xfs_growfs /dev/mapper/centos-root
sed -i 's/rd.lvm.lv=centos\/swap//' /etc/default/grub
grub2-mkconfig >/etc/grub2.cfg
# Switch to the Aliyun yum mirror
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
yum makecache
# Update system packages
yum update -y
# Install the ELRepo kernel-lt (long-term) kernel; depending on NIC support, kernel-ml may be needed instead
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum --disablerepo=\* --enablerepo=elrepo-kernel repolist
yum --disablerepo=\* --enablerepo=elrepo-kernel install kernel-lt.x86_64 -y
yum remove kernel-tools-libs.x86_64 kernel-tools.x86_64 -y
yum --disablerepo=\* --enablerepo=elrepo-kernel install kernel-lt-tools.x86_64 -y
# Make the new kernel the default boot entry
grub2-set-default 0
# Reboot
reboot
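
After the reboot, a quick sanity check confirms the baseline took effect. This is a minimal sketch, not part of the original procedure; adjust as needed.

# Verify the baseline after reboot
uname -r                            # should report the elrepo kernel-lt version
getenforce                          # expected: Disabled
swapon -s                           # expected: no swap entries
sysctl fs.file-max vm.swappiness    # confirm the tuned values took effect
ulimit -n                           # a fresh login shell should report 1048576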

Distributed File Storage (Ceph)

Using virtual machines (not recommended)

Configure disk passthrough in ESXi; connect to the ESXi host over SSH.

# List disks; entries containing ":" are partitions
ls -lh /vmfs/devices/disks/ | grep -v vml
# Configure passthrough (RDM); adjust the disk name, datastore name and vmdk name accordingly
disk="t10.ATA_____ST500DM0022D1SB10A___________________________________ZA40SAL4"
datastore="datastore1"
vmdkname="PassthruHDD1"
vmkfstools -z "/vmfs/devices/disks/$disk" "/vmfs/volumes/$datastore/$vmdkname.vmdk"

Each OSD node attaches dedicated passthrough disks in addition to its system disk.
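
After attaching the generated .vmdk to the VM as an existing disk, the passthrough disk can be verified from inside the guest. A minimal sketch (device names are examples; smartctl requires the smartmontools package):

lsblk -d -o NAME,SIZE,MODEL,SERIAL
smartctl -i /dev/sdb    # should show the physical drive's identity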

Installation

This is a minimal deployment that uses PCs as storage nodes: MON and OSD share the same PC, and the MGR runs in a virtual machine.

Configure IP, hostname, etc. on every node. The ceph-mgr1 node acts as the deployment (admin) node.

# Node hosts entries
cat >> /etc/hosts << EOF
192.168.126.1 ceph-mgr1
192.168.127.80 ceph-mon1
192.168.127.81 ceph-mon2
192.168.127.82 ceph-mon3
EOF
# Configure passwordless SSH access
### Run these one at a time
ssh-keygen -t rsa
### Run these one at a time
ssh-copy-id ceph-mgr1
### Run these one at a time
for i in {1..3}; do ssh-copy-id ceph-mon$i; done

Configure the Ceph yum repository

cat > /etc/yum.repos.d/ceph.repo << EOF
[ceph]
name=Ceph packages for \$basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/\$basearch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=0
priority=2
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
EOF
# Copy the repo file to the MON nodes
for i in {1..3}; do scp /etc/yum.repos.d/ceph.repo ceph-mon$i:/etc/yum.repos.d/ceph.repo; done
# Install the EPEL repo and Python dependencies on the MON nodes
for i in {1..3}; do ssh ceph-mon$i yum install -y epel-release python-setuptools ; done
# Install ceph-deploy on the admin node
yum install -y epel-release
yum install -y ceph-deploy python-setuptools
# Install Ceph on every node
ceph-deploy install --no-adjust-repos ceph-mgr1 ceph-mon1 ceph-mon2 ceph-mon3

Cluster configuration (run on the admin node)

mkdir /ceph-deploy
cd /ceph-deploy
# Create the cluster
ceph-deploy new ceph-mon1 ceph-mon2 ceph-mon3
# Append configuration
cat >> ceph.conf << EOF
public network  = 192.168.0.0/17
cluster network = 192.168.0.0/17
osd pool default size       = 3
osd pool default min size   = 2
osd pool default pg num     = 256
osd pool default pgp num    = 256
osd pool default crush rule = 0
osd crush chooseleaf type   = 1
max open files              = 131072
ms bind ipv6                = false
[mon]
mon clock drift allowed      = 10
mon clock drift warn backoff = 30
mon osd full ratio           = .95
mon osd nearfull ratio       = .85
mon osd down out interval    = 600
mon osd report timeout       = 300
mon allow pool delete      = true
[osd]
osd recovery max active      = 3    
osd max backfills            = 5
osd max scrubs               = 2
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=1024
osd mount options xfs = rw,noatime,inode64,logbsize=256k,delaylog
filestore max sync interval  = 5
osd op threads               = 2
EOF
# Initialize the monitors and gather keys
ceph-deploy --overwrite-conf mon create-initial
# Push ceph.client.admin.keyring to the admin node and each mon node
ceph-deploy --overwrite-conf admin ceph-mgr1 ceph-mon1 ceph-mon2 ceph-mon3
# Deploy the MGR node
ceph-deploy mgr create ceph-mgr1
# List disks on the storage nodes
ceph-deploy disk list ceph-mon1 ceph-mon2 ceph-mon3
# Create OSDs (mind the passthrough disk device names)
ceph-deploy --overwrite-conf osd create --data /dev/sda ceph-mon1
ceph-deploy --overwrite-conf osd create --data /dev/sdb ceph-mon1
ceph-deploy --overwrite-conf osd create --data /dev/sda ceph-mon2
ceph-deploy --overwrite-conf osd create --data /dev/sdb ceph-mon2
ceph-deploy --overwrite-conf osd create --data /dev/sda ceph-mon3
ceph-deploy --overwrite-conf osd create --data /dev/sdb ceph-mon3
# If OSD creation fails because the disk already has partitions, zap the disk and retry (run as needed);
# if zap fails, delete the partitions manually with fdisk
ceph-deploy disk zap ceph-mon2 /dev/sda
# Check cluster status
ceph -s

Disallow insecure global_id reclaim

ceph config set mon mon_warn_on_insecure_global_id_reclaim true
ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed true
ceph config set mon auth_allow_insecure_global_id_reclaim false

Enable the Dashboard (on the MGR node)

# Set the dashboard bind address and port
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
# The mgr process runs as the ceph user and cannot listen on ports below 1024
ceph config set mgr mgr/dashboard/server_port 8080
ceph config set mgr mgr/dashboard/ssl false
#ceph config set mgr mgr/dashboard/ssl_server_port 8443
# Install the dashboard module
yum install -y ceph-mgr-dashboard
ceph mgr module enable dashboard
# Generate and install a self-signed certificate (optional)
#ceph dashboard create-self-signed-cert
# Check mgr service status
ceph mgr services
# Create the admin account
echo jsecode@123 > dash_admin_pwd
ceph dashboard set-login-credentials admin -i dash_admin_pwd
# Restart the dashboard module
ceph mgr module disable dashboard
ceph mgr module enable dashboard
#systemctl restart ceph-mgr@ceph-mgr1
# Enable the firewall and forward port 80 to 8080
cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_forward = 1
EOF
sysctl -p
systemctl enable firewalld --now
firewall-cmd --add-port=6800-6801/tcp --permanent
firewall-cmd --add-forward-port=port=80:proto=tcp:toport=8080:toaddr=192.168.126.1 --permanent
firewall-cmd --reload

Create an RBD block storage pool

# Create the block pool
ceph osd pool create k8s-rbd-pool 256 256
ceph osd pool application enable k8s-rbd-pool rbd

# Create a 10 GB image for testing
rbd create demo.img -p k8s-rbd-pool --size 10G
rbd ls -p k8s-rbd-pool
# Benchmarks
rados bench -p k8s-rbd-pool 10 write --no-cleanup
rbd bench-write k8s-rbd-pool/demo.img --io-size 512K --io-pattern seq --io-threads 16 --io-total 5G
rbd bench k8s-rbd-pool/demo.img --io-type read --io-size 1M --io-pattern seq --io-threads 16 --io-total 1G
# Clean up
rbd rm demo.img -p k8s-rbd-pool
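
Optionally, an image can also be mapped with the kernel RBD client to verify end-to-end I/O. This is a minimal sketch; on older CentOS 7 kernels some image features usually have to be disabled before mapping, and the /dev/rbd0 device name is an example:

rbd create demo.img -p k8s-rbd-pool --size 10G
rbd feature disable k8s-rbd-pool/demo.img object-map fast-diff deep-flatten
rbd map k8s-rbd-pool/demo.img          # prints the mapped device, e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /mnt
dd if=/dev/zero of=/mnt/testfile bs=1M count=1024 oflag=direct
umount /mnt
rbd unmap /dev/rbd0
rbd rm demo.img -p k8s-rbd-pool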

Enable CephFS (supports ReadWriteMany)

# Deploy the MDS daemons
ceph-deploy mds create ceph-mon1 ceph-mon2 ceph-mon3
# Create the data and metadata pools
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
# Create the filesystem
ceph fs new cephfs cephfs_metadata cephfs_data
# Verify that the filesystem exists
ceph fs ls
# Once the filesystem exists, the MDS daemons can become active
ceph mds stat
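
A quick way to verify the filesystem is to mount it with the kernel CephFS client from any node that has the admin keyring. A minimal sketch (the mount point is arbitrary; for anything beyond a test, a dedicated CephX user is preferable to client.admin):

mkdir -p /mnt/cephfs
key=$(ceph auth get-key client.admin)
mount -t ceph ceph-mon1:6789,ceph-mon2:6789,ceph-mon3:6789:/ /mnt/cephfs -o name=admin,secret=$key
df -h /mnt/cephfs
umount /mnt/cephfs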

Retrieve the admin authentication key

The output will be needed later in Kubernetes.

ceph auth get client.admin
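
For reference, one common way this key ends up being consumed (for example by the ceph-csi RBD plugin imported during the K8s installation below) is a Secret such as the following sketch. The Secret name, namespace and field names here are assumptions and must match whatever the CSI/StorageClass configuration in cluster-config.yaml actually references:

key=$(ceph auth get-key client.admin)
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret        # assumed name
  namespace: kube-system      # assumed namespace
stringData:
  userID: admin
  userKey: ${key}
EOF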

Adding a Node

Configure the new node's network and hostname first, then run the following on the admin node.

# Define the new node's hostname and IP
host=ceph-new
ip=192.168.126.x
# Add the node to /etc/hosts
cat >> /etc/hosts << EOF
$ip $host
EOF
ssh-copy-id $host
cd /ceph-deploy
# Copy the Ceph repo file
scp /etc/yum.repos.d/ceph.repo $host:/etc/yum.repos.d/ceph.repo
# Install the required dependencies
ssh $host yum install -y epel-release python-setuptools
# Install Ceph on the node
ceph-deploy install --no-adjust-repos $host
# Make the new node an MGR node
ceph-deploy mgr create $host
# Make the new node a MON node
# TBD
# Make the new node an OSD node, or add disks to an existing OSD node
ceph-deploy disk zap $host /dev/sdb # wipe the disk
ceph-deploy disk zap $host /dev/sdc
ceph-deploy --overwrite-conf osd create --data /dev/sdb $host
ceph-deploy --overwrite-conf osd create --data /dev/sdc $host
# TBD

Removing a Node

Remove an MGR node

# Run on the MGR node that is being removed
MGR_NAME=ceph-mgrx
sudo systemctl stop ceph-mgr@$MGR_NAME
sudo systemctl disable ceph-mgr@$MGR_NAME
sudo rm -rf /var/lib/ceph/mgr/ceph-$MGR_NAME

Remove an OSD node

# On the OSD node: stop the service
systemctl stop ceph-osd@0
# On a MON node: mark the OSD out and remove it
OSD_ID=osd.0
ceph osd out $OSD_ID
ceph osd crush remove $OSD_ID
ceph osd rm $OSD_ID
ceph auth del $OSD_ID
# On the OSD node
umount /var/lib/ceph/osd/ceph-x

Uninstall

# On the admin node
for i in {1..3}; do ceph-deploy purge ceph-mon$i; ceph-deploy purgedata ceph-mon$i; done
ceph-deploy forgetkeys
# Then on each node
lsblk -f
sgdisk --zap-all /dev/sdb
ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %
rm -rf /dev/ceph-*
rm -rf /dev/mapper/ceph--*
partprobe /dev/sdb
reboot

Replacing a Failed OSD Disk

# Run on the OSD node with the failed disk
# Find the failed osd service
systemctl --state=failed
# Stop the failed osd service
systemctl stop ceph-osd@0

# On a MON node: mark the OSD out and remove it
OSD_ID=osd.0
ceph osd out $OSD_ID
ceph osd crush remove $OSD_ID
ceph osd rm $OSD_ID
ceph auth del $OSD_ID
# On the OSD node
umount /var/lib/ceph/osd/ceph-x
# Remove the failed disk and install the replacement
# Identify the new disk's device name (triple-check this; picking the wrong disk is destructive)
fdisk -l

# Run on the deployment (admin) node
host=ceph-new
disk=/dev/sda
ceph-deploy disk zap $host $disk # wipe the disk; if zap fails, delete the partitions manually with fdisk
# Create the OSD
ceph-deploy --overwrite-conf osd create --data $disk $host

# Check cluster status
ceph -s

Load Balancing

Floating IP address: 192.168.127.1 (per the resource list)

Node 1

# Set the hostname
hostnamectl set-hostname k8s-lb1
cat >> /etc/hosts << EOF
192.168.127.2 k8s-lb1
192.168.127.3 k8s-lb2
EOF
# Install Keepalived and HAProxy
yum install keepalived haproxy psmisc -y
# Configure a static IP
sed -i 's/^BOOTPROTO=dhcp$/BOOTPROTO=static/' /etc/sysconfig/network-scripts/ifcfg-ens192
sed -i 's/^ONBOOT=no$/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-ens192
cat >> /etc/sysconfig/network-scripts/ifcfg-ens192 << EOF
IPADDR=192.168.127.2
PREFIX=17
GATEWAY=192.168.1.1
DNS1=223.5.5.5
DNS2=114.114.114.114
EOF
systemctl restart network

Node 2

# Set the hostname
hostnamectl set-hostname k8s-lb2
cat >> /etc/hosts << EOF
192.168.127.2 k8s-lb1
192.168.127.3 k8s-lb2
EOF
# Install Keepalived and HAProxy
yum install keepalived haproxy psmisc -y
# Configure a static IP
sed -i 's/^BOOTPROTO=dhcp$/BOOTPROTO=static/' /etc/sysconfig/network-scripts/ifcfg-ens192
sed -i 's/^ONBOOT=no$/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-ens192
cat >> /etc/sysconfig/network-scripts/ifcfg-ens192 << EOF
IPADDR=192.168.127.3
PREFIX=17
GATEWAY=192.168.1.1
DNS1=223.5.5.5
DNS2=114.114.114.114
EOF
systemctl restart network

HAProxy configuration (same on both nodes)

cat > /etc/haproxy/haproxy.cfg << EOF
global
    log /dev/log  local0 warning
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

   stats socket /var/lib/haproxy/stats

defaults
  log global
  option  httplog
  option  dontlognull
        timeout connect 5000
        timeout client 50000
        timeout server 50000

frontend kube-apiserver
  bind *:6443
  mode tcp
  option tcplog
  default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server kube-apiserver-1 192.168.127.11:6443 check 
    server kube-apiserver-2 192.168.127.12:6443 check
    server kube-apiserver-3 192.168.127.13:6443 check
EOF
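
Keepalived is installed above but its configuration is not shown in these notes. Below is a minimal sketch for k8s-lb1 that tracks the local HAProxy process and holds the floating IP; the interface name, virtual_router_id, priorities and password are assumptions to adjust, and k8s-lb2 should use state BACKUP with a lower priority.

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
    router_id k8s-lb1
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # psmisc provides killall
    interval 2
    weight 20
}
vrrp_instance VI_1 {
    state MASTER                  # BACKUP on k8s-lb2
    interface ens192
    virtual_router_id 51
    priority 100                  # e.g. 90 on k8s-lb2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-lb-vip
    }
    virtual_ipaddress {
        192.168.127.1/17
    }
    track_script {
        chk_haproxy
    }
}
EOF
systemctl enable haproxy keepalived --now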

K8s Cluster

DNS Servers

Two virtual machines, with IPs 192.168.127.253 and 192.168.127.254 (see the resource list for their ESXi hosts).

Node 1

# Set the hostname
hostnamectl set-hostname dns1
cat >> /etc/hosts << EOF
192.168.127.253 dns1
192.168.127.254 dns2
EOF
# Install dnsmasq
yum install dnsmasq -y
systemctl enable dnsmasq --now
# Configure a static IP
sed -i 's/^BOOTPROTO=dhcp$/BOOTPROTO=static/' /etc/sysconfig/network-scripts/ifcfg-ens192
sed -i 's/^ONBOOT=no$/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-ens192
cat >> /etc/sysconfig/network-scripts/ifcfg-ens192 << EOF
IPADDR=192.168.127.253
PREFIX=17
GATEWAY=192.168.1.1
DNS1=223.5.5.5
DNS2=114.114.114.114
EOF
systemctl restart network

Node 2

# Set the hostname
hostnamectl set-hostname dns2
cat >> /etc/hosts << EOF
192.168.127.253 dns1
192.168.127.254 dns2
EOF
# Install dnsmasq
yum install dnsmasq -y
systemctl enable dnsmasq --now
# Configure a static IP
sed -i 's/^BOOTPROTO=dhcp$/BOOTPROTO=static/' /etc/sysconfig/network-scripts/ifcfg-ens192
sed -i 's/^ONBOOT=no$/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-ens192
cat >> /etc/sysconfig/network-scripts/ifcfg-ens192 << EOF
IPADDR=192.168.127.254
PREFIX=17
GATEWAY=192.168.1.1
DNS1=223.5.5.5
DNS2=114.114.114.114
EOF
systemctl restart network

Passwordless SSH between the two nodes

# Run these one at a time
ssh-keygen -t rsa
# Enter node 2's password when prompted
ssh-copy-id dns2

Configure dnsmasq

# Forward K8s internal domain lookups (cluster.local) to the cluster DNS service
cat > /etc/dnsmasq.d/k8s.conf << EOF
server=/cluster.local/10.233.0.3
EOF
systemctl restart dnsmasq

# Sync the config to dns2
scp /etc/dnsmasq.d/k8s.conf root@dns2:/etc/dnsmasq.d/k8s.conf
ssh root@dns2 systemctl restart dnsmasq

# Verify
nslookup kubernetes.default.svc.cluster.local 192.168.127.253
nslookup kubernetes.default.svc.cluster.local 192.168.127.254
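
For the other machines to actually use these resolvers, their DNS settings have to point at the two dnsmasq hosts. A minimal sketch for a node whose interface config already contains DNS1/DNS2 entries (interface name is an example):

sed -i -e 's/^DNS1=.*/DNS1=192.168.127.253/' -e 's/^DNS2=.*/DNS2=192.168.127.254/' /etc/sysconfig/network-scripts/ifcfg-ens192
systemctl restart network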

Configuration sync

Create a sync script on node 1:

cat > /etc/dnsmasq.d/sync_and_restart.sh << EOF
#!/bin/bash

echo "config@dns1:"
cat /etc/dnsmasq.d/k8s.conf

echo ""
echo "sync config file..."
scp /etc/dnsmasq.d/*.conf dns2:/etc/dnsmasq.d/

echo ""
echo "config@dns2:"
ssh dns2 cat /etc/dnsmasq.d/k8s.conf

echo ""
echo "restaring dnsmasq@dns1..."
systemctl restart dnsmasq
systemctl status dnsmasq

echo ""
echo "restaring dnsmasq@dns2..."
ssh dns2 systemctl restart dnsmasq
ssh dns2 systemctl status dnsmasq

echo ""
echo "Success!"
EOF
chmod +x /etc/dnsmasq.d/sync_and_restart.sh
# After changing the configuration, run this script to sync node 1's config to node 2 and restart dnsmasq on both nodes
/etc/dnsmasq.d/sync_and_restart.sh

Router Configuration

Add a policy route (on H3C routers, policy routes take precedence over static routes):

Protocol: IP
Source IP range: 0.0.0.0-255.255.255.255
Destination IP range: 10.233.0.0-10.233.255.255
Effective time: always
Outgoing interface: none    Next hop: 192.168.1.40
Description: K8s net pool

Physical Machine Upgrades and NIC Drivers

# Configure the IP and hostname first; prefer the gigabit network so the upgrade runs faster
# Make sure the NIC is configured to come up on boot
hostname=k8s-workerX
hostnamectl set-hostname $hostname
# Kernel upgrade (same baseline as the virtual machines)
# Raise the open-file limit
cat >> /etc/security/limits.conf << EOF
* soft nofile 1048576
* hard nofile 1048576
EOF
# Tune kernel parameters
cat >> /etc/sysctl.conf << EOF
fs.file-max = 10485760
vm.max_map_count = 262144
kernel.pid_max = 4194303
vm.swappiness = 0
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_mem = 786432 2097152 3145728
net.ipv4.tcp_rmem = 4096 4096 16777216
net.ipv4.tcp_wmem = 4096 4096 16777216
net.ipv4.tcp_max_orphans = 131072
EOF
sysctl -p
# Disable the firewall
systemctl disable firewalld --now
# Disable NetworkManager
systemctl disable NetworkManager --now
# Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# Disable swap
swapoff -a
sed -i '/swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Reclaim the swap LV space (left commented out on these machines)
#lvremove /dev/mapper/centos-swap
#lvextend -l +100%FREE /dev/mapper/centos-root
#xfs_growfs /dev/mapper/centos-root
#sed -i 's/rd.lvm.lv=centos\/swap//' /etc/default/grub
#grub2-mkconfig >/etc/grub2.cfg
# Switch to the Aliyun yum mirror
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
#curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.163.com/.help/CentOS7-Base-163.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
yum makecache
# Update system packages
yum update -y
# Install the ELRepo kernel-lt kernel; depending on NIC support, kernel-ml may be needed instead
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum --disablerepo=\* --enablerepo=elrepo-kernel repolist
yum --disablerepo=\* --enablerepo=elrepo-kernel install kernel-lt.x86_64 -y
yum remove kernel-tools-libs.x86_64 kernel-tools.x86_64 -y
yum --disablerepo=\* --enablerepo=elrepo-kernel install kernel-lt-tools.x86_64 -y
# Make the new kernel the default boot entry
grub2-set-default 0
# Reboot
reboot

Installing the Cluster Environment

References: the official README, "What is etcd?", and the high-availability cluster setup guide.

Run the installation from any Linux machine (preferably not one of the cluster nodes, to avoid polluting them).

# Download KubeKey (using the China mirror)
echo "export KKZONE=cn" >> ~/.bash_profile
source ~/.bash_profile
curl -sfL https://get-kk.kubesphere.io | sh -
# Generate a sample cluster configuration file
[root@node1 ~]# ./kk create config --with-kubesphere -f cluster-config.yaml
Generate KubeKey config file successfully
# Edit cluster-config.yaml
[root@node1 ~]# vi cluster-config.yaml
# Edit spec.hosts: node information (hostname, IP, SSH credentials)
# Edit spec.roleGroups: role assignment for each node
# Edit spec.controlPlaneEndpoint: enable the high-availability endpoint
# Edit spec.system: clock sync, pre-installed packages, post-install cleanup scripts, etc.
# Edit spec.registry: DockerHub mirror address
# Edit spec.addons: add NFS persistent storage
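
For reference, the controlPlaneEndpoint block would normally point at the HAProxy floating IP from the load-balancing section; a sketch (the domain is KubeKey's default placeholder, and the address must match your actual VIP):

spec:
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "192.168.127.1"
    port: 6443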

# Create the cluster from the config file
[root@node1 ~]# ./kk create cluster -f cluster-config.yaml

# Because of network restrictions, the ceph-csi plugin images may fail to pull; import them locally on every node
docker load -i ceph-csi-image.tar.gz

# ceph-csi-rbd-nodeplugin does not run on control-plane nodes by default, so adjust its tolerations
kubectl edit daemonset/ceph-csi-rbd-nodeplugin -n kube-system

Add the following under spec.template.spec:

spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule

Verify the installation

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

If you see the following output, the high-availability cluster has been created successfully.

#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.1.21:30880
Account: admin
Password: P@88w0rd
NOTES:
1. After you log into the console, please check the
monitoring status of service components in
"Cluster Management". If any service is not
ready, please wait patiently until all components
are up and running.
2. Please change the default password after login.

#####################################################
https://kubesphere.io             2022-10-28 18:21:09
#####################################################

Web console

Console: http://192.168.1.21:30880

Account: admin
Password: P@88w0rd

You will be asked to change the initial password on first login.

Adjust each node's maximum pod count and per-pod PID limit (tune to the node's actual resources):

vi /var/lib/kubelet/config.yaml
# Change:
# maxPods: 110
# podPidsLimit: 10000
# Restart kubelet for the change to take effect; the restart does not affect the running cluster
systemctl restart kubelet

Enable kubectl Command-line Auto-completion

yum install -y bash-completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash >/etc/bash_completion.d/kubectl
exit

Installing Applications

Helm App Store

Example: deploying a MariaDB cluster

  1. Log in to KubeSphere
  2. Workspace -> App Management -> App Repositories
  3. Add a repository: name bitnami, URL https://charts.bitnami.com/bitnami
  4. Project -> Application Workloads -> Apps -> Create -> From App Template -> select the bitnami repository -> search for mariadb -> Install
  5. Adjust the architecture, credentials, persistent volume size, init container, Prometheus settings, etc.
  6. Install
  7. Under Workloads -> StatefulSets, wait until the containers are running, then open a terminal into the pod
  8. Enable remote root access:
$ bash
I have no name!@mariadb-pzp3dr-0:/$ mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 167
Server version: 10.6.11-MariaDB Source distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

# Note: the password must match the one configured earlier, otherwise the health check fails (you would have to update the password stored in the Secret as well)
MariaDB [(none)]> grant all privileges on *.* to root@"%" identified by "password" with grant option;
Query OK, 0 rows affected (0.002 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]>

OpenELB Load Balancer

Install via KubeSphere

# Label the master nodes so OpenELB gets scheduled onto them
kubectl label --overwrite nodes k8s-master1 k8s-master2 k8s-master3 lb.kubesphere.io/v1alpha1=openelb

Create a project named lb-system under system-workspace and deploy the OpenELB app in it (see the official documentation).

During installation, configure openelb-manager to run on the masters:

  nodeSelector:
    # scheduling rule matching the label added above
    lb.kubesphere.io/v1alpha1: openelb

Scale to 3 replicas:

kubectl scale deployment openelb-manager --replicas=3 -n lb-system

Configuration

See the official documentation.

  • EIP

Run on k8s-master1:

cat > eip.yaml << EOF
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
    name: eip-pool
    annotations:
      eip.openelb.kubesphere.io/is-default-eip: "true"
spec:
    address: 192.168.200.1-192.168.200.254
    protocol: bgp
    disable: false
EOF
kubectl apply -f eip.yaml
  • BGP routing

Run on k8s-master1:

cat > default-bgp-conf.yaml << EOF
apiVersion: network.kubesphere.io/v1alpha2
kind: BgpConf
metadata:
  name: default
spec:
  as: 50000
  listenPort: 17900
  routerId: 192.168.127.12 
EOF
kubectl apply -f default-bgp-conf.yaml

cat > bgp-peer-mer8300.yaml << EOF
apiVersion: network.kubesphere.io/v1alpha2
kind: BgpPeer
metadata:
  name: bgppeer-mer8300
spec:
  conf:
    peerAs: 65000
    neighborAddress: 192.168.1.1
EOF
kubectl apply -f bgp-peer-mer8300.yaml
  • Router configuration (H3C MER8300 as an example)
******************************************************************************
* Copyright (c) 2004-2020 New H3C Technologies Co., Ltd. All rights reserved.*
* Without the owner's prior written consent,                                 *
* no decompiling or reverse-engineering shall be allowed.                    *
******************************************************************************

Login: admin
Password:
Your login failures since the last successful login:
 Sun Feb 19 15:17:24 2023
 Sun Feb 19 15:17:37 2023
 Sun Feb 19 15:19:12 2023

Last successfully login time: Sun Feb 19 14:30:44 2023

<H3C>system-view
System View: return to User View with Ctrl+Z.
# Create the BGP instance
[H3C]bgp 65000 instance k8s-elb
# Set the router ID
[H3C-bgp-k8s-elb]router-id 192.168.1.1
# Create BGP peers
[H3C-bgp-k8s-elb]peer 192.168.127.11 as 50000
[H3C-bgp-k8s-elb]peer 192.168.127.12 as 50000
[H3C-bgp-k8s-elb]peer 192.168.127.13 as 50000
[H3C-bgp-k8s-elb]address-family ipv4 unicast
[H3C-bgp-k8s-elb-ipv4]peer 192.168.127.11 enable
[H3C-bgp-k8s-elb-ipv4]peer 192.168.127.12 enable
[H3C-bgp-k8s-elb-ipv4]peer 192.168.127.13 enable
[H3C-bgp-k8s-elb-ipv4]quit
[H3C-bgp-k8s-elb]quit
# Check peer status (redundant setup; only one established session is required)
[H3C]show bgp instance k8s-elb peer ipv4

 BGP local router ID: 221.226.93.118
 Local AS number: 65000
 Total number of peers: 3                 Peers in established state: 3

  * - Dynamically created peer
  Peer                    AS  MsgRcvd  MsgSent OutQ PrefRcv Up/Down  State

  192.168.127.11       50000       10       14    0       1 00:02:53 Established
  192.168.127.12       50000       10       13    0       1 00:03:20 Established
  192.168.127.13       50000        9       14    0       1 00:02:55 Established
# View the BGP routing table (routes for the EIP pool should be visible)
[H3C]show bgp instance k8s-elb routing-table ipv4

 Total number of routes: 3

 BGP local router ID is 221.226.93.118
 Status codes: * - valid, > - best, d - dampened, h - history,
               s - suppressed, S - stale, i - internal, e - external
               Origin: i - IGP, e - EGP, ? - incomplete

     Network            NextHop         MED        LocPrf     PrefVal Path/Ogn

* >e 10.244.0.1/32      192.168.127.23                        0       50000i
*  e                    192.168.127.21                        0       50000i
*  e                    192.168.127.13                        0       50000i
[H3C]

Note: H3C routers have a pitfall here. A policy route that matches the IP protocol over the full address range overrides other routing configuration, which can stop static routes and BGP routes from taking effect on the LAN.

  • Test
cat > test-dep.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bgp-openelb
  annotations:
    kubesphere.io/description: OpenELB test deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: bgp-openelb
  template:
    metadata:
      labels:
        app: bgp-openelb
    spec:
      containers:
        - image: luksa/kubia
          name: kubia
          ports:
            - containerPort: 8080
EOF
cat > test-svc.yaml << EOF
kind: Service
apiVersion: v1
metadata:
  name: bgp-svc
  annotations:
    kubesphere.io/description: OpenELB test service
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: bgp
    eip.openelb.kubesphere.io/v1alpha2: eip-pool
spec:
  selector:
    app: bgp-openelb
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 8080
  externalTrafficPolicy: Cluster
EOF
kubectl apply -f test-dep.yaml
kubectl apply -f test-svc.yaml
# Check the service status; from another machine on the LAN, access the service's EXTERNAL-IP
kubectl get svc/bgp-svc
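
A quick functional check once the service has been assigned an EXTERNAL-IP from eip-pool; the address below is just an example from that pool:

curl http://192.168.200.1      # run from another machine on the LAN; the test pod should answer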

Domain Binding

Reference: AliDNS-Webhook

Install cert-manager

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
   cert-manager jetstack/cert-manager \
   --namespace cert-manager \
   --create-namespace \
   --version v1.11.0 \
   --set installCRDs=true

Note the version compatibility between cert-manager and Kubernetes.

Install the Alibaba Cloud DNS webhook

# Install alidns-webhook to cert-manager namespace. 
wget -O alidns-webhook.yaml https://raw.githubusercontent.com/pragkent/alidns-webhook/master/deploy/bundle.yaml
sed -i 's/yourcompany.com/xxx.com/' alidns-webhook.yaml
kubectl apply -f alidns-webhook.yaml

export base64_access_key=$(echo -n "xxx"|base64)
export base64_secret_key=$(echo -n "xxx"|base64)

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: alidns-secret
  namespace: cert-manager
data:
  access-key: ${base64_access_key}
  secret-key: ${base64_secret_key}
EOF

export email=xxx@xxx.com
export group_name=acme.xxx.com
# Create the Issuer/ClusterIssuer
cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: ${email}
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - dns01:
        webhook:
          groupName: ${group_name}
          solverName: alidns
          config:
            region: ""
            accessKeySecretRef:
              name: alidns-secret
              key: access-key
            secretKeySecretRef:
              name: alidns-secret
              key: secret-key
EOF

Notes

cat << EOFF | kubectl apply -f -
kind: ConfigMap
apiVersion: v1
metadata:
  name: readme
  namespace: cert-manager
  annotations:
    kubesphere.io/description: How to request certificates
data:
  readme: |-
    namespace=kubesphere-system
    domain=dev.enxe.cn
    domain_name=$(echo ${domain} | sed 's/\./-/g')

    # Create a Certificate in the project (namespace) that needs it; kubesphere-system is used as the example here
    cat <<EOF | kubectl apply -f -
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: cert-${domain_name}
      namespace: ${namespace}
    spec:
      secretName: tls-${domain_name}
      commonName: 
      dnsNames:
      - "*.${domain}"
      - "${domain}"
      issuerRef:
        name: letsencrypt-prod
        kind: ClusterIssuer
    EOF

    # Wait a few minutes, then check the certificate's READY status
    watch -n 1 "kubectl get certificate -n ${namespace}"
EOFF

Upgrade cert-manager

helm upgrade --set installCRDs=true --version v1.12.8 cert-manager jetstack/cert-manager

Pitfall: after reinstalling cert-manager, deploying alidns-webhook kept failing with "service not found":

apiservice.apiregistration.k8s.io/v1alpha1.acme.jsecode.com created
Error from server (InternalError): error when creating "alidns-webhook.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-ma-cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": service "cert-ma-cert-manager-webhook" not found
Error from server (InternalError): error when creating "alidns-webhook.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-ma-cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": service "cert-ma-cert-manager-webhook" not found
Error from server (InternalError): error when creating "alidns-webhook.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-ma-cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": service "cert-ma-cert-manager-webhook" not found
Error from server (InternalError): error when creating "alidns-webhook.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-ma-cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": service "cert-ma-cert-manager-webhook" not found

Solution: delete the old, stale webhook configurations.

[root@master1 cert-manager]# kubectl get validatingwebhookconfiguration.admissionregistration.k8s.io
NAME                                          WEBHOOKS   AGE
cert-ma-cert-manager-webhook                  1          4h31m
cert-manager-webhook                          1          19m
cluster.kubesphere.io                         1          2d8h
ks-events-admission-validate                  1          2d8h
network.kubesphere.io                         1          368d
notification-manager-validating-webhook       4          368d
resourcesquotas.quota.kubesphere.io           1          368d
rulegroups.alerting.kubesphere.io             3          2d8h
storageclass-accessor.storage.kubesphere.io   1          368d
users.iam.kubesphere.io                       1          368d
validating-webhook-configuration              3          368d
[root@master1 cert-manager]# kubectl delete validatingwebhookconfiguration.admissionregistration.k8s.io/cert-ma-cert-manager-webhook
validatingwebhookconfiguration.admissionregistration.k8s.io "cert-ma-cert-manager-webhook" deleted
[root@master1 cert-manager]# kubectl get mutatingwebhookconfiguration.admissionregistration.k8s.io
NAME                                   WEBHOOKS   AGE
cert-ma-cert-manager-webhook           1          4h35m
cert-manager-webhook                   1          23m
ks-events-admission-mutate             1          2d8h
logsidecar-injector-admission-mutate   1          2d8h
mutating-webhook-configuration         1          368d
rulegroups.alerting.kubesphere.io      3          2d8h
[root@master1 cert-manager]# kubectl delete mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-ma-cert-manager-webhook
mutatingwebhookconfiguration.admissionregistration.k8s.io "cert-ma-cert-manager-webhook" deleted

Another major pitfall: the private DNS servers cause cert-manager's recursive lookups of the authoritative nameservers to go wrong, and the DNS01 TXT self-check to fail.

Solution: see "Setting Nameservers for DNS01 Self Check".

kubectl edit -n cert-manager deploy/cert-manager
# Add the following to spec.template.spec.containers.args:
        - --dns01-recursive-nameservers-only
        - --dns01-recursive-nameservers=223.5.5.5:53,223.6.6.6:53
# Additionally, if issuance gets stuck on "propagation check failed", clear the cached TXT records on the private DNS servers

Shutting Down the Cluster (not yet verified)

Shut down the K8s cluster

Reference documentation

  1. Back up etcd first, just in case (a sketch of the backup script is shown below)
# On the k8s-master1 node
cd /root/backups
./ectd-backup.sh
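
The backup script referenced above is not included in these notes. A minimal sketch using etcdctl is given here; the endpoint and certificate paths are assumptions based on a typical KubeKey etcd layout and must be checked against /etc/etcd.env on the actual node:

#!/bin/bash
# Sketch of an etcd snapshot backup (verify endpoint and certificate paths before use)
export ETCDCTL_API=3
BACKUP_DIR=/root/backups/etcd-$(date +%Y%m%d-%H%M%S)
mkdir -p "$BACKUP_DIR"
etcdctl --endpoints=https://192.168.127.11:2379 \
    --cacert=/etc/ssl/etcd/ssl/ca.pem \
    --cert=/etc/ssl/etcd/ssl/admin-k8s-master1.pem \
    --key=/etc/ssl/etcd/ssl/admin-k8s-master1-key.pem \
    snapshot save "$BACKUP_DIR/snapshot.db"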
  2. Shut down all nodes
# NOTE: this command shuts down every node in the K8s cluster, including k8s-master1, the node running it
nodes=$(kubectl get node -o name | cut -d / -f 2 | tac)
for node in ${nodes[@]}
do
    echo "==== Shut down $node ===="
    ssh $node sudo shutdown -h 1
done

Shut down the Ceph cluster

TBD

Upgrading KubeSphere

Upgrade using KubeKey

# Update KubeKey itself
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
chmod +x kk
./kk create config --from-cluster -f update.yaml
vim update.yaml
# Edit and make sure the following fields are correct:
# hosts: basic host information (hostname, IP) and the SSH connection details
# roleGroups.etcd: etcd nodes
# controlPlaneEndpoint: load-balancer address (optional)
# registry: image registry information (optional)
# Upgrade
#./kk upgrade --with-kubernetes v1.23.10 --with-kubesphere v3.4.1 -f update.yaml
./kk upgrade --with-kubesphere v3.4.1 -f update.yaml

Adding Nodes

See the official documentation.

./kk create config --from-cluster -f add-node.yaml
vim add-node.yaml
# Edit and make sure the following fields are correct; add the new nodes under hosts and roleGroups
# hosts: basic host information (hostname, IP) and the SSH connection details
# roleGroups.etcd: etcd nodes
# controlPlaneEndpoint: load-balancer address (optional)
# registry: image registry information (optional)
# Add the nodes
./kk add nodes -f add-node.yaml

Cluster Avalanche Caused by Resource Exhaustion

Reference: configuring K8s ahead of time to cope with resource shortage

Reference: the kubelet configuration file

Configuration file path: /var/lib/kubelet/config.yaml

Modify the following settings:

# Maximum number of pods; set according to the machine's resources (for 4 cores / 16 GB, 50 is recommended)
maxPods: 110
# Maximum number of processes per pod; set according to the machine's resources (maxPods * 100 is recommended)
podPidsLimit: 10000

# Resources reserved for K8s components (CPU, memory); example values for a node with 2 cores, 4 GB RAM and 40 GB disk
kubeReserved:
  cpu: 200m
  memory: 250Mi

# Resources reserved for system daemons; example values for a node with 2 cores, 4 GB RAM and 40 GB disk
systemReserved:
  cpu: 200m
  memory: 250Mi

# Hard eviction thresholds
evictionHard:
  memory.available: 5%
  pid.available: 10%
  nodefs.available: 10%
  imagefs.available: 10%

# Soft eviction thresholds
evictionSoft:
  memory.available: 10%
  nodefs.available: 15%
  imagefs.available: 15%

# How long a soft threshold must be exceeded before pods are evicted
evictionSoftGracePeriod:
  memory.available: 2m
  nodefs.available: 2m
  imagefs.available: 2m

# Maximum grace period granted to a pod before eviction
evictionMaxPodGracePeriod: 120

# Minimum amount of resources to reclaim per eviction
evictionMinimumReclaim:
  memory.available: 200Mi
  nodefs.available: 500Mi
  imagefs.available: 500Mi

# To avoid flapping, how long the kubelet waits before transitioning out of an eviction pressure condition
evictionPressureTransitionPeriod: 30s

Restart kubelet (does not affect the running cluster):

systemctl restart kubelet

Check the node's resource reservations:

kubectl describe node [NODE_NAME] | grep Allocatable -B 7 -A 6

Upgrading Docker

  1. Check the current containerd and docker versions
containerd -v
docker -v
  2. Drain the node
kubectl drain master1 --ignore-daemonsets --delete-local-data --force
kubectl get node
  3. Stop kubelet, docker and containerd on the node
systemctl stop kubelet
systemctl stop docker
systemctl stop containerd
  4. Remove the old docker packages and install the new docker and containerd
sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
systemctl enable docker --now
  5. Start kubelet and uncordon the node
systemctl start kubelet
systemctl status kubelet
kubectl uncordon master1
kubectl get node -o wide 

Configuring the Cluster Gateway via NodePort (Alibaba Cloud) and Route Whitelisting

  1. Create a HaVip and bind it to the k8s-master nodes
  2. Deploy Keepalived on the master nodes and configure it per the Alibaba Cloud documentation
  3. Modify the kube-apiserver configuration
vi /etc/kubernetes/manifests/kube-apiserver.yaml
# Add this flag to widen the NodePort range
- --service-node-port-range=80-32767
# Save and exit; the kubelet re-applies the manifest automatically
  4. Under Cluster Gateway -> Resource Status, change the replica count (it can only be changed there; use one replica per master node)
  5. Modify the kubesphere-router-kubesphere-system workload so it runs only on the masters:
      nodeSelector:
        kubernetes.io/os: linux
        node-role.kubernetes.io/master: ''
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
  6. Modify the kubesphere-router-kubesphere-system Service in the kubesphere-controls-system project, setting its NodePorts to 80/443:
spec:
  ports:
    - name: http
      protocol: TCP
      appProtocol: http
      port: 80
      targetPort: http
      nodePort: 80
    - name: https
      protocol: TCP
      appProtocol: https
      port: 443
      targetPort: https
      nodePort: 443
  # Change Cluster to Local so the ingress can see the real client IP
  externalTrafficPolicy: Local
  7. Edit the cluster gateway and add use-forwarded-headers=true
  8. Edit the project's route (Ingress) and add the whitelist annotation nginx.ingress.kubernetes.io/whitelist-source-range=221.226.92.222/32 (see the fragment below)
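
For reference, the whitelist annotation on an Ingress looks like the following fragment; the CIDR is the example from above, and multiple ranges can be comma-separated:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: 221.226.92.222/32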