手撕 kubernetes 核心组件以及相关证书签发、网络配置 技术笔记

系统:Windows10 专业版

安装虚拟机软件VMware

Workstation 16 Pro for Windows: https://www.vmware.com/go/getworkstation-win

序列号:ZF3R0-FHED2-M80TY-8QYGC-NPKYF

创建准备Linux虚拟机环境

下载系统镜像

Centos7:

http://mirrors.ustc.edu.cn/centos/7.9.2009/isos/x86_64/CentOS-7-x86_64-Minimal-2009.iso

Ubuntu20:

https://releases.ubuntu.com/20.04/ubuntu-20.04.2-live-server-amd64.iso

创建虚拟机
制作模板机
  • 修改软件镜像源地址

  • 安装基础软件工具包(net-tools、vim、git等)

  • 安装容器运行时docker

根据模板机克隆至少5个虚拟机
  • 运维管理节点
  • Master主节点
  • DNS节点
  • 计算节点1
  • 计算节点2

网络规划和配置

虚拟机网络(服务器网络)

网段:10.4.7.X/24

子网掩码:255.255.255.0

网关:10.4.7.254

DNS服务器1:10.4.7.254

服务器1(主节点Master)

IP地址:10.4.7.200

服务器2(管理运维节点Manager)

IP地址:10.4.7.100

服务器3(DNS节点)

IP地址:10.4.7.11

服务器4(计算节点1)

IP地址:10.4.7.22

服务器5(计算节点2)

IP地址:10.4.7.33

手动配置静态IP地址

vim /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"    # dhcp 为自动获取,static 为静态IP
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="8500e7cf-3d2e-4002-9b5b-47678d8118f9"
DEVICE="ens33"
ONBOOT="yes"    # 开机自动启用该网络连接
IPADDR="10.4.7.200"
GATEWAY="10.4.7.254"
DNS1="10.4.7.254"

查看地址信息

[root@centos7demo ~]# ip addr 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:fd:f1:b9 brd ff:ff:ff:ff:ff:ff
inet 10.4.7.200/8 brd 10.255.255.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::95e8:b37e:802a:9184/64 scope link noprefixroute
valid_lft forever preferred_lft forever

测试ping网络地址

[root@centos7demo ~]# ping www.baidu.com
PING www.a.shifen.com (14.215.177.39) 56(84) bytes of data.
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=1 ttl=128 time=44.2 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=2 ttl=128 time=47.5 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=3 ttl=128 time=40.3 ms
^C
--- www.a.shifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2012ms
rtt min/avg/max/mdev = 40.349/44.042/47.539/2.943 ms
系统配置
更改主机名
[root@centos7demo ~]# vim /etc/hostname

K8s官方安装解决方案:

https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/#docker

关闭防火墙 firewalld

查看centos防火墙状态

[root@centos7demo ~]# systemctl status firewalld

● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since 一 2021-05-17 22:46:14 CST; 19min ago
Docs: man:firewalld(1)
Main PID: 682 (firewalld)
CGroup: /system.slice/firewalld.service
└─682 /usr/bin/python2 -Es /usr/sbin/firewalld --nofork --nopid

5月 17 22:46:14 centos7demo systemd[1]: Starting firewalld - dynamic firewall daemon...
5月 17 22:46:14 centos7demo systemd[1]: Started firewalld - dynamic firewall daemon.
5月 17 22:46:14 centos7demo firewalld[682]: WARNING: AllowZoneDrifting is enabled. This is considered an insecure co... now.
Hint: Some lines were ellipsized, use -l to show in full.

关闭防火墙

[root@centos7demo ~]# systemctl stop firewalld
[root@centos7demo ~]# systemctl status firewalld

● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: inactive (dead) since 一 2021-05-17 23:34:10 CST; 10s ago
Docs: man:firewalld(1)
Process: 682 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=0/SUCCESS)
Main PID: 682 (code=exited, status=0/SUCCESS)

5月 17 22:46:14 centos7demo systemd[1]: Starting firewalld - dynamic firewall daemon...
5月 17 22:46:14 centos7demo systemd[1]: Started firewalld - dynamic firewall daemon.
5月 17 22:46:14 centos7demo firewalld[682]: WARNING: AllowZoneDrifting is enabled. This is considered an insecure co... now.
5月 17 23:34:06 centos7demo systemd[1]: Stopping firewalld - dynamic firewall daemon...
5月 17 23:34:10 centos7demo systemd[1]: Stopped firewalld - dynamic firewall daemon.
Hint: Some lines were ellipsized, use -l to show in full.

避免开机启动

[root@centos7demo ~]# systemctl disable firewalld

Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

关于SELinux

SELinux 有三个模式(可以由用户设置)。这些模式将规定 SELinux 在主体请求时如何应对。

这些模式是:

Enforcing 强制— SELinux 策略强制执行,基于 SELinux 策略规则授予或拒绝主体对目标的访问

Permissive 宽容— SELinux 策略不强制执行,不实际拒绝访问,但会有拒绝信息写入日志

Disabled 禁用— 完全禁用SELinux

默认情况下,大部分系统的SELinux设置为Enforcing。

查看系统当前是什么模式?

[root@centos7demo ~]# getenforce

Enforcing

关闭SELinux

[root@centos7demo ~]# vim /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled

# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
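
如果暂时不方便重启,也可以先用 setenforce 把当前模式临时切换为 Permissive(立即生效,重启后仍以 /etc/selinux/config 中的配置为准)。下面是一个简单示例:

# 临时切换为宽容模式,立即生效,重启后失效
setenforce 0
# 确认当前模式
getenforce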

重启机器后

[root@centos7demo ~]# sestatus -v

SELinux status: disabled

Linux Swap

swap space是磁盘上的一块区域,可以是一个分区,也可以是一个文件,或者是他们的组合。简单点说,当系统物理内存吃紧时,Linux会将内存中不常访问的数据保存到swap上,这样系统就有更多的物理内存为各个进程服务,而当系统需要访问swap上存储的内容时,再将swap上的数据加载到内存中,这就是我们常说的swap out和swap in。

很多发行版(如ubuntu)的休眠功能依赖于swap分区,当系统休眠的时候,会将内存中的数据保存到swap分区上,等下次系统启动的时候,再将数据加载到内存中,这样可以加快系统的启动速度,所以如果要使用休眠的功能,必须要配置swap分区,并且大小一定要大于等于物理内存。
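
关闭之前可以先确认当前 swap 的启用情况,下面是一个简单的检查示例:

# 查看内存和 swap 的使用情况
free -h
# 查看当前启用的交换分区/文件(关闭成功后该列表应为空)
cat /proc/swaps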

关闭SWAP

[root@centos7demo ~]# swapoff 

用法:
swapoff [选项] [<指定>]

选项:
-a, --all 禁用 /proc/swaps 中的所有交换区
-v, --verbose verbose mode

-h, --help 显示此帮助并退出
-V, --version 输出版本信息并退出

<指定> 参数包括:
-L <标签> 要使用设备的标签
-U <uuid> 要使用设备的 UUID
LABEL=<标签> 要使用设备的标签
UUID=<uuid> 要使用设备的 UUID
<设备> 要使用设备的名称
<文件> 要使用文件的名称

更多信息请参阅 swapoff(8)。
禁用 /proc/swaps 中的所有交换区
[root@centos7demo ~]# swapoff -a
注释掉 swap 相关配置
[root@centos7demo ~]# vim /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon May 17 19:12:22 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=48fe47be-026f-4654-9de9-80b019e25b1a /boot xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0
确保成功 列表为空
[root@centos7demo ~]# cat /proc/swaps 

Filename Type Size Used Priority

安装容器运行时(Docker)

更新yum软件包
[root@centos7demo ~]# yum -y update && yum -y upgrade
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.163.com
* extras: mirrors.bfsu.edu.cn
* updates: mirrors.163.com
正在解决依赖关系
--> 正在检查事务
...
完毕!
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.163.com
* extras: mirrors.bfsu.edu.cn
* updates: mirrors.163.com
安装基础依赖

yum网络不通,导致安装软件包失败

Downloading packages:
wget-1.14-18.el7_6.1.x86_64.rp FAILED
http://mirrors.aliyun.com/centos/7.9.2009/os/x86_64/Packages/wget-1.14-18.el7_6.1.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirrors.aliyun.com; Unknown error"
正在尝试其它镜像。

解决:添加DNS

[root@centos7demo yum.repos.d]# vim /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="8500e7cf-3d2e-4002-9b5b-47678d8118f9"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="10.4.7.200"
GATEWAY="10.4.7.254"
DNS1="10.4.7.254"
DNS2="8.8.8.8"
DNS3="114.114.114.114"
[root@centos7demo yum.repos.d]# systemctl restart network

安装基础包(虚拟磁盘管理工具)

[root@centos7demo yum.repos.d]# yum install yum-utils device-mapper-persistent-data lvm2
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.163.com
* extras: mirrors.bfsu.edu.cn
* updates: mirrors.163.com
软件包 device-mapper-persistent-data-0.8.5-3.el7_9.2.x86_64 已安装并且是最新版本
软件包 7:lvm2-2.02.187-6.el7_9.5.x86_64 已安装并且是最新版本
正在解决依赖关系
--> 正在检查事务
...
作为依赖被安装:
libxml2-python.x86_64 0:2.9.1-6.el7.5 python-chardet.noarch 0:2.2.1-3.el7 python-kitchen.noarch 0:1.1.1-5.el7

完毕!
添加 Docker yum阿里云仓库
[root@centos7demo ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

已加载插件:fastestmirror
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
安装docker-ce
[root@centos7demo ~]# yum update && yum install docker-ce
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.163.com
* extras: mirrors.bfsu.edu.cn
* updates: mirrors.163.com
base | 3.6 kB 00:00:00
docker-ce-stable | 3.5 kB 00:00:00
extras | 2.9 kB 00:00:00
updates | 2.9 kB 00:00:00
(1/2): docker-ce-stable/7/x86_64/primary_db | 60 kB 00:00:00
(2/2): docker-ce-stable/7/x86_64/updateinfo | 55 B 00:00:02
No packages marked for update
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.163.com
* extras: mirrors.bfsu.edu.cn
* updates: mirrors.163.com
正在解决依赖关系
--> 正在检查事务
---> 软件包 docker-ce.x86_64.3.20.10.6-3.el7 将被 安装
...
已安装:
docker-ce.x86_64 3:20.10.6-3.el7
作为依赖被安装:
audit-libs-python.x86_64 0:2.8.5-4.el7 checkpolicy.x86_64 0:2.5-8.el7
container-selinux.noarch 2:2.119.2-1.911c772.el7_8 containerd.io.x86_64 0:1.4.4-3.1.el7
docker-ce-cli.x86_64 1:20.10.6-3.el7 docker-ce-rootless-extras.x86_64 0:20.10.6-3.el7
docker-scan-plugin.x86_64 0:0.7.0-3.el7 fuse-overlayfs.x86_64 0:0.7.2-6.el7_8
fuse3-libs.x86_64 0:3.6.1-4.el7 libcgroup.x86_64 0:0.41-21.el7
libseccomp.x86_64 0:2.3.1-4.el7 libsemanage-python.x86_64 0:2.5-14.el7
policycoreutils-python.x86_64 0:2.5-34.el7 python-IPy.noarch 0:0.75-6.el7
setools-libs.x86_64 0:3.3.8-4.el7 slirp4netns.x86_64 0:0.4.3-4.el7_8

完毕!
配置Docker daemon
[root@centos7demo ~]# mkdir /etc/docker
[root@centos7demo ~]# cd /etc/docker/
[root@centos7demo docker]# vim daemon.json

{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
重载配置和重启docker设置开机启动
[root@centos7demo docker]# systemctl daemon-reload
[root@centos7demo docker]# systemctl restart docker
[root@centos7demo docker]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
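
docker 重启之后,可以顺便确认 cgroup driver 是否已经切换为 systemd(与后文 kubelet 的 --cgroup-driver systemd 保持一致),示例如下:

# 预期输出 Cgroup Driver: systemd
docker info | grep -i "cgroup driver"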

克隆系统镜像

以上述过程准备好的虚拟机环境为模板,克隆出五台虚拟机:master-7.200、manager-7.100、dns-7.11、node-7.22、node-7.33

分别修改主机名称:master-200、manager-100、dns-11、node-22、node-33

vim /etc/hostname

修改网络静态IP地址分配为:10.4.7.200、10.4.7.100、10.4.7.11、10.4.7.22、10.4.7.33

vim /etc/sysconfig/network-scripts/ifcfg-ens33

DNS节点安装bind服务

bind:https://www.isc.org/bind/

现在使用最为广泛的DNS服务器软件是BIND(Berkeley Internet Name Domain),最早由伯克利大学的一名学生编写,目前最新的大版本是9,由ISC(Internet Systems Consortium)开发和维护。

BIND支持现今绝大多数的操作系统(Linux、UNIX、Mac、Windows)

BIND服务的名称称之为named

DNS默认使用UDP、TCP协议,监听端口为53(domain)和953(rndc,远程控制使用)

yum install -y bind bind-chroot bind-utils

修改主配置

[root@node-11 ~]# vim /etc/named.conf

listen-on port 53 { 10.4.7.11; };
allow-query { any; };
forwarders { 10.4.7.254; };
dnssec-enable no;
dnssec-validation no;

检查配置

[root@node-11 ~]# named-checkconf

区域配置文件

[root@node-11 ~]# vim /etc/named.rfc1912.zones 

zone "host.com" IN {
type master;
file "host.com.zone";
allow-update { 10.4.7.11; };
};

zone "zrf.com" IN {
type master;
file "zrf.com.zone";
allow-update { 10.4.7.11; };
};

编辑区域数据文件

主机域

[root@node-11 ~]# vim /var/named/host.com.zone

$ORIGIN host.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.host.com. dnsadmin.host.com. (
2021051801 ; serial
10800 ; refresh after 3 hours
900 ; retry after 15 minutes
604800 ; expire after 1 week
86400 ; minimum TTL of 1 day
)
NS dns.host.com.
$TTL 60 ; 1 minute
dns A 10.4.7.11
node7-11 A 10.4.7.11
node7-22 A 10.4.7.22
node7-33 A 10.4.7.33
node7-100 A 10.4.7.100
node7-200 A 10.4.7.200
[root@node-11 ~]# vim /var/named/zrf.com.zone

$ORIGIN zrf.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.zrf.com. dnsadmin.zrf.com. (
2021051801 ; serial
10800 ; refresh after 3 hours
900 ; retry after 15 minutes
604800 ; expire after 1 week
86400 ; minimum TTL of 1 day
)
NS dns.zrf.com.
$TTL 60 ; 1 minute
dns A 10.4.7.11
[root@node-11 ~]# named-checkconf

启动DNS服务

[root@node-11 ~]# systemctl start named
[root@node-11 ~]# systemctl status named

● named.service - Berkeley Internet Name Domain (DNS)
Loaded: loaded (/usr/lib/systemd/system/named.service; disabled; vendor preset: disabled)
Active: active (running) since 二 2021-05-18 19:59:03 CST; 2s ago
Process: 5702 ExecStart=/usr/sbin/named -u named -c ${NAMEDCONF} $OPTIONS (code=exited, status=0/SUCCESS)
Process: 5699 ExecStartPre=/bin/bash -c if [ ! "$DISABLE_ZONE_CHECKING" == "yes" ]; then /usr/sbin/named-checkconf -z "$NAMEDCONF"; else echo "Checking of zone files is disabled"; fi (code=exited, status=0/SUCCESS)
Main PID: 5705 (named)
Tasks: 5
Memory: 52.2M
CGroup: /system.slice/named.service
└─5705 /usr/sbin/named -u named -c /etc/named.conf

查看进程 监听了 53 端口

[root@node-11 ~]# netstat -nltp

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 10.4.7.11:53 0.0.0.0:* LISTEN 5705/named
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 885/sshd
tcp 0 0 127.0.0.1:953 0.0.0.0:* LISTEN 5705/named
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1110/master
tcp6 0 0 :::22 :::* LISTEN 885/sshd
tcp6 0 0 ::1:953 :::* LISTEN 5705/named
tcp6 0 0 ::1:25 :::* LISTEN 1110/master

测试dns

Dig是一个在类Unix命令行模式下查询DNS包括NS记录,A记录,MX记录等相关信息的工具。

[root@node-11 ~]# dig -t A node7-22.host.com @10.4.7.11 +short

10.4.7.22

修改每个主机的DNS1配置为:10.4.7.11

[root@master-200 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33

DNS1="10.4.7.11"

[root@node-200 ~]# systemctl restart network

配置宿主机(Windows的网络设置)虚拟机网卡(VMware Network Adapter VMnet8)的首选DNS为:10.4.7.11

然后本地 ping 自建域名

C:\Users\darifo>ping node7-100.host.com

正在 Ping node7-100.host.com [10.4.7.100] 具有 32 字节的数据:
来自 10.4.7.100 的回复: 字节=32 时间<1ms TTL=64
来自 10.4.7.100 的回复: 字节=32 时间=2ms TTL=64
来自 10.4.7.100 的回复: 字节=32 时间=4ms TTL=64
来自 10.4.7.100 的回复: 字节=32 时间=4ms TTL=64

证书签发

安装在 运维管理主机(10.4.7.100)上

安装 CFSSL R1.2

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

[root@node-100 ~]# ll

总用量 18812
-rw-------. 1 root root 1218 5月 17 19:19 anaconda-ks.cfg
-rw-r--r-- 1 root root 6595195 4月 17 03:17 cfssl-certinfo_linux-amd64
-rw-r--r-- 1 root root 2277873 4月 17 03:17 cfssljson_linux-amd64
-rw-r--r-- 1 root root 10376657 4月 17 03:17 cfssl_linux-amd64

赋予执行权限

chmod +x cfssl*

重命名

for x in cfssl*; do mv $x ${x%*_linux-amd64};  done

[root@node-100 ~]# ll

总用量 18812
-rw-------. 1 root root 1218 5月 17 19:19 anaconda-ks.cfg
-rw-r--r-- 1 root root 10376657 4月 17 03:17 cfssl
-rw-r--r-- 1 root root 6595195 4月 17 03:17 cfssl-certinfo
-rw-r--r-- 1 root root 2277873 4月 17 03:17 cfssljson
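
上面的 for 循环用到了 shell 的后缀删除展开 ${x%_linux-amd64}(% 表示从变量值末尾删掉能匹配到的最短后缀),一个最小演示如下:

# 输出 cfssl
x=cfssl_linux-amd64
echo ${x%_linux-amd64}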

移动文件到目录 (/usr/bin)

[root@node-100 ~]# mv cfssl* /usr/bin

赋执行权限

[root@node-100 ~]# chmod +x /usr/bin/cfssl*
[root@node-100 ~]# which cfssl

/usr/bin/cfssl

准备证书目录

[root@node-100 ~]# cd /opt
[root@node-100 opt]# mkdir certs

创建CA证书

[root@node-100 certs]# vim ca-csr.json

{
"CN": "Darifo",
"hosts": [

],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"c": "CN",
"ST": "beijing",
"L": "beijing",
"O": "da",
"OU": "rifo"
}
],
"ca": {
"expiry": "175200h"
}
}

生成文件

[root@node-100 certs]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca

2021/05/18 21:18:03 [INFO] generating a new CA key and certificate from CSR
2021/05/18 21:18:03 [INFO] generate received request
2021/05/18 21:18:03 [INFO] received CSR
2021/05/18 21:18:03 [INFO] generating key: rsa-2048
2021/05/18 21:18:04 [INFO] encoded CSR
2021/05/18 21:18:04 [INFO] signed certificate with serial number 683121607095230134922243035588174960318498579169

[root@node-100 certs]# ll

总用量 16
-rw-r--r-- 1 root root 993 5月 18 21:18 ca.csr
-rw-r--r-- 1 root root 226 5月 18 21:13 ca-csr.json
-rw------- 1 root root 1675 5月 18 21:18 ca-key.pem
-rw-r--r-- 1 root root 1338 5月 18 21:18 ca.pem
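
可以用 cfssl-certinfo 查看刚生成的根证书内容(CN、有效期、签发者等),示例:

# 查看 ca.pem 的证书详情
cfssl-certinfo -cert ca.pem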

安装Harbor

https://github.com/goharbor/harbor/releases

[root@node-100 src]# wget https://github.com/goharbor/harbor/releases/download/v2.1.5/harbor-offline-installer-v2.1.5.tgz

[root@node-100 src]# tar -zxvf harbor-offline-installer-v2.1.5.tgz -C /opt

[root@node-100 opt]# mv harbor/ harbor-v2.1.5

[root@node-100 opt]# ln -s /opt/harbor-v2.1.5/ /opt/harbor

配置 harbor.yml 并创建相关目录

[root@node-100 harbor]# vim harbor.yml


hostname: harbor.zrf.com
http:
  port: 180
harbor_admin_password: 12345678
data_volume: /data/harbor
log:
  location: /data/harbor/logs


[root@node-100 harbor]# mkdir -p /data/harbor/logs

安装docker-compose

https://github.com/docker/compose/releases/

[root@node-100 src]# mv docker-compose-Linux-x86_64 docker-compose

[root@node-100 src]# mv docker-compose /usr/local/bin/docker-compose

[root@node-100 src]# chmod +x /usr/local/bin/docker-compose

[root@node-100 src]# sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

[root@node-100 src]# docker-compose version

docker-compose version 1.29.2, build 5becea4c
docker-py version: 5.0.0
CPython version: 3.7.10
OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019

执行安装脚本

[root@node-100 harbor]# ./install.sh 

[Step 0]: checking if docker is installed ...

Note: docker version: 20.10.6

[Step 1]: checking docker-compose is installed ...

Note: docker-compose version: 1.29.2

[Step 2]: loading Harbor images ...

[Step 5]: starting Harbor ...
Creating network "harbor-v215_harbor" with the default driver
Creating harbor-log ... done
Creating registry ... done
Creating harbor-portal ... done
Creating redis ... done
Creating harbor-db ... done
Creating registryctl ... done
Creating harbor-core ... done
Creating harbor-jobservice ... done
Creating nginx ... done
✔ ----Harbor has been installed and started successfully.----

启动 Harbor

[root@node-100 harbor]# docker-compose up -d

harbor-log is up-to-date
Starting harbor-db ... done
Starting registry ... done
Starting redis ... done
Starting registryctl ... done
Starting harbor-portal ... done
Starting harbor-core ... done
Starting nginx ... done
Starting harbor-jobservice ... done

安装 Nginx

[root@node-100 harbor]# cd /etc/yum.repos.d/
[root@node-100 yum.repos.d]# vim nginx.repo

[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key

[root@node-100 yum.repos.d]# yum install nginx -y

配置Nginx代理

[root@node-100 ~]# vim /etc/nginx/conf.d/harbor.zrf.com.conf

server {
listen 80;
server_name harbor.zrf.com;

client_max_body_size 1000m;

location / {
proxy_pass http://127.0.0.1:180;
}
}

[root@node-100 ~]# systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.

配置DNS服务器解析 (10.4.7.11上)

[root@node-11 ~]# vim /var/named/zrf.com.zone 

$ORIGIN zrf.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.zrf.com. dnsadmin.zrf.com. (
2021051802 ; serial
10800 ; refresh after 3 hours
900 ; retry after 15 minutes
604800 ; expire after 1 week
86400 ; minimum TTL of 1 day
)
NS dns.zrf.com.
$TTL 60 ; 1 minute
dns A 10.4.7.11
harbor A 10.4.7.100

# 每次修改DNS解析记录记得修改 serial 值

[root@node-11 ~]# systemctl restart named

浏览器访问:

http://harbor.zrf.com/
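
如果暂时不方便用浏览器,也可以在任意一台已把 DNS 指向 10.4.7.11 的主机上用 curl 简单验证域名解析和 Nginx 反向代理是否正常(假设 nginx 已经启动):

# 能返回 HTTP 状态码即说明 harbor.zrf.com 已可访问
curl -I http://harbor.zrf.com/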

管理节点(10.4.7.100)上配置docker的 registry-mirrors (镜像加速地址)和 insecure-registries(私有仓库地址)

[root@node-100 ~]# vim /etc/docker/daemon.json 

{
"registry-mirrors": [
"https://6adf82tk.mirror.aliyuncs.com",
"https://docker.mirrors.ustc.edu.cn",
"https://registry.docker-cn.com"
],
"insecure-registries": [
"harbor.zrf.com"
],
"bip": "172.7.100.1/24",
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}


[root@node-100 ~]# systemctl daemon-reload
[root@node-100 ~]# systemctl restart docker

[root@node-100 ~]# cd /opt/harbor
[root@node-100 harbor]# docker-compose up -d

查看Docker的桥接网卡

[root@node-100 harbor]# docker network inspect bridge
[
{
"Name": "bridge",
"Id": "e17b9f7b289ed8e90f5e803c460dfd54e09c82d62a2a778a4d237f249eba69ab",
"Created": "2021-05-19T00:27:54.304910387+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.7.100.0/24",
"Gateway": "172.7.100.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]

测试镜像推送

[root@node-100 ~]# docker pull nginx:1.20

1.20: Pulling from library/nginx
69692152171a: Pull complete
965615a5cec8: Pull complete
b141b026b9ce: Pull complete
8d70dc384fb3: Pull complete
525e372d6dee: Pull complete
6e60219fdb98: Pull complete
Digest: sha256:ea4560b87ff03479670d15df426f7d02e30cb6340dcd3004cdfc048d6a1d54b4
Status: Downloaded newer image for nginx:1.20
docker.io/library/nginx:1.20

[root@node-100 ~]# docker images | grep nginx
nginx 1.20 7ab27dbbfbdf 6 days ago 133MB


[root@node-100 ~]# docker tag 7ab27dbbfbdf harbor.zrf.com/public/nginx:v1.20

[root@node-100 ~]# docker login harbor.zrf.com
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded


[root@node-100 ~]# docker push harbor.zrf.com/public/nginx:v1.20
The push refers to repository [harbor.zrf.com/public/nginx]
272bc57d3405: Pushed
f7141923aaa3: Pushed
9b63e6289fbe: Pushed
a2f4f809e04e: Pushed
1839f9962bd8: Pushed
02c055ef67f5: Pushed
v1.20: digest: sha256:598057a5c482d2fb42092fd6f4ba35ea4cc86c41f5db8bb68d1ab92c4c40db98 size: 1570

安装 K8s

在管理节点(10.4.7.100)上

创建根证书配置文件

vim /opt/certs/ca-config.json


[root@node-100 ~]# cd /opt/certs/

[root@node-100 certs]# vim ca-config.json

{
"signing": {
"default": {
"expiry": "175200h"
},
"profiles":{
"server": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth"
]
},
"client": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"client auth"
]
},
"peer": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}

创建 etcd 证书请求文件 etcd-peer-csr.json

vim /opt/certs/etcd-peer-csr.json

{
"CN": "k8s-etcd",
"hosts": [
"10.4.7.200",
"10.4.7.11",
"10.4.7.22",
"10.4.7.33"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "zrf",
"OU": "rf"
}
]
}

生成etcd用的证书文件


[root@node-100 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssljson -bare etcd-peer

2021/05/19 02:56:59 [INFO] generate received request
2021/05/19 02:56:59 [INFO] received CSR
2021/05/19 02:56:59 [INFO] generating key: rsa-2048
2021/05/19 02:56:59 [INFO] encoded CSR
2021/05/19 02:56:59 [INFO] signed certificate with serial number 459984002147105853352453930204597808407893791844
2021/05/19 02:56:59 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@node-100 certs]# ll
总用量 36
-rw-r--r-- 1 root root 599 5月 19 00:55 ca-config.json
-rw-r--r-- 1 root root 993 5月 18 21:18 ca.csr
-rw-r--r-- 1 root root 226 5月 18 21:13 ca-csr.json
-rw------- 1 root root 1675 5月 18 21:18 ca-key.pem
-rw-r--r-- 1 root root 1338 5月 18 21:18 ca.pem
-rw-r--r-- 1 root root 1062 5月 19 02:56 etcd-peer.csr
-rw-r--r-- 1 root root 288 5月 19 02:03 etcd-peer-csr.json
-rw------- 1 root root 1675 5月 19 02:56 etcd-peer-key.pem
-rw-r--r-- 1 root root 1424 5月 19 02:56 etcd-peer.pem
master节点(10.4.7.200)安装etcd

主节点(10.4.7.200)

创建用户

[root@master-200 ~]# useradd -s /sbin/nologin -M etcd
[root@master-200 ~]# id etcd
uid=1000(etcd) gid=1000(etcd) 组=1000(etcd)

下载软件

https://github.com/etcd-io/etcd/releases

[root@master-200 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.16/etcd-v3.4.16-linux-amd64.tar.gz

[root@master-200 ~]# tar xfv etcd-v3.4.16-linux-amd64.tar.gz -C /opt

[root@master-200 ~]# cd /opt/

[root@master-200 opt]# mv etcd-v3.4.16-linux-amd64/ etcd-v3.4.16
[root@master-200 opt]# ln -s /opt/etcd-v3.4.16/ /opt/etcd
[root@master-200 opt]# cd etcd

创建数据目录

[root@master-200 etcd]# mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server

授权用户目录权限

[root@master-200 etcd]# chown -R etcd.etcd /opt/etcd-v3.4.16/
[root@master-200 etcd]# chown -R etcd.etcd /data/etcd/
[root@master-200 etcd]# chown -R etcd.etcd /data/logs/etcd-server/

拷贝证书从管理节点(10.4.7.100)到主节点(10.4.7.200)

[root@master-200 etcd]# cd /opt/etcd/certs
[root@master-200 certs]# scp node7-100.host.com:/opt/certs/etcd-peer.pem .
[root@master-200 certs]# scp node7-100.host.com:/opt/certs/ca.pem .
[root@master-200 certs]# scp node7-100.host.com:/opt/certs/etcd-peer-key.pem .

配置etcd启动脚本

[root@master-200 etcd]# vim etcd-server-startup.sh

#!/bin/sh
./etcd --name etcd-7-200 \
--data-dir /data/logs/etcd-server \
--listen-peer-urls https://10.4.7.200:2380 \
--listen-client-urls https://10.4.7.200:2379,http://127.0.0.1:2379 \
--quota-backend-bytes 8000000000 \
--advertise-client-urls https://10.4.7.200:2379,http://127.0.0.1:2379 \
--initial-advertise-peer-urls https://10.4.7.200:2380 \
--initial-cluster etcd-7-200=https://10.4.7.200:2380,etcd-7-22=https://10.4.7.22:2380,etcd-7-33=https://10.4.7.33:2380 \
--cert-file ./certs/etcd-peer.pem \
--key-file ./certs/etcd-peer-key.pem \
--client-cert-auth \
--trusted-ca-file ./certs/ca.pem \
--peer-cert-file ./certs/etcd-peer.pem \
--peer-key-file ./certs/etcd-peer-key.pem \
--peer-client-cert-auth \
--peer-trusted-ca-file ./certs/ca.pem \
--log-output stdout
[root@master-200 etcd]# chmod +x etcd-server-startup.sh

安装 supervisor 软件

[root@master-200 etcd]# yum install epel-release
[root@master-200 etcd]# yum install -y supervisor
[root@master-200 etcd]# systemctl enable supervisord
Created symlink from /etc/systemd/system/multi-user.target.wants/supervisord.service to /usr/lib/systemd/system/supervisord.service.

[root@master-200 etcd]# systemctl start supervisord
[root@master-200 etcd]# systemctl status supervisord

● supervisord.service - Process Monitoring and Control Daemon
Loaded: loaded (/usr/lib/systemd/system/supervisord.service; enabled; vendor preset: disabled)
Active: active (running) since 三 2021-05-19 04:31:51 CST; 28s ago
Process: 6265 ExecStart=/usr/bin/supervisord -c /etc/supervisord.conf (code=exited, status=0/SUCCESS)
Main PID: 6268 (supervisord)
Tasks: 1
Memory: 10.9M
CGroup: /system.slice/supervisord.service
└─6268 /usr/bin/python /usr/bin/supervisord -c /etc/supervisord.conf

5月 19 04:31:51 master-200 systemd[1]: Starting Process Monitoring and Control Daemon...
5月 19 04:31:51 master-200 systemd[1]: Started Process Monitoring and Control Daemon.

配置 supervisor 的 etcd-server.ini

[root@master-200 etcd]# vim /etc/supervisord.d/etcd-server.ini

#内容如下
[program:etcd-server-7-200]
command=/opt/etcd/etcd-server-startup.sh
numprocs=1
directory=/opt/etcd
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=etcd
redirect_stderr=true
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false

更新和启动

[root@master-200 etcd]# supervisorctl update
etcd-server-7-200: added process group

[root@master-200 etcd]# supervisorctl reload
Restarted supervisord

[root@master-200 etcd]# tail -fn 200 /data/logs/etcd-server/etcd.stdout.log

[root@master-200 etcd]# supervisorctl status
etcd-server-7-200 RUNNING pid 6517, uptime 0:00:34
计算节点1(10.4.7.22)安装etcd

基础批处理命令

#!/bin/sh

# 新建用户
useradd -s /sbin/nologin -M etcd
id etcd
# 下载ETCD
wget https://github.com/etcd-io/etcd/releases/download/v3.4.16/etcd-v3.4.16-linux-amd64.tar.gz

# 解压重命名
tar xfv etcd-v3.4.16-linux-amd64.tar.gz -C /opt
cd /opt/
mv etcd-v3.4.16-linux-amd64/ etcd-v3.4.16

# 建立软连接
ln -s /opt/etcd-v3.4.16/ /opt/etcd

# 创建etcd数据目录
mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
chown -R etcd.etcd /opt/etcd-v3.4.16/
chown -R etcd.etcd /data/etcd/
chown -R etcd.etcd /data/logs/etcd-server/

# 从管理节点拷贝证书到本机
cd /opt/etcd/certs

scp node7-100.host.com:/opt/certs/etcd-peer.pem .
scp node7-100.host.com:/opt/certs/ca.pem .
scp node7-100.host.com:/opt/certs/etcd-peer-key.pem .

# 安装 supervisor 软件
yum install epel-release
yum install -y supervisor
systemctl enable supervisord
systemctl start supervisord
systemctl status supervisord

编辑启动脚本

cd /opt/etcd
# 编辑启动脚本
vim etcd-server-startup.sh


#!/bin/sh
./etcd --name etcd-7-22 \
--data-dir /data/logs/etcd-server \
--listen-peer-urls https://10.4.7.22:2380 \
--listen-client-urls https://10.4.7.22:2379,http://127.0.0.1:2379 \
--quota-backend-bytes 8000000000 \
--advertise-client-urls https://10.4.7.22:2379,http://127.0.0.1:2379 \
--initial-advertise-peer-urls https://10.4.7.22:2380 \
--initial-cluster etcd-7-200=https://10.4.7.200:2380,etcd-7-22=https://10.4.7.22:2380,etcd-7-33=https://10.4.7.33:2380 \
--cert-file ./certs/etcd-peer.pem \
--key-file ./certs/etcd-peer-key.pem \
--client-cert-auth \
--trusted-ca-file ./certs/ca.pem \
--peer-cert-file ./certs/etcd-peer.pem \
--peer-key-file ./certs/etcd-peer-key.pem \
--peer-client-cert-auth \
--peer-trusted-ca-file ./certs/ca.pem \
--log-output stdout



chmod +x etcd-server-startup.sh
chown -R etcd.etcd /opt/etcd-v3.4.16/ /data/etcd/ /data/logs/etcd-server/
# 配置etcd-server的进程守护
vim /etc/supervisord.d/etcd-server.ini

#内容如下
[program:etcd-server-7-22]
command=/opt/etcd/etcd-server-startup.sh
numprocs=1
directory=/opt/etcd
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=etcd
redirect_stderr=true
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
# 更新重载查看 supervisor
supervisorctl update
supervisorctl reload
# tail -fn 200 /data/logs/etcd-server/etcd.stdout.log
supervisorctl status
计算节点2(10.4.7.33)安装etcd

基础操作同上

区别安装

vim etcd-server-startup.sh


#!/bin/sh
./etcd --name etcd-7-33 \
--data-dir /data/logs/etcd-server \
--listen-peer-urls https://10.4.7.33:2380 \
--listen-client-urls https://10.4.7.33:2379,http://127.0.0.1:2379 \
--quota-backend-bytes 8000000000 \
--advertise-client-urls https://10.4.7.33:2379,http://127.0.0.1:2379 \
--initial-advertise-peer-urls https://10.4.7.33:2380 \
--initial-cluster etcd-7-200=https://10.4.7.200:2380,etcd-7-22=https://10.4.7.22:2380,etcd-7-33=https://10.4.7.33:2380 \
--cert-file ./certs/etcd-peer.pem \
--key-file ./certs/etcd-peer-key.pem \
--client-cert-auth \
--trusted-ca-file ./certs/ca.pem \
--peer-cert-file ./certs/etcd-peer.pem \
--peer-key-file ./certs/etcd-peer-key.pem \
--peer-client-cert-auth \
--peer-trusted-ca-file ./certs/ca.pem \
--log-output stdout
# 配置etcd-server的进程守护
vim /etc/supervisord.d/etcd-server.ini

#内容如下
[program:etcd-server-7-33]
command=/opt/etcd/etcd-server-startup.sh
numprocs=1
directory=/opt/etcd
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=etcd
redirect_stderr=true
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
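
三台 etcd 都被 supervisor 拉起之后,可以在任意一个 etcd 节点上用 etcdctl 检查集群健康状态。下面是一个参考示例(在 /opt/etcd 目录下执行,证书路径沿用本文的 ./certs):

ETCDCTL_API=3 ./etcdctl \
--endpoints=https://10.4.7.200:2379,https://10.4.7.22:2379,https://10.4.7.33:2379 \
--cacert=./certs/ca.pem \
--cert=./certs/etcd-peer.pem \
--key=./certs/etcd-peer-key.pem \
endpoint health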
下载安装 kubernetes

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG

https://dl.k8s.io/v1.21.1/kubernetes-server-linux-amd64.tar.gz

本机下载后上传到 7-100,然后远程拷贝到计算节点

计算节点1(10.4.7.22)安装k8s
[root@master-22 opt]# scp node7-100.host.com:/opt/src/kubernetes-server-linux-amd64.tar.gz .
[root@master-22 opt]# tar xfv kubernetes-server-linux-amd64.tar.gz -C /opt

[root@master-22 opt]# mv kubernetes kubernetes-v1.21.1

[root@master-22 opt]# ln -s kubernetes-v1.21.1/ kubernetes

[root@master-22 opt]# cd kubernetes

[root@master-22 kubernetes]# rm -rf kubernetes-src.tar.gz

[root@master-22 kubernetes]# cd server/
[root@master-22 server]# cd bin/
[root@master-22 bin]# rm -f *.tar
[root@master-22 bin]# rm -f *_tag
[root@master-22 bin]# ll
总用量 680144
-rwxr-xr-x 1 root root 50577408 5月 12 22:30 apiextensions-apiserver
-rwxr-xr-x 1 root root 46501888 5月 12 22:30 kubeadm
-rwxr-xr-x 1 root root 48521216 5月 12 22:30 kube-aggregator
-rwxr-xr-x 1 root root 122085376 5月 12 22:30 kube-apiserver
-rwxr-xr-x 1 root root 116297728 5月 12 22:30 kube-controller-manager
-rwxr-xr-x 1 root root 47583232 5月 12 22:30 kubectl
-rwxr-xr-x 1 root root 54980712 5月 12 22:30 kubectl-convert
-rwxr-xr-x 1 root root 118083408 5月 12 22:30 kubelet
-rwxr-xr-x 1 root root 43130880 5月 12 22:30 kube-proxy
-rwxr-xr-x 1 root root 47108096 5月 12 22:30 kube-scheduler
-rwxr-xr-x 1 root root 1593344 5月 12 22:30 mounter

终于启动了etcd集群

[root@node-33 etcd]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
计算节点2(10.4.7.33)安装k8s
[root@node-33 opt]# scp node7-100.host.com:/opt/src/kubernetes-server-linux-amd64.tar.gz .

其余操作同上。

签发k8s证书

在管理运维机器节点(10.4.7.100)上

client证书配置

[root@node-100 certs]# vim client-csr.json
{
"CN": "k8s-node",
"hosts": [

],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"c": "CN",
"ST": "beijing",
"L": "beijing",
"O": "da",
"OU": "rifo"
}
]
}

签发 Client 证书

[root@node-100 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json | cfssljson -bare client

2021/05/19 10:57:50 [INFO] generate received request
2021/05/19 10:57:50 [INFO] received CSR
2021/05/19 10:57:50 [INFO] generating key: rsa-2048
2021/05/19 10:57:51 [INFO] encoded CSR
2021/05/19 10:57:51 [INFO] signed certificate with serial number 350439271737060455865399561233766765490411981295
2021/05/19 10:57:51 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

api-server证书配置

[root@node-100 certs]# vim apiserver-csr.json
{
"CN": "k8s-apiserver",
"hosts": [
"127.0.0.1",
"192.168.0.1",
"10.0.0.1",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local",
"10.4.7.110",
"10.4.7.200",
"10.4.7.22",
"10.4.7.33"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "da",
"OU": "rifo"
}
]
}

api-server证书生成

[root@node-100 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json | cfssljson -bare apiserver

2021/05/19 13:01:12 [INFO] generate received request
2021/05/19 13:01:12 [INFO] received CSR
2021/05/19 13:01:12 [INFO] generating key: rsa-2048
2021/05/19 13:01:12 [INFO] encoded CSR
2021/05/19 13:01:12 [INFO] signed certificate with serial number 282787114856344111889075867610902440531740473558
2021/05/19 13:01:12 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
将证书拷贝到各个计算节点

[root@node-22 bin]# mkdir cert

[root@node-22 cert]# scp node7-100.host.com:/opt/certs/ca.pem .
[root@node-22 cert]# scp node7-100.host.com:/opt/certs/ca-key.pem .
[root@node-22 cert]# scp node7-100.host.com:/opt/certs/client.pem .
[root@node-22 cert]# scp node7-100.host.com:/opt/certs/client-key.pem .
[root@node-22 cert]# scp node7-100.host.com:/opt/certs/apiserver.pem .
[root@node-22 cert]# scp node7-100.host.com:/opt/certs/apiserver-key.pem .

[root@node-22 cert]# ll
总用量 24
-rw------- 1 root root 1675 5月 19 13:16 apiserver-key.pem
-rw-r--r-- 1 root root 1598 5月 19 13:15 apiserver.pem
-rw------- 1 root root 1675 5月 19 13:15 ca-key.pem
-rw-r--r-- 1 root root 1338 5月 19 13:15 ca.pem
-rw------- 1 root root 1675 5月 19 13:15 client-key.pem
-rw-r--r-- 1 root root 1363 5月 19 13:15 client.pem
创建 apiserver启动配置文件
[root@node-22 bin]# mkdir conf

/opt/kubernetes/server/bin/conf

[root@node-22 conf]# vim audit.yaml

audit.yaml

apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"

副本

apiVersion: audit.k8s.io/v1beta1
kind: Policy
omitStages:
  - "RequestReceived"
rules:
  - level: RequestResponse
    resources:
    - group: ""
      resources: ["pods"]
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: ""
      resources: ["endpoints", "services"]

  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*"
    - "/version"

  - level: Request
    resources:
    - group: ""
      resources: ["configmaps"]
    namespaces: ["kube-system"]

  - level: Metadata
    resources:
    - group: ""
      resources: ["secrets", "configmaps"]

  - level: Request
    resources:
    - group: ""
    - group: "extensions"

  - level: Metadata
    omitStages:
      - "RequestReceived"

kube-apiserver 启动脚本

/opt/kubernetes/server/bin

[root@node-22 bin]# vim kube-apiserver.sh

#!/bin/bash
./kube-apiserver \
--insecure-port 8080 \
--insecure-bind-address 0.0.0.0 \
--apiserver-count 2 \
--audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
--audit-policy-file ./conf/audit.yaml \
--authorization-mode Node,RBAC \
--client-ca-file ./cert/ca.pem \
--requestheader-client-ca-file ./cert/ca.pem \
--enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
--etcd-cafile ./cert/ca.pem \
--etcd-certfile ./cert/client.pem \
--etcd-keyfile ./cert/client-key.pem \
--etcd-servers https://10.4.7.200:2379,https://10.4.7.22:2379,https://10.4.7.33:2379 \
--service-account-signing-key-file ./cert/ca-key.pem \
--service-account-issuer api \
--service-account-key-file ./cert/ca-key.pem \
--service-node-port-range 3000-29999 \
--target-ram-mb=1024 \
--kubelet-client-certificate ./cert/client.pem \
--kubelet-client-key ./cert/client-key.pem \
--log-dir /data/logs/kubernetes/kube-apiserver \
--tls-cert-file ./cert/apiserver.pem \
--tls-private-key-file ./cert/apiserver-key.pem


[root@node-22 bin]# chmod +x kube-apiserver.sh

配置supervisor守护

[root@node-22 bin]# vim /etc/supervisord.d/kube-apiserver.ini

kube-apiserver.ini

[program:kube-apiserver-7-22]
command=/opt/kubernetes/server/bin/kube-apiserver.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
killasgroup=true
stopasgroup=true

创建日志目录

[root@node-22 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver
[root@node-22 bin]# supervisorctl update
kube-apiserver-7-22: added process group

[root@node-22 bin]# supervisorctl status
etcd-server-7-22 RUNNING pid 6448, uptime 4:05:18
kube-apiserver-7-22 RUNNING pid 6671, uptime 0:00:32

注:新版本的 kube-apiserver 已经移除了 HTTP 8080 非安全端口,只提供基于 TLS 的 6443 端口服务;我把版本降级重新安装了 v1.19.11,就可以继续配置 --insecure-port 8080 了。
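
即使没有 8080 非安全端口,也可以在计算节点上用 client 证书直接访问 6443 端口,验证 kube-apiserver 是否工作正常。下面是一个参考示例(在 /opt/kubernetes/server/bin 目录下执行,证书路径沿用本文的 ./cert):

# 正常情况下返回 ok;即使被 RBAC 拒绝,也至少说明 TLS 和证书认证链路没有问题
curl --cacert ./cert/ca.pem \
--cert ./cert/client.pem \
--key ./cert/client-key.pem \
https://127.0.0.1:6443/healthz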

安装部署主控节点L4反代

在 10.4.7.200(master)和 10.4.7.11(dns)上安装 nginx(详见上文:安装 Nginx)

安装配置nginx
[root@master-200 yum.repos.d]# vim /etc/nginx/nginx.conf

[root@node-11 yum.repos.d]# vim /etc/nginx/nginx.conf

统一配置

stream {
upstream kube-apiserver {
server 10.4.7.22:6443 max_fails=3 fail_timeout=30s;
server 10.4.7.33:6443 max_fails=3 fail_timeout=30s;
}
server {
listen 7443;
proxy_connect_timeout 2s;
proxy_timeout 900s;
proxy_pass kube-apiserver;
}
}
[root@node-11 yum.repos.d]# systemctl start nginx
[root@node-11 yum.repos.d]# systemctl enable nginx

Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.

[root@master-200 yum.repos.d]# systemctl start nginx
[root@master-200 yum.repos.d]# systemctl enable nginx

Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
安装配置 keepalived
[root@node-11 yum.repos.d]# yum install keepalived -y
[root@node-11 yum.repos.d]# vim /etc/keepalived/check_port.sh

[root@node-11 yum.repos.d]# chmod +x /etc/keepalived/check_port.sh

#!/bin/bash
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
    # 统计本机监听该端口的条目数
    PORT_PROCESS=$(ss -lnt|grep $CHK_PORT|wc -l)
    if [ $PORT_PROCESS -eq 0 ];then
        echo "Port $CHK_PORT Is Not Used,End."
        exit 1
    fi
else
    echo "Check Port Cant Be Empty!"
fi

[root@node-11 keepalived]# vim  /etc/keepalived/keepalived.conf

10.4.7.11上的keepalived配置

global_defs {
    router_id 10.4.7.11
}

vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.4.7.11
    nopreempt

    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.4.7.110
    }
}

10.4.7.200上的keepalived配置

global_defs {
    router_id 10.4.7.200
}

vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 251
    priority 90
    advert_int 1
    mcast_src_ip 10.4.7.200

    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.4.7.110
    }
}
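
两台机器的 keepalived 配置完成后启动服务,并在 MASTER(10.4.7.11)上确认 VIP 10.4.7.110 已经绑定到 ens33 上,示例如下:

systemctl start keepalived
systemctl enable keepalived
# 在 10.4.7.11 上查看,预期能看到 10.4.7.110
ip addr show ens33 | grep 10.4.7.110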

安装 kube-controller-manager和kube-scheduler

controller启动脚本配置

vim kube-controller-manager.sh

#!/bin/bash
./kube-controller-manager \
--cluster-cidr 172.7.0.0/16 \
--log-dir /data/logs/kubernetes/kube-controller-manager \
--master http://127.0.0.1:8080 \
--service-account-private-key-file ./cert/ca-key.pem \
--service-cluster-ip-range 192.168.0.0/16 \
--root-ca-file ./cert/ca.pem \
--v 2

mkdir -p /data/logs/kubernetes/kube-controller-manager

vim /etc/supervisord.d/kube-controller-manager.ini

[program:kube-controller-manager-7-22]
command=/opt/kubernetes/server/bin/kube-controller-manager.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)



chmod +x /opt/kubernetes/server/bin/kube-controller-manager.sh

scheduler启动脚本

vim /opt/kubernetes/server/bin/kube-scheduler.sh



#!/bin/sh
./kube-scheduler \
--leader-elect \
--log-dir /data/logs/kubernetes/kube-scheduler \
--master http://127.0.0.1:8080 \
--v 2



chmod +x /opt/kubernetes/server/bin/kube-scheduler.sh

mkdir -p /data/logs/kubernetes/kube-scheduler
vim /etc/supervisord.d/kube-scheduler.ini

[program:kube-scheduler-7-22]
command=/opt/kubernetes/server/bin/kube-scheduler.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)

supervisor更新启动

supervisorctl update
supervisorctl reload
supervisorctl status

tail -fn 200 /data/logs/etcd-server/etcd.stdout.log

建立快速命令链接

ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl

查看集群健康状态

kubectl get cs
计算节点安装kubelet
证书签发
[root@node-100 certs]# vim  kubelet-csr.json

{
    "CN": "k8s-kubelet",
    "hosts": [
        "127.0.0.1",
        "10.4.7.100",
        "10.4.7.110",
        "10.4.7.200",
        "10.4.7.22",
        "10.4.7.33",
        "10.4.7.44",
        "10.4.7.55"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "chengdu",
            "L": "chengdu",
            "O": "Da",
            "OU": "rifo"
        }
    ]
}

[root@node-100 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssljson -bare kubelet
拷贝证书到计算节点
cd /opt/kubernetes/server/bin/cert

scp node7-100.host.com:/opt/certs/kubelet.pem .
scp node7-100.host.com:/opt/certs/kubelet-key.pem .
分发配置证书
cd /opt/kubernetes/server/bin/conf/
kubectl config set-cluster darifo-k8s \
--certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
--embed-certs=true \
--server=https://10.4.7.110:7443 \
--kubeconfig=kubelet.kubeconfig

kubectl config set-credentials k8s-node \
--client-certificate=/opt/kubernetes/server/bin/cert/client.pem \
--client-key=/opt/kubernetes/server/bin/cert/client-key.pem \
--embed-certs=true \
--kubeconfig=kubelet.kubeconfig

kubectl config set-context darifo-k8s-context \
--cluster=darifo-k8s \
--user=k8s-node \
--kubeconfig=kubelet.kubeconfig

kubectl config use-context darifo-k8s-context --kubeconfig=kubelet.kubeconfig
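
生成之后可以检查 kubelet.kubeconfig 的内容是否符合预期(证书以 base64 内嵌,server 指向 VIP 的 7443 端口),示例:

# 查看生成的 kubeconfig(内嵌的证书内容会被省略显示)
kubectl config view --kubeconfig=kubelet.kubeconfig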
集群角色绑定和权限授予

计算节点 10.4.7.22 和 10.4.7.33 上:

k8s-node.yaml

/opt/kubernetes/server/bin/conf

vim k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node

只需要在一个计算节点执行

kubectl create -f k8s-node.yaml

kubectl get clusterrolebinding k8s-node -o yaml
准备pause基础镜像

在管理运维节点 10.4.7.100上


docker pull kubernetes/pause

docker login harbor.zrf.com

docker tag f9d5de079539 harbor.zrf.com/base-repo/pause:latest

docker push harbor.zrf.com/base-repo/pause:latest
kubelet启动脚本

在计算节点上: /opt/kubernetes/server/bin

[root@node-22 bin]# vim kubelet.sh

#!/bin/sh
./kubelet \
--anonymous-auth=false \
--cgroup-driver systemd \
--cluster-dns 192.168.0.2 \
--cluster-domain cluster.local \
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice \
--fail-swap-on="false" \
--client-ca-file ./cert/ca.pem \
--tls-cert-file ./cert/kubelet.pem \
--tls-private-key-file ./cert/kubelet-key.pem \
--hostname-override node7-22.host.com \
--kubeconfig ./conf/kubelet.kubeconfig \
--log-dir /data/logs/kubernetes/kube-kubelet \
--pod-infra-container-image harbor.zrf.com/base-repo/pause:latest \
--root-dir /data/kubelet

[root@node-33 bin]# vim kubelet.sh

#!/bin/sh
./kubelet \
--anonymous-auth=false \
--cgroup-driver systemd \
--cluster-dns 192.168.0.2 \
--cluster-domain cluster.local \
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice \
--fail-swap-on="false" \
--client-ca-file ./cert/ca.pem \
--tls-cert-file ./cert/kubelet.pem \
--tls-private-key-file ./cert/kubelet-key.pem \
--hostname-override node7-33.host.com \
--kubeconfig ./conf/kubelet.kubeconfig \
--log-dir /data/logs/kubernetes/kube-kubelet \
--pod-infra-container-image harbor.zrf.com/base-repo/pause:latest \
--root-dir /data/kubelet
mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet
chmod +x /opt/kubernetes/server/bin/kubelet.sh
[root@node-22 bin]# vim /etc/supervisord.d/kube-kubelet.ini

[program:kube-kubelet-7-22]
command=/opt/kubernetes/server/bin/kubelet.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
[root@node-33 bin]# vim /etc/supervisord.d/kube-kubelet.ini

[program:kube-kubelet-7-33]
command=/opt/kubernetes/server/bin/kubelet.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; restart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
supervisorctl update
supervisorctl status

# 备注:当启动失败时,查看报错日志
tail -fn 200 /data/logs/kubernetes/kube-kubelet/kubelet.stdout.log
[root@node-22 bin]# supervisorctl status
etcd-server-7-22 RUNNING pid 2255, uptime 4:06:42
kube-apiserver-7-22 RUNNING pid 2269, uptime 4:05:34
kube-controller-manager-7-22 RUNNING pid 2417, uptime 2:29:12
kube-kubelet-7-22 RUNNING pid 2585, uptime 0:00:30
kube-scheduler-7-22 RUNNING pid 2494, uptime 1:38:56
[root@node-22 bin]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node7-22.host.com Ready <none> 71s v1.19.11
node7-33.host.com Ready <none> 74s v1.19.11


[root@node-33 bin]# supervisorctl status
etcd-server-7-33 RUNNING pid 2136, uptime 4:06:05
kube-apiserver-7-33 RUNNING pid 2149, uptime 4:05:02
kube-controller-manager-7-33 RUNNING pid 2290, uptime 1:38:52
kube-kubelet-7-33 RUNNING pid 2393, uptime 0:00:31
kube-scheduler-7-33 RUNNING pid 2052, uptime 4:12:01
[root@node-33 bin]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node7-22.host.com Ready <none> 117s v1.19.11
node7-33.host.com Ready <none> 2m v1.19.11

节点角色添加标签

# 给节点 node7-22.host.com 添加一个master标签
kubectl label node node7-22.host.com node-role.kubernetes.io/master=

# 给节点 node7-33.host.com 添加一个 node 标签
kubectl label node node7-33.host.com node-role.kubernetes.io/node=

[root@node-22 bin]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node7-22.host.com Ready master 7m30s v1.19.11
node7-33.host.com Ready node 7m33s v1.19.11
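如果标签打错了,可以在标签键后面加 "-" 删除后重新打(kubectl 原生语法,示意):

# 删除 master 标签
kubectl label node node7-22.host.com node-role.kubernetes.io/master-
# 重新打上标签
kubectl label node node7-22.host.com node-role.kubernetes.io/master=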
计算节点安装kube-proxy
签发证书
[root@node-100 certs]# vim kube-proxy-csr.json

{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "Da",
            "OU": "rifo"
        }
    ]
}
[root@node-100 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json |cfssljson -bare kube-proxy-client
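签发完成后可以核对一下证书的 CN 与有效期,确认 CN 为 system:kube-proxy(示意):

openssl x509 -in kube-proxy-client.pem -noout -subject -dates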
分发证书

在计算节点: /opt/kubernetes/server/bin/cert

scp node7-100.host.com:/opt/certs/kube-proxy-client.pem .
scp node7-100.host.com:/opt/certs/kube-proxy-client-key.pem .
创建配置

在节点 22 上:

/opt/kubernetes/server/bin/conf 目录内执行

kubectl config set-cluster darifo-k8s \
--certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
--embed-certs=true \
--server=https://10.4.7.110:7443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
--client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context darifo-k8s-context \
--cluster=darifo-k8s \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context darifo-k8s-context --kubeconfig=kube-proxy.kubeconfig

拷贝到 33 节点

在 /opt/kubernetes/server/bin/conf 目录下执行:

[root@node-33 conf]# scp node7-22.host.com:/opt/kubernetes/server/bin/conf/kube-proxy.kubeconfig .
安装配置IPVS
vi /root/ipvs.sh

#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir | grep -o "^[^.]*")
do
  /sbin/modinfo -F filename $i &>/dev/null
  if [ $? -eq 0 ]; then
    /sbin/modprobe $i
  fi
done

chmod u+x /root/ipvs.sh

cd ~

./ipvs.sh
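ipvs.sh 只对当前这次开机生效;如果希望重启后内核模块也能自动加载,可以借助 systemd-modules-load 写一份模块清单(示意配置,模块列表按实际需要增减):

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
ip_vs_nq
nf_conntrack
EOF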

查看ipvs模块

[root@node-22 ~]# lsmod | grep ip_vs

ip_vs_wrr 12697 0
ip_vs_wlc 12519 0
ip_vs_sh 12688 0
ip_vs_sed 12519 0
ip_vs_rr 12600 0
ip_vs_pe_sip 12740 0
nf_conntrack_sip 33780 1 ip_vs_pe_sip
ip_vs_nq 12516 0
ip_vs_lc 12516 0
ip_vs_lblcr 12922 0
ip_vs_lblc 12819 0
ip_vs_ftp 13079 0
ip_vs_dh 12688 0
ip_vs 145458 24 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_pe_sip,ip_vs_lblcr,ip_vs_lblc
nf_nat 26583 3 ip_vs_ftp,nf_nat_ipv4,nf_nat_masquerade_ipv4
nf_conntrack 139264 8 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_sip,nf_conntrack_ipv4
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
创建kube-proxy启动脚本

计算节点都需要操作(注意:33 节点上 kube-proxy.sh 中的 --hostname-override 需改为 node7-33.host.com,supervisor 配置里的程序名相应改为 kube-proxy-7-33)

kube-proxy.sh

#!/bin/sh
./kube-proxy \
--cluster-cidr 172.7.0.0/16 \
--hostname-override node7-22.host.com \
--proxy-mode=ipvs \
--ipvs-scheduler=nq \
--kubeconfig ./conf/kube-proxy.kubeconfig

chmod +x kube-proxy.sh

mkdir -p /data/logs/kubernetes/kube-proxy
vim /etc/supervisord.d/kube-proxy.ini

[program:kube-proxy-7-22]
command=/opt/kubernetes/server/bin/kube-proxy.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; restart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false
supervisorctl update
supervisorctl status
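kube-proxy 正常运行后,可以用 ipvsadm 查看 ipvs 转发规则是否已经生成(示意,假设还未安装 ipvsadm,先通过 yum 安装):

yum install -y ipvsadm
ipvsadm -Ln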

创建 nginx pod 验证集群

[root@node-22 ~]# vim nginx-test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-darifo
  labels:
    app: nginx
spec:
  replicas: 3   # Pod的副本数量,多节点实现负载均衡与高可用
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: harbor.zrf.com/base-repo/nginx:v1.20
        ports:
        - containerPort: 80
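yaml 编写完成后需要创建 Deployment 并观察 Pod 调度情况(原文未列出创建命令,此处补充示意):

kubectl apply -f nginx-test.yaml
kubectl get pods -o wide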

验证查看

[root@node-22 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
[root@node-22 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-darifo-686ccbfff7-9q2c8 1/1 Running 0 10m
nginx-darifo-686ccbfff7-md9fs 1/1 Running 0 10m
nginx-darifo-686ccbfff7-xq9cg 1/1 Running 0 10m
[root@node-22 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node7-22.host.com Ready master 132m v1.19.11
node7-33.host.com Ready node 132m v1.19.11
通过Service暴露服务

暴露服务的三种方式
NodePort
将服务的类型设置成 NodePort:每个集群节点都会在节点本身上打开一个端口(因此得名 NodePort),并把该端口上收到的流量重定向到后端服务。服务除了可以通过内部集群 IP 和端口访问之外,也可以通过所有节点上的这个专用端口访问。
LoadBalancer
将服务的类型设置成 LoadBalancer,它是 NodePort 类型的一种扩展,使服务可以通过一个专用的负载均衡器来访问,该负载均衡器由 Kubernetes 所运行的云基础设施提供。负载均衡器把流量重定向到所有节点的节点端口,客户端通过负载均衡器的 IP 连接到服务。
Ingress
创建一个 Ingress 资源,这是一种完全不同的机制:通过一个 IP 地址公开多个服务,相当于集群的网关入口,和 Spring Cloud 的网关 Zuul、Gateway 类似。

编写服务yaml文件

apiVersion: v1
kind: Service
metadata:
  name: nginx-darifo-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 18080
  selector:
    app: nginx   # 需与上面 Deployment 中 Pod 模板的 app 标签保持一致,否则 Service 找不到后端 Pod
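同样需要创建这个 Service 并确认端口映射(示意,假设文件保存为 nginx-svc.yaml):

kubectl apply -f nginx-svc.yaml
kubectl get svc nginx-darifo-nodeport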

外部访问

http://node7-33.host.com:18080/
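也可以直接在任意节点上用 curl 验证。注意 NodePort 默认范围是 30000-32767,要使用 18080 需要在 kube-apiserver 的启动参数中把 --service-node-port-range 扩展到包含该端口(示意):

curl -I http://10.4.7.33:18080/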
基础组件安装总结

DNS节点:安装了bind9自建的DNS服务、Nginx服务及反代

运维管理节点:进行证书CA签发、Harbor镜像仓库私服

主节点:Etcd服务、Nginx服务、keepalived

计算节点:etcd服务、supervisor、kube-apiserver组件、kube-controller-manager组件、kubelet组件、kube-scheduler组件、kube-proxy组件、IPVS

路漫漫其修远兮,吾将上下而求索!