Preface
After getting a taste of the benefits of k8s, we wanted to roll it out to production to speed up business iteration. Production, however, comes with one requirement: the k8s cluster must have no single point of failure... which is just another way of saying the cluster has to be highly available. So below I introduce two currently popular ways of making the k8s master highly available.
Overview
The first kind of k8s HA cluster introduced here should, in my opinion, really be called an active-standby k8s cluster. It consists of three masters; keepalived runs on all three and exposes a single VIP as the apiserver entry point. keepalived is configured with priorities so that the VIP lands on the master with the highest priority; nodes reach that master through the VIP, while the other two masters stay in sync with it through the etcd cluster.
Drawback: high availability here relies entirely on keepalived. Until the highest-priority node fails, all traffic directed by keepalived goes through the primary master; only when the primary fails or goes down can the VIP move to one of the two standby masters. The primary therefore carries all the load, while the other two masters may never be used at all, which wastes resources.
Still, it does eliminate the single point of failure.
Below is the ideal high-availability architecture:
[Figure: ideal k8s HA architecture]
The architecture deployed in this article:
[Figure: HA architecture used in this article]
The diagrams above are taken from https://www.kubernetes.org.cn/3536.html
OK, let's summarize the technology stack used in this article:
keepalived+etcd+k8s master
keepalived provides the VIP that nodes use as the apiserver entry point; etcd must run as a highly available cluster to keep data in sync; and each node gets a basic k8s master deployment.
Installation preparation
Node layout: master1 192.168.100.1, master2 192.168.100.2, master3 192.168.100.3, with VIP 192.168.100.4.
Software versions:
docker17.03.2-ce
socat-1.7.3.2-2.el7.x86_64
kubelet-1.10.0-0.x86_64
kubernetes-cni-0.6.0-0.x86_64
kubectl-1.10.0-0.x86_64
kubeadm-1.10.0-0.x86_64
All of the software above was introduced, with download links, in my previous post on building a basic k8s cluster.
Environment configuration
systemctl stop firewalld
systemctl disable firewalld
Set the hostname on each node and write /etc/hosts:
cat <<EOF > /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.1 master1
192.168.100.2 master2
192.168.100.3 master3
EOF
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
setenforce 0
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 > /proc/sys/net/ipv4/ip_forward
sysctl -w net.bridge.bridge-nf-call-iptables=1
vim /etc/sysctl.conf   # add the following lines
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
sysctl -p
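One aside: setenforce 0 only disables SELinux until the next reboot. If you also want the change to persist, something like the following works on a stock CentOS 7 layout (a sketch, adjust to taste):
# keep SELinux permissive across reboots (assumes the default /etc/selinux/config)
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
getenforce                              # current mode
grep '^SELINUX=' /etc/selinux/config    # configured mode after reboot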
Installing keepalived
Download libnfnetlink-devel-1.0.1-4.el7.x86_64.rpm in advance, then:
wget http://www.keepalived.org/software/keepalived-1.4.3.tar.gz
yum install -y libnfnetlink-devel-1.0.1-4.el7.x86_64.rpm
yum -y install libnl libnl-devel
tar -xzvf keepalived-1.4.3.tar.gz
cd keepalived-1.4.3
./configure --prefix=/usr/local/keepalived   # check the build environment
If configure completes without errors, the environment is ready. If instead you see:
checking openssl/ssl.h usability... no
checking openssl/ssl.h presence... no
checking for openssl/ssl.h... no
configure: error:
!!! OpenSSL is not properly installed on your system. !!!
!!! Can not include OpenSSL headers files. !!!
then install the openssl and openssl-devel packages and re-run the configure step:
yum install openssl openssl-devel
./configure --prefix=/usr/local/keepalived
make && make install
cp keepalived/etc/init.d/keepalived /etc/init.d/
mkdir /etc/keepalived
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp keepalived/etc/sysconfig/keepalived /etc/sysconfig/
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
ps -aux |grep keepalived
chkconfig keepalived on
Check keepalived's status with systemctl status keepalived
Repeat the steps above on all three masters until keepalived is installed on each.
After installation, write the configuration files:
keepalived.conf for master1
cat >/etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id LVS_k8s
}
vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.100.4:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33             # local physical NIC name, check with ip a
    virtual_router_id 61
    priority 120                # the primary node gets the highest priority, decreasing on the others
    advert_int 1
    mcast_src_ip 192.168.100.1  # change to the local IP
    nopreempt
    authentication {
        auth_type PASS
        auth_pass awzhXylxy.T
    }
    unicast_peer {
        # comment out the local IP
        #192.168.100.1
        192.168.100.2
        192.168.100.3
    }
    virtual_ipaddress {
        192.168.100.4/22        # VIP
    }
    track_script {
        #CheckK8sMaster         # best left commented out until k8s is deployed, otherwise it is likely to cause errors
    }
}
EOF
keepalived.conf for master2
cat >/etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id LVS_k8s
}
vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.100.4:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33             # local physical NIC name, check with ip a
    virtual_router_id 61
    priority 110                # the primary node gets the highest priority, decreasing on the others
    advert_int 1
    mcast_src_ip 192.168.100.2  # change to the local IP
    nopreempt
    authentication {
        auth_type PASS
        auth_pass awzhXylxy.T
    }
    unicast_peer {
        # comment out the local IP
        192.168.100.1
        #192.168.100.2
        192.168.100.3
    }
    virtual_ipaddress {
        192.168.100.4/22        # VIP
    }
    track_script {
        #CheckK8sMaster         # best left commented out until k8s is deployed, otherwise it is likely to cause errors
    }
}
EOF
keepalived.conf for master3
cat >/etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id LVS_k8s
}
vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.100.4:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33             # local physical NIC name, check with ip a
    virtual_router_id 61
    priority 100                # the primary node gets the highest priority, decreasing on the others
    advert_int 1
    mcast_src_ip 192.168.100.3  # change to the local IP
    nopreempt
    authentication {
        auth_type PASS
        auth_pass awzhXylxy.T
    }
    unicast_peer {
        # comment out the local IP
        192.168.100.1
        192.168.100.2
        #192.168.100.3
    }
    virtual_ipaddress {
        192.168.100.4/22        # VIP
    }
    track_script {
        #CheckK8sMaster         # best left commented out until k8s is deployed, otherwise it is likely to cause errors
    }
}
EOF
Start keepalived
systemctl restart keepalived
Check with ip a:
besides the local IP, an extra virtual IP should now be present on the MASTER node.
You can also ping the VIP to verify that it is up.
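It is also worth a quick failover test at this point (my own sketch, not part of the original steps):
# on the node that currently holds the VIP (master1):
systemctl stop keepalived
# on master2, the VIP should appear within a few seconds:
ip a | grep 192.168.100.4
# bring master1 back; depending on nopreempt, the VIP may stay on master2 rather than failing back:
systemctl start keepalived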
Installing etcd
1: set up the cfssl environment
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH
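A quick sanity check that the cfssl binaries are on the PATH and executable (my addition, not in the original):
cfssl version                     # prints the cfssl version and runtime
which cfssljson cfssl-certinfo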
2: create the CA configuration files (the IPs below are the etcd node IPs)
mkdir /root/ssl
cd /root/ssl
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes-Soulmate": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes-Soulmate",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.100.1",
    "192.168.100.2",
    "192.168.100.3"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes-Soulmate etcd-csr.json | cfssljson -bare etcd
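Before distributing the certificates, you can optionally confirm that the etcd server certificate carries the expected SANs (a quick check with openssl, assuming it is installed):
openssl x509 -in etcd.pem -noout -text | grep -A1 'Subject Alternative Name'
# expect IP Address:127.0.0.1, 192.168.100.1, 192.168.100.2, 192.168.100.3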
3: distribute the etcd certificates from master1 to master2 and master3
mkdir -p /etc/etcd/ssl
cp etcd.pem etcd-key.pem ca.pem /etc/etcd/ssl/
ssh -n master2 "mkdir -p /etc/etcd/ssl && exit"
ssh -n master3 "mkdir -p /etc/etcd/ssl && exit"
scp -r /etc/etcd/ssl/*.pem master2:/etc/etcd/ssl/
scp -r /etc/etcd/ssl/*.pem master3:/etc/etcd/ssl/
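The ssh/scp commands above assume master1 can log in to master2 and master3 without a password prompt; if key-based login is not set up yet, something along these lines does it (a sketch):
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa   # key pair without a passphrase
ssh-copy-id root@master2
ssh-copy-id root@master3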
Download etcd-v3.3.2-linux-amd64.tar.gz, extract it and install:
wget https://github.com/coreos/etcd/releases/download/v3.3.2/etcd-v3.3.2-linux-amd64.tar.gz
tar -xzvf etcd-v3.3.2-linux-amd64.tar.gz
cd etcd-v3.3.2-linux-amd64
cp etcd* /bin/
# check that etcd installed correctly
etcd -version
etcd Version: 3.3.2
Git SHA: c9d46ab37
Go Version: go1.9.4
Go OS/Arch: linux/amd64
etcdctl -version
etcdctl version: 3.3.2
API version: 2
Create an etcd data directory on every master:
mkdir -p /u03/etcd/
You can store the data somewhere else, but remember to change the path in the unit files below accordingly.
master1
cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/u03/etcd/
ExecStart=/usr/bin/etcd \
--name master1 \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
--initial-advertise-peer-urls https://192.168.100.1:2380 \
--listen-peer-urls https://192.168.100.1:2380 \
--listen-client-urls https://192.168.100.1:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://192.168.100.1:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster master1=https://192.168.100.1:2380,master2=https://192.168.100.2:2380,master3=https://192.168.100.3:2380 \
--initial-cluster-state new \
--data-dir=/u03/etcd/
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
master2
cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/u03/etcd/
ExecStart=/usr/bin/etcd \
--name master2 \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
--initial-advertise-peer-urls https://192.168.100.2:2380 \
--listen-peer-urls https://192.168.100.2:2380 \
--listen-client-urls https://192.168.100.2:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://192.168.100.2:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster master1=https://192.168.100.1:2380,master2=https://192.168.100.2:2380,master3=https://192.168.100.3:2380 \
--initial-cluster-state new \
--data-dir=/u03/etcd/
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
master3
cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/u03/etcd/
ExecStart=/usr/bin/etcd \
--name master3 \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
--initial-advertise-peer-urls https://192.168.100.3:2380 \
--listen-peer-urls https://192.168.100.3:2380 \
--listen-client-urls https://192.168.100.3:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://192.168.100.3:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster master1=https://192.168.100.1:2380,master2=https://192.168.100.2:2380,master3=https://192.168.100.3:2380 \
--initial-cluster-state new \
--data-dir=/u03/etcd/
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Run the following commands on every master to start the etcd cluster
cd /etc/systemd/system/
mv etcd.service /usr/lib/systemd/system/
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
systemctl status etcd
Check that the cluster is healthy with:
etcdctl --endpoints=https://192.168.100.1:2379,https://192.168.100.2:2379,https://192.168.100.3:2379 \
--ca-file=/etc/etcd/ssl/ca.pem \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem cluster-health
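The command above talks to the etcdctl v2 API (the default for this build). If you prefer the v3 API, a roughly equivalent health check looks like this (note the different flag names):
ETCDCTL_API=3 etcdctl --endpoints=https://192.168.100.1:2379,https://192.168.100.2:2379,https://192.168.100.3:2379 \
--cacert=/etc/etcd/ssl/ca.pem \
--cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem endpoint health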
With keepalived and etcd installed, we can start deploying k8s.
Install docker and the k8s rpm packages, and load the required k8s images; see my previous post on building a basic k8s cluster.
Modify the kubelet configuration on all nodes
sed -i -e 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Restart kubelet
systemctl daemon-reload && systemctl restart kubelet
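The sed above switches kubelet to the cgroupfs cgroup driver; kubelet and docker have to agree on this, so it is worth checking what docker reports (a quick check, not in the original):
docker info 2>/dev/null | grep -i 'cgroup driver'
# expect "Cgroup Driver: cgroupfs"; if it says systemd, keep kubelet on systemd instead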
Initialize the cluster: create the cluster configuration file
We use CoreDNS for in-cluster DNS resolution and canal as the network plugin.
# generate a token
# keep the token, it will be needed later
kubeadm token generate
cat <<EOF > config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - https://192.168.100.1:2379
  - https://192.168.100.2:2379
  - https://192.168.100.3:2379
  caFile: /etc/etcd/ssl/ca.pem
  certFile: /etc/etcd/ssl/etcd.pem
  keyFile: /etc/etcd/ssl/etcd-key.pem
  dataDir: /var/lib/etcd
networking:
  podSubnet: 10.244.0.0/16
kubernetesVersion: 1.10.0
api:
  advertiseAddress: "192.168.100.1"   # apiserver advertise address (master1)
token: "hpobow.vw1g1ya5dre7sq06"      # the token saved earlier
tokenTTL: "0s"                        # never expires
apiServerCertSANs:
- master1
- master2
- master3
- 192.168.100.1
- 192.168.100.2
- 192.168.100.3
- 192.168.100.4
featureGates:
  CoreDNS: true
EOF
When the file is ready, run kubeadm init --config config.yaml
If it fails, check journalctl -xeu kubelet for the kubelet service logs, or dig into whatever errors are reported.
Reset with kubeadm reset.
Note: if etcd has already had data written into it, clear the etcd data directory first.
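A clean re-initialization after kubeadm reset would look roughly like this on every master (a sketch, using the /u03/etcd data directory configured earlier; adjust if yours differs):
systemctl stop etcd
rm -rf /u03/etcd/*
systemctl start etcd
systemctl status etcd   # make sure the etcd cluster is healthy again before re-running kubeadm init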
If it succeeds, you will see output like the following:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.100.1:6443 --token hpobow.vw1g1ya5dre7sq06 --discovery-token-ca-cert-hash sha256:f79b68fb698c92b9336474eb3bf184e847fgerbc58a6296911892662b98b1315
As the prompt says, even root cannot control the cluster with kubectl yet; the environment has to be configured first.
For non-root users
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
For root
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
Source the environment:
source ~/.bash_profile
Distribute the certificates and keys generated by kubeadm to master2 and master3
scp -r /etc/kubernetes/pki master2:/etc/kubernetes/
scp -r /etc/kubernetes/pki master3:/etc/kubernetes/
Deploy the canal network; run on master1:
kubectl apply -f \
https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
kubectl apply -f \
https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml
Pulling the images may take a while; you can also download the yaml files first and edit the image paths to point at images you have already pulled:
wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml
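If you do point the manifests at a private registry, only the image: fields need to change. For example, assuming the canal manifests reference quay.io images and your registry mirrors them under 192.168.220.84/third_party (the registry used later in this article), a rough edit would be:
# illustrative only; check the actual image names in the downloaded yaml first
sed -i 's#quay.io/calico#192.168.220.84/third_party#g' canal.yaml
sed -i 's#quay.io/coreos#192.168.220.84/third_party#g' canal.yaml
grep 'image:' canal.yaml   # review the resulting image references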
Once the deployment is done, check whether the current node is Ready
[root@master1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready master 31m v1.10.0
Use kubectl get pods --all-namespaces to check whether all the containers are running; if any show Error or CrashLoopBackOff, use kubectl describe pod <pod-name> -n kube-system to see what went wrong.
Run the initialization on master2 and master3
Copy the config.yaml used on master1 to the other two nodes and run kubeadm init --config config.yaml on each; you will get the same result as on master1.
Configure the environment variables in the same way
For non-root users
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
For root
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
Source the environment:
source ~/.bash_profile
[root@master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready master 1h v1.10.0
master2 Ready master 1h v1.10.0
master3 Ready master 1h v1.10.0
Check the containers running across all nodes with kubectl get pods --all-namespaces -o wide
With that, a basic active-standby HA setup is complete. To deploy the dashboard, see my previous post on building a basic k8s cluster; note that if you use basic auth against the apiserver for the dashboard, that setting has to be applied on every master to keep it highly available.
Also, to use HPA on k8s 1.10 you need to add - --horizontal-pod-autoscaler-use-rest-clients=false to /etc/kubernetes/manifests/kube-controller-manager.yaml on every master, otherwise CPU usage cannot be collected for autoscaling.
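For reference, the flag goes into the command list of the kube-controller-manager static pod; a minimal sketch of the relevant part (the existing flags in your manifest stay as they are, and kubelet restarts the pod automatically when the file changes):
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
#  - command:
#    - kube-controller-manager
#    - --horizontal-pod-autoscaler-use-rest-clients=false   # add this line, keep the rest
kubectl -n kube-system get pods | grep controller-manager   # confirm the pod came back up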
Monitoring add-on: heapster
You will need heapster.yaml, influxdb.yaml and grafana.yaml.
vim heapster.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:heapster
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: 192.168.220.84/third_party/heapster-amd64:v1.3.0   # my private registry address, change to your own
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
vim influxdb.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: 192.168.220.84/third_party/heapster-influxdb-amd64:v1.1.1   # private registry address, change to your own
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
vim grafana.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: 192.168.220.84/third_party/heapster-grafana-amd64:v4.4.1   # private registry address, change to your own
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
          value: /
      volumes:
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
    nodePort: 31236
  selector:
    k8s-app: grafana
Run kubectl apply -f heapster.yaml -f influxdb.yaml -f grafana.yaml
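A quick way to confirm the monitoring stack came up (my addition):
kubectl -n kube-system get pods | grep -E 'heapster|influxdb|grafana'
kubectl -n kube-system get svc monitoring-grafana   # NodePort 31236, so grafana is reachable at http://<node-ip>:31236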
What this looks like in the dashboard:
[Figure: heapster metrics shown in the dashboard]
[Figure: grafana dashboards]
Adding worker nodes
Install the following software versions, listed at the beginning of this article:
docker17.03.2-ce
socat-1.7.3.2-2.el7.x86_64
kubelet-1.10.0-0.x86_64
kubernetes-cni-0.6.0-0.x86_64
kubectl-1.10.0-0.x86_64
kubeadm-1.10.0-0.x86_64
Environment configuration
systemctl stop firewalld
systemctl disable firewalld
Set the hostname on each node and write /etc/hosts:
cat <<EOF > /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.1 master1
192.168.100.2 master2
192.168.100.3 master3
EOF
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
setenforce 0
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 > /proc/sys/net/ipv4/ip_forward
sysctl -w net.bridge.bridge-nf-call-iptables=1
vim /etc/sysctl.conf   # add the following lines
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
sysctl -p
Then run the kubeadm join command saved from the master:
kubeadm join 192.168.100.1:6443 --token hpobow.vw1g1ya5dre7sq06 --discovery-token-ca-cert-hash sha256:f79b68fb698c92b9336474eb3bf184e847fgerbc58a6296911892662b98b1315
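Since the whole point of the VIP is to keep the apiserver entry point alive, it may be worth joining workers through the VIP (192.168.100.4:6443) instead of master1's address; either way, verify the new node from any master afterwards:
kubectl get nodes   # the new worker should appear, and turn Ready once the network pods are running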
Author: 我的橙子很甜
Link: https://www.jianshu.com/p/3caccaf8aed1