Here are the basic steps to set up a local Kubernetes cluster on Vagrant CentOS boxes.
Prerequisites:
- VirtualBox 5.0+
- Vagrant (https://www.vagrantup.com/docs/installation/)
A Windows machine was used for this setup, but any other OS works just as well.
1. Create a new directory in the workspace you want to use; I am naming it vagrant. Inside it, create a Vagrantfile. I am keeping it simple so it is easy to understand: it defines the master, the worker nodes, and a developer ("executor") machine used to compile code and build and run Docker images/containers.
Vagrant.configure("2") do |config|
  config.vm.define "executor", primary: true do |executor|
    executor.vm.box = "centos/7"
    executor.vm.hostname = "executor"
    executor.vm.network :private_network, ip: "192.168.2.103"
    executor.vm.provider :virtualbox do |v|
      v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      v.customize ["modifyvm", :id, "--memory", 512]
      v.customize ["modifyvm", :id, "--name", "executor"]
    end
  end

  config.vm.define "kube-master" do |master|
    master.vm.box = "centos/7"
    master.vm.hostname = "kube-master"
    master.vm.network :private_network, ip: "192.168.2.100"
    master.vm.provider :virtualbox do |v|
      v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      v.customize ["modifyvm", :id, "--memory", 512]
      v.customize ["modifyvm", :id, "--name", "kube-master"]
    end
  end

  config.vm.define "kube-node1" do |node001|
    node001.vm.box = "centos/7"
    node001.vm.hostname = "kube-node1"
    node001.vm.network :private_network, ip: "192.168.2.101"
    node001.vm.provider :virtualbox do |v|
      v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      v.customize ["modifyvm", :id, "--memory", 512]
      v.customize ["modifyvm", :id, "--name", "kube-node1"]
    end
  end

  config.vm.define "kube-node2" do |node002|
    node002.vm.box = "centos/7"
    node002.vm.hostname = "kube-node2"
    node002.vm.network :private_network, ip: "192.168.2.102"
    node002.vm.provider :virtualbox do |v|
      v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      v.customize ["modifyvm", :id, "--memory", 512]
      v.customize ["modifyvm", :id, "--name", "kube-node2"]
    end
  end
end
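Before booting, the Vagrantfile can be sanity-checked. A small sketch (guarded so it is a no-op where Vagrant is not on the PATH; `vagrant validate` exists in Vagrant 1.9.4 and later):

```shell
# Run from the directory containing the Vagrantfile.
if command -v vagrant >/dev/null 2>&1; then
  RESULT="$(vagrant validate 2>&1)" || RESULT="validation failed"
else
  RESULT="vagrant not on PATH; skipping validation"
fi
echo "$RESULT"
```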
The setup above uses one master node and two worker nodes (plus the executor box). The master does not run any pods or services itself; it just manages the nodes and everything deployed on them. Now boot the machines with vagrant up
2. Once the machines are up, you can check their state with vagrant status
3. The steps below are common to every machine (master and nodes). Log in to each with vagrant ssh <machine-name> and run them as root:
- yum update -y
- systemctl disable firewalld
- vi /etc/sysconfig/selinux (set SELINUX=disabled)
- yum remove chrony -y
- yum install ntp -y
- systemctl enable ntpd.service
- systemctl start ntpd.service
- vi /etc/hosts (add the master and node hostnames and IPs; "master" is kept as an alias because the config files below refer to it)
- 192.168.2.100 kube-master master
- 192.168.2.101 kube-node1
- 192.168.2.102 kube-node2
- cat /etc/hosts
- ping master
- ping kube-node1
- ping kube-node2
- vi /etc/yum.repos.d/virt7-docker-common-release.repo
- [virt7-docker-common-release]
- name=virt7-docker-common-release
- baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os
- gpgcheck=0
- yum install -y --enablerepo=virt7-docker-common-release kubernetes etcd flannel
- vi /etc/kubernetes/config
- KUBE_LOGTOSTDERR="--logtostderr=true"
- KUBE_LOG_LEVEL="--v=0"
- KUBE_ALLOW_PRIV="--allow-privileged=false"
- KUBE_MASTER="--master=http://master:8080"
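The common steps above can be collected into a single script, which is handy if you later wire it in as a Vagrant provisioner. This is only a sketch (the script name and the PROVISION guard are my own); set PROVISION=1 and run it as root inside each VM to actually apply the changes, otherwise it just prints what it would do:

```shell
#!/bin/sh
# provision-common.sh -- a sketch of the common per-node steps above.
# Hypothetical helper: set PROVISION=1 and run as root inside a VM to
# apply the changes; otherwise it performs a dry run.

hosts_entries() {
  # /etc/hosts lines shared by every machine (IPs from the Vagrantfile).
  printf '%s\n' \
    '192.168.2.100 kube-master master' \
    '192.168.2.101 kube-node1' \
    '192.168.2.102 kube-node2'
}

if [ "${PROVISION:-0}" = "1" ]; then
  yum update -y
  systemctl disable firewalld
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
  yum remove -y chrony
  yum install -y ntp
  systemctl enable ntpd.service
  systemctl start ntpd.service
  hosts_entries >> /etc/hosts
else
  echo "Dry run; would append to /etc/hosts:"
  hosts_entries
fi
```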
4. Perform the steps below on the master only:
- vi /etc/etcd/etcd.conf
- #[Member]
- #ETCD_CORS=""
- ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
- ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
- ETCD_NAME="default"
- #[Clustering]
- ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
- vi /etc/kubernetes/apiserver
- # The address on the local server to listen to.
- KUBE_API_ADDRESS="--address=0.0.0.0"
- # The port on the local server to listen on.
- KUBE_API_PORT="--port=8080"
- # Port minions listen on
- KUBELET_PORT="--kubelet-port=10250"
- # Comma separated list of nodes in the etcd cluster
- KUBE_ETCD_SERVERS="--etcd-servers=http://master:2379"
- # Address range to use for services
- KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
- # default admission control policies
- KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
- # Add your own!
- KUBE_API_ARGS=""
- systemctl start etcd
- etcdctl mkdir /kube-centos/network
- etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
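The quoting in that etcdctl mk one-liner is easy to get wrong. A sketch that keeps the flannel network config in a shell variable first (the variable name is mine) and reads the key back to confirm it was stored; it is guarded so it is harmless on a machine where etcd is not available:

```shell
# Build the flannel network config first, then hand it to etcdctl.
FLANNEL_CONFIG='{ "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }'

# Only talk to etcd where the etcdctl client actually exists.
if command -v etcdctl >/dev/null 2>&1; then
  etcdctl mkdir /kube-centos/network 2>/dev/null || true
  etcdctl mk /kube-centos/network/config "$FLANNEL_CONFIG" || true
  etcdctl get /kube-centos/network/config || true   # read back to verify
fi
```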
5. Set up flannel on the master and all nodes; flannel provides the overlay network the cluster components use to communicate.
- vi /etc/sysconfig/flanneld
- # Flanneld configuration options
- # etcd url location. Point this to the server where etcd runs
- FLANNEL_ETCD_ENDPOINTS="http://master:2379"
- # etcd config key. This is the configuration key that flannel queries
- # for address range assignment
- FLANNEL_ETCD_PREFIX="/kube-centos/network"
- # Any additional options that you want to pass
- FLANNEL_OPTIONS="--iface=eth1"
6. Set up the kubelet on each worker node:
- vi /etc/kubernetes/kubelet
- ###
- # kubernetes kubelet (minion) config
- # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
- KUBELET_ADDRESS="--address=0.0.0.0"
- # The port for the info server to serve on
- # KUBELET_PORT="--port=10250"
- # You may leave this blank to use the actual hostname; otherwise replace
- # node-name with this node's hostname (kube-node1 or kube-node2)
- KUBELET_HOSTNAME="--hostname-override=node-name"
- # location of the api-server
- KUBELET_API_SERVER="--api-servers=http://master:8080"
- # pod infrastructure container
- # KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
- # Add your own!
- KUBELET_ARGS=""
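KUBELET_HOSTNAME must differ on each node. A hypothetical helper (the sed pattern and the KUBELET_CONF default are my own, not from the original post) that rewrites the override to the machine's own hostname, so the same config file can be copied to both nodes:

```shell
# Rewrite --hostname-override in the kubelet config to this machine's
# hostname; a no-op where the file is absent or not writable.
KUBELET_CONF="${KUBELET_CONF:-/etc/kubernetes/kubelet}"
NODE_NAME="$(hostname)"
if [ -w "$KUBELET_CONF" ]; then
  sed -i "s|--hostname-override=[^\"]*|--hostname-override=${NODE_NAME}|" "$KUBELET_CONF"
fi
echo "kubelet will register as: ${NODE_NAME}"
```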
7. Kick off all the services on the master with the command below:
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do systemctl restart $SERVICES; systemctl enable $SERVICES; systemctl status $SERVICES; done
8. Kick off all the services on the nodes:
for SERVICES in kube-proxy kubelet flanneld docker; do systemctl restart $SERVICES; systemctl enable $SERVICES; systemctl status $SERVICES; done
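After the restarts it is worth confirming that every unit actually came up. A small sketch (the report format is mine) that prints one line per node service and degrades to "unknown" where systemd is unavailable:

```shell
# Collect the state of each node service into a one-line-per-service report.
STATUS_REPORT="$(
  for s in kube-proxy kubelet flanneld docker; do
    state="$(systemctl is-active "$s" 2>/dev/null)" || state=unknown
    printf '%-12s %s\n' "$s" "$state"
  done
)"
echo "$STATUS_REPORT"
```

On the master, substitute the service list from step 7 (etcd, kube-apiserver, kube-controller-manager, kube-scheduler, flanneld).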
9. Set up the cluster metadata for kubectl on the master:
- kubectl config set-cluster default-cluster --server=http://master:8080
- kubectl config set-context default-context --cluster=default-cluster --user=default-admin
- kubectl config use-context default-context
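These commands end up in ~/.kube/config; the result looks roughly like this (a sketch only; the exact layout varies with the kubectl version, and no credentials exist for default-admin since set-credentials was never run):

```yaml
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
    server: http://master:8080
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    user: default-admin
  name: default-context
current-context: default-context
users: []
```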
10. Enable IP forwarding so traffic can be routed to the docker0 subnet:
echo “net.ipv4.ip_forward = 1” > /etc/sysctl.d/docker_network.conf
sysctl -p /etc/sysctl.d/docker_network.conf
11. To reach the pods on the nodes from the master, allow forwarding in iptables:
iptables -P FORWARD ACCEPT