
INSTALLING KUBERNETES MANUALLY

2017-09-13

The procedure described here applies to CentOS, but the same recipe can obviously be adapted to other distributions with some minor tweaks.

I am deploying a 3-node cluster where every server is a master and a node at the same time. This is not recommended by the k8s team, but for a lab environment it is perfectly fine. If you understand this procedure well, you will find that deploying nodes and masters separately is just as easy. Let's assume I have 3 servers with IPs 192.168.1.100, 192.168.1.101 and 192.168.1.102.

System preparation

The first thing we need to do is disable SELinux, the firewall and NetworkManager, and set up a yum repo. This is probably a bad idea, but then again, it makes things easier in a lab environment.

systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl disable firewalld
systemctl stop firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
cat >> /etc/yum.repos.d/virt7-docker-common-release.repo << EOF
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
EOF

If you are behind a corporate proxy, also add this line to that last repo file: proxy=http://yourproxy_address
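
For instance, a quick way to append it to the repo file created above (yourproxy_address being a placeholder for your actual proxy):

echo "proxy=http://yourproxy_address" >> /etc/yum.repos.d/virt7-docker-common-release.repo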

Install docker

yum install -y docker
systemctl enable docker
systemctl start docker

If you are behind a corporate proxy, add this line to /etc/sysconfig/docker: HTTP_PROXY=http://yourproxy_address and then restart docker
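
Along the same lines, something like this would do it (again, yourproxy_address is just a placeholder):

cat >> /etc/sysconfig/docker << EOF
HTTP_PROXY=http://yourproxy_address
EOF
systemctl restart docker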

Install etcd

etcd should be installed on all masters. In this setup it lives on the same servers where the k8s binaries will run, but you should definitely install several instances of it and cluster them. Some people also suggest installing at least one instance somewhere in the cloud. This is because etcd stores all the persistent admin data that k8s needs, so it would be painful to lose it. Bringing back (or adding) a k8s node is very easy and transparent as long as your etcd cluster is intact.

yum install -y etcd
cat >> /etc/etcd/etcd.conf << EOF
ETCD_INITIAL_CLUSTER="k1=http://192.168.1.100:2380,k2=http://192.168.1.101:2380,k3=http://192.168.1.102:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="awesome-etcd-cluster"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.100:2380"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.1.100:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.100:2379"
ETCD_NAME="k1"
EOF
systemctl enable etcd
systemctl start etcd

The ETCD_INITIAL_CLUSTER line should list all the hosts where etcd will be running. The other lines containing 192.168.1.100 should be modified to match the IP address of the server you are currently installing etcd on, and ETCD_NAME should also match that server (see the first line, where these names are mapped to peer URLs). The ETCD_NAME values can be arbitrary (as long as they match the entries in ETCD_INITIAL_CLUSTER), but most people use the server's hostname.
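
For example, on the second server (192.168.1.101), the host-specific lines would become:

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.101:2380"
ETCD_LISTEN_PEER_URLS="http://192.168.1.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.101:2379"
ETCD_NAME="k2"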

After having installed etcd on all your servers, wait for the cluster to come up by checking the output of "etcdctl cluster-health". Make sure the cluster is up and healthy before continuing.
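
A minimal way to wait for it, assuming etcdctl talks to the local instance on port 2379 and that cluster-health returns a non-zero exit code while the cluster is unhealthy:

until etcdctl cluster-health; do
    echo "waiting for the etcd cluster to become healthy..."
    sleep 5
done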

Now add some info about k8s in etcd

etcdctl mkdir /kube-centos/network
etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": {\"Type\": \"vxlan\" } }"
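
To confirm the value was stored correctly, you can read it back (this assumes the etcd v2 API, which is what etcdctl uses by default here):

etcdctl get /kube-centos/network/config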

Install Flannel

Your k8s cluster will need an overlay network so that all pods appear to be on the same layer 2 segment. This is nice because even though your cluster runs on different servers, each pod will be able to talk to the others using a virtual network that sits on top of your real network and spans across all k8s nodes. It's basically a virtual distributed switch, just like what openvswitch does. Flannel is one driver that enables this for Kubernetes. It is the most basic overlay driver and works just fine. For more advanced stuff, Nuage is an option, and there are many others. If you are new to this (like I was), then this is the SDN stuff that all the cool kids are talking about.

yum install -y flannel
cat >> /etc/sysconfig/flanneld << EOF
FLANNEL_ETCD_ENDPOINTS="http://192.168.1.100:2379,http://192.168.1.101:2379,http://192.168.1.102:2379"
FLANNEL_ETCD_PREFIX="/kube-centos/network"
EOF
systemctl enable flanneld
systemctl start flanneld
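
As a quick sanity check (a sketch, assuming flanneld's default behaviour on CentOS), flanneld should have picked a subnet out of the 172.30.0.0/16 range defined earlier, written it to /run/flannel/subnet.env, and created a flannel.1 VXLAN interface:

cat /run/flannel/subnet.env
ip -d link show flannel.1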

Install Kubernetes

So far, we have just installed all the stuff that k8s requires. Now we get to the part where we actually install Kubernetes. To recap, we have installed:

  • Docker: This is what will actually run the containers
  • Etcd: This is the database that k8s and flannel use to store admin data
  • Flannel: The overlay network

Before we continue, let's create a user and some folders

groupadd kube
useradd -g kube -s /sbin/nologin kube
mkdir -p /var/run/kubernetes
chown root:kube /var/run/kubernetes
chmod 770 /var/run/kubernetes
mkdir /etc/kubernetes
mkdir /var/lib/kubelet

Kubernetes can be downloaded as a binary package from GitHub. What I really like about these binaries is that they are simple standalone applications; you don't need to install an RPM and a whole bunch of libraries. Simply copy the executables and run them. You will need a total of 6 processes to run. So the first thing to do is to unpack the binaries. Download the version you want from https://github.com/kubernetes/kubernetes/releases
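
For example, something like the following (v1.7.5 is just a placeholder; pick whichever version you want from the releases page):

curl -LO https://github.com/kubernetes/kubernetes/releases/download/v1.7.5/kubernetes.tar.gz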

tar -zxf kubernetes.tar.gz
cd kubernetes
cluster/get-kube-binaries.sh
cd server
tar -zxf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver /usr/bin
cp kube-controller-manager /usr/bin
cp kubectl /usr/bin
cp kubelet /usr/bin
cp kube-proxy /usr/bin
cp kube-scheduler /usr/bin
chmod 554 /usr/bin/kube*

Each of the daemons can simply be started from the CLI with a whole bunch of command line arguments; you don't even need any configuration files. This is beautiful because it is so easy. So technically, you could just do:

kube-apiserver & kube-controller-manager & kubelet & kube-proxy & kube-scheduler &

And that's it, Kubernetes is running. Simple. But let's get systemd to manage those processes. If you are not using systemd (Slackware?) then you can set up a bunch of rc scripts to launch them. But for this example, let's create some systemd unit files that will do it for us.

#/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=kube
ExecStart=/usr/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ETCD_SERVERS \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBELET_PORT \
            $KUBE_ALLOW_PRIV \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ADMISSION_CONTROL \
            $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

#/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=kube
ExecStart=/usr/bin/kube-controller-manager \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

#/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

#/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

#/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=kube
ExecStart=/usr/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Note how I did not hard-code all the command line arguments in the systemd unit files. Instead I will store them in separate environment files under /etc/kubernetes. So earlier I was praising how nice it was to be able to launch the processes on the command line with no extra config files and no service files, and here I am creating service files and config files. I know... I just like the fact that I can customize it any way I want. So here are the config files needed. But as I said, you could just write one bash script that invokes all those 5 processes with all their command line arguments (a rough sketch of that follows the config files below), and you would have zero config files and no need for systemd, just 5 binaries that you copied onto your server.

#/etc/kubernetes/config
KUBE_ETCD_SERVERS="--etcd-servers=k1=http://192.168.1.100:2379,k2=http://192.168.1.101:2379,k3=http://192.168.1.102:2379"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://127.0.0.1:8080"
KUBE_SCHEDULER_ARGS="--leader-elect=true"
KUBE_API_ARGS="--leader-elect=true"

#/etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=AlwaysAdmit"
KUBE_API_ARGS=""

#/etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--node-monitor-period=2s --node-monitor-grace-period=16s --pod-eviction-timeout=30s"

#/etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
KUBELET_ARGS="--node-status-update-frequency=4s --cgroup-driver=systemd"

#/etc/kubernetes/proxy
# empty

#/etc/kubernetes/scheduler
# empty
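
And for reference, here is a minimal sketch of the single-script, no-systemd alternative mentioned above. It just sources the same environment files and launches the five daemons in the background with the same flags the unit files use; it is meant as an illustration, not something I actually run:

#!/bin/bash
# Source the environment files (later files override earlier ones,
# like the EnvironmentFile ordering in the systemd units).
for f in config apiserver controller-manager kubelet proxy scheduler; do
    [ -f /etc/kubernetes/$f ] && . /etc/kubernetes/$f
done

kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS \
    $KUBE_API_ADDRESS $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES \
    $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS &
kube-controller-manager $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_MASTER \
    $KUBE_CONTROLLER_MANAGER_ARGS &
kube-scheduler $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_MASTER $KUBE_SCHEDULER_ARGS &
kubelet $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBELET_API_SERVER $KUBELET_ADDRESS \
    $KUBELET_PORT $KUBE_ALLOW_PRIV $KUBELET_ARGS &
kube-proxy $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_MASTER $KUBE_PROXY_ARGS &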

Then you can enable and start all those new services, and you have a k8s cluster running. You can test it by invoking "kubectl get nodes".
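
For example, a quick loop to enable and start everything on a combined master/node like the ones in this setup:

for svc in kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy; do
    systemctl enable $svc
    systemctl start $svc
done
kubectl get nodes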

Some notes about the k8s processes

kubectl can be copied to any machine. It will try to communicate with Kubernetes through the apiserver process on the localhost. If you are running this tool on a server where apiserver is not running, then you need to specify --server=http://api-server-address:8080. In our case, we have installed the apiserver on all k8s nodes, so you can connect to any instance in the cluster.
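
For example, from a workstation outside the cluster, any of the three nodes would do:

kubectl --server=http://192.168.1.100:8080 get nodes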

The apiserver process needs to run on all masters. This is so you can control each master remotely. I've configured each apiserver to only talk to its local etcd because we have installed etcd on all nodes, but we could configure it to talk to all etcd servers, even remote ones.

kube-proxy should run on all worker nodes (in our case, workers and masters are the same). Let's say you have a pod running a web server on port 242. You don't know on which node your pod will run, but you want to be able to access it using the IP of any of the nodes in the cluster. That is what kube-proxy does. So if you go to http://192.168.1.100:242 but your pod runs on 192.168.1.102, kube-proxy will handle it. It will, as you guessed, proxy the request. And this happens at the TCP/UDP level; it is not just an HTTP proxy.
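
In other words (sticking with the hypothetical port 242 from above), both of these would reach the same pod, regardless of which node it actually runs on:

curl http://192.168.1.100:242
curl http://192.168.1.102:242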

I am not entirely sure which of these processes are considered "master" versus "worker node" processes, but I believe that the nodes should only be running kube-proxy and kubelet, and the other processes should only run on what would be considered master servers. Since it is just a matter of copying the binaries over and changing some addresses in the environment files under /etc/kubernetes, it would be easy to tweak the procedure above to get a different infrastructure.

Ansible

As a bonus, I wrote an ansible script to do all this automatically. https://github.com/pdumais/ansible-kubernetes