Administration with kubeadm
- 1: Certificate Management with kubeadm
- 2: Configuring a cgroup driver
- 3: Upgrading kubeadm clusters
- 4: Adding Windows nodes
- 5: Upgrading Windows nodes
1 - Certificate Management with kubeadm
Kubernetes v1.15 [stable]
Client certificates generated by kubeadm expire after 1 year. This page explains how to manage certificate renewals with kubeadm.
Before you begin
You should be familiar with PKI certificates and requirements in Kubernetes.
Using custom certificates
By default, kubeadm generates all the certificates needed for a cluster to run. You can override this behavior by providing your own certificates.
To do so, you must place them in whatever directory is specified by the --cert-dir flag or the certificatesDir field of kubeadm's ClusterConfiguration. By default this is /etc/kubernetes/pki.
If a given certificate and private key pair exists before running kubeadm init, kubeadm does not overwrite them. This means you can, for example, copy an existing CA into /etc/kubernetes/pki/ca.crt and /etc/kubernetes/pki/ca.key, and kubeadm will use this CA for signing the rest of the certificates.
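For example, a minimal sketch of reusing an existing CA on a fresh control plane node, assuming the CA pair is kept in a hypothetical local directory ./my-ca/:
# Copy the existing CA into the kubeadm certificate directory before running kubeadm init.
sudo mkdir -p /etc/kubernetes/pki
sudo cp ./my-ca/ca.crt /etc/kubernetes/pki/ca.crt
sudo cp ./my-ca/ca.key /etc/kubernetes/pki/ca.key
sudo chmod 600 /etc/kubernetes/pki/ca.key
# kubeadm init will now sign the rest of the cluster certificates with this CA.
sudo kubeadm init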
External CA mode
It is also possible to provide only the ca.crt file and not the ca.key file (this is only available for the root CA file, not other cert pairs). If all other certificates and kubeconfig files are in place, kubeadm recognizes this condition and activates the "External CA" mode. kubeadm will proceed without the CA key on disk.
Instead, run the controller-manager standalone with --controllers=csrsigner and point to the CA certificate and key.
PKI certificates and requirements includes guidance on setting up a cluster to use an external CA.
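As a rough sketch of that standalone signer (not a production setup; the kubeconfig and CA paths below are placeholders for wherever you keep them outside the cluster):
kube-controller-manager \
  --kubeconfig=/path/to/controller-manager.conf \
  --controllers=csrsigner \
  --cluster-signing-cert-file=/path/to/ca.crt \
  --cluster-signing-key-file=/path/to/ca.key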
Check certificate expiration
You can use the check-expiration subcommand to check when certificates expire:
kubeadm certs check-expiration
The output is similar to this:
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Dec 30, 2020 23:36 UTC 364d no
apiserver Dec 30, 2020 23:36 UTC 364d ca no
apiserver-etcd-client Dec 30, 2020 23:36 UTC 364d etcd-ca no
apiserver-kubelet-client Dec 30, 2020 23:36 UTC 364d ca no
controller-manager.conf Dec 30, 2020 23:36 UTC 364d no
etcd-healthcheck-client Dec 30, 2020 23:36 UTC 364d etcd-ca no
etcd-peer Dec 30, 2020 23:36 UTC 364d etcd-ca no
etcd-server Dec 30, 2020 23:36 UTC 364d etcd-ca no
front-proxy-client Dec 30, 2020 23:36 UTC 364d front-proxy-ca no
scheduler.conf Dec 30, 2020 23:36 UTC 364d no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Dec 28, 2029 23:36 UTC 9y no
etcd-ca Dec 28, 2029 23:36 UTC 9y no
front-proxy-ca Dec 28, 2029 23:36 UTC 9y no
The command shows expiration/residual time for the client certificates in the /etc/kubernetes/pki folder and for the client certificate embedded in the KUBECONFIG files used by kubeadm (admin.conf, controller-manager.conf and scheduler.conf).
Additionally, kubeadm informs the user if the certificate is externally managed; in this case, the user should take care of managing certificate renewal manually/using other tools.
Warning: kubeadm cannot manage certificates signed by an external CA.
Note: kubelet.conf is not included in the list above because kubeadm configures kubelet for automatic certificate renewal with rotatable certificates under /var/lib/kubelet/pki. To repair an expired kubelet client certificate see Kubelet client certificate rotation fails.
Warning: On nodes created with kubeadm init, prior to kubeadm version 1.17, there is a bug where you manually have to modify the contents of kubelet.conf. After kubeadm init finishes, you should update kubelet.conf to point to the rotated kubelet client certificates, by replacing client-certificate-data and client-key-data with:
client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
Automatic certificate renewal
kubeadm renews all the certificates during control plane upgrade.
This feature is designed to address the simplest use cases; if you don't have specific requirements on certificate renewal and perform Kubernetes version upgrades regularly (less than 1 year between each upgrade), kubeadm will take care of keeping your cluster up to date and reasonably secure.
Note: It is a best practice to upgrade your cluster frequently in order to stay secure.
If you have more complex requirements for certificate renewal, you can opt out from the default behavior by passing --certificate-renewal=false to kubeadm upgrade apply or to kubeadm upgrade node.
Warning: Prior to kubeadm version 1.17 there is a bug where the default value for --certificate-renewal is false for the kubeadm upgrade node command. In that case, you should explicitly set --certificate-renewal=true.
Manual certificate renewal
You can renew your certificates manually at any time with the kubeadm certs renew command.
This command performs the renewal using the CA (or front-proxy-CA) certificate and key stored in /etc/kubernetes/pki.
After running the command you should restart the control plane Pods. This is required since
dynamic certificate reload is currently not supported for all components and certificates.
Static Pods are managed by the local kubelet
and not by the API Server, thus kubectl cannot be used to delete and restart them.
To restart a static Pod you can temporarily remove its manifest file from /etc/kubernetes/manifests/ and wait for 20 seconds (see the fileCheckFrequency value in the KubeletConfiguration struct).
The kubelet will terminate the Pod if it's no longer in the manifest directory.
You can then move the file back and after another fileCheckFrequency
period, the kubelet will recreate
the Pod and the certificate renewal for the component can complete.
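A minimal sketch of that restart, using the kube-apiserver manifest as the example and assuming the default fileCheckFrequency of 20 seconds:
# Move the manifest out of the watched directory so the kubelet stops the Pod...
sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
sleep 20
# ...then move it back so the kubelet recreates the Pod with the renewed certificate.
sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/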
Warning: If you are running an HA cluster, this command needs to be executed on all the control-plane nodes.
Note: certs renew uses the existing certificates as the authoritative source for attributes (Common Name, Organization, SAN, etc.) instead of the kubeadm-config ConfigMap. It is strongly recommended to keep them both in sync.
kubeadm certs renew provides the following options:
- The Kubernetes certificates normally reach their expiration date after one year.
- --csr-only can be used to renew certificates with an external CA by generating certificate signing requests (without actually renewing certificates in place); see the next section for more information.
- It's also possible to renew a single certificate instead of all.
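For example (apiserver is one of the certificate names listed by kubeadm certs renew --help; the --csr-dir path is a placeholder):
# Renew every certificate managed by kubeadm on this node.
sudo kubeadm certs renew all
# Renew a single certificate, e.g. the API server serving certificate.
sudo kubeadm certs renew apiserver
# Only generate CSRs for an external CA instead of renewing in place.
sudo kubeadm certs renew all --csr-only --csr-dir=/tmp/kubeadm-csr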
Renew certificates with the Kubernetes certificates API
This section provides more details about how to execute manual certificate renewal using the Kubernetes certificates API.
Caution: These are advanced topics for users who need to integrate their organization's certificate infrastructure into a kubeadm-built cluster. If the default kubeadm configuration satisfies your needs, you should let kubeadm manage certificates instead.
Set up a signer
The Kubernetes Certificate Authority does not work out of the box. You can configure an external signer such as cert-manager, or you can use the built-in signer.
The built-in signer is part of kube-controller-manager. To activate the built-in signer, you must pass the --cluster-signing-cert-file and --cluster-signing-key-file flags.
If you're creating a new cluster, you can use a kubeadm configuration file:
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    cluster-signing-cert-file: /etc/kubernetes/pki/ca.crt
    cluster-signing-key-file: /etc/kubernetes/pki/ca.key
Create certificate signing requests (CSR)
See Create CertificateSigningRequest for creating CSRs with the Kubernetes API.
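As a sketch of that flow for a client certificate (the user name, group, and file names are placeholders, not anything kubeadm creates): generate a key and CSR with openssl, submit it to the built-in kubernetes.io/kube-apiserver-client signer, approve it, and fetch the issued certificate.
# Generate a private key and CSR for a hypothetical "my-user" client.
openssl genrsa -out my-user.key 2048
openssl req -new -key my-user.key -subj "/CN=my-user/O=my-group" -out my-user.csr
# Submit it to the certificates API.
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-user
spec:
  request: $(base64 -w0 < my-user.csr)
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF
# Approve the request and retrieve the signed certificate.
kubectl certificate approve my-user
kubectl get csr my-user -o jsonpath='{.status.certificate}' | base64 -d > my-user.crt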
Renew certificates with external CA
This section provides more details about how to execute manual certificate renewal using an external CA.
To better integrate with external CAs, kubeadm can also produce certificate signing requests (CSRs). A CSR represents a request to a CA for a signed certificate for a client. In kubeadm terms, any certificate that would normally be signed by an on-disk CA can be produced as a CSR instead. A CA, however, cannot be produced as a CSR.
Create certificate signing requests (CSR)
You can create certificate signing requests with kubeadm certs renew --csr-only. Both the CSR and the accompanying private key are given in the output. You can pass in a directory with --csr-dir to output the CSRs to the specified location. If --csr-dir is not specified, the default certificate directory (/etc/kubernetes/pki) is used.
Certificates can be renewed with kubeadm certs renew --csr-only. As with kubeadm init, an output directory can be specified with the --csr-dir flag.
A CSR contains a certificate's name, domains, and IPs, but it does not specify usages. It is the responsibility of the CA to specify the correct cert usages when issuing a certificate.
- In openssl this is done with the openssl ca command.
- In cfssl you specify usages in the config file.
After a certificate is signed using your preferred method, the certificate and the private key must be copied to the PKI directory (by default /etc/kubernetes/pki).
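As a rough sketch only (this uses openssl x509 -req rather than a full openssl ca setup; the extension file apiserver.ext is something you must write yourself with the correct SANs and usages, and it assumes the CSR came from something like kubeadm certs renew apiserver --csr-only --csr-dir=/tmp/csr with the CA pair available locally):
# Sign the kubeadm-produced CSR with the externally held CA.
openssl x509 -req -in /tmp/csr/apiserver.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -extfile apiserver.ext \
  -out apiserver.crt
# Copy the signed certificate back into the PKI directory on the control plane node.
sudo cp apiserver.crt /etc/kubernetes/pki/apiserver.crt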
Certificate authority (CA) rotation
Kubeadm does not support rotation or replacement of CA certificates out of the box.
For more information about manual rotation or replacement of CA, see manual rotation of CA certificates.
Enabling signed kubelet serving certificates
By default the kubelet serving certificate deployed by kubeadm is self-signed. This means a connection from external services like the metrics-server to a kubelet cannot be secured with TLS.
To configure the kubelets in a new kubeadm cluster to obtain properly signed serving
certificates you must pass the following minimal configuration to kubeadm init
:
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serverTLSBootstrap: true
If you have already created the cluster you must adapt it by doing the following:
- Find and edit the kubelet-config-1.22 ConfigMap in the kube-system namespace. In that ConfigMap, the kubelet key has a KubeletConfiguration document as its value. Edit the KubeletConfiguration document to set serverTLSBootstrap: true.
- On each node, add the serverTLSBootstrap: true field in /var/lib/kubelet/config.yaml and restart the kubelet with systemctl restart kubelet.
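A sketch of those two steps, assuming the kubeadm default kubelet config path, that the field is not already present in the local file, and a kubelet-config-1.22 ConfigMap name (adjust it to your version):
# Cluster-wide: set serverTLSBootstrap: true in the KubeletConfiguration document.
kubectl edit cm kubelet-config-1.22 -n kube-system
# On each node: add the field to the local kubelet config and restart the kubelet.
echo "serverTLSBootstrap: true" | sudo tee -a /var/lib/kubelet/config.yaml
sudo systemctl restart kubelet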
The field serverTLSBootstrap: true will enable the bootstrap of kubelet serving certificates by requesting them from the certificates.k8s.io API. One known limitation is that the CSRs (Certificate Signing Requests) for these certificates cannot be automatically approved by the default signer in the kube-controller-manager - kubernetes.io/kubelet-serving. This will require action from the user or a third party controller.
These CSRs can be viewed using:
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
csr-9wvgt 112s kubernetes.io/kubelet-serving system:node:worker-1 Pending
csr-lz97v 1m58s kubernetes.io/kubelet-serving system:node:control-plane-1 Pending
To approve them you can do the following:
kubectl certificate approve <CSR-name>
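If you have many nodes, a hedged one-liner like the following can help; it assumes jq is installed and it approves every pending kubelet-serving CSR, so only run it after you have verified that the requests really came from your nodes:
kubectl get csr -o json \
  | jq -r '.items[] | select(.spec.signerName=="kubernetes.io/kubelet-serving" and ((.status.conditions // []) == [])) | .metadata.name' \
  | xargs -r kubectl certificate approve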
By default, these serving certificates will expire after one year. Kubeadm sets the KubeletConfiguration field rotateCertificates to true, which means that close to expiration a new set of CSRs for the serving certificates will be created and must be approved to complete the rotation. To understand more see Certificate Rotation.
If you are looking for a solution for automatic approval of these CSRs it is recommended that you contact your cloud provider and ask if they have a CSR signer that verifies the node identity with an out of band mechanism.
Caution: This section links to third party projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for these projects. This page follows CNCF website guidelines by listing projects alphabetically. To add a project to this list, read the content guide before submitting a change.
Third party custom controllers can be used.
Such a controller is not a secure mechanism unless it not only verifies the CommonName in the CSR but also verifies the requested IPs and domain names. This would prevent a malicious actor that has access to a kubelet client certificate from creating CSRs requesting serving certificates for any IP or domain name.
2 - Configuring a cgroup driver
This page explains how to configure the kubelet cgroup driver to match the container runtime cgroup driver for kubeadm clusters.
Before you begin
You should be familiar with the Kubernetes container runtime requirements.
Configuring the container runtime cgroup driver
The Container runtimes page explains that the systemd driver is recommended for kubeadm based setups instead of the cgroupfs driver, because kubeadm manages the kubelet as a systemd service.
The page also provides details on how to set up a number of different container runtimes with the systemd driver by default.
Configuring the kubelet cgroup driver
kubeadm allows you to pass a KubeletConfiguration structure during kubeadm init. This KubeletConfiguration can include the cgroupDriver field which controls the cgroup driver of the kubelet.
Note: In v1.22, if the user is not setting the cgroupDriver field under KubeletConfiguration, kubeadm will default it to systemd.
A minimal example of configuring the field explicitly:
# kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.21.0
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
Such a configuration file can then be passed to the kubeadm command:
kubeadm init --config kubeadm-config.yaml
Note: Kubeadm uses the same KubeletConfiguration for all nodes in the cluster. The KubeletConfiguration is stored in a ConfigMap object under the kube-system namespace. Executing the subcommands init, join and upgrade would result in kubeadm writing the KubeletConfiguration as a file under /var/lib/kubelet/config.yaml and passing it to the local node kubelet.
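To see both copies of that configuration, something like the following works (the ConfigMap name is version-suffixed in this release, so look it up first):
# Find and inspect the cluster-wide KubeletConfiguration.
kubectl get cm -n kube-system | grep kubelet-config
kubectl get cm kubelet-config-1.22 -n kube-system -o yaml
# Check the local copy the kubelet actually uses on this node.
grep cgroupDriver /var/lib/kubelet/config.yaml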
Using the cgroupfs driver
As this guide explains, using the cgroupfs driver with kubeadm is not recommended.
To continue using cgroupfs and to prevent kubeadm upgrade from modifying the KubeletConfiguration cgroup driver on existing setups, you must be explicit about its value. This applies to a case where you do not wish future versions of kubeadm to apply the systemd driver by default.
See the below section on "Modify the kubelet ConfigMap" for details on how to be explicit about the value.
If you wish to configure a container runtime to use the cgroupfs driver, you must refer to the documentation of the container runtime of your choice.
Migrating to the systemd driver
To change the cgroup driver of an existing kubeadm cluster to systemd in-place, a similar procedure to a kubelet upgrade is required. This must include both steps outlined below.
Note: Alternatively, it is possible to replace the old nodes in the cluster with new ones that use the systemd driver. This requires executing only the first step below before joining the new nodes and ensuring the workloads can safely move to the new nodes before deleting the old nodes.
Modify the kubelet ConfigMap
- Find the kubelet ConfigMap name using kubectl get cm -n kube-system | grep kubelet-config.
- Call kubectl edit cm kubelet-config-x.yy -n kube-system (replace x.yy with the Kubernetes version).
- Either modify the existing cgroupDriver value or add a new field that looks like this:
  cgroupDriver: systemd
  This field must be present under the kubelet: section of the ConfigMap.
Update the cgroup driver on all nodes
For each node in the cluster:
- Drain the node using kubectl drain <node-name> --ignore-daemonsets
- Stop the kubelet using systemctl stop kubelet
- Stop the container runtime
- Modify the container runtime cgroup driver to systemd
- Set cgroupDriver: systemd in /var/lib/kubelet/config.yaml
- Start the container runtime
- Start the kubelet using systemctl start kubelet
- Uncordon the node using kubectl uncordon <node-name>
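A consolidated per-node sketch of the steps above, assuming containerd as the runtime and that the SystemdCgroup key in /etc/containerd/config.toml and the cgroupDriver key in /var/lib/kubelet/config.yaml already exist (otherwise edit the files by hand instead of using sed):
kubectl drain <node-name> --ignore-daemonsets
sudo systemctl stop kubelet
sudo systemctl stop containerd
# Switch the container runtime's cgroup driver to systemd (containerd's runc options).
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Switch the kubelet's cgroup driver to systemd.
sudo sed -i 's/cgroupDriver: cgroupfs/cgroupDriver: systemd/' /var/lib/kubelet/config.yaml
sudo systemctl start containerd
sudo systemctl start kubelet
kubectl uncordon <node-name>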
Execute these steps on nodes one at a time to ensure workloads have sufficient time to schedule on different nodes.
Once the process is complete ensure that all nodes and workloads are healthy.
3 - Upgrading kubeadm clusters
This page explains how to upgrade a Kubernetes cluster created with kubeadm from version 1.21.x to version 1.22.x, and from version 1.22.x to 1.22.y (where y > x). Skipping MINOR versions when upgrading is unsupported.
To see information about upgrading clusters created using older versions of kubeadm, please refer to following pages instead:
- Upgrading a kubeadm cluster from 1.20 to 1.21
- Upgrading a kubeadm cluster from 1.19 to 1.20
- Upgrading a kubeadm cluster from 1.18 to 1.19
- Upgrading a kubeadm cluster from 1.17 to 1.18
The upgrade workflow at high level is the following:
- Upgrade a primary control plane node.
- Upgrade additional control plane nodes.
- Upgrade worker nodes.
Before you begin
- Make sure you read the release notes carefully.
- The cluster should use a static control plane and etcd pods or external etcd.
- Make sure to back up any important components, such as app-level state stored in a database. kubeadm upgrade does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice.
- Swap must be disabled.
Additional information
- Draining nodes before kubelet MINOR version upgrades is required. In the case of control plane nodes, they could be running CoreDNS Pods or other critical workloads.
- All containers are restarted after upgrade, because the container spec hash value is changed.
Determine which version to upgrade to
Find the latest patch release for Kubernetes 1.22 using the OS package manager:
# Ubuntu, Debian or HypriotOS:
apt update
apt-cache madison kubeadm
# find the latest 1.22 version in the list
# it should look like 1.22.x-00, where x is the latest patch

# CentOS, RHEL or Fedora:
yum list --showduplicates kubeadm --disableexcludes=kubernetes
# find the latest 1.22 version in the list
# it should look like 1.22.x-0, where x is the latest patch
Upgrading control plane nodes
The upgrade procedure on control plane nodes should be executed one node at a time.
Pick a control plane node that you wish to upgrade first. It must have the /etc/kubernetes/admin.conf file.
Call "kubeadm upgrade"
For the first control plane node
- Upgrade kubeadm:
# Ubuntu, Debian or HypriotOS:
# replace x in 1.22.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.22.x-00 && \
apt-mark hold kubeadm
# since apt-get version 1.1 you can also use the following method
apt-get update && \
apt-get install -y --allow-change-held-packages kubeadm=1.22.x-00

# CentOS, RHEL or Fedora:
# replace x in 1.22.x-0 with the latest patch version
yum install -y kubeadm-1.22.x-0 --disableexcludes=kubernetes
- Verify that the download works and has the expected version:
kubeadm version
- Verify the upgrade plan:
kubeadm upgrade plan
This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to. It also shows a table with the component config version states.
Note: kubeadm upgrade also automatically renews the certificates that it manages on this node. To opt-out of certificate renewal the flag --certificate-renewal=false can be used. For more information see the certificate management guide.
Note: If kubeadm upgrade plan shows any component configs that require manual upgrade, users must provide a config file with replacement configs to kubeadm upgrade apply via the --config command line flag. Failing to do so will cause kubeadm upgrade apply to exit with an error and not perform an upgrade.
- Choose a version to upgrade to, and run the appropriate command. For example:
# replace x with the patch version you picked for this upgrade
sudo kubeadm upgrade apply v1.22.x
Once the command finishes you should see:
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.22.x". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
- Manually upgrade your CNI provider plugin.
Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow. Check the addons page to find your CNI provider and see whether additional upgrade steps are required.
This step is not required on additional control plane nodes if the CNI provider runs as a DaemonSet.
For the other control plane nodes
Same as the first control plane node but use:
sudo kubeadm upgrade node
instead of:
sudo kubeadm upgrade apply
Also calling kubeadm upgrade plan and upgrading the CNI provider plugin is no longer needed.
Drain the node
- Prepare the node for maintenance by marking it unschedulable and evicting the workloads:
# replace <node-to-drain> with the name of your node you are draining
kubectl drain <node-to-drain> --ignore-daemonsets
Upgrade kubelet and kubectl
- Upgrade the kubelet and kubectl:
# Ubuntu, Debian or HypriotOS:
# replace x in 1.22.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.22.x-00 kubectl=1.22.x-00 && \
apt-mark hold kubelet kubectl
# since apt-get version 1.1 you can also use the following method
apt-get update && \
apt-get install -y --allow-change-held-packages kubelet=1.22.x-00 kubectl=1.22.x-00

# CentOS, RHEL or Fedora:
# replace x in 1.22.x-0 with the latest patch version
yum install -y kubelet-1.22.x-0 kubectl-1.22.x-0 --disableexcludes=kubernetes
- Restart the kubelet:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Uncordon the node
- Bring the node back online by marking it schedulable:
# replace <node-to-drain> with the name of your node
kubectl uncordon <node-to-drain>
Upgrade worker nodes
The upgrade procedure on worker nodes should be executed one node at a time or few nodes at a time, without compromising the minimum required capacity for running your workloads.
Upgrade kubeadm
- Upgrade kubeadm:
# Ubuntu, Debian or HypriotOS:
# replace x in 1.22.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.22.x-00 && \
apt-mark hold kubeadm
# since apt-get version 1.1 you can also use the following method
apt-get update && \
apt-get install -y --allow-change-held-packages kubeadm=1.22.x-00

# CentOS, RHEL or Fedora:
# replace x in 1.22.x-0 with the latest patch version
yum install -y kubeadm-1.22.x-0 --disableexcludes=kubernetes
Call "kubeadm upgrade"
- For worker nodes this upgrades the local kubelet configuration:
sudo kubeadm upgrade node
Drain the node
- Prepare the node for maintenance by marking it unschedulable and evicting the workloads:
# replace <node-to-drain> with the name of your node you are draining
kubectl drain <node-to-drain> --ignore-daemonsets
Upgrade kubelet and kubectl
- Upgrade the kubelet and kubectl:
# Ubuntu, Debian or HypriotOS:
# replace x in 1.22.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.22.x-00 kubectl=1.22.x-00 && \
apt-mark hold kubelet kubectl
# since apt-get version 1.1 you can also use the following method
apt-get update && \
apt-get install -y --allow-change-held-packages kubelet=1.22.x-00 kubectl=1.22.x-00

# CentOS, RHEL or Fedora:
# replace x in 1.22.x-0 with the latest patch version
yum install -y kubelet-1.22.x-0 kubectl-1.22.x-0 --disableexcludes=kubernetes
- Restart the kubelet:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Uncordon the node
- Bring the node back online by marking it schedulable:
# replace <node-to-drain> with the name of your node
kubectl uncordon <node-to-drain>
Verify the status of the cluster
After the kubelet is upgraded on all nodes verify that all nodes are available again by running the following command from anywhere kubectl can access the cluster:
kubectl get nodes
The STATUS column should show Ready for all your nodes, and the version number should be updated.
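Illustrative output only (node names, roles, ages, and the exact patch version will differ in your cluster):
NAME              STATUS   ROLES                  AGE   VERSION
control-plane-1   Ready    control-plane,master   90d   v1.22.x
worker-1          Ready    <none>                 90d   v1.22.x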
Recovering from a failure state
If kubeadm upgrade fails and does not roll back, for example because of an unexpected shutdown during execution, you can run kubeadm upgrade again. This command is idempotent and eventually makes sure that the actual state is the desired state you declare.
To recover from a bad state, you can also run kubeadm upgrade apply --force without changing the version that your cluster is running.
During upgrade kubeadm writes the following backup folders under /etc/kubernetes/tmp:
kubeadm-backup-etcd-<date>-<time>
kubeadm-backup-manifests-<date>-<time>
kubeadm-backup-etcd contains a backup of the local etcd member data for this control plane Node. In case of an etcd upgrade failure and if the automatic rollback does not work, the contents of this folder can be manually restored in /var/lib/etcd. In case external etcd is used this backup folder will be empty.
kubeadm-backup-manifests contains a backup of the static Pod manifest files for this control plane Node. In case of an upgrade failure and if the automatic rollback does not work, the contents of this folder can be manually restored in /etc/kubernetes/manifests. If for some reason there is no difference between a pre-upgrade and post-upgrade manifest file for a certain component, a backup file for it will not be written.
How it works
kubeadm upgrade apply
does the following:
- Checks that your cluster is in an upgradeable state:
  - The API server is reachable
  - All nodes are in the Ready state
  - The control plane is healthy
- Enforces the version skew policies.
- Makes sure the control plane images are available or available to pull to the machine.
- Generates replacements and/or uses user supplied overwrites if component configs require version upgrades.
- Upgrades the control plane components or rolls back if any of them fails to come up.
- Applies the new CoreDNS and kube-proxy manifests and makes sure that all necessary RBAC rules are created.
- Creates new certificate and key files of the API server and backs up old files if they're about to expire in 180 days.
kubeadm upgrade node does the following on additional control plane nodes:
- Fetches the kubeadm ClusterConfiguration from the cluster.
- Optionally backs up the kube-apiserver certificate.
- Upgrades the static Pod manifests for the control plane components.
- Upgrades the kubelet configuration for this node.
kubeadm upgrade node does the following on worker nodes:
- Fetches the kubeadm ClusterConfiguration from the cluster.
- Upgrades the kubelet configuration for this node.
4 - Adding Windows nodes
Kubernetes v1.18 [beta]
You can use Kubernetes to run a mixture of Linux and Windows nodes, so you can mix Pods that run on Linux with Pods that run on Windows. This page shows how to register Windows nodes to your cluster.
Before you begin
Your Kubernetes server must be at or later than version 1.17. To check the version, enter kubectl version.
- Obtain a Windows Server 2019 license (or higher) in order to configure the Windows node that hosts Windows containers. If you are using VXLAN/Overlay networking you must also have KB4489899 installed.
- A Linux-based Kubernetes kubeadm cluster in which you have access to the control plane (see Creating a single control-plane cluster with kubeadm).
Objectives
- Register a Windows node to the cluster
- Configure networking so Pods and Services on Linux and Windows can communicate with each other
Getting Started: Adding a Windows Node to Your Cluster
Networking Configuration
Once you have a Linux-based Kubernetes control-plane node you are ready to choose a networking solution. This guide illustrates using Flannel in VXLAN mode for simplicity.
Configuring Flannel
- Prepare Kubernetes control plane for Flannel
Some minor preparation is recommended on the Kubernetes control plane in our cluster. It is recommended to enable bridged IPv4 traffic to iptables chains when using Flannel. The following command must be run on all Linux nodes:
sudo sysctl net.bridge.bridge-nf-call-iptables=1
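Optionally, to keep that setting across reboots, a common approach (the file name k8s.conf is just a convention) is:
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system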
- Download & configure Flannel for Linux
Download the most recent Flannel manifest:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Modify the net-conf.json section of the flannel manifest in order to set the VNI to 4096 and the Port to 4789. It should look as follows:
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan",
      "VNI": 4096,
      "Port": 4789
    }
  }
Note: The VNI must be set to 4096 and port 4789 for Flannel on Linux to interoperate with Flannel on Windows. See the VXLAN documentation for an explanation of these fields.
Note: To use L2Bridge/Host-gateway mode instead, change the value of Type to "host-gw" and omit VNI and Port.
. -
Apply the Flannel manifest and validate
Let's apply the Flannel configuration:
kubectl apply -f kube-flannel.yml
After a few minutes, you should see all the pods as running if the Flannel pod network was deployed.
kubectl get pods -n kube-system
The output should include the Linux flannel DaemonSet as running:
NAMESPACE     NAME                    READY   STATUS    RESTARTS   AGE
...
kube-system   kube-flannel-ds-54954   1/1     Running   0          1m
- Add Windows Flannel and kube-proxy DaemonSets
Now you can add Windows-compatible versions of Flannel and kube-proxy. In order to ensure that you get a compatible version of kube-proxy, you'll need to substitute the tag of the image. The following example shows usage for Kubernetes v1.22.0, but you should adjust the version for your own deployment.
curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/kube-proxy.yml | sed 's/VERSION/v1.22.0/g' | kubectl apply -f -
kubectl apply -f https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-overlay.yml
Note: If you're using host-gateway use https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-host-gw.yml instead.
Note: If you're using a different interface rather than Ethernet (i.e. "Ethernet0 2") on the Windows nodes, you have to modify the line:
wins cli process run --path /k/flannel/setup.exe --args "--mode=overlay --interface=Ethernet"
in the flannel-host-gw.yml or flannel-overlay.yml file and specify your interface accordingly.
# Example
curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-overlay.yml | sed 's/Ethernet/Ethernet0 2/g' | kubectl apply -f -
Joining a Windows worker node
Note: All code snippets in Windows sections are to be run in a PowerShell environment with elevated permissions (Administrator) on the Windows worker node.
Install Docker EE
Install the Containers feature:
Install-WindowsFeature -Name containers
Install Docker. Instructions to do so are available at Install Docker Engine - Enterprise on Windows Servers.
Install wins, kubelet, and kubeadm
curl.exe -LO https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/kubeadm/scripts/PrepareNode.ps1
.\PrepareNode.ps1 -KubernetesVersion v1.22.0
Run kubeadm to join the node
Use the command that was given to you when you ran kubeadm init on a control plane host. If you no longer have this command, or the token has expired, you can run kubeadm token create --print-join-command (on a control plane host) to generate a new token and join command.
Install containerD
curl.exe -LO https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/Install-Containerd.ps1
.\Install-Containerd.ps1
Note: To install a specific version of containerD specify the version with -ContainerDVersion.
# Example
.\Install-Containerd.ps1 -ContainerDVersion 1.4.1
Note: If you're using a different interface rather than Ethernet (i.e. "Ethernet0 2") on the Windows nodes, specify the name with -netAdapterName.
# Example
.\Install-Containerd.ps1 -netAdapterName "Ethernet0 2"
Install wins, kubelet, and kubeadm
curl.exe -LO https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/kubeadm/scripts/PrepareNode.ps1
.\PrepareNode.ps1 -KubernetesVersion v1.22.0 -ContainerRuntime containerD
Run kubeadm to join the node
Use the command that was given to you when you ran kubeadm init on a control plane host. If you no longer have this command, or the token has expired, you can run kubeadm token create --print-join-command (on a control plane host) to generate a new token and join command.
Note: If using CRI-containerD add --cri-socket "npipe:////./pipe/containerd-containerd" to the kubeadm call.
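For illustration only (every value below is a placeholder; use the exact command printed by your control plane), the resulting call on the Windows node looks roughly like:
kubeadm join <control-plane-host>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --cri-socket "npipe:////./pipe/containerd-containerd"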
Verifying your installation
You should now be able to view the Windows node in your cluster by running:
kubectl get nodes -o wide
If your new node is in the NotReady state it is likely because the flannel image is still downloading. You can check the progress as before by checking on the flannel pods in the kube-system namespace:
kubectl -n kube-system get pods -l app=flannel
Once the flannel Pod is running, your node should enter the Ready state and then be available to handle workloads.
What's next
5 - Upgrading Windows nodes
Kubernetes v1.18 [beta]
This page explains how to upgrade a Windows node created with kubeadm.
Before you begin
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:
Your Kubernetes server must be at or later than version 1.17. To check the version, enter kubectl version.
- Familiarize yourself with the process for upgrading the rest of your kubeadm cluster. You will want to upgrade the control plane nodes before upgrading your Windows nodes.
Upgrading worker nodes
Upgrade kubeadm
- From the Windows node, upgrade kubeadm:
# replace v1.22.0 with your desired version
curl.exe -Lo C:\k\kubeadm.exe https://dl.k8s.io/v1.22.0/bin/windows/amd64/kubeadm.exe
Drain the node
- From a machine with access to the Kubernetes API, prepare the node for maintenance by marking it unschedulable and evicting the workloads:
# replace <node-to-drain> with the name of your node you are draining
kubectl drain <node-to-drain> --ignore-daemonsets
You should see output similar to this:
node/ip-172-31-85-18 cordoned
node/ip-172-31-85-18 drained
Upgrade the kubelet configuration
- From the Windows node, call the following command to sync new kubelet configuration:
kubeadm upgrade node
Upgrade kubelet
- From the Windows node, upgrade and restart the kubelet:
stop-service kubelet
curl.exe -Lo C:\k\kubelet.exe https://dl.k8s.io/v1.22.0/bin/windows/amd64/kubelet.exe
restart-service kubelet
Uncordon the node
- From a machine with access to the Kubernetes API, bring the node back online by marking it schedulable:
# replace <node-to-drain> with the name of your node
kubectl uncordon <node-to-drain>
Upgrade kube-proxy
- From a machine with access to the Kubernetes API, run the following, again replacing v1.22.0 with your desired version:
curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/kube-proxy.yml | sed 's/VERSION/v1.22.0/g' | kubectl apply -f -