Kaspersky Unified Monitoring and Analysis Platform

Managing Kubernetes and accessing KUMA

When KUMA is installed in a high availability configuration, a file named ./artifacts/k0s-kubeconfig.yml is created in the installer directory. This file contains the details required to connect to the created Kubernetes cluster. An identical file is created on the main controller, in the home directory of the user specified as the ansible_user in the inventory file.
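
For example, an administrator who has copied the file to a workstation with kubectl installed can connect to the cluster as follows (a minimal sketch; the file path is a placeholder):

export KUBECONFIG=<path to k0s-kubeconfig.yml>
kubectl get nodes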

To ensure that the Kubernetes cluster can be monitored and managed, the k0s-kubeconfig.yml file must be saved in a location accessible to the cluster administrators. Access to the file must be restricted.
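
For example, on a Linux host you can restrict the file to its owner using standard file permissions (a sketch; adjust the path and ownership to your environment):

chmod 600 <path to k0s-kubeconfig.yml>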

Managing the Kubernetes cluster

To monitor and manage the cluster, you can use the k0s application, which is installed on all cluster nodes during KUMA deployment. For example, the following command shows the load on the worker nodes:

k0s kubectl top nodes
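
Other standard kubectl commands can be run through k0s in the same way. For example, the following commands list the cluster nodes and the pods in all namespaces (run them on a cluster node with sufficient privileges):

k0s kubectl get nodes
k0s kubectl get pods --all-namespaces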

Access to the KUMA Core

The URL of the KUMA Core is https://<worker node FQDN>:<worker node port>. Available ports: 7209, 7210, 7220, 7222, 7223. Port 7220 is the default port for connecting to the KUMA Core web interface. Any worker node whose extra_args parameter contains the value kaspersky.com/kuma-ingress=true can be used as an access point.
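
To check which worker nodes can serve as access points, you can list the nodes that carry the corresponding label (this assumes the extra_args value is applied to the node as a Kubernetes label):

k0s kubectl get nodes -l kaspersky.com/kuma-ingress=true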

It is not possible to log in to the KUMA web interface on multiple worker nodes simultaneously using the same credentials. Only the most recently established connection remains active.

If you are using an external load balancer in the high availability Kubernetes cluster configuration, you must use the FQDN of the load balancer to access the KUMA Core ports.
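
As a quick reachability check, you can request the web interface port through the load balancer (an illustrative example; the -k option skips certificate verification and the FQDN is a placeholder):

curl -k https://<load balancer FQDN>:7220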