Kaspersky Unified Monitoring and Analysis Platform

Distributed installation in a high availability configuration

The high availability configuration of KUMA involves deploying the KUMA Core on a Kubernetes cluster and using an external TCP traffic balancer.

To create a high availability KUMA installation, use the kuma-ansible-installer-ha-&lt;build number&gt;.tar.gz installer and prepare the k0s.inventory.yml inventory file by specifying the configuration of your cluster. For a new installation in a high availability configuration, OOTB resources are always imported. You can also perform an installation with deployment of demo services. To do this, set deploy_example_services: true in the inventory file.
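The fragment below is an illustrative sketch of how such an inventory file might look. The exact group and variable names are defined by the k0s.inventory.yml template shipped inside the installer archive; the host names and the kuma_control_plane/kuma_worker group names here are placeholders, not authoritative values.

```yaml
all:
  vars:
    ansible_connection: ssh
    ansible_user: root
    deploy_to_k8s: true
    # Set to true to deploy demo services during installation:
    deploy_example_services: true
  children:
    # Placeholder group names -- follow the template from the installer archive.
    kuma_control_plane:
      hosts:
        kuma-cp1.example.com:
        kuma-cp2.example.com:
        kuma-cp3.example.com:
    kuma_worker:
      hosts:
        kuma-worker1.example.com:
        kuma-worker2.example.com:
```

Three controller hosts and two worker hosts correspond to the minimum high availability configuration described below.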

You can deploy KUMA Core on a Kubernetes cluster in the following ways:

Minimum configuration

Kubernetes has 2 node roles:

  • Controllers (control-plane). Nodes with this role manage the cluster, store metadata, and balance the workload.
  • Workers (worker). Nodes with this role bear the workload by hosting KUMA processes.

To deploy KUMA in a high availability configuration, you need:

  • 3 dedicated controllers
  • 2 worker nodes
  • 1 TCP balancer

You must not use the balancer as the control machine for running the KUMA installer.
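As an illustration of the external TCP traffic balancer's role, the sketch below assumes nginx running in stream (TCP) mode on the dedicated balancer machine; nginx, the port number, and the worker host names are assumptions for this example, not values prescribed by this section. Take the actual port list and worker addresses from your deployment plan.

```nginx
# Sketch: TCP (stream) balancing toward the KUMA worker nodes.
# Port 7209 and the host names are placeholders.
stream {
    upstream kuma_core {
        server kuma-worker1.example.com:7209;
        server kuma-worker2.example.com:7209;
    }
    server {
        listen 7209;
        proxy_pass kuma_core;
    }
}
```

Because the balancer is a separate dedicated machine, it must not double as the control machine from which the KUMA installer is run.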

To ensure adequate performance of the KUMA Core in Kubernetes, you must allocate 3 dedicated nodes that have only the controller role. This provides high availability for the Kubernetes cluster itself and ensures that the workload (KUMA processes and other processes) cannot affect the tasks involved in managing the Kubernetes cluster. If you are using virtualization tools, make sure that the controller nodes are hosted on different physical servers and that these physical servers are not being used as worker nodes.

For a demo installation of KUMA, you may combine the controller and worker roles on the same nodes. However, to expand a demo installation into a distributed installation, you must reinstall the entire Kubernetes cluster and allocate 3 dedicated nodes with the controller role and at least 2 nodes with the worker role. KUMA cannot be upgraded to later versions if any node combines the controller and worker roles.

In this section

Additional requirements for deploying KUMA Core in Kubernetes

Installing KUMA on a Kubernetes cluster from scratch

Migrating the KUMA Core to a new Kubernetes cluster

KUMA Core availability in various scenarios

Managing Kubernetes and access to KUMA

Time zone in a Kubernetes cluster