Kaspersky Unified Monitoring and Analysis Platform

KUMA settings in the inventory file

The inventory file may include the following blocks:

  • all
  • kuma
  • kuma_k0s

For each host, you must specify the FQDN in the <host name>.<domain> format and, if necessary, an IPv4 or IPv6 address. The KUMA Core domain name and its subdomains may not start with a numeral.

Example:

hosts:
  hostname.example.com:
    ip: 0.0.0.0

The 'all' block

In this block, you can specify the variables that apply to all hosts listed in the inventory file, including the implicitly specified localhost on which the installation is started. Variables can be overridden at the level of host groups or individual hosts.

Example of overriding variables in the inventory file
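
The exact contents depend on the template you start from. The following minimal sketch (host and user names are placeholders; the kuma block is described later in this section) sets ansible_user for all hosts in the all block, overrides it for the whole kuma group, and overrides it again for an individual host:

all:
  vars:
    ansible_connection: ssh
    ansible_user: root
kuma:
  vars:
    ansible_user: admin               # overrides the value from the 'all' block for all hosts in the 'kuma' block
  children:
    kuma_collector:
      hosts:
        kuma-collector-1.example.com:
          ip: 0.0.0.0
          ansible_user: kuma          # overrides the group-level value for this host only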

The table below lists all possible variables in the vars section and their descriptions. An example combining several of these variables follows the table.

List of possible variables in the 'vars' section

Variable

Description

ansible_connection

Method used to connect to target machines.

Possible values:

  • ssh to connect to remote hosts over SSH.
  • local to run the installation on the local machine, without establishing a connection to remote hosts.

ansible_user

User name used to connect to target machines and install components.

If root login is blocked on the target machines, choose a user that has the right to establish SSH connections and elevate privileges using su or sudo.

ansible_become

This variable specifies whether you want to elevate the privileges of the user that is used to install KUMA components.

Possible values:

  • true. You must specify this value if ansible_user is not root.
  • false.

ansible_become_method

Method for elevating the privileges of the user that is used to install KUMA components.

You must specify su or sudo if ansible_user is not root.

ansible_ssh_private_key_file

Path to the private key in the /<path>/.ssh/id_rsa format. You must specify this variable if you want to use a key file other than the default key file (~/.ssh/id_rsa).

deploy_to_k8s

This variable specifies whether you want to deploy KUMA components in a Kubernetes cluster.

Possible values:

  • The false value is specified in the single.inventory.yml and distributed.inventory.yml templates.
  • The true value is specified in the k0s.inventory.yml template.

If you do not specify this variable, it defaults to false.

need_transfer

This variable specifies whether you want to migrate KUMA Core to a new Kubernetes cluster.

You need to specify this variable only if deploy_to_k8s is true.

Possible values:

  • true means that the KUMA Core is migrated to the new Kubernetes cluster.
  • false means that the KUMA Core is not migrated.

If you do not specify this variable, it defaults to false.

no_firewall_actions

This variable specifies whether the installer must perform the steps to configure the firewall on the hosts.

Possible values:

  • true means that at startup, the installer does not perform the steps to configure the firewall on the hosts.
  • false means that at startup, the installer performs the steps to configure the firewall on the hosts. This is the value that is specified in all inventory file templates.

If you do not specify this variable, it defaults to false.

generate_etc_hosts

This variable specifies whether the machines must be registered in the DNS zone of your organization.

The installer automatically adds the IP addresses of the machines from the inventory file to the /etc/hosts files on the machines on which KUMA components are installed. The specified IP addresses must be unique.

Possible values:

  • false.
  • true.

If you do not specify this variable, it defaults to false.

deploy_example_services

This variable specifies whether predefined services are created during the installation of KUMA.

You need to specify this variable only if you want to create the demo services, regardless of which inventory file you use (single, distributed, or k0s).

Possible values:

  • false means predefined services are not created when installing KUMA. This is the value that is specified in all inventory file templates.
  • true means predefined services are created when installing KUMA.

If you do not specify this variable, it defaults to false.

low_resources

This variable specifies whether KUMA is being installed in an environment with limited computational resources.

This variable is not specified in any of the inventory file templates.

Possible values:

  • false means KUMA is being installed for production use. In this case, the installer checks the requirements of the worker nodes (CPU, RAM, and free disk space) in accordance with the hardware and software requirements. If the requirements are not satisfied, the installation is aborted with an error message.
  • true means that KUMA is being installed in an environment with limited computational resources. In this case, the minimum size of the KUMA Core installation directory on the host is 4 GB. All other computational resource limitations are ignored.

If you do not specify this variable, it defaults to false.
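
As an illustration, the vars section of the all block for a test installation in a Kubernetes cluster under a non-root account could combine the variables described above as follows. The user name, key path, and chosen values are placeholders; specify only the variables you need:

all:
  vars:
    ansible_connection: ssh
    ansible_user: kuma-installer       # non-root user, so privilege elevation is required
    ansible_become: true
    ansible_become_method: sudo
    ansible_ssh_private_key_file: /home/kuma-installer/.ssh/id_rsa
    deploy_to_k8s: true                # deploy KUMA components in a Kubernetes cluster
    need_transfer: false               # do not migrate an existing KUMA Core
    no_firewall_actions: false         # let the installer configure the firewall on the hosts
    generate_etc_hosts: false
    deploy_example_services: true      # create the predefined demo services
    low_resources: true                # limited resources: only the 4 GB KUMA Core directory check applies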

The 'kuma' block

In this block, you can specify the settings of KUMA components deployed outside of the Kubernetes cluster. The kuma block can contain the following sections:

  • vars contains variables that apply to all hosts specified in the kuma block.
  • children contains groups of settings for components:
    • kuma_core contains settings of the KUMA Core. You can specify only one host and the following MongoDB database log rotation settings for the host:
      • mongo_log_archives_number is the number of previous logs that you want to keep when rotating the MongoDB database log.
      • mongo_log_file_size is the size of the MongoDB database log, in gigabytes, at which rotation begins. If the MongoDB database log never exceeds the specified size, no rotation occurs.
      • mongo_log_frequency_rotation is the interval for checking the size of the MongoDB database log for rotation purposes. Possible values:
        • hourly means the size of the MongoDB database log is checked every hour.
        • daily means the size of the MongoDB database log is checked every day.
        • weekly means the size of the MongoDB database log is checked every week.

      The MongoDB database log is stored in the /opt/kaspersky/kuma/mongodb/log directory.

      • raft_node_addr is the FQDN on which you want raft to listen for signals from other nodes. This value must be specified in the <host FQDN>:<port> format. If this setting is not specified explicitly, <host FQDN> defaults to the FQDN of the host on which the KUMA Core is deployed, and <port> defaults to 7209. You can specify an address of your choosing to adapt the KUMA Core to the configuration of your infrastructure.
    • kuma_collector contains settings of KUMA collectors. You can specify multiple hosts.
    • kuma_correlator contains settings of KUMA correlators. You can specify multiple hosts.
    • kuma_storage contains settings of KUMA storage nodes. You can specify multiple hosts as well as shard, replica, and keeper IDs for hosts using the following settings:
      • shard is the shard ID.
      • replica is the replica ID.
      • keeper is the keeper ID.

      The specified shard, replica, and keeper IDs are used only if you are deploying demo services as part of a fresh KUMA installation. In other cases, the shard, replica, and keeper IDs that you specified in the KUMA web interface when creating a resource set for the storage are used.
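
For reference, a kuma block for a small distributed installation might be laid out as follows. Host names, addresses, and the log rotation values are placeholders; check the inventory file templates supplied with the installer for the exact value formats:

kuma:
  children:
    kuma_core:
      hosts:
        kuma-core-1.example.com:
          ip: 0.0.0.0
          mongo_log_archives_number: 14           # keep 14 rotated logs (placeholder value)
          mongo_log_frequency_rotation: daily     # check the log size once a day
          mongo_log_file_size: 1G                 # rotate when the log reaches 1 GB (placeholder value)
          raft_node_addr: "kuma-core-1.example.com:7209"    # optional; defaults to <Core host FQDN>:7209
    kuma_collector:
      hosts:
        kuma-collector-1.example.com:
          ip: 0.0.0.0
    kuma_correlator:
      hosts:
        kuma-correlator-1.example.com:
          ip: 0.0.0.0
    kuma_storage:
      hosts:
        kuma-storage-1.example.com:
          ip: 0.0.0.0
          shard: 1
          replica: 1
          keeper: 1
        kuma-storage-2.example.com:
          ip: 0.0.0.0
          shard: 1
          replica: 2
          keeper: 2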

The 'kuma_k0s' block

In this block, you can specify the settings of the Kubernetes cluster that ensures high availability of KUMA. This block is specified only in an inventory file based on k0s.inventory.yml.template.

For test and demo installations in environments with limited computational resources, you must also set low_resources: true in the all block. In this case, the minimum size of the KUMA Core installation directory is reduced to 4 GB and the limitations of other computational resources are ignored.

For each host in the kuma_k0s block, a unique FQDN and IP address must be specified in the ansible_host variable, except for the host in the kuma_lb section. For the host in the kuma_lb section, only the FQDN must be specified. Hosts must be unique within a group.

For a demo installation, you may combine a controller with a worker node. Such a configuration does not provide high availability of the KUMA Core and is only intended for demonstrating the functionality or for testing the software environment.
The minimal configuration that ensures high availability is 3 controllers, 2 worker nodes, and 1 nginx load balancer. In production, we recommend using dedicated worker nodes and controllers. If a cluster controller also carries a workload and the pod with the KUMA Core is hosted on that controller, access to the KUMA Core will be completely lost if the controller fails.

The kuma_k0s block can contain the following sections:

  • vars contains variables that apply to all hosts specified in the kuma_k0s block.
  • children contains settings of the Kubernetes cluster that provides high availability of KUMA.

The following table lists the possible groups in the children section and their descriptions.

List of possible groups in the 'children' section

Group of variables

Description

kuma_lb

FQDN of the load balancer. You can install the nginx load balancer or a third-party TCP load balancer.

If you are installing the nginx load balancer, you can set kuma_managed_lb=true to have the installer automatically configure the nginx load balancer during KUMA installation, open the necessary network ports on the nginx load balancer host (6443, 8132, 9443, 7209, 7210, 7220, 7222, 7223, 7226, 8429), and restart it to apply the changes.

If you are installing a third-party TCP load balancer, you must manually configure it before installing KUMA.

kuma_control_plane_master

The host that acts as the primary controller of the cluster.

kuma_control_plane_master and kuma_control_plane_master_worker are the groups for specifying the primary controller. You only need to specify a host in one of these two groups.

kuma_control_plane_master_worker

A host that combines the roles of the primary controller and a worker node of the cluster. For each cluster controller that is combined with a worker node, you must specify extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true" in the inventory file.

kuma_control_plane

Hosts that act as secondary controllers of the cluster.

kuma_control_plane and kuma_control_plane_worker are the groups for specifying secondary controllers.

kuma_control_plane_worker 

Hosts that combine the roles of controller and worker node in the cluster. For each cluster controller that is combined with a worker node, you must specify extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true" in the inventory file.

kuma_worker 

Worker nodes of the cluster. For each worker node, you must specify extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true" in the inventory file.
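
As a sketch of the minimal high-availability layout described above (3 controllers, 2 worker nodes, and a managed nginx load balancer), the kuma_k0s block might look as follows. Host names and addresses are placeholders, and the exact group layout and placement of kuma_managed_lb should follow k0s.inventory.yml.template:

kuma_k0s:
  children:
    kuma_lb:
      hosts:
        kuma-lb.example.com:          # FQDN only; no ansible_host is needed for the load balancer
          kuma_managed_lb: true       # have the installer configure nginx on this host
    kuma_control_plane_master:
      hosts:
        kuma-cp-1.example.com:
          ansible_host: 192.168.0.1
    kuma_control_plane:
      hosts:
        kuma-cp-2.example.com:
          ansible_host: 192.168.0.2
        kuma-cp-3.example.com:
          ansible_host: 192.168.0.3
    kuma_worker:
      hosts:
        kuma-worker-1.example.com:
          ansible_host: 192.168.0.4
          extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true"
        kuma-worker-2.example.com:
          ansible_host: 192.168.0.5
          extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true"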