Kaspersky Unified Monitoring and Analysis Platform

Installing KUMA in production environment

Prior to installing the program, carefully read the KUMA installation requirements as well as the hardware and system requirements. The KUMA installation takes place over several stages:

  1. Configuring network access

    Make sure all the necessary ports are open to allow KUMA components to interact with each other based on your organization's security structure.

  2. Preparing the test machine

    The test machine is used during the program installation process: the installer files are unpacked and run on it.

  3. Preparing the target machines

    The program components are installed on the target machines.

  4. Preparing the inventory file

    Create an inventory file describing the network structure of the program components. The installer uses this file to deploy KUMA.

  5. Installing the program

    Install the program and get the URL and login credentials for the web interface.

  6. Creating services

    Create services in the KUMA web interface and install them on the target machines intended for them.

In this section

Configuring network access

Preparing the test machine

Preparing the target machine

Preparing the inventory file

Installing the program

Creating services

Changing CA certificate

Additional ClickHouse clusters


Configuring network access

For the program to run correctly, you need to ensure that the KUMA components are able to interact with other components and programs over the network via the protocols and ports specified during the installation of the KUMA components. The default network port values are listed below.

Network ports used for interaction between KUMA components:

  • HTTPS, port 7222
    Direction: from the KUMA client to the server with the KUMA Core component.
    Purpose: reverse proxy in the CyberTrace system.

  • HTTPS, port 8123
    Direction: from the storage service to the ClickHouse cluster node.
    Purpose: writing and receiving normalized events in the ClickHouse cluster.

  • HTTPS, port 9009
    Direction: between ClickHouse cluster replicas.
    Purpose: internal communication between ClickHouse cluster replicas for transferring cluster data.

  • TCP, port 2181
    Direction: from ClickHouse cluster nodes to the ClickHouse keeper replication coordination service.
    Purpose: receiving and writing of replication metadata by replicas of ClickHouse servers.

  • TCP, port 2182
    Direction: from one ClickHouse keeper replication coordination service to another.
    Purpose: internal communication between replication coordination services to reach a quorum.

  • TCP, port 7210
    Direction: from all KUMA components to the KUMA Core server.
    Purpose: receipt of the configuration by KUMA components from the KUMA Core server.

  • TCP, port 7215
    Direction: from the KUMA collector to the KUMA correlator.
    Purpose: forwarding of data by the collector to the KUMA correlator.

  • TCP, port 7220
    Direction: from the KUMA client to the server with the KUMA Core component.
    Purpose: user access to the KUMA web interface.

  • TCP, port 7221 and other ports used for service installation as the --api.port <port> parameter value
    Direction: from KUMA Core to KUMA services.
    Purpose: administration of services from the KUMA web interface.

  • TCP, port 7223
    Direction: to the KUMA Core server.
    Purpose: default port used for API requests.

  • TCP, port 8001
    Direction: from Victoria Metrics to the ClickHouse server.
    Purpose: receiving ClickHouse server operation metrics.

  • TCP, port 9000
    Direction: from the ClickHouse client to the ClickHouse cluster node.
    Purpose: writing and receiving data in the ClickHouse cluster.
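If a host firewall is enabled on the machines, the listed ports must be allowed through it. As an illustration, the commands below open the web interface, configuration, API, and ClickHouse ports on a combined KUMA Core and storage host (a sketch that assumes firewalld is used; adjust the set of ports to the roles installed on each machine):

  sudo firewall-cmd --permanent --add-port=7220/tcp   # user access to the KUMA web interface
  sudo firewall-cmd --permanent --add-port=7210/tcp   # configuration requests to the KUMA Core
  sudo firewall-cmd --permanent --add-port=7223/tcp   # API requests
  sudo firewall-cmd --permanent --add-port=8123/tcp   # storage service to the ClickHouse cluster node
  sudo firewall-cmd --permanent --add-port=9000/tcp   # ClickHouse client connections
  sudo firewall-cmd --reload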


Preparing the test machine

The test machine is used during the program installation process: the installer files are unpacked and run on it.

To prepare the test machine for the KUMA installation:

  1. Install an operating system on the test machine and then install the necessary packages.
  2. Configure the network interface.

    For convenience, you can use the graphical utility nmtui.

  3. Configure the system time to synchronize with the NTP server:
    1. If the machine does not have direct Internet access, edit the /etc/chrony.conf file to replace 2.pool.ntp.org with the name or IP address of your organization's internal NTP server.
    2. Start the system time synchronization service by executing the following command:

      sudo systemctl enable --now chronyd

    3. Wait a few seconds and execute the following command:

      sudo timedatectl | grep 'System clock synchronized'

      If the system time is synchronized correctly, the output will contain the line "System clock synchronized: yes".

  4. Generate an SSH key for authentication on the SSH servers of the target machines by executing the following command:

    sudo ssh-keygen -f /root/.ssh/id_rsa -N "" -C kuma-ansible-installer

  5. Make sure the test machine has network access to all the target machines by host name and copy the SSH key to each of them by executing the following command:

    sudo ssh-copy-id -i /root/.ssh/id_rsa root@<host name of the target machine>

  6. Copy the archive with the KUMA installer to the test machine and unpack it using the following command (about 2 GB of disk space is required):

    sudo tar -xpf kuma-ansible-installer-<version>.tar.gz

The test machine is ready for the KUMA installation.
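Optionally, you can verify that key-based SSH access from the test machine to each target machine works before proceeding (the host name below is an example; the command should print the FQDN of the target machine without prompting for a password):

  sudo ssh -i /root/.ssh/id_rsa root@kuma-1.mydomain.com hostname -f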


Preparing the target machine

The program components are installed on the target machines.

To prepare the target machine for the installation of KUMA components:

  1. Install an operating system on the target machine and then install the necessary packages.
  2. Configure the network interface.

    For convenience, you can use the graphical utility nmtui.

  3. Configure the system time to synchronize with the NTP server:
    1. If the machine does not have direct Internet access, edit the /etc/chrony.conf file to replace 2.pool.ntp.org with the name or IP address of your organization's internal NTP server.
    2. Start the system time synchronization service by executing the following command:

      sudo systemctl enable --now chronyd

    3. Wait a few seconds and execute the following command:

      sudo timedatectl | grep 'System clock synchronized'

      If the system time is synchronized correctly, the output will contain the line "System clock synchronized: yes".

  4. Specify the host name. It is highly recommended to use the FQDN. For example: kuma-1.mydomain.com.

    You should not change the KUMA host name after installation: this will make it impossible to verify the authenticity of certificates and will disrupt the network communication between the program components.

  5. Register the target machine in your organization's DNS zone to allow host names to be translated to IP addresses.

    If your organization does not use a DNS server, you can use the /etc/hosts file for name resolution. The contents of the /etc/hosts files can be generated automatically for each target machine when installing KUMA.

  6. Execute the following command and write down the result:

    hostname -f

    You will need this host name when installing KUMA. The test machine must be able to access the target machine using this name.

The target machine is ready for the installation of KUMA components.
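For step 4, on systemd-based distributions the host name is typically set with hostnamectl (a sketch; the FQDN below is an example):

  sudo hostnamectl set-hostname kuma-1.mydomain.com
  hostname -f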

The test machine can be used as a target one. To do so, prepare the test machine, then follow steps 4–6 in the instructions for preparing the target machine.


Preparing the inventory file

Installation, update, and removal of KUMA components is performed from the folder containing the unpacked installer by using the Ansible tool and the user-created inventory file containing a list of the hosts of KUMA components and other parameters. The inventory file is in the YAML format.

To create an inventory file:

  1. Go to the KUMA installer folder by executing the following command:

    cd kuma-ansible-installer

  2. Create an inventory file by copying the distributed.inventory.yml.template:

    cp distributed.inventory.yml.template distributed.inventory.yml

  3. Edit the inventory file parameters:
    • If you want demonstration services to be created during the installation, set the deploy_example_services parameter value to true.

      deploy_example_services: true

      Demonstration services can only be created during the initial installation of KUMA. When updating the system using the same inventory file, no demonstration services will be created.

    • If the machines are not registered in your organization's DNS zone, set the generate_etc_hosts parameter to true, and for each machine in the inventory, replace the ip (0.0.0.0) parameter values with the actual IP addresses.

      generate_etc_hosts: true

      When using this parameter, the installer will automatically add the IP addresses of the machines from the inventory file to the /etc/hosts files on the machines where KUMA components are installed.

    • If you are installing KUMA in a production environment and have a separate test machine, set the ansible_connection parameter to ssh:

      ansible_connection: ssh

  4. In the inventory file, specify the hosts of the target machines on which the KUMA components should be installed. If the machines are not registered in the DNS zone of your organization, replace the ip parameter values (0.0.0.0) with the actual IP addresses.

    The hosts are specified in the following sections of the inventory file:

    • core is the section for specifying the host and IP address of the target machine on which KUMA Core will be installed. You may only specify one host in this section.
    • collector is the section for specifying the host and IP address of the target machine on which the collector will be installed. You may specify one or more hosts in this section.
    • correlator is the section for specifying the host and IP address of the target machine on which the correlator will be installed. You may specify one or more hosts in this section.
    • storage is the section for specifying the hosts and IP addresses of the target machines on which storage components will be installed. You may specify one or more hosts in this section.

      Storage components: clusters, shards, replicas, and keepers.

      A ClickHouse cluster is a logical group of machines that possess all accumulated normalized KUMA events. It consists of one or more logical shards.

      A shard is a logical group of machines that possess a specific portion of all normalized events accumulated in the cluster. It consists of one or more replicas. Increasing the number of shards lets you do the following:

      • Accumulate more events by increasing the total number of servers and disk space.
      • Absorb a larger stream of events by distributing the load associated with an influx of new events.
      • Reduce the time taken to search for events by distributing search areas among multiple machines.

      A replica is a machine that is a member of the logical shard and possesses a copy of the data of this shard. If there are multiple replicas, there are multiple copies (data is replicated). Increasing the number of replicas lets you do the following:

      • Improve fault tolerance.
      • Distribute the total load related to data searches among multiple machines (although it's best to increase the number of shards for this purpose).

      A keeper is a machine that coordinates data replication at the cluster level. There must be at least one machine with this role for the entire cluster. The recommended number of machines with this role is 3. The number of machines involved in coordinating replication must be an odd number. The keeper and replica roles can be combined on one machine.

      Each machine in the storage section can have the following parameter combinations:

      • shard + replica + keeper
      • shard + replica
      • keeper

      If the shard and replica parameters are specified, the machine is a part of a cluster and helps accumulate and search for normalized KUMA events. If the keeper parameter is additionally specified, the machine also helps coordinate data replication at the cluster-wide level.

      If only keeper is specified, the machine will not accumulate normalized events, but it will participate in coordinating data replication at the cluster-wide level. The keeper parameter values must be unique.

      If several replicas are defined within the same shard, the value of the replica parameter must be unique within this shard.

The inventory file is created. It can be used to install KUMA.
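For reference, the fragment below illustrates how the sections and parameters described above might be filled in for one Core, one collector, one correlator, and a storage cluster consisting of one shard with two replicas and three keepers. It is only a sketch: the authoritative layout is defined by the distributed.inventory.yml.template file shipped with the installer, and all host names and IP addresses here are examples.

  all:
    vars:
      deploy_example_services: true
      generate_etc_hosts: false
      ansible_connection: ssh
    children:
      core:
        hosts:
          kuma-core.example.com:
            ip: 192.0.2.10
      collector:
        hosts:
          kuma-collector-1.example.com:
            ip: 192.0.2.20
      correlator:
        hosts:
          kuma-correlator-1.example.com:
            ip: 192.0.2.30
      storage:
        hosts:
          kuma-storage-1.example.com:
            ip: 192.0.2.40
            shard: 1
            replica: 1
            keeper: 1
          kuma-storage-2.example.com:
            ip: 192.0.2.41
            shard: 1
            replica: 2
            keeper: 2
          kuma-keeper-3.example.com:
            ip: 192.0.2.42
            keeper: 3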

It is recommended that you not remove the inventory file after installing KUMA:

  • If you change this file (for example, add information about a new server for the collector), you can reuse it to update the system with a new component.
  • You can use this inventory file to delete KUMA.

Installing the program

KUMA is installed using the Ansible tool and the YAML inventory file. The installation is performed from the test machine, which deploys all of the KUMA components to the target machines.

To install KUMA:

  1. On the test machine, open the folder containing the unpacked installer.
  2. Place the file with the license key in the folder <installer folder>/roles/kuma/files/.

    The key file must be named license.key.

  3. Launch the installer by executing the following command:

    sudo ./install.sh distributed.inventory.yml

  4. Accept the terms of the End User License Agreement.

    If you do not accept the terms of the End User License Agreement, the program will not be installed.

KUMA components are installed on the target machines. The screen will display the URL of the KUMA web interface and the user name and password that must be used to access the web interface.

By default, the KUMA web interface address is https://kuma.example.com:7220.

Default login credentials (after the first login, you must change the password of the admin account):

  • User name: admin
  • Password: mustB3Ch@ng3d!

It is recommended that you save the inventory file used to install the program. It can be used to add components to the system or remove KUMA.
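If the web interface is not reachable after installation, a basic first check is the state of the Core service on the machine where KUMA Core is installed (the kuma-core unit name is the same one used in the Changing CA certificate section):

  sudo systemctl status kuma-core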


Creating services

KUMA services should be installed only after KUMA deployment is complete. The services can be installed in any order.

When deploying several KUMA services on the same host, you must specify unique ports for each service using the --api.port <port> parameters during installation.
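After installing several services on one host, you can confirm that each service is listening on its own port (a generic check; 7221 and 7225 are example values passed via --api.port):

  sudo ss -tlnp | grep -E ':(7221|7225)'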

For instructions on creating specific services, refer to the corresponding sections.


Changing CA certificate

After KUMA Core is installed, a unique self-signed CA certificate with the matching key is generated. This CA certificate is used to sign all other certificates for internal communication between KUMA components and REST API requests. The CA certificate is stored on the KUMA Core server in the /opt/kaspersky/kuma/core/certificates/ folder.

You can use your company's certificate and key instead of the self-signed KUMA CA certificate and key.

Before changing the KUMA certificate, make a backup copy of the previous certificate and key, naming them backup_external.cert and backup_external.key.

To change the KUMA certificate:

  1. Rename your company's certificate and key files to external.cert and external.key.

    Keys must be in PEM format.

  2. Place external.cert and external.key to the /opt/kaspersky/kuma/core/certificates/ folder.
  3. Restart the kuma-core service by running the sudo systemctl restart kuma-core command.
  4. Restart the browser hosting the KUMA web interface.

Your company's certificate and key are now used for internal communication between KUMA components and REST API requests.
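A possible command sequence for this procedure on the KUMA Core server is shown below. It is only a sketch: it assumes the current files in the certificates folder are named external.cert and external.key, and the paths of your company's certificate and key (/tmp/company.cert and /tmp/company.key) are hypothetical.

  sudo cp /opt/kaspersky/kuma/core/certificates/external.cert /opt/kaspersky/kuma/core/certificates/backup_external.cert
  sudo cp /opt/kaspersky/kuma/core/certificates/external.key /opt/kaspersky/kuma/core/certificates/backup_external.key
  sudo cp /tmp/company.cert /opt/kaspersky/kuma/core/certificates/external.cert
  sudo cp /tmp/company.key /opt/kaspersky/kuma/core/certificates/external.key
  sudo systemctl restart kuma-core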


Additional ClickHouse clusters

More than one ClickHouse storage cluster can be added to KUMA. The process of adding a new ClickHouse cluster consists of several steps:

  1. Preparing the target machine

    On the target machine, specify the FQDN of the server with the KUMA Core in the /etc/hosts file.

  2. Preparing cluster inventory file

    Depending on the type of installation (local or remote), the inventory file is prepared on the target machine or on the KUMA Core machine.

  3. Installing additional cluster
  4. Creating a storage

When creating storage cluster nodes, verify the network connectivity of the system and open the ports used by the components.
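For step 1, the /etc/hosts entry on the target machine might look like the following (the IP address and FQDN are examples):

  echo "192.0.2.10 kuma-core.example.com" | sudo tee -a /etc/hosts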

In this section

Preparing cluster inventory file

Installing additional cluster

Deleting a cluster


Preparing cluster inventory file

Installation, update, and removal of KUMA components is performed from the directory containing the unpacked installer by using the Ansible tool and the user-created inventory file containing a list of the hosts of KUMA components and other parameters. The inventory file is in the YAML format.

To create an inventory file:

  1. Go to the directory with the unpacked KUMA installer by executing the following command:

    cd kuma-ansible-installer

  2. Create an inventory file by copying additional-storage-cluster.inventory.yml.template:

    cp additional-storage-cluster.inventory.yml.template additional-storage-cluster.inventory.yml

  3. Edit the inventory file parameters:
    • If you want demonstration services to be created during the installation, set the deploy_example_services parameter value to true.

      deploy_example_services: true

      Demonstration services can only be created during the initial installation of KUMA. When updating the system using the same inventory file, no demonstration services will be created.

    • If the machines are not registered in your organization's DNS zone, set the generate_etc_hosts parameter to true, and for each machine in the inventory, replace the ip (0.0.0.0) parameter values with the actual IP addresses.

      generate_etc_hosts: true

      When using this parameter, the installer will automatically add the IP addresses of the machines from the inventory file to the /etc/hosts files on the machines where KUMA components are installed.

    • Set the ansible_connection parameter:
      • Specify local if you want to install the cluster locally:

        ansible_connection: local

      • Specify ssh if you want to install the cluster remotely, from a server with KUMA Core installed:

        ansible_connection: ssh

  4. In the storage section of the inventory file, specify the fully qualified domain names (FQDNs) of the hosts on which you want to install the cluster nodes. If the machines are not registered in the DNS zone of your organization, replace the ip parameter values (0.0.0.0) with the actual IP addresses.

    Storage components: clusters, shards, replicas, and keepers.

    A ClickHouse cluster is a logical group of machines that possess all accumulated normalized KUMA events. It consists of one or more logical shards.

    A shard is a logical group of machines that possess a specific portion of all normalized events accumulated in the cluster. It consists of one or more replicas. Increasing the number of shards lets you do the following:

    • Accumulate more events by increasing the total number of servers and disk space.
    • Absorb a larger stream of events by distributing the load associated with an influx of new events.
    • Reduce the time taken to search for events by distributing search areas among multiple machines.

    A replica is a machine that is a member of the logical shard and possesses a copy of the data of this shard. If there are multiple replicas, there are multiple copies (data is replicated). Increasing the number of replicas lets you do the following:

    • Improve fault tolerance.
    • Distribute the total load related to data searches among multiple machines (although it's best to increase the number of shards for this purpose).

    A keeper is a machine that coordinates data replication at the cluster level. There must be at least one machine with this role for the entire cluster. The recommended number of machines with this role is 3. The number of machines involved in coordinating replication must be an odd number. The keeper and replica roles can be combined on one machine.

    Each machine in the storage section can have the following parameter combinations:

    • shard + replica + keeper
    • shard + replica
    • keeper

    If the shard and replica parameters are specified, the machine is a part of a cluster and helps accumulate and search for normalized KUMA events. If the keeper parameter is additionally specified, the machine also helps coordinate data replication at the cluster-wide level.

    If only keeper is specified, the machine will not accumulate normalized events, but it will participate in coordinating data replication at the cluster-wide level. The keeper parameter values must be unique.

    If several replicas are defined within the same shard, the value of the replica parameter must be unique within this shard.

The inventory file is created. It can be used to create a ClickHouse cluster.
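As with the main inventory file, the fragment below is only an illustrative sketch: a local installation of a single-shard cluster with three combined replica and keeper nodes. The authoritative layout is defined by additional-storage-cluster.inventory.yml.template, and all host names and IP addresses here are examples.

  all:
    vars:
      deploy_example_services: false
      generate_etc_hosts: false
      ansible_connection: local
    children:
      storage:
        hosts:
          kuma-storage-10.example.com:
            ip: 192.0.2.50
            shard: 1
            replica: 1
            keeper: 1
          kuma-storage-11.example.com:
            ip: 192.0.2.51
            shard: 1
            replica: 2
            keeper: 2
          kuma-storage-12.example.com:
            ip: 192.0.2.52
            shard: 1
            replica: 3
            keeper: 3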

It is recommended that you not remove the inventory file after installing KUMA:

  • If you change this file (for example, add information about a new server for the collector), you can reuse it to update the system with a new component.
  • You can use this inventory file to delete KUMA.

Installing additional cluster

The additional ClickHouse cluster is installed using the Ansible tool and the YAML inventory file.

To install an additional KUMA cluster:

  1. On a preconfigured target machine or on a machine with the KUMA Core installed (depending on the ansible_connection setting), open the folder containing the unpacked installer.
  2. Launch the installer by executing the following command:

    PYTHONPATH="$(pwd)/ansible/site-packages:${PYTHONPATH}" python3 ./ansible/bin/ansible-playbook -i additional-storage-cluster.inventory.yml additional-storage-cluster.playbook.yml

The additional ClickHouse cluster is installed. To write data to the cluster using KUMA, you need to create a storage.


Deleting a cluster

To delete a ClickHouse cluster,

execute the following command:

systemctl stop kuma-storage-<storage ID> && \
systemctl stop kuma-clickhouse && \
systemctl disable kuma-storage-<storage ID> && \
systemctl disable kuma-clickhouse && \
rm -rf /usr/lib/systemd/system/kuma-storage-<storage ID>.service && \
rm -rf /usr/lib/systemd/system/kuma-clickhouse.service && \
systemctl daemon-reload && \
rm -rf /opt/kaspersky/kuma

The KUMA storage and ClickHouse cluster services are stopped and deleted.
