Modifying the configuration of KUMA
The following KUMA configuration changes can be performed:
- Expanding an all-in-one installation to a distributed installation.
To expand an all-in-one installation to a distributed installation:
- Create a backup copy of KUMA.
- Remove the pre-installed correlator, collector, and storage services from the server.
- In the KUMA web interface, under Resources → Active services, select a service and click Copy ID. On the server where the services were installed, run the service removal command:
sudo /opt/kaspersky/kuma/kuma <collector/correlator/storage> --id <service ID copied from the KUMA web interface> --uninstall
Repeat the removal command for each service.
- Then delete the services in the KUMA web interface.
As a result, only the KUMA Core remains on the initial installation server.
- Prepare the distributed.inventory.yml inventory file and, in that file, specify the original all-in-one installation server in the kuma_core group. In this way, the KUMA Core remains on the original server, and you can deploy the other KUMA components on other servers. Specify the servers on which you want to install the KUMA components in the inventory file.
Sample inventory file for expanding an all-in-one installation to a distributed installation
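For orientation, a minimal sketch of such an inventory file is shown below. The hostnames are hypothetical, the parameter names follow the Kubernetes migration example later in this section, and the exact parameter set may differ depending on your installer version. The KUMA Core stays on the original all-in-one server, while the other components are assigned to new servers:
all:
  vars:
    ansible_connection: ssh
    ansible_user: root
    deploy_to_k8s: False
    need_transfer: False
    deploy_example_services: False
kuma:
  children:
    kuma_core:
      hosts:
        kuma1.example.com:   # original all-in-one server; the Core remains here
    kuma_collector:
      hosts:
        kuma-collector1.example.com:
    kuma_correlator:
      hosts:
        kuma-correlator1.example.com:
    kuma_storage:
      hosts:
        kuma-storage1.example.com:
          shard: 1
          replica: 1
          keeper: 1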
- Create and install the storage, collector, correlator, and agent services on other machines.
- After you specify the settings for all sections in the distributed.inventory.yml inventory file, run the installer on the test machine.
sudo ./install.sh distributed.inventory.yml
Running this command places the files necessary for installing the KUMA components (storages, collectors, correlators) on each target machine specified in the distributed.inventory.yml inventory file.
- Create storage, collector, and correlator services.
The expansion of the installation is completed.
- Adding servers for collectors to a distributed installation.
The following instructions show how to add one or more servers to an existing infrastructure and then install collectors on these servers to balance the load. You can use these instructions as an example and adapt them to your requirements.
To add servers to a distributed installation:
- Ensure that the target machines meet hardware, software, and installation requirements.
- On the test machine, go to the directory with the unpacked KUMA installer by running the following command:
cd kuma-ansible-installer
- Copy the expand.inventory.yml.template template to create an inventory file called expand.inventory.yml:
cp expand.inventory.yml.template expand.inventory.yml
- Edit the settings in the expand.inventory.yml inventory file and specify the servers that you want to add in the kuma_collector section.
Sample expand.inventory.yml inventory file for adding collector servers
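As a rough sketch, assuming the expand inventory uses the same group layout as the main inventory file (hostnames hypothetical), the new servers are listed under kuma_collector:
all:
  vars:
    ansible_connection: ssh
    ansible_user: root
kuma:
  children:
    kuma_collector:
      hosts:
        kuma-collector2.example.com:   # new collector servers to be added
        kuma-collector3.example.com: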
- On the test machine, start expand.inventory.playbook.yml by running the following command as root in the directory with the unpacked installer:
PYTHONPATH="$(pwd)/ansible/site-packages:${PYTHONPATH}" python3 ./ansible/bin/ansible-playbook -i expand.inventory.yml expand.inventory.playbook.yml
Running this command creates the files for creating and installing the collector on each target machine specified in the expand.inventory.yml inventory file.
- Create and install the collectors. A KUMA collector consists of a client part and a server part; therefore, creating a collector involves two steps.
- Creating the client part of the collector, which includes a set of resources and the collector service.
To create a set of resources for a collector, in the KUMA web interface, under Resources → Collectors, click Add collector and edit the settings. For more details, see Creating a collector.
At the last step of the configuration wizard, after you click Create and save, a resource set for the collector is created and the collector service is automatically created. The command for installing the service on the server is also automatically generated and displayed on the screen. Copy the installation command and proceed to the next step.
- Creating the server part of the collector.
- On the target machine, run the command you copied at the previous step. The command looks as follows, but all parameters are filled in automatically.
sudo /opt/kaspersky/kuma/kuma collector --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install
The collector service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.
- Run the same command on each target machine specified in the expand.inventory.yml inventory file.
- Specify the added servers in the distributed.inventory.yml inventory file so that it has up-to-date information in case of a KUMA update.
Servers are successfully added.
- Adding servers for correlators to a distributed installation.
The following instructions show how to add one or more servers to an existing infrastructure and then install correlators on these servers to balance the load. You can use these instructions as an example and adapt them to your requirements.
To add servers to a distributed installation:
- Ensure that the target machines meet hardware, software, and installation requirements.
- On the test machine, go to the directory with the unpacked KUMA installer by running the following command:
cd kuma-ansible-installer
- Copy the expand.inventory.yml.template template to create an inventory file called expand.inventory.yml:
cp expand.inventory.yml.template expand.inventory.yml
- Edit the settings in the expand.inventory.yml inventory file and specify the servers that you want to add in the kuma_correlator section.
Sample expand.inventory.yml inventory file for adding correlator servers
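By analogy with the collector scenario, a minimal sketch (hostnames hypothetical) lists the new servers under kuma_correlator:
all:
  vars:
    ansible_connection: ssh
    ansible_user: root
kuma:
  children:
    kuma_correlator:
      hosts:
        kuma-correlator2.example.com:   # new correlator servers to be added
        kuma-correlator3.example.com: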
- On the test machine, start expand.inventory.playbook.yml by running the following command as root in the directory with the unpacked installer:
PYTHONPATH="$(pwd)/ansible/site-packages:${PYTHONPATH}" python3 ./ansible/bin/ansible-playbook -i expand.inventory.yml expand.inventory.playbook.yml
Running this command creates the files for creating and installing the correlator on each target machine specified in the expand.inventory.yml inventory file.
- Create and install the correlators. A KUMA correlator consists of a client part and a server part; therefore, creating a correlator involves two steps.
- Creating the client part of the correlator, which includes a set of resources and the correlator service.
To create a resource set for a correlator, in the KUMA web interface, under Resources → Correlators, click Add correlator and edit the settings. For more details, see Creating a correlator.
At the last step of the configuration wizard, after you click Create and save, a resource set for the correlator is created and the correlator service is automatically created. The command for installing the service on the server is also automatically generated and displayed on the screen. Copy the installation command and proceed to the next step.
- Creating the server part of the correlator.
- On the target machine, run the command you copied at the previous step. The command looks as follows, but all parameter values are assigned automatically.
sudo /opt/kaspersky/kuma/kuma correlator --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install
The correlator service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.
- Run the same command on each target machine specified in the expand.inventory.yml inventory file.
- Specify the added servers in the distributed.inventory.yml inventory file so that it has up-to-date information in case of a KUMA update.
Servers are successfully added.
- Adding servers to an existing storage cluster.
The following instructions show how to add multiple servers to an existing storage cluster. You can use these instructions as an example and adapt them to your requirements.
To add servers to an existing storage cluster:
- Ensure that the target machines meet hardware, software, and installation requirements.
- On the test machine, go to the directory with the unpacked KUMA installer by running the following command:
cd kuma-ansible-installer
- Copy the expand.inventory.yml.template template to create an inventory file called expand.inventory.yml:
cp expand.inventory.yml.template expand.inventory.yml
- Edit the settings in the expand.inventory.yml inventory file and specify the servers that you want to add in the 'storage' section. In the following example, the 'storage' section specifies servers for installing two shards, each containing two replicas. In the expand.inventory.yml inventory file, specify only the FQDNs; the roles of shards and replicas are assigned later in the KUMA web interface, in the steps of these instructions. You can adapt this example to suit your needs.
Sample expand.inventory.yml inventory file for adding servers to an existing storage cluster
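A minimal sketch for this scenario (hostnames hypothetical; the 'storage' group name follows the wording above). As noted, only FQDNs are listed, and shard and replica roles are assigned later in the KUMA web interface:
all:
  vars:
    ansible_connection: ssh
    ansible_user: root
kuma:
  children:
    storage:
      hosts:
        kuma-storage-cluster1server8.example.com:    # shard 1, replica 1 (assigned in the web interface)
        kuma-storage-cluster1server9.example.com:    # shard 1, replica 2
        kuma-storage-cluster1server10.example.com:   # shard 2, replica 1
        kuma-storage-cluster1server11.example.com:   # shard 2, replica 2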
- On the test machine, start expand.inventory.playbook.yml by running the following command as root in the directory with the unpacked installer:
PYTHONPATH="$(pwd)/ansible/site-packages:${PYTHONPATH}" python3 ./ansible/bin/ansible-playbook -i expand.inventory.yml expand.inventory.playbook.yml
Running this command creates the files for creating and installing the storage on each target machine specified in the expand.inventory.yml inventory file.
- You do not need to create a separate storage because you are adding servers to an existing storage cluster. You must edit the storage settings of the existing cluster:
- In the Resources → Storages section, select an existing storage and open the storage for editing.
- In the ClickHouse cluster nodes section, click Add nodes and specify roles in the fields for the new node. The following example shows how to specify identifiers to add two shards, containing two replicas each, to an existing cluster. You can adapt the example to suit your needs.
Example:
ClickHouse cluster nodes
<existing nodes>
FQDN: kuma-storage-cluster1server8.example.com
Shard ID: 1
Replica ID: 1
Keeper ID: 0
FQDN: kuma-storage-cluster1server9.example.com
Shard ID: 1
Replica ID: 2
Keeper ID: 0
FQDN: kuma-storage-cluster1server10.example.com
Shard ID: 2
Replica ID: 1
Keeper ID: 0
FQDN: kuma-storage-cluster1server11.example.com
Shard ID: 2
Replica ID: 2
Keeper ID: 0
- Save the storage settings.
Now you can create storage services for each ClickHouse cluster node.
- To create a storage service, in the KUMA web interface, in the Resources → Active services section, click Add service.
This opens the Choose a service window; in that window, select the storage you edited at the previous step and click Create service. Do the same for each ClickHouse storage node you are adding.
As a result, the number of created services must be the same as the number of nodes added to the ClickHouse cluster, that is, four services for four nodes. The created storage services are displayed in the KUMA web interface in the Resources → Active services section.
- Now storage services must be installed on each server by using the service ID.
- In the KUMA web interface, in the Resources → Active services section, select the storage service that you need and click Copy ID.
The service ID is copied to the clipboard; you need it for running the service installation command.
- Compose and run the following command on the target machine:
sudo /opt/kaspersky/kuma/kuma storage --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install
The storage service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.
- Run the storage service installation command on each target machine listed in the 'storage' section of the expand.inventory.yml inventory file, one machine at a time. In the installation command on each machine, specify that machine's unique service ID within the cluster.
- To apply changes to a running cluster, in the KUMA web interface, under Resources → Active services, select the check box next to all storage services in the cluster that you are expanding and click Update configuration. Changes are applied without stopping services.
- Specify the added servers in the distributed.inventory.yml inventory file so that it has up-to-date information in case of a KUMA update.
Servers are successfully added to a storage cluster.
- Adding an additional storage cluster.
The following instructions show how to add an additional storage cluster to existing infrastructure. You can use these instructions as an example and adapt them to your requirements.
To add an additional storage cluster:
- Ensure that the target machines meet hardware, software, and installation requirements.
- On the test machine, go to the directory with the unpacked KUMA installer by running the following command:
cd kuma-ansible-installer
- Copy the expand.inventory.yml.template template to create an inventory file called expand.inventory.yml:
cp expand.inventory.yml.template expand.inventory.yml
- Edit the settings in the expand.inventory.yml inventory file and specify the servers that you want to add in the 'storage' section. In the following example, the 'storage' section specifies servers for installing three dedicated keepers and two shards, each containing two replicas. In the expand.inventory.yml inventory file, specify only the FQDNs; the roles of keepers, shards, and replicas are assigned later in the KUMA web interface, in the steps of these instructions. You can adapt this example to suit your needs.
Sample expand.inventory.yml inventory file for adding an additional storage cluster
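A rough sketch matching the described layout of three dedicated keepers plus two shards of two replicas each (hostnames hypothetical; only FQDNs are listed, since keeper, shard, and replica roles are assigned later in the KUMA web interface):
all:
  vars:
    ansible_connection: ssh
    ansible_user: root
kuma:
  children:
    storage:
      hosts:
        kuma-storage-cluster2keeper1.example.com:   # dedicated keepers
        kuma-storage-cluster2keeper2.example.com:
        kuma-storage-cluster2keeper3.example.com:
        kuma-storage-cluster2server1.example.com:   # shard/replica servers
        kuma-storage-cluster2server2.example.com:
        kuma-storage-cluster2server3.example.com:
        kuma-storage-cluster2server4.example.com: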
- On the test machine, start expand.inventory.playbook.yml by running the following command as root in the directory with the unpacked installer:
PYTHONPATH="$(pwd)/ansible/site-packages:${PYTHONPATH}" python3 ./ansible/bin/ansible-playbook -i expand.inventory.yml expand.inventory.playbook.yml
Running this command creates the files for creating and installing the storage on each target machine specified in the expand.inventory.yml inventory file.
- Create and install a storage. For each storage cluster, you must create a separate storage; that is, three storages for three storage clusters. A storage consists of a client part and a server part; therefore, creating a storage involves two steps.
- Creating the client part of the storage, which includes a set of resources and the storage service.
- To create a resource set for a storage, in the KUMA web interface, under Resources → Storages, click Add storage and edit the settings. In the ClickHouse cluster nodes section, specify roles for each server that you are adding: keeper, shard, replica. For more details, see Creating a set of resources for a storage.
The created set of resources for the storage is displayed in the Resources → Storages section. Now you can create storage services for each ClickHouse cluster node.
- To create a storage service, in the KUMA web interface, in the Resources → Active services section, click Add service.
This opens the Choose a service window; in that window, select the set of resources that you created for the storage at the previous step and click Create service. Do the same for each node of the ClickHouse cluster.
As a result, the number of created services must be the same as the number of nodes in the ClickHouse cluster; in this example, seven services for seven nodes (three dedicated keepers plus two shards of two replicas each). The created storage services are displayed in the KUMA web interface in the Resources → Active services section. Now storage services must be installed on each node of the ClickHouse cluster by using the service ID.
- Creating the server part of the storage.
- Create the server part of the storage: in the KUMA web interface, in the Resources → Active services section, select the relevant storage service and click Copy ID.
The service ID is copied to the clipboard; you need it for running the service installation command.
- Compose and run the following command on the target machine:
sudo /opt/kaspersky/kuma/kuma storage --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install
The storage service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.
- Run the storage service installation command on each target machine listed in the 'storage' section of the expand.inventory.yml inventory file, one machine at a time. In the installation command on each machine, specify that machine's unique service ID within the cluster.
- Dedicated keepers are automatically started immediately after installation and are displayed in the Resources → Active services section with a green status. Services on other storage nodes may not start until services are installed for all nodes in that cluster. Up to that point, services can be displayed with a red status. This is normal behavior for creating a new storage cluster or adding nodes to an existing storage cluster. As soon as the command to install services on all nodes of the cluster is executed, all services acquire the green status.
- Specify the added servers in the distributed.inventory.yml inventory file so that it has up-to-date information in case of a KUMA update.
An additional storage cluster is successfully added.
- Removing servers from a distributed installation.
To remove a server from a distributed installation:
- Remove all services from the server that you want to remove from the distributed installation.
- Remove the server part of the service. Copy the service ID in the KUMA web interface and run the following command on the target machine:
sudo /opt/kaspersky/kuma/kuma <collector/correlator/storage> --id <service ID copied from the KUMA web interface> --uninstall
- Remove the client part of the service in the KUMA web interface: in the Resources → Active services section, select the service and click Delete.
The service is removed.
- Repeat step 1 for each server that you want to remove from the infrastructure.
- Remove servers from the relevant sections of the distributed.inventory.yml inventory file to make sure the inventory file has up-to-date information in case of a KUMA update.
The servers are removed from the distributed installation.
- Removing a storage cluster from a distributed installation.
To remove one or more storage clusters from a distributed installation:
- Remove the storage service on each cluster server that you want to remove from the distributed installation.
- Remove the server part of the storage service. Copy the service ID in the KUMA web interface and run the following command on the target machine:
sudo /opt/kaspersky/kuma/kuma storage --id <service ID> --uninstall
Repeat for each server.
- Remove the client part of the service in the KUMA web interface: in the Resources → Active services section, select the service and click Delete.
The service is removed.
- Remove servers from the 'storage' section of the distributed.inventory.yml inventory file to make sure the inventory file has up-to-date information in case of a KUMA update or a configuration change.
The cluster is removed from the distributed installation.
- Migrating the KUMA Core to a new Kubernetes cluster.
Preparing the inventory file
When migrating the KUMA Core to a Kubernetes cluster, it is recommended to use the template file named k0s.inventory.yml.template when creating the inventory file.
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your inventory file must contain the same hosts that were used when upgrading KUMA from version 2.0.x to version 2.1 or when performing a new installation of the application. In the inventory file, set the deploy_to_k8s, need_transfer, and airgap parameters to true. The deploy_example_services parameter must be set to false.
Example inventory file with 1 dedicated controller and 2 worker nodes
all:
  vars:
    ansible_connection: ssh
    ansible_user: root
    deploy_to_k8s: True
    need_transfer: True
    airgap: True
    deploy_example_services: False
kuma:
  children:
    kuma_core:
      hosts:
        kuma.example.com:
          mongo_log_archives_number: 14
          mongo_log_frequency_rotation: daily
          mongo_log_file_size: 1G
    kuma_collector:
      hosts:
        kuma.example.com:
    kuma_correlator:
      hosts:
        kuma.example.com:
    kuma_storage:
      hosts:
        kuma.example.com:
          shard: 1
          replica: 1
          keeper: 1
kuma_k0s:
  children:
    kuma_control_plane_master:
      hosts:
        kuma2.example.com:   # dedicated controller
          ansible_host: 10.0.1.10
    kuma_control_plane_master_worker:
    kuma_control_plane:
    kuma_control_plane_worker:
    kuma_worker:             # 2 worker nodes
      hosts:
        kuma.example.com:
          ansible_host: 10.0.1.11
          extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true"
        kuma3.example.com:
          ansible_host: 10.0.1.12
          extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true"
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with this template file, it searches for an installed KUMA Core on all hosts where you intend to deploy worker nodes of the cluster. If a Core is found, it is moved from its host into the newly created Kubernetes cluster.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer does the following:
- Removes the following systemd services from the host: kuma-core, kuma-mongodb, kuma-victoria-metrics, kuma-vmalert, and kuma-grafana.
- Deletes the internal certificate of the Core.
- Deletes the certificate files of all other components and deletes their records from MongoDB.
- Deletes the following directories:
- /opt/kaspersky/kuma/core/bin
- /opt/kaspersky/kuma/core/certificates
- /opt/kaspersky/kuma/core/log
- /opt/kaspersky/kuma/core/logs
- /opt/kaspersky/kuma/grafana/bin
- /opt/kaspersky/kuma/mongodb/bin
- /opt/kaspersky/kuma/mongodb/log
- /opt/kaspersky/kuma/victoria-metrics/bin
- Migrates data from the Core and its dependencies to a network drive within the Kubernetes cluster.
- On the Core host, it moves the following directories:
- /opt/kaspersky/kuma/core
- /opt/kaspersky/kuma/grafana
- /opt/kaspersky/kuma/mongodb
- /opt/kaspersky/kuma/victoria-metrics
to the following directories:
- /opt/kaspersky/kuma/core.moved
- /opt/kaspersky/kuma/grafana.moved
- /opt/kaspersky/kuma/mongodb.moved
- /opt/kaspersky/kuma/victoria-metrics.moved
After you have verified that the Core was correctly migrated to the cluster, these directories can be deleted.
If you encounter problems with the migration, analyze the logs of the core-transfer migration task in the kuma namespace in the cluster (this task is available for 1 hour after the migration).
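For example, assuming the migration task runs as a Kubernetes job named core-transfer (hypothetical; the actual object name may differ), its logs could be inspected along these lines:
kubectl -n kuma get jobs                  # list tasks in the kuma namespace
kubectl -n kuma logs job/core-transfer    # view the logs of the migration task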
If you need to perform the migration again, you must rename the /opt/kaspersky/kuma/*.moved directories back to their original names.
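For example (paths taken from the list above):
sudo mv /opt/kaspersky/kuma/core.moved /opt/kaspersky/kuma/core
sudo mv /opt/kaspersky/kuma/grafana.moved /opt/kaspersky/kuma/grafana
sudo mv /opt/kaspersky/kuma/mongodb.moved /opt/kaspersky/kuma/mongodb
sudo mv /opt/kaspersky/kuma/victoria-metrics.moved /opt/kaspersky/kuma/victoria-metrics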
If the /etc/hosts file on the Core host contained entries unrelated to addresses in the 127.x.x.x range, those contents are copied into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of /etc/hosts from the host where the primary controller is deployed are copied into the ConfigMap.