Updating previous versions of KUMA
The upgrade procedure is the same for all hosts and involves using the installer and inventory file.
Version upgrade scheme:
2.0.x → 2.1.3 → 3.0.3 → 3.2.x
2.1.x → 2.1.3 → 3.0.3 → 3.2.x
2.1.3 → 3.0.3 → 3.2.x
3.0.x → 3.0.3 → 3.2.x
Upgrading from version 2.0.x to 2.1.3
To install KUMA version 2.1.3 over version 2.0.x, complete the preliminary steps and then perform the upgrade.
Preliminary steps
- Create a backup copy of the KUMA Core. If necessary, you will be able to recover from a backup copy for version 2.0.
KUMA backups created in versions 2.0 and earlier cannot be restored in version 2.1.3. This means that you cannot install KUMA 2.1.3 from scratch and restore a KUMA 2.0 backup in it.
Create a backup copy immediately after upgrading KUMA to version 2.1.3.
- Make sure that all application installation requirements are met.
- Make sure that MongoDB versions are compatible by running the following commands on the KUMA Core device:
cd /opt/kaspersky/kuma/mongodb/bin/
./mongo
use kuma
db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})
If the component version is different from 4.4, set the version to 4.4 using the following command:
db.adminCommand({ setFeatureCompatibilityVersion: "4.4" })
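In both cases, you can run the getParameter command shown above again to confirm the value. On a compatible installation, the expected output is similar to the following (a reference sketch; the exact formatting depends on the MongoDB shell version):
{ "featureCompatibilityVersion" : { "version" : "4.4" }, "ok" : 1 }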
- During installation or upgrade, make sure that TCP port 7220 on the KUMA Core is accessible from the KUMA storage hosts.
- If you have a keeper deployed on a separate device in the ClickHouse cluster, install a storage service on that device before you start the upgrade:
- In the web interface, create a storage service for the keeper by using the existing storage of the cluster.
- Install the service on the device with the dedicated ClickHouse keeper.
- In the inventory file, specify the same hosts that were used when installing KUMA version 2.0.X. Set the following settings to false (see the example inventory fragment after this list):
- deploy_to_k8s false
- need_transfer false
- deploy_example_services false
When the installer uses this inventory file, all KUMA components are upgraded to version 2.1.3. The available services and storage resources are also reconfigured on hosts from the kuma_storage group:
- ClickHouse's systemd services are removed.
- Certificates are deleted from the /opt/kaspersky/kuma/clickhouse/certificates directory.
- The 'Shard ID', 'Replica ID', 'Keeper ID', and 'ClickHouse configuration override' fields are filled in for each node in the storage resource based on values from the inventory file and service configuration files on the host. Subsequently, you will manage the roles of each node in the KUMA web interface.
- All existing configuration files from the /opt/kaspersky/kuma/clickhouse/cfg directory are deleted (subsequently, they will be generated by the storage service).
- The value of the LimitNOFILE parameter ('Service' section) is changed from 64,000 to 500,000 in the kuma-storage systemd services.
- If you use alert segmentation rules, prepare the data needed to migrate the existing rules and save it. You can use this data to re-create the rules after the upgrade; alert segmentation rules are not migrated automatically during the upgrade.
- To perform the upgrade, you will need the password of the admin user. If you forgot the password of the admin user, contact Technical Support to reset the current password, then use the new password to perform the upgrade at the next step.
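The inventory fragment with the settings mentioned above might look like the following. This is a minimal sketch that assumes the standard layout of the distributed.inventory.yml template; the host names are placeholders, and your file may contain additional variables and hosts.
all:
  vars:
    deploy_to_k8s: false
    need_transfer: false
    deploy_example_services: false
  children:
    kuma:
      children:
        kuma_core:
          hosts:
            kuma-core-1.example.com:
        kuma_collector:
          hosts:
            kuma-collector-1.example.com:
        kuma_correlator:
          hosts:
            kuma-correlator-1.example.com:
        kuma_storage:
          hosts:
            kuma-storage-1.example.com: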
Upgrading KUMA
- Depending on the KUMA deployment scheme that you are using, do one of the following:
- Use the prepared distributed.inventory.yml inventory file and follow the instructions for distributed installation of the application.
- Use the prepared k0s.inventory.yml inventory file and follow the instructions for distributed installation in a high availability configuration.
If an inventory file is not available for the current version, use the provided inventory file template and edit it as necessary. To view a list of hosts and host roles in your current KUMA system, go to the Resources → Active services section of the web interface.
The upgrade process mirrors the installation process.
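For example, for a distributed installation the upgrade is typically started from the directory with the extracted installer by passing the inventory file to the installer script (a sketch; the exact command and options are described in the distributed installation instructions):
sudo ./install.sh distributed.inventory.yml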
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, you must first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
- Prepare the k0s.inventory.yml inventory file.
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set deploy_to_k8s: true and need_transfer: true, and set deploy_example_services: false (see the vars sketch after these steps).
- Follow the steps for distributed installation using your prepared k0s.inventory.yml inventory file.
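For reference, the relevant variables in the k0s.inventory.yml file would look similar to the following (a minimal sketch showing only these settings; the placement under all: vars: is an assumption based on the standard template, and the rest of the file keeps your existing host groups):
all:
  vars:
    deploy_to_k8s: true
    need_transfer: true
    deploy_example_services: false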
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with an inventory file, the installer looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host to the newly created Kubernetes cluster.
Resolving the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be aborted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of core-transfer migration tasks:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
To prevent this error, before you start migrating the KUMA Core:
- Go to the directory with the extracted installer and open the roles/k0s_prepare/templates/core-transfer-job.yaml.j2 file for editing.
- In the core-transfer-job.yaml.j2 file, find the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
- Edit these lines as follows, making sure you keep the indentation (number of space characters):
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
- Save the changes to the file.
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed with an error, follow the steps below to fix the error.
To fix the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
- On any controller of the cluster, delete the Ingress object by running the following command:
sudo k0s kubectl delete daemonset/ingress -n ingress
- Check if a migration job exists in the cluster:
sudo k0s kubectl get jobs -n kuma
- If a migration job exists, delete it:
sudo k0s kubectl delete job core-transfer -n kuma
- Go to the console of a host from the kuma_core group.
- Start the KUMA Core services by running the following commands:
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
- Make sure that the kuma-core-00000000-0000-0000-0000-000000000000 service has been successfully started:
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
- Make sure that the kuma_core group has access to the KUMA interface by host FQDN.
Other hosts do not need to be running.
- Go to the directory with the extracted installer and open the roles/k0s_prepare/templates/core-transfer-job.yaml.j2 file for editing.
- In the core-transfer-job.yaml.j2 file, find the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
- Edit these lines as follows, making sure you keep the indentation (number of space characters):
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
- Save the changes to the file.
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. The migration of the KUMA Core from a host to a new Kubernetes cluster will succeed.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually recreated with the new Core in the KUMA web interface.
For collectors, correlators and storages from the inventory file, certificates for communication with the Core inside the cluster will be reissued. This does not change the URL of the Core for components.
On the Core host, the installer does the following:
- Removes the following systemd services from the host: kuma-core, kuma-mongodb, kuma-victoria-metrics, kuma-vmalert, and kuma-grafana.
- Deletes the internal certificate of the Core.
- Deletes the certificate files of all other components and deletes their records from MongoDB.
- Deletes the following directories:
- /opt/kaspersky/kuma/core/bin
- /opt/kaspersky/kuma/core/certificates
- /opt/kaspersky/kuma/core/log
- /opt/kaspersky/kuma/core/logs
- /opt/kaspersky/kuma/grafana/bin
- /opt/kaspersky/kuma/mongodb/bin
- /opt/kaspersky/kuma/mongodb/log
- /opt/kaspersky/kuma/victoria-metrics/bin
- Migrates data from the Core and its dependencies to a network drive within the Kubernetes cluster.
- On the Core host, it moves the following directories:
- /opt/kaspersky/kuma/core → /opt/kaspersky/kuma/core.moved
- /opt/kaspersky/kuma/grafana → /opt/kaspersky/kuma/grafana.moved
- /opt/kaspersky/kuma/mongodb → /opt/kaspersky/kuma/mongodb.moved
- /opt/kaspersky/kuma/victoria-metrics → /opt/kaspersky/kuma/victoria-metrics.moved
After you have verified that the Core was correctly migrated to the cluster, you can delete these directories.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must restore the original names of the /opt/kaspersky/kuma/*.moved directories.
If the /etc/hosts file on the Core host contained lines that were not related to addresses in the 127.X.X.X range, the contents of the /etc/hosts file from the Core host are entered into the coredns ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
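If you need to review what was written to the coredns ConfigMap after the migration, you can inspect it on a cluster controller, for example (assuming the default kube-system namespace used by the cluster DNS):
sudo k0s kubectl get configmap coredns -n kube-system -o yaml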
- When upgrading on systems that contain large amounts of data and are operating with limited resources, the system may return the 'Wrong admin password' error message after you enter the administrator password. Even if you specify the correct password, KUMA may return this error if it cannot start the Core service because of a timeout caused by resource limitations. If you enter the administrator password three times without waiting for the installation to complete, the update may end with a fatal error. Resolve the timeout error to proceed with the update.
The final stage of preparing KUMA for work
- After upgrading KUMA, clear your browser cache.
- Re-create the alert segmentation rules.
- Manually upgrade the KUMA agents.
KUMA is successfully upgraded.
Upgrading from version 2.1.x to 2.1.3
To install KUMA version 2.1.3 over version 2.1.x, complete the preliminary steps and then perform the upgrade.
Preliminary steps
- Create a backup copy of the KUMA Core. If necessary, you will be able to recover from a backup copy for version 2.1.x.
KUMA backups created in versions earlier than 2.1.3 cannot be restored in version 2.1.3. This means that you cannot install KUMA 2.1.3 from scratch and restore a KUMA 2.1.x backup in it.
Create a backup copy immediately after upgrading KUMA to version 2.1.3.
- Make sure that all application installation requirements are met.
- During installation or upgrade, make sure that TCP port 7220 on the KUMA Core is accessible from the KUMA storage hosts (see the example check after this list).
- To perform the upgrade, you will need the password of the admin user. If you forgot the password of the admin user, contact Technical Support to reset the current password, then use the new password to perform the upgrade at the next step.
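A quick way to check the port from a storage host, if the nc utility is available (the FQDN below is a placeholder for your KUMA Core host):
nc -zv kuma-core.example.com 7220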
Upgrading KUMA
- Depending on the KUMA deployment scheme that you are using, do one of the following:
- Use the prepared distributed.inventory.yml inventory file and follow the instructions for distributed installation of the application.
- Use the prepared k0s.inventory.yml inventory file and follow the instructions for distributed installation in a high availability configuration.
If an inventory file is not available for the current version, use the provided inventory file template and edit it as necessary. To view a list of hosts and host roles in your current KUMA system, go to the Resources → Active services section of the web interface.
The upgrade process mirrors the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, you must first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
- Prepare the k0s.inventory.yml inventory file.
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set deploy_to_k8s: true and need_transfer: true, and set deploy_example_services: false.
- Follow the steps for distributed installation using your prepared k0s.inventory.yml inventory file.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with an inventory file, the installer looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host to the newly created Kubernetes cluster.
Resolving the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be aborted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of core-transfer migration tasks:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
To prevent this error, before you start migrating the KUMA Core:
- Go to the directory with the extracted installer and open the roles/k0s_prepare/templates/core-transfer-job.yaml.j2 file for editing.
- In the core-transfer-job.yaml.j2 file, find the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
- Edit these lines as follows, making sure you keep the indentation (number of space characters):
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
- Save the changes to the file.
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed with an error, follow the steps below to fix the error.
To fix the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
- On any controller of the cluster, delete the Ingress object by running the following command:
sudo k0s kubectl delete daemonset/ingress -n ingress
- Check if a migration job exists in the cluster:
sudo k0s kubectl get jobs -n kuma
- If a migration job exists, delete it:
sudo k0s kubectl delete job core-transfer -n kuma
- Go to the console of a host from the kuma_core group.
- Start the KUMA Core services by running the following commands:
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
- Make sure that the kuma-core-00000000-0000-0000-0000-000000000000 service has been successfully started:
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
- Make sure that the kuma_core group has access to the KUMA interface by host FQDN.
Other hosts do not need to be running.
- Go to the directory with the extracted installer and open the roles/k0s_prepare/templates/core-transfer-job.yaml.j2 file for editing.
- In the core-transfer-job.yaml.j2 file, find the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
- Edit these lines as follows, making sure you keep the indentation (number of space characters):
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
- Save the changes to the file.
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. The migration of the KUMA Core from a host to a new Kubernetes cluster will succeed.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually recreated with the new Core in the KUMA web interface.
For collectors, correlators and storages from the inventory file, certificates for communication with the Core inside the cluster will be reissued. This does not change the URL of the Core for components.
On the Core host, the installer does the following:
- Removes the following systemd services from the host: kuma-core, kuma-mongodb, kuma-victoria-metrics, kuma-vmalert, and kuma-grafana.
- Deletes the internal certificate of the Core.
- Deletes the certificate files of all other components and deletes their records from MongoDB.
- Deletes the following directories:
- /opt/kaspersky/kuma/core/bin
- /opt/kaspersky/kuma/core/certificates
- /opt/kaspersky/kuma/core/log
- /opt/kaspersky/kuma/core/logs
- /opt/kaspersky/kuma/grafana/bin
- /opt/kaspersky/kuma/mongodb/bin
- /opt/kaspersky/kuma/mongodb/log
- /opt/kaspersky/kuma/victoria-metrics/bin
- Migrates data from the Core and its dependencies to a network drive within the Kubernetes cluster.
- On the Core host, it moves the following directories:
- /opt/kaspersky/kuma/core → /opt/kaspersky/kuma/core.moved
- /opt/kaspersky/kuma/grafana → /opt/kaspersky/kuma/grafana.moved
- /opt/kaspersky/kuma/mongodb → /opt/kaspersky/kuma/mongodb.moved
- /opt/kaspersky/kuma/victoria-metrics → /opt/kaspersky/kuma/victoria-metrics.moved
After you have verified that the Core was correctly migrated to the cluster, you can delete these directories.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must restore the original names of the /opt/kaspersky/kuma/*.moved directories.
If the /etc/hosts file on the Core host contained lines that were not related to addresses in the 127.X.X.X range, the contents of the /etc/hosts file from the Core host are entered into the coredns ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
- When upgrading on systems that contain large amounts of data and are operating with limited resources, the system may return the 'Wrong admin password' error message after you enter the administrator password. Even if you specify the correct password, KUMA may return this error if it cannot start the Core service because of a timeout caused by resource limitations. If you enter the administrator password three times without waiting for the installation to complete, the update may end with a fatal error. Resolve the timeout error to proceed with the update.
The final stage of preparing KUMA for work
- After updating KUMA, you must clear your browser cache.
- Manually update the KUMA agents.
KUMA update completed successfully.
Upgrading from version 2.1.3 to 3.0.3
To install KUMA version 3.0.3 over version 2.1.3, complete the preliminary steps and then perform the upgrade.
Preliminary steps
- Create a backup copy of the KUMA Core. If necessary, you will be able to restore data from backup for version 2.1.3.
KUMA backups created in versions 2.1.3 and earlier cannot be restored in version 3.0.3. This means that you cannot install KUMA 3.0.3 from scratch and restore a KUMA 2.1.3 backup in it.
Create a backup copy immediately after upgrading KUMA to version 3.0.3.
- Make sure that all application installation requirements are met.
- During installation or upgrade, make sure that TCP port 7220 on the KUMA Core is accessible from the KUMA storage hosts.
Updating KUMA
Depending on the KUMA deployment scheme that you are using, do one of the following:
- Use the prepared distributed.inventory.yml inventory file and follow the instructions for distributed installation of the application.
- Use the prepared k0s.inventory.yml inventory file and follow the instructions for distributed installation in a high availability configuration.
If an inventory file is not available for the current version, use the provided inventory file template and edit it as necessary. To view a list of hosts and host roles in your current KUMA system, go to the Resources → Active services section of the web interface.
The upgrade process mirrors the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, you must first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
- Prepare the k0s.inventory.yml inventory file.
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set deploy_to_k8s: true and need_transfer: true, and set deploy_example_services: false.
- Follow the steps for distributed installation using your prepared k0s.inventory.yml inventory file.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with an inventory file, the installer looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host to the newly created Kubernetes cluster.
Resolving the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be aborted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of core-transfer migration tasks:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
To prevent this error, before you start migrating the KUMA Core:
- Go to the directory with the extracted installer and open the roles/k0s_prepare/templates/core-transfer-job.yaml.j2 file for editing.
- In the core-transfer-job.yaml.j2 file, find the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
- Edit these lines as follows, making sure you keep the indentation (number of space characters):
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
- Save the changes to the file.
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed with an error, follow the steps below to fix the error.
To fix the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
- On any controller of the cluster, delete the Ingress object by running the following command:
sudo k0s kubectl delete daemonset/ingress -n ingress
- Check if a migration job exists in the cluster:
sudo k0s kubectl get jobs -n kuma
- If a migration job exists, delete it:
sudo k0s kubectl delete job core-transfer -n kuma
- Go to the console of a host from the kuma_core group.
- Start the KUMA Core services by running the following commands:
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
- Make sure that the kuma-core-00000000-0000-0000-0000-000000000000 service has been successfully started:
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
- Make sure that the kuma_core group has access to the KUMA interface by host FQDN.
Other hosts do not need to be running.
- Go to the directory with the extracted installer and open the roles/k0s_prepare/templates/core-transfer-job.yaml.j2 file for editing.
- In the core-transfer-job.yaml.j2 file, find the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
- Edit these lines as follows, making sure you keep the indentation (number of space characters):
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
- Save the changes to the file.
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. The migration of the KUMA Core from a host to a new Kubernetes cluster will succeed.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually recreated with the new Core in the KUMA web interface.
For collectors, correlators and storages from the inventory file, certificates for communication with the Core inside the cluster will be reissued. This does not change the URL of the Core for components.
On the Core host, the installer does the following:
- Removes the following systemd services from the host: kuma-core, kuma-mongodb, kuma-victoria-metrics, kuma-vmalert, and kuma-grafana.
- Deletes the internal certificate of the Core.
- Deletes the certificate files of all other components and deletes their records from MongoDB.
- Deletes the following directories:
- /opt/kaspersky/kuma/core/bin
- /opt/kaspersky/kuma/core/certificates
- /opt/kaspersky/kuma/core/log
- /opt/kaspersky/kuma/core/logs
- /opt/kaspersky/kuma/grafana/bin
- /opt/kaspersky/kuma/mongodb/bin
- /opt/kaspersky/kuma/mongodb/log
- /opt/kaspersky/kuma/victoria-metrics/bin
- Migrates data from the Core and its dependencies to a network drive within the Kubernetes cluster.
- On the Core host, it moves the following directories:
- /opt/kaspersky/kuma/core → /opt/kaspersky/kuma/core.moved
- /opt/kaspersky/kuma/grafana → /opt/kaspersky/kuma/grafana.moved
- /opt/kaspersky/kuma/mongodb → /opt/kaspersky/kuma/mongodb.moved
- /opt/kaspersky/kuma/victoria-metrics → /opt/kaspersky/kuma/victoria-metrics.moved
After you have verified that the Core was correctly migrated to the cluster, you can delete these directories.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
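For example, you can view the log of this task on a cluster controller with the following command (the job and namespace names are taken from the text above):
sudo k0s kubectl logs job/core-transfer -n kuma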
If you need to perform migration again, you must restore the original names of the /opt/kaspersky/kuma/*.moved directories.
If the /etc/hosts file on the Core host contained lines that were not related to addresses in the 127.X.X.X range, the contents of the /etc/hosts file from the Core host are entered into the coredns ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
The final stage of preparing KUMA for work
- After updating KUMA, you must clear your browser cache.
- Manually update the KUMA agents.
KUMA update completed successfully.
Known limitations
- Hierarchical structure is not supported starting from version 3.0.2, therefore all KUMA hosts become standalone hosts when upgrading from version 2.1.3 to 3.0.3.
- For existing users, after upgrading from 2.1.3 to 3.0.3, the universal dashboard layout is not refreshed.
Possible solution: restart the Core service (kuma-core.service), and the data will be refreshed with the interval configured for the layout.
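For example (using the service name given above):
sudo systemctl restart kuma-core.service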
Upgrading from version 3.0.x to 3.0.3
To install KUMA version 3.0.3 over version 3.0.x, complete the preliminary steps and then perform the upgrade.
Preliminary steps
- Create a backup copy of the KUMA Core. If necessary, you will be able to restore data from backup for version 3.0.x.
KUMA backups created in versions earlier than 3.0.3 cannot be restored in version 3.0.3. This means that you cannot install KUMA 3.0.3 from scratch and restore a KUMA 3.0.x backup in it.
Create a backup copy immediately after upgrading KUMA to version 3.0.3.
- Make sure that all application installation requirements are met.
- During installation or upgrade, make sure that TCP port 7220 on the KUMA Core is accessible from the KUMA storage hosts.
Updating KUMA
Depending on the KUMA deployment scheme that you are using, do one of the following:
- Use the prepared distributed.inventory.yml inventory file and follow the instructions for distributed installation of the application.
- Use the prepared k0s.inventory.yml inventory file and follow the instructions for distributed installation in a high availability configuration.
If an inventory file is not available for the current version, use the provided inventory file template and edit it as necessary. To view a list of hosts and host roles in your current KUMA system, go to the Resources → Active services section of the web interface.
The upgrade process mirrors the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, you must first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
- Prepare the k0s.inventory.yml inventory file.
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set deploy_to_k8s: true and need_transfer: true, and set deploy_example_services: false.
- Follow the steps for distributed installation using your prepared k0s.inventory.yml inventory file.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with an inventory file, the installer looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host to the newly created Kubernetes cluster.
Resolving the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be aborted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of core-transfer migration tasks:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
To prevent this error, before you start migrating the KUMA Core:
- Go to the directory with the extracted installer and open the roles/k0s_prepare/templates/core-transfer-job.yaml.j2 file for editing.
- In the core-transfer-job.yaml.j2 file, find the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
- Edit these lines as follows, making sure you keep the indentation (number of space characters):
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
- Save the changes to the file.
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed with an error, follow the steps below to fix the error.
To fix the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
- On any controller of the cluster, delete the Ingress object by running the following command:
sudo k0s kubectl delete daemonset/ingress -n ingress
- Check if a migration job exists in the cluster:
sudo k0s kubectl get jobs -n kuma
- If a migration job exists, delete it:
sudo k0s kubectl delete job core-transfer -n kuma
- Go to the console of a host from the kuma_core group.
- Start the KUMA Core services by running the following commands:
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
- Make sure that the kuma-core-00000000-0000-0000-0000-000000000000 service has been successfully started:
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
- Make sure that the kuma_core group has access to the KUMA interface by host FQDN.
Other hosts do not need to be running.
- Go to the directory with the extracted installer and open the roles/k0s_prepare/templates/core-transfer-job.yaml.j2 file for editing.
- In the core-transfer-job.yaml.j2 file, find the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
- Edit these lines as follows, making sure you keep the indentation (number of space characters):
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
- Save the changes to the file.
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. The migration of the KUMA Core from a host to a new Kubernetes cluster will succeed.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually recreated with the new Core in the KUMA web interface.
For collectors, correlators and storages from the inventory file, certificates for communication with the Core inside the cluster will be reissued. This does not change the URL of the Core for components.
On the Core host, the installer does the following:
- Removes the following systemd services from the host: kuma-core, kuma-mongodb, kuma-victoria-metrics, kuma-vmalert, and kuma-grafana.
- Deletes the internal certificate of the Core.
- Deletes the certificate files of all other components and deletes their records from MongoDB.
- Deletes the following directories:
- /opt/kaspersky/kuma/core/bin
- /opt/kaspersky/kuma/core/certificates
- /opt/kaspersky/kuma/core/log
- /opt/kaspersky/kuma/core/logs
- /opt/kaspersky/kuma/grafana/bin
- /opt/kaspersky/kuma/mongodb/bin
- /opt/kaspersky/kuma/mongodb/log
- /opt/kaspersky/kuma/victoria-metrics/bin
- Migrates data from the Core and its dependencies to a network drive within the Kubernetes cluster.
- On the Core host, it moves the following directories:
- /opt/kaspersky/kuma/core → /opt/kaspersky/kuma/core.moved
- /opt/kaspersky/kuma/grafana → /opt/kaspersky/kuma/grafana.moved
- /opt/kaspersky/kuma/mongodb → /opt/kaspersky/kuma/mongodb.moved
- /opt/kaspersky/kuma/victoria-metrics → /opt/kaspersky/kuma/victoria-metrics.moved
After you have verified that the Core was correctly migrated to the cluster, you can delete these directories.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must restore the original names of the /opt/kaspersky/kuma/*.moved directories.
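For example, on the Core host, the original directory names can be restored with commands similar to the following (run them for each of the four directories listed above):
sudo mv /opt/kaspersky/kuma/core.moved /opt/kaspersky/kuma/core
sudo mv /opt/kaspersky/kuma/grafana.moved /opt/kaspersky/kuma/grafana
sudo mv /opt/kaspersky/kuma/mongodb.moved /opt/kaspersky/kuma/mongodb
sudo mv /opt/kaspersky/kuma/victoria-metrics.moved /opt/kaspersky/kuma/victoria-metrics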
If the /etc/hosts file on the Core host contained lines that were not related to addresses in the 127.X.X.X range, the contents of the /etc/hosts file from the Core host are entered into the coredns ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
The final stage of preparing KUMA for work
- After updating KUMA, you must clear your browser cache.
- Manually update the KUMA agents.
KUMA update completed successfully.
Known limitations
For existing users, after upgrading from 3.0.x to 3.0.3, the universal dashboard layout is not refreshed.
Possible solution: restart the Core service (kuma-core.service), and the data refresh interval configured for the layout will be used.
Upgrading from version 3.0.3 to 3.2.x
To install KUMA version 3.2.x over version 3.0.3, complete the preliminary steps and then perform the upgrade.
Preliminary steps
- Create a backup copy of the KUMA Core. If necessary, you can restore data from backup for version 3.0.3.
KUMA backups created in versions 3.0.3 and earlier cannot be restored in version 3.2.x. This means that you cannot install KUMA 3.2.x from scratch and restore a KUMA 3.0.3 backup in it.
Create a backup copy immediately after upgrading KUMA to version 3.2.x.
- Make sure that all application installation requirements are met.
- Make sure that the host name of the KUMA Core does not start with a numeral (see the example check after this list). The upgrade to version 3.2.x cannot be completed successfully if the host name of the KUMA Core starts with a numeral; in that case, contact Technical Support for additional instructions before you proceed with the upgrade.
- During installation or upgrade, make sure that TCP port 7220 on the KUMA Core is accessible from the KUMA storage hosts.
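To check the name, you can run the following command on the KUMA Core host; the fully qualified host name it prints must not start with a numeral:
hostname -f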
Updating KUMA
Depending on the KUMA deployment scheme that you are using, do one of the following:
- Use the prepared distributed.inventory.yml inventory file and follow the instructions for distributed installation of the application.
- Use the prepared k0s.inventory.yml inventory file and follow the instructions for distributed installation in a high availability configuration.
If an inventory file is not available for the current version, use the provided inventory file template and edit it as necessary. To view a list of hosts and host roles in your current KUMA system, go to the Resources → Active services section of the web interface.
The upgrade process mirrors the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster. For subsequent upgrades, use the k0s.inventory.yml inventory file with the need_transfer parameter set to false, because the KUMA Core already has been migrated to the Kubernetes cluster and you do not need to repeat this.
To migrate KUMA Core to a new Kubernetes cluster:
- Prepare the k0s.inventory.yml inventory file.
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set deploy_to_k8s: true and need_transfer: true, and set deploy_example_services: false.
- Follow the steps for distributed installation using your prepared k0s.inventory.yml inventory file.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with an inventory file, the installer looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host to the newly created Kubernetes cluster.
Resolving the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be aborted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of core-transfer migration tasks:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
To prevent this error, before you start migrating the KUMA Core:
- Go to the directory with the extracted installer and open the roles/k0s_prepare/templates/core-transfer-job.yaml.j2 file for editing.
- In the core-transfer-job.yaml.j2 file, find the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
- Edit these lines as follows, making sure you keep the indentation (number of space characters):
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
- Save the changes to the file.
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed with an error, follow the steps below to fix the error.
To fix the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
- On any controller of the cluster, delete the Ingress object by running the following command:
sudo k0s kubectl delete daemonset/ingress -n ingress
- Check if a migration job exists in the cluster:
sudo k0s kubectl get jobs -n kuma
- If a migration job exists, delete it:
sudo k0s kubectl delete job core-transfer -n kuma
- Go to the console of a host from the kuma_core group.
- Start the KUMA Core services by running the following commands:
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
- Make sure that the kuma-core-00000000-0000-0000-0000-000000000000 service has been successfully started:
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
- Make sure that the KUMA web interface is accessible at the FQDN of the host from the kuma_core group (a verification sketch follows this procedure).
The other hosts do not need to be running.
- Go to the directory with the extracted installer and open the roles/k0s_prepare/templates/core-transfer-job.yaml.j2 file for editing.
- In the core-transfer-job.yaml.j2 file, find the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
- Edit these lines as follows, making sure you keep the indentation (number of space characters):
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
- Save the changes to the file.
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. The migration of the KUMA Core from a host to a new Kubernetes cluster will succeed.
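Before restarting the installation, you can verify from the Core host that the Core service is active and that the web interface responds at the host FQDN. A minimal check, where kuma-core.example.com is a placeholder for the actual FQDN and 7220 is assumed to be the default KUMA web interface port:
sudo systemctl is-active kuma-core-00000000-0000-0000-0000-000000000000
curl -k https://kuma-core.example.com:7220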
If no installed KUMA Core is detected on the hosts intended for the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must then be manually recreated with the new Core in the KUMA web interface.
For the collectors, correlators, and storages listed in the inventory file, certificates for communication with the Core inside the cluster are reissued. The Core URL used by these components does not change.
On the Core host, the installer does the following:
- Removes the following systemd services from the host: kuma-core, kuma-mongodb, kuma-victoria-metrics, kuma-vmalert, and kuma-grafana.
- Deletes the internal certificate of the Core.
- Deletes the certificate files of all other components and deletes their records from MongoDB.
- Deletes the following directories:
- /opt/kaspersky/kuma/core/bin
- /opt/kaspersky/kuma/core/certificates
- /opt/kaspersky/kuma/core/log
- /opt/kaspersky/kuma/core/logs
- /opt/kaspersky/kuma/grafana/bin
- /opt/kaspersky/kuma/mongodb/bin
- /opt/kaspersky/kuma/mongodb/log
- /opt/kaspersky/kuma/victoria-metrics/bin
- Migrates data from the Core and its dependencies to a network drive within the Kubernetes cluster.
- On the Core host, it moves the following directories:
- /opt/kaspersky/kuma/core → /opt/kaspersky/kuma/core.moved
- /opt/kaspersky/kuma/grafana → /opt/kaspersky/kuma/grafana.moved
- /opt/kaspersky/kuma/mongodb → /opt/kaspersky/kuma/mongodb.moved
- /opt/kaspersky/kuma/victoria-metrics → /opt/kaspersky/kuma/victoria-metrics.moved
After you have verified that the Core was correctly migrated to the cluster, you can delete these directories.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
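For example, on a cluster controller you can view the output of that task with a command along these lines, provided the job still exists:
sudo k0s kubectl logs job/core-transfer -n kuma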
If you need to perform migration again, you must restore the original names of the /opt/kaspersky/kuma/*.moved directories.
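Both follow-up actions can be performed on the Core host with standard commands; run only the variant that matches your situation.
Delete the backups after a verified migration:
sudo rm -rf /opt/kaspersky/kuma/core.moved /opt/kaspersky/kuma/grafana.moved /opt/kaspersky/kuma/mongodb.moved /opt/kaspersky/kuma/victoria-metrics.moved
Restore the original names before repeating the migration:
sudo mv /opt/kaspersky/kuma/core.moved /opt/kaspersky/kuma/core
sudo mv /opt/kaspersky/kuma/grafana.moved /opt/kaspersky/kuma/grafana
sudo mv /opt/kaspersky/kuma/mongodb.moved /opt/kaspersky/kuma/mongodb
sudo mv /opt/kaspersky/kuma/victoria-metrics.moved /opt/kaspersky/kuma/victoria-metrics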
If the /etc/hosts file on the Core host contained lines that were not related to addresses in the 127.X.X.X range, the contents of the /etc/hosts file from the Core host are entered into the coredns ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
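To see what was written, you can inspect the ConfigMap on a cluster controller; this assumes CoreDNS runs in the kube-system namespace, which is the k0s default:
sudo k0s kubectl get configmap coredns -n kube-system -o yaml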
The final stage of preparing KUMA for work
- After updating KUMA, you must clear your browser cache.
- If you are using agents, manually update the KUMA agents.
KUMA update completed successfully.
Known limitations
- For existing users, after upgrading from 3.0.3 to 3.2.x, the universal dashboard layout is not refreshed.
Possible solution: restart the Core service (kuma-core.service), and the data refresh interval configured for the layout will be used.
- If the old Core service ("kuma-core.service") is still displayed after the upgrade, run the following command after the installation is complete:
sudo systemctl reset-failed
After running the command, the old service is no longer displayed, and the new service starts successfully.