Kaspersky Unified Monitoring and Analysis Platform

Contents

[Topic 217694]

About Kaspersky Unified Monitoring and Analysis Platform

Kaspersky Unified Monitoring and Analysis Platform (hereinafter KUMA or "program") is an integrated software solution that includes the following set of functions:

  • Receiving, processing, and storing information security events.
  • Analysis and correlation of incoming data.
  • Search within the obtained events.
  • Creation of notifications upon detecting symptoms of information security threats.

The program is built on a microservice architecture. This means that you can create and configure the relevant microservices (hereinafter also "services"), making it possible to use KUMA both as a log management system and as a full-fledged SIEM system. In addition, flexible routing of data streams allows you to use third-party services for additional event processing.

In this Help topic

What's new

Distribution kit

Hardware and software requirements

KUMA interface

Compatibility with other applications


[Topic 220925]

What's new


[Topic 217846]

Distribution kit

The distribution kit includes the following files:

  • kuma-ansible-installer-<build number>.tar.gz to install KUMA components;
  • files containing information about the version (release notes) in Russian and English.

[Topic 217889]

Hardware and software requirements

Recommended hardware requirements

The hardware listed below ensures an event-processing capacity of 40,000 events per second; the actual figure depends on the type of events parsed and the efficiency of the parser. Note also that a larger number of cores is more efficient than fewer cores with a higher CPU frequency.

  • Servers to install collectors:
    • CPU: Intel or AMD with at least 4 cores (8 threads) and support for the SSE 4.2 instruction set or 8 vCPU (virtual processors).
    • RAM: 16 GB

      Each collector that uses geographic data event enrichment requires an additional amount of RAM equal to the size of the geographic database.

    • Disk: 500 GB of available disk space mounted on /opt
  • Servers to install correlators:
    • CPU: Intel or AMD with at least 4 cores (8 threads) and support for the SSE 4.2 instruction set or 8 vCPU (virtual processors).
    • RAM: 16 GB
    • Disk: 500 GB of available disk space mounted on /opt
  • Servers to install the Core:
    • CPU: Intel or AMD with at least 4 cores (8 threads) and support for the SSE 4.2 instruction set or 4 vCPU (virtual processors).
    • RAM: 16 GB

      When importing geographic data, the server requires additional RAM equal to the size of the geographic database.

    • Disk: 500 GB of available disk space mounted on /opt
  • Servers to install storages:
    • CPU: Intel or AMD with at least 12 cores (24 threads) and support for the SSE 4.2 instruction set or 24 vCPU (virtual processors).
    • RAM: 48 GB
    • Disk: 500 GB of available disk space mounted on /opt

    To connect a data storage system to storage servers, you must use high-speed protocols (for example, Fibre Channel or iSCSI 10G). It is not recommended to connect storage systems using application-layer protocols (for example, NFS or SMB).

    Using SSDs greatly improves cluster node indexing and search efficiency.

    Locally mounted HDDs/SSDs are more efficient than external JBODs. RAID 0 is recommended for higher performance; RAID 10 is recommended for redundancy.

    To increase reliability, do not deploy all cluster nodes on a single JBOD or, if virtual servers are used, on a single physical server.

    To increase efficiency, we recommend keeping all servers in a single data center.

    Ext4 is the recommended file system for ClickHouse cluster servers.

  • Machines to install Windows agents:
    • Processor: single-core, 1.4 GHz or higher
    • RAM: 512 MB
    • Disk: 1 GB
    • OS:
      • Microsoft Windows Server 2012
      • Microsoft Windows Server 2012 R2
      • Microsoft Windows Server 2016
      • Microsoft Windows Server 2019
      • Microsoft Windows 10 (20H2, 21H1)
  • Machines to install Linux agents:
    • Processor: single-core, 1.4 GHz or higher
    • RAM: 512 MB
    • Disk: 1 GB
    • OS:
      • Ubuntu 20.04 LTS, 21.04
      • Oracle Linux version 8.6
      • Astra Linux Special Edition RUSB.10015-01 (2021-1126SE17 update 1.7.1)
  • Installation in virtual environments is supported:
    • VMware 6.5 or later
    • Hyper-V for Windows Server 2012 R2 or later
    • KVM (QEMU) version 4.2 or later
    • Software package of virtualization tools "Brest" RDTSP.10001-02

Software requirements

The Collector, Correlator, Core, and Storage components can be deployed only on Oracle Linux 8.6 or Astra Linux Special Edition (version RUSB.10015-01, 2021-1126SE17 update 1.7.1).

Network requirements

The network interface bandwidth must be at least 100 Mbps.

For KUMA to be able to process more than 20,000 events per second, ensure a data transfer speed of at least 10 Gbps between ClickHouse nodes.

Additional requirements

Computers used for the KUMA web interface:

  • CPU: Intel Core i3 8th generation
  • RAM: 8 GB
  • Installed Google Chrome browser version 102 or later, or Mozilla Firefox browser version 103 or later.

[Topic 230383]

KUMA interface

The program is managed through the web interface.

The window of the program web interface contains the following items:

  • Sections in the left part of the program web interface window
  • Tabs in the upper part of the program web interface window for some sections of the program
  • Workspace in the lower part of the program web interface window

The workspace displays the information that you choose to view in the sections and on the tabs of the program web interface window. It also contains management elements that you can use to configure how the information is displayed.

While working with the program web interface, you can use hot keys to perform the following actions:

  • In all sections: close the window that opens in the right side pane—Esc.
  • In the Events section:
    • Switch between events in the right side pane—the down and up arrow keys (↓ and ↑).
    • Start a search (when focused on the query field)—Ctrl/Command + Enter.
    • Save a search query—Ctrl/Command + S.

[Topic 230384]

Compatibility with other applications

Kaspersky Endpoint Security for Linux

If the components of KUMA and Kaspersky Endpoint Security for Linux are installed on the same server, the report.db directory may grow very large and even take up the entire drive space. To avoid this problem, the following is recommended:

  • Upgrade Kaspersky Endpoint Security for Linux to version 11.2 or later.
  • Add the following paths to general exclusions and to on-demand scan exclusions:
    • /opt/kaspersky/kuma/clickhouse/data/store/
    • /opt/kaspersky/kuma/victoria-metrics/
    • /var/lib/rsyslog/imjournal.state

    For more details on scan exclusions, please refer to the Kaspersky Endpoint Security for Linux Online Help Guide.


[Topic 217958]

Program architecture

The standard program installation includes the following components:

  • One or more Collectors that receive messages from event sources and parse, normalize, and, if required, filter and/or aggregate them.
  • A Correlator that analyzes normalized events received from Collectors, performs the necessary actions with active lists, and creates alerts in accordance with the correlation rules.
  • The Core that includes a graphical interface to monitor and manage the settings of system components.
  • The Storage, which contains normalized events and registered incidents.

Events are transmitted between components over optionally encrypted, reliable transport protocols. You can configure load balancing to distribute load between service instances, and it is possible to enable automatic switching to the backup component if the primary one is unavailable. If all components are unavailable, events are saved to the hard disk buffer and sent later. The buffer disk size for temporary event storage can be adjusted.

(Figure: KUMA architecture)

In this Help topic

Core

Collector

Correlator

Storage

Basic entities


[Topic 217779]

Core

The Core is the central component of KUMA that serves as the foundation upon which all other services and components are built. It provides a graphical user interface that is intended for everyday use by operators/analysts and for configuring the entire system.

The Core allows you to:

  • create and configure services, or components, of the program, as well as integrate the necessary software into the system;
  • manage program services and user accounts in a centralized way;
  • visualize statistical data on the program;
  • investigate security threats based on the received events.

[Topic 217762]

Collector

A collector is an application component that receives messages from event sources, processes them, and transmits them to a storage, correlator, and/or third-party services to identify alerts.

For each collector, you must configure one connector and one normalizer. You can also configure any number of additional normalizers, filters, enrichment rules, and aggregation rules. For the collector to send normalized events to other services, you must add destinations. Normally, two destinations are used: the storage and the correlator.

The collector operation algorithm includes the following steps:

  1. Receiving messages from event sources

    To receive messages, you must configure an active or passive connector. The passive connector can only receive messages from the event source, while the active connector can initiate a connection to the event source, such as a database management system.

    Connectors can also vary by type. The choice of connector type depends on the transport protocol for transmitting messages. For example, for an event source that transmits messages over TCP, you must install a TCP type connector.

    The program has the following connector types available:

    • internal
    • tcp
    • udp
    • netflow
    • sflow
    • nats
    • kafka
    • http
    • sql
    • file
    • diode
    • ftp
    • nfs
    • wmi
    • wec
    • snmp
  2. Event parsing and normalization

    Events received by the connector are processed using the parser and normalization rules set by the user. The choice of normalizer depends on the format of the messages received from the event source. For example, you must select a CEF-type root normalizer for a source that sends events in CEF format.

    The following normalizers are available in the program:

    • JSON
    • CEF
    • Regexp
    • Syslog (as per RFC3164 and RFC5424)
    • CSV
    • Key-value
    • XML
    • NetFlow v5
    • NetFlow v9
    • IPFIX (v10)
  3. Filtering of normalized events

    You can configure filters to discard events that meet specified conditions. Events that do not meet the filtering conditions are sent on for further processing.

  4. Enrichment and conversion of normalized events

    Enrichment rules let you supplement event contents with information from internal and external sources. The program has the following enrichment sources:

    • constants
    • cybertrace
    • dictionaries
    • dns
    • events
    • ldap
    • templates
    • timezone data
    • geographic data

    Mutation rules let you convert event contents in accordance with the defined criteria. The program has the following conversion methods:

    • lower—converts all characters to lower case.
    • upper—converts all characters to upper case.
    • regexp—extracts a substring using RE2 regular expressions.
    • substring—selects text strings by specified item numbers.
    • replace—replaces text with the entered string.
    • trim—deletes the specified characters.
    • append—adds characters to the end of the field value.
    • prepend—adds characters to the beginning of the field value.
  5. Aggregation of normalized events

    You can configure aggregation rules to reduce the number of similar events that are transmitted to the storage and/or the correlator. For example, you can aggregate into one event all messages about network connections transmitted over the same protocol (transport and application layers) between two IP addresses and received during a specified time interval. If aggregation rules are configured, multiple events will be processed and saved as a single event. This helps you reduce the load on the services responsible for further event processing, saves you storage space, and reduces events processed per second (EPS) count.

  6. Transmission of normalized events

    After all the processing stages are completed, the event is sent to configured destinations.
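
    The processing stages above can be sketched in simplified Python (an illustration of the pipeline principle only, not KUMA's actual implementation; the field names and the key-value message format are assumptions):

    ```python
    from collections import defaultdict

    def normalize(raw):
        """Toy key-value normalizer: 'src=1.2.3.4 proto=tcp' -> dict."""
        return dict(pair.split("=", 1) for pair in raw.split())

    def keep(event):
        """Filtering: drop events that meet the exclusion condition."""
        return event.get("proto") != "icmp"

    def mutate(event):
        """Conversion examples: 'lower' and 'trim' applied to one field."""
        if "user" in event:
            event["user"] = event["user"].lower().strip(".")
        return event

    def aggregate(events, key_fields):
        """Merge similar events into one, keeping a parent-event counter."""
        buckets = defaultdict(lambda: {"count": 0})
        for e in events:
            key = tuple(e.get(f) for f in key_fields)
            buckets[key].update(e)
            buckets[key]["count"] += 1
        return list(buckets.values())

    raw_messages = [
        "src=10.0.0.1 dst=10.0.0.2 proto=tcp user=ALICE.",
        "src=10.0.0.1 dst=10.0.0.2 proto=tcp user=ALICE.",
        "src=10.0.0.3 dst=10.0.0.2 proto=icmp",
    ]
    events = [mutate(e) for e in map(normalize, raw_messages) if keep(e)]
    out = aggregate(events, key_fields=("src", "dst", "proto"))
    # out contains a single aggregated event; the icmp event is filtered out
    ```

    Here the two identical TCP events collapse into one aggregated event, reducing the EPS count passed on to the storage and the correlator.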


[Topic 217784]

Correlator

The Correlator is a program component that analyzes normalized events. Information from active lists and/or dictionaries can be used in the correlation process.

Event correlation is performed in real time. The operating principle of the correlator is based on an event signature analysis. This means that every event is processed according to the correlation rules set by the user. When the program detects a sequence of events that satisfies the conditions of the correlation rule, it creates a correlation event and sends it to the Storage. The correlation event can also be sent to the correlator for repeated analysis, which allows you to customize the correlation rules so that they are triggered by the results of a previous analysis. Products of one correlation rule can be used by other correlation rules.

You can distribute correlation rules and the active lists they use among correlators, thereby sharing the load between services. In this case, the collectors will send normalized events to all available correlators.

The correlator operation algorithm has the following steps:

  1. Obtaining an event

    The correlator receives a normalized event from the collector or from another service.

  2. Applying correlation rules

    You can configure correlation rules so they are triggered based on a single event or a sequence of events. If no alert was detected using the correlation rules, the event processing ends.

  3. Responding to an alert

    You can specify actions that the program must perform when an alert is detected. The following actions are available in the program:

    • Event enrichment
    • Operations with active lists
    • Sending notifications
    • Storing correlation event
  4. Sending a correlation event

    When the program detects a sequence of events that satisfies the conditions of the correlation rule, it creates a correlation event and sends it to the storage. Event processing by the correlator is now finished.
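
    The four steps above can be sketched as a minimal sequence rule (a simplified model with assumed event fields and thresholds, not a real KUMA correlation rule):

    ```python
    import time
    from collections import defaultdict, deque

    THRESHOLD, WINDOW = 3, 60.0            # assumed rule settings
    recent = defaultdict(deque)            # active-list-like state per source
    correlation_events = []

    def process(event):
        # Step 1: obtain a normalized event; Step 2: apply the rule conditions.
        if event.get("action") != "login_failed":
            return                         # rule not matched: processing ends
        q = recent[event["src"]]
        q.append(event["ts"])
        while q and event["ts"] - q[0] > WINDOW:
            q.popleft()                    # expire entries outside the window
        if len(q) >= THRESHOLD:
            # Steps 3-4: respond (here: update the active list) and create
            # a correlation event to be sent to the storage.
            correlation_events.append(
                {"rule": "bruteforce_attempt", "src": event["src"], "count": len(q)}
            )
            q.clear()

    now = time.time()
    for i in range(4):
        process({"action": "login_failed", "src": "10.0.0.5", "ts": now + i})
    # the first three failures trigger one correlation event
    ```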


[Topic 218010]

Storage

A KUMA storage is used to store normalized events so that they can be quickly and continually accessed from KUMA for the purpose of extracting analytical data. Access speed and continuity are ensured through the use of the ClickHouse technology. This means that a storage is a ClickHouse cluster bound to a KUMA storage service.

Storage components: clusters, shards, replicas, and keepers.

A ClickHouse cluster is a logical group of machines that possess all accumulated normalized KUMA events. It consists of one or more logical shards.

A shard is a logical group of machines that possess a specific portion of all normalized events accumulated in the cluster. It consists of one or more replicas. Increasing the number of shards lets you do the following:

  • Accumulate more events by increasing the total number of servers and disk space.
  • Absorb a larger stream of events by distributing the load associated with an influx of new events.
  • Reduce the time taken to search for events by distributing search areas among multiple machines.

A replica is a machine that is a member of the logical shard and possesses a copy of the data of this shard. If there are multiple replicas, there are multiple copies (data is replicated). Increasing the number of replicas lets you do the following:

  • Improve fault tolerance.
  • Distribute the total load related to data searches among multiple machines (although it's best to increase the number of shards for this purpose).

A keeper is a machine that coordinates data replication at the cluster level. At least one machine with this role is required for the entire cluster; three are recommended, and the number of machines involved in coordinating replication must be odd. The keeper and replica roles can be combined on one machine.

When choosing a ClickHouse cluster configuration, consider the specific event storage requirements of your organization. For more information, please refer to the ClickHouse documentation.
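
The shard/replica layout can be illustrated with a short sketch (this demonstrates the distribution principle only; ClickHouse performs sharding and replication with its own mechanisms):

```python
SHARDS, REPLICAS = 2, 2

# cluster[shard][replica] is the list of events held by that replica
cluster = [[[] for _ in range(REPLICAS)] for _ in range(SHARDS)]

def store(event_id, event):
    shard = hash(event_id) % SHARDS    # each event lands on exactly one shard
    for replica in cluster[shard]:     # every replica holds a full copy
        replica.append(event)

for i in range(100):
    store(i, {"id": i})

# Every event is stored once per replica of one shard, so the total
# number of copies equals the event count multiplied by REPLICAS.
total_copies = sum(len(r) for shard in cluster for r in shard)
```

Adding shards splits the events across more machines (more capacity and parallel search); adding replicas multiplies the copies (more fault tolerance).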

Within storages, you can create spaces. Spaces let you organize the data structure in the cluster and, for example, store events of a certain type together.


[Topic 220211]

Basic entities

This section describes the main entities that KUMA works with.

In this section

About tenants

About events

About alerts

About incidents

About assets

About resources

About services

About agents

About Priority


[Topic 221264]

About tenants

KUMA has a multitenancy mode in which one instance of the KUMA application installed in the infrastructure of the main organization (main tenant) enables isolation of branches (tenants) so that they receive and process their own events.

The system is managed centrally through the main interface while tenants operate independently of each other and have access only to their own resources, services, and settings. Events of tenants are stored separately.

Users can have access to multiple tenants at the same time. You can also select which tenants' data will be displayed in sections of the KUMA web interface.

In KUMA, two tenants are created by default:

  • The main tenant contains the resources and services of the main organization. These resources are available only to the general administrator.
  • The shared tenant is where the general administrator can place resources, asset categories, and monitoring policies that users of all tenants will be able to utilize.

[Topic 217693]

About events

Events are instances of the security-related activities of network assets and services that can be detected and recorded. For example, events include login attempts, interactions with a database, and sensor information broadcasts. Each separate event may seem meaningless, but when considered together they form a bigger picture of network activities to help identify security threats. This is the core functionality of KUMA.

KUMA receives events from logs and restructures their information, making the data from different event sources consistent (this process is called normalization). Afterwards, the events are filtered, aggregated, and later sent to the correlator service for analysis and to the Storage for retention. When KUMA recognizes a specific event or sequence of events, it creates correlation events, which are also analyzed and retained. If an event or sequence of events indicates a potential security threat, KUMA creates an alert. This alert consists of a warning about the threat and all related data that should be investigated by a security officer.

Throughout their life cycle, events undergo conversions and may receive different names. Below is a description of a typical event life cycle:

The first steps are carried out in a collector.

  1. Raw event. The original message received by KUMA from an event source using a Connector is called a raw event. This is an unprocessed message and it cannot be used yet by KUMA. To fit into the KUMA pipeline, raw events must be normalized into the KUMA data model. That's what the next stage is for.
  2. Normalized event. A normalizer is a set of parsers that map raw events onto the KUMA data model. After this conversion, the original message becomes a normalized event and can be used by KUMA for analysis. From this point on, only normalized events are used in KUMA. Raw events are no longer used, but they can be kept as part of normalized events inside the Raw field.

    The program has the following normalizers:

    • JSON
    • CEF
    • Regexp
    • Syslog (as per RFC3164 and RFC5424)
    • CSV/TSV
    • Key-value
    • XML
    • Netflow v5, v9, IPFIX (v10), sFlow v5
    • SQL

    At this point normalized events can already be used for analysis.

  3. Event destination. After the Collector service has processed an event, it is ready to be used by other KUMA services and is sent to the KUMA Correlator and/or Storage.

The next steps of the event life cycle are completed in the correlator.

Event types:

  1. Base event. An event that was normalized.
  2. Aggregated event. When dealing with a large number of similar events, you can "merge" them into a single event to save processing time and resources. Aggregated events act as base events, but in addition to all the parameters of the parent events (the events that were "merged"), an aggregated event has a counter that shows the number of parent events it represents. Aggregated events also store the time when the first and last parent events were received.
  3. Correlation event. When a sequence of events is detected that satisfies the conditions of a correlation rule, the program creates a correlation event. These events can be filtered, enriched, and aggregated. They can also be sent for storage or looped into the Correlator pipeline.
  4. Audit event. Audit events are created when certain security-related actions are completed in KUMA. These events are used to ensure system integrity. They are automatically placed in a separate storage space and stored for at least 365 days.
  5. Monitoring event. These events are used to track changes in the amount of data received by KUMA.
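
The raw-to-normalized conversion described in steps 1 and 2 of the life cycle can be sketched as follows (a toy parser; the field names of the KUMA data model shown here are assumptions for illustration):

```python
def normalize_syslog(raw):
    """Toy parser for a syslog-like line: '<pri>host app: message'."""
    pri_end = raw.index(">")
    host, rest = raw[pri_end + 1:].split(" ", 1)
    app, message = rest.split(": ", 1)
    return {
        "Priority": int(raw[1:pri_end]),
        "DeviceHostName": host,
        "DeviceProduct": app,
        "Message": message,
        "Raw": raw,                    # the original message kept in the event
    }

event = normalize_syslog("<34>srv01 sshd: Failed password for root")
```

After this step the raw message is no longer used on its own, but it survives inside the Raw field of the normalized event.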

[Topic 217691]

About alerts

In KUMA, an alert is created when a sequence of events is received that triggers a correlation rule. Correlation rules are created by KUMA analysts to check incoming events for possible security threats, so when a correlation rule is triggered, it's a warning there may be some malicious activity happening. Security officers should investigate these alerts and respond if necessary.

KUMA automatically assigns a severity to each alert. This parameter shows how important or numerous the processes that triggered the correlation rule are. Alerts with a higher severity should be dealt with first. The severity value is automatically updated when new correlation events are received, but a security officer can also set it manually. In this case, the alert severity is no longer updated automatically.

Related events are linked to alerts, enriching the alerts with data from those events. KUMA also offers drill-down functionality for alert investigations.

You can create incidents based on alerts.

Below is the life cycle of an alert:

  1. KUMA creates an alert when a correlation rule is triggered. The alert is updated if the correlation rule is triggered again. The alert is assigned the New status.
  2. A security officer assigns the alert to an operator for investigation. The alert status changes to Assigned.
  3. The operator performs one of the following actions:
    • Close the alert as a false positive (the alert status changes to Closed).
    • Respond to the threat and close the alert (the alert status changes to Closed).

Afterwards, the alert is no longer updated with new events and if the correlation rule is triggered again, a new alert is created.
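
The life cycle above can be modeled as a small state machine (a simplified model; the statuses come from the description above, while the class and method names are assumptions):

```python
class Alert:
    def __init__(self, rule):
        self.rule, self.status, self.events = rule, "New", []

    def update(self, event):
        if self.status == "Closed":
            # closed alerts are never updated; a new alert must be created
            raise ValueError("alert is closed")
        self.events.append(event)

    def assign(self, operator):
        self.operator, self.status = operator, "Assigned"

    def close(self, reason):
        # reason: "false positive" or "responded"
        self.reason, self.status = reason, "Closed"

alert = Alert("bruteforce_attempt")        # correlation rule triggered: New
alert.update({"src": "10.0.0.5"})          # rule triggered again: alert updated
alert.assign("operator1")                  # status changes to Assigned
alert.close("responded")                   # status changes to Closed
```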

Alert management in KUMA is described in this section.


[Topic 220212]

About incidents

If the nature of the data received by KUMA or the generated correlation events and alerts indicate a possible attack or vulnerability, the symptoms of such an event can be combined into an incident. This allows security experts to analyze threat manifestations in a comprehensive manner and facilitates response.

You can assign a category, type, and severity to incidents, and assign incidents to data protection officers for processing.

Incidents can be exported to RuCERT.


[Topic 217692]

About assets

Assets are network devices registered in KUMA. Assets generate network traffic when they send and receive data. The KUMA program can be configured to track this activity and create base events with a clear indication of where the traffic is coming from and where it is going. An event can contain source and destination IP addresses, as well as DNS names. If you register an asset with certain parameters (for example, a specific IP address), this asset is linked to all events that contain this IP address in any of their parameters.

Assets can be divided into logical groups. This helps keep your network structure transparent and gives you additional ways to work with correlation rules. When an event linked to an asset is processed, the category of this asset is also taken into consideration. For example, if you assign high severity to a certain category of assets, base events involving these assets will trigger the creation of correlation events with higher severity. This in turn cascades into higher-severity alerts and, therefore, a faster response to them.

It is recommended to register network assets in KUMA because their use makes it possible to formulate clear and versatile correlation rules for much more efficient analysis of events.
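
The linking of events to assets and the resulting severity cascade can be sketched as follows (an illustration of the principle only; all field names are assumptions):

```python
# Registered assets, keyed by IP address, with an assigned category severity.
assets = {
    "10.0.0.7": {"name": "db-server", "category_severity": "high"},
}

def link_asset(event):
    """Attach asset data to an event whose src or dst matches a registered IP."""
    for field in ("src", "dst"):
        asset = assets.get(event.get(field))
        if asset:
            event["asset"] = asset["name"]
            event["severity"] = asset["category_severity"]
    return event

event = link_asset({"src": "10.0.0.7", "dst": "10.0.0.9"})
# event is now linked to "db-server" and inherits the category's severity
```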

Asset management in KUMA is described in this section.


[Topic 221640]

About resources

Resources are KUMA components that contain parameters for implementing various functions: for example, establishing a connection with a given web address or converting data according to certain rules. Like parts of an erector set, these components are assembled into resource sets for services that are then used as the basis for creating KUMA services.


[Topic 221642]

About services

Services are the main components of KUMA that work with events: receiving, processing, analyzing, and storing them. Each service consists of two parts that work together:

  • One part of the service is created inside the KUMA web interface based on a set of resources for services.
  • The second part of the service is installed in the network infrastructure where the KUMA system is deployed as one of its components. The server part of a service can consist of several instances: for example, services of the same agent or storage can be installed on several computers at once.

Parts of services are connected to each other by using the IDs of services.


[Topic 217690]

About agents

KUMA agents are services that are used to forward unprocessed events from servers and workstations to KUMA collectors.

Types of agents:

  • Wmi agents are used to receive data from remote Windows machines using Windows Management Instrumentation. They are installed to Windows assets.
  • Wec agents are used to receive Windows logs from a local machine using Windows Event Collector. They are installed to Windows assets.
  • Tcp agents are used to receive data over the TCP protocol. They are installed to Linux and Windows assets.
  • Udp agents are used to receive data over the UDP protocol. They are installed to Linux and Windows assets.
  • Nats agents are used for NATS communications. They are installed to Linux and Windows assets.
  • Kafka agents are used for Kafka communications. They are installed to Linux and Windows assets.
  • Http agents are used for communication over the HTTP protocol. They are installed to Linux and Windows assets.
  • File agents are used to get data from a file. They are installed to Linux assets.
  • Ftp agents are used to receive data over the File Transfer Protocol. They are installed to Linux and Windows assets.
  • Nfs agents are used to receive data over the Network File System protocol. They are installed to Linux and Windows assets.
  • Snmp agents are used to receive data over the Simple Network Management Protocol. They are installed to Linux and Windows assets.
  • Diode agents are used together with data diodes to receive events from isolated network segments. They are installed to Linux assets.

[Topic 217695]

About Priority

Priority (severity) reflects the relative importance of security-sensitive activity detected by a KUMA correlator. It determines the order in which multiple alerts should be processed and indicates whether senior security officers should be involved.

The Correlator automatically assigns severity to correlation events and alerts based on correlation rule settings. The severity of an alert also depends on the assets related to the processed events because correlation rules take into account the severity of a related asset's category. If the alert or correlation event does not have linked assets with a defined severity or does not have any related assets at all, the severity of this alert or correlation event is equal to the severity of the correlation rule that triggered them. The alert or the correlation event severity is never lower than the severity of the correlation rule that triggered them.

Alert severity can be changed manually. The severity of alerts changed manually is no longer automatically updated by correlation rules.

Possible severity values:

  • Low
  • Medium
  • High
  • Critical
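
The rule described above, that an alert's severity is never lower than the severity of the triggering correlation rule, can be expressed as a simple maximum (a sketch with assumed function names):

```python
LEVELS = ["Low", "Medium", "High", "Critical"]

def alert_severity(rule_severity, asset_category_severities):
    """Severity = max(rule severity, severities of linked asset categories)."""
    candidates = [rule_severity] + list(asset_category_severities)
    return max(candidates, key=LEVELS.index)

# No linked assets: the alert inherits the rule's severity.
no_assets = alert_severity("Medium", [])
# A Critical asset category raises the alert above the rule's severity.
raised = alert_severity("Medium", ["Critical"])
# Asset severities below the rule's never lower the result.
floor = alert_severity("High", ["Low"])
```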

[Topic 217904]

Installing and removing KUMA

This section describes the installation of KUMA. You can install KUMA on a single server to get familiar with the program capabilities, or deploy it in a production environment.

In this Help topic

Program installation requirements

Installing for demo

Installing KUMA in production environment

Removing KUMA

Updating previous versions of KUMA


[Topic 231034]

Program installation requirements

Requirements for program installation in the Oracle Linux operating system

  • A disk image for installation is available on the official Oracle site.
  • Make sure that Python version 3.6 or later is installed in the operating system.
  • Make sure that the SELinux module is NOT enabled in the operating system. Neither KUMA nor the KUMA installer can run while SELinux is enabled.
  • Make sure that the pip3 package management system is installed in the operating system.
  • Make sure that the following packages are installed in the operating system:
    • netaddr
    • firewalld

    These packages can be installed using the following commands:

    • pip3 install netaddr
    • yum install firewalld

Requirements for program installation in the Astra Linux Special Edition operating system

  • Make sure that Python version 3.6 or later is installed in the operating system.
  • Make sure that the pip3 package management system is installed in the operating system.
  • Make sure that the following packages are installed in the operating system:
    • python3-apt
    • curl
    • libcurl4

    These packages can be installed by running the apt install python3-apt curl libcurl4 command.

  • Make sure that the following dependent packages are installed in the operating system:
    • netaddr
    • python3-cffi-backend

    These packages can be installed using the following commands:

    • apt install python3-cffi-backend
    • pip3 install netaddr

    If you are planning to query Oracle databases from KUMA, you must install the libaio1 package.

  • You must use the sudo pdpl-user -i 63 <user name> command to assign the required permissions to the user who will be running the program installation.

General installation requirements

  • The server where the installer is run cannot have the name localhost or localhost.<domain>. The installer can be run from any folder.
  • Before deploying the program, make sure that the servers where you intend to install the components meet the hardware and software requirements.
  • KUMA components are addressed using the fully qualified domain name (FQDN) of the host. Before you install the program, make sure that the hostnamectl status command returns the correct FQDN of the host in the Static hostname field.
  • It is recommended to use Network Time Protocol (NTP) to synchronize time between servers with KUMA services.
  • Ext4 is the recommended file system for ClickHouse cluster servers.
Page top

[Topic 217908]

Installing for demo

For demonstration purposes, you can deploy KUMA components on a single server. Prior to installing the program, carefully read the KUMA installation requirements as well as the hardware and software requirements.

The KUMA installation takes place over several stages:

  1. Preparing the test machine

    The test machine is used during the program installation process: the installer files are unpacked and run on it.

  2. Preparing the target machine

    The program components are installed on the target machines. The test machine can also be used as a target machine.

  3. Preparing an inventory file for demonstration installation

    Create an inventory file describing the network structure of the program components that the installer can use to deploy KUMA.

  4. Installing the program for demonstration purposes

    Install the program and get the URL and login credentials for the web interface.

If necessary, the program installed for demonstration purposes can be distributed to different servers for full-fledged operation.

In this section

Preparing an inventory file for demonstration installation

Installing the program for demonstration purposes

Upgrading the demonstration installation

Page top

[Topic 222158]

Preparing an inventory file for demonstration installation

Installation, update, and removal of KUMA components are performed from the folder containing the unpacked installer, using the Ansible tool and a user-created inventory file that lists the hosts of KUMA components and other parameters. In the case of a demonstration installation, the host is the same for all components. The inventory file is in the YAML format.

To create an inventory file for a demonstration installation:

  1. Go to the KUMA installer folder by executing the following command:

    cd kuma-ansible-installer

  2. Create an inventory file by copying the single.inventory.yml.template:

    cp single.inventory.yml.template single.inventory.yml

  3. Edit the parameters of the single.inventory.yml inventory file:
    • If you want demonstration services to be created during the installation, set the deploy_example_services parameter value to true.

      deploy_example_services: true

      Demonstration services can only be created during the initial installation of KUMA. When updating the system using the same inventory file, no demonstration services will be created.

    • If you are installing KUMA in a production environment and have a separate test machine, set the ansible_connection parameter to ssh:

      ansible_connection: ssh

  4. Replace all kuma.example.com lines in the inventory file with the host name of the target machine on which you want to install the KUMA components.

The inventory file is created. You can install KUMA for demonstration purposes using it.
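
Under the assumptions above, a finished file might look like the following sketch. The structure and group names are illustrative; the authoritative layout is defined by single.inventory.yml.template in your installer version, and kuma-demo.mydomain.com is a placeholder host.

```yaml
# Hypothetical single.inventory.yml for a demonstration installation.
all:
  vars:
    deploy_example_services: true
    ansible_connection: local
  children:
    kuma:
      hosts:
        kuma-demo.mydomain.com:
```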

It is recommended that you not remove the inventory file after installing KUMA:

  • If you change this file (for example, add information about a new server for the collector), you can reuse it to update the system with a new component.
  • You can use this inventory file to delete KUMA.
Page top

[Topic 222159]

Installing the program for demonstration purposes

KUMA is installed using the Ansible tool and the YML inventory file. The installation is performed from the test machine, and the KUMA components are installed on the target machines.

To install KUMA for demonstration purposes:

  1. On the test machine, open the folder containing the unpacked installer.
  2. Place the file with the license key in the folder <installer folder>/roles/kuma/files/.

    The key file must be named license.key.

  3. Launch the installer by executing the following command:

    sudo ./install.sh single.inventory.yml

  4. Accept the terms of the End User License Agreement.

    If you do not accept the terms of the End User License Agreement, the program will not be installed.

KUMA components are installed on the target machine. The screen will display the URL of the KUMA web interface and the user name and password that must be used to access the web interface.

By default, the KUMA web interface address is https://kuma.example.com:7220.

Default login credentials (after the first login, you must change the password of the admin account):
- user name—admin
- password—mustB3Ch@ng3d!

It is recommended that you save the inventory file used to install the program. It can be used to add components to the system or remove KUMA.

You can later upgrade the demonstration installation to the full one.

Page top

[Topic 222160]

Upgrading the demonstration installation

You can upgrade the demonstration installation by installing the program over the installed KUMA using the distributed.inventory.yml template.

Several steps are required to upgrade the demonstration installation:

  1. Installing the program

    Specify the host of the demonstration server and place it in the core group when preparing the inventory file.

  2. Deleting the demonstration services

    In the KUMA web interface, under Resources → Active services, copy the IDs of the existing services and delete those services.

    Then delete the services from the machine where they were installed by running the command sudo /opt/kaspersky/kuma/kuma <collector/correlator/storage> --id <service ID> --uninstall. Repeat the delete command for each service.

  3. Recreating the services on the appropriate machines
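
The repeated delete command in step 2 can be scripted. This is a minimal sketch that only prints the commands so you can review them before running; the service kinds and IDs below are made-up placeholders for values you copy from the web interface.

```shell
# Build (and here, only print) the uninstall command for each KUMA service.
# Service kinds and IDs are made-up placeholders.
uninstall_cmd() {
  # $1 = service kind (collector, correlator, or storage), $2 = service ID
  printf 'sudo /opt/kaspersky/kuma/kuma %s --id %s --uninstall\n' "$1" "$2"
}

uninstall_cmd collector 11111111-2222-3333-4444-555555555555
uninstall_cmd storage   66666666-7777-8888-9999-000000000000
```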
Page top

[Topic 217917]

Installing KUMA in production environment

Prior to installing the program, carefully read the KUMA installation requirements as well as the hardware and software requirements. The KUMA installation takes place over several stages:

  1. Configuring network access

    Make sure all the necessary ports are open to allow KUMA components to interact with each other based on your organization's security structure.

  2. Preparing the test machine

    The test machine is used during the program installation process: the installer files are unpacked and run on it.

  3. Preparing the target machines

    The program components are installed on the target machines.

  4. Preparing the inventory file

    Create an inventory file describing the network structure of the program components that the installer can use to deploy KUMA.

  5. Installing the program

    Install the program and get the URL and login credentials for the web interface.

  6. Creating services

    Create services in the KUMA web interface and install them on the target machines intended for them.

In this section

Configuring network access

Preparing the test machine

Preparing the target machine

Preparing the inventory file

Installing the program

Creating services

Changing CA certificate

Additional ClickHouse clusters

Page top

[Topic 217770]

Configuring network access

For the program to run correctly, you need to ensure that the KUMA components are able to interact with other components and programs over the network via the protocols and ports specified during the installation of the KUMA components. The table below shows the default network port values.

Network ports used for communication between KUMA components:

| Protocol | Port | Direction | Purpose of the connection |
|----------|------|-----------|---------------------------|
| HTTPS | 7222 | From the KUMA client to the server with the KUMA Core component. | Reverse proxy in the CyberTrace system. |
| HTTPS | 8123 | From the storage service to the ClickHouse cluster node. | Writing and receiving normalized events in the ClickHouse cluster. |
| HTTPS | 9009 | Between ClickHouse cluster replicas. | Internal communication between ClickHouse cluster replicas for transferring cluster data. |
| TCP | 2181 | From ClickHouse cluster nodes to the ClickHouse keeper replication coordination service. | Receiving and writing of replication metadata by replicas of ClickHouse servers. |
| TCP | 2182 | From one ClickHouse keeper replication coordination service to another. | Internal communication between replication coordination services to reach a quorum. |
| TCP | 7210 | From all KUMA components to the KUMA Core server. | Receipt of the KUMA configuration from the KUMA Core server. |
| TCP | 7215 | From the KUMA collector to the KUMA correlator. | Forwarding of events by the collector to the KUMA correlator. |
| TCP | 7220 | From the KUMA client to the server with the KUMA Core component. | User access to the KUMA web interface. |
| TCP | 7221 and other ports used for service installation as the --api.port <port> parameter value | From KUMA Core to KUMA services. | Administration of services from the KUMA web interface. |
| TCP | 7223 | To the KUMA Core server. | Default port used for API requests. |
| TCP | 8001 | From Victoria Metrics to the ClickHouse server. | Receiving ClickHouse server operation metrics. |
| TCP | 9000 | From the ClickHouse client to the ClickHouse cluster node. | Writing and receiving data in the ClickHouse cluster. |
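
To verify connectivity for a port from the table, a quick TCP probe can help. This sketch assumes bash is available (for its /dev/tcp feature); kuma-core.example.com is a placeholder for your Core server.

```shell
# Sketch: probe a TCP port from the table above. Uses bash's /dev/tcp feature;
# the host name below is a placeholder.
port_open() {
  # $1 = host, $2 = port; succeeds if a TCP connection can be opened.
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if port_open kuma-core.example.com 7220; then
  echo "KUMA web interface port is reachable"
else
  echo "KUMA web interface port is NOT reachable"
fi
```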

Page top

[Topic 222083]

Preparing the test machine

The test machine is used during the program installation process: the installer files are unpacked and run on it.

To prepare the test machine for the KUMA installation:

  1. Install an operating system on the test machine and then install the necessary packages.
  2. Configure the network interface.

    For convenience, you can use the graphical utility nmtui.

  3. Configure the system time to synchronize with the NTP server:
    1. If the machine does not have direct Internet access, edit the /etc/chrony.conf file to replace 2.pool.ntp.org with the name or IP address of your organization's internal NTP server.
    2. Start the system time synchronization service by executing the following command:

      sudo systemctl enable --now chronyd

    3. Wait a few seconds and execute the following command:

      sudo timedatectl | grep 'System clock synchronized'

      If the system time is synchronized correctly, the output will contain the line "System clock synchronized: yes".

  4. Generate an SSH key for authentication on the SSH servers of the target machines by executing the following command:

    sudo ssh-keygen -f /root/.ssh/id_rsa -N "" -C kuma-ansible-installer

  5. Make sure the test machine has network access to all the target machines by host name and copy the SSH key to each of them by executing the following command:

    sudo ssh-copy-id -i /root/.ssh/id_rsa root@<host name of the target machine>

  6. Copy the archive with the KUMA installer to the test machine and unpack it using the following command (about 2 GB of disk space is required):

    sudo tar -xpf kuma-ansible-installer-<version>.tar.gz

The test machine is ready for the KUMA installation.
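
The chrony.conf edit in step 3 amounts to a one-line substitution. The sketch below shows it on sample input rather than the real /etc/chrony.conf; "ntp.internal.mydomain.com" is a placeholder for your internal NTP server.

```shell
# Sketch of the step-3 substitution, demonstrated on sample input.
# On a real machine you would run sed -i with this expression on /etc/chrony.conf.
replace_ntp() {
  sed 's/2\.pool\.ntp\.org/ntp.internal.mydomain.com/'
}

echo 'pool 2.pool.ntp.org iburst' | replace_ntp
```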

Page top

[Topic 217955]

Preparing the target machine

The program components are installed on the target machines.

To prepare the target machine for the installation of KUMA components:

  1. Install an operating system on the target machine and then install the necessary packages.
  2. Configure the network interface.

    For convenience, you can use the graphical utility nmtui.

  3. Configure the system time to synchronize with the NTP server:
    1. If the machine does not have direct Internet access, edit the /etc/chrony.conf file to replace 2.pool.ntp.org with the name or IP address of your organization's internal NTP server.
    2. Start the system time synchronization service by executing the following command:

      sudo systemctl enable --now chronyd

    3. Wait a few seconds and execute the following command:

      sudo timedatectl | grep 'System clock synchronized'

      If the system time is synchronized correctly, the output will contain the line "System clock synchronized: yes".

  4. Specify the host name. It is highly recommended to use the FQDN. For example: kuma-1.mydomain.com.

    You should not change the KUMA host name after installation: this will make it impossible to verify the authenticity of certificates and will disrupt the network communication between the program components.

  5. Register the target machine in your organization's DNS zone to allow host names to be translated to IP addresses.

    If your organization does not use a DNS server, you can use the /etc/hosts file for name resolution. The content of this file can be generated automatically for each target machine when installing KUMA.

  6. Execute the following command and write down the result:

    hostname -f

    You will need this host name when installing KUMA. The test machine must be able to access the target machine using this name.

The target machine is ready for the installation of KUMA components.

The test machine can be used as a target machine. To do so, prepare the test machine, then follow steps 4–6 in the instructions for preparing the target machine.
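
When name resolution relies on the /etc/hosts file (step 5), the generated entries might look like the following sketch; the addresses and host names are illustrative.

```
192.168.0.10  kuma-core.mydomain.com
192.168.0.20  kuma-collector-1.mydomain.com
192.168.0.30  kuma-correlator-1.mydomain.com
192.168.0.41  kuma-storage-1.mydomain.com
```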

Page top

[Topic 222085]

Preparing the inventory file

Installation, update, and removal of KUMA components are performed from the folder containing the unpacked installer, using the Ansible tool and a user-created inventory file that lists the hosts of KUMA components and other parameters. The inventory file is in the YAML format.

To create an inventory file:

  1. Go to the KUMA installer folder by executing the following command:

    cd kuma-ansible-installer

  2. Create an inventory file by copying the distributed.inventory.yml.template:

    cp distributed.inventory.yml.template distributed.inventory.yml

  3. Edit the inventory file parameters:
    • If you want demonstration services to be created during the installation, set the deploy_example_services parameter value to true.

      deploy_example_services: true

      Demonstration services can only be created during the initial installation of KUMA. When updating the system using the same inventory file, no demonstration services will be created.

    • If the machines are not registered in your organization's DNS zone, set the generate_etc_hosts parameter to true, and for each machine in the inventory, replace the ip (0.0.0.0) parameter values with the actual IP addresses.

      generate_etc_hosts: true

      When using this parameter, the installer will automatically add the IP addresses of the machines from the inventory file to the /etc/hosts files on the machines where KUMA components are installed.

    • If you are installing KUMA in a production environment and have a separate test machine, set the ansible_connection parameter to ssh:

      ansible_connection: ssh

  4. In the inventory file, specify the host of the target machines on which KUMA components should be installed. If the machines are not registered in the DNS zone of your organization, replace the parameter values ip (0.0.0.0) with the actual IP addresses.

    The hosts are specified in the following sections of the inventory file:

    • core is the section for specifying the host and IP address of the target machine on which KUMA Core will be installed. You may only specify one host in this section.
    • collector is the section for specifying the host and IP address of the target machine on which the collector will be installed. You may specify one or more hosts in this section.
    • correlator is the section for specifying the host and IP address of the target machine on which the correlator will be installed. You may specify one or more hosts in this section.
    • storage is the section for specifying the hosts and IP addresses of the target machines on which storage components will be installed. You may specify one or more hosts in this section.

      Storage components: clusters, shards, replicas, and keepers.

      A ClickHouse cluster is a logical group of machines that possess all accumulated normalized KUMA events. It consists of one or more logical shards.

      A shard is a logical group of machines that possess a specific portion of all normalized events accumulated in the cluster. It consists of one or more replicas. Increasing the number of shards lets you do the following:

      • Accumulate more events by increasing the total number of servers and disk space.
      • Absorb a larger stream of events by distributing the load associated with an influx of new events.
      • Reduce the time taken to search for events by distributing search areas among multiple machines.

      A replica is a machine that is a member of the logical shard and possesses a copy of the data of this shard. If there are multiple replicas, there are multiple copies (data is replicated). Increasing the number of replicas lets you do the following:

      • Improve fault tolerance.
      • Distribute the total load related to data searches among multiple machines (although it's best to increase the number of shards for this purpose).

      A keeper is a machine that coordinates data replication at the cluster level. There must be at least one machine with this role for the entire cluster. The recommended number of machines with this role is three, and the number of machines involved in coordinating replication must be odd. The keeper and replica roles can be combined in one machine.

      Each machine in the storage section can have the following parameter combinations:

      • shard + replica + keeper
      • shard + replica
      • keeper

      If the shard and replica parameters are specified, the machine is a part of a cluster and helps accumulate and search for normalized KUMA events. If the keeper parameter is additionally specified, the machine also helps coordinate data replication at the cluster-wide level.

      If only keeper is specified, the machine will not accumulate normalized events, but it will participate in coordinating data replication at the cluster-wide level. The keeper parameter values must be unique.

      If several replicas are defined within the same shard, the value of the replica parameter must be unique within this shard.

The inventory file is created. It can be used to install KUMA.
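
A finished file might look like the following sketch. The group and parameter names are illustrative; the authoritative structure is distributed.inventory.yml.template from your installer version, and all host names and addresses are placeholders. The first storage host combines the shard, replica, and keeper roles; the second is a cluster member only.

```yaml
# Hypothetical fragment of distributed.inventory.yml.
all:
  vars:
    deploy_example_services: false
    generate_etc_hosts: true
    ansible_connection: ssh
kuma:
  children:
    kuma_core:
      hosts:
        kuma-core.mydomain.com:
          ip: 192.168.0.10
    kuma_collector:
      hosts:
        kuma-collector-1.mydomain.com:
          ip: 192.168.0.20
    kuma_correlator:
      hosts:
        kuma-correlator-1.mydomain.com:
          ip: 192.168.0.30
    kuma_storage:
      hosts:
        kuma-storage-1.mydomain.com:   # shard + replica + keeper
          ip: 192.168.0.41
          shard: 1
          replica: 1
          keeper: 1
        kuma-storage-2.mydomain.com:   # shard + replica
          ip: 192.168.0.42
          shard: 1
          replica: 2
```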

It is recommended that you not remove the inventory file after installing KUMA:

  • If you change this file (for example, add information about a new server for the collector), you can reuse it to update the system with a new component.
  • You can use this inventory file to delete KUMA.
Page top

[Topic 217914]

Installing the program

KUMA is installed using the Ansible tool and the YML inventory file. The installation is performed from the test machine, and the KUMA components are installed on the target machines.

To install KUMA:

  1. On the test machine, open the folder containing the unpacked installer.
  2. Place the file with the license key in the folder <installer folder>/roles/kuma/files/.

    The key file must be named license.key.

  3. Launch the installer by executing the following command:

    sudo ./install.sh distributed.inventory.yml

  4. Accept the terms of the End User License Agreement.

    If you do not accept the terms of the End User License Agreement, the program will not be installed.

KUMA components are installed on the target machines. The screen will display the URL of the KUMA web interface and the user name and password that must be used to access the web interface.

By default, the KUMA web interface address is https://kuma.example.com:7220.

Default login credentials (after the first login, you must change the password of the admin account):
- user name—admin
- password—mustB3Ch@ng3d!

It is recommended that you save the inventory file used to install the program. It can be used to add components to the system or remove KUMA.

Page top

[Topic 217903]

Creating services

KUMA services should be installed only after KUMA deployment is complete. The services can be installed in any order.

When deploying several KUMA services on the same host, you must specify unique ports for each service using the --api.port <port> parameter during installation.

For instructions on creating specific services, see the sections on creating collectors, correlators, and storages.

Page top

[Topic 217747]

Changing CA certificate

After KUMA Core is installed, a unique self-signed CA certificate with the matching key is generated. This CA certificate is used to sign all other certificates for internal communication between KUMA components and REST API requests. The CA certificate is stored on the KUMA Core server in the /opt/kaspersky/kuma/core/certificates/ folder.

You can use your company's certificate and key instead of the self-signed KUMA CA certificate and key.

Before changing the KUMA certificate, create backup copies of the current certificate and key under the names backup_external.cert and backup_external.key.

To change the KUMA certificate:

  1. Rename your company's certificate and key files to external.cert and external.key.

    Keys must be in PEM format.

  2. Place external.cert and external.key to the /opt/kaspersky/kuma/core/certificates/ folder.
  3. Restart the kuma-core service by running the sudo systemctl restart kuma-core command.
  4. Restart the browser hosting the KUMA web interface.

Your company's certificate and key are now used for internal communication between KUMA components and for REST API requests.
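
Before restarting kuma-core, it can be worth confirming that the new certificate and key actually belong together. This is an optional sketch, assuming openssl is installed; the file names follow step 1, and cert_key_match is a hypothetical helper that compares the public keys derived from each file.

```shell
# Sketch: verify that a certificate and private key are a matching pair by
# comparing their public keys. Assumes openssl is installed.
cert_key_match() {
  # $1 = certificate (PEM), $2 = private key (PEM)
  cert_pub=$(openssl x509 -noout -pubkey -in "$1" 2>/dev/null)
  key_pub=$(openssl pkey -pubout -in "$2" 2>/dev/null)
  [ -n "$cert_pub" ] && [ "$cert_pub" = "$key_pub" ]
}

if cert_key_match external.cert external.key; then
  echo "certificate and key match"
else
  echo "certificate and key do NOT match (or files are missing)"
fi
```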

Page top

[Topic 238599]

Additional ClickHouse clusters

More than one ClickHouse storage cluster can be added to KUMA. The process of adding a new ClickHouse cluster consists of several steps:

  1. Preparing the target machine

    On the target machine, specify the FQDN of the server with the KUMA Core component in the /etc/hosts file.

  2. Preparing cluster inventory file

    Depending on the type of installation (local or remote), the inventory file is prepared on the target machine or on the KUMA Core machine.

  3. Installing additional cluster
  4. Creating a storage

When creating storage cluster nodes, verify the network connectivity of the system and open the ports used by the components.

In this section

Preparing cluster inventory file

Installing additional cluster

Deleting a cluster

Page top

[Topic 238675]

Preparing cluster inventory file

Installation, update, and removal of KUMA components are performed from the directory containing the unpacked installer, using the Ansible tool and a user-created inventory file that lists the hosts of KUMA components and other parameters. The inventory file is in the YAML format.

To create an inventory file:

  1. Go to the directory containing the unpacked KUMA installer by executing the following command:

    cd kuma-ansible-installer

  2. Create an inventory file by copying additional-storage-cluster.inventory.yml.template:

    cp additional-storage-cluster.inventory.yml.template additional-storage-cluster.inventory.yml

  3. Edit the inventory file parameters:
    • If you want demonstration services to be created during the installation, set the deploy_example_services parameter value to true.

      deploy_example_services: true

      Demonstration services can only be created during the initial installation of KUMA. When updating the system using the same inventory file, no demonstration services will be created.

    • If the machines are not registered in your organization's DNS zone, set the generate_etc_hosts parameter to true, and for each machine in the inventory, replace the ip (0.0.0.0) parameter values with the actual IP addresses.

      generate_etc_hosts: true

      When using this parameter, the installer will automatically add the IP addresses of the machines from the inventory file to the /etc/hosts files on the machines where KUMA components are installed.

    • Set the ansible_connection parameter:
      • Specify local if you want to install the cluster locally:

        ansible_connection: local

      • Specify ssh if you want to install the cluster remotely, from a server with KUMA Core installed:

        ansible_connection: ssh

  4. In the storage section of the inventory file, specify the fully qualified domain names (FQDN) of the hosts on which you want to install the cluster nodes. If the machines are not registered in the DNS zone of your organization, replace the ip (0.0.0.0) parameter values with the actual IP addresses.

    Storage components: clusters, shards, replicas, and keepers.

    A ClickHouse cluster is a logical group of machines that possess all accumulated normalized KUMA events. It consists of one or more logical shards.

    A shard is a logical group of machines that possess a specific portion of all normalized events accumulated in the cluster. It consists of one or more replicas. Increasing the number of shards lets you do the following:

    • Accumulate more events by increasing the total number of servers and disk space.
    • Absorb a larger stream of events by distributing the load associated with an influx of new events.
    • Reduce the time taken to search for events by distributing search areas among multiple machines.

    A replica is a machine that is a member of the logical shard and possesses a copy of the data of this shard. If there are multiple replicas, there are multiple copies (data is replicated). Increasing the number of replicas lets you do the following:

    • Improve fault tolerance.
    • Distribute the total load related to data searches among multiple machines (although it's best to increase the number of shards for this purpose).

    A keeper is a machine that coordinates data replication at the cluster level. There must be at least one machine with this role for the entire cluster. The recommended number of machines with this role is three, and the number of machines involved in coordinating replication must be odd. The keeper and replica roles can be combined in one machine.

    Each machine in the storage section can have the following parameter combinations:

    • shard + replica + keeper
    • shard + replica
    • keeper

    If the shard and replica parameters are specified, the machine is a part of a cluster and helps accumulate and search for normalized KUMA events. If the keeper parameter is additionally specified, the machine also helps coordinate data replication at the cluster-wide level.

    If only keeper is specified, the machine will not accumulate normalized events, but it will participate in coordinating data replication at the cluster-wide level. The keeper parameter values must be unique.

    If several replicas are defined within the same shard, the value of the replica parameter must be unique within this shard.

The inventory file is created. It can be used to create a ClickHouse cluster.
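
For an additional cluster, the storage section might look like the following sketch. The group and parameter names are illustrative; the authoritative structure is additional-storage-cluster.inventory.yml.template from your installer version, and all host names and addresses are placeholders. The last host shows the keeper-only combination: it coordinates replication but does not accumulate events.

```yaml
# Hypothetical fragment of additional-storage-cluster.inventory.yml.
kuma:
  children:
    kuma_storage:
      hosts:
        kuma-stor2-1.mydomain.com:        # shard + replica
          ip: 192.168.1.41
          shard: 1
          replica: 1
        kuma-stor2-2.mydomain.com:        # shard + replica
          ip: 192.168.1.42
          shard: 1
          replica: 2
        kuma-stor2-keeper.mydomain.com:   # keeper only
          ip: 192.168.1.43
          keeper: 1
```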

It is recommended that you not remove the inventory file after installing KUMA:

  • If you change this file (for example, add information about a new server for the collector), you can reuse it to update the system with a new component.
  • You can use this inventory file to delete KUMA.
Page top

[Topic 238677]

Installing additional cluster

KUMA is installed using the Ansible tool and the YML inventory file.

To install an additional KUMA cluster:

  1. On a preconfigured target machine or on a machine with the KUMA Core installed (depending on the ansible_connection setting), go to the folder containing the unpacked installer.
  2. Launch the installer by executing the following command:

    PYTHONPATH="$(pwd)/ansible/site-packages:${PYTHONPATH}" python3 ./ansible/bin/ansible-playbook \
      -i additional-storage-cluster.inventory.yml \
      additional-storage-cluster.playbook.yml

The additional ClickHouse cluster is installed. To write data to the cluster using KUMA, you need to create a storage.

Page top

[Topic 239283]

Deleting a cluster

To delete a ClickHouse cluster,

execute the following command:

systemctl stop kuma-storage-<storage ID> && systemctl stop kuma-clickhouse \
  && systemctl disable kuma-storage-<storage ID> && systemctl disable kuma-clickhouse \
  && rm -rf /usr/lib/systemd/system/kuma-storage-<storage ID>.service \
  && rm -rf /usr/lib/systemd/system/kuma-clickhouse.service \
  && systemctl daemon-reload && rm -rf /opt/kaspersky/kuma

The KUMA storage and ClickHouse cluster services are stopped and deleted.

Page top

[Topic 217962]

Removing KUMA

To remove KUMA, use the Ansible tool and the user-generated inventory file.

To remove KUMA:

  1. On the test machine, go to the installer folder:

    cd kuma-ansible-installer

  2. Execute the following command:

    sudo ./uninstall.sh <inventory file>

KUMA and all of the program data will be removed from the server.

The databases that were used by KUMA (for example, the ClickHouse storage database) and the information they contain must be deleted separately.

Page top

[Topic 222156]

Updating previous versions of KUMA

You can install KUMA version 2.x over version 1.5.x or later. To do this, follow the instructions for installing the program in a production environment, and when you reach the stage of preparing the inventory file, list the hosts of the already deployed KUMA system in it.

After updating KUMA, you need to clear your browser cache.

If errors about suspicious strings start appearing in the storage logs after updating KUMA, run the following command on the KUMA storage server: touch /opt/kaspersky/kuma/clickhouse/data/flags/force_restore_data && systemctl restart kuma-clickhouse.

Page top

[Topic 222243]

About the End User License Agreement

The End User License Agreement is a legal agreement between you and AO Kaspersky Lab that specifies the conditions under which you can use the program.

Read the terms of the End User License Agreement carefully before using the program for the first time.

You can familiarize yourself with the terms of the End User License Agreement in the following ways:

  • During the installation of KUMA.
  • By reading the LICENSE document. This document is included in the distribution kit and is located inside the installer in the /kuma-ansible-installer/roles/kuma/files/ folder.

    After the program is deployed, the document is available at /opt/kaspersky/kuma/LICENSE.

You accept the terms of the End User License Agreement by confirming your acceptance of the End User License Agreement during the program installation. If you do not accept the terms of the End User License Agreement, you must cease the installation of the program and must not use the program.

Page top

[Topic 233460]

About the license

A license is a time-limited right to use the program, granted under the terms of the End User License Agreement.

A license entitles you to the following kinds of services:

  • Use of the program in accordance with the terms of the End User License Agreement
  • Getting technical support

The scope of services and the duration of usage depend on the type of license under which the program was activated.

A license is provided when the program is purchased. When the license expires, the program continues to work but with limited functionality (for example, new resources cannot be created). To continue using KUMA with its full functionality, you need to renew your license.

We recommend that you renew your license no later than its expiration date to ensure maximum protection against cyberthreats.

Page top

[Topic 233471]

About the License Certificate

A License Certificate is a document that is provided to you along with a key file or activation code.

The License Certificate contains the following information about the license being granted:

  • License key or order number
  • Information about the user who is granted the license
  • Information about the program that can be activated under the provided license
  • Restriction on the number of licensing units (for example, the number of events that can be processed per second)
  • Start date of the license term
  • License expiration date or license term
  • License type

Page top

[Topic 233462]

About the license key

A license key is a sequence of bits that you can apply to activate and then use the program in accordance with the terms of the End User License Agreement. License keys are generated by Kaspersky specialists.

You can add a license key to the program by applying a key file. The license key is displayed in the program interface as a unique alphanumeric sequence after you add it to the program.

The license key may be blocked by Kaspersky in case the terms of the License Agreement have been violated. If the license key has been blocked, you need to add another one if you want to use the program.

A license key may be active or reserve.

An active license key is the license key currently used by the program. An active license key can be added for a trial or commercial license. The program cannot have more than one active license key.

A reserve license key is a license key that entitles the user to use the program but is not currently in use. The reserve license key automatically becomes active when the license associated with the current active license key expires. A reserve license key can be added only if an active license key has already been added.

A license key for a trial license can be added only as an active license key; it cannot be added as a reserve license key.

Page top

[Topic 233467]

About the key file

The key file is a file named license.key provided to you by Kaspersky. The key file is used to add a license key that activates the program.

You receive a key file at the email address that you provided after purchasing KUMA.

You do not need to connect to Kaspersky activation servers in order to activate the program with a key file.

If the key file has been accidentally deleted, you can restore it. You may need a key file, for example, to register with Kaspersky CompanyAccount.

To restore the key file, you need to do one of the following:

  • Contact the license seller.
  • Get a key file on the Kaspersky website based on the available activation code.

Page top

[Topic 217709]

Adding a license key to the program web interface

You can add an application license key in the KUMA web interface.

Only users with the Administrator role can add a license key.

To add a license key to the KUMA web interface:

  1. Open the KUMA web interface and select Settings → License.

    The window with KUMA license conditions opens.

  2. Select the key you want to add:
    • If you want to add the active key, click the Add active license key button.

      This button is not displayed if a license key has already been added to the program. If you want to add an active license key instead of the key that has already been added, the current license key must be deleted.

    • If you want to add the reserve key, click the Add reserve license key button.

      This button is inactive until an active key is added. If you want to add a reserve license key instead of the key that has already been added, the current reserve license key must be deleted.

    The license key file selection window appears on the screen.

  3. Select a license file by specifying the path to the folder and the name of the license key file with the KEY extension.

The license key from the selected file will be loaded into the program. Information about the license key is displayed under Settings → License.

Page top

[Topic 218040]

Viewing information about an added license key in the program web interface

In the KUMA web interface, you can view information about the added license key. Information about the license key is displayed under Settings → License.

Only users with the Administrator role can view license information.

The License tab window displays the following information about added license keys:

  • Expires on—date when the license key expires.
  • Days remaining—number of days before the license expires.
  • EPS available—number of events processed per second supported by the license.
  • EPS current—current average number of events per second processed by KUMA.
  • License key—unique alphanumeric sequence.
  • Company—name of the company that purchased the license.
  • Client name—name of the client who purchased the license.
  • Modules—modules available for the license.
Page top

[Topic 217963]

Removing a license key in the program web interface

In KUMA, you can remove an added license key from the program (for example, if you need to replace the current license key with a different key). After the license key is removed, the program stops receiving and processing events. Event receiving and processing resumes the next time you add a license key.

Only users with the Administrator role can delete license keys.

To delete an added license key:

  1. Open the KUMA web interface and select Settings → License.

    The window with KUMA license conditions opens.

  2. Click the delete icon on the license that you want to delete.

    A confirmation window opens.

  3. Confirm deletion of the license key.

The license key will be removed from the program.

Page top

[Topic 217923]

Integration with Kaspersky Security Center

You can configure integration with selected Kaspersky Security Center servers for one, several, or all KUMA tenants. If Kaspersky Security Center integration is enabled, you can import information about the assets protected by this application, manage assets using tasks, and import events from the Kaspersky Security Center event database.

First, you need to make sure that the relevant Kaspersky Security Center server allows an incoming connection for the server hosting KUMA.

Configuring KUMA integration with Kaspersky Security Center includes the following steps:

  1. Creating a user account in the Kaspersky Security Center Administration Console

    The credentials of this account are used when creating a secret to establish a connection with Kaspersky Security Center. The account must be assigned general administrator rights.

    For more details about creating a user account and assigning permissions to a user, please refer to the Kaspersky Security Center Help Guide.

  2. Creating a secret of the credentials type for connecting to Kaspersky Security Center
  3. Configuring Kaspersky Security Center integration settings
  4. Creating a connection to the Kaspersky Security Center server for importing information about assets

    If you want to import information about assets registered on Kaspersky Security Center servers into KUMA, you need to create a separate connection to each Kaspersky Security Center server for each selected tenant.

    If integration is disabled for the tenant or there is no connection to Kaspersky Security Center, an error is displayed in the KUMA web interface when attempting to import information about assets. In this case, the import process does not start.

In this section

Configuring Kaspersky Security Center integration settings

Adding a tenant to the list for Kaspersky Security Center integration

Creating Kaspersky Security Center connection

Editing Kaspersky Security Center connection

Deleting Kaspersky Security Center connection

Working with Kaspersky Security Center tasks

Importing events from the Kaspersky Security Center database

Page top

[Topic 217933]

Configuring Kaspersky Security Center integration settings

To configure the settings for integration with Kaspersky Security Center:

  1. Open the KUMA web interface and select Settings → Kaspersky Security Center.

    The Kaspersky Security Center integration by tenant window opens.

  2. Select the tenant for which you want to configure integration with Kaspersky Security Center.

    The Kaspersky Security Center integration window opens.

  3. For the Disabled check box, do one of the following:
    • Clear the check box if you want to enable integration with Kaspersky Security Center for this tenant.
    • Select the check box if you want to disable integration with Kaspersky Security Center for this tenant.

    This check box is cleared by default.

  4. In the Data refresh interval field, specify the time interval at which KUMA updates data on Kaspersky Security Center devices.

    The interval is specified in hours and must be an integer.

    The default time interval is 12 hours.

  5. Click the Save button.

The Kaspersky Security Center integration settings for the selected tenant will be configured.

If the required tenant is not in the list of tenants, you need to add it to the list.

Page top

[Topic 232734]

Adding a tenant to the list for Kaspersky Security Center integration

To add a tenant to the list of tenants for integration with Kaspersky Security Center:

  1. Open the KUMA web interface and select Settings → Kaspersky Security Center.

    The Kaspersky Security Center integration by tenant window opens.

  2. Click the Add tenant button.

    The Kaspersky Security Center integration window opens.

  3. In the Tenant drop-down list, select the tenant that you need to add.
  4. Click the Save button.

The selected tenant will be added to the list of tenants for integration with Kaspersky Security Center.

Page top

[Topic 217788]

Creating Kaspersky Security Center connection

To create a new Kaspersky Security Center connection:

  1. Open the KUMA web interface and select Settings → Kaspersky Security Center.

    The Kaspersky Security Center integration by tenant window opens.

  2. Select the tenant for which you want to create a connection to Kaspersky Security Center.
  3. Click the Add connection button and define the values for the following settings:
    • Name (required)—the name of the connection. The name can contain from 1 to 128 Unicode characters.
    • URL (required)—the URL of the Kaspersky Security Center server in hostname:port or IPv4:port format.
    • In the Secret drop-down list, select the secret resource with the Kaspersky Security Center account credentials or create a new secret resource.
      1. Click the add resource button.

        The secret window is displayed.

      2. Enter information about the secret:
        1. In the Name field, enter a name for the added secret.
        2. In the Tenant drop-down list, select the tenant that will own the Kaspersky Security Center account credentials.
        3. In the Type drop-down list, select credentials.
        4. In the User and Password fields, enter the account credentials for your Kaspersky Security Center server.
        5. If you want, enter a Description of the secret.
      3. Click Save.

      The selected secret can be changed by clicking the edit button.

    • Disabled—the state of the connection to the selected Kaspersky Security Center server. If the check box is selected, the connection to the selected server is inactive. If this is the case, you cannot use this connection to connect to the Kaspersky Security Center server.

      This check box is cleared by default.

  4. If you want KUMA to import only assets that are connected to secondary servers or included in groups:
    1. Click the Load hierarchy button.
    2. Select the check boxes next to the names of the secondary servers and groups from which you want to import asset information.
    3. If you want to import assets only from new groups, select the Import assets from new groups check box.

    If no check boxes are selected, information about all assets of the selected Kaspersky Security Center server is uploaded during the import.

  5. Click the Save button.

The connection to the Kaspersky Security Center server is now created. It can be used to import information about assets from Kaspersky Security Center to KUMA and to create asset-related tasks in Kaspersky Security Center from KUMA.
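As an illustration of the URL format described above (hostname:port or IPv4:port), here is a minimal Python sketch of a format check. The function name and the regular expression are illustrative assumptions; the pattern is looser than any validation KUMA itself may perform:

```python
import re

# Loose illustrative pattern for the hostname:port / IPv4:port format
# described above; this is not KUMA's actual validation logic.
KSC_URL = re.compile(r"^(?P<host>[A-Za-z0-9][A-Za-z0-9.-]*):(?P<port>\d{1,5})$")

def check_ksc_url(url: str) -> bool:
    match = KSC_URL.match(url)
    return match is not None and 0 < int(match.group("port")) <= 65535

print(check_ksc_url("ksc.example.com:13299"))  # True
print(check_ksc_url("ksc.example.com"))        # False: the port is missing
```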

Page top

[Topic 217849]

Editing Kaspersky Security Center connection

To edit a Kaspersky Security Center connection:

  1. Open the KUMA web interface and select SettingsKaspersky Security Center.

    The Kaspersky Security Center integration by tenant window opens.

  2. Select the tenant for which you want to configure integration with Kaspersky Security Center.

    The Kaspersky Security Center integration window opens.

  3. Click the Kaspersky Security Center connection you want to change.

    The window with the selected Kaspersky Security Center connection parameters opens.

  4. Make the necessary changes to the settings.
  5. Click the Save button.

The Kaspersky Security Center connection will be changed.

Page top

[Topic 217829]

Deleting Kaspersky Security Center connection

To delete a Kaspersky Security Center connection:

  1. Open the KUMA web interface and select SettingsKaspersky Security Center.

    The Kaspersky Security Center integration by tenant window opens.

  2. Select the tenant for which you want to configure integration with Kaspersky Security Center.

    The Kaspersky Security Center integration window opens.

  3. Select the Kaspersky Security Center connection that you want to delete.
  4. Click the Delete button.

The Kaspersky Security Center connection will be deleted.

Page top

[Topic 218045]

Working with Kaspersky Security Center tasks

You can connect Kaspersky Security Center assets to KUMA and download database and application module updates to these assets, or run an anti-virus scan on them by using Kaspersky Security Center tasks. Tasks are started in the KUMA web interface.

To run Kaspersky Security Center tasks on assets connected to KUMA, the following procedure is recommended:

  1. Creating a user account in the Kaspersky Security Center Administration Console

    The credentials of this account are used when creating a secret to establish a connection with Kaspersky Security Center, and can be used to create a task.

    For more details about creating a user account and assigning permissions to a user, please refer to the Kaspersky Security Center Help Guide.

  2. Creating KUMA tasks in Kaspersky Security Center
  3. Configuring KUMA integration with Kaspersky Security Center
  4. Importing asset information from Kaspersky Security Center into KUMA
  5. Assigning a category to the imported assets

    After import, the assets are automatically placed in the Uncategorized devices group. You can assign one of the existing categories to the imported assets, or create a category and assign it to the assets.

  6. Running tasks on assets

    You can manually start tasks in the asset information or configure tasks to start automatically.

In this section

Starting Kaspersky Security Center tasks manually

Starting Kaspersky Security Center tasks automatically

Checking the status of Kaspersky Security Center tasks

Page top

[Topic 218009]

Starting Kaspersky Security Center tasks manually

You can manually run the anti-virus database and application module update task, as well as the anti-virus scan task, on Kaspersky Security Center assets connected to KUMA. The assets must have Kaspersky Endpoint Security for Windows or Linux installed.

First, you need to configure the integration of Kaspersky Security Center with KUMA and create tasks in Kaspersky Security Center.

To manually start a Kaspersky Security Center task:

  1. In the Assets section of the KUMA web interface, select the asset that was imported from Kaspersky Security Center.

    The Asset details window opens.

  2. Click the KSC response button.

    This button is displayed if the connection to the Kaspersky Security Center that owns the selected asset is enabled.

  3. In the opened Select task window, select the check boxes next to the tasks that you want to start, and click the Start button.

Kaspersky Security Center starts the selected tasks.

Some types of tasks are available only for certain assets.

You can obtain vulnerability and software information only for assets running a Windows operating system.

Page top

[Topic 218008]

Starting Kaspersky Security Center tasks automatically

Kaspersky Security Center tasks can be started automatically by correlators. When certain conditions are met, the correlator activates response rules that contain a list of Kaspersky Security Center tasks to start and identify the relevant assets.

To configure a response resource that correlators can use to start Kaspersky Security Center tasks automatically:

  1. In the KUMA web interface, select Resources → Response.
  2. Click the Add response button and set parameters as described below:
    • In the Name field enter the resource name that will let you identify it.
    • In the Type drop-down list, select ksctasks (Kaspersky Security Center tasks).
    • In the Kaspersky Security Center task drop-down list, select the tasks that must be run when the correlator linked to this response resource is triggered.

      You can select several tasks. When a response is activated, it picks only the first task from the list of the selected tasks that match the relevant asset. The rest of the matching tasks are disregarded. If you want to start multiple tasks based on one condition, you need to create multiple response rules.

    • Under Event field, select the event fields whose values identify the assets on which the tasks must be started. Possible values:
      • SourceAssetID
      • DestinationAssetID
      • DeviceAssetID
  3. If necessary, in the Workers field specify the number of response processes that can be run simultaneously.
  4. If necessary, use the Filter settings block to specify the conditions under which events will be processed by the created resource. You can select an existing filter resource from the drop-down list or create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit button.

  5. Click Save.

The Response resource is created. It can now be linked to a Correlator that would trigger it, starting a Kaspersky Security Center task as a result.
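To make the semantics of two of the filter operators listed above concrete, here is a small Python sketch. It illustrates the described behavior using Python's ipaddress module and one plausible reading of hasBit; the function names are illustrative and this is not KUMA's implementation:

```python
import ipaddress

def in_subnet(left: str, right: str) -> bool:
    # inSubnet: the left operand (an IP address) lies within the
    # right operand (a subnet in CIDR notation).
    return ipaddress.ip_address(left) in ipaddress.ip_network(right)

def has_bit(left: int, positions: list) -> bool:
    # hasBit: the left operand contains set bits at the positions
    # listed in the right operand (one plausible reading of the description).
    return all(left >> p & 1 for p in positions)

print(in_subnet("10.20.1.7", "10.20.0.0/16"))  # True
print(has_bit(0b101, [0, 2]))                  # True: bits 0 and 2 are set
```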

Page top

[Topic 217753]

Checking the status of Kaspersky Security Center tasks

In the KUMA web interface, you can check the status of Kaspersky Security Center tasks by viewing the events received by the collector that listens for Kaspersky Security Center events.

To check the status of Kaspersky Security Center tasks:

  1. In KUMA, select Resources → Active services.
  2. Select the collector that is configured to receive events from the Kaspersky Security Center server and click the Go to Events button.

A new browser tab will open in the Events section of KUMA. The table displays events from the Kaspersky Security Center server. The status of the tasks can be seen in the Name column.

Kaspersky Security Center event fields:

  • Name—status or type of the task.
  • Message—message about the task or event.
  • FlexString<number>Label—name of the attribute received from Kaspersky Security Center. For example, FlexString1Label=TaskName.
  • FlexString<number>—value of the FlexString<number>Label attribute. For example, FlexString1=Download updates.
  • DeviceCustomNumber<number>Label—name of the attribute related to the task state. For example, DeviceCustomNumber1Label=TaskOldState.
  • DeviceCustomNumber<number>—value related to the task state. For example, DeviceCustomNumber1=1 means the task is executing.
  • DeviceCustomString<number>Label—name of the attribute related to the detected vulnerability: for example, a virus name, affected application.
  • DeviceCustomString<number>—value related to the detected vulnerability. For example, the attribute-value pairs DeviceCustomString1Label=VirusName and DeviceCustomString1=EICAR-Test-File mean that the EICAR test virus was detected.
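The Label/value pairing described above can be sketched as follows. The helper name and the sample event are hypothetical, for illustration only:

```python
def ksc_flex_fields(event: dict) -> dict:
    # Pair each <name>Label attribute (for example, FlexString1Label)
    # with its matching value field (FlexString1), as in the examples above.
    out = {}
    for key, label in event.items():
        if key.endswith("Label"):
            value_key = key[:-len("Label")]
            if value_key in event:
                out[label] = event[value_key]
    return out

sample = {"FlexString1Label": "TaskName", "FlexString1": "Download updates"}
print(ksc_flex_fields(sample))  # {'TaskName': 'Download updates'}
```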
Page top

[Topic 222247]

Importing events from the Kaspersky Security Center database

In KUMA, you can receive events directly from the Kaspersky Security Center SQL database. Events are received by a collector that uses the provided [OOTB] KSC SQL connector and [OOTB] KSC from SQL normalizer resources.

To create a collector to receive Kaspersky Security Center events:

  1. Start the Collector Installation Wizard in one of the following ways:
    • In the KUMA web interface, in the Resources section, click Add event source.
    • In the KUMA web interface, in the Resources → Collectors section, click Add collector.
  2. At step 2 of the Installation Wizard, select the [OOTB] KSC SQL connector:
    • In the URL field, specify the server connection address in the following format:

      sqlserver://user:password@kscdb.example.com:1433/KAV

      where:

      • user—user account with public and db_datareader rights to the required database.
      • password—user account password.
      • kscdb.example.com:1433—address and port of the database server.
      • KAV—name of the database.
    • In the Query field, specify a database query based on the need to receive certain events.

      An example of a query to the Kaspersky Security Center SQL database

      SELECT ev.event_id AS externalId, ev.severity AS severity, ev.task_display_name AS taskDisplayName,
          ev.product_name AS product_name, ev.product_version AS product_version,
          ev.event_type AS deviceEventClassId, ev.event_type_display_name AS event_subcode, ev.descr AS msg,
          CASE
              WHEN ev.rise_time IS NOT NULL THEN DATEADD(hour, DATEDIFF(hour, GETUTCDATE(), GETDATE()), ev.rise_time)
              ELSE ev.rise_time
          END AS endTime,
          CASE
              WHEN ev.registration_time IS NOT NULL THEN DATEADD(hour, DATEDIFF(hour, GETUTCDATE(), GETDATE()), ev.registration_time)
              ELSE ev.registration_time
          END AS kscRegistrationTime,
          CAST(ev.par7 AS varchar(4000)) AS sourceUserName,
          hs.wstrWinName AS dHost,
          hs.wstrWinDomain AS strNtDom,
          serv.wstrWinName AS kscName,
          CAST(hs.nIp / 256 / 256 / 256 % 256 AS VARCHAR) + '.' +
              CAST(hs.nIp / 256 / 256 % 256 AS VARCHAR) + '.' +
              CAST(hs.nIp / 256 % 256 AS VARCHAR) + '.' +
              CAST(hs.nIp % 256 AS VARCHAR) AS sourceAddress,
          serv.wstrWinDomain AS kscNtDomain,
          CAST(serv.nIp / 256 / 256 / 256 % 256 AS VARCHAR) + '.' +
              CAST(serv.nIp / 256 / 256 % 256 AS VARCHAR) + '.' +
              CAST(serv.nIp / 256 % 256 AS VARCHAR) + '.' +
              CAST(serv.nIp % 256 AS VARCHAR) AS kscIP,
          CASE
              WHEN virus.tmVirusFoundTime IS NOT NULL THEN DATEADD(hour, DATEDIFF(hour, GETUTCDATE(), GETDATE()), virus.tmVirusFoundTime)
              ELSE ev.registration_time
          END AS virusTime,
          virus.wstrObject AS filePath,
          virus.wstrVirusName AS virusName,
          virus.result_ev AS result
      FROM KAV.dbo.ev_event AS ev
      LEFT JOIN KAV.dbo.v_akpub_host AS hs ON ev.nHostId = hs.nId
      INNER JOIN KAV.dbo.v_akpub_host AS serv ON serv.nId = 1
      LEFT JOIN KAV.dbo.rpt_viract_index AS virus ON ev.event_id = virus.nEventVirus
      WHERE registration_time >= DATEADD(minute, -191, GETDATE())

  3. At step 3 of the Installation Wizard, select the [OOTB] KSC from SQL normalizer.
  4. Specify other parameters in accordance with your collector requirements.

Upon completion of the Wizard, a collector service is created in the KUMA web interface. You can use this collector service to import events from the SQL database of Kaspersky Security Center.
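The CAST(nIp / 256 ... % 256) arithmetic in the example query converts the integer-encoded address stored in the database into dotted-quad form. The same conversion, as a hypothetical Python helper that mirrors the query's byte order (most significant byte first):

```python
def nip_to_ipv4(n_ip: int) -> str:
    # Mirror the query's arithmetic: extract each octet with integer
    # division and modulo 256, most significant byte first.
    return ".".join(str(n_ip // 256 ** i % 256) for i in range(3, -1, -1))

print(nip_to_ipv4(3232235777))  # 192.168.1.1
```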

Page top

[Topic 235592]

Kaspersky Endpoint Detection and Response integration

Kaspersky Endpoint Detection and Response (hereinafter also referred to as "KEDR") is a functional unit of Kaspersky Anti Targeted Attack Platform that protects assets in an enterprise LAN.

You can configure KUMA integration with Kaspersky Endpoint Detection and Response versions 4.0 and 4.1 to manage threat response actions on assets connected to Kaspersky Endpoint Detection and Response servers, and on Kaspersky Security Center assets. Commands to perform operations are received by the Kaspersky Endpoint Detection and Response server, which then relays those commands to the Kaspersky Endpoint Agent installed on assets.

You can also import events to KUMA and receive information about Kaspersky Endpoint Detection and Response alerts (for more details, see the Configuring integration with an SIEM system section of the Kaspersky Anti Targeted Attack Platform online help).

When KUMA is integrated with Kaspersky Endpoint Detection and Response, you can perform the following operations on Kaspersky Endpoint Detection and Response assets that have Kaspersky Endpoint Agent:

  • Manage network isolation of assets.
  • Manage prevention rules.
  • Start applications.

You can manage response actions only if you have a Kaspersky Symphony XDR license.

To get instructions on configuring integration for response action management, contact your account manager or Technical Support.

In this section

Importing events from Kaspersky Endpoint Detection and Response

Page top

[Topic 234627]

Importing events from Kaspersky Endpoint Detection and Response

When importing events from Kaspersky Endpoint Detection and Response, telemetry is transmitted in clear text and may be intercepted by an intruder.

Kaspersky Endpoint Detection and Response 4.0 raw events can be imported into KUMA with the help of a Kafka connector.

To import events, you will need to perform actions on the Kaspersky Endpoint Detection and Response side and on the KUMA side.

On the Kaspersky Endpoint Detection and Response side, perform the following actions:

  1. Use SSH or a terminal to log in to the management console of the Central Node server from which you want to export events.
  2. When prompted by the system, enter the administrator account name and the password that was set during installation of Kaspersky Endpoint Detection and Response.

    The program component administrator menu is displayed.

  3. In the program component administrator menu, select Technical Support Mode.
  4. Press Enter.

    The Technical Support Mode confirmation window opens.

  5. Confirm that you want to operate the application in Technical Support Mode. To do so, select Yes and press Enter.
  6. Run the sudo -i command.
  7. In the /etc/sysconfig/apt-services configuration file, in the KAFKA_PORTS field, delete the value 10000.

    If Secondary Central Node servers or the Sensor component installed on a separate server are connected to the Central Node server, you need to allow the connection with the server where you modified the configuration file via port 10000.

    It is strongly recommended not to use this port for any external connections other than KUMA. To restrict connections on port 10000 to KUMA only, run the command iptables -I INPUT -p tcp ! -s KUMA_IP_address --dport 10000 -j DROP.

  8. Run the command systemctl restart apt_ipsec.service.
  9. In the /usr/bin/apt-start-sedr-iptables configuration file, append the value 10000 to the WEB_PORTS field, separated by a comma with no space.
  10. Run sudo sh /usr/bin/apt-start-sedr-iptables.

Preparations for exporting events on the Kaspersky Endpoint Detection and Response side are now complete.

On the KUMA side, complete the following steps:

  1. On the KUMA server, add the IP address of the Central Node server in the format <IP address> centralnode to one of the following files:
    • %WINDIR%\System32\drivers\etc\hosts—for Windows.
    • /etc/hosts—for Linux.
  2. In the KUMA web interface, create a connector of the Kafka type.

    When creating the connector, specify <Central Node server IP address>:10000 in the URL field.

  3. In the KUMA web interface, create a collector.

    Use the connector created at the previous step as the transport for the collector.

If the collector is successfully created and installed, Kaspersky Endpoint Detection and Response events will be imported into KUMA. You can find and view these events in the events table.

Page top

[Topic 217924]

Integration with Kaspersky CyberTrace

Kaspersky CyberTrace (hereinafter CyberTrace) is a tool that integrates threat data feeds with SIEM solutions. It provides users with instant access to analytics data, helping them make informed security decisions.

You can integrate CyberTrace with KUMA in one of the following ways: integrate the CyberTrace indicator search function, or integrate the CyberTrace web interface into the KUMA web interface.

In this section

Integrating CyberTrace indicator search

Integrating CyberTrace interface

Page top

[Topic 217921]

Integrating CyberTrace indicator search

Integration of the CyberTrace indicator search function includes the following steps:

  1. Configuring CyberTrace to receive and process KUMA requests.

    You can configure the integration with KUMA immediately after installing CyberTrace in the Quick Start Wizard or later in the CyberTrace web interface.

  2. Creating an event enrichment rule in KUMA.

After completing all stages of integration, you need to restart the collector responsible for receiving events that you want to enrich with information from CyberTrace.

In this section

Configuring CyberTrace to receive and process requests

Creating event enrichment rules

Page top

[Topic 217768]

Configuring CyberTrace to receive and process requests

You can configure CyberTrace to receive and process requests from KUMA immediately after its installation in the Quick Start Wizard or later in the program web interface.

To configure CyberTrace to receive and process requests in the Quick Start Wizard:

  1. Wait for the CyberTrace Quick Start Wizard to start after the program is installed.

    The Welcome to Kaspersky CyberTrace window opens.

  2. In the <select SIEM> drop-down list, select the type of SIEM system from which you want to receive data and click the Next button.

    The Connection Settings window opens.

  3. Do the following:
    1. In the Service listens on settings block, select the IP and port option.
    2. In the IP address field, enter 0.0.0.0.
    3. In the Port field, enter 9999.
    4. In the IP address or hostname field below, specify 127.0.0.1.

      Leave the default values for everything else.

    5. Click Next.

    The Proxy Settings window opens.

  4. If a proxy server is being used in your organization, define the settings for connecting to it. If not, leave all the fields blank and click Next.

    The Licensing Settings window opens.

  5. In the Kaspersky CyberTrace license key field, add a license key for CyberTrace.
  6. In the Kaspersky Threat Data Feeds certificate field, add a certificate that allows you to download updated data feeds from servers, and click Next.

CyberTrace will be configured.
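Neither product confirms reachability of the configured service on its own. As a quick sanity check (an illustrative sketch, not part of either product), you can verify that CyberTrace is accepting connections on the address and port configured above:

```python
import socket

# Check whether a TCP service is accepting connections on host:port.
# The port below matches the wizard value (0.0.0.0 means the service
# listens on all interfaces; test against a concrete address).
def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: port_open("127.0.0.1", 9999) returns True if CyberTrace is listening.
```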

To configure CyberTrace to receive and process requests in the program web interface:

  1. In the CyberTrace web interface window, select Settings → Service.
  2. In the Connection Settings block:
    1. Select the IP and port option.
    2. In the IP address field, enter 0.0.0.0.
    3. In the Port field, enter 9999.
  3. In the Web interface settings block, in the IP address or hostname field, enter 127.0.0.1.
  4. In the upper toolbar, click Restart Feed Service.
  5. Select Settings → Events format.
  6. In the Alert events format field, enter %Date% alert=%Alert%%RecordContext%.
  7. In the Detection events format field, enter Category=%Category%|MatchedIndicator=%MatchedIndicator%%RecordContext%.
  8. In the Records context format field, enter |%ParamName%=%ParamValue%.
  9. In the Actionable fields context format field, enter %ParamName%:%ParamValue%.

CyberTrace will be configured.

After updating the CyberTrace configuration, you must restart the CyberTrace server.
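As an illustration of how the formats above are assembled (the field values here are made up; this is not CyberTrace code), a detection event rendered from the Detection events format and Records context format templates looks like this:

```python
# Render a CyberTrace-style event line by substituting %Name% placeholders.
def render(template: str, values: dict) -> str:
    for name, value in values.items():
        template = template.replace(f"%{name}%", value)
    return template

# Record context: one |%ParamName%=%ParamValue% fragment per feed parameter.
record_context = "".join(
    render("|%ParamName%=%ParamValue%", {"ParamName": k, "ParamValue": v})
    for k, v in {"confidence": "100", "threat": "Malware"}.items()
)
detection = render(
    "Category=%Category%|MatchedIndicator=%MatchedIndicator%%RecordContext%",
    {
        "Category": "KUMA_Lookup",
        "MatchedIndicator": "example.bad.domain",
        "RecordContext": record_context,
    },
)
print(detection)
# Category=KUMA_Lookup|MatchedIndicator=example.bad.domain|confidence=100|threat=Malware
```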

Page top

[Topic 217808]

Creating event enrichment rules

To create event enrichment rules:

  1. In the KUMA web interface, open Resources → Enrichment rules. In the left part of the window, select or create a folder for the new resource.

    The list of available enrichment rules will be displayed.

  2. Click the Add enrichment rule button to create a new resource.

    The enrichment rule window will be displayed.

  3. Enter the rule configuration parameters:
    1. In the Name field, enter a unique name for this type of resource. The name must contain from 1 to 128 Unicode characters.
    2. In the Tenant drop-down list, select the tenant that will own this resource.
    3. In the Source kind drop-down list, select cybertrace.
    4. Specify the URL of the CyberTrace server to which you want to connect. For example, example.domain.com:9999.
    5. If necessary, use the Number of connections field to specify the maximum number of connections to the CyberTrace server that can be simultaneously established by KUMA. By default, this value is equal to the number of vCPUs of the KUMA Core server.
    6. In the RPS field, enter the number of requests to the CyberTrace server per second that KUMA can make. The default value is 1000.
    7. In the Timeout field, specify the maximum number of seconds KUMA should wait for a response from the CyberTrace server. Until a response is received or the time expires, the event is not sent to the Correlator. If a response is received before the timeout, it is added to the TI field of the event and the event processing continues. The default value is 30.
    8. In the Mapping settings block, you must specify the fields of events to be checked via CyberTrace, and define the rules for mapping fields of KUMA events to CyberTrace indicator types:
      • In the KUMA field column, select the field whose value must be sent to CyberTrace.
      • In the CyberTrace indicator column, select the CyberTrace indicator type for every field you selected:
        • ip
        • url
        • hash

      You must add at least one row to the table. You can use the Add row button to add a row, and the cross button to remove one.

    9. Use the Debug drop-down list to indicate whether or not to enable logging of service operations. Logging is disabled by default.
    10. If necessary, in the Description field, add up to 256 Unicode characters describing the resource.
    11. In the Filter section, you can specify conditions to identify events that will be processed by the enrichment rule resource. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

      Creating a filter in resources

      1. In the Filter drop-down list, select Create new.
      2. If you want to keep the filter as a separate resource, select the Save filter check box.

        In this case, you will be able to use the created filter in various services.

        This check box is cleared by default.

      3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
      4. In the Conditions settings block, specify the conditions that the events must meet:
        1. Click the Add condition button.
        2. In the Left operand and Right operand drop-down lists, specify the search parameters.

          Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

        3. In the operator drop-down list, select the relevant operator.

          Filter operators

          • =—the left operand equals the right operand.
          • <—the left operand is less than the right operand.
          • <=—the left operand is less than or equal to the right operand.
          • >—the left operand is greater than the right operand.
          • >=—the left operand is greater than or equal to the right operand.
          • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
          • contains—the left operand contains values of the right operand.
          • startsWith—the left operand starts with one of the values of the right operand.
          • endsWith—the left operand ends with one of the values of the right operand.
          • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
          • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
          • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
          • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
          • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
          • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
          • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
          • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
        4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

          The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

          This check box is cleared by default.

        5. If you want to add a negative condition, select If not from the If drop-down list.
        6. You can add multiple conditions or a group of conditions.
      5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
      6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

        You can view the nested filter settings by clicking the edit-grey button.

  4. Click Save.

A new enrichment rule will be created.

CyberTrace indicator search integration is now configured. You can now add the created enrichment rule to a collector. You must restart KUMA collectors to apply the new settings.
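To make the comparison semantics of the filter operators concrete, here is an illustrative sketch of a few of them in plain Python (this is not KUMA code, and the hasBit interpretation, requiring every listed bit to be set, is an assumption):

```python
import ipaddress

def in_subnet(left: str, right: str) -> bool:
    """inSubnet: the left operand (an IP address) lies inside the right operand (a subnet)."""
    return ipaddress.ip_address(left) in ipaddress.ip_network(right)

def starts_with(left: str, rights: list) -> bool:
    """startsWith: the left operand starts with one of the right operand's values."""
    return any(left.startswith(r) for r in rights)

def has_bit(left: int, positions: list) -> bool:
    """hasBit: every bit position listed in the right operand is set in the
    left operand (assumed interpretation)."""
    return all(left & (1 << p) for p in positions)

assert in_subnet("10.0.1.5", "10.0.0.0/16")
assert starts_with("https://example.com", ["http://", "https://"])
assert has_bit(0b1010, [1, 3])  # bits 1 and 3 are set in 0b1010
```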

If any CyberTrace field in the event details area contains "[{" or "}]" values, information from the CyberTrace data feed was processed incorrectly, and some of the data may not be displayed. To retrieve all data feed information, copy the value of the event's TI indicator field from KUMA and search for it in the Indicators section of CyberTrace. All relevant information will be displayed in the Indicator context section of CyberTrace.

Page top

[Topic 217922]

Integrating CyberTrace interface

You can integrate the CyberTrace web interface into the KUMA web interface. When this integration is enabled, the KUMA web interface will show a CyberTrace section that provides access to the CyberTrace web interface. Integration is configured under Settings → Kaspersky CyberTrace in the KUMA web interface.

To integrate the CyberTrace web interface in KUMA:

  1. In the KUMA web interface, open Resources → Secrets.

    The list of available secrets will be displayed.

  2. Click the Add secret button to create a new secret. This resource is used to store credentials of the CyberTrace server.

    The secret window is displayed.

  3. Enter information about the secret:
    1. In the Name field, choose a name for the added secret. The name must contain from 1 to 128 Unicode characters.
    2. In the Tenant drop-down list, select the tenant that will own this resource.
    3. In the Type drop-down list, select credentials.
    4. In the User and Password fields, enter credentials for your CyberTrace server.
    5. If necessary, in the Description field, add up to 256 Unicode characters describing the resource.
  4. Click Save.

    The CyberTrace server credentials are now saved and can be used in other KUMA resources.

  5. In the KUMA web interface, open Settings → Kaspersky CyberTrace.

    The window with CyberTrace integration parameters opens.

  6. Make the necessary changes to the following parameters:
    • Disabled—clear this check box if you want to integrate the CyberTrace web interface into the KUMA web interface.
    • Host (required)—enter the hostname or IP address of the CyberTrace server.
    • Port (required)—enter the port of the CyberTrace server.
  7. In the Secret drop-down list select the Secret resource you created before.
  8. Click Save.

CyberTrace is now integrated with KUMA, and the CyberTrace section is displayed in the KUMA web interface.

If you are using the Mozilla Firefox browser to work with the program web interface, the CyberTrace section may fail to display data. If this is the case, clear the browser cache and configure the display of data (see below).

To configure data to be displayed in the CyberTrace section:

  1. In the browser's address bar, enter the FQDN of the KUMA web interface with port number 7222 as follows: https://kuma.example.com:7222. It is not recommended to specify an IP address as the server address.

    A window will open to warn you of a potential security threat.

  2. Click the Details button.
  3. In the lower part of the window, click the Accept risk and continue button.

    An exclusion will be created for the URL of the KUMA web interface.

  4. In the browser's address bar, enter the URL of the KUMA web interface with port number 7220.
  5. Go to the CyberTrace section.

Data will be displayed in this section.

Updating CyberTrace deny list (Internal TI)

When the CyberTrace web interface is integrated into the KUMA web interface, you can update the CyberTrace denylist or Internal TI with information from KUMA events.

To update CyberTrace Internal TI:

  1. Open the event details area from the events table, Alert window, or correlation event window and click the link on a domain, web address, IP address, or file hash.

    The context menu opens.

  2. Select Add to Internal TI of CyberTrace.

The selected object is now added to the CyberTrace denylist.

Page top

[Topic 217925]

Integration with Kaspersky Threat Intelligence Portal

The Kaspersky Threat Intelligence Portal combines all of Kaspersky's knowledge about cyberthreats and how they are related into a single, powerful web service. When integrated with KUMA, it helps KUMA users make faster and better-informed decisions by providing them with data about URLs, domains, IP addresses, and WHOIS/DNS records.

Access to the Kaspersky Threat Intelligence Portal is provided for a fee. License certificates are created by Kaspersky experts. To obtain a certificate for Kaspersky Threat Intelligence Portal, contact your Technical Account Manager.

In this Help topic

Initializing integration

Requesting information from Kaspersky Threat Intelligence Portal

Viewing information from Kaspersky Threat Intelligence Portal

Updating information from Kaspersky Threat Intelligence Portal

Page top

[Topic 217900]

Initializing integration

To integrate Kaspersky Threat Intelligence Portal into KUMA:

  1. In the KUMA web interface, open Resources → Secrets.

    The list of available secrets will be displayed.

  2. Click the Add secret button to create a new secret. This resource is used to store credentials of your Kaspersky Threat Intelligence Portal account.

    The secret window is displayed.

  3. Enter information about the secret:
    1. In the Name field, choose a name for the added secret.
    2. In the Tenant drop-down list, select the tenant that will own the created resource.
    3. In the Type drop-down list, select ktl.
    4. In the User and Password fields, enter credentials for your Kaspersky Threat Intelligence Portal account.
    5. If you want, enter a Description of the secret.
  4. Upload your Kaspersky Threat Intelligence Portal certificate key:
    1. Click the Upload PFX button and select the PFX file with your certificate.

      The name of the selected file appears to the right of the Upload PFX button.

    2. Enter the password to the PFX file in the PFX password field.
  5. Click Save.

    The Kaspersky Threat Intelligence Portal account credentials are now saved and can be used in other KUMA resources.

  6. In the Settings section of the KUMA web interface, open the Kaspersky Threat Lookup tab.

    The list of available connections will be displayed.

  7. Make sure the Disabled check box is cleared.
  8. In the Secret drop-down list select the Secret resource you created before.

    You can create a new secret by clicking the button with the plus sign. The created secret will be saved in the Resources → Secrets section.

  9. If required, select the Proxy resource in the Proxy drop-down list.
  10. Click Save.

The integration process of Kaspersky Threat Intelligence Portal with KUMA is completed.

Once Kaspersky Threat Intelligence Portal and KUMA are integrated, you can request additional information from the event details area about hosts, domains, URLs, IP addresses, and file hashes (MD5, SHA1, SHA256).

Page top

[Topic 217967]

Requesting information from Kaspersky Threat Intelligence Portal

To request information from Kaspersky Threat Intelligence Portal:

  1. Open the event details area from the events table, alert window, or correlation event window and click the link on a domain, web address, IP address, or file hash.

    The Threat Lookup enrichment area opens in the right part of the screen.

  2. Select check boxes next to the data types you want to request.

    If no check boxes are selected, all information types are requested.

  3. In the Maximum number of records in each data group field, enter the number of entries per selected information type you want to receive. The default value is 10.
  4. Click Request.

A ktl task is created. When it is completed, events are enriched with data from Kaspersky Threat Intelligence Portal, which can be viewed from the events table, alert window, or correlation event window.

Page top

[Topic 218041]

Viewing information from Kaspersky Threat Intelligence Portal

To view information from Kaspersky Threat Intelligence Portal:

Open the event details area from the events table, alert window, or correlation event window and click the link on a domain, web address, IP address, or file hash for which you previously requested information from Kaspersky Threat Intelligence Portal.

The event details area opens in the right part of the screen with data from Kaspersky Threat Intelligence Portal; the time when it was received is indicated at the bottom of the screen.

Information received from Kaspersky Threat Intelligence Portal is cached. If you click a domain, web address, IP address, or file hash in the event details pane for which KUMA already has information available, the cached data from Kaspersky Threat Intelligence Portal opens instead of the Threat Lookup enrichment window, with the time it was received indicated at the bottom. You can update the data.

Page top

[Topic 218026]

Updating information from Kaspersky Threat Intelligence Portal

To update information received from Kaspersky Threat Intelligence Portal:

  1. Open the event details area from the events table, alert window, or correlation event window and click the link on a domain, web address, IP address, or file hash for which you previously requested information from Kaspersky Threat Intelligence Portal.
  2. Click Update in the event details area containing the data received from the Kaspersky Threat Intelligence Portal.

    The Threat Lookup enrichment area opens in the right part of the screen.

  3. Select the check boxes next to the types of information you want to request.

    If no check boxes are selected, all information types are requested.

  4. In the Maximum number of records in each data group field, enter the number of entries per selected information type you want to receive. The default value is 10.
  5. Click Update.

    The ktl task is created, and the new data is requested from Kaspersky Threat Intelligence Portal.

  6. Close the Threat Lookup enrichment window and the details area with KTL information.
  7. Open the event details area from the events table, Alert window or correlation event window and click the link on a domain, URL, IP address, or file hash for which you updated Kaspersky Threat Intelligence Portal information and select Show info from Threat Lookup.

The event details area opens on the right with data from Kaspersky Threat Intelligence Portal, indicating the time when it was received on the bottom of the screen.

Page top

[Topic 217928]

Integration with R-Vision Incident Response Platform

R-Vision Incident Response Platform (hereinafter referred to as R-Vision IRP) is a software platform used for automation of monitoring, processing, and responding to information security incidents. It aggregates cyberthreat data from various sources into a single database for further analysis and investigation to facilitate incident response capabilities.

R-Vision IRP can be integrated with KUMA. When this integration is enabled, the creation of a KUMA alert triggers the creation of an incident in R-Vision IRP. A KUMA alert and its R-Vision IRP incident are interdependent. When the status of an incident in R-Vision IRP is updated, the status of the corresponding KUMA alert is also changed.

Integration of R-Vision IRP and KUMA is configured in both applications. In KUMA, integration settings are available only to general administrators.

Mapping of KUMA alert fields to R-Vision IRP incident fields when transferring data via API (KUMA alert field → R-Vision IRP incident field):

  • FirstSeen → detection
  • priority → level
  • correlationRuleName → description
  • events (as a JSON file) → files

In this section

Configuring integration in KUMA

Configuring integration in R-Vision IRP

Managing alerts using R-Vision IRP

Page top

[Topic 224436]

Configuring integration in KUMA

This section describes integration of KUMA with R-Vision IRP from the KUMA side.

Integration in KUMA is configured in the web interface under Settings → IRP / SOAR.

To configure integration with R-Vision IRP:

  1. In the KUMA web interface, open Resources → Secrets.

    The list of available secrets will be displayed.

  2. Click the Add secret button to create a new secret. This resource is used to store the token for R-Vision IRP API requests.

    The secret window is displayed.

  3. Enter information about the secret:
    1. In the Name field, enter a name for the added secret. The name must contain from 1 to 128 Unicode characters.
    2. In the Tenant drop-down list, select the tenant that will own the created resource.
    3. In the Type drop-down list, select token.
    4. In the Token field, enter your R-Vision IRP API token.

      You can obtain the token in the R-Vision IRP web interface under Settings → General → API.

    5. If required, add the secret description in the Description field. The description must contain from 1 to 256 Unicode characters.
  4. Click Save.

    The R-Vision IRP API token is now saved and can be used in other KUMA resources.

  5. In the KUMA web interface, go to Settings → IRP / SOAR.

    The window containing R-Vision IRP integration settings opens.

  6. Make the necessary changes to the following parameters:
    • Disabled—select this check box if you want to disable R-Vision IRP integration with KUMA.
    • In the Secret drop-down list, select the previously created Secret resource.

      You can create a new secret by clicking the button with the plus sign. The created secret will be saved in the Resources → Secrets section.

    • URL (required)—URL of the R-Vision IRP server host.
    • Field name where KUMA alert IDs must be placed (required)—name of the R-Vision IRP field where the ID of the KUMA alert must be written.
    • Field name where KUMA alert URLs must be placed (required)—name of the R-Vision IRP field where the link for accessing the KUMA alert should be written.
    • Category (required)—category of the R-Vision IRP incident that is created when a KUMA alert is received.
    • KUMA event fields that must be sent to IRP / SOAR (required)—drop-down list for selecting the KUMA event fields that should be sent to R-Vision IRP.
    • Severity group of settings (required)—used to map KUMA severity values to R-Vision IRP severity values.
  7. Click Save.

Integration with R-Vision IRP is now configured in KUMA. If integration is also configured in R-Vision IRP, when alerts appear in KUMA, information about those alerts is sent to R-Vision IRP to create an incident. The Details on alert section in the KUMA web interface displays a link to R-Vision IRP.

If you are working with multiple tenants and want to integrate with R-Vision IRP, the names of tenants must match the abbreviated names of companies in R-Vision IRP.

Page top

[Topic 224437]

Configuring integration in R-Vision IRP

This section describes KUMA integration with R-Vision IRP from the R-Vision IRP side.

Integration in R-Vision IRP is configured in the Settings section of the R-Vision IRP web interface. For details on configuring R-Vision IRP, please refer to the documentation on this application.

Configuring integration with KUMA consists of the steps described in this section.

After integration is configured in R-Vision IRP (and also in KUMA), when alerts appear in KUMA, information about those alerts is sent to R-Vision IRP to create an incident. The Details on alert section in the KUMA web interface displays a link to R-Vision IRP.

In this section

Adding the ALERT_ID and ALERT_URL incident fields

Creating R-Vision IRP collector

Creating connector in R-Vision IRP

Creating rule for closing KUMA alert when R-Vision IRP incident is closed

Page top

[Topic 225573]

Adding the ALERT_ID and ALERT_URL incident fields

To add the ALERT_ID incident field in R-Vision IRP:

  1. In the R-Vision IRP web interface, under Settings → Incident management → Incident fields, select the No group fields group.
  2. Click the plus icon in the right part of the screen.

    The right part of the screen will display the settings area for the incident field you are creating.

  3. In the Title field, enter the name of the field (for example: Alert ID).
  4. In the Type drop-down list, select Text field.
  5. In the Parsing Tag field, enter ALERT_ID.

The ALERT_ID field is added to the R-Vision IRP incident.

ALERT_ID field in R-Vision IRP version 4.0


ALERT_ID field in R-Vision IRP version 5.0


To add the ALERT_URL incident field in R-Vision IRP:

  1. In the R-Vision IRP web interface, under Settings → Incident management → Incident fields, select the No group fields group.
  2. Click the plus icon in the right part of the screen.

    The right part of the screen will display the settings area for the incident field you are creating.

  3. In the Title field, enter the name of the field (for example: Alert URL).
  4. In the Type drop-down list, select Text field.
  5. In the Parsing Tag field, enter ALERT_URL.
  6. Select the Display links and Display URL as links check boxes.

The ALERT_URL field is added to the R-Vision IRP incident.

ALERT_URL field in R-Vision IRP version 4.0


ALERT_URL field in R-Vision IRP version 5.0


If necessary, you can likewise configure the display of other data from a KUMA alert in an R-Vision IRP incident.

Page top

[Topic 225575]

Creating R-Vision IRP collector

To create an R-Vision IRP collector:

  1. In the R-Vision IRP web interface, under Settings → Common → Collectors, click the plus icon.
  2. Specify the collector name in the Name field (for example, Main collector).
  3. In the Collector address field, enter the IP address or hostname of the server where R-Vision IRP is installed (example: 127.0.0.1).
  4. In the Port field, type 3001.
  5. Click Add.
  6. On the Organizations tab, select the organization for which you want to add integration with KUMA and select the Default collector and Response collector check boxes.

The R-Vision IRP collector is created.

Page top

[Topic 225576]

Creating connector in R-Vision IRP

To create a connector in R-Vision IRP:

  1. In the R-Vision IRP web interface, under Settings → Incident management → Connectors, click the plus icon.
  2. In the Type drop-down list, select REST.
  3. In the Name field, specify the connector name, such as KUMA.
  4. In the URL field, type the API request for closing an alert in the format <KUMA Core server FQDN>:<port used for API requests (7223 by default)>/api/v1/alerts/close.

    Example: https://kuma-example.com:7223/api/v1/alerts/close

  5. In the Authorization type drop-down list, select Token.
  6. In the Auth header field, type Authorization.
  7. In the Auth value field, enter the token of a KUMA user with the general administrator role in the following format:

    Bearer <KUMA General administrator token>

  8. In the Collector drop-down list, select the previously created collector.
  9. Click Save.

The connector has been created.

Connector in R-Vision IRP version 4.0


Connector in R-Vision IRP version 5.0


After the connector is created, you must configure the sending of API queries for closing alerts in KUMA.

To configure API queries in R-Vision IRP:

  1. In the R-Vision IRP web interface, under Settings → Incident management → Connectors, open the newly created connector for editing.
  2. In the request type drop-down list, select POST.
  3. In the Params field, type the API request for closing an alert in the format <KUMA Core server FQDN>:<port used for API requests (7223 by default)>/api/v1/alerts/close.

    Example: https://kuma-example.com:7223/api/v1/alerts/close

  4. On the HEADERS tab add the following keys and values:
    • Key Content-Type; value: application/json.
    • Key Authorization; value: Bearer <KUMA general administrator token>.

      The token of the KUMA general administrator can be obtained in the KUMA web interface under SettingsUsers.

  5. On the BODY → Raw tab, type the contents of the API request body:

    {
        "id": "{{tag.ALERT_ID}}",
        "reason": "<Reason for closing the alert. Available values: Incorrect Correlation Rule, Incorrect Data, Responded.>"
    }

  6. Click Save.

The connector is configured.
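For reference, the same close-alert request that the connector sends can be reproduced from a script, for example in Python (the server FQDN, alert ID, and token below are placeholders; TLS certificate handling depends on your deployment):

```python
import json
import urllib.request

# Build the POST request that the R-Vision IRP connector issues to KUMA.
def build_close_request(alert_id: str, reason: str, token: str) -> urllib.request.Request:
    body = json.dumps({"id": alert_id, "reason": reason}).encode()
    return urllib.request.Request(
        "https://kuma-example.com:7223/api/v1/alerts/close",  # placeholder FQDN
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

req = build_close_request("<alert ID>", "Responded", "<KUMA general administrator token>")
# urllib.request.urlopen(req)  # uncomment to actually send the request
```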

Connector in R-Vision IRP version 4.0

API request header


API request body


Connector in R-Vision IRP version 5.0


Page top

[Topic 225579]

Creating rule for closing KUMA alert when R-Vision IRP incident is closed

To create a rule for sending a KUMA alert closing request when an R-Vision IRP incident is closed:

  1. In the R-Vision IRP web interface, under Settings → Incident management → Response playbooks, click the plus icon.
  2. In the Name field, type the name of the rule, for example, Close alert.
  3. In the Group drop-down list select All playbooks.
  4. In the Autostart criteria settings block, click Add and enter the conditions for triggering the rule in the opened window:
    1. In the Type drop-down list, select Field value.
    2. In the Field drop-down list, select Incident status.
    3. Select the Closed status.
    4. Click Add.

    Rule trigger conditions are added. The rule will trigger when an incident is closed.

  5. In the Incident Response Actions settings block, click Add → Run connector. In the opened window, select the connector that should be run when the rule is triggered:
    1. In the Connector drop-down list, select the previously created connector.
    2. Click Add.

    The connector is added to the rule.

  6. Click Add.

A rule for sending a KUMA alert closing request when an R-Vision IRP incident is closed is created.

R-Vision IRP version 4.0 playbook rule.

R-Vision IRP version 5.0 playbook rule.

Page top

[Topic 224487]

Managing alerts using R-Vision IRP

After integration of KUMA and R-Vision IRP is configured, data on KUMA alerts is received in R-Vision IRP. Any change to alert settings in KUMA is reflected in R-Vision IRP. Any change in the statuses of alerts in KUMA or R-Vision IRP (except closing an alert) is also reflected in the other system.

Alert management scenarios when KUMA and R-Vision IRP are integrated:

  • Forward cyberthreat data from KUMA to R-Vision IRP

    Data on detected alerts is automatically forwarded from KUMA to R-Vision IRP. An incident is also created in R-Vision IRP.

    The following information about a KUMA alert is forwarded to R-Vision IRP:

    • ID.
    • Name.
    • Status.
    • Date of the first event related to the alert.
    • Date of the last detection related to the alert.
    • User account name or email address of the security officer assigned to process the alert.
    • Alert severity.
    • Category of the R-Vision IRP incident corresponding to the KUMA alert.
    • Hierarchical list of events related to the alert.
    • List of alert-related assets (internal and external).
    • List of users related to the alert.
    • Alert change log.
    • Link to the alert in KUMA.
  • Investigate cyberthreats in KUMA

    Initial processing of an alert is performed in KUMA. The security officer can update and change any parameters of an alert except its ID and name. Any implemented changes are reflected in the R-Vision IRP incident card.

    If a cyberthreat turns out to be a false positive and its alert is closed in KUMA, its corresponding incident in R-Vision IRP is also automatically closed.

  • Close incident in R-Vision IRP

    After all necessary work is completed on an incident and the course of the investigation is recorded in R-Vision IRP, the incident is closed. The corresponding KUMA alert is also automatically closed.

  • Open a previously closed incident

    If active monitoring detects that an incident was not completely resolved or if additional information is detected, this incident is re-opened in R-Vision IRP. However, the alert remains closed in KUMA.

    The security officer can use a link to navigate from an R-Vision IRP incident to the corresponding alert in KUMA and make the necessary changes to any of its parameters except the ID, name, and status of the alert. Any implemented changes are reflected in the R-Vision IRP incident card.

    Further analysis is performed in R-Vision IRP. When the investigation is complete and the incident is closed again in R-Vision IRP, the status of the corresponding alert in KUMA remains closed.

  • Request additional data from the source system as part of the response playbook or manually

    If additional information is required from KUMA when analyzing incidents in R-Vision IRP, you can send a search request to KUMA (for example, to request telemetry data, reputation, or host information). This request is sent via the KUMA REST API, and the response is recorded in the R-Vision IRP incident card for further analysis and report generation.

    This same sequence of actions is performed during automatic processing if it is not possible to immediately save all information on an incident during an import.
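Such a request is an authenticated call to the KUMA REST API, using the same Bearer-token authorization scheme shown for the alert-closing connector. The sketch below only illustrates how such a request would be constructed; the endpoint path, FQDN, and token are hypothetical placeholders (consult the KUMA REST API reference for the actual search endpoints):

```python
import urllib.request

# Hypothetical values, for illustration only.
kuma_core_fqdn = "kuma-example.com"
token = "kuma-api-token"

# Build (but do not send) an authenticated GET request.
# "/api/v1/events" is a hypothetical path standing in for a real
# KUMA REST API search endpoint.
req = urllib.request.Request(
    url=f"https://{kuma_core_fqdn}:7223/api/v1/events",
    headers={"Authorization": f"Bearer {token}"},
    method="GET",
)

# urllib.request.urlopen(req) would send the request; the response
# would then be recorded in the R-Vision IRP incident card.
```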

Page top

[Topic 217926]

Integration with Active Directory

You can integrate KUMA with the Active Directory services that are being used in your organization.

You can configure a connection to the Active Directory catalog service over the LDAP protocol. This lets you use information from Active Directory in correlation rules for enrichment of events and alerts, and for analytics.

If you configure a connection to a domain controller server, you can use domain authorization. In this case, you will be able to bind groups of users from Active Directory to KUMA role filters. The users belonging to these groups will be able to use their domain account credentials to log in to the KUMA web interface and will obtain access to application sections based on their assigned role.

If you want to provide groups of users with the capability to complete authorization in the KUMA web interface using their domain accounts, it is recommended to create these groups in Active Directory in advance. An email address must be indicated in the properties of a user account in Active Directory.

In this section

Connecting over LDAP

Authorization with domain accounts

Page top

[Topic 221426]

Connecting over LDAP

LDAP connections are created and managed under Settings → LDAP server in the KUMA web interface. The LDAP server integration by tenant section shows the tenants for which LDAP connections were created. Tenants can be added to or removed from this list.

If you select a tenant, the LDAP server integration window opens to show a table containing existing LDAP connections. Connections can be created or edited. In this window, you can change the frequency of queries sent to LDAP servers and set the retention period for obsolete data.

After integration is enabled, information about Active Directory accounts becomes available in the alert window, the correlation events detailed view window, and the incidents window. If you click an account name in the Related users section of the window, the Account details window opens with the data imported from Active Directory.

Data from LDAP can also be used when enriching events in collectors and in analytics.

Imported Active Directory attributes

The following account attributes can be requested from Active Directory:

  • AccountExpires
  • BadPasswordTime
  • cn
  • co
  • company
  • department
  • description
  • displayName (this attribute can be used for search during correlation)
  • distinguishedName (this attribute can be used for search during correlation)
  • division
  • employeeID
  • givenName
  • l
  • lastLogon
  • lastLogonTimestamp
  • mail (this attribute can be used for search during correlation)
  • mailNickname
  • managedObjects
  • manager
  • memberOf (this attribute can be used for search during correlation)
  • mobile
  • name
  • objectCategory
  • objectGUID (this attribute is always requested from Active Directory, even if the user does not specify it)
  • ObjectSID
  • PhysicalDeliveryOfficeName
  • pwdLastSet
  • sAMAccountName (this attribute can be used for search during correlation)
  • SAMAccountType
  • sn (this attribute can be used for search during correlation)
  • StreetAddress
  • TelephoneNumber
  • title
  • userAccountControl (this attribute can be used for search during correlation)
  • userPrincipalName (this attribute can be used for search during correlation)
  • WhenChanged
  • WhenCreated
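Of the attributes above, only those explicitly flagged can be used for search during correlation. For quick reference, they can be collected into a set; this is just an illustrative summary of the list, not an API:

```python
# LDAP attributes that, per the list above, can be used for search
# during correlation.
CORRELATION_SEARCHABLE = {
    "displayName",
    "distinguishedName",
    "mail",
    "memberOf",
    "sAMAccountName",
    "sn",
    "userAccountControl",
    "userPrincipalName",
}

def is_searchable(attribute: str) -> bool:
    """Return True if the attribute can be used for correlation search."""
    return attribute in CORRELATION_SEARCHABLE
```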

In this section

Enabling and disabling LDAP integration

Adding a tenant to the LDAP server integration list

Creating an LDAP server connection

Creating a copy of an LDAP server connection

Changing an LDAP server connection

Changing the data update frequency

Changing the data storage period

Starting account data update tasks

Deleting an LDAP server connection

Page top

[Topic 221481]

Enabling and disabling LDAP integration

You can enable or disable all LDAP connections of the tenant at the same time, or enable and disable an LDAP connection individually.

To enable or disable all LDAP connections of a tenant:

  1. In the KUMA web interface, open Settings → LDAP server and select the tenant for which you want to enable or disable all LDAP connections.

    The LDAP server integration by tenant window opens.

  2. Select or clear the Disabled check box.
  3. Click Save.

To enable or disable a specific LDAP connection:

  1. In the KUMA web interface, open Settings → LDAP server and select the tenant for which you want to enable or disable an LDAP connection.

    The LDAP server integration window opens.

  2. Select the relevant connection and either select or clear the Disabled check box in the opened window.
  3. Click Save.
Page top

[Topic 233077]

Adding a tenant to the LDAP server integration list

To add a tenant to the list of tenants for integration with an LDAP server:

  1. Open the KUMA web interface and select Settings → LDAP server.

    The LDAP server integration by tenant window opens.

  2. Click the Add tenant button.

    The LDAP server integration window is displayed.

  3. In the Tenant drop-down list, select the tenant that you need to add.
  4. Click Save.

The selected tenant is added to the LDAP server integration list.

To delete a tenant from the list of tenants for integration with an LDAP server:

  1. Open the KUMA web interface and select Settings → LDAP server.

    The LDAP server integration by tenant window is displayed.

  2. Select the check box next to the tenant that you need to delete, and click Delete.
  3. Confirm deletion of the tenant.

The selected tenant is deleted from the LDAP server integration list.

Page top

[Topic 217795]

Creating an LDAP server connection

To create a new LDAP connection to Active Directory:

  1. In the KUMA web interface, open Settings → LDAP server.
  2. Select or create a tenant for which you want to create an LDAP connection.

    The LDAP server integration by tenant window opens.

  3. Click the Add connection button.

    The Connection parameters window opens.

  4. Add a secret containing the account credentials for connecting to the Active Directory server. To do so:
    1. If you previously added a secret, use the Secret drop-down list to select the existing secret resource (with the credentials type).

      The selected secret can be changed by clicking on the EditResource button.

    2. If you want to create a new secret, click the AddResource button.

      The Secret window opens.

    3. In the Name (required) field, enter the name of the resource. This name can contain from 1 to 128 Unicode characters.
    4. In the User and Password (required) fields, enter the account credentials for connecting to the Active Directory server.

      You can enter the user name in one of the following formats: <user name>@<domain> or <domain>\<user name>.

    5. In the Description field, you can enter up to 256 Unicode characters to describe the resource.
    6. Click the Save button.
  5. In the Name (required) field, enter the unique name of the LDAP connection.

    Must contain from 1 to 128 Unicode characters.

  6. In the URL (required) field, enter the address of the domain controller in the format <hostname or IP address of server>:<port>.

    In case of server availability issues, you can specify multiple servers with domain controllers by separating them with commas. All of the specified servers must reside in the same domain.

  7. If you want to use TLS encryption for the connection with the domain controller, select one of the following options from the Type drop-down list:
    • startTLS.

      When the startTLS method is used, first it establishes an unencrypted connection over port 389, then it sends an encryption request. If the STARTTLS command ends with an error, the connection is terminated.

      Make sure that port 389 is open. Otherwise, a connection with the domain controller will be impossible.

    • ssl.

      When using SSL, an encrypted connection is immediately established over port 636.

    • insecure.

    When using an encrypted connection, it is impossible to specify an IP address as a URL.

  8. If you enabled TLS encryption at the previous step, add a TLS certificate. You must use the certificate of the certification authority that signed the LDAP server certificate. You may not use custom certificates. To add a certificate:
    1. If you previously uploaded a certificate, select it from the Certificate drop-down list.

      If no certificate was previously added, the drop-down list shows No data.

    2. If you want to upload a new certificate, click the AD_plus button on the right of the Certificate list.

      The Secret window opens.

    3. In the Name field, enter the name that will be displayed in the list of certificates after the certificate is added.
    4. Click the Upload certificate file button to add the file containing the Active Directory certificate. X.509 certificate public keys in Base64 are supported.
    5. If necessary, provide any relevant information about the certificate in the Description field.
    6. Click the Save button.

    The certificate will be uploaded and displayed in the Certificate list.

  9. In the Timeout in seconds field, indicate the amount of time to wait for a response from the domain controller server.

    If multiple addresses are indicated in the URL field, KUMA will wait the specified number of seconds for a response from the first server. If no response is received during that time, the program will contact the next server, and so on. If none of the indicated servers responds during the specified amount of time, the connection will be terminated with an error.

  10. In the Search base (Base DN) field, enter the base distinguished name of the directory in which you need to run the search query.
  11. Select the Disabled check box if you do not want to use this LDAP connection.

    This check box is cleared by default.

  12. Click the Save button.

The LDAP connection to Active Directory will be created and displayed in the LDAP server integration window.

Account information from Active Directory will be requested immediately after the connection is saved, and then it will be updated at the specified frequency.

If you want to use multiple LDAP connections simultaneously for one tenant, you need to make sure that the domain controller address indicated in each of these connections is unique. Otherwise KUMA lets you enable only one of these connections. When checking the domain controller address, the program does not check whether the port is unique.
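The failover behavior described for the URL and Timeout in seconds settings (try each comma-separated domain controller in turn, waiting up to the timeout for each, and fail only when all are exhausted) can be sketched as follows. This is an illustrative model, not KUMA code; connect_fn is a stand-in for a real connection attempt that respects the configured timeout:

```python
from typing import Callable, Optional

def connect_with_failover(
    url_field: str,
    connect_fn: Callable[[str], bool],
) -> Optional[str]:
    """Try each comma-separated domain controller in order.

    connect_fn(server) models one connection attempt that waits up to
    the configured timeout and returns True on success. Returns the
    first responding server, or None if no server responded (in KUMA,
    the connection is then terminated with an error).
    """
    for server in (s.strip() for s in url_field.split(",")):
        if connect_fn(server):
            return server
    return None
```

For example, with URL set to "dc1.test.domain:389, dc2.test.domain:389", a failure of the first server simply moves the attempt on to the second.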

Page top

[Topic 231112]

Creating a copy of an LDAP server connection

You can create an LDAP connection by copying an existing connection. In this case, all settings of the original connection are duplicated in the newly created connection.

To copy an LDAP connection:

  1. In the KUMA web interface, open Settings → LDAP server and select the tenant for which you want to copy an LDAP connection.

    The LDAP server integration window opens.

  2. Select the relevant connection.
  3. In the opened Connection parameters window, click the Duplicate connection button.

    The New Connection window opens. The word copy will be added to the connection name.

  4. If necessary, change the relevant settings.
  5. Click the Save button.

The new connection is created.

If you want to use multiple LDAP connections simultaneously for one tenant, you need to make sure that the domain controller address indicated in each of these connections is unique. Otherwise KUMA lets you enable only one of these connections. When checking the domain controller address, the program does not check whether the port is unique.

Page top

[Topic 233080]

Changing an LDAP server connection

To change an LDAP server connection:

  1. Open the KUMA web interface and select Settings → LDAP server.

    The LDAP server integration by tenant window opens.

  2. Select the tenant for which you want to change the LDAP server connection.

    The LDAP server integration window opens.

  3. Click the LDAP server connection that you want to change.

    The window with the settings of the selected LDAP server connection opens.

  4. Make the necessary changes to the settings.
  5. Click the Save button.

The LDAP server connection is changed. Restart the KUMA services that use LDAP server data enrichment for the changes to take effect.

Page top

[Topic 233081]

Changing the data update frequency

KUMA queries the LDAP server to update account data. This occurs:

  • Immediately after creating a new connection.
  • Immediately after changing the settings of an existing connection.
  • On a regular schedule, every several hours (every 12 hours by default).
  • Whenever a user creates a task to update account data.

When querying LDAP servers, a task is created in the Task manager section of the KUMA web interface.

To change the schedule of KUMA queries to LDAP servers:

  1. In the KUMA web interface, open Settings → LDAP server → LDAP server integration by tenant.
  2. Select the relevant tenant.

    The LDAP server integration window opens.

  3. In the Data refresh interval field, specify the required frequency in hours. The default value is 12.

The query schedule has been changed.

Page top

[Topic 233213]

Changing the data storage period

Received user account data is stored in KUMA for 90 days by default if information about these accounts is no longer received from the Active Directory server. After this period, the data is deleted.

After KUMA account data is deleted, new and existing events are no longer enriched with this information. Account information will also be unavailable in alerts. If you want to view information about accounts throughout the entire period of alert storage, you must set the account data storage period to be longer than the alert storage period.
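The retention rule above can be sketched as a simple check: account data becomes subject to deletion once no information about the account has been received from Active Directory for longer than the storage period (90 days by default). A minimal illustrative model:

```python
from datetime import datetime, timedelta

def is_expired(last_received: datetime, now: datetime,
               storage_days: int = 90) -> bool:
    """True if account data should be deleted: no information about
    the account has been received from the Active Directory server
    for longer than the storage period (90 days by default)."""
    return now - last_received > timedelta(days=storage_days)
```

Per the note above, to keep account details visible for the whole life of an alert, storage_days should exceed the alert storage period.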

To change the storage period for the account data:

  1. In the KUMA web interface, open Settings → LDAP server → LDAP server integration by tenant.
  2. Select the relevant tenant.

    The LDAP server integration window opens.

  3. In the Data storage time field, specify the number of days you need to store data received from the LDAP server.

The account data storage period is changed.

Page top

[Topic 233094]

Starting account data update tasks

After a connection to an Active Directory server is created, tasks to obtain account data are created automatically. This occurs:

  • Immediately after creating a new connection.
  • Immediately after changing the settings of an existing connection.
  • On a regular schedule, every several hours (every 12 hours by default). The schedule can be changed.

Account data update tasks can be created manually. You can download data for all connections or for one connection of the required tenant.

To start an account data update task for all LDAP connections of a tenant:

  1. In the KUMA web interface, open Settings → LDAP server → LDAP server integration by tenant.
  2. Select the relevant tenant.

    The LDAP server integration window opens.

  3. Click the Import accounts button.

A task to receive account data from the selected tenant is added to the Task manager section of the KUMA web interface.

To start an account data update task for one LDAP connection of a tenant:

  1. In the KUMA web interface, open Settings → LDAP server → LDAP server integration by tenant.
  2. Select the relevant tenant.

    The LDAP server integration window opens.

  3. Select the relevant LDAP server connection.

    The Connection parameters window opens.

  4. Click the Import accounts button.

A task to receive account data from the selected connection of the tenant is added to the Task manager section of the KUMA web interface.

Page top

[Topic 217830]

Deleting an LDAP server connection

To delete an LDAP connection to Active Directory:

  1. In the KUMA web interface, open Settings → LDAP server and select the tenant that owns the relevant LDAP connection.

    The LDAP server integration window opens.

  2. Click the LDAP connection that you want to delete and click the Delete button.
  3. Confirm deletion of the connection.

The LDAP connection to Active Directory will be deleted.

Page top

[Topic 221427]

Authorization with domain accounts

To enable users to complete authorization in the KUMA web interface using their own domain account credentials, you must complete the following configuration steps.

  1. Enable domain authorization if it is disabled.

    Domain authorization is enabled by default, but a connection to the domain is not yet configured.

  2. Configure a connection to the domain controller.

    You can connect only to one domain.

  3. Add groups of user roles.

    You can specify an Active Directory group for each KUMA role. After completing authorization using their own domain accounts, users from this group will obtain access to the KUMA web interface in accordance with their defined role.

    The program checks whether the Active Directory user group matches the specified filter according to the following order of roles in the KUMA web interface: operator → analyst → tenant administrator → general administrator. Upon the first match, the program assigns a role to the user and does not check any further. If a user matches two groups in the same tenant, the role with the least privileges will be used. If multiple groups are matched for different tenants, the user will be assigned the specified role in each tenant.
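The matching order described above can be sketched as a first-match lookup over roles ordered from least to most privileged: because the least-privileged roles are checked first, a user matching two groups in the same tenant ends up with the lesser role. The group names and mapping below are illustrative placeholders:

```python
from typing import Optional

# Roles in the order KUMA checks them (least privileged first).
ROLE_CHECK_ORDER = ["operator", "analyst", "tenant administrator",
                    "general administrator"]

def resolve_role(user_groups: set,
                 role_groups: dict) -> Optional[str]:
    """Return the first role whose configured AD group contains the
    user, following ROLE_CHECK_ORDER; stop at the first match."""
    for role in ROLE_CHECK_ORDER:
        group = role_groups.get(role)
        if group is not None and group in user_groups:
            return role
    return None
```

For a user who belongs to both the operator group and the tenant administrator group of one tenant, this lookup returns operator, matching the least-privileges rule.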

If you completed all the configuration steps but the user is unable to use their domain account for authorization in the KUMA web interface, it is recommended to check the configuration for the following issues:

  • An email address is not indicated in the properties of the user account in Active Directory. If this is the case, an error message is displayed during the user's first authorization attempt and a KUMA account will not be created.
  • There is already an existing local KUMA account with the email address indicated in the domain account properties. If this is the case, the user will see an error message when attempting authorization with the domain account.
  • Domain authorization is disabled in the KUMA settings.
  • An error was made when entering the group of roles.
  • The domain user name contains a space.

In this section

Enabling and disabling domain authorization

Configuring a connection to the domain controller

Adding groups of user roles

Page top

[Topic 221428]

Enabling and disabling domain authorization

Domain authorization is enabled by default, but a connection to the Active Directory domain is not yet configured. If you want to temporarily pause domain authorization after configuring a connection, you can disable it in the KUMA web interface without deleting the previously defined values of settings. If necessary, you will be able to enable authorization again at any time.

To enable or disable domain authorization of users in the KUMA web interface:

  1. In the program web interface, select Settings → Domain authorization.
  2. Do one of the following:
    • If you want to disable domain authorization, select the Disabled check box in the upper part of the workspace.
    • If you want to enable domain authorization, clear the Disabled check box in the upper part of the workspace.
  3. Click the Save button.

Domain authorization will be enabled or disabled based on your selection.

Page top

[Topic 221429]

Configuring a connection to the domain controller

You can connect only to one Active Directory domain. To do so, you must configure a connection to the domain controller.

To configure a connection to an Active Directory domain controller:

  1. In the program web interface, select Settings → Domain authorization.
  2. In the Connection settings block, in the Base DN field, enter the DistinguishedName of the root record to search for access groups in the Active Directory catalog service.
  3. In the URL field, indicate the address of the domain controller in the format <hostname or IP address of server>:<port>.

    In case of server availability issues, you can specify multiple servers with domain controllers by separating them with commas. All of the specified servers must reside in the same domain.

  4. If you want to use TLS encryption for the connection with the domain controller, select one of the following options from the TLS mode drop-down list:
    • startTLS.

      When the startTLS method is used, first it establishes an unencrypted connection over port 389, then it sends an encryption request. If the STARTTLS command ends with an error, the connection is terminated.

      Make sure that port 389 is open. Otherwise, a connection with the domain controller will be impossible.

    • ssl.

      When using SSL, an encrypted connection is immediately established over port 636.

    • insecure.

    When using an encrypted connection, it is impossible to specify an IP address as a URL.

  5. If you enabled TLS encryption at the previous step, add a TLS certificate:
    • If you previously uploaded a certificate, select it from the Secret drop-down list.

      If no certificate was previously added, the drop-down list shows No data.

    • If you want to upload a new certificate, click the AD_plus button on the right of the Secret list. In the opened window, in the Name field, enter the name that will be displayed in the list of certificates after the certificate is added. Add the file containing the Active Directory certificate (X.509 certificate public keys in Base64 are supported) by clicking the Upload certificate file button. Click the Save button.

      The certificate will be uploaded and displayed in the Secret list.

  6. In the Timeout in seconds field, indicate the amount of time to wait for a response from the domain controller server.

    If multiple addresses are indicated in the URL field, KUMA will wait the specified number of seconds for a response from the first server. If no response is received during that time, the program will contact the next server, and so on. If none of the indicated servers responds during the specified amount of time, the connection will be terminated with an error.

  7. If you want to configure domain authorization for a user with the KUMA general administrator role, specify the DistinguishedName of the Active Directory group containing the user in the General administrators group field.

    If a user matches two groups in the same tenant, the role with the least privileges will be used.

    Filter input example: CN=KUMA team,OU=Groups,OU=Clients,DC=test,DC=domain.

  8. Click the Save button.

A connection with the Active Directory domain controller is now configured. For domain authorization to work, you must also add groups of KUMA user roles.

You can also check the connection for the previously entered domain controller connection settings.

To check the connection to the domain controller:

  1. In the program web interface, select Settings → Domain authorization.
  2. In the Test connection settings block, select the relevant secret in the User credentials field.

    If necessary, you can create a new secret by clicking the AddSecret button or change the settings of an existing secret by clicking the ChangeSecret button.

  3. Click Test.

A pop-up notification with the test results is displayed. If the connection succeeded, it shows the message Connection established; otherwise, it shows the reason a connection could not be established.

Page top

[Topic 221430]

Adding groups of user roles

You can specify groups only for those roles that require configuration of domain authorization. You can leave the rest of the fields empty.

To add groups of user roles:

  1. In the program web interface, select Settings → Domain authorization.
  2. In the Role groups settings block, click the Add role groups button.
  3. In the Tenant drop-down list, select the tenant of the users for whom you want to configure domain authorization.
  4. In the fields for the following roles, specify the DistinguishedName of the Active Directory group whose users must have the capability to complete authorization with their domain accounts:
    • Operator.
    • Analyst.
    • Administrator.

    Group input example: CN=KUMA team,OU=Groups,OU=Clients,DC=test,DC=domain.

    You can specify only one Active Directory group for each role. If you need to specify multiple groups, you must repeat steps 2–4 for each group while indicating the same tenant.

  5. If necessary, repeat steps 2–4 for each tenant for which you want to configure domain authorization with operator, analyst, and tenant administrator roles.
  6. Click the Save button.

The groups of user roles will be added. The defined settings will be applied the next time the user logs in to the KUMA web interface.

After the first authorization of the user, information about them is displayed under Settings → Users. The Login and Password fields received from Active Directory will be unavailable for editing. The user role will also be unavailable for editing. To edit a role, you will have to change the user role groups. Changes to a role are applied after the next authorization of the user. The user will continue to operate under the old role until the current session expires.

If the user name or email address is changed in the Active Directory account properties, these changes will need to be manually entered into the KUMA account.

Page top

[Topic 221777]

RuCERT integration

In the KUMA web interface, you can create a connection to the National Computer Incident Response & Coordination Center (hereinafter referred to as "RuCERT"). This will let you export incidents registered by KUMA to RuCERT. Integration is configured under Settings → RuCERT in the KUMA web interface.

You can use the Disabled check box to enable or disable integration.

To create a connection to RuCERT:

  1. In the KUMA web interface, open Settings → RuCERT.
  2. In the URL field, enter the URL for accessing RuCERT. For example: https://example.cert.gov.ru/api/v2/
  3. In the Token settings block, create or select an existing secret resource with the API token that was issued to your organization for a connection to RuCERT:
    • If you already have a secret, you can select it from the drop-down list.
    • If you want to create a new secret:
      1. Click the AddResource button and specify the following settings:
        • Name (required)—unique name of the resource you are creating. The name must contain from 1 to 128 Unicode characters.
        • Token (required)—token that was issued to your organization for a connection to RuCERT.
        • Description—service description containing up to 256 Unicode characters.
      2. Click Save.

      The secret containing the token for connecting to RuCERT will be created. It is saved under Resources → Secrets and is owned by the main tenant.

    The selected secret can be changed by clicking on the EditResource button.

  4. In the Affected system function drop-down list, select the area of activity of your organization.

    Available company business sectors

    • Nuclear energy
    • Banking and other financial market sectors
    • Mining
    • Federal/municipal government
    • Healthcare
    • Metallurgy
    • Science
    • Defense industry
    • Education
    • Aerospace industry
    • Communication
    • Mass media
    • Fuel and power
    • Transportation
    • Chemical industry
    • Other
  5. In the Company field, indicate the name of your company. This data will be forwarded to RuCERT when incidents are exported.
  6. Use the Location drop-down list to specify where your company is located. This data will be forwarded to RuCERT when incidents are exported.
  7. If necessary, in the Proxy settings block, create or select an existing proxy server resource that should be used when connecting to RuCERT.
  8. Click Save.

KUMA is now integrated with RuCERT. Now you can export incidents to it.

Page top

[Topic 232020]

Integration with Security Vision Incident Response Platform

Security Vision Incident Response Platform (hereinafter referred to as Security Vision IRP) is a software platform used for automation of monitoring, processing, and responding to information security incidents. It aggregates cyberthreat data from various sources into a single database for further analysis and investigation to facilitate incident response capabilities.

Security Vision IRP can be integrated with KUMA. After configuring integration in Security Vision IRP, you can perform the following tasks:

  • Request information about alerts from KUMA. In Security Vision IRP, incidents are created based on received data.
  • Send requests to KUMA to close alerts.

Integration is implemented by using the KUMA REST API. On the Security Vision IRP side, integration is carried out by using the preconfigured Kaspersky KUMA connector. Contact your Security Vision IRP vendor to learn more about the methods and conditions for obtaining a Kaspersky KUMA connector.

Working with Security Vision IRP incidents

Security Vision IRP incidents generated from KUMA alert data can be viewed in Security Vision IRP under Incidents → Incidents → All incidents. Events related to KUMA alerts are logged in each Security Vision IRP incident. Imported events can be viewed on the Response tab.

KUMA alert imported as Security Vision IRP incident


Security Vision IRP incident that was created based on KUMA alert


Events from KUMA alert that were imported to Security Vision IRP

In this section

Configuring integration in KUMA

Configuring integration in Security Vision IRP

See also:

About alerts

About events

REST API

Page top

[Topic 232289]

Configuring integration in KUMA

To configure KUMA integration with Security Vision IRP, you must configure authorization of API requests in KUMA. To do so, you need to create a token for the KUMA user on whose behalf the API requests will be processed on the KUMA side.

A token can be generated in your account profile. Users with the General Administrator role can generate tokens in the accounts of other users. You can always generate a new token.

To generate a token in your account profile:

  1. In the KUMA web interface, click the user account name in the lower-left corner of the window and click the Profile button in the opened menu.

    The User window with your user account parameters opens.

  2. Click the Generate token button.
  3. Copy the generated token displayed in the opened window. This will be required to configure Security Vision IRP.

    When the window is closed, the token is no longer displayed. If you did not copy the token before closing the window, you will have to generate a new token.

The generated token must be indicated in the Security Vision IRP connector settings.
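For reference, the token is passed in the authorization header of every KUMA REST API request. The following is a minimal sketch of building such a request; the host name, port, and token value are placeholders, and the exact routes should be checked against the KUMA REST API reference:

```python
# Sketch: building an authorized KUMA REST API request.
# The host, port, and token below are placeholders, not real values.
from urllib.request import Request

KUMA_API = "https://kuma.example.com:7223/api/v1"

def build_request(path: str, token: str, method: str = "GET", body: bytes = None) -> Request:
    """Return a Request carrying the Bearer token in the authorization header."""
    return Request(
        f"{KUMA_API}/{path.lstrip('/')}",
        data=body,
        method=method,
        headers={
            "authorization": f"Bearer {token}",  # token generated in the user profile
            "Content-Type": "application/json",
        },
    )

req = build_request("alerts/?withEvents&status=new", token="<token>")
```

The Security Vision IRP connector sends an equivalent header with each command it executes.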

See also:

Configuring integration in Security Vision IRP

Page top

[Topic 232073]

Configuring integration in Security Vision IRP

Configuration of integration in Security Vision IRP consists of importing and configuring a connector. If necessary, you can also change other Security Vision IRP settings related to KUMA data processing, such as the data processing schedule and worker.

For more detailed information about configuring Security Vision IRP, please refer to the product documentation.

In this section

Importing and configuring a connector

Configuring the handler, schedule, and worker process

See also:

Configuring integration in KUMA

Page top

[Topic 232293]

Importing and configuring a connector

Adding a connector in Security Vision IRP

Integration of Security Vision IRP and KUMA is carried out by using the Kaspersky KUMA connector. Contact your Security Vision IRP vendor to learn more about the methods and conditions for obtaining a Kaspersky KUMA connector.

To import a Kaspersky KUMA connector into Security Vision IRP:

  1. In Security Vision IRP, open Settings → Connectors → Connectors.

    You will see a list of connectors that have been added to Security Vision IRP.

  2. At the top of the screen, click the import button and select the ZIP archive containing the Kaspersky KUMA connector.

The connector has been imported into Security Vision IRP and is ready to be configured.

Configuring a connector for a connection to KUMA

To use a connector, you need to configure its connection to KUMA.

To configure a connection to KUMA in Security Vision IRP using the Kaspersky KUMA connector:

  1. In Security Vision IRP, open Settings → Connectors → Connectors.

    You will see a list of connectors that have been added to your Security Vision IRP.

  2. Select the Kaspersky KUMA connector.

    The general settings of the connector will be displayed.

  3. Under Connector settings, click the Edit button.

    The connector configuration will be displayed.

  4. In the URL field, specify the address and port of KUMA. For example, kuma.example.com:7223.
  5. In the Token field, specify the KUMA user's API token.

The connection to KUMA has been configured in the Security Vision IRP connector.

Security Vision IRP connector settings


Configuring commands for interaction with KUMA in the Security Vision IRP connector

You can use Security Vision IRP to receive information about KUMA alerts (referred to as incidents in Security Vision IRP terminology) and send requests to close these alerts. To perform these actions, you need to configure the appropriate commands in the Security Vision IRP connector.

The instructions below describe how to add commands to receive and close alerts. However, if you need to implement more complex logic of interaction between Security Vision IRP and KUMA, you can similarly create your own commands containing other API requests.

To configure a command to receive alert information from KUMA:

  1. In Security Vision IRP, open Settings → Connectors → Connectors.

    You will see a list of connectors that have been added to Security Vision IRP.

  2. Select the Kaspersky KUMA connector.

    The general settings of the connector will be displayed.

  3. Click the +Command button.

    The command creation window opens.

  4. Specify the command settings for receiving alerts:
    • In the Name field, enter the command name: Receive incidents.
    • In the Request type drop-down list, select GET.
    • In the Called method field, enter the API request to search for alerts: api/v1/alerts/?withEvents&status=new
    • Under Request headers, in the Name field, indicate authorization. In the Value field, indicate Bearer <token>.
    • In the Content type drop-down list, select application/json.
  5. Save the command and close the window.

The connector command is configured. When this command is executed, the Security Vision IRP connector will query KUMA for information about all alerts with the New status and all events related to those alerts. The received data will be relayed to the Security Vision IRP handler, which will create Security Vision IRP incidents based on this data. If an already imported alert is updated in KUMA with additional information, the new data will be imported into the corresponding Security Vision IRP incident.
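The processing performed by this command can be sketched as follows: a GET request to api/v1/alerts/?withEvents&status=new, followed by parsing of the JSON response into incident records. The response structure in the sample is illustrative, not the exact KUMA schema:

```python
# Sketch of the "Receive incidents" command: parse the alerts response
# into incident records. The payload structure here is illustrative only.
import json

def parse_alerts(raw: str) -> list:
    """Extract the alert ID, name, and related events from each alert."""
    alerts = json.loads(raw)
    return [
        {"id": a["id"], "name": a.get("name", ""), "events": a.get("events", [])}
        for a in alerts
    ]

# Illustrative sample of what a response for one New alert might contain.
sample = '[{"id": "a1", "name": "Bruteforce", "events": [{"id": "e1"}]}]'
incidents = parse_alerts(sample)
```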

To configure a command to close KUMA alerts:

  1. In Security Vision IRP, open Settings → Connectors → Connectors.

    You will see a list of connectors that have been added to Security Vision IRP.

  2. Select the Kaspersky KUMA connector.

    The general settings of the connector will be displayed.

  3. Click the +Command button.

    The command creation window will be displayed.

  4. Specify the command settings for closing alerts:
    • In the Name field, enter the command name: Close incident.
    • In the Request type drop-down list, select POST.
    • In the Called method field, enter the API request to close an alert: api/v1/alerts/close
    • In the Request field, enter the contents of the API request to be sent: {"id":"<Alert ID>","reason":"responded"}

      You can create multiple commands for different reasons to close alerts, such as responded, incorrect data, and incorrect correlation rule.

    • Under Request headers, in the Name field, indicate authorization. In the Value field, indicate Bearer <token>.
    • In the Content type drop-down list, select application/json.
  5. Save the command and close the window.

The connector command is configured. When this command is executed, the incident will be closed in Security Vision IRP and the corresponding alert will be closed in KUMA.
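The request body this command sends can be sketched as follows; the alert ID is a placeholder, and the set of close reasons matches those mentioned above:

```python
# Sketch of the "Close incident" request body sent to api/v1/alerts/close.
# The alert ID is a placeholder; close reasons follow the examples above.
import json

CLOSE_REASONS = {"responded", "incorrect data", "incorrect correlation rule"}

def close_alert_body(alert_id: str, reason: str = "responded") -> str:
    """Build the JSON payload for closing a KUMA alert."""
    if reason not in CLOSE_REASONS:
        raise ValueError(f"unsupported close reason: {reason!r}")
    return json.dumps({"id": alert_id, "reason": reason})

body = close_alert_body("<Alert ID>")
```

A separate command with a different reason value can be created for each way of closing an alert.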

Creating commands in Security Vision IRP


After configuring the connector, KUMA alerts will be sent to the platform as Security Vision IRP incidents. Then you need to configure incident handling in Security Vision IRP based on the security policies of your organization.

Page top

[Topic 232323]

Configuring the handler, schedule, and worker process

Security Vision IRP handler

The Security Vision IRP handler receives KUMA alert data from the Security Vision IRP connector and creates Security Vision IRP incidents based on this data. A predefined KUMA (Incidents) handler is used for processing data. The settings of the KUMA (Incidents) handler are available in Security Vision IRP under Settings → Event processing → Event handlers:

  • The rules for processing KUMA alerts can be viewed in the handler settings on the Normalization tab.
  • The available actions when creating new objects can be viewed in the handler settings on the Actions tab for creating objects of the Incident type.

Handler run schedule

The connector and handler are started according to a predefined KUMA schedule. This schedule can be configured in Security Vision IRP under Settings → Event processing → Schedule:

  • In the Connector settings block, you can configure the settings for starting the connector.
  • In the Handler settings block, you can configure the settings for starting the handler.

Security Vision IRP worker process

The life cycle of Security Vision IRP incidents created based on KUMA alerts follows the preconfigured Incident processing worker. The worker can be configured in Security Vision IRP under Settings → Workers → Worker templates: select the Incident processing worker and click the transaction or state that you need to change.

Page top

[Topic 233668]

Kaspersky Industrial CyberSecurity for Networks integration

Kaspersky Industrial CyberSecurity for Networks (hereinafter referred to as "KICS for Networks") is an application designed to protect the industrial enterprise infrastructure from information security threats, and to ensure uninterrupted operation. The application analyzes industrial network traffic to identify deviations in the values of process parameters, detect signs of network attacks, and monitor the operation and current state of network devices.

KICS for Networks version 4.0 or later can be integrated with KUMA. After configuring integration, you can perform the following tasks in KUMA:

  • Import asset information from KICS for Networks to KUMA.
  • Send asset status change commands from KUMA to KICS for Networks.

Unlike KUMA, KICS for Networks refers to assets as devices.

The integration of KICS for Networks and KUMA must be configured in both applications:

  1. In KICS for Networks, you need to create a KUMA connector and save the communication data package of this connector.
  2. In KUMA, the communication data package of the connector is used to create a connection to KICS for Networks.

The integration described in this section applies to importing asset information. KICS for Networks can also be configured to send events to KUMA. To do so, you need to create a SIEM/Syslog connector in KICS for Networks, and configure a collector on the KUMA side.

In this section

Configuring integration in KICS for Networks

Configuring integration in KUMA

Enabling and disabling integration with KICS for Networks

Changing the data update frequency

Special considerations when importing asset information from KICS for Networks

Changing the status of a KICS for Networks asset

Page top

[Topic 233670]

Configuring integration in KICS for Networks

The program supports integration with KICS for Networks version 4.0 or later.

It is recommended to configure the integration of KICS for Networks and KUMA after the learning mode for Process Control rules has ended. For more details, please refer to the documentation on KICS for Networks.

On the KICS for Networks side, integration configuration consists of creating a KUMA-type connector. In KICS for Networks, connectors are specialized application modules that enable KICS for Networks to exchange data with recipient systems, including KUMA. For more details on creating connectors, please refer to the documentation on KICS for Networks.

When a connector is added to KICS for Networks, a communication data package is automatically created for this connector. This is an encrypted configuration file for connecting to KICS for Networks that is used when configuring integration on the KUMA side.

Page top

[Topic 233669]

Configuring integration in KUMA

It is recommended to configure the integration of KICS for Networks and KUMA after the learning mode for Process Control rules has ended. For more details, please refer to the documentation on KICS for Networks.

To configure integration with KICS for Networks in KUMA:

  1. Open the KUMA web interface and select Settings → Kaspersky Industrial CyberSecurity for Networks.

    The Kaspersky Industrial CyberSecurity for Networks integration by tenant window opens.

  2. Select or create a tenant for which you want to create an integration with KICS for Networks.

    The Kaspersky Industrial CyberSecurity for Networks integration window opens.

  3. Click the Communication data package field and select the communication data package that was created in KICS for Networks.
  4. In the Communication data package password field, enter the password of the communication data package.
  5. Select the Enable response check box if you want to change the statuses of KICS for Networks assets by using KUMA response rules.
  6. Click Save.

Integration with KICS for Networks is configured in KUMA, and the window shows the IP address of the node where the KICS for Networks connector will be running and its ID.

Page top

[Topic 233717]

Enabling and disabling integration with KICS for Networks

To enable or disable KICS for Networks integration for a tenant:

  1. In the KUMA web interface, open Settings → Kaspersky Industrial CyberSecurity for Networks and select the tenant for which you want to enable or disable KICS for Networks integration.

    The Kaspersky Industrial CyberSecurity for Networks integration window opens.

  2. Select or clear the Disabled check box.
  3. Click Save.
Page top

[Topic 233718]

Changing the data update frequency

KUMA queries KICS for Networks to update its asset information. This occurs:

  • Immediately after creating a new integration.
  • Immediately after changing the settings of an existing integration.
  • According to a regular schedule (every 3 hours by default).
  • Whenever a user creates a task for updating asset data.

When querying KICS for Networks, a task is created in the Task manager section of the KUMA web interface.

To edit the schedule for importing information about KICS for Networks assets:

  1. In the KUMA web interface, open Settings → Kaspersky Industrial CyberSecurity for Networks.
  2. Select the relevant tenant.

    The Kaspersky Industrial CyberSecurity for Networks integration window opens.

  3. In the Data refresh interval field, specify the required frequency in hours. The default value is 3.

The import schedule has been changed.

See also:

Special considerations when importing asset information from KICS for Networks

Page top

[Topic 233699]

Special considerations when importing asset information from KICS for Networks

Importing assets

Assets are imported according to the asset import rules. Only assets with the Authorized and Unauthorized statuses are imported.

KICS for Networks assets are identified by a combination of the following parameters:

  • IP address of the KICS for Networks instance with which the integration is configured.
  • ID of the KICS for Networks connector that is used to configure the integration.
  • ID assigned to the asset (or "device") in the KICS for Networks instance.

Importing vulnerability information

When importing assets, KUMA also receives information about active vulnerabilities in KICS for Networks. If a vulnerability has been flagged as Remediated or Negligible in KICS for Networks, the information about this vulnerability is deleted from KUMA during the next import.

Information about asset vulnerabilities is displayed in the localization language of KICS for Networks in the Asset details window in the Vulnerabilities settings block.

In KICS for Networks, vulnerabilities are referred to as risks and are divided into several types. All types of risks are imported into KUMA.

Imported data storage period

If information about a previously imported asset is no longer received from KICS for Networks, the asset is deleted after 30 days.

Page top

[Topic 233750]

Changing the status of a KICS for Networks asset

After configuring integration, you can change the statuses of KICS for Networks assets from KUMA. Statuses can be changed either automatically or manually.

Asset statuses can be changed only if you enabled a response in the settings for connecting to KICS for Networks.

Manually changing the status of a KICS for Networks asset

Users with the General Administrator, Administrator, and Analyst roles in the tenants available to them can manually change the statuses of assets imported from KICS for Networks.

To manually change a KICS for Networks asset status:

  1. In the Assets section of the KUMA web interface, click the asset that you want to edit.

    The Asset details area opens in the right part of the window.

  2. In the Status in KICS for Networks drop-down list, select the status that you need to assign to the KICS for Networks asset. The Authorized or Unauthorized statuses are available.

The asset status is changed. The new status is displayed in KICS for Networks and in KUMA.

Automatically changing the status of a KICS for Networks asset

Automatic changes to the statuses of KICS for Networks assets are implemented using response rules. The rules must be added to the correlator, which will determine the conditions for triggering these rules.

Page top

[Topic 217687]

KUMA resources

Resources are KUMA components that contain parameters for implementing various functions: for example, establishing a connection with a given web address or converting data according to certain rules. Like parts of an erector set, these components are assembled into resource sets for services that are then used as the basis for creating KUMA services.

Resources are contained in the Resources section, Resources block of the KUMA web interface. The following resource types are available:

  • Correlation rules—resources of this type contain rules for identifying event patterns that indicate threats. If the conditions specified in these resources are met, a correlation event is generated.
  • Normalizers—resources of this type contain rules for converting incoming events into the format used by KUMA. After processing in the normalizer, the "raw" event becomes normalized and can be processed by other KUMA resources and services.
  • Connectors—resources of this type contain settings for establishing network connections.
  • Aggregation rules—resources of this type contain rules for combining several basic events of the same type into one aggregation event.
  • Enrichment rules—resources of this type contain rules for supplementing events with information from third-party sources.
  • Destinations—resources of this type contain settings for forwarding events to a destination for further processing or storage.
  • Filters—resources of this type contain conditions for rejecting or selecting individual events from the stream of events.
  • Response rules—resources of this type are used in correlators to, for example, execute scripts or launch Kaspersky Security Center tasks when certain conditions are met.
  • Notification templates—resources of this type are used when sending notifications about new alerts.
  • Active lists—resources of this type are used by correlators for dynamic data processing when analyzing events according to correlation rules.
  • Dictionaries—resources of this type are used to store keys and their values, which may be required by other KUMA resources and services.
  • Proxies—resources of this type contain settings for using proxy servers.
  • Secrets—resources of this type are used to securely store confidential information (such as credentials) that KUMA needs to interact with external services.

When you click on a resource type, a window opens displaying a table with the available resources of this type. The resource table contains the following columns:

  • Name—the name of a resource. Can be used to search for resources and sort them.
  • Updated—the date and time of the last update of a resource. Can be used to sort resources.
  • Created by—the name of the user who created a resource.
  • Description—the description of a resource.

Resources can be organized into folders. On the left side of each window, the folder structure is displayed, where the number and names of the root folders correspond to the tenants created in KUMA. When a folder is selected, the resources it contains are displayed as a table in the right pane of the window.

Resources can be created, edited, copied, moved from one folder to another, and deleted. Resources can also be exported and imported.

In this Help topic

Operations with resources

Correlation rules

Normalizers

Connectors

Aggregation rules

Enrichment rules

Destinations

Filters

Response rules

Notification templates

Active lists

Dictionaries

Proxies

Secrets

Page top

[Topic 217971]

Operations with resources

To manage KUMA resources, you can create, move, copy, edit, delete, import, and export them. These operations are available for all resources, regardless of the resource type.

KUMA resources reside in folders. You can add, rename, move, or delete resource folders.

In this section

Creating, renaming, moving, and deleting resource folders

Creating, duplicating, moving, editing, and deleting resources

Exporting and importing resources

Page top

[Topic 218051]

Creating, renaming, moving, and deleting resource folders

You can create, rename, move, and delete folders.

To create a folder:

  1. Select the folder in the tree where the new folder is required.
  2. Click the Add folder button.

The folder will be created.

To rename a folder:

  1. Locate the required folder in the folder structure.
  2. Hover over the name of the folder.

    The More-DropDown icon will appear near the name of the folder.

  3. Open the More-DropDown drop-down list and select Rename.

    The folder name will become active for editing.

  4. Enter the new folder name and press ENTER.

    The folder name cannot be empty.

The folder will be renamed.

To move a folder,

Drag and drop the folder to the required place in the folder structure by its name.

Folders cannot be dragged from one tenant to another.

To delete a folder:

  1. Locate the required folder in the folder structure.
  2. Hover over the name of the folder.

    The More-DropDown icon will appear near the name of the folder.

  3. Open the More-DropDown drop-down list and select Delete.

    The confirmation window appears.

  4. Click OK.

The folder will be deleted.

The program does not delete folders that contain files or subfolders.

Page top

[Topic 218050]

Creating, duplicating, moving, editing, and deleting resources

You can create, move, copy, edit, and delete resources.

To create the resource:

  1. In the Resources → <resource type> section, select or create a folder where you want to add the new resource.

    Root folders correspond to tenants. For a resource to be available to a specific tenant, it must be created in the folder of that tenant.

  2. Click the Add <resource type> button.

    The window for configuring the selected resource type opens. The available configuration parameters depend on the resource type.

  3. Enter a unique resource name in the Name field.
  4. Specify the required parameters (marked with a red asterisk).
  5. If necessary, specify the optional parameters.
  6. Click Save.

The resource will be created and available for use in services and other resources.

To move the resource to a new folder:

  1. In the Resources → <resource type> section, find the required resource in the folder structure.
  2. Select the check box near the resource you want to move. You can select multiple resources.

    The DragIcon icon appears near the selected resources.

  3. Use the DragIcon icon to drag and drop resources to the required folder.

The resources will be moved to the new folders.

You can only move resources to folders of the tenant in which the resources were created. Resources cannot be moved to another tenant's folders.

To copy the resource:

  1. In the Resources → <resource type> section, find the required resource in the folder structure.
  2. Select the check box next to the resource that you want to copy and click Duplicate.

    A window opens with the settings of the resource that you have selected for copying. The available configuration parameters depend on the resource type.

    The <selected resource name> - copy value is displayed in the Name field.

  3. Make the necessary changes to the parameters.
  4. Enter a unique name in the Name field.
  5. Click Save.

The copy of the resource will be created.

To edit the resource:

  1. In the Resources → <resource type> section, find the required resource in the folder structure.
  2. Select the resource.

    A window with the settings of the selected resource opens. The available configuration parameters depend on the resource type.

  3. Make the necessary changes to the parameters.
  4. Click Save.

The resource will be updated. If this resource is used in a service, restart the service to apply the new settings.

To delete the resource:

  1. In the Resources → <resource type> section, find the required resource in the folder structure.
  2. Select the check box next to the resource that you want to delete and click Delete.

    A confirmation window opens.

  3. Click OK.

The resource will be deleted.

Page top

[Topic 217870]

Exporting and importing resources

You can export and import resources.

To export resources:

  1. In the Resources → <resource type> section, click the MoreButton icon.
  2. In the drop-down list, select Export resources.

    The Export resources window opens with the tree of all available resources.

  3. In the Password field, enter a password that will be used to protect the exported data.
  4. In the Tenant drop-down list, select the tenant whose resources you want to export.
  5. Select check boxes next to the resources you want to export.

    If selected resources are linked to other resources, linked resources will be exported, too.

  6. Click the Export button.

The resources are saved to your computer as a password-protected file, in accordance with your browser settings. Secret resources are exported blank.

To import resources:

  1. Open the MoreButton drop-down list and select Import resources.

    The Resource import window opens.

  2. In the Password field, enter the password for the file you want to import.
  3. In the Tenant drop-down list, select the tenant that will own the imported resources.
  4. Click the Select file button and locate the file with the resources you want to import.

    In the Resource import window, the tree of all resources available in the selected file is displayed.

  5. Select resources you want to import.
  6. Click the Import button.
  7. Resolve conflicts between imported and existing resources if they appear. Read more about resource conflicts below.
    1. If the name of an imported resource matches the name of an already existing resource, the Conflicts window opens with a table showing the kind and name of the conflicting resources. Resolve the displayed conflicts:
      • If you want to replace the existing resource with a new one, click Replace.

        Click Replace all to replace all existing conflicting resources.

      • If you want to leave the existing resource, click Skip.

        Click Skip all to keep all existing resources.

    2. Click the Resolve button.

The resources are imported to KUMA. Secret resources are imported blank.

About conflict resolving

When resources are imported to KUMA, the program compares them with the existing resources, checking their name, kind, and guid (or identifier) parameters:

  • If an imported resource's name and kind parameters match those of the existing one, the imported resource's name is automatically changed.
  • If identifiers of two resources match, a conflict appears that must be resolved by the user. This could happen when you import resources to the same KUMA server from which they were exported.

When resolving a conflict, you can choose either to replace the existing resource with the imported one or to keep the existing resource, skipping the imported one.

Some resources are linked (for example, the Connector resource requires the Connection resource); such resources are exported and imported together. If a conflict occurs during the import and you choose to replace an existing resource with a new one, all the other resources linked to the one being replaced are automatically replaced with the imported resources as well, even if you chose to Skip any of them.

During import, all resources are imported into one tenant even if they belonged to different tenants during export (for example, if an associated resource was in a shared tenant).
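The comparison logic described above can be sketched as follows. This is a simplified model: the field names and the " - copy" renaming suffix are illustrative assumptions, not the exact KUMA implementation:

```python
# Simplified model of KUMA's import conflict check.
# Field names and the " - copy" rename suffix are illustrative assumptions.
def check_import(imported: dict, existing: list) -> tuple:
    """Return ("conflict" | "rename" | "import", resource) for one imported resource."""
    for res in existing:
        if res["guid"] == imported["guid"]:
            return "conflict", imported  # same identifier: the user must resolve it
    for res in existing:
        if res["name"] == imported["name"] and res["kind"] == imported["kind"]:
            # same name and kind: the imported resource is renamed automatically
            return "rename", {**imported, "name": imported["name"] + " - copy"}
    return "import", imported            # no match: imported as-is

existing = [{"guid": "1", "name": "syslog", "kind": "connector"}]
action, _ = check_import({"guid": "1", "name": "syslog", "kind": "connector"}, existing)
```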

Page top

[Topic 217783]

Correlation rules

Correlation rule resources are used in correlator services to recognize specific sequences of processed events and to take certain actions after recognition, such as creating correlation events or alerts, or interacting with an active list.

The available correlation rule settings depend on the selected type. Types of correlation rules:

  • standard—used to find correlations between several events. Resources of this kind can create correlation events.

    This resource kind is used to determine complex correlation patterns. For simpler patterns, you should use other correlation rule kinds that require fewer resources to operate.

  • simple—used to create correlation events if a certain event was found.
  • operational—used for operations with Active lists. This resource kind cannot create correlation events.

For these resources, you can enable the display of control characters in all input fields except the Description field.

If a correlation rule is used in the correlator and an alert was created based on it, any change to the correlation rule resource will not result in a change to the existing alert even if the correlator service is restarted. For example, if the name of a correlation rule is changed, the name of the alert will remain the same. If you close the existing alert, a new alert will be created and it will take into account the changes made to the correlation rule resource.

In this section

Standard correlation rules

Simple correlation rules

Operational correlation rules

Variables in correlators

Page top

[Topic 221197]

Standard correlation rules

Standard correlation rules are used to identify complex patterns in processed events.

The search for patterns is conducted by using buckets.

A bucket is a data container used by Correlation rule resources to determine whether a correlation event should be created. It has the following functions:

  • Group together events that were matched by the filters in the Selectors group of settings of the Correlation rule resource. Events are grouped by the fields that the user selected in the Identical fields field.
  • Determine the moment when the Correlation rule should trigger, affecting the events that are grouped in the bucket.
  • Perform the actions that are selected in the Actions group of settings.
  • Create correlation events.

Available states of the Bucket:

  • Empty—the bucket has no events. This state is possible only immediately after the bucket is created by the triggering of a correlation rule.
  • Partial Match—the bucket has some of the expected events (recovery events are not counted).
  • Full Match—the bucket has all of the expected events (recovery events are not counted). When this condition is achieved:
    • The Correlation rule triggers
    • Events are cleared from the bucket
    • The trigger counter of the bucket is updated
    • The state of the bucket becomes Empty
  • False Match—this state of the Bucket is possible:
    • when the Full Match state was achieved but the join-filter returned false.
    • when the Recovery check box was selected and recovery events were received.

    When this condition is achieved, the Correlation rule does not trigger. Events are cleared from the bucket, the trigger counter is updated, and the state of the bucket becomes Empty.
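
As an illustration, the grouping and triggering behavior described above can be sketched in a few lines. This is a hypothetical simplification: the field names, the SHA-256 key derivation, and the fixed threshold are assumptions made for the example, not KUMA's internal implementation.

```python
import hashlib
from collections import defaultdict

# Assumed example values: which fields identify a bucket, and how many
# grouped events constitute a Full Match.
IDENTICAL_FIELDS = ("SourceAddress", "DestinationAddress")
THRESHOLD = 3

def bucket_key(event: dict) -> str:
    """The hash of the values of the Identical fields is used as the bucket key."""
    joined = "|".join(str(event.get(f, "")) for f in IDENTICAL_FIELDS)
    return hashlib.sha256(joined.encode()).hexdigest()

buckets: dict[str, list[dict]] = defaultdict(list)

def process(event: dict) -> bool:
    """Return True when the bucket reaches Full Match and the rule triggers.

    On Full Match, the events are cleared from the bucket, so its state
    returns to Empty, as described above.
    """
    key = bucket_key(event)
    buckets[key].append(event)
    if len(buckets[key]) >= THRESHOLD:
        buckets[key].clear()
        return True
    return False
```

With a threshold of 3, the first two events sharing the same identical-field values leave the bucket in Partial Match, and the third produces a Full Match.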

The correlation rule resource window contains the following configuration tabs:

  • General—used to specify the main settings of the correlation rule resource. On this tab, you can select the type of correlation rule.
  • Selectors—used to define the conditions that the processed events must fulfill to trigger the correlation rule. Available parameters vary based on the selected resource type.
  • Actions—used to set the triggers that will activate when the conditions configured in the Selectors settings block are fulfilled. The Correlation rule resource must have at least one trigger. Available parameters vary based on the selected resource type.

General tab

  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—the tenant that owns the correlation rule.
  • Type (required)—a drop-down list for selecting the type of correlation rule. Select standard if you want to create a standard correlation rule.
  • Identical fields (required)—the event fields that should be grouped in a Bucket. The hash of the values of the selected fields is used as the Bucket key. If the selector (see below) triggers, the selected fields will be copied to the correlation event.
  • Unique fields—event fields that should be sent to the Bucket. If this parameter is set, the Bucket will receive only unique events. The hash of the selected fields' values is used as the Bucket key. If the Correlation rule triggers, the selected fields will be copied to the correlation event.
  • Rate limit—maximum number of times a correlation rule can be triggered per second. The default value is 100.

    If correlation rules employing complex logic for pattern detection are not triggered, this may be due to the specific method used to count rule triggers in KUMA. In this case, try to increase the value of Rate limit to 1000000, for example.

  • Window, sec (required)—bucket lifetime, in seconds. This timer starts when the Bucket is created (when it receives the first event). The lifetime is not updated, and when it runs out, the On timeout trigger from the Actions group of settings is activated and the bucket is deleted. The On every threshold and On subsequent thresholds triggers can be activated more than once during the lifetime of the bucket.
  • Base events keep policy—this drop-down list is used to specify which base events must be stored in the correlation event:
    • first (default value)—this option is used to store the first base event of the event collection that triggered creation of the correlation event.
    • last—this option is used to store the last base event of the event collection that triggered creation of the correlation event.
    • all—this option is used to store all base events of the event collection that triggered creation of the correlation event.
  • Priority—base coefficient used to determine the importance of a correlation rule. The default value is Low.
  • Order by—in this drop-down list, you can select the event field that will be used by the correlation rule selectors to track situational changes. This could be useful if you want to configure a correlation rule to be triggered when several types of events occur sequentially, for example.
  • Description—the description of a resource. Up to 256 Unicode characters.

Selectors tab

There can be multiple selectors in the standard resource kind. You can add selectors by clicking the Add selector button and can remove them by clicking the Delete selector button. Selectors can be moved by using the DragIcon button.

For each selector, the following two tabs are available: Settings and Local variables.

The Settings tab contains the following settings:

  • Alias (required)—unique name of the event group that meets the conditions of the selector. This name is used to identify events in the filter. Must contain from 1 to 128 Unicode characters.
  • Selector threshold (event count) (required)—the number of events that must be received by the selector to trigger.
  • Filter (required)—used to set the criteria for determining events that should trigger the selector. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list, you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the InSubnet, InActiveList, InCategory or InActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit-grey button.

    Filtering based on data from the Extra event field

    Conditions for filters based on data from the Extra event field:

    • Condition—If.
    • Left operand—event field. In this event field, you can specify one of the following values:
      • Extra field.
      • Value from the Extra field in the following format:

        Extra.<field name>

        For example, Extra.app.

        A value of this type is specified manually.

      • Value from the array written to the Extra field in the following format:

        Extra.<field name>.<array element>

        For example, Extra.array.0.

        The values in the array are numbered starting from 0.

        A value of this type is specified manually.

    • Operator—=.
    • Right operand—constant.
    • Value—the value by which you need to filter events.
  • Recovery—select this check box if the Correlation rule must NOT trigger when a certain number of events is received from the selector. By default, this check box is cleared.
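
To make the operator semantics concrete, here is a minimal sketch of how a few of the string operators listed above could be evaluated against an event, including the effect of the do not match case check box. The operator subset, the field names, and the check function itself are illustrative assumptions; KUMA's actual filter engine is internal to the product.

```python
# A minimal, hypothetical evaluator for a subset of the documented
# filter operators. Only string comparisons are modeled here.
OPERATORS = {
    "=": lambda left, right: left == right,
    "contains": lambda left, right: right in left,
    "startsWith": lambda left, right: left.startswith(right),
    "endsWith": lambda left, right: left.endswith(right),
}

def check(event: dict, field: str, op: str, value: str,
          ignore_case: bool = False) -> bool:
    """Evaluate one condition; ignore_case mirrors 'do not match case'."""
    left = str(event.get(field, ""))
    right = value
    if ignore_case:
        left, right = left.lower(), right.lower()
    return OPERATORS[op](left, right)
```

Conditions can then be combined with and/or, as the AND button in the Conditions settings block does, for example: `check(event, "DeviceVendor", "startsWith", "micro", ignore_case=True) and check(event, "Name", "=", "4624")`.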

On the Local variables tab, use the Add variable button to declare variables that will be used within the limits of this correlation rule.

Actions tab

There can be multiple triggers in a standard type of resource.

  • On first threshold—this trigger activates when the Bucket registers the first triggering of the selector during the lifetime of the Bucket.
  • On subsequent thresholds—this trigger activates when the Bucket registers the second and all subsequent triggerings of the selector during the lifetime of the Bucket.
  • On every threshold—this trigger activates every time the Bucket registers the triggering of the selector.
  • On timeout—this trigger activates when the lifetime of the Bucket ends, and is linked to the selector with the Recovery check box selected. In other words, this trigger activates if the situation detected by the correlation rule is not resolved within the defined amount of time.

Every trigger is represented as a group of settings with the following parameters available:

  • Output—if this check box is selected, the correlation event will be sent for post-processing: for enrichment, for a response, and to destinations.
  • Loop—if this check box is selected, the correlation event will be processed by the current correlation rule resource. This allows hierarchical correlation.

    If both check boxes are selected, the correlation event will be sent for post-processing first and then to the current correlation rule selectors.

  • Do not create alert—if this check box is selected, an alert will not be created when this correlation rule is triggered.
  • Active lists update settings group—used to assign the trigger for one or more operations with active lists. You can use the Add active list action and Delete active list action buttons to add or delete operations with active lists, respectively.

    Available settings:

    • Name (required)—this drop-down list is used to select the Active list resources.
    • Operation (required)—this drop-down list is used to select the operation that must be performed:
      • Get—get the Active list entry and write the values of the selected fields into the correlation event.
      • Set—write the values of the selected fields of the correlation event into the Active list by creating a new or updating an existing Active list entry. When the Active list entry is updated, the data is merged and only the specified fields are overwritten.
      • Delete—delete the Active list entry.
    • Key fields (required)—this is the list of event fields used to create the Active list entry. It is also used as the Active list entry key.

      The active list entry key depends on the available fields and does not depend on the order in which they are displayed in the KUMA web interface.

    • Mapping (required for Get and Set operations)—used to map Active list fields to event fields. More than one mapping rule can be set.
      • The left field is used to specify the Active list field.

        The field must not contain special characters or numbers only.

      • The middle drop-down list is used to select event fields.
      • The right field can be used to assign a constant to the Active list field if the Set operation was selected.
  • Enrichment settings block—you can update the field values of correlation events by using enrichment rules similar to enrichment rule resources. These enrichment rules are stored in the Correlation rule resource where they were created. It is possible to have more than one enrichment rule. Enrichment rules can be added or deleted by using the Add enrichment or Remove enrichment buttons, respectively.
    • Source kind—you can select the type of enrichment in this drop-down list. Depending on the selected type, you may see advanced settings that will also need to be completed.

      Available types of enrichment:

      • constant

        This type of enrichment is used when a constant needs to be added to an event field. Settings of this type of enrichment:

        • In the Constant field, specify the value that should be added to the event field. The value should not be longer than 255 Unicode characters. If you leave this field blank, the existing event field value will be cleared.
        • In the Target field drop-down list, select the KUMA event field to which you want to write the data.

      • dictionary

        This type of enrichment is used if you need to add a value from the dictionary to the event field.

        When this type is selected in the Dictionary name drop-down list, you must select the dictionary that will provide the values. In the Key fields settings block, you must use the Add field button to select the event fields whose values will be used for dictionary entry selection.

      • event

        This type of enrichment is used when you need to write a value from another event field to the current event field. Settings of this type of enrichment:

        • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
        • In the Source field drop-down list, select the event field whose value will be written to the target field.
        • Clicking the wrench-new button opens the Conversion window in which you can, using the Add conversion button, create rules for modifying the original data before writing them to the KUMA event fields.

          Available conversions

          Conversions are changes that can be applied to a value before it gets written to the event field. The conversion type is selected from a drop-down list.

          Available conversions:

          • lower—is used to make all characters of the value lowercase.
          • upper—is used to make all characters of the value uppercase.
          • regexp—is used to convert a value using an RE2 regular expression. When this conversion type is selected, a field appears where the regular expression should be added.
          • substring—is used to extract characters in the position range specified in the Start and End fields. These fields appear when this conversion type is selected.
          • replace—is used to replace a specified character sequence with another character sequence. When this type of conversion is selected, new fields appear:
            • Replace chars—in this field you can specify the character sequence that should be replaced.
            • With chars—in this field you can specify the character sequence that should be used instead of the replaced characters.
          • trim—used to simultaneously remove the characters specified in the Chars field from the leading and end positions of the value. The field appears when this type of conversion is selected. For example, a trim conversion with the Micromon value applied to Microsoft-Windows-Sysmon results in soft-Windows-Sys.
          • append—is used to add the characters specified in the Constant field to the end of the event field value. The field appears when this type of conversion is selected.
          • prepend—is used to prepend the characters specified in the Constant field to the start of the event field value. The field appears when this type of conversion is selected.
          • replace with regexp—is used to replace the results of an RE2 regular expression with a character sequence. When this type of conversion is selected, new fields appear:
            • Expression—in this field you can specify the regular expression whose results should be replaced.
            • With chars—in this field you can specify the character sequence that should be used instead of the replaced characters.
      • template

        This type of enrichment is used when you need to write a value obtained by processing Go templates into the event field. Settings of this type of enrichment:

        • Put the Go template into the Template field.

          Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field from which the value must be passed to the script.

          Example: Attack on {{.DestinationAddress}} from {{.SourceAddress}}.

        • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
    • Debug—you can use this drop-down list to enable logging of service operations.
    • Description—the description of a resource. Up to 256 Unicode characters.
    • Filter settings block—lets you select which events will be forwarded for enrichment. Configuration is performed as described above.
  • Categorization settings group—used to change the categories of assets indicated in events. There can be several categorization rules. You can add or delete them by using the Add categorization or Remove categorization buttons. Only reactive categories can be added to assets or removed from assets.
    • Operation—this drop-down list is used to select the operation to perform on the category:
      • Add—assign the category to the asset.
      • Delete—unbind the asset from the category.
    • Event field—event field that indicates the asset requiring the operation.
    • Category ID—you can click the parent-category button to select the category requiring the operation. Clicking this button opens the Select categories window showing the category tree.
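
The conversions documented in the Enrichment settings block above can be sketched as a single function. This is an illustrative approximation, not KUMA code: Python's re module is not RE2, so the replace with regexp case only mirrors simple patterns, and the parameter names are assumptions for the example. The trim case reproduces the documented Micromon example exactly.

```python
import re

def convert(value: str, kind: str, **params) -> str:
    """Apply one documented conversion to a value (hypothetical sketch)."""
    if kind == "lower":
        return value.lower()
    if kind == "upper":
        return value.upper()
    if kind == "substring":
        # Extract characters in the position range [start, end).
        return value[params["start"]:params["end"]]
    if kind == "replace":
        return value.replace(params["replace_chars"], params["with_chars"])
    if kind == "trim":
        # Remove the characters in params["chars"] from both ends of the value.
        return value.strip(params["chars"])
    if kind == "replace with regexp":
        # Approximation: Python's re, not RE2.
        return re.sub(params["expression"], params["with_chars"], value)
    raise ValueError(f"unknown conversion: {kind}")
```

For example, `convert("Microsoft-Windows-Sysmon", "trim", chars="Micromon")` yields `"soft-Windows-Sys"`, matching the documentation's trim example.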
Page top

[Topic 221199]

Simple correlation rules

Simple correlation rules are used to define simple sequences of events.

The correlation rule resource window contains the following configuration tabs:

  • General—used to specify the main settings of the correlation rule resource. On this tab, you can select the type of correlation rule.
  • Selectors—used to define the conditions that the processed events must fulfill to trigger the correlation rule. Available parameters vary based on the selected resource type.
  • Actions—used to set the triggers that will activate when the conditions configured in the Selectors settings block are fulfilled. The Correlation rule resource must have at least one trigger. Available parameters vary based on the selected resource type.

General tab

  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—the tenant that owns the correlation rule.
  • Type (required)—a drop-down list for selecting the type of correlation rule. Select simple if you want to create a simple correlation rule.
  • Propagated fields (required)—event fields used for event selection. If the selector (see below) is triggered, these fields will be written to the correlation event.
  • Rate limit—maximum number of times a correlation rule can be triggered per second. The default value is 100.

    If correlation rules employing complex logic for pattern detection are not triggered, this may be due to the specific method used to count rule triggers in KUMA. In this case, try to increase the value of Rate limit to 1000000, for example.

  • Priority—base coefficient used to determine the importance of a correlation rule. The default value is Low.
  • Description—the description of a resource. Up to 256 Unicode characters.

Selectors tab

In a simple-type resource, there can be only one selector for which the Settings and Local variables tabs are available.

The Settings tab contains settings with the Filter settings block:

  • Filter (required)—used to set the criteria for determining events that should trigger the selector. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list, you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the InSubnet, InActiveList, InCategory or InActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit-grey button.

    Filtering based on data from the Extra event field

    Conditions for filters based on data from the Extra event field:

    • Condition—If.
    • Left operand—event field. In this event field, you can specify one of the following values:
      • Extra field.
      • Value from the Extra field in the following format:

        Extra.<field name>

        For example, Extra.app.

        A value of this type is specified manually.

      • Value from the array written to the Extra field in the following format:

        Extra.<field name>.<array element>

        For example, Extra.array.0.

        The values in the array are numbered starting from 0.

        A value of this type is specified manually.

    • Operator—=.
    • Right operand—constant.
    • Value—the value by which you need to filter events.
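
The inSubnet operator listed above checks whether an IP address (left operand) belongs to a subnet (right operand). As a hedged illustration of that semantics, here is a short sketch using Python's standard ipaddress module; the helper function, addresses, and subnet are example values, not part of KUMA.

```python
import ipaddress

def in_subnet(address: str, subnet: str) -> bool:
    """True if the IP address falls within the given subnet (CIDR notation)."""
    return ipaddress.ip_address(address) in ipaddress.ip_network(subnet)
```

For example, `in_subnet("10.1.2.3", "10.0.0.0/8")` returns `True`, while `in_subnet("192.168.1.1", "10.0.0.0/8")` returns `False`.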

On the Local variables tab, use the Add variable button to declare variables that will be used within the limits of this correlation rule.

Actions tab

There can be only one trigger in the simple resource kind: On every event. It is activated every time the selector triggers.

Available parameters of the trigger:

  • Output—if this check box is selected, the correlation event will be sent for post-processing: for enrichment, for a response, and to destinations.
  • Loop—if this check box is selected, the correlation event will be processed by the current correlation rule resource. This allows hierarchical correlation.

    If both check boxes are selected, the correlation event will be sent for post-processing first and then to the current correlation rule selectors.

  • Do not create alert—if this check box is selected, an alert will not be created when this correlation rule is triggered.
  • Active lists update settings group—used to assign the trigger for one or more operations with active lists. You can use the Add active list action and Delete active list action buttons to add or delete operations with active lists, respectively.

    Available settings:

    • Name (required)—this drop-down list is used to select the Active list resources.
    • Operation (required)—this drop-down list is used to select the operation that must be performed:
      • Get—get the Active list entry and write the values of the selected fields into the correlation event.
      • Set—write the values of the selected fields of the correlation event into the Active list by creating a new or updating an existing Active list entry. When the Active list entry is updated, the data is merged and only the specified fields are overwritten.
      • Delete—delete the Active list entry.
    • Key fields (required)—this is the list of event fields used to create the Active list entry. It is also used as the Active list entry key.

      The active list entry key depends on the available fields and does not depend on the order in which they are displayed in the KUMA web interface.

    • Mapping (required for Get and Set operations)—used to map Active list fields to event fields. More than one mapping rule can be set.
      • The left field is used to specify the Active list field.

        The field must not contain special characters or numbers only.

      • The middle drop-down list is used to select event fields.
      • The right field can be used to assign a constant to the Active list field if the Set operation was selected.
  • Enrichment settings block—you can update the field values of correlation events by using enrichment rules similar to enrichment rule resources. These enrichment rules are stored in the Correlation rule resource where they were created. It is possible to have more than one enrichment rule. Enrichment rules can be added or deleted by using the Add enrichment or Remove enrichment buttons, respectively.
    • Source kind—you can select the type of enrichment in this drop-down list. Depending on the selected type, you may see advanced settings that will also need to be completed.

      Available types of enrichment:

      • constant

        This type of enrichment is used when a constant needs to be added to an event field. Settings of this type of enrichment:

        • In the Constant field, specify the value that should be added to the event field. The value should not be longer than 255 Unicode characters. If you leave this field blank, the existing event field value will be cleared.
        • In the Target field drop-down list, select the KUMA event field to which you want to write the data.

      • dictionary

        This type of enrichment is used if you need to add a value from the dictionary to the event field.

        When this type is selected in the Dictionary name drop-down list, you must select the dictionary that will provide the values. In the Key fields settings block, you must use the Add field button to select the event fields whose values will be used for dictionary entry selection.

      • event

        This type of enrichment is used when you need to write a value from another event field to the current event field. Settings of this type of enrichment:

        • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
        • In the Source field drop-down list, select the event field whose value will be written to the target field.
        • Clicking the wrench-new button opens the Conversion window in which you can, using the Add conversion button, create rules for modifying the original data before writing them to the KUMA event fields.

          Available conversions

          Conversions are changes that can be applied to a value before it gets written to the event field. The conversion type is selected from a drop-down list.

          Available conversions:

          • lower—is used to make all characters of the value lowercase.
          • upper—is used to make all characters of the value uppercase.
          • regexp—is used to convert a value using an RE2 regular expression. When this conversion type is selected, a field appears where the regular expression should be added.
          • substring—is used to extract characters in the position range specified in the Start and End fields. These fields appear when this conversion type is selected.
          • replace—is used to replace a specified character sequence with another character sequence. When this type of conversion is selected, new fields appear:
            • Replace chars—in this field you can specify the character sequence that should be replaced.
            • With chars—in this field you can specify the character sequence that should be used instead of the replaced characters.
          • trim—used to simultaneously remove the characters specified in the Chars field from the leading and end positions of the value. The field appears when this type of conversion is selected. For example, a trim conversion with the Micromon value applied to Microsoft-Windows-Sysmon results in soft-Windows-Sys.
          • append—is used to add the characters specified in the Constant field to the end of the event field value. The field appears when this type of conversion is selected.
          • prepend—is used to prepend the characters specified in the Constant field to the start of the event field value. The field appears when this type of conversion is selected.
          • replace with regexp—is used to replace the results of an RE2 regular expression with a character sequence. When this type of conversion is selected, new fields appear:
            • Expression—in this field you can specify the regular expression whose results should be replaced.
            • With chars—in this field you can specify the character sequence that should be used instead of the replaced characters.
      • template

        This type of enrichment is used when you need to write a value obtained by processing Go templates into the event field. Settings of this type of enrichment:

        • Put the Go template into the Template field.

          Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field whose value must be passed to the template.

          Example: Attack on {{.DestinationAddress}} from {{.SourceAddress}}.

        • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
    • Debug—you can use this drop-down list to enable logging of service operations.
    • Description—the description of a resource. Up to 256 Unicode characters.
    • Filter settings block—lets you select which events will be forwarded for enrichment. Configuration is performed as described above.
  • Categorization settings group—used to change the categories of assets indicated in events. There can be several categorization rules. You can add or delete them by using the Add categorization or Remove categorization buttons. Only reactive categories can be added to assets or removed from assets.
    • Operation—this drop-down list is used to select the operation to perform on the category:
      • Add—assign the category to the asset.
      • Delete—unbind the asset from the category.
    • Event field—event field that indicates the asset requiring the operation.
    • Category ID—you can click the parent-category button to select the category requiring the operation. Clicking this button opens the Select categories window showing the category tree.
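The template enrichment described above uses standard Go text/template syntax. The sketch below illustrates how {{.EventField}} substitution behaves; the Event struct and its field values are hypothetical stand-ins, not the full KUMA event data model.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Event is a hypothetical struct with two fields from the Help example.
type Event struct {
	SourceAddress      string
	DestinationAddress string
}

// renderTemplate mimics the template enrichment: field names are referenced
// as {{.EventField}} inside the template text.
func renderTemplate(tpl string, e Event) string {
	t := template.Must(template.New("enrich").Parse(tpl))
	var buf bytes.Buffer
	if err := t.Execute(&buf, e); err != nil {
		return ""
	}
	return buf.String()
}

func main() {
	e := Event{SourceAddress: "10.0.0.5", DestinationAddress: "10.0.0.7"}
	// Prints: Attack on 10.0.0.7 from 10.0.0.5.
	fmt.Println(renderTemplate("Attack on {{.DestinationAddress}} from {{.SourceAddress}}.", e))
}
```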

Page top

[Topic 221203]

Operational correlation rules

Operational correlation rules are used for working with active lists.

The correlation rule resource window contains the following tabs:

  • General—used to specify the main settings of the correlation rule resource. On this tab, you can select the type of correlation rule.
  • Selectors—used to define the conditions that the processed events must fulfill to trigger the correlation rule. Available parameters vary based on the selected resource type.
  • Actions—used to set the triggers that will activate when the conditions configured in the Selectors settings block are fulfilled. The Correlation rule resource must have at least one trigger. Available parameters vary based on the selected resource type.

General tab

  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—the tenant that owns the correlation rule.
  • Type (required)—a drop-down list for selecting the type of correlation rule. Select operational if you want to create an operational correlation rule.
  • Rate limit—maximum number of times a correlation rule can be triggered per second. The default value is 100.

    If correlation rules that employ complex logic for pattern detection are not triggered, this may be due to the specific way rule triggers are counted in KUMA. In this case, try increasing the Rate limit value, for example to 1000000.

  • Description—the description of a resource. Up to 256 Unicode characters.

Selectors tab

In an operational-type resource, there can be only one selector for which the Settings and Local variables tabs are available.

The Settings tab contains settings with the Filter settings block:

  • Filter (required)—used to set the criteria for determining events that should trigger the selector. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit-grey button.
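Several of the filter operators correspond to standard checks. The sketch below shows illustrative Go equivalents (the helper names are made up, not KUMA's implementation); notably, Go's regexp package implements RE2, the same regular expression syntax the match operator uses.

```go
package main

import (
	"fmt"
	"net"
	"regexp"
	"strings"
)

// inSubnet: the left operand (an IP address) is inside the right operand (a subnet).
func inSubnet(ip, cidr string) bool {
	_, network, err := net.ParseCIDR(cidr)
	if err != nil {
		return false
	}
	return network.Contains(net.ParseIP(ip))
}

// match: the left operand matches an RE2 regular expression.
func match(value, expr string) bool {
	matched, err := regexp.MatchString(expr, value)
	return err == nil && matched
}

func main() {
	fmt.Println(inSubnet("192.168.1.10", "192.168.0.0/16"))      // true
	fmt.Println(match("user@example.com", `@example\.com$`))     // true
	// startsWith and endsWith are ordinary prefix/suffix checks:
	fmt.Println(strings.HasPrefix("Microsoft-Windows", "Micro")) // true
}
```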

    Filtering based on data from the Extra event field

    Conditions for filters based on data from the Extra event field:

    • Condition—If.
    • Left operand—event field.
    • In this event field, you can specify one of the following values:
      • Extra field.
      • Value from the Extra field in the following format:

        Extra.<field name>

        For example, Extra.app.

        A value of this type is specified manually.

      • Value from the array written to the Extra field in the following format:

        Extra.<field name>.<array element>

        For example, Extra.array.0.

        The values in the array are numbered starting from 0.

        A value of this type is specified manually.

    • Operator – =.
    • Right operand—constant.
    • Value—the value by which you need to filter events.

On the Local variables tab, use the Add variable button to declare variables that will be used within the scope of this correlation rule.

Actions tab

An operational-type resource can have only one trigger: On every event. It is activated every time the selector triggers.

Available parameters of the trigger:

  • Active lists update settings group—used to assign the trigger for one or more operations with active lists. You can use the Add active list action and Delete active list action buttons to add or delete operations with active lists, respectively.

    Available settings:

    • Name (required)—this drop-down list is used to select the Active list resources.
    • Operation (required)—this drop-down list is used to select the operation that must be performed:
      • Get—get the Active list entry and write the values of the selected fields into the correlation event.
      • Set—write the values of the selected fields of the correlation event into the Active list by creating a new or updating an existing Active list entry. When the Active list entry is updated, the data is merged and only the specified fields are overwritten.
      • Delete—delete the Active list entry.
    • Key fields (required)—this is the list of event fields used to create the Active list entry. It is also used as the Active list entry key.

      The active list entry key depends on the available fields and does not depend on the order in which they are displayed in the KUMA web interface.

    • Mapping (required for Get and Set operations)—used to map Active list fields to event fields. More than one mapping rule can be set.
      • The left field is used to specify the Active list field.

        The field must not contain special characters or numbers only.

      • The middle drop-down list is used to select event fields.
      • The right field can be used to assign a constant to the Active list field if the Set operation was selected.
Page top

[Topic 234114]

Variables in correlators

If tracking values in event fields, active lists, or dictionaries is not enough to cover some specific security scenarios, you can use global and local variables. You can use them to take various actions on the values received by the correlators by implementing complex logic for threat detection. Variables can be declared in the correlator (global variables) or in the correlation rule (local variables) by assigning a function to them, then querying them from correlation rules as if they were ordinary event fields and receiving the triggered function result in response.

Usage scope of variables:

  • When searching for grouping or unique field values in correlation rules.
  • In the correlation rule selectors, in the filters of the conditions under which the correlation rule should be triggered.
  • When enriching correlation events. Select Event as the source type.
  • When filling active lists with values.

Variables can be queried the same way as event fields by preceding their names with the $ character.

In this section

Properties of variables

Requirements for variables

Functions of variables

Declaring variables

Page top

[Topic 234737]

Properties of variables

Local and global variables

The properties of global variables differ from the properties of local variables.

Global variables:

  • Global variables are declared at the correlator level and are applied only within the scope of this correlator.
  • The global variables of the correlator can be queried from all correlation rules that are specified in it.
  • In standard correlation rules, the same global variable can take different values in each selector.
  • It is not possible to transfer global variables between different correlators.

Local variables:

  • Local variables are declared at the correlation rule level and are applied only within the scope of this rule.
  • In standard correlation rules, the scope of a local variable consists of only the selector in which the variable was declared.
  • Local variables can be declared in any type of correlation rule.
  • Local variables cannot be transferred between rules or selectors.
  • A local variable cannot be used as a global variable.

Variables used in various types of correlation rules

  • In operational correlation rules, on the Actions tab, you can specify all variables available or declared in this rule.
  • In standard correlation rules, on the Actions tab, you can provide only those variables specified in these rules on the General tab, in the Identical fields field.
  • In simple correlation rules, on the Actions tab, you can provide only those variables specified in these rules on the General tab, in the Inherited Fields field.

Page top

[Topic 234739]

Requirements for variables

When adding a variable function, you must first specify the name of the function, and then list its parameters in parentheses. Basic mathematical operations (addition, subtraction, multiplication, division) are an exception to this requirement. When these operations are used, parentheses designate the precedence of the operations.

Requirements for function names:

  • Must be unique within the correlator.
  • Must contain from 1 to 128 Unicode characters.
  • Must not begin with the character $.
  • Must be written in camelCase or CamelCase.

Special considerations when specifying functions of variables:

  • The sequence of parameters is important.
  • Parameters are separated by a comma: ,.
  • String parameters are passed in single quotes: '.
  • Event field names and variables are specified without quotation marks.
  • When querying a variable as a parameter, add the $ character before its name.
  • You do not need to add a space between parameters.
  • In all functions in which a variable can be used as parameters, nested functions can be created.
Page top

[Topic 234740]

Functions of variables

Operations with active lists and dictionaries

"active_list" function

Gets information from the active list regarding the value in the specified column.

You must specify the parameters in the following sequence:

  1. Name of the active list
  2. Name of the active list column
  3. Active list record key

    The name of one or more event fields is used as the record key of the active list.

    Usage example

    Result

    active_list('exampleActiveList', 'score', SourceAddress,SourceUserName)

    Gets data from exampleActiveList from the SourceAddress,SourceUserName record in the score column.

"table_dict" function

Gets information about the value in the specified column of a dictionary of the table type.

You must specify the parameters in the following sequence:

  1. Dictionary name
  2. Dictionary column name
  3. Dictionary row key

    Usage example

    Result

    table_dict('exampleTableDict', 'office', SourceUserName)

    Gets data from the exampleTableDict dictionary from the row with the SourceUserName key in the office column.

"dict" function

Gets information about the value in the specified column of a dictionary of the dictionary type.

You must specify the parameters in the following sequence:

  1. Dictionary name
  2. Dictionary row key

    Usage example

    Result

    dict('exampleDictionary', SourceAddress)

    Gets data from exampleDictionary from the row with the SourceAddress key.
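The dictionary functions behave like ordinary key lookups. The rough Go sketch below mirrors that behavior; the dictionary contents, keys, and helper names are invented for illustration and are not part of KUMA.

```go
package main

import "fmt"

// A dictionary-type resource maps a row key to a single value.
var exampleDictionary = map[string]string{
	"192.0.2.1": "dmz-gateway",
}

// A table-type dictionary additionally selects a column within the row.
var exampleTableDict = map[string]map[string]string{
	"jsmith": {"office": "HQ-3", "department": "IT"},
}

// dictLookup mimics dict('exampleDictionary', SourceAddress).
func dictLookup(d map[string]string, key string) string {
	return d[key]
}

// tableDictLookup mimics table_dict('exampleTableDict', 'office', SourceUserName).
func tableDictLookup(d map[string]map[string]string, column, key string) string {
	return d[key][column]
}

func main() {
	fmt.Println(dictLookup(exampleDictionary, "192.0.2.1"))            // dmz-gateway
	fmt.Println(tableDictLookup(exampleTableDict, "office", "jsmith")) // HQ-3
}
```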

Operation with rows

"len" function

Returns the number of characters in a string.

A string can be passed as a string, field name or variable.

Usage examples

len('SomeText')

len(Message)

len($otherVariable)

"to_lower" function

Converts characters in a string to lowercase.

A string can be passed as a string, field name or variable.

Usage examples

to_lower(SourceUserName)

to_lower('SomeText')

to_lower($otherVariable)

"to_upper" function

Converts characters in a string to uppercase. A string can be passed as a string, field name or variable.

Usage examples

to_upper(SourceUserName)

to_upper('SomeText')

to_upper($otherVariable)

"append" function

Adds characters to the end of a string.

You must specify the parameters in the following sequence:

  1. Original string.
  2. Added string.

Strings can be passed as a string, field name or variable.

Usage examples

Usage result

append(Message, '123')

The string 123 is added to the end of the string from the Message field.

append($otherVariable, 'text')

The string text is added to the end of the string from the otherVariable variable.

append(Message, $otherVariable)

The string from the otherVariable variable is added to the end of the string from the Message field.

"prepend" function

Adds characters to the beginning of a string.

You must specify the parameters in the following sequence:

  1. Original string.
  2. Added string.

Strings can be passed as a string, field name or variable.

Usage examples

Usage result

prepend(Message, '123')

The string 123 is added to the beginning of the string from the Message field.

prepend($otherVariable, 'text')

The string text is added to the beginning of the string from the otherVariable variable.

prepend(Message, $otherVariable)

The string from the otherVariable variable is added to the beginning of the string from the Message field.

"substring" function

Returns a substring from a string. 

You must specify the parameters in the following sequence:

  1. Original string.
  2. Substring start position (natural number or 0).
  3. (Optional) substring end position.

Strings can be passed as a string, field name or variable. If the position number is greater than the original data string length, an empty string is returned.

Usage examples

Usage result

substring(Message, 2)

Returns the part of the string from the Message field, from the 3rd character to the end.

substring($otherVariable, 2, 5)

Returns the part of the string from the otherVariable variable, from the 3rd to the 6th character.

substring(Message, 0, len(Message) - 1)

Returns the entire string from the Message field except the last character.

"tr" function

Deletes the specified characters from the beginning and end of a string.

You must specify the parameters in the following sequence:

  1. Original string.
  2. (Optional) string that should be removed from the beginning and end of the original string.

Strings can be passed as a string, field name or variable. If you do not specify a string to be deleted, spaces will be removed from the beginning and end of the original string.

Usage examples

Usage result

tr(Message)

Spaces have been removed from the beginning and end of the string from the Message field.

tr($otherVariable, '_')

If the otherVariable variable has the _test_ value, the string test is returned.

tr(Message, '@example.com')

If the Message event field contains the string user@example.com, the string user is returned.
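The tr function appears to treat its second parameter as a set of characters to strip from both ends, the same semantics as the trim conversion described for enrichment. Under that assumption it behaves like Go's strings.Trim; the helper below is an illustrative sketch, not KUMA's implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// trExample mimics the "tr" function: the second argument is a set of
// characters removed from the beginning and end of the string; when it is
// empty, spaces are trimmed instead.
func trExample(s, cutset string) string {
	if cutset == "" {
		return strings.TrimSpace(s)
	}
	return strings.Trim(s, cutset)
}

func main() {
	fmt.Println(trExample("  hello  ", ""))                    // hello
	fmt.Println(trExample("user@example.com", "@example.com")) // user
}
```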

"replace" function

Replaces all occurrences of character sequence A in a string with character sequence B.

You must specify the parameters in the following sequence:

  1. Original string.
  2. Search string: sequence of characters to be replaced.
  3. Replacement string: sequence of characters to replace the search string.

Strings can be passed as a string, field name or variable.

Usage examples

Usage result

replace(Name, 'UserA', 'UserB')

Returns a string from the Name event field in which all occurrences of UserA are replaced with UserB.

replace($otherVariable, ' text ', '_text_')

Returns a string from the otherVariable variable in which all occurrences of ' text ' are replaced with '_text_'.

"regexp_replace" function

Replaces a sequence of characters that match a regular expression with a sequence of characters and regular expression capturing groups.

You must specify the parameters in the following sequence:

  1. Original string.
  2. Search string: regular expression.
  3. Replacement string: sequence of characters to replace the search string; it can include IDs of the regular expression capturing groups.

Strings can be passed as a string, field name or variable. Unnamed capturing groups can be used.

Usage examples

Usage result

regexp_replace(SourceAddress, '([0-9]{1,3}).([0-9]{1,3}).([0-9]{1,3}).([0-9]{1,3})', 'newIP: $1.$2.$3.10')

Returns a string from the SourceAddress event field in which the text newIP is inserted before the IP addresses. In addition, the last digits of the address are replaced with 10.
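Go's regexp package (RE2) reproduces this example directly, with $1..$4 referencing the unnamed capturing groups. The helper below is an illustrative sketch, not KUMA's implementation; note that the Help example leaves the dots between octets unescaped, which works but matches any character.

```go
package main

import (
	"fmt"
	"regexp"
)

// regexpReplace mimics "regexp_replace": every match of expr in s is replaced
// with repl, which may reference capturing groups as $1, $2, and so on.
func regexpReplace(s, expr, repl string) string {
	return regexp.MustCompile(expr).ReplaceAllString(s, repl)
}

func main() {
	// Prints: newIP: 192.168.1.10
	fmt.Println(regexpReplace(
		"192.168.1.1",
		`([0-9]{1,3}).([0-9]{1,3}).([0-9]{1,3}).([0-9]{1,3})`,
		"newIP: $1.$2.$3.10",
	))
}
```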

"regexp_capture" function

Gets the result matching the regular expression condition from the original string.

You must specify the parameters in the following sequence:

  1. Original string.
  2. Search string: regular expression.

Strings can be passed as a string, field name or variable. Unnamed capturing groups can be used.

Usage examples

Example values

Usage result

regexp_capture(Message, '(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})')

Message = 'Access from 192.168.1.1 session 1'

Message = 'Access from 45.45.45.45 translated address 192.168.1.1 session 1'

'192.168.1.1'

'45.45.45.45'
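The same example can be reproduced with Go's regexp package: returning the first fragment of the original string that matches the expression. The helper below is an illustrative sketch (it uses FindString, so the capturing group from the Help example is not strictly needed).

```go
package main

import (
	"fmt"
	"regexp"
)

// regexpCapture mimics "regexp_capture": it returns the first fragment of the
// original string that matches the RE2 regular expression.
func regexpCapture(s, expr string) string {
	return regexp.MustCompile(expr).FindString(s)
}

func main() {
	ip := `\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}`
	fmt.Println(regexpCapture("Access from 192.168.1.1 session 1", ip))                          // 192.168.1.1
	fmt.Println(regexpCapture("Access from 45.45.45.45 translated address 192.168.1.1 session 1", ip)) // 45.45.45.45
}
```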

Operations with timestamps

"now" function

Gets a timestamp in epoch format. Runs with no arguments.

Usage examples

now()

"extract_from_timestamp" function

Gets atomic time representations (year, month, day, hour, minute, second, day of the week) from fields and variables with time in the epoch format.

The parameters must be specified in the following sequence:

  1. Event field of the timestamp type, or variable.
  2. Notation of the atomic time representation. This parameter is case sensitive.

    Possible variants of atomic time notation:

    • y refers to the year in number format.
    • M refers to the month in number notation.
    • d refers to the number of the month.
    • wd refers to the day of the week: Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday.
    • h refers to the hour in 24-hour format.
    • m refers to the minutes.
    • s refers to the seconds.
  3. (optional) Time zone notation. If this parameter is not specified, the time is calculated in UTC format.

    Usage examples

    extract_from_timestamp(Timestamp, 'wd')

    extract_from_timestamp(Timestamp, 'h')

    extract_from_timestamp($otherVariable, 'h')

    extract_from_timestamp(Timestamp, 'h', 'Europe/Moscow')

"parse_timestamp" function

Converts the time from RFC3339 format (for example, "2022-05-24 00:00:00", "2022-05-24 00:00:00+0300") to epoch format.

Usage examples

parse_timestamp(Message)

parse_timestamp($otherVariable)

"format_timestamp" function

Converts the time from epoch format to RFC3339 format.

The parameters must be specified in the following sequence:

  1. Event field of the timestamp type, or variable.
  2. Time format notation: RFC3339.
  3. (optional) Time zone notation. If this parameter is not specified, the time is calculated in UTC format.

    Usage examples

    format_timestamp(Timestamp, 'RFC3339')

    format_timestamp($otherVariable, 'RFC3339')

    format_timestamp(Timestamp, 'RFC3339', 'Europe/Moscow')
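Assuming epoch format here means milliseconds (consistent with the examples elsewhere in this topic), the two conversions can be sketched with Go's time package; the helper names below are illustrative, not KUMA's implementation.

```go
package main

import (
	"fmt"
	"time"
)

// parseTimestamp mimics "parse_timestamp": RFC3339 text to epoch milliseconds.
func parseTimestamp(s string) int64 {
	t, err := time.Parse(time.RFC3339, s)
	if err != nil {
		return 0
	}
	return t.UnixMilli()
}

// formatTimestamp mimics "format_timestamp": epoch milliseconds to RFC3339
// text, in the given time zone (UTC when the zone cannot be loaded).
func formatTimestamp(ms int64, tz string) string {
	loc, err := time.LoadLocation(tz)
	if err != nil {
		loc = time.UTC
	}
	return time.UnixMilli(ms).In(loc).Format(time.RFC3339)
}

func main() {
	ms := parseTimestamp("2022-05-24T00:00:00+03:00")
	fmt.Println(ms)                          // 1653339600000
	fmt.Println(formatTimestamp(ms, "UTC"))  // 2022-05-23T21:00:00Z
}
```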

"truncate_timestamp" function

Rounds the time in epoch format. After rounding, the time is returned in epoch format. Time is rounded down.

The parameters must be specified in the following sequence:

  1. Event field of the timestamp type, or variable.
  2. Rounding parameter:
    • 1s rounds to the nearest second.
    • 1m rounds to the nearest minute.
    • 1h rounds to the nearest hour.
    • 24h rounds to the nearest day.
  3. (optional) Time zone notation. If this parameter is not specified, the time is calculated in UTC format.

    Usage examples

    Examples of rounded values

    Usage result

    truncate_timestamp(Timestamp, '1m')

    1654631774175 (7 June 2022, 19:56:14.175)

    1654631760000 (7 June 2022, 19:56:00)

    truncate_timestamp($otherVariable, '1h')

    1654631774175 (7 June 2022, 19:56:14.175)

    1654628400000 (7 June 2022, 19:00:00)

    truncate_timestamp(Timestamp, '24h', 'Europe/Moscow')

    1654631774175 (7 June 2022, 19:56:14.175)

    1654560000000 (7 June 2022, 0:00:00)

"time_diff" function

Gets the time interval between two timestamps in epoch format.

The parameters must be specified in the following sequence:

  1. Interval end time. Event field of the timestamp type, or variable.
  2. Interval start time. Event field of the timestamp type, or variable.
  3. Time interval notation:
    • ms refers to milliseconds.
    • s refers to seconds.
    • m refers to minutes.
    • h refers to hours.
    • d refers to days.

    Usage examples

    time_diff(EndTime, StartTime, 's')  

    time_diff($otherVariable, Timestamp, 'h')

    time_diff(Timestamp, DeviceReceiptTime, 'd')

Mathematical operations

These include basic mathematical operations and functions.

Basic mathematical operations

Operations:

  • Addition
  • Subtraction
  • Multiplication
  • Division
  • Modulo division

Parentheses determine the sequence of actions.

Available arguments:

  • Numeric event fields
  • Numeric variables
  • Real numbers

    When modulo dividing, only natural numbers can be used as arguments.

Usage constraints:

  • Division by zero returns zero.
  • Mathematical operations between numbers and strings return zero.
  • Integers resulting from operations are returned without a dot.

    Usage examples

    (Type=3; otherVariable=2; Message=text)

    Usage result

    Type + 1

    4

    $otherVariable - Type

    -1

    2 * 2.5

    5

    2 / 0

    0

    Type * Message

    0

    (Type + 2) * 2

    10

    Type % $otherVariable

    1
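The usage constraints above can be mirrored in a small sketch; the safeDiv helper is hypothetical and only illustrates the stated behavior that division by zero returns zero (KUMA's arithmetic is built in).

```go
package main

import "fmt"

// safeDiv returns zero on division by zero, matching the stated constraint.
func safeDiv(a, b float64) float64 {
	if b == 0 {
		return 0
	}
	return a / b
}

func main() {
	fmt.Println(safeDiv(2, 0))  // 0
	fmt.Println(2 * 2.5)        // 5 (an integer result carries no fractional part)
	fmt.Println((3 + 2) * 2)    // 10
}
```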

"round" function

Rounds numbers. 

Available arguments:

  • Numeric event fields
  • Numeric variables
  • Numeric constants

    Usage examples

    (DeviceCustomFloatingPoint1=7.75; DeviceCustomFloatingPoint2=7.5; otherVariable=7.2)

    Usage result

    round(DeviceCustomFloatingPoint1)

    8

    round(DeviceCustomFloatingPoint2)

    8

    round($otherVariable)

    7

"ceil" function

Rounds up numbers.

Available arguments:

  • Numeric event fields
  • Numeric variables
  • Numeric constants

    Usage examples

    (DeviceCustomFloatingPoint1=7.15; otherVariable=8.2)

    Usage result

    ceil(DeviceCustomFloatingPoint1)

    8

    ceil($otherVariable)

    9

"floor" function

Rounds down numbers.

Available arguments:

  • Numeric event fields
  • Numeric variables
  • Numeric constants

    Usage examples

    (DeviceCustomFloatingPoint1=7.15; otherVariable=8.2)

    Usage result

    floor(DeviceCustomFloatingPoint1)

    7

    floor($otherVariable)

    8

"abs" function

Gets the modulus of a number.

Available arguments:

  • Numeric event fields
  • Numeric variables
  • Numeric constants

    Usage examples

    (DeviceCustomNumber1=-7; otherVariable=-2)

    Usage result

    abs(DeviceCustomNumber1)

    7

    abs($otherVariable)

    2

"pow" function

Exponentiates a number.

The parameters must be specified in the following sequence:

  1. Base. Real numbers.
  2. Power. Natural numbers.

Available arguments:

  • Numeric event fields
  • Numeric variables
  • Numeric constants

    Usage examples

    pow(DeviceCustomNumber1, DeviceCustomNumber2)

    pow($otherVariable, DeviceCustomNumber1)

Page top

[Topic 234738]

Declaring variables

To declare variables, they must be added to a correlator or correlation rule.

To add a global variable to an existing correlator:

  1. In the KUMA web interface, under Resources → Correlators, select the resource set of the relevant correlator.

    The Correlator Installation Wizard opens.

  2. Select the Global variables step of the Installation Wizard.
  3. Click the Add variable button and specify the following parameters:
    • In the Variable window, enter the name of the variable.

      Variable naming requirements

      • Must be unique within the correlator.
      • Must contain from 1 to 128 Unicode characters.
      • Must not begin with the character $.
      • Must be written in camelCase or CamelCase.
    • In the Value window, enter the variable function.

      Description of variable functions.

    Multiple variables can be added. Added variables can be edited or deleted by using the cross icon.

  4. Select the Setup validation step of the Installation Wizard and click Save.

A global variable is added to the correlator. It can be queried like an event field by inserting the $ character in front of the variable name. The variable will be used for correlation after restarting the correlator service.

To add a local variable to an existing correlation rule:

  1. In the KUMA web interface, under Resources → Correlation rules, select the resource of the relevant correlation rule.

    The correlation rule settings window opens. The parameters of a correlation rule can also be opened from the correlator to which it was added by proceeding to the Correlation step of the Installation Wizard.

  2. Open the Selectors tab.
  3. In the selector, open the Local variables tab, click the Add variable button and specify the following parameters:
    • In the Variable window, enter the name of the variable.

      Variable naming requirements

      • Must be unique within the correlator.
      • Must contain from 1 to 128 Unicode characters.
      • Must not begin with the character $.
      • Must be written in camelCase or CamelCase.
    • In the Value window, enter the variable function.

      Description of variable functions.

    Multiple variables can be added. Added variables can be edited or deleted by using the cross icon.

    For standard correlation rules, repeat this step for each selector in which you want to declare variables.

  4. Click Save.

The local variable is added to the correlation rule. It can be queried like an event field by inserting the $ character in front of the variable name. The variable will be used for correlation after restarting the correlator service.

Added variables can be edited or deleted. If the correlation rule queries an undeclared variable (for example, if its name has been changed), an empty string is returned.

If you change the name of a variable, you will need to manually change the name of this variable in all correlation rules where you have used it.

Page top

[Topic 217942]

Normalizers

Normalizer resources are used to convert raw events of various formats so that they conform to the KUMA event data model. This turns them into normalized events that can be processed by other KUMA resources and services.

A normalizer resource consists of the main normalizer and optional extra normalizers. Data is transmitted through a tree-like structure of normalizers depending on the defined conditions, which lets you configure complex logic for processing events.

A normalizer resource is created in several steps:

  1. Creating the main normalizer

    The main normalizer is created by using the Add event parsing button. Entry of normalizer settings is finished by clicking OK.

    The main normalizer that you created will be displayed as a dark circle. Clicking on the circle will open the normalizer options for editing. When you hover over the circle, a plus sign is displayed. Click it to add more normalizers.

  2. Creating conditions for using an extra normalizer

    Clicking on the normalizer plus sign opens the Add normalizer to normalization scheme window in which you can specify the conditions that will cause data to be forwarded to the new normalizer.

  3. Creating an extra normalizer

    When the previous step is finished, a window will open for creating an extra normalizer. Entry of normalizer settings is finished by clicking OK.

    The extra normalizer you created is displayed as a dark block that indicates the conditions under which this normalizer will be used (see step 2). The conditions can be changed by moving your mouse cursor over the extra normalizer and clicking the button showing the pencil image.

    If you hover the mouse pointer over the extra normalizer, a plus button appears, which you can use to create a new extra normalizer. To delete a normalizer, use the button with the trash icon.

    If you need to create more normalizers, repeat steps 2 and 3.

  4. Completing creation of a normalizer resource

    Normalizer resource creation is finished by clicking the Save button.

For these resources, you can enable the display of control characters in all input fields except the Description field.

If, when changing the settings of a collector resource set, you change or delete conversions in a normalizer connected to it, the edits will not be saved, and the normalizer resource itself may be corrupted. If you need to modify conversions in a normalizer that is already part of a service, the changes must be made directly to the resource under Resources → Normalizers in the web interface.

See also:

Requirements for variables

Page top

[Topic 221932]

Normalizer settings

The normalizer window contains two tabs: Normalization scheme and Enrichment.

Normalization scheme

This tab is used to specify the main settings of the normalizer and to define the rules for converting events into KUMA format.

Available settings:

  • Name (required)—the name of the normalizer. Must contain from 1 to 128 Unicode characters. The name of the main normalizer will be used as the name of the normalizer resource.
  • Tenant (required)—name of the tenant that owns the resource.

    This setting is not available for extra normalizers.

  • Parsing method (required)—drop-down list for selecting the type of incoming events. Depending on your choice, you can use the preconfigured rules for matching event fields or set your own rules. When you select some parsing methods, additional settings that must be filled in may become available.

    Available parsing methods:

    • json

      This parsing method is used to process JSON data.

      When processing files with hierarchically arranged data, you can access the fields of nested objects by specifying the names of the parameters dividing them by a period. For example, the username parameter from the string "user": {"username": "system: node: example-01"} can be accessed by using the user.username query.

    • cef

      This parsing method is used to process CEF data.

      When choosing this method, you can use the preconfigured rules for converting events to the KUMA format by clicking the Apply default mapping button.

    • regexp

      This parsing method is used to create custom rules for processing data by using RE2 regular expressions.

      In the Normalization parameter block field, add a regular expression (RE2 syntax) with named capture groups. The name of a group and its value will be interpreted as the field and the value of the raw event, which can be converted into an event field in KUMA format.

      To add event handling rules:

      1. Copy an example of the data you want to process to the Event examples field. This is an optional but recommended step.
      2. In the Normalization parameter block field add a regular expression with named capture groups in RE2 syntax, for example "(?P<name>regexp)".

        You can add multiple regular expressions by using the Add regular expression button. If you need to remove the regular expression, use the cross button.

      3. Click the Copy field names to the mapping table button.

        Capture group names are displayed in the KUMA field column of the Mapping table. Now you can select the corresponding KUMA field in the column next to each capture group. Otherwise, if you named the capture groups in accordance with the CEF format, you can use the automatic CEF mapping by selecting the Use CEF syntax for normalization check box.

      Event handling rules were added.

    • syslog

      This parsing method is used to process data in syslog format.

      When choosing this method, you can use the preconfigured rules for converting events to the KUMA format by clicking the Apply default mapping button.

    • csv

      This parsing method is used to create custom rules for processing CSV data.

      When choosing this method, you must specify the separator of values in the string in the Delimiter field. Any single-byte ASCII character can be used as a delimiter.

    • kv

      This parsing method is used to process data in key-value pair format.

      If you select this method, you must provide values in the following required fields:

      • Pair delimiter—specify a character that will serve as a delimiter for key-value pairs. You can specify any one-character (1 byte) value, provided that the character does not match the value delimiter.
      • Value delimiter—specify a character that will serve as a delimiter between the key and the value. You can specify any one-character (1 byte) value, provided that the character does not match the delimiter of key-value pairs.
    • xml

      This parsing method is used to process XML data.

      When this method is selected, in the XML Attributes parameter block you can specify the key attributes to be extracted from tags. If an XML structure has several attributes with different values in the same tag, you can indicate the necessary value by specifying its key in the Source column of the Mapping table.

      To add key XML attributes,

      Click the Add field button, and in the window that appears, specify the path to the required attribute.

      You can add more than one attribute. Attributes can be removed one at a time using the cross icon or all at once using the Reset button.

      If XML key attributes are not specified, then in the course of field mapping the unique path to the XML value will be represented by a sequence of tags.

    • netflow5

      This parsing method is used to process data in the NetFlow v5 format.

      When choosing this method, you can use the preconfigured rules for converting events to the KUMA format by clicking the Apply default mapping button.

      In mapping rules, the protocol type for netflow5 is not indicated in the fields of KUMA events by default. When parsing data in NetFlow format on the Enrichment normalizer tab, you should create a constant data enrichment rule that adds the netflow value to the DeviceProduct target field.

    • netflow9

      This parsing method is used to process data in the NetFlow v9 format.

      When choosing this method, you can use the preconfigured rules for converting events to the KUMA format by clicking the Apply default mapping button.

      In mapping rules, the protocol type for netflow9 is not indicated in the fields of KUMA events by default. When parsing data in NetFlow format on the Enrichment normalizer tab, you should create a constant data enrichment rule that adds the netflow value to the DeviceProduct target field.

    • sflow5

      This parsing method is used to process data in sFlow5 format.

      When choosing this method, you can use the preconfigured rules for converting events to the KUMA format by clicking the Apply default mapping button.

    • ipfix

      This parsing method is used to process IPFIX data.

      When choosing this method, you can use the preconfigured rules for converting events to the KUMA format by clicking the Apply default mapping button.

      In mapping rules, the protocol type for ipfix is not indicated in the fields of KUMA events by default. When parsing data in IPFIX format on the Enrichment normalizer tab, you should create a constant data enrichment rule that adds the netflow value to the DeviceProduct target field.

    • sql

      This parsing method is used to process SQL data.

  • Keep raw log (required)—in this drop-down list, you can indicate whether you need to store the original raw event in the newly created normalized event. Available values:
    • Never—do not save the raw event. This is the default setting.
    • Only errors—save the raw event in the Raw field of the normalized event if errors occurred when parsing it. This value is convenient to use when debugging a service. In this case, every time an event has a non-empty Raw field, you know there was a problem.

      If fields containing the names *Address or *Date* do not comply with normalization rules, these fields are ignored. No normalization error will occur, and the values of the fields will not show up in the Raw field of the normalized event even if Keep raw log → Only errors was indicated.

    • Always—always save the raw event in the Raw field of the normalized event.

    This setting is not available for extra normalizers.

  • Save extra fields (required)—in this drop-down list, you can choose whether you want to save fields and their values if no mapping rules have been configured for them (see below). This data is saved as an array in the Extra event field. Normalized events can be searched and filtered based on the data stored in the Extra field.

    Filtering based on data from the Extra event field

    Conditions for filters based on data from the Extra event field:

    • Condition—If.
    • Left operand—event field. In this event field, you can specify one of the following values:
      • Extra field.
      • Value from the Extra field in the following format:

        Extra.<field name>

        For example, Extra.app.

        A value of this type is specified manually.

      • Value from the array written to the Extra field in the following format:

        Extra.<field name>.<array element>

        For example, Extra.array.0.

        The values in the array are numbered starting from 0.

        A value of this type is specified manually.

    • Operator—=.
    • Right operand—constant.
    • Value—the value by which you need to filter events.

    By default, no extra fields are saved.

  • Description—up to 256 Unicode characters describing the resource.

    This setting is not available for extra normalizers.

  • Event examples—in this field, you can provide an example of data that you want to process. Event examples can also be loaded from a TSV, CSV, or TXT file by using the Load from file button.

    This setting is not available for the sFlow5 parsing method.

  • Mapping settings block—here you can configure mapping of original event fields to fields of the event in KUMA format:
    • Source—column for the names of the raw event fields that you want to convert into KUMA event fields.

      Clicking the wrench-new button next to the field names in the Source column opens the Conversion window, in which you can use the Add conversion button to create rules for modifying the original data before they are written to the KUMA event fields.

      Available conversions

      Conversions are changes that can be applied to a value before it gets written to the event field. The conversion type is selected from a drop-down list.

      Available conversions:

      • lower—is used to make all characters of the value lowercase.
      • upper—is used to make all characters of the value uppercase.
      • regexp—is used to convert a value using an RE2 regular expression. When this conversion type is selected, a field appears in which you must specify the regular expression.
      • substring—is used to extract the characters in the position range specified in the Start and End fields. These fields appear when this conversion type is selected.
      • replace—is used to replace a specified character sequence with another character sequence. When this type of conversion is selected, new fields appear:
        • Replace chars—in this field, specify the character sequence to be replaced.
        • With chars—in this field, specify the character sequence to be used instead of the replaced characters.
      • trim—is used to simultaneously remove the characters specified in the Chars field from the leading and trailing positions of the value. The field appears when this type of conversion is selected. For example, a trim conversion with the Micromon value applied to Microsoft-Windows-Sysmon results in soft-Windows-Sys.
      • append—is used to add the characters specified in the Constant field to the end of the event field value. The field appears when this type of conversion is selected.
      • prepend—is used to add the characters specified in the Constant field to the start of the event field value. The field appears when this type of conversion is selected.
      • replace with regexp—is used to replace the results of an RE2 regular expression with a character sequence. When this type of conversion is selected, new fields appear:
        • Expression—in this field, specify the regular expression whose results should be replaced.
        • With chars—in this field, specify the character sequence to be used instead of the replaced characters.
    • KUMA field—drop-down list for selecting the required fields of KUMA events. You can search for fields by entering their names in the field.
    • Label—in this column, you can add a unique custom label to event fields that begin with DeviceCustom*.

    New table rows can be added by using the Add row button. Rows can be deleted individually using the cross button or all at once using the Clear all button.

    If you have loaded data into the Event examples field, the table will have an Examples column containing examples of values carried over from the raw event field to the KUMA event field.

Enrichment

This tab is used to add additional data to fields of a normalized event by using enrichment rules similar to the rules in enrichment rule resources. These enrichment rules are stored in the normalizer resource where they were created. There can be more than one enrichment rule. Enrichments are created by using the Add enrichment button.

Settings available in the enrichment rule settings block:

  • Source kind (required)—drop-down list for selecting the type of enrichment. Depending on the selected type, you may see advanced settings that will also need to be completed.

    Available Enrichment rule source types:

    • constant

      This type of enrichment is used when a constant needs to be added to an event field. Settings of this type of enrichment:

      • In the Constant field, specify the value that should be added to the event field. The value should not be longer than 255 Unicode characters. If you leave this field blank, the existing event field value will be cleared.
      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.

    • dictionary

      This type of enrichment is used if you need to add a value from the dictionary to the event field.

      When this type is selected in the Dictionary name drop-down list, you must select the dictionary that will provide the values. In the Key fields settings block, you must use the Add field button to select the event fields whose values will be used for dictionary entry selection.

    • event

      This type of enrichment is used when you need to write a value from another event field to the current event field. Settings of this type of enrichment:

      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
      • In the Source field drop-down list, select the event field whose value will be written to the target field.
      • Clicking the wrench-new button opens the Conversion window in which you can, using the Add conversion button, create rules for modifying the original data before writing them to the KUMA event fields.

        Available conversions

        Conversions are changes that can be applied to a value before it gets written to the event field. The conversion type is selected from a drop-down list.

        Available conversions:

        • lower—is used to make all characters of the value lowercase.
        • upper—is used to make all characters of the value uppercase.
        • regexp—is used to convert a value using an RE2 regular expression. When this conversion type is selected, a field appears in which you must specify the regular expression.
        • substring—is used to extract the characters in the position range specified in the Start and End fields. These fields appear when this conversion type is selected.
        • replace—is used to replace a specified character sequence with another character sequence. When this type of conversion is selected, new fields appear:
          • Replace chars—in this field, specify the character sequence to be replaced.
          • With chars—in this field, specify the character sequence to be used instead of the replaced characters.
        • trim—is used to simultaneously remove the characters specified in the Chars field from the leading and trailing positions of the value. The field appears when this type of conversion is selected. For example, a trim conversion with the Micromon value applied to Microsoft-Windows-Sysmon results in soft-Windows-Sys.
        • append—is used to add the characters specified in the Constant field to the end of the event field value. The field appears when this type of conversion is selected.
        • prepend—is used to add the characters specified in the Constant field to the start of the event field value. The field appears when this type of conversion is selected.
        • replace with regexp—is used to replace the results of an RE2 regular expression with a character sequence. When this type of conversion is selected, new fields appear:
          • Expression—in this field, specify the regular expression whose results should be replaced.
          • With chars—in this field, specify the character sequence to be used instead of the replaced characters.
    • template

      This type of enrichment is used when you need to write a value obtained by processing Go templates into the event field. Settings of this type of enrichment:

      • Put the Go template into the Template field.

        Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field from which the value must be passed to the script.

        Example: Attack on {{.DestinationAddress}} from {{.SourceAddress}}.

      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
  • Target field (required)—drop-down list for selecting the KUMA event field that should receive the data.
Page top

[Topic 221934]

Condition for forwarding data to an extra normalizer

The Add normalizer to normalization scheme window is used to specify the conditions under which the data will be sent to an extra normalizer.

Available settings:

  • Fields to pass into normalizer—used to indicate event fields in case you want to send only events with specific fields to the extra normalizer.

    If you leave this field blank, the full event will be sent to the extra normalizer for processing.

  • Use normalizer for events with specific event field values—used to indicate event fields if you want the extra normalizer to receive only events in which specific values are assigned to certain fields. The value is specified in the Condition value field.

    The data processed by these conditions can be preconverted by clicking the wrench-new button. This opens the Conversion window, in which you can use the Add conversion button to create rules for modifying the original data before it is written to the KUMA event fields.

    Available conversions

    Conversions are changes that can be applied to a value before it gets written to the event field. The conversion type is selected from a drop-down list.

    Available conversions:

    • lower—is used to make all characters of the value lowercase.
    • upper—is used to make all characters of the value uppercase.
    • regexp—is used to convert a value using an RE2 regular expression. When this conversion type is selected, a field appears in which you must specify the regular expression.
    • substring—is used to extract the characters in the position range specified in the Start and End fields. These fields appear when this conversion type is selected.
    • replace—is used to replace a specified character sequence with another character sequence. When this type of conversion is selected, new fields appear:
      • Replace chars—in this field, specify the character sequence to be replaced.
      • With chars—in this field, specify the character sequence to be used instead of the replaced characters.
    • trim—is used to simultaneously remove the characters specified in the Chars field from the leading and trailing positions of the value. The field appears when this type of conversion is selected. For example, a trim conversion with the Micromon value applied to Microsoft-Windows-Sysmon results in soft-Windows-Sys.
    • append—is used to add the characters specified in the Constant field to the end of the event field value. The field appears when this type of conversion is selected.
    • prepend—is used to add the characters specified in the Constant field to the start of the event field value. The field appears when this type of conversion is selected.
    • replace with regexp—is used to replace the results of an RE2 regular expression with a character sequence. When this type of conversion is selected, new fields appear:
      • Expression—in this field, specify the regular expression whose results should be replaced.
      • With chars—in this field, specify the character sequence to be used instead of the replaced characters.
Page top

[Topic 222424]

Preset normalizers

You can download an archive with the updated set of event normalizers for KUMA 2.0.

Download the archive with the updated set of event normalizers for KUMA 2.0

The archive contains the following files:

  • "Normalizers for KUMA 2.0" file that contains normalizers.
  • "Normalizer list for KUMA 2.0.xlsx" file that contains the list of normalizers with their types specified.

To make the updated set of normalizers available for use in KUMA, the normalizers must be imported into KUMA after downloading the archive. The import of normalizers involves replacing the original resources provided with KUMA 2.0 with the revised versions, therefore we recommend exporting your resources before proceeding with the import of revised versions.

The password for importing data is mustB3Ch@ng3d!

The normalizers listed in the table below are included in the KUMA kit.

Preset normalizers

Normalizer name

Event source

Normalizer type

Description

[OOTB] 1C EventJournal Normalizer

1C registration log.

xml

Designed for processing the event log of the 1C system.

[OOTB] 1C TechJournal Normalizer

1C technology log.

regexp

Designed for processing the technology event log.

[OOTB] Ahnlab UTM

System logs, operation logs, connections, IPS

regexp

Designed for processing events from the Ahnlab system.

[OOTB] Apache Access file(Common or Combined Log Format)

Apache access.log in Common or Combined Log format.

regexp

Designed for processing events in the Access log of the Apache web server. The normalizer supports the processing of events in Common or Combined Log formats.

[OOTB] Apache Access Syslog (Common or Combined Log Format)

Apache access.log in Common or Combined Log format, with Syslog header.

syslog

Designed for processing Apache web server events in Common or Combined formats received via the Syslog protocol.

[OOTB] Bastion SKDPU-GW

IT Bastion SKDPU system.

syslog

Designed for processing events of the SKDPU NT Access gateway system received via Syslog.

[OOTB] Bifit Mitigator Syslog

AntiDDoS events of the Bifit Mitigator solution

syslog

Designed for processing events from the DDOS Mitigator protection system received via Syslog.

[OOTB] BIND Syslog

BIND server DNS logs, with Syslog header.

syslog

Designed for processing events of the BIND DNS server received via Syslog.

[OOTB] BlueCoat Proxy v0.2

BlueCoat proxy server event log

regexp

Designed to process BlueCoat proxy server events.

[OOTB] Checkpoint Syslog CEF by CheckPoint

Checkpoint, normalization based on the vendor's CEF event representation diagram.

syslog

Designed for processing events received from the Checkpoint event source via the Syslog protocol in the CEF format.

[OOTB] Cisco ASA Extended v 0.1

Cisco ASA base extended set of events.

syslog

Designed for processing events of Cisco ASA devices.

[OOTB] Cisco Basic

Cisco ASA base set of events.

syslog

Designed for processing events of network devices with IOS firmware.

This normalizer will be removed from the OOTB set after the next release. If you are using this normalizer, you must migrate to the [OOTB] Cisco ASA Extended IOS Basic Syslog normalizer.

[OOTB] Cisco WSA AccessFile

Cisco WSA proxy server, access.log file.

regexp

Designed for processing the event log of the Cisco WSA proxy server, the access.log file.

[OOTB] Citrix NetScaler

Citrix NetScaler events

regexp

Designed for processing events from the Citrix NetScaler load balancer.

[OOTB] CyberTrace

Kaspersky CyberTrace events.

regexp

Designed for processing Kaspersky CyberTrace events.

[OOTB] DNS Windows

Windows server DNS logs.

regexp

Designed for processing Microsoft DNS server events.

[OOTB] Dovecot Syslog

Dovecot server POP3/IMAP logs.

syslog

Designed for processing events of the Dovecot mail server received via Syslog.

[OOTB] Eltex MES Switches

Eltex MES switch events

regexp

Designed for processing events from Eltex network devices.

[OOTB] Exchange CSV

Exchange server MTA logs.

csv

Designed for processing the event log of the Microsoft Exchange system.

[OOTB] FortiGate KV

FortiGate logs in Key-Value format.

regexp

Designed for processing events from FortiGate firewalls.

[OOTB] Fortimail

Fortimail mail system logs.

regexp

Designed for processing events of the FortiMail email protection system.

[OOTB] FreeIPA

Free IPA Directory Service logs.

json

Designed for processing events from the FreeIPA system.

[OOTB] Huawei Eudemon

Logs of Huawei Eudemon firewalls.

regexp

Designed for processing events from Huawei Eudemon firewalls.

[OOTB] Huawei USG Basic

Logs of the main USG modules.

syslog

Designed for processing events received from Huawei USG security gateways via Syslog.

[OOTB] Ideco UTM syslog

Ideco UTM events

syslog

Designed for processing events received via Syslog from Ideco UTM 14.7 and later versions. The normalizer supports events from the following modules: Intrusion prevention, Firewall, Application control, Content filter. The normalizer also supports the following event types: connection via VPN, authentication through the web interface.

[OOTB] IIS Log File Format

Microsoft IIS logs.

regexp

The normalizer processes events using a regular expression in the format described at https://learn.microsoft.com/en-us/windows/win32/http/iis-logging.

[OOTB] InfoWatch Traffic Monitor SQL

DLP system Traffic Monitor by InfoWatch.

sql

Designed for processing events received by the connector from the database of the InfoWatch Traffic Monitor system.

[OOTB] IPFIX

IPFIX-format Netflow events.

ipfix

Designed for processing events in the IP Flow Information Export (IPFIX) format.

[OOTB] Juniper - JUNOS

Juniper network equipment logs.

regexp

Designed for processing audit events received from Juniper network devices.

[OOTB] KATA

Kaspersky Anti Targeted Attack.

cef

Designed for processing alerts or events from the Kaspersky Anti Targeted Attack activity log.

[OOTB] KEDR telemetry

EDR telemetry tagged by KATA

json

Designed for processing Kaspersky EDR telemetry tagged by KATA (kafka, EnrichedEventTopic).

[OOTB] Kerio Control

Kerio Control events

syslog

Designed for processing events of Kerio Control firewalls.

[OOTB] KICS4Net v2.x

Kaspersky Industrial Cyber Security v 2.x.

cef

Designed for processing events of Kaspersky Industrial CyberSecurity for Networks version 2.

[OOTB] KICS4Net v3.x

Kaspersky Industrial Cyber Security v 3.x.

syslog

Designed for processing events of Kaspersky Industrial CyberSecurity for Networks version 3.

[OOTB] KLMS syslog CEF

Kaspersky Linux Mail Server mail traffic analysis and filtering systems.

syslog

Designed for processing events of Kaspersky Linux Mail Server mail traffic analysis and filtering systems.

[OOTB] Kolchuga-K syslog

Events of IVK Kolchuga-K version LKNV.466217.002

syslog

Designed for processing events of the IVK Kolchuga-K system, version LKNV.466217.002.

[OOTB] KSC

Kaspersky Security Center.

cef

Designed for processing Kaspersky Security Center events received via Syslog.

[OOTB] KSC from SQL

Kaspersky Security Center, queries to the MS SQL database.

sql

Designed for processing events received by the connector from the database of the Kaspersky Security Center system.

[OOTB] KSMG

Kaspersky Security Mail Gateway.

syslog

Designed for processing events of Kaspersky Security Mail Gateway.

[OOTB] KUMA forwarding

KUMA

json

Designed for processing events forwarded from KUMA.

[OOTB] KWTS (KV)

KWTS logs if sent in Key-Value format.

syslog

Designed for processing events in Kaspersky Web Traffic Security for Key-Value format.

[OOTB] KWTS syslog CEF

KWTS events.

syslog

Designed for processing events of the Kaspersky Web Traffic Security (KWTS) 6.1 web traffic analysis and filtering system received via Syslog in CEF format.

[OOTB] Linux audit and iptables Syslog

Linux events.

syslog

Designed for processing events of the operating system.

This normalizer will be removed from the OOTB set after the next release. If you are using this normalizer, you must migrate to the [OOTB] Linux audit and iptables Syslog v1 normalizer.

[OOTB] Linux audit and iptables Syslog v1

Linux events.

syslog

Designed for processing events of the operating system.


[OOTB] Linux audit.log file

Linux events.

regexp

Designed for processing security logs of Linux operating systems received via Syslog.

[OOTB] MariaDB Audit plugin syslog

MariaDB Audit Plugin events.

syslog

Designed for processing events of the MariaDB Audit Plugin for MariaDB, MySQL 5.7, received via Syslog.

[OOTB] MS DHCP file

Windows server DHCP logs.

regexp

Designed for processing Microsoft DHCP server events.

[OOTB] Minerva EDR

Minerva EDR events

regexp

Designed for processing events from the Minerva EDR system.

[OOTB] NetFlow v5

Netflow v5 events.

netflow5

Designed for processing events from Netflow version 5.

[OOTB] NetFlow v9

Netflow v9 events.

netflow9

Designed for processing events from Netflow version 9.

[OOTB] Nginx regexp

Nginx log.

regexp

Designed for processing Nginx web server log events.

[OOTB] Oracle Audit Trail

Oracle database table

sql

Designed for processing database audit events received by the connector directly from an Oracle database.

[OOTB] OrionSoft zVirt Syslog

Events of the OrionSoft zVirt virtualization system

regexp

Designed for processing events of the OrionSoft zVirt virtualization system.

[OOTB] PA-NGFW (Syslog-CSV)

Palo Alto logs in CSV format.

syslog

Designed for processing events from Palo Alto Networks firewalls received via Syslog.

[OOTB] PTC Winchill Fracas

Windchill FRACAS events

regexp

Designed for processing events of the Windchill FRACAS failure registration system.

[OOTB] PTsecurity ISIM

Positive Technologies ISIM events

regexp

Designed for processing events from the PT Industrial Security Incident Manager system.

[OOTB] pfSense Syslog

pfSense events.

syslog

Designed for processing events from the pfSense firewall received via Syslog.

[OOTB] pfSense w/o hostname

Custom pfSense event normalizer (invalid Syslog header format).

syslog

Designed for processing events from the pfSense firewall with an incorrect Syslog header format.

[OOTB] PostgreSQL pgAudit syslog

Events of the pgAudit audit plugin

syslog

Designed for processing events of the pgAudit audit plugin for PostgreSQL received via Syslog.

[OOTB] PTsecurity NAD

Network Anomaly Detection by Positive Technologies.

syslog

Designed for processing events from PT Network Attack Discovery (NAD).

[OOTB] PTsecurity Sandbox

Positive Technologies Sandbox events

regexp

Designed for processing events of the PT Sandbox system.

[OOTB] PTsecurity WAF

Web Application Firewall by Positive Technologies.

syslog

Designed for processing events from the Positive Technologies Web Application Firewall system.

[OOTB] Radware DefensePro AntiDDoS

Radware DefensePro AntiDDoS events

syslog

Designed for processing events from the Radware DefensePro AntiDDoS protection system received via Syslog.

[OOTB] S-Terra

S-Terra Gate events.

syslog

Designed for processing events from S-Terra VPN Gate devices.

[OOTB] SNMP. Windows {XP/2003}

Windows XP logs

json

Designed for processing events received from workstations and servers running Microsoft Windows XP, Microsoft Windows 2003 operating systems using the SNMP protocol.

[OOTB] SecretNet SQL

Secret Net 7.

sql

Designed for processing events received by the connector from the database of the SecretNet system.

[OOTB] SonicWall TZ Firewall

Events of TZ series firewalls

syslog

Designed for processing events received via Syslog from the SonicWall TZ firewall.

[OOTB] Sophos XG

Sophos XG firewall events

regexp

Designed for processing events from the Sophos XG firewall.

[OOTB] Squid access Syslog

Squid proxy server access.log logs.

syslog

Designed for processing events of the Squid proxy server received via the Syslog protocol.

[OOTB] Squid access.log file

Squid proxy server access.log logs.

regexp

Designed for processing Squid log events from the Squid proxy server.

[OOTB] Syslog header

Events in Syslog format from arbitrary sources. The syslog header is parsed.

syslog

Designed for processing events received via Syslog. The normalizer parses the header of the Syslog event; the message field of the event is not parsed. If necessary, you can parse the message field using other normalizers.

[OOTB] Syslog-CEF

Events in CEF format from arbitrary sources, with Syslog header.

syslog

Designed for parsing events from arbitrary sources in the CEF format with a Syslog header. Supports events from the following sources: InfoTeCS IDS, IT-Bastion—SKDPU NT Monitoring and Analytics, UserGate, SearchInform KIB, Forcepoint Email Security 8.5, ViPNet TIAS.

[OOTB] Unbound Syslog

Logs of the Unbound DNS server.

syslog

Designed for processing events from the Unbound DNS server.

[OOTB] ViPNet Coordinator Syslog

ViPNet Coordinator logs

syslog

Designed for processing events from the ViPNet Coordinator system.

[OOTB] VMware Horizon - Syslog

VMware Horizon logs. Receipt via Syslog.

syslog

Designed for processing events received from the VMware Horizon system via Syslog.

[OOTB] Windows Basic

Basic set of Windows Security events.

xml

Designed for processing event logs of Microsoft Windows operating systems, basic set of events.

[OOTB] Windows Extended v.0.3

Extended set of Windows events.

xml

Designed for processing event logs of Microsoft Windows operating systems, extended set of events. Supports events from terminal servers. The parsing method is XML file processing. This normalizer will be removed from the OOTB set after the next release. If you are using this normalizer, you must migrate to the [OOTB] Windows Extended v 1.0 normalizer.

[OOTB] Windows Extended v 1.0

An optimized version that requires fewer auxiliary normalizers and provides more complete data in group management events.

xml

The normalizer is designed for processing events of the Microsoft Windows operating system.

[OOTB][regexp] Continent IPS/IDS & TLS

Continent intrusion detection system, TLS.

regexp

Designed for processing events of Continent IPS/IDS devices in a file.

[OOTB] Broadcom Symantec Endpoint Protection

Symantec Endpoint Protection events

regexp

Designed for processing events from the Symantec Endpoint Protection system.

[OOTB] Confident Dallas Lock

Confident Dallas Lock events

regexp

Designed for processing events from the Dallas Lock information protection system.

[OOTB] WatchGuard Firebox

Firebox firewall events

syslog

Designed for processing WatchGuard Firebox events received via Syslog.
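
As an illustration of the [OOTB] Syslog header and [OOTB] Syslog-CEF entries in the table above, the sketch below strips a simplified RFC 3164 syslog header and then splits the CEF payload into its seven pipe-delimited header fields plus the extension. The regular expression and sample event are assumptions for demonstration, not KUMA's actual parsing logic, and escaped pipes are ignored for brevity.

```python
# Simplified sketch: parse the syslog header, leave the payload for CEF splitting.
import re

line = ("<134>Oct 11 22:14:15 fw01 "
        "CEF:0|Vendor|Product|1.0|100|Port scan detected|5|src=10.0.0.1 dst=10.0.0.2")

# Extract priority, timestamp, and host; everything after them is the message.
header = re.match(
    r"<(?P<pri>\d+)>(?P<ts>\w{3} [ \d]\d \d\d:\d\d:\d\d) (?P<host>\S+) (?P<msg>.*)",
    line,
)

# Split the CEF message into 7 header fields plus the key-value extension.
version, vendor, product, dev_version, signature_id, name, severity, extension = \
    header.group("msg").split("|", 7)

print(header.group("host"), name)  # fw01 Port scan detected
print(extension)                   # src=10.0.0.1 dst=10.0.0.2
```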

Page top

[Topic 217776]

Connectors

Connector resources are used to establish connections between KUMA services, network assets, and/or other services.

The program has the following connector types available:

  • internal—used for establishing connections between the KUMA services.
  • tcp—used for communications over TCP. It is available for Windows and Linux Agents.
  • udp—used for communications over UDP. It is available for Windows and Linux Agents.
  • netflow—used for establishing NetFlow connections.
  • sflow—used for establishing sFlow connections.
  • nats—used for NATS communications. It is available for Windows and Linux Agents.
  • kafka—used for Kafka communications. It is available for Windows and Linux Agents.
  • http—used for HTTP communications. It is available for Windows and Linux Agents.
  • sql—used for communications with a database and DBMS.

    The program supports the following types of SQL databases:

    • SQLite.
    • MSSQL.
    • MySQL.
    • PostgreSQL.
    • Cockroach.
    • Oracle.
    • Firebird.
  • file—used to retrieve data from any text file. It is available for Linux Agents.
  • diode—used for unidirectional data transfer in industrial ICS networks using data diodes.
  • ftp—used to receive data over the File Transfer Protocol. It is available for Windows and Linux Agents.
  • nfs—used to receive data over the Network File System protocol. It is available for Windows and Linux Agents.
  • wmi—used to obtain data using Windows Management Instrumentation. It is available for Windows Agents.
  • wec—used to receive data using the Windows Event Collector. It is available for Windows Agents.
  • snmp—used to receive data using the Simple Network Management Protocol. It is available for Windows and Linux Agents.

In this section

Viewing connector settings

Adding a connector

Connector settings

Page top

[Topic 233566]

Viewing connector settings

To view connector settings:

  1. In the KUMA web interface, select Resources → Connectors.
  2. In the folder structure, select the folder containing the relevant connector.
  3. Select the connector whose settings you want to view.

The settings of connectors are displayed on two tabs: Basic settings and Advanced settings. For a detailed description of the settings of each connector, refer to the Connector settings section.

Page top

[Topic 233570]

Adding a connector

You can enable the display of non-printing characters for all entry fields except the Description field.

To add a connector:

  1. In the KUMA web interface, select Resources → Connectors.
  2. In the folder structure, select the folder in which the resource should reside.

    Root folders correspond to tenants. To make a resource available to a specific tenant, the resource should be created in the folder of this tenant.

    If the required folder is absent from the folder tree, you need to create it.

    By default, added connectors are created in the Shared folder.

  3. Click the Add connector button.
  4. Define the settings for the selected connector type.

    The settings that you must specify for each type of connector are provided in the Connector settings section.

  5. Click the Save button.
Page top

[Topic 233592]

Connector settings

This section describes the settings of all connector types supported by KUMA.

In this section

Internal type

Tcp type

Udp type

Netflow type

Sflow type

Nats type

Kafka type

Http type

Sql type

File type

Diode type

Ftp type

Nfs type

Wmi type

Wec type

Snmp type

Page top

[Topic 220738]

Internal type

When creating this type of connector, you need to define values for the following settings:

  • Basic settings tab:
    • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
    • Tenant (required)—name of the tenant that owns the resource.
    • Type (required)—connector type, internal.
    • URL (required)—URL that you need to connect to.

      Available formats: hostname:port, IPv4:port, IPv6:port, :port.

    • Description—up to 256 Unicode characters describing the resource.
  • Advanced settings tab:
    • Debug—a drop-down list where you can specify whether resource logging should be enabled.

      By default it is Disabled.

Page top

[Topic 220739]

Tcp type

When creating this type of connector, you need to define values for the following settings:

  • Basic settings tab:
    • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
    • Tenant (required)—name of the tenant that owns the resource.
    • Type (required)—connector type, tcp.
    • URL (required)—URL that you need to connect to. Available formats: hostname:port, IPv4:port, IPv6:port, :port.
    • Delimiter is used to specify a character representing the delimiter between events. Available values: \n, \t, \0. If no separator is specified (an empty value is selected), the default value is \n.
    • Description—up to 256 Unicode characters describing the resource.
  • Advanced settings tab:
    • Buffer size is used to set a buffer size for the connector. The default value is 1 MB, and the maximum value is 64 MB.
    • Character encoding setting specifies character encoding. The default value is UTF-8.
    • TLS mode specifies whether TLS encryption is used:
      • Disabled (default)—do not use TLS encryption.
      • Enabled—encryption is enabled, but without verification.
      • With verification—use encryption with verification that the certificate was signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during program installation and are stored on the KUMA Core server in the folder /opt/kaspersky/kuma/core/certificates/.

      When using TLS, it is impossible to specify an IP address as a URL.

    • Compression—you can use Snappy compression. By default, compression is disabled.
    • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
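
A stream-oriented connector such as tcp has to frame events using the configured delimiter: complete events are split off the byte stream and the trailing partial event is held back until more bytes arrive. The sketch below is only an illustration of that framing idea, not KUMA's actual buffering logic.

```python
# Minimal delimiter-based framing for a TCP byte stream (\n delimiter assumed).

def split_events(buffer: bytes, delimiter: bytes = b"\n"):
    """Return (complete_events, remainder) for a raw byte stream."""
    *events, remainder = buffer.split(delimiter)
    return events, remainder

events, rest = split_events(b"event one\nevent two\npartial ev")
print(events)  # [b'event one', b'event two']
print(rest)    # b'partial ev' - kept until the rest of the event arrives
```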
Page top

[Topic 220740]

Udp type

When creating this type of connector, you need to define values for the following settings:

  • Basic settings tab:
    • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
    • Tenant (required)—name of the tenant that owns the resource.
    • Type (required)—connector type, udp.
    • URL (required)—URL that you need to connect to. Available formats: hostname:port, IPv4:port, IPv6:port, :port.
    • Delimiter is used to specify a character representing the delimiter between events. Available values: \n, \t, \0. If no separator is specified (an empty value is selected), events are not separated.
    • Description—up to 256 Unicode characters describing the resource.
  • Advanced settings tab:
    • Buffer size is used to set a buffer size for the connector. The default value is 16 KB, and the maximum value is 64 KB.
    • Workers—used to set worker count for the connector. The default value is 1.
    • Character encoding setting specifies character encoding. The default value is UTF-8.
    • Compression—you can use Snappy compression. By default, compression is disabled.
    • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
Page top

[Topic 220741]

Netflow type

When creating this type of connector, you need to define values for the following settings:

  • Basic settings tab:
    • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
    • Tenant (required)—name of the tenant that owns the resource.
    • Type (required)—connector type, netflow.
    • URL (required)—URL that you need to connect to.
    • Description—up to 256 Unicode characters describing the resource.
  • Advanced settings tab:
    • Buffer size is used to set a buffer size for the connector. The default value is 16 KB, and the maximum value is 64 KB.
    • Workers—used to set worker count for the connector. The default value is 1.
    • Character encoding setting specifies character encoding. The default value is UTF-8.
    • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
Page top

[Topic 233206]

Sflow type

When creating this type of connector, you need to define values for the following settings:

  • Basic settings tab:
    • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
    • Tenant (required)—name of the tenant that owns the resource.
    • Type (required)—connector type, sflow.
    • URL (required)—a URL that you need to connect to. Available formats: hostname:port, IPv4:port, IPv6:port, :port.
    • Description—up to 256 Unicode characters describing the resource.
  • Advanced settings tab:
    • Buffer size is used to set a buffer size for the connector. The default value is 1 MB, and the maximum value is 64 MB.
    • Workers—used to set worker count for the connector. The default value is 1.
    • Character encoding setting specifies character encoding. The default value is UTF-8.
    • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
Page top

[Topic 220742]

Nats type

When creating this type of connector, you need to define values for the following settings:

  • Basic settings tab:
    • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
    • Tenant (required)—name of the tenant that owns the resource.
    • Type (required)—connector type, nats.
    • URL (required)—URL that you need to connect to.
    • Topic (required)—the topic for NATS messages. Must contain from 1 to 255 Unicode characters.
    • Delimiter is used to specify a character representing the delimiter between events. Available values: \n, \t, \0. If no separator is specified (an empty value is selected), events are not separated.
    • Description—up to 256 Unicode characters describing the resource.
  • Advanced settings tab:
    • Buffer size is used to set a buffer size for the connector. The default value is 16 KB, and the maximum value is 64 KB.
    • GroupID—the GroupID parameter for NATS messages. Must contain from 1 to 255 Unicode characters. The default value is default.
    • Workers—used to set worker count for the connector. The default value is 1.
    • Character encoding setting specifies character encoding. The default value is UTF-8.
    • Storage ID is a NATS storage identifier.
    • TLS mode specifies whether TLS encryption is used:
      • Disabled (default)—do not use TLS encryption.
      • Enabled—use encryption without certificate verification.
      • With verification—use encryption with verification that the certificate was signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during program installation and are stored on the KUMA Core server in the folder /opt/kaspersky/kuma/core/certificates/.
      • Custom CA—use encryption with verification that the certificate was signed by a Certificate Authority. The secret containing the certificate is selected from the Custom CA drop-down list, which is displayed when this option is selected.

        Creating a certificate signed by a Certificate Authority

        To use this TLS mode, you must do the following on the KUMA Core server (OpenSSL commands are used in the examples below):

        1. Create the key that will be used by the Certificate Authority.

          Example command: openssl genrsa -out ca.key 2048

        2. Generate a certificate for the key that was just created.

          Example command: openssl req -new -x509 -days 365 -key ca.key -subj "/CN=<common host name of Certificate Authority>" -out ca.crt

        3. Create a private key and a request to have it signed by the Certificate Authority.

          Example command: openssl req -newkey rsa:2048 -nodes -keyout server.key -subj "/CN=<common host name of KUMA server>" -out server.csr

        4. Create a certificate signed by the Certificate Authority. The subjectAltName must include the domain names or IP addresses of the server for which the certificate is being created.

          Example command: openssl x509 -req -extfile <(printf "subjectAltName=DNS:domain1.ru,DNS:domain2.com,IP:192.168.0.1") -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt

        5. The obtained server.crt certificate should be uploaded in the KUMA web interface as a certificate-type secret, which should then be selected from the Custom CA drop-down list.

      When using TLS, it is impossible to specify an IP address as a URL.

      To use KUMA certificates on third-party machines, you must change the certificate file extension from CERT to CRT. Otherwise, error x509: certificate signed by unknown authority may be returned.

    • Compression—you can use Snappy compression. By default, compression is disabled.
    • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
Page top

[Topic 220744]

Kafka type

When creating this type of connector, you need to define values for the following settings:

  • Basic settings tab:
    • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
    • Tenant (required)—name of the tenant that owns the resource.
    • Type (required)—connector type, kafka.
    • URL—URL that you need to connect to. Available formats: hostname:port, IPv4:port, IPv6:port.
    • Topic—subject of Kafka messages. Must contain from 1 to 255 of the following characters: a–z, A–Z, 0–9, ".", "_", "-".
    • Authorization—requirement for Agents to complete authorization when connecting to the connector:
      • disabled (by default).
      • PFX.

        When this option is selected, a certificate must be generated with a private key in PKCS#12 container format in an external Certificate Authority. Then the certificate must be exported from the key store and uploaded to the KUMA web interface as a PFX secret.

        Add PFX secret

        1. If you previously uploaded a PFX certificate, select it from the Secret drop-down list.

          If no certificate was previously added, the drop-down list shows No data.

        2. If you want to add a new certificate, click the + button on the right of the Secret list.

          The Secret window opens.

        3. In the Name field, enter the name that will be used to display the secret in the list of available secrets.
        4. Click the Upload PFX button to select the file containing your previously exported certificate with a private key in PKCS#12 container format.
        5. In the Password field, enter the certificate security password that was set in the Certificate Export Wizard.
        6. Click the Save button.

        The certificate will be added and displayed in the Secret list.

      • plain.

        If this option is selected, you must indicate the secret containing user account credentials for authorization when connecting to the connector.

        Add secret

        1. If you previously created a secret, select it from the Secret drop-down list.

          If no secret was previously added, the drop-down list shows No data.

        2. If you want to add a new secret, click the + button on the right of the Secret list.

          The Secret window opens.

        3. In the Name field, enter the name that will be used to display the secret in the list of available secrets.
        4. In the User and Password fields, enter the credentials of the user account that the Agent will use to connect to the connector.
        5. If necessary, add any other information about the secret in the Description field.
        6. Click the Save button.

        The secret will be added and displayed in the Secret list.

    • GroupID—the GroupID parameter for Kafka messages. Must contain from 1 to 255 of the following characters: a–z, A–Z, 0–9, ".", "_", "-".
    • Description—up to 256 Unicode characters describing the resource.
  • Advanced settings tab:
    • Delimiter is used to specify a character representing the delimiter between events. Available values: \n, \t, \0. If no separator is specified (an empty value is selected), events are not separated.
    • Character encoding setting specifies character encoding. The default value is UTF-8.
    • TLS mode specifies whether TLS encryption is used:
      • Disabled (default)—do not use TLS encryption.
      • Enabled—use encryption without certificate verification.
      • With verification—use encryption with verification that the certificate was signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during program installation and are stored on the KUMA Core server in the folder /opt/kaspersky/kuma/core/certificates/.
      • Custom CA—use encryption with verification that the certificate was signed by a Certificate Authority. The secret containing the certificate is selected from the Custom CA drop-down list, which is displayed when this option is selected.

        Creating a certificate signed by a Certificate Authority

        To use this TLS mode, you must do the following on the KUMA Core server (OpenSSL commands are used in the examples below):

        1. Create the key that will be used by the Certificate Authority.

          Example command: openssl genrsa -out ca.key 2048

        2. Generate a certificate for the key that was just created.

          Example command: openssl req -new -x509 -days 365 -key ca.key -subj "/CN=<common host name of Certificate Authority>" -out ca.crt

        3. Create a private key and a request to have it signed by the Certificate Authority.

          Example command: openssl req -newkey rsa:2048 -nodes -keyout server.key -subj "/CN=<common host name of KUMA server>" -out server.csr

        4. Create a certificate signed by the Certificate Authority. The subjectAltName must include the domain names or IP addresses of the server for which the certificate is being created.

          Example command: openssl x509 -req -extfile <(printf "subjectAltName=DNS:domain1.ru,DNS:domain2.com,IP:192.168.0.1") -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt

        5. The obtained server.crt certificate should be uploaded in the KUMA web interface as a certificate-type secret, which should then be selected from the Custom CA drop-down list.

      When using TLS, it is impossible to specify an IP address as a URL.

      To use KUMA certificates on third-party machines, you must change the certificate file extension from CERT to CRT. Otherwise, error x509: certificate signed by unknown authority may be returned.

    • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
Page top

[Topic 220745]

Http type

When creating this type of connector, you need to define values for the following settings:

  • Basic settings tab:
    • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
    • Tenant (required)—name of the tenant that owns the resource.
    • Type (required)—connector type, http.
    • URL (required)—URL that you need to connect to. Available formats: hostname:port, IPv4:port, IPv6:port, :port.
    • Delimiter is used to specify a character representing the delimiter between events. Available values: \n, \t, \0. If no separator is specified (an empty value is selected), events are not separated.
    • Description—up to 256 Unicode characters describing the resource.
  • Advanced settings tab:
    • Character encoding setting specifies character encoding. The default value is UTF-8.
    • TLS mode specifies whether TLS encryption is used:
      • Disabled (default)—do not use TLS encryption.
      • Enabled—encryption is enabled, but without verification.
      • With verification—use encryption with verification that the certificate was signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during program installation and are stored on the KUMA Core server in the folder /opt/kaspersky/kuma/core/certificates/.

      When using TLS, it is impossible to specify an IP address as a URL.

    • Proxy—a drop-down list where you can select a proxy server resource.
    • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
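
The sketch below shows one way newline-delimited events could be delivered to an http-type connector. A throwaway local HTTP server stands in for the collector; the port, path, and payload are assumptions for the demonstration, not KUMA's actual API.

```python
# Self-contained demo: POST two \n-delimited events to a stand-in collector.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

received = []

class CollectorStub(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        received.extend(body.decode().split("\n"))  # \n is the delimiter
        self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):  # keep the example output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), CollectorStub)
port = server.server_address[1]
thread = threading.Thread(target=server.handle_request)  # serve one request
thread.start()

urlopen(Request(f"http://127.0.0.1:{port}/", data=b"event one\nevent two"))
thread.join()
server.server_close()
print(received)  # ['event one', 'event two']
```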
Page top

[Topic 220746]

Sql type

KUMA supports the following types of SQL databases:

  • SQLite.
  • MSSQL.
  • MySQL.
  • PostgreSQL.
  • Cockroach.
  • Oracle.
  • Firebird.

When creating a connector, you must specify general connector settings and specific database connection settings.

On the Basic settings tab, you must specify the following values for the connector:

  • Name (required)—unique name of the resource. Must contain from 1 to 128 Unicode characters.
  • Type (required)—connector type, sql.
  • Tenant (required)—name of the tenant that owns the resource.
  • Default query (required)—SQL query that is executed when connecting to the database.
  • Poll interval, sec —interval for executing SQL queries. This value is specified in seconds.

    The default value is 10 seconds.

  • Description—up to 256 Unicode characters describing the resource.

To connect to the database, you need to define the values of the following settings on the Basic settings tab:

  • URL (required)—secret that stores a list of URLs for connecting to the database.

    If necessary, you can edit or create a secret.

    1. To create a secret, click the add secret button.

      The secret window is displayed.

    2. Define the values for the following settings:
      1. Name—the name of the added secret.
      2. Type—urls.

        This value is set by default and cannot be changed.

      3. URL—URL of the database.

        You must keep in mind that each type of database uses its own URL format for connections.

        Available URL formats are as follows:

        • For SQLite:
          • sqlite3://file:<file_path>

          A question mark (?) is used as a placeholder.

        • For MSSQL:
          • sqlserver://<user>:<password>@<server:port>/<instance_name>?database=<database> (recommended)
          • sqlserver://<user>:<password>@<server>?database=<database>&encrypt=disable

          The characters @p1 are used as a placeholder.

        • For MySQL:
          • mysql://<user>:<password>@tcp(<server>:<port>)/<database>

          The characters %s are used as a placeholder.

        • For PostgreSQL:
          • postgres://<user>:<password>@<server>/<database>?sslmode=disable

          The characters $1 are used as a placeholder.

        • For Cockroach:
          • postgres://<user>:<password>@<server>:<port>/<database>?sslmode=disable

          The characters $1 are used as a placeholder.

        • For Firebird:
          • firebirdsql://<user>:<password>@<server>:<port>/<database>

          A question mark (?) is used as a placeholder.

      4. Description—any additional information.
    3. If necessary, click Add and specify an additional URL.

      In this case, if one URL is not available, the program connects to the next URL specified in the list of addresses.

    4. Click the Save button.
    1. To edit an existing secret, click the edit secret button.

      The secret window is displayed.

    2. Specify the values for the settings that you want to change.

      You can change the following values:

      1. Name—the name of the added secret.
      2. URL—URL of the database.

        You must keep in mind that each type of database uses its own URL format for connections.

        Available URL formats are as follows:

        • For SQLite:
          • sqlite3://file:<file_path>

          A question mark (?) is used as a placeholder.

        • For MSSQL:
          • sqlserver://<user>:<password>@<server:port>/<instance_name>?database=<database> (recommended)
          • sqlserver://<user>:<password>@<server>?database=<database>&encrypt=disable

          The characters @p1 are used as a placeholder.

        • For MySQL:
          • mysql://<user>:<password>@tcp(<server>:<port>)/<database>

          The characters %s are used as a placeholder.

        • For PostgreSQL:
          • postgres://<user>:<password>@<server>/<database>?sslmode=disable

          The characters $1 are used as a placeholder.

        • For Cockroach:
          • postgres://<user>:<password>@<server>:<port>/<database>?sslmode=disable

          The characters $1 are used as a placeholder.

        • For Firebird:
          • firebirdsql://<user>:<password>@<server>:<port>/<database>

          A question mark (?) is used as a placeholder.

      3. Description—any additional information.
    3. If necessary, click Add and specify an additional URL.

      In this case, if one URL is not available, the program connects to the next URL specified in the list of addresses.

    4. Click the Save button.

    When creating connections, strings containing account credentials with special characters may be incorrectly processed. If an error occurs when creating a connection but you are sure that the settings are correct, enter the special characters in percent encoding.

    Codes of special characters

    ! — %21
    # — %23
    $ — %24
    % — %25
    & — %26
    ' — %27
    ( — %28
    ) — %29
    * — %2A
    + — %2B
    , — %2C
    / — %2F
    : — %3A
    ; — %3B
    = — %3D
    ? — %3F
    @ — %40
    [ — %5B
    ] — %5D
    \ — %5C

    The following special characters are not supported in passwords used to access SQL databases: space, [, ], :, /, #, %, \.
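A minimal sketch of percent encoding in Python, using the standard urllib.parse.quote function; the credentials and server name are hypothetical:

```python
from urllib.parse import quote

# Hypothetical credentials containing special characters (not taken from KUMA).
user = "kuma_reader"
password = "p@ss&word!"

# Percent-encode the credentials before embedding them in the connection URL.
# safe="" ensures that every reserved character is encoded.
enc_user = quote(user, safe="")
enc_password = quote(password, safe="")

url = f"postgres://{enc_user}:{enc_password}@db.example.com/events?sslmode=disable"
print(url)
```

Here @, &, and ! become %40, %26, and %21, matching the codes listed above, so the connection string can be parsed unambiguously.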

  • Identity column (required)—name of the column that contains the ID for each row of the table.
  • Identity seed (required)—identity column value used to determine the specific row from which to start reading data from the SQL table.
  • Query—field for an additional SQL query. The query indicated in this field is performed instead of the default query.
  • Poll interval, sec—interval for executing SQL queries. The interval defined in this field replaces the default interval for the connector.

    This value is specified in seconds. The default value is 10 seconds.

On the Advanced settings tab, you need to specify the following settings for the connector:

  • Character encoding—the specific encoding of the characters. The default value is UTF-8.

    KUMA converts SQL responses to UTF-8 encoding. You can configure the SQL server to send responses in UTF-8 encoding or change the encoding of incoming messages on the KUMA side.

  • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.

Within a single connector, you can create a connection for multiple supported databases.

To create a connection for multiple SQL databases:

  1. Click the Add connection button.
  2. Specify the URL, Identity column, Identity seed, Query, and Poll interval, sec values.
  3. Repeat steps 1–2 for each required connection.

Supported SQL types and their specific usage features

The UNION operator is not supported by the SQL Connector resources.

The following SQL types are supported:

  • MSSQL

    Example URLs:

    • sqlserver://{user}:{password}@{server:port}/{instance_name}?database={database} – (recommended option)
    • sqlserver://{user}:{password}@{server}?database={database}

    The characters @p1 are used as a placeholder in the SQL query.

    If you need to connect using domain account credentials, specify the account name in <domain>%5C<user> format. For example: sqlserver://domain%5Cuser:password@ksc.example.com:1433/SQLEXPRESS?database=KAV.

  • MySQL

    Example URL: mysql://{user}:{password}@tcp({server}:{port})/{database}

    A question mark (?) is used as a placeholder in the SQL query.

  • PostgreSQL

    Example URL: postgres://{user}:{password}@{server}/{database}?sslmode=disable

    The characters $1 are used as a placeholder in the SQL query.

  • CockroachDB

    Example URL: postgres://{user}:{password}@{server}:{port}/{database}?sslmode=disable

    The characters $1 are used as a placeholder in the SQL query.

  • SQLite3

    Example URL: sqlite3://file:{file_path}

    A question mark (?) is used as a placeholder in the SQL query.

  • Oracle DB

    Example URL: oracle://{user}/{password}@{server}:{port}/{service_name}

    Easy Connect syntax is used. The characters :val are used as a placeholder in the SQL query.

    When querying the Oracle DB, if the initial value of the ID is in datetime format, the Oracle to_timestamp_tz function should be used to add the date conversion to the SQL query. For example, select * from connections where login_time > to_timestamp_tz(:val, 'YYYY-MM-DD"T"HH24:MI:SSTZH:TZM'). In this example, connections is the Oracle DB table and the :val variable is taken from the Identity seed field, therefore it must be indicated in a format with the timezone (for example, 2021-01-01T00:10:00+03:00).

    To access the Oracle DB, the libaio1 package must be installed.

  • Firebird SQL

    Example URL: firebirdsql://{user}:{password}@{server}:{port}/{database}

    A question mark (?) is used as a placeholder in the SQL query.

A sequential request for database information is supported in SQL queries. For example, if you type select * from <name of data table> where id > <placeholder> in the Query field, the Identity seed field value will be used as the placeholder value the first time you query the table. In addition, the service that utilizes the SQL connector saves the ID of the last read entry, and the ID of this entry will be used as the placeholder value in the next query to the database.

Examples of SQL requests

SQLite, Firebird—select * from table_name where id > ?

MSSQL—select * from table_name where id > @p1

MySQL—select * from table_name where id > ?

PostgreSQL, Cockroach—select * from table_name where id > $1

Oracle—select * from table_name where id > :val
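The sequential-request behavior described above can be sketched with SQLite, which uses the question mark placeholder; the table and column names are illustrative only:

```python
import sqlite3

# In-memory table standing in for a monitored SQL data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, msg TEXT)")
conn.executemany("INSERT INTO events (msg) VALUES (?)", [("a",), ("b",), ("c",)])

def poll(last_id):
    # The ID of the last read entry is substituted for the placeholder,
    # just as the Identity seed value is used on the first query.
    rows = conn.execute("select * from events where id > ?", (last_id,)).fetchall()
    return rows, (rows[-1][0] if rows else last_id)

first_batch, last_id = poll(1)         # 1 plays the role of the Identity seed
second_batch, last_id = poll(last_id)  # only rows added since the previous poll
print(first_batch, second_batch)
```

On the first poll, the rows with an ID greater than the seed are returned; the second poll returns nothing because the saved ID already points at the last row.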

Page top

[Topic 220748]

File type

The file type is used to retrieve data from any text file. Each line of the file is treated as a single event; the line delimiter is \n. This type of connector is available for Linux Agents.

When creating this type of connector, you need to define values for the following settings:

  • Basic settings tab:
    • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
    • Tenant (required)—name of the tenant that owns the resource.
    • Type (required)—connector type, file.
    • URL (required)—full path to the file that you need to interact with. For example, /var/log/*som?[1-9].log.

      File and folder mask templates

      Masks:

      • '*'—matches any sequence of characters.
      • '[' [ '^' ] { range of characters } ']'—class of characters (should not be left blank).
      • '?'—matches any single character.

      Ranges of characters:

      • [0-9]—digits;
      • [a-zA-Z]—Latin alphabet characters.

      Examples:

      • /var/log/*som?[1-9].log
      • /mnt/dns_logs/*/dns.log
      • /mnt/proxy/access*.log

      Limitations when using prefixes in file paths

      Prefixes that cannot be used when specifying paths to files:

      • /*
      • /bin
      • /boot
      • /dev
      • /etc
      • /home
      • /lib
      • /lib64
      • /proc
      • /root
      • /run
      • /sys
      • /tmp
      • /usr/*
      • /usr/bin/
      • /usr/local/*
      • /usr/local/sbin/
      • /usr/local/bin/
      • /usr/sbin/
      • /usr/lib/
      • /usr/lib64/
      • /var/*
      • /var/lib/
      • /var/run/
      • /opt/kaspersky/kuma/

      Files are available at the following paths:

      • /opt/kaspersky/kuma/clickhouse/logs/
      • /opt/kaspersky/kuma/mongodb/log/
      • /opt/kaspersky/kuma/victoria-metrics/log/
    • Description—up to 256 Unicode characters describing the resource.
  • Advanced settings tab:
    • Character encoding setting specifies character encoding. The default value is UTF-8.
    • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
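The mask wildcards described above behave like the patterns of Python's fnmatch module, which can be used to check a mask before deploying the connector; this is a sketch, not part of KUMA:

```python
import fnmatch

# The example mask from this topic: any prefix, then "som", one arbitrary
# character, one digit from 1 to 9, and the ".log" extension.
mask = "/var/log/*som?[1-9].log"

print(fnmatch.fnmatchcase("/var/log/syslog-som11.log", mask))  # True
print(fnmatch.fnmatchcase("/var/log/som5.log", mask))          # False: '?' and
# '[1-9]' must each consume one character after "som"
```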
Page top

[Topic 232912]

Diode type

When creating this type of connector, you need to define values for the following settings:

  • Basic settings tab:
    • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
    • Tenant (required)—name of the tenant that owns the resource.
    • Type (required)—connector type, diode.
    • Data diode destination directory (required)—full path to the KUMA collector server directory where the data diode moves files containing events from the isolated network segment. After the connector has read these files, the files are deleted from the directory. The path can contain up to 255 Unicode characters.

      Limitations when using prefixes in paths

      Prefixes that cannot be used when specifying paths to files:

      • /*
      • /bin
      • /boot
      • /dev
      • /etc
      • /home
      • /lib
      • /lib64
      • /proc
      • /root
      • /run
      • /sys
      • /tmp
      • /usr/*
      • /usr/bin/
      • /usr/local/*
      • /usr/local/sbin/
      • /usr/local/bin/
      • /usr/sbin/
      • /usr/lib/
      • /usr/lib64/
      • /var/*
      • /var/lib/
      • /var/run/
      • /opt/kaspersky/kuma/

      Files are available at the following paths:

      • /opt/kaspersky/kuma/clickhouse/logs/
      • /opt/kaspersky/kuma/mongodb/log/
      • /opt/kaspersky/kuma/victoria-metrics/log/
    • Delimiter—specifies the character used as the delimiter between events. Available values: \n, \t, \0. If no delimiter is specified (an empty value is selected), the default value is \n.

      This setting must match for the connector and destination resources used to relay events from an isolated network segment via the data diode.

    • Description—up to 256 Unicode characters describing the resource.
  • Advanced settings tab:
    • Workers—the number of services processing the request queue. By default, this value is equal to the number of vCPUs of the KUMA Core server.
    • Poll interval, sec—frequency at which the files are read from the directory containing events from the data diode. The value is specified in seconds. The default value is 2.
    • Character encoding setting specifies character encoding. The default value is UTF-8.
    • Compression—you can use Snappy compression. By default, compression is disabled.

      This setting must match for the connector and destination resources used to relay events from an isolated network segment via the data diode.

    • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
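The splitting of a received file into events on the configured delimiter can be sketched as follows; the sample payload is hypothetical:

```python
# A file dropped into the destination directory by the data diode is split
# into individual events on the configured delimiter (\n by default).
raw = b"event one\nevent two\nevent three\n"
delimiter = b"\n"

events = [chunk for chunk in raw.split(delimiter) if chunk]
print(events)
```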

Page top

[Topic 220749]

Ftp type

When creating this type of connector, you need to define values for the following settings:

  • Basic settings tab:
    • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
    • Tenant (required)—name of the tenant that owns the resource.
    • Type (required)—connector type, ftp.
    • URL (required)—actual URL of the file or file mask beginning with 'ftp://'. For a file mask, you can use * ? [...].

      File mask templates

      Masks:

      • '*'—matches any sequence of characters.
      • '[' [ '^' ] { range of characters } ']'—class of characters (should not be left blank).
      • '?'—matches any single character.

      Ranges of characters:

      • [0-9]—digits;
      • [a-zA-Z]—Latin alphabet characters.

      Examples:

      • /var/log/*som?[1-9].log
      • /mnt/dns_logs/*/dns.log
      • /mnt/proxy/access*.log

      If the URL does not include the FTP server port, port 21 is used by default.

    • URL credentials—the user name and password for the FTP server. If no user name or password is required, leave the field empty.
    • Description—up to 256 Unicode characters describing the resource.
  • Advanced settings tab:
    • Character encoding setting specifies character encoding. The default value is UTF-8.
    • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
Page top

[Topic 220750]

Nfs type

When creating this type of connector, you need to define values for the following settings:

  • Basic settings tab:
    • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
    • Tenant (required)—name of the tenant that owns the resource.
    • Type (required)—connector type, nfs.
    • URL (required)—path to the remote folder in the format nfs://host/path.
    • File name mask (required)—mask used to filter files containing events. The masks "*", "?", and "[...]" can be used.
    • Poll interval, sec—interval after which the files are re-read from the remote system. The value is specified in seconds. The default value is 0.
    • Description—up to 256 Unicode characters describing the resource.
  • Advanced settings tab:
    • Character encoding setting specifies character encoding. The default value is UTF-8.
    • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
Page top

[Topic 220751]

Wmi type

When creating this type of connector, you need to define values for the following settings:

  • Basic settings tab:
    • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
    • Tenant (required)—name of the tenant that owns the resource.
    • Type (required)—connector type, wmi.
    • URL (required)—URL of the collector being created, for example: kuma-collector.example.com:7221.

      The creation of a collector for receiving data using Windows Management Instrumentation results in the automatic creation of an agent that will receive the necessary data on the remote machine and forward that data to the collector service. In the URL, you must specify the address of this collector. The URL is known in advance if you already know on which server you plan to install the service. However, this field can also be filled after the Installation Wizard is finished by copying the URL data from the Resources → Active services section.

    • Description—up to 256 Unicode characters describing the resource.
    • Default credentials—drop-down list that does not require any value to be selected. The account credentials used to connect to hosts must be provided in the Remote hosts table (see below).
    • The Remote hosts table lists the remote Windows assets that you can connect to. Available columns:
      • Host (required) is the IP address or domain name of the asset from which you want to receive data. For example, "machine-1.example.com".
      • Domain (required)—name of the domain in which the remote device resides. For example, "example.com".
      • Log type—drop-down list to select the name of the Windows logs that you need to retrieve. By default, only preconfigured logs are displayed in the list, but you can add custom logs to the list by typing their name in the Windows logs field and then pressing ENTER. KUMA service and resource configurations may require additional changes in order to process custom logs correctly.

        Logs that are available by default:

        • Application
        • ForwardedEvents
        • Security
        • System
        • HardwareEvents
      • Secret—account credentials for accessing a remote Windows asset with permissions to read the logs. If you leave this field blank, the credentials from the secret selected in the Default credentials drop-down list will be used. The login in the secret resource must be specified without the domain. The domain value for accessing the host is taken from the Domain column of the Remote hosts table.

        You can select the secret resource from the drop-down list or create one using the AddResource button. The selected secret can be changed by clicking on the EditResource button.

  • Advanced settings tab:
    • Character encoding setting specifies character encoding. The default value is UTF-8.
    • Compression—you can use Snappy compression. By default, compression is disabled.
    • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.

Receiving events from a remote machine

Conditions for receiving events from a remote Windows machine hosting a KUMA agent:

  • To start the KUMA agent on the remote machine, you must use an account with the Log on as a service permission.
  • To receive events from the KUMA agent, you must use an account with Event Log Readers permissions. For domain servers, one such user account can be created so that a group policy can be used to distribute its rights to read logs to all servers and workstations in the domain.
  • TCP ports 135, 445, and 49152-65535 must be opened on the remote Windows machines.
  • You need to launch the following services on the remote machines:
    • Remote Procedure Call (RPC)
    • RPC Endpoint Mapper
Page top

[Topic 220752]

Wec type

When creating this type of connector, you need to define values for the following settings:

  • Basic settings tab:
    • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
    • Tenant (required)—name of the tenant that owns the resource.
    • Type (required)—connector type, wec.
    • URL (required)—URL of the collector being created, for example: kuma-collector.example.com:7221.

      The creation of a collector for receiving data using Windows Event Collector results in the automatic creation of an agent that will receive the necessary data on the remote machine and forward that data to the collector service. In the URL, you must specify the address of this collector. The URL is known in advance if you already know on which server you plan to install the service. However, this field can also be filled after the Installation Wizard is finished by copying the URL data from the Resources → Active services section.

    • Description—up to 256 Unicode characters describing the resource.
    • Windows logs (required)—Select the names of the Windows logs you want to retrieve from this drop-down list. By default, only preconfigured logs are displayed in the list, but you can add custom logs to the list by typing their name in the Windows logs field and then pressing ENTER. KUMA service and resource configurations may require additional changes in order to process custom logs correctly.

      Preconfigured logs:

      • Application
      • ForwardedEvents
      • Security
      • System
      • HardwareEvents
  • Advanced settings tab:
    • Character encoding setting specifies character encoding. The default value is UTF-8.
    • Compression—you can use Snappy compression. By default, compression is disabled.
    • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.

To start the KUMA agent on the remote machine, you must use an account with the Log on as a service permission.

To receive events, you must use an account with Event Log Readers permissions. For domain servers, one such user account can be created so that a group policy can be used to distribute its rights to read logs to all servers and workstations in the domain.

Page top

[Topic 220753]

Snmp type

To process events received via SNMP, you must use the json normalizer.

It is available for Windows and Linux Agents. Supported protocol versions:

  • snmpV1
  • snmpV2
  • snmpV3

When creating this type of connector, you need to define values for the following settings:

  • Basic settings tab:
    • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
    • Tenant (required)—name of the tenant that owns the resource.
    • Type (required)—connector type, snmp.
    • SNMP version (required)—This drop-down list allows you to select the version of the protocol to use.
    • Host (required)—hostname or its IP address. Available formats: hostname, IPv4, IPv6.
    • Port (required)—port for connecting to the host. Typically 161 or 162 are used.

    The SNMP version, Host, and Port settings define one connection to an SNMP resource. You can create several such connections in one connector by adding new ones using the SNMP resource button. You can delete connections by using the delete-icon button.

    • Secret (required) is a drop-down list to select the secret resource which stores the credentials for connecting via the Simple Network Management Protocol. The secret type must match the SNMP version. If required, a secret can be created in the connector creation window using the AddResource button. The selected secret can be changed by clicking on the EditResource button.
    • In the Source data table you can specify the rules for naming the received data, according to which OIDs (object identifiers) are converted into keys that the normalizer can use. Available table columns:
      • Parameter name (required)—an arbitrary name for the data type. For example, "Site name" or "Site uptime".
      • OID (required)—a unique identifier that determines where to look for the required data at the event source. For example, "1.3.6.1.2.1.1.5".
      • Key (required)—a unique identifier returned in response to a request to the asset with the value of the requested setting. For example, "sysName". This key can be accessed when normalizing data.
    • Description—up to 256 Unicode characters describing the resource.
  • Advanced settings tab:
    • Character encoding setting specifies character encoding. The default value is UTF-8.
    • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
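The Source data mapping above can be sketched as a simple OID-to-key translation; the OID and key repeat the examples from the table description, while the polled value is hypothetical:

```python
# Each polled OID value is stored under its configured key, yielding a
# structure that the json normalizer can process.
source_data = [
    {"parameter": "Site name", "oid": "1.3.6.1.2.1.1.5", "key": "sysName"},
]

def build_event(poll_results):
    # poll_results maps each OID to the value returned by the SNMP request
    return {row["key"]: poll_results[row["oid"]] for row in source_data}

event = build_event({"1.3.6.1.2.1.1.5": "edge-router-01"})
print(event)
```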
Page top

[Topic 217722]

Aggregation rules

Aggregation rule resources are used to group repeated events into aggregation events.

Available settings:

  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • Threshold—the number of events that should be received before the aggregation rule is triggered and the events are aggregated. The default value is 100.
  • Triggered rule lifetime (required)—time period (in seconds) when events are received for aggregation. When this timeout expires, the aggregation rule is triggered and a new event is created. The default value is 60.
  • Description—up to 256 Unicode characters describing the resource.
  • Identical fields (required)—in this drop-down list you can select fields that should be used to group events for aggregation.
  • Unique fields—in this drop-down list you can select the fields that will disqualify events from aggregation even if their Identical fields values match the aggregation rule condition.
  • Sum fields—in this drop-down list, you can select the fields whose values should be summed during aggregation.
  • Filter—settings block in which you can specify the conditions for identifying events that will be processed by this resource. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

    In aggregation rule resources, do not use filters with the TI operand or the TIDetect, inActiveDirectoryGroup, and hasVulnerability operators. The Active Directory fields for which you can use the inActiveDirectoryGroup operator will appear during the enrichment stage (after aggregation rules are executed).

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit-grey button.
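The grouping logic of aggregation rules can be sketched as follows; the field names and threshold are hypothetical, and the lifetime timeout and Unique fields behavior are omitted for brevity:

```python
from collections import defaultdict

# Events whose values in the identical fields match are grouped together;
# the sum fields are totaled; once the threshold is reached, the rule
# triggers and an aggregation event is emitted.
IDENTICAL_FIELDS = ("SourceAddress", "DeviceEventClassID")
SUM_FIELDS = ("BytesIn",)
THRESHOLD = 3

buckets = defaultdict(lambda: {"count": 0, **{f: 0 for f in SUM_FIELDS}})

def aggregate(event):
    key = tuple(event[f] for f in IDENTICAL_FIELDS)
    bucket = buckets[key]
    bucket["count"] += 1
    for field in SUM_FIELDS:
        bucket[field] += event[field]
    if bucket["count"] >= THRESHOLD:  # the rule triggers
        return {"key": key, **buckets.pop(key)}
    return None

events = [{"SourceAddress": "10.0.0.1", "DeviceEventClassID": "4624", "BytesIn": n}
          for n in (100, 200, 300)]
results = [aggregate(e) for e in events]
print(results[-1])
```

The first two events only accumulate in the bucket; the third reaches the threshold, so a single aggregation event carrying the count and the summed field is produced.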

Page top

[Topic 217863]

Enrichment rules

Enrichment rule resources are used to update the event fields.

Available Enrichment rule resource parameters:

  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • Source kind (required)—drop-down list for selecting the type of incoming events. Depending on the selected type, you may see the following additional settings:
    • constant

      This type of enrichment is used when a constant needs to be added to an event field. Settings of this type of enrichment:

      • In the Constant field, specify the value that should be added to the event field. The value should not be longer than 255 Unicode characters. If you leave this field blank, the existing event field value will be cleared.
      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.

    • dictionary

      This type of enrichment is used if you need to add a value from the dictionary to the event field.

      When this type is selected in the Dictionary name drop-down list, you must select the dictionary that will provide the values. In the Key fields settings block, you must use the Add field button to select the event fields whose values will be used for dictionary entry selection.

    • event

      This type of enrichment is used when you need to write a value from another event field to the current event field. Settings of this type of enrichment:

      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
      • In the Source field drop-down list, select the event field whose value will be written to the target field.
      • In the Conversion settings block, you can create rules for modifying the original data before it is written to the KUMA event fields. The conversion type can be selected from the drop-down list. You can use the Add conversion and Delete buttons to add or delete a conversion, respectively. The order of conversions is important.

        Available conversions

        Conversions are changes that can be applied to a value before it gets written to the event field. The conversion type is selected from a drop-down list.

        Available conversions:

        • lower—is used to make all characters of the value lowercase
        • upper—is used to make all characters of the value uppercase
        • regexp—used to convert a value using an RE2 regular expression. When this conversion type is selected, a field appears in which the regular expression must be specified.
        • substring—is used to extract characters in the position range specified in the Start and End fields. These fields appear when this conversion type is selected.
        • replace—is used to replace a specified character sequence with another character sequence. When this type of conversion is selected, new fields appear:
          • Replace chars—in this field you can specify the character sequence that should be replaced.
          • With chars—in this field you can specify the character sequence that should be used instead of the replaced characters.
        • trim—used to simultaneously remove the characters specified in the Chars field from the leading and end positions of the value. The field appears when this type of conversion is selected. For example, a trim conversion with the Micromon value applied to Microsoft-Windows-Sysmon results in soft-Windows-Sys.
        • append—is used to add the characters specified in the Constant field to the end of the event field value. The field appears when this type of conversion is selected.
        • prepend—used to prepend the characters specified in the Constant field to the start of the event field value. The field appears when this type of conversion is selected.
        • replace with regexp—is used to replace the results of an RE2 regular expression with a character sequence. When this type of conversion is selected, new fields appear:
          • Expression—in this field you can specify the regular expression whose results should be replaced.
          • With chars—in this field you can specify the character sequence that should be used instead of the replaced characters.
    • template

      This type of enrichment is used when you need to write a value obtained by processing Go templates into the event field. Settings of this type of enrichment:

      • Put the Go template into the Template field.

        Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field from which the value must be passed to the script.

        Example: Attack on {{.DestinationAddress}} from {{.SourceAddress}}.

      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
    • dns

      This type of enrichment is used to send requests to a private network DNS server to convert IP addresses into domain names or vice versa.

      Available settings:

      • URL—in this field, you can specify the URL of a DNS server to which you want to send requests. You can use the Add URL button to specify multiple URLs.
      • RPS—maximum number of requests sent to the server per second. The default value is 1000.
      • Workers—maximum number of requests made at the same time. The default value is 1.
      • Max tasks—maximum number of simultaneously fulfilled requests. By default, this value is equal to the number of vCPUs of the KUMA Core server.
      • Cache TTL—the lifetime of the values stored in the cache. The default value is 60.
      • Cache disabled—you can use this drop-down list to enable or disable caching. Caching is enabled by default.
    • cybertrace

      This type of enrichment is used to add information from CyberTrace data streams to event fields.

      Available settings:

      • URL (required)—in this field, you can specify the URL of a CyberTrace server to which you want to send requests.
      • Number of connections—maximum number of connections to the CyberTrace server that can be simultaneously established by KUMA. By default, this value is equal to the number of vCPUs of the KUMA Core server.
      • RPS—maximum number of requests sent to the server per second. The default value is 1000.
      • Timeout—amount of time to wait for a response from the CyberTrace server, in seconds. The default value is 30.
      • Mapping (required)—this settings block contains the table that maps KUMA event fields to CyberTrace indicator types. The KUMA field column shows the names of KUMA event fields, and the CyberTrace indicator column shows the types of CyberTrace indicators.

        Available types of CyberTrace indicators:

        • ip
        • url
        • hash

        The mapping table must contain at least one row. You can use the Add row button to add a row, and the cross button to remove a row.

    • timezone

      This type of enrichment is used in collectors and correlators to assign a specific timezone to an event. Timezone information may be useful when searching for events that occurred at unusual times, such as nighttime.

      When this type of enrichment is selected, the required timezone must be selected from the Timezone drop-down list.

      Make sure that the required time zone is available on the server hosting the service that uses this enrichment. For example, you can run the timedatectl list-timezones command, which shows all time zones available on the server. For more details on setting time zones, please refer to your operating system documentation.

      When an event is enriched, the time offset of the selected timezone relative to Coordinated Universal Time (UTC) is written to the DeviceTimeZone event field in the +-hh:mm format. For example, if you select the Asia/Yekaterinburg timezone, the value +05:00 will be written to the DeviceTimeZone field. If the enriched event already has a value in the DeviceTimeZone field, it will be overwritten.
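As an illustration, you can check the UTC offset that a timezone resolves to on the server with the standard date utility. This is a sketch only; the Asia/Yekaterinburg zone is simply the example from above:

```shell
# Print the UTC offset of the Asia/Yekaterinburg zone in +-hhmm format.
# This is the same offset (written as +hh:mm) that KUMA stores in DeviceTimeZone.
TZ=Asia/Yekaterinburg date +%z   # prints +0500
```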

      By default, if the timezone is not specified in the event being processed and enrichment rules by timezone are not configured, the event is assigned the timezone of the server hosting the service (collector or correlator) that processes the event. If the server time is changed, the service must be restarted.

      Permissible time formats when enriching the DeviceTimeZone field

      When processing incoming raw events in the collector, the following time formats can be automatically converted to the +-hh:mm format:

      • +-hh:mm (example: -07:00)
      • +-hhmm (example: -0700)
      • +-hh (example: -07)

      If the date format in the DeviceTimeZone field differs from the formats listed above, the collector server timezone is written to the field when an event is enriched with timezone information. You can create custom normalization rules for non-standard time formats.

    • geographic data

      This type of enrichment is used to add IP address geographic data to event fields. Learn more about linking IP addresses to geographic data.

      When this type is selected, in the Mapping geographic data to event fields settings block, you must specify from which event field the IP address will be read, select the required attributes of geographic data, and define the event fields in which geographic data will be written:

      1. In the Event field with IP address drop-down list, select the event field from which the IP address is read. Geographic data uploaded to KUMA is matched against this IP address.

        You can use the Add event field with IP address button to specify multiple event fields with IP addresses that require geographic data enrichment. You can delete event fields added in this way by clicking the Delete event field with IP address button.

        When the SourceAddress, DestinationAddress, and DeviceAddress event fields are selected, the Apply default mapping button becomes available. You can use this button to add preconfigured mapping pairs of geographic data attributes and event fields.

      2. For each event field you need to read the IP address from, select the type of geographic data and the event field to which the geographic data should be written.

        You can use the Add geodata attribute button to add pairs of Geodata attribute and Event field to write to fields. You can also configure different types of geographic data for one IP address to be written to different event fields. To delete a field pair, click the cross button.

        • In the Geodata attribute field, select which geographic data corresponding to the read IP address should be written to the event. Available geographic data attributes: Country, Region, City, Longitude, Latitude.
        • In the Event field to write to, select the event field to which the selected geographic data attribute must be written.

        You can write identical geographic data attributes to different event fields. If you configure multiple geographic data attributes to be written to the same event field, the event will be enriched with the last mapping in the sequence.

  • Debug—you can use this drop-down list to enable logging of service operations. Logging is disabled by default.
  • Description—up to 256 Unicode characters describing the resource.
  • Filter—settings block in which you can specify the conditions for identifying events that will be processed by this resource. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry whose key is composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        Selecting this check box has no effect on the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit button.

Page top

[Topic 217842]

Destinations

Destination resources are used to receive events and then forward them to other services. The settings of destinations are configured on two tabs: Basic settings and Advanced settings. The available settings depend on the selected type of destination:

  • nats—used for NATS communications.
  • tcp—used for communications over TCP.
  • http—used for HTTP communications.
  • diode—used to transmit events using a data diode.
  • kafka—used for Kafka communications.
  • file—used for writing to a file.
  • storage—used to transmit data to the storage.
  • correlator—used to transmit data to the correlator.

In this section

Nats type

Tcp type

Http type

Diode type

Kafka type

File type

Storage type

Correlator type

Page top

[Topic 232952]

Nats type

The nats type is used for NATS communications.

Available settings:

Basic settings tab:

  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • Disabled toggle switch—used if you do not need to send events to a destination. By default, sending events is enabled.
  • Type (required)—destination type (nats).
  • URL (required)—URL that you need to connect to.
  • Topic (required)—the topic for NATS messages. Must contain from 1 to 255 Unicode characters.
  • Delimiter is used to specify a character representing the delimiter between events. By default, \n is used.
  • Authorization—type of authorization when connecting to the specified URL:
    • disabled (by default).
    • plain.

      If this option is selected, you must indicate the secret containing user account credentials for authorization when connecting to the connector.

      Add secret

      1. If you previously created a secret, select it from the Secret drop-down list.

        If no secret was previously added, the drop-down list shows No data.

      2. If you want to add a new secret, click the plus button on the right of the Secret list.

        The Secret window opens.

      3. In the Name field, enter the name that will be used to display the secret in the list of available secrets.
      4. In the User and Password fields, enter the credentials of the user account that the Agent will use to connect to the connector.
      5. If necessary, add any other information about the secret in the Description field.
      6. Click the Save button.

      The secret will be added and displayed in the Secret list.

  • Description—up to 256 Unicode characters describing the resource.

Advanced settings tab:

  • Compression—you can use Snappy compression. By default, compression is disabled.
  • Buffer size is used to set the size of the buffer. The default value is 16 KB, and the maximum value is 64 KB.
  • Timeout field is used to set the timeout (in seconds) for another service or component response. The default value is 30.
  • Disk buffer size limit field is used to specify the size of the disk buffer in bytes. The default size is 10 GB.
  • Storage ID is a NATS storage identifier.
  • TLS mode specifies whether TLS encryption is used:
    • Disabled (default)—do not use TLS encryption.
    • Enabled—use encryption without certificate verification.
    • With verification—use encryption with verification that the certificate was signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during program installation and are stored on the KUMA Core server in the folder /opt/kaspersky/kuma/core/certificates/.
    • Custom CA—use encryption with verification that the certificate was signed by a Certificate Authority. The secret containing the certificate is selected from the Custom CA drop-down list, which is displayed when this option is selected.

      Creating a certificate signed by a Certificate Authority

      To use this TLS mode, you must do the following on the KUMA Core server (OpenSSL commands are used in the examples below):

      1. Create the key that will be used by the Certificate Authority.

        Example command: openssl genrsa -out ca.key 2048

      2. Generate a certificate for the key that was just created.

        Example command: openssl req -new -x509 -days 365 -key ca.key -subj "/CN=<common host name of Certificate Authority>" -out ca.crt

      3. Create a private key and a request to have it signed by the Certificate Authority.

        Example command: openssl req -newkey rsa:2048 -nodes -keyout server.key -subj "/CN=<common host name of KUMA server>" -out server.csr

      4. Create a certificate signed by the Certificate Authority. The subjectAltName must include the domain names or IP addresses of the server for which the certificate is being created.

        Example command: openssl x509 -req -extfile <(printf "subjectAltName=DNS:domain1.ru,DNS:domain2.com,IP:192.168.0.1") -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt

      5. The obtained server.crt certificate should be uploaded in the KUMA web interface as a certificate-type secret, which should then be selected from the Custom CA drop-down list.
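The steps above can be condensed into a single script that also verifies the resulting certificate chain before you upload it. This is a sketch only: the CN values, SAN entries, and file names are placeholder examples rather than values required by KUMA, and the subjectAltName is written to a file instead of using bash process substitution so that the commands work in any POSIX shell:

```shell
# Sketch of the certificate procedure above; CN and SAN values are placeholders.
# 1. Create the key that will be used by the Certificate Authority.
openssl genrsa -out ca.key 2048
# 2. Generate a self-signed certificate for the CA key.
openssl req -new -x509 -days 365 -key ca.key -subj "/CN=example-ca" -out ca.crt
# 3. Create the server private key and a certificate signing request.
openssl req -newkey rsa:2048 -nodes -keyout server.key \
  -subj "/CN=kuma.example.com" -out server.csr
# 4. Sign the request with the CA; subjectAltName must list the server's
#    domain names and/or IP addresses.
printf "subjectAltName=DNS:kuma.example.com,IP:192.168.0.1" > san.ext
openssl x509 -req -extfile san.ext -days 365 -in server.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt
# 5. Verify the chain before uploading server.crt as a certificate-type secret.
openssl verify -CAfile ca.crt server.crt   # prints "server.crt: OK" on success
```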

    When using TLS, it is impossible to specify an IP address as a URL.

  • Delimiter is used to specify the character delimiting the events. By default, \n is used.
  • Buffer flush interval—this field is used to set the time interval (in seconds) at which the data is sent to the destination. The default value is 100.
  • Workers—this field is used to set the number of services processing the queue. By default, this value is equal to the number of vCPUs of the KUMA Core server.
  • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
  • The Disk buffer disabled drop-down list is used to enable or disable the use of a disk buffer. By default, the disk buffer is disabled.
  • In the Filter section, you can specify the conditions to define events that will be processed by this resource. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry whose key is composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        Selecting this check box has no effect on the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit button.

Page top

[Topic 232960]

Tcp type

The tcp type is used for TCP communications.

Available settings:

Basic settings tab:

  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • Disabled toggle switch—used if you do not need to send events to a destination. By default, sending events is enabled.
  • Type (required)—destination type (tcp).
  • URL (required)—URL that you need to connect to. Available formats: hostname:port, IPv4:port, IPv6:port, :port. You can use the URL button to add multiple addresses if your KUMA license includes the High Level Availability module.
  • Description—up to 256 Unicode characters describing the resource.

Advanced settings tab:

  • Compression—you can use Snappy compression. By default, compression is disabled.
  • Buffer size is used to set the size of the buffer. The default value is 16 KB, and the maximum value is 64 KB.
  • Timeout field is used to set the timeout (in seconds) for another service or component response. The default value is 30.
  • Disk buffer size limit field is used to specify the size of the disk buffer in bytes. The default size is 10 GB.
  • TLS mode specifies whether TLS encryption is used:
    • Disabled (default)—do not use TLS encryption.
    • Enabled—use encryption without certificate verification.
    • With verification—use encryption with verification that the certificate was signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during program installation and are stored on the KUMA Core server in the folder /opt/kaspersky/kuma/core/certificates/.

    When using TLS, it is impossible to specify an IP address as a URL.

  • Delimiter is used to specify the character delimiting the events. By default, \n is used.
  • Buffer flush interval—this field is used to set the time interval (in seconds) at which the data is sent to the destination. The default value is 100.
  • Workers—this field is used to set the number of services processing the queue. By default, this value is equal to the number of vCPUs of the KUMA Core server.
  • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
  • The Disk buffer disabled drop-down list is used to enable or disable the use of a disk buffer. By default, the disk buffer is disabled.
  • In the Filter section, you can specify the conditions to define events that will be processed by this resource. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry whose key is composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        Selecting this check box has no effect on the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit button.

Page top

[Topic 232961]

Http type

The http type is used for HTTP communications.

Available settings:

Basic settings tab:

  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • Disabled toggle switch—used if you do not need to send events to a destination. By default, sending events is enabled.
  • Type (required)—destination type (http).
  • URL (required)—URL that you need to connect to. Available formats: hostname:port, IPv4:port, IPv6:port, :port. You can use the URL button to add multiple addresses if your KUMA license includes the High Level Availability module.
  • Authorization—type of authorization when connecting to the specified URL:
    • disabled (by default).
    • plain.

      If this option is selected, you must indicate the secret containing user account credentials for authorization when connecting to the connector.

      Add secret

      1. If you previously created a secret, select it from the Secret drop-down list.

        If no secret was previously added, the drop-down list shows No data.

      2. If you want to add a new secret, click the plus button on the right of the Secret list.

        The Secret window opens.

      3. In the Name field, enter the name that will be used to display the secret in the list of available secrets.
      4. In the User and Password fields, enter the credentials of the user account that the Agent will use to connect to the connector.
      5. If necessary, add any other information about the secret in the Description field.
      6. Click the Save button.

      The secret will be added and displayed in the Secret list.

  • Description—up to 256 Unicode characters describing the resource.

Advanced settings tab:

  • Compression—you can use Snappy compression. By default, compression is disabled.
  • Proxy is a drop-down list for proxy server resource selection.
  • Buffer size is used to set the size of the buffer. The default value is 16 KB, and the maximum value is 64 KB.
  • Timeout field is used to set the timeout (in seconds) for another service or component response. The default value is 30.
  • Disk buffer size limit field is used to specify the size of the disk buffer in bytes. The default size is 10 GB.
  • TLS mode specifies whether TLS encryption is used:
    • Disabled (default)—do not use TLS encryption.
    • Enabled—use encryption without certificate verification.
    • With verification—use encryption with verification that the certificate was signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during program installation and are stored on the KUMA Core server in the folder /opt/kaspersky/kuma/core/certificates/.
    • Custom CA—use encryption with verification that the certificate was signed by a Certificate Authority. The secret containing the certificate is selected from the Custom CA drop-down list, which is displayed when this option is selected.

      Creating a certificate signed by a Certificate Authority

      To use this TLS mode, you must do the following on the KUMA Core server (OpenSSL commands are used in the examples below):

      1. Create the key that will be used by the Certificate Authority.

        Example command: openssl genrsa -out ca.key 2048

      2. Generate a certificate for the key that was just created.

        Example command: openssl req -new -x509 -days 365 -key ca.key -subj "/CN=<common host name of Certificate Authority>" -out ca.crt

      3. Create a private key and a request to have it signed by the Certificate Authority.

        Example command: openssl req -newkey rsa:2048 -nodes -keyout server.key -subj "/CN=<common host name of KUMA server>" -out server.csr

      4. Create a certificate signed by the Certificate Authority. The subjectAltName must include the domain names or IP addresses of the server for which the certificate is being created.

        Example command: openssl x509 -req -extfile <(printf "subjectAltName=DNS:domain1.ru,DNS:domain2.com,IP:192.168.0.1") -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt

      5. The obtained server.crt certificate should be uploaded in the KUMA web interface as a certificate-type secret, which should then be selected from the Custom CA drop-down list.

    When using TLS, it is impossible to specify an IP address as a URL.

  • URL selection policy is a drop-down list in which you can select a method for determining which URL to send events to if several URLs have been specified:
    • Any. Events are sent to one of the available URLs as long as this URL is receiving events. If the connection is broken (for example, if the receiving node is disconnected), a different URL is selected as the destination for events.
    • Prefer first. Events are sent to the first URL in the list of added addresses. If it becomes unavailable, events are sent to the next available node in sequence. When the first URL becomes available again, events start to be sent to it again.
    • Round robin. Packets with events are distributed among the available URLs from the list. Because packets are sent either on a destination buffer overflow or on the flush timer, this URL selection policy does not guarantee an equal distribution of events to destinations.
  • Delimiter is used to specify the character delimiting the events. By default, \n is used.
  • Path—the path that must be added for the URL request. For example, if you specify the path /input and enter 10.10.10.10 for the URL, requests for 10.10.10.10/input will be sent from the destination.
  • Buffer flush interval—this field is used to set the time interval (in seconds) at which the data is sent to the destination. The default value is 100.
  • Workers—this field is used to set the number of services processing the queue. By default, this value is equal to the number of vCPUs of the KUMA Core server.
  • You can set health checks using the Health check path and Health check timeout fields. You can also disable health checks by selecting the Health Check Disabled check box.
  • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
  • The Disk buffer disabled drop-down list is used to enable or disable the use of a disk buffer. By default, the disk buffer is disabled.
  • In the Filter section, you can specify the conditions to define events that will be processed by this resource. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry whose key is composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit-grey button.
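Some operator semantics are easy to try outside KUMA. For example, the hasBit check described above (is the bit at a given position set in the left operand?) corresponds to a simple bitwise test. This sketch is only an illustration, not KUMA code:

```shell
# has_bit VALUE POSITION — succeeds if bit POSITION of VALUE is set.
has_bit() {
  [ $(( ($1 >> $2) & 1 )) -eq 1 ]
}

has_bit 6 1 && echo "bit 1 of 6 is set"      # 6 = binary 110
has_bit 6 0 || echo "bit 0 of 6 is not set"
```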

Page top

[Topic 232967]

Diode type

The diode type is used to transmit events using a data diode.

Available settings:

Basic settings tab:

  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • Disabled toggle switch—used if you do not need to send events to a destination. By default, sending events is enabled.
  • Type (required)—destination type (diode).
  • Data diode source directory (required)—the directory from which the data diode transfers events. The path can contain up to 255 Unicode characters.

    Limitations when using prefixes in paths on Windows servers

    On Windows servers, absolute paths to directories must be specified. Directories with names matching the following regular expressions cannot be used:

    • ^[a-zA-Z]:\\Program Files
    • ^[a-zA-Z]:\\Program Files \(x86\)
    • ^[a-zA-Z]:\\Windows
    • ^[a-zA-Z]:\\Program Files\\Kaspersky Lab\\KUMA

    Limitations when using prefixes in paths on Linux servers

    Prefixes that cannot be used when specifying paths to files:

    • /*
    • /bin
    • /boot
    • /dev
    • /etc
    • /home
    • /lib
    • /lib64
    • /proc
    • /root
    • /run
    • /sys
    • /tmp
    • /usr/*
    • /usr/bin/
    • /usr/local/*
    • /usr/local/sbin/
    • /usr/local/bin/
    • /usr/sbin/
    • /usr/lib/
    • /usr/lib64/
    • /var/*
    • /var/lib/
    • /var/run/
    • /opt/kaspersky/kuma/

    Files are available at the following paths:

    • /opt/kaspersky/kuma/clickhouse/logs/
    • /opt/kaspersky/kuma/mongodb/log/
    • /opt/kaspersky/kuma/victoria-metrics/log/
  • Temporary directory—directory in which events are prepared for transmission to the data diode.

    Events are collected in a file when a timeout (10 seconds by default) or a buffer overflow occurs. The prepared file is moved to the directory specified in the Data diode source directory field. The checksum (SHA-256) of the file contents is used as the name of the file containing events.

    The temporary directory must be different from the data diode source directory.

  • Description—up to 256 Unicode characters describing the resource.
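The file naming described for the Temporary directory can be illustrated with a short sketch. This is not KUMA code; the two directories are stand-ins created with mktemp:

```shell
tmp_dir=$(mktemp -d)   # stands in for the Temporary directory
src_dir=$(mktemp -d)   # stands in for the Data diode source directory

# Events are accumulated in a file, delimited by \n ...
printf 'event1\nevent2\n' > "$tmp_dir/buffer"

# ... and the prepared file is moved under the SHA-256 checksum of its
# contents, which becomes the file name in the source directory.
sum=$(sha256sum "$tmp_dir/buffer" | cut -d' ' -f1)
mv "$tmp_dir/buffer" "$src_dir/$sum"

ls "$src_dir"
```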

Advanced settings tab:

  • Compression—you can use Snappy compression. By default, compression is disabled.

    This setting must match for the connector and destination resources used to relay events from an isolated network segment via the data diode.

  • Buffer size is used to set the size of the buffer. The default and maximum value is 64 MB.
  • Timeout—field in which you can specify the interval (in seconds) at which the data is moved from the temporary directory to the directory for the data diode. The default value is 10.
  • Delimiter is used to specify the character delimiting the events. By default, \n is used.

    This setting must match for the connector and destination resources used to relay events from an isolated network segment via the data diode.

  • Buffer flush interval—this field is used to set the time interval (in seconds) at which the data is sent to the destination. The default value is 100.
  • Workers—this field is used to set the number of services processing the queue. By default, this value is equal to the number of vCPUs of the KUMA Core server.
  • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
  • The Disk buffer disabled drop-down list is used to enable or disable the use of a disk buffer. By default, the disk buffer is disabled.
  • In the Filter section, you can specify the conditions to define events that will be processed by this resource. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry whose key is composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit-grey button.

Page top

[Topic 232962]

Kafka type

The kafka type is used to send events to Kafka.

Available settings:

Basic settings tab:

  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • Disabled toggle switch—used if you do not need to send events to a destination. By default, sending events is enabled.
  • Type (required)—destination type (kafka).
  • URL (required)—URL that you need to connect to. Available formats: hostname:port, IPv4:port, IPv6:port, :port. You can use the URL button to add multiple addresses if your KUMA license includes the High Level Availability module.
  • Topic (required)—the topic for Kafka messages. Must contain from 1 to 255 of the following characters: a–z, A–Z, 0–9, ".", "_", "-".
  • Delimiter is used to specify a character representing the delimiter between events. By default, \n is used.
  • Authorization—type of authorization when connecting to the specified URL:
    • disabled (by default).
    • PFX.

      When this option is selected, a certificate and private key must be generated in PKCS#12 container format by an external Certificate Authority. The certificate must then be exported from the key store and uploaded to the KUMA web interface as a PFX secret.

      Add PFX secret

      1. If you previously uploaded a PFX certificate, select it from the Secret drop-down list.

        If no certificate was previously added, the drop-down list shows No data.

      2. If you want to add a new certificate, click the AD_plus button on the right of the Secret list.

        The Secret window opens.

      3. In the Name field, enter the name that will be used to display the secret in the list of available secrets.
      4. Click the Upload PFX button to select the file containing your previously exported certificate with a private key in PKCS#12 container format.
      5. In the Password field, enter the certificate security password that was set in the Certificate Export Wizard.
      6. Click the Save button.

      The certificate will be added and displayed in the Secret list.
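If the external CA provides the certificate and private key as separate PEM files, the PKCS#12 (PFX) container for upload can be produced with OpenSSL. The file names and password below are placeholders, and the self-signed certificate is generated only to keep the sketch self-contained:

```shell
# Stand-in for a CA-issued certificate and key (placeholder names).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=kafka-client" -keyout client.key -out client.crt

# Pack the certificate and key into a PFX container; the password set
# here is what you enter in the Password field when adding the secret.
openssl pkcs12 -export -inkey client.key -in client.crt \
  -passout pass:changeit -out client.pfx

# Confirm the container opens with that password.
openssl pkcs12 -in client.pfx -passin pass:changeit -noout && echo "PFX OK"
```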

    • plain.

      If this option is selected, you must indicate the secret containing user account credentials for authorization when connecting to the connector.

      Add secret

      1. If you previously created a secret, select it from the Secret drop-down list.

        If no secret was previously added, the drop-down list shows No data.

      2. If you want to add a new secret, click the AD_plus button on the right of the Secret list.

        The Secret window opens.

      3. In the Name field, enter the name that will be used to display the secret in the list of available secrets.
      4. In the User and Password fields, enter the credentials of the user account that the Agent will use to connect to the connector.
      5. If necessary, add any other information about the secret in the Description field.
      6. Click the Save button.

      The secret will be added and displayed in the Secret list.

  • Description—up to 256 Unicode characters describing the resource.
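The Topic constraint above (1 to 255 characters from a–z, A–Z, 0–9, ".", "_", "-") can be checked before creating the resource. This sketch is only an illustration:

```shell
# valid_topic NAME — succeeds if NAME satisfies the Topic constraints.
valid_topic() {
  printf '%s' "$1" | grep -Eq '^[a-zA-Z0-9._-]{1,255}$'
}

valid_topic "kuma-events" && echo "accepted"
valid_topic "bad topic"   || echo "rejected"
```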

Advanced settings tab:

  • Buffer size is used to set the size of the buffer. The default value is 16 KB, and the maximum value is 64 KB.
  • Timeout field is used to set the timeout (in seconds) for another service or component response. The default value is 30.
  • Disk buffer size limit field is used to specify the size of the disk buffer in bytes. The default size is 10 GB.
  • TLS mode specifies whether TLS encryption is used:
    • Disabled (default)—do not use TLS encryption.
    • Enabled—use encryption without certificate verification.
    • With verification—use encryption with verification that the certificate was signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during program installation and are stored on the KUMA Core server in the folder /opt/kaspersky/kuma/core/certificates/.
    • Custom CA—use encryption with verification that the certificate was signed by a Certificate Authority. The secret containing the certificate is selected from the Custom CA drop-down list, which is displayed when this option is selected.

      Creating a certificate signed by a Certificate Authority

      To use this TLS mode, you must do the following on the KUMA Core server (OpenSSL commands are used in the examples below):

      1. Create the key that will be used by the Certificate Authority.

        Example command: openssl genrsa -out ca.key 2048

      2. Generate a certificate for the key that was just created.

        Example command: openssl req -new -x509 -days 365 -key ca.key -subj "/CN=<common host name of Certificate Authority>" -out ca.crt

      3. Create a private key and a request to have it signed by the Certificate Authority.

        Example command: openssl req -newkey rsa:2048 -nodes -keyout server.key -subj "/CN=<common host name of KUMA server>" -out server.csr

      4. Create a certificate signed by the Certificate Authority. The subjectAltName must include the domain names or IP addresses of the server for which the certificate is being created.

        Example command: openssl x509 -req -extfile <(printf "subjectAltName=DNS:domain1.ru,DNS:domain2.com,IP:192.168.0.1") -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt

      5. The obtained server.crt certificate should be uploaded to the KUMA web interface as a certificate-type secret, which should then be selected from the Custom CA drop-down list.

    When TLS is enabled, you cannot specify an IP address as the URL.
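Because a TLS URL must use a name covered by the certificate, it can be worth checking the subjectAltName before uploading. This sketch assumes OpenSSL 1.1.1 or later (for the -addext and -ext options) and uses a throwaway self-signed certificate with placeholder names:

```shell
# Throwaway certificate with a SAN entry (placeholder host name).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=broker" -addext "subjectAltName=DNS:kafka.example.com" \
  -keyout broker.key -out broker.crt

# Print the SAN extension; the host from the URL field must appear here.
openssl x509 -in broker.crt -noout -ext subjectAltName
```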

  • Delimiter is used to specify the character delimiting the events. By default, \n is used.
  • Buffer flush interval—this field is used to set the time interval (in seconds) at which the data is sent to the destination. The default value is 100.
  • Workers—this field is used to set the number of services processing the queue. By default, this value is equal to the number of vCPUs of the KUMA Core server.
  • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
  • The Disk buffer disabled drop-down list is used to enable or disable the use of a disk buffer. By default, the disk buffer is disabled.
  • In the Filter section, you can specify the conditions to define events that will be processed by this resource. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry whose key is composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit-grey button.

Page top

[Topic 232965]

File type

The file type is used for writing data to a file.

Available settings:

Basic settings tab:

  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • Disabled toggle switch—used if you do not need to send events to a destination. By default, sending events is enabled.
  • Type (required)—destination type (file).
  • URL (required)—path to the file in which the events must be written.

    Limitations when using prefixes in file paths

    Prefixes that cannot be used when specifying paths to files:

    • /*
    • /bin
    • /boot
    • /dev
    • /etc
    • /home
    • /lib
    • /lib64
    • /proc
    • /root
    • /run
    • /sys
    • /tmp
    • /usr/*
    • /usr/bin/
    • /usr/local/*
    • /usr/local/sbin/
    • /usr/local/bin/
    • /usr/sbin/
    • /usr/lib/
    • /usr/lib64/
    • /var/*
    • /var/lib/
    • /var/run/
    • /opt/kaspersky/kuma/

    Files are available at the following paths:

    • /opt/kaspersky/kuma/clickhouse/logs/
    • /opt/kaspersky/kuma/mongodb/log/
    • /opt/kaspersky/kuma/victoria-metrics/log/
  • Description—up to 256 Unicode characters describing the resource.

Advanced settings tab:

  • Buffer size is used to set the size of the buffer. The default value is 16 KB, and the maximum value is 64 KB.
  • Timeout field is used to set the timeout (in seconds) for another service or component response. The default value is 30.
  • Disk buffer size limit field is used to specify the size of the disk buffer in bytes. The default size is 10 GB.
  • Delimiter is used to specify the character delimiting the events. By default, \n is used.
  • Buffer flush interval—this field is used to set the time interval (in seconds) at which the data is sent to the destination. The default value is 100.
  • Workers—this field is used to set the number of services processing the queue. By default, this value is equal to the number of vCPUs of the KUMA Core server.
  • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
  • The Disk buffer disabled drop-down list is used to enable or disable the use of a disk buffer. By default, the disk buffer is disabled.
  • In the Filter section, you can specify the conditions to define events that will be processed by this resource. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry whose key is composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit-grey button.

Page top

[Topic 232973]

Storage type

The storage type is used to transmit data to the storage.

Available settings:

Basic settings tab:

  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • Disabled toggle switch—used if you do not need to send events to a destination. By default, sending events is enabled.
  • Type (required)—destination type (storage).
  • URL (required)—URL that you need to connect to. Available formats: hostname:port, IPv4:port, IPv6:port, :port.

    You can use the URL button to add multiple addresses, even if your KUMA license does not include the High Level Availability module.

    The URL field can be populated automatically by using the Copy service URL drop-down list that displays the active services of the selected type.

  • Description—up to 256 Unicode characters describing the resource.

Advanced settings tab:

  • Proxy is a drop-down list for proxy server resource selection.
  • Buffer size is used to set the size of the buffer. The default value is 16 KB, and the maximum value is 64 KB.
  • Timeout field is used to set the timeout (in seconds) for another service or component response. The default value is 30.
  • Disk buffer size limit field is used to specify the size of the disk buffer in bytes. The default size is 10 GB.
  • URL selection policy is a drop-down list in which you can select a method for determining which URL to send events to if several URLs have been specified:
    • Any. Events are sent to one of the available URLs as long as this URL receives events. If the connection is broken (for example, the receiving node is disconnected) a different URL will be selected as the events destination.
    • Prefer first. Events are sent to the first URL in the list of added addresses. If it becomes unavailable, events are sent to the next available node in sequence. When the first URL becomes available again, events start to be sent to it again.
    • Round robin. Packets with events will be evenly distributed among available URLs from the list. Because packets are sent either on a destination buffer overflow or on the flush timer, this URL selection policy does not guarantee an equal distribution of events to destinations.
  • Buffer flush interval—this field is used to set the time interval (in seconds) at which the data is sent to the destination. The default value is 100.
  • Workers—this field is used to set the number of services processing the queue. By default, this value is equal to the number of vCPUs of the KUMA Core server.
  • Health check timeout—health check frequency in seconds.
  • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
  • The Disk buffer disabled drop-down list is used to enable or disable the use of a disk buffer. By default, the disk buffer is disabled.
  • In the Filter section, you can specify the conditions to define events that will be processed by this resource. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry whose key is composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit button.
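The three URL selection policies described in the advanced settings can be illustrated with a short sketch. This is a hypothetical model for illustration only, not KUMA code; the UrlSelector class and its pick method are invented names.

```python
import itertools

class UrlSelector:
    """Illustrative model of the three URL selection policies (not KUMA code)."""

    def __init__(self, urls, policy="any"):
        self.urls = list(urls)
        self.policy = policy
        self._rr = itertools.cycle(self.urls)   # round-robin iterator
        self._current = self.urls[0]            # sticky choice for the "any" policy

    def pick(self, available):
        """Return the URL the next event batch goes to.

        `available` is the set of URLs that currently pass health checks.
        """
        if self.policy == "prefer_first":
            # Always prefer the earliest healthy URL in the configured order.
            for url in self.urls:
                if url in available:
                    return url
        elif self.policy == "round_robin":
            # Spread batches evenly over healthy URLs.
            for url in self._rr:
                if url in available:
                    return url
        else:  # "any": keep using the current URL until it fails
            if self._current not in available:
                self._current = next(u for u in self.urls if u in available)
            return self._current

selector = UrlSelector(["node1:9000", "node2:9000"], policy="prefer_first")
print(selector.pick({"node2:9000"}))                 # node1 down -> node2:9000
print(selector.pick({"node1:9000", "node2:9000"}))   # node1 back -> node1:9000
```

Note how "prefer first" falls back to the next node in sequence and returns to the first URL as soon as it is healthy again, exactly as described above.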

Page top

[Topic 232976]

Correlator type

The correlator type is used to transmit data to the correlator.

Available settings:

Basic settings tab:

  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • Disabled toggle switch—used if you do not need to send events to a destination. By default, sending events is enabled.
  • Type (required)—destination type (correlator).
  • URL (required)—URL that you need to connect to. Available formats: hostname:port, IPv4:port, IPv6:port, :port. You can use the URL button to add multiple addresses if your KUMA license includes the High Level Availability module.

    The URL field can be populated automatically by using the Copy service URL drop-down list that displays the active services of the selected type.

  • Description—up to 256 Unicode characters describing the resource.

Advanced settings tab:

  • Proxy is a drop-down list for proxy server resource selection.
  • Buffer size is used to set the size of the buffer. The default value is 16 KB, and the maximum value is 64 KB.
  • Timeout field is used to set the timeout (in seconds) for another service or component response. The default value is 30.
  • Disk buffer size limit field is used to specify the size of the disk buffer in bytes. The default size is 10 GB.
  • URL selection policy is a drop-down list in which you can select a method for determining which URL to send events to if several URLs have been specified:
    • Any. Events are sent to one of the available URLs as long as this URL receives events. If the connection is broken (for example, the receiving node is disconnected) a different URL will be selected as the events destination.
    • Prefer first. Events are sent to the first URL in the list of added addresses. If it becomes unavailable, events are sent to the next available node in sequence. When the first URL becomes available again, events start to be sent to it again.
    • Round robin. Packets with events will be evenly distributed among available URLs from the list. Because packets are sent either on a destination buffer overflow or on the flush timer, this URL selection policy does not guarantee an equal distribution of events to destinations.
  • Buffer flush interval—this field is used to set the time interval (in seconds) at which the data is sent to the destination. The default value is 100.
  • Workers—this field is used to set the number of services processing the queue. By default, this value is equal to the number of vCPUs of the KUMA Core server.
  • Health check timeout—health check frequency in seconds.
  • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
  • The Disk buffer disabled drop-down list is used to enable or disable the use of a disk buffer. By default, the disk buffer is disabled.
  • In the Filter section, you can specify the conditions to define events that will be processed by this resource. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry whose key is composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit button.

Page top

[Topic 217880]

Filters

Filter resources are used to select events based on user-defined conditions.

The exception is filters used in the collector service: there, the filters select all events that DO NOT satisfy the filter conditions.

Filters can be used in collector services, enrichment rule resources, aggregation rule resources, response rule resources, correlation rule resources, and destination resources either as separate filter resources or as built-in filters stored in the service or resource where they were created.

For these resources, you can enable the display of control characters in all input fields except the Description field.

Available settings for filter resources:

  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters. Inline filters are created in other resources or services and do not have names.
  • Tenant (required)—name of the tenant that owns the resource.
  • Conditions settings block—here you can formulate filtering criteria by creating filter conditions and groups of filters, and by adding existing filter resources.

    You can use the Add group button to add a group of filters. Group operators can be switched between AND, OR, and NOT. Groups, conditions, and existing filter resources can be added to groups of filters.

    You can use the Add filter button to add an existing filter resource, which should be selected in the Select filter drop-down list.

    You can use the Add condition button to add a string containing fields for identifying the condition (see below).

    Conditions, groups, and filters can be deleted by using the cross button.

Settings of conditions:

  • When (required)—in this drop-down list, you can specify whether to invert the operator: select If for a regular condition or If not for a negated one.
  • Left operand and Right operand (required)—used to specify the values that the operator will process. The available types depend on the selected operator.

    Operands of filters

    • Event field—used to assign an event field value to the operand. Advanced settings:
      • Event field (required)—this drop-down list is used to select the field from which the value for the operand should be extracted.
    • Active list—used to assign an active list record value to the operand. Advanced settings:
      • Active list (required)—this drop-down list is used to select the active list.
      • Key fields (required)—this is the list of event fields used to create the Active list entry and serve as the Active list entry key.
      • Field (required unless the inActiveList operator is selected)—used to enter the Active list field name from which the value for the operand should be extracted.
    • Dictionary—used to assign a dictionary resource value to the operand. Advanced settings:
      • Dictionary (required)—this drop-down list is used to select the dictionary.
      • Key fields (required)—this is the list of the event fields used to form the dictionary value key.
    • Constant—used to assign a custom value to the operand. Advanced settings:
      • Value (required)—here you enter the constant that you want to assign to the operand.
    • Table—used to assign multiple custom values to the operand. Advanced settings:
      • Dictionary (required)—this drop-down list is used to select a Table-type dictionary.
      • Key fields (required)—this is the list of the event fields used to form the dictionary value key.
    • List—used to assign multiple custom values to the operand. Advanced settings:
      • Value (required)—here you enter the list of constants that you want to assign to the operand. When you type the value in the field and press ENTER, the value is added to the list and you can enter a new value.
    • TI—used to read the CyberTrace threat intelligence (TI) data from the events. Advanced settings:
      • Feed (required)—this field is used to specify the CyberTrace threat category.
      • Key fields (required)—this drop-down list is used to select the event field containing the CyberTrace threat indicators.
      • Field (required)—this field is used to specify the CyberTrace feed field containing the threat indicators.
  • Operator (required)—used to select the condition operator.

    In this drop-down list, you can select the do not match case check box if the operator should ignore the case of values. This check box is ignored if the inSubnet, inActiveList, inCategory, inActiveDirectoryGroup, hasBit, or inDictionary operators are selected. This check box is cleared by default.

    Filter operators

    • =—the left operand equals the right operand.
    • <—the left operand is less than the right operand.
    • <=—the left operand is less than or equal to the right operand.
    • >—the left operand is greater than the right operand.
    • >=—the left operand is greater than or equal to the right operand.
    • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
    • contains—the left operand contains values of the right operand.
    • startsWith—the left operand starts with one of the values of the right operand.
    • endsWith—the left operand ends with one of the values of the right operand.
    • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
    • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
    • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
    • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
    • inDictionary—checks whether the specified dictionary contains an entry whose key is composed of the concatenated values of the selected event fields.
    • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
    • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
    • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
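Several of the operators above can be modeled in a few lines of Python. This is an illustrative sketch only: the function names are invented, Python's re module only approximates the RE2 engine that KUMA uses, and hasBit is shown under one plausible reading (every listed bit position must be set).

```python
import ipaddress
import re

def in_subnet(left, right):
    """inSubnet: the left operand (IP address) lies in the right operand (subnet)."""
    return ipaddress.ip_address(left) in ipaddress.ip_network(right)

def has_bit(left, positions):
    """hasBit (one plausible reading): every listed bit position is set
    in the numeric value of the left operand."""
    value = int(left)
    return all(value & (1 << p) for p in positions)

def match(left, pattern):
    """match: the left operand matches the regular expression (RE2 in KUMA;
    Python's re is used here as an approximation)."""
    return re.fullmatch(pattern, left) is not None

def starts_with(left, values):
    """startsWith: the left operand starts with one of the values of the right operand."""
    return any(left.startswith(v) for v in values)

print(in_subnet("10.0.1.5", "10.0.0.0/16"))            # True
print(has_bit("6", [1, 2]))                            # True: 6 = 0b110, bits 1 and 2 set
print(match("evt-42", r"evt-\d+"))                     # True
print(starts_with("DeviceVendor", ["Device", "Src"]))  # True
```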

The available operand kinds depend on whether the operand is on the left (L) or the right (R) side of the condition.

Available operand kinds for left (L) and right (R) operands

Operator                Event field  Active list  Dictionary  Table  TI   Constant  List
=                       L,R          L,R          L,R         L,R    L,R  R         R
>                       L,R          L,R          L,R         L,R    L    R         -
>=                      L,R          L,R          L,R         L,R    L    R         -
<                       L,R          L,R          L,R         L,R    L    R         -
<=                      L,R          L,R          L,R         L,R    L    R         -
inSubnet                L,R          L,R          L,R         L,R    L,R  R         R
contains                L,R          L,R          L,R         L,R    L,R  R         R
startsWith              L,R          L,R          L,R         L,R    L,R  R         R
endsWith                L,R          L,R          L,R         L,R    L,R  R         R
match                   L            L            L           L      L    R         R
hasVulnerability        L            L            L           L      -    -         -
hasBit                  L            L            L           L      -    R         R
inActiveList            -            -            -           -      -    -         -
inDictionary            -            -            -           -      -    -         -
inCategory              L            L            L           L      -    R         R
inActiveDirectoryGroup  L            L            L           L      -    R         R
TIDetect                -            -            -           -      -    -         -

A dash (-) indicates that the operand type is not available for that operator in that position.

Page top

[Topic 217972]

Response rules

You can configure automatic execution of Kaspersky Security Center tasks, Kaspersky Endpoint Detection and Response actions, and startup of a custom script when receiving events for which there are configured response rules.

Automatic execution of Kaspersky Security Center tasks, Kaspersky Endpoint Detection and Response tasks, and KICS for Networks tasks according to response rules is available when KUMA is integrated with the listed applications.

In this section

Response rules for Kaspersky Security Center

Response rules for a custom script

Response rules for KICS for Networks

Response rules for Kaspersky Endpoint Detection and Response

Page top

[Topic 233363]

Response rules for Kaspersky Security Center

You can configure response rules to automatically start tasks of anti-virus scan and updates on Kaspersky Security Center assets.

When creating and editing response rules for Kaspersky Security Center, you need to define values for the following settings:

  • Name (required)—unique name of the resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • Type (required)—ksctasks.

    This is available if KUMA is integrated with Kaspersky Security Center.

  • Kaspersky Security Center task (required)—name of the Kaspersky Security Center task that you need to start. Tasks must be created beforehand, and their names must begin with "KUMA " (the check is not case-sensitive; the quotation marks are not part of the name). For example: KUMA antivirus check.
  • Event field (required)—defines the event field of the asset for which the Kaspersky Security Center task should be started. Possible values:
    • SourceAssetID
    • DestinationAssetID
    • DeviceAssetID
  • Workers—the number of processes that the service can run simultaneously.

    By default, the number of workers is the same as the number of virtual processors on the server where the service is installed.

  • Description—you can add up to 4000 Unicode characters describing the resource.
  • Filter—used to define the conditions for the events to be processed by the response rule resource. You can select an existing filter resource from the drop-down list or create a new filter.

To send requests to Kaspersky Security Center, you must ensure that Kaspersky Security Center is available over the UDP protocol.

If a response rule resource is owned by the shared tenant, the displayed Kaspersky Security Center tasks that are available for selection are from the Kaspersky Security Center server that the main tenant is connected to.

If a response rule resource has a selected task that is absent from the Kaspersky Security Center server that the tenant is connected to, the task will not be performed for assets of this tenant. This situation could arise when two tenants are using a common correlator, for example.
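The "KUMA " naming convention for selectable tasks can be illustrated with a minimal check. This is a hypothetical sketch; the is_kuma_task function is an invented name, not a KUMA API.

```python
def is_kuma_task(task_name):
    """A task is selectable for response rules when its name begins with "KUMA ".
    Per the documentation, the comparison is not case-sensitive."""
    return task_name.lower().startswith("kuma ")

print(is_kuma_task("KUMA antivirus check"))  # True
print(is_kuma_task("kuma update"))           # True
print(is_kuma_task("Antivirus check"))       # False
```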

Page top

[Topic 233366]

Response rules for a custom script

You can create a script containing commands to be executed on the KUMA server when selected events are detected and configure response rules to automatically run this script. In this case, the program will run the script when it receives events that match the response rules.

The script file is stored on the server hosting the correlator service that uses the response resource: /opt/kaspersky/kuma/correlator/<Correlator ID>/scripts. The kuma user on this server must have permission to execute the script.

When creating and editing response rules for a custom script, you need to define values for the following parameters:

  • Name (required)—unique name of the resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • Type (required)—script.
  • Timeout—the number of seconds allotted for the script to finish. If this amount of time is exceeded, the script is terminated.
  • Script name (required)—the name of the script file.

    If the response resource is attached to the correlator service but there is no script file in the /opt/kaspersky/kuma/correlator/<Correlator ID>/scripts folder, the correlator will not work.

  • Script arguments—parameters or event field values that must be passed to the script.

    If the script includes actions taken on files, you should specify the absolute path to these files.

    Parameters can be written with quotation marks (").

    Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field whose value must be passed to the script.

    Example: -n "\"usr\": {{.SourceUserName}}"

  • Workers—the number of processes that the service can run simultaneously.

    By default, the number of workers is the same as the number of virtual processors on the server where the service is installed.

  • Description—you can add up to 4000 Unicode characters describing the resource.
  • Filter—used to define the conditions for the events to be processed by the response rule resource. You can select an existing filter resource from the drop-down list or create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry whose key is composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit button.
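The {{.EventField}} placeholders in script arguments behave like Go template fields. The substitution can be modeled roughly in Python; this is illustrative only, not KUMA code, and render_args is an invented name.

```python
import re

def render_args(template, event):
    """Replace each {{.Field}} placeholder with the value of that event field,
    mimicking how event field values are passed into response script arguments."""
    return re.sub(
        r"\{\{\.(\w+)\}\}",
        lambda m: str(event.get(m.group(1), "")),
        template,
    )

event = {"SourceUserName": "jdoe", "DeviceAssetID": "srv-01"}
print(render_args('-n "\\"usr\\": {{.SourceUserName}}"', event))
# -n "\"usr\": jdoe"
```

Applied to the documentation's own example, -n "\"usr\": {{.SourceUserName}}" expands so that the script receives the actual user name from the event.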

Page top

[Topic 233722]

Response rules for KICS for Networks

You can configure response rules to automatically trigger response actions on KICS for Networks assets. For example, you can change the asset status in KICS for Networks.

When creating and editing response rules for KICS for Networks, you need to define values for the following settings:

  • Name (required)—unique name of the resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • Type (required)—kics.
  • Event field (required)—event field containing the asset for which the response actions are needed. Possible values:
    • SourceAssetID
    • DestinationAssetID
    • DeviceAssetID
  • KICS for Networks task—response action to be performed when data matching the filter is received. The following types of response actions are available:
    • Change asset status to Authorized.
    • Change asset status to Unauthorized.

    When a response rule is triggered, KUMA will send KICS for Networks an API request to change the status of the specified device to Authorized or Unauthorized.

  • Workers—the number of processes that the service can run simultaneously.

    By default, the number of workers is the same as the number of virtual processors on the server where the service is installed.

  • Description—you can add up to 4000 Unicode characters describing the resource.
  • Filter—used to define the conditions for the events to be processed by the response rule resource. You can select an existing filter resource from the drop-down list or create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, if you select active list, you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit-grey button.
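A few of the comparison operators listed above can be illustrated with a short Python sketch. This is illustrative only, not KUMA code: the function names are hypothetical, and KUMA uses RE2 regular expressions, while Python's re module is used here purely for demonstration.

```python
import ipaddress
import re

def in_subnet(left, right):
    # inSubnet: the left operand (IP address) is in the subnet of the
    # right operand (subnet).
    return ipaddress.ip_address(left) in ipaddress.ip_network(right)

def starts_with(left, values):
    # startsWith: the left operand starts with one of the values of the
    # right operand.
    return any(left.startswith(v) for v in values)

def match(left, pattern):
    # match: the left operand matches the regular expression of the right
    # operand (KUMA uses RE2; Python's re is used here for illustration).
    return re.fullmatch(pattern, left) is not None

print(in_subnet("10.0.0.5", "10.0.0.0/24"))        # True
print(starts_with("kuma-collector", ["kuma-"]))    # True
print(match("KUMA", "KU.A"))                       # True
```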

Page top

[Topic 237454]

Response rules for Kaspersky Endpoint Detection and Response

You can configure response rules to automatically trigger response actions on Kaspersky Endpoint Detection and Response assets. For example, you can configure automatic asset network isolation.

When creating and editing response rules for Kaspersky Endpoint Detection and Response, you need to define values for the following settings:

  • Event field (required)—event field containing the asset for which the response actions are needed. Possible values:
    • SourceAssetID
    • DestinationAssetID
    • DeviceAssetID
  • Task type—response action to be performed when data matching the filter is received. The following types of response actions are available:
    • Enable network isolation.

      When selecting this type of response, you need to define values for the following settings:

      • Isolation timeout—the number of hours during which the network isolation of an asset will be active. You can indicate from 1 to 9999 hours.

        If necessary, you can add an exclusion for network isolation.

        To add an exclusion for network isolation:

        1. Click the Add exclusion button.
        2. Select the direction of network traffic that must not be blocked:
          • Inbound.
          • Outbound.
          • Inbound/Outbound.
        3. In the Asset IP field, enter the IP address of the asset whose network traffic must not be blocked.
        4. If you selected Inbound or Outbound, specify the connection ports in the Remote ports and Local ports fields.
        5. If you want to add more than one exclusion, click Add exclusion and repeat the steps to fill in the Traffic direction, Asset IP, Remote ports and Local ports fields.
        6. If you want to delete an exclusion, click the Delete button under the relevant exclusion.

        When adding exclusions to a network isolation rule, Kaspersky Endpoint Detection and Response may incorrectly display the port values in the rule details. This does not affect application performance. For more details on viewing a network isolation rule, please refer to the Kaspersky Anti Targeted Attack Platform Help Guide.

    • Disable network isolation.
    • Add prevention rule.

      When selecting this type of response, you need to define values for the following settings:

      • Event fields to extract hash from—event fields from which KUMA extracts SHA256 or MD5 hashes of the files that must be prevented from starting.

        The selected event fields and the values selected in the Event field must be added to the inherited fields of the correlation rule.

      • File hash #1—SHA256 or MD5 hash of the file to be blocked.

      At least one of the above fields must be completed.

    • Delete prevention rule.
    • Run program.

      When selecting this type of response, you need to define values for the following settings:

      • File path—path to the file of the process that you want to start.
      • Command line parameters—parameters with which you want to start the file.
      • Working directory—directory in which the file is located at the time of startup.

      When a response rule is triggered, the Run program task is displayed in the Task manager section of the program web interface for users with the General Administrator role. In the Created column of the task table, this task is labeled Scheduled task. You can view task completion results.

      All of the listed operations can be performed on assets that have Kaspersky Endpoint Agent for Windows. On assets that have Kaspersky Endpoint Agent for Linux, only the Run program action is available.

      At the software level, nothing prevents you from creating prevention rules and network isolation rules for assets with Kaspersky Endpoint Agent for Linux. However, KUMA and Kaspersky Endpoint Detection and Response do not provide any notification if these rules fail to be applied.

  • Workers—the number of processes that the service can run simultaneously.

    By default, the number of workers is the same as the number of virtual processors on the server where the service is installed.

  • Description—you can add up to 4000 Unicode characters describing the resource.
  • Filter—used to define the conditions for the events to be processed by the response rule resource. You can select an existing filter resource from the drop-down list or create a new filter.
    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, if you select active list, you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit-grey button.
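Earlier in this topic, the Add prevention rule response action extracts SHA256 or MD5 hashes of files from event fields. The sketch below shows one way to recognize such hash values in Python. It is illustrative only; the function and field names are hypothetical and this is not KUMA code.

```python
import re

# SHA256 is 64 hexadecimal characters, MD5 is 32 hexadecimal characters.
# The 64-character alternative comes first so a SHA256 value is not
# mistaken for two adjacent MD5 values.
HASH_RE = re.compile(r"\b(?:[0-9a-fA-F]{64}|[0-9a-fA-F]{32})\b")

def extract_hashes(event, field_names):
    """Collect hash-like values from the listed event fields."""
    found = []
    for name in field_names:
        value = str(event.get(name, ""))
        found.extend(HASH_RE.findall(value))
    return found
```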

Page top

[Topic 233508]

Notification templates

Resource settings

Notification template resources are used in alert generation notifications.

Notification template resource settings:

  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • Subject (required)—subject of the email containing the notification about the alert generation. In the email subject, you can refer to the alert fields.

    Example: New alert in KUMA: {{.CorrelationRuleName}}. In place of {{.CorrelationRuleName}}, the subject of the notification message will include the name of the correlation rule contained in the CorrelationRuleName alert field.

  • Template (required)—the body of the email containing the notification about the alert generation. The template supports a syntax that can be used to populate the notification with data from the alert.

    For convenience, you can open the email in a separate window by clicking the full-screen icon. This opens the Template window in which you can edit the text of the notification message. Click Save to save the changes and close the window.

Notification template syntax

In a template, you can query the alert fields containing a string or number:

{{ .CorrelationRuleName }}

The message will display the alert name, which is the contents of the CorrelationRuleName field.

Some alert fields contain data arrays. For instance, these include alert fields containing related events, assets, and user accounts. Such nested objects can be queried by using the range function, which sequentially queries the fields of the first 50 nested objects. When using the range function to query a field that does not contain a data array, an error is returned. Example:

{{ range .Assets }}

Device: {{ .DisplayName }}, creation date: {{ .CreatedAt }}

{{ end }}

The message will display the values of the DisplayName and CreatedAt fields from 50 assets related to the alert:

Device: <DisplayName field value from asset 1>, creation date: <CreatedAt field value from asset 1>

Device: <DisplayName field value from asset 2>, creation date: <CreatedAt field value from asset 2>

...

// 50 lines total

You can use the limit parameter to limit the number of objects returned by the range function:

{{ range (limit .Assets 5) }}

<strong>Device</strong>: {{ .DisplayName }},

<strong>Creation date</strong>: {{ .CreatedAt }}

{{ end }}

The message will display the values of the DisplayName and CreatedAt fields from 5 assets related to the alert, with the words "Device" and "Creation date" marked with the HTML tag <strong>:

<strong>Device</strong>: <DisplayName field value from asset 1>,

<strong>Creation date</strong>: <CreatedAt field value from asset 1>

<strong>Device</strong>: <DisplayName field value from asset N>,

<strong>Creation date</strong>: <CreatedAt field value from asset N>

...

// 10 lines total

Nested objects can have their own nested objects. They can be queried by using nested range functions:

{{ range (limit .Events 5) }}

    {{ range (limit .Event.BaseEvents 10) }}

    Service ID: {{ .ServiceID }}

    {{ end }}

{{ end }}

The message will show ten service IDs (ServiceID field) from the base events related to each of the five correlation events of the alert, 50 lines in total. Please note that events are queried through the nested EventWrapper structure, which is located in the Events field of the alert. Events are available in the Event field of this structure, which is reflected in the example above. Therefore, if field A contains nested structure B, and structure B contains field C, which is a string or a number, you must specify the path {{ A.C }} to query field C.

Some object fields contain nested dictionaries in key-value format (for example, the Extra event field). They can be queried by using the range function with the variables passed to it: range $placeholder1, $placeholder2 := .FieldName. The values of variables can then be called by specifying their names. Example:

{{ range (limit .Events 3) }}

    {{ range (limit .Event.BaseEvents 5) }}

    List of fields in the Extra event field: {{ range $name, $value := .Extra }} {{ $name }} - {{ $value }}<br> {{ end }}

    {{ end }}

{{ end }}

The message will use the HTML tag <br> to show key-value pairs from the Extra fields of the base events belonging to the correlation events. Data is called from five base events of each of the three correlation events.

You can use HTML tags in notification templates to create more complex structures. Below is an example table for correlation event fields:

<style type="text/css">

  TD, TH {

    padding: 3px;

    border: 1px solid black;

  }

</style>

<table>

  <thead>

    <tr>

        <th>Service name</th>

        <th>Name of the correlation rule</th>

        <th>Device version</th>

    </tr>

  </thead>

  <tbody>

    {{ range .Events }}

    <tr>

        <td>{{ .Event.ServiceName }}</td>

        <td>{{ .Event.CorrelationRuleName }}</td>

        <td>{{ .Event.DeviceVersion }}</td>

    </tr>

    {{ end }}

  </tbody>

</table>

Use the link_alert function to insert an HTML alert link into the notification email:

{{link_alert}}

A link to the alert window will be displayed in the message.

Page top

[Topic 217707]

Active lists

Active list resources are dynamically updated data containers used by the KUMA correlators to read and write information when analyzing events according to correlation rules.

The same active list resource can be used by different correlator services. However, a separate instance of the active list is created for each correlator. Therefore, the contents of the active lists used by different correlators differ even if the active lists have the same names and IDs. The contents of an active list can be opened from the active services window.

Available active list resource settings:

  • ID—identifier of the selected active list. This setting is displayed for active lists that have already been created. You can copy this value by using the Copy button.
  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • TTL—time to live parameter of entries stored in the Active list, in seconds. The default value is 0. The maximum time to live is 31536000 (one year). When the time to live expires, the entry is deleted, and an event is generated for deleting the entry from the active list (see below).
  • Description—you can add up to 256 Unicode characters describing the resource.

During the correlation process, when entries are deleted from active lists, service events are generated in the correlators. These events only exist in the correlators, and they are not redirected to other destinations. Correlation rules can be configured to track these events so that they can be used to identify threats. Service event fields for deleting an entry from the active list are described below.

  • ID—event identifier.
  • Timestamp—time when the expired entry was deleted.
  • Name—"active list record expired".
  • DeviceVendor—"Kaspersky".
  • DeviceProduct—"KUMA".
  • ServiceID—correlator ID.
  • ServiceName—correlator name.
  • DeviceExternalID—active list ID.
  • DevicePayloadID—key of the expired entry.
  • BaseEventCount—number of updates of the deleted entry, increased by one.
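The TTL-based expiry and the resulting service event described above can be modeled with a minimal Python sketch. This is a toy model with hypothetical names, not KUMA internals.

```python
class ActiveList:
    """Toy model of TTL-based entry expiry in an active list."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds  # 0 means entries never expire
        self.entries = {}       # key -> (value, update_count, last_update_time)

    def set(self, key, value, now):
        # Writing an existing key counts as an update and refreshes its timer.
        _, updates, _ = self.entries.get(key, (None, 0, None))
        self.entries[key] = (value, updates + 1, now)

    def expire(self, now):
        """Delete expired entries; return one service event per deletion."""
        events = []
        for key, (value, updates, ts) in list(self.entries.items()):
            if self.ttl and now - ts >= self.ttl:
                del self.entries[key]
                events.append({
                    "Name": "active list record expired",
                    "DevicePayloadID": key,     # key of the expired entry
                    "BaseEventCount": updates,  # update count, incl. creation
                })
        return events
```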

Page top

[Topic 217843]

Dictionaries

Description of parameters

Dictionaries are resources storing data that can be used by other KUMA resources and services.

Available settings:

  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • Description—you can add up to 256 Unicode characters describing the resource.
  • Type (required)—type of dictionary. The selected type determines the format of the data that the dictionary can contain:
    • You can add key-value pairs to the Dictionary type.

      It is not recommended to add more than 50,000 entries to dictionaries of this type.

      If you add multiple lines with the same key to the dictionary, each new line overwrites the existing line with that key, so the dictionary keeps only one line per key.

    • Data in the form of complex tables can be added to the Table type. You can interact with this type of dictionary by using the REST API.
  • Values settings block—contains a table of dictionary data:
    • For the Dictionary type, this block displays a list of KeyValue pairs. You can use the add-button button to add rows to the table. You can delete rows by using the delete-button button that appears when you hover your mouse cursor over a row.
    • For the Table type, this block displays a table containing data. You can use the add-button button to add rows and columns to the table. You can delete rows and columns by using the delete-button buttons that are displayed when you hover your mouse cursor over a row or a column header. Column headers can be edited.

    If the dictionary contains more than 5,000 entries, they are not displayed in the KUMA web interface. To view the contents of such dictionaries, the contents must be exported in CSV format. If you edit the CSV file and import it back into KUMA, the dictionary resource will be updated.

Importing and exporting dictionaries

You can import or export dictionary data in CSV format (in UTF-8 encoding) by using the Import CSV or Export CSV buttons.

The format of the CSV file depends on the dictionary type:

  • Dictionary type:

    {KEY},{VALUE}\n

  • Table type:

    {Column header 1}, {Column header N}, {Column header N+1}\n

    {Key1}, {ValueN}, {ValueN+1}\n

    {Key2}, {ValueN}, {ValueN+1}\n

    The keys must be unique for both the CSV file and the dictionary. In tables, the keys are specified in the first column. A key must contain from 1 to 128 Unicode characters.

    Values must contain from zero to 256 Unicode characters.

During an import, the contents of the dictionary are overwritten by the imported file. The name of the dictionary resource is also changed to the name of the imported file.

If a key or value contains a comma or quotation mark character (, or "), the key or value is enclosed in quotation marks (") when exported, and each quotation mark character (") inside it is escaped with an additional quotation mark (").
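This quoting behavior matches standard CSV escaping. As an illustration (not KUMA code), Python's csv module applies the same rules:

```python
import csv
import io

# Export a Dictionary-type entry whose value contains a comma and quotes.
buf = io.StringIO()
csv.writer(buf, lineterminator="\n").writerow(["key1", 'say "hi", twice'])
print(buf.getvalue())  # key1,"say ""hi"", twice"

# Importing parses the line back into the original key-value pair.
row = next(csv.reader(io.StringIO(buf.getvalue())))
print(row)  # ['key1', 'say "hi", twice']
```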

If incorrect lines (for example, lines with invalid separators) are detected in the imported file, they are ignored when importing into a Dictionary-type resource, and the import process is interrupted when importing into a Table-type resource.

Interacting with dictionaries via API

You can use the REST API to read the contents of Table-type dictionaries. You can also modify them even if these resources are being used by active services. This lets you, for instance, configure enrichment of events with data from dynamically changing tables exported from third-party applications.

Page top

[Topic 217960]

Proxies

Proxy server resources are used to store configuration settings for proxy servers.

Available settings:

  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • Use URL from the secret (required)—drop-down list to select a secret resource that stores URLs of proxy servers. If required, a secret can be created in the proxy server creation window by using the AddResource button. The selected secret can be changed by clicking on the EditResource button.
  • Do not use for domains—one or more domains that require direct access.
  • Description—you can add up to 256 Unicode characters describing the resource.
Page top

[Topic 217990]

Secrets

Secret resources are used to securely store sensitive information such as user names and passwords that must be used by KUMA to interact with external services.

Available settings:

  • Name (required)—a unique name for this type of resource. Must contain from 1 to 128 Unicode characters.
  • Tenant (required)—name of the tenant that owns the resource.
  • Type (required)—the type of secret.

    When you select the type in the drop-down list, the parameters for configuring this secret type also appear. These parameters are described below.

  • Description—you can add up to 256 Unicode characters describing the resource.

Depending on the secret type, different fields are available. You can select one of the following secret types:

  • credentials—this type of secret is used to store account credentials required to connect to external services, such as SMTP servers. If you select this type of secret, you must fill in the User and Password fields.
  • token—this secret type is used to store tokens for API requests. Tokens are used when connecting to IRP systems, for example. If you select this type of secret, you must fill in the Token field.
  • ktl—this secret type is used to store Kaspersky Threat Intelligence Portal account credentials. If you select this type of secret, you must fill in the following fields:
    • User and Password (required fields)—user name and password of your Kaspersky Threat Intelligence Portal account.
    • PFX file (required)—lets you upload a Kaspersky Threat Intelligence Portal certificate key.
    • PFX password (required)—the password for accessing the Kaspersky Threat Intelligence Portal certificate key.
  • urls—this secret type is used to store URLs for connecting to SQL databases and proxy servers. In the Description field, you must provide a description of the connection for which you are using the secret of urls type.

    You can specify URLs in the following formats: hostname:port, IPv4:port, IPv6:port, :port.

  • pfx—this type of secret is used for importing a PFX file containing certificates. If you select this type of secret, you must fill in the following fields:
    • PFX file (required)—this is used to upload a PFX file. The file must contain a certificate and key. PFX files may include CA-signed certificates for server certificate verification.
    • PFX password (required)—this is used to enter the password for accessing the certificate key.
  • kata/edr—this type of secret is used to store the certificate file and private key required when connecting to the Kaspersky Endpoint Detection and Response server. If you select this type of secret, you must upload the following files:
    • Certificate file—KUMA server certificate.

      The file must be in PEM format. You can upload only one certificate file.

    • Private key for encrypting the connection—KUMA server RSA key.

      The key must be without a password and with the PRIVATE KEY header. You can upload only one key file.

      You can generate certificate and key files by clicking the download button.

  • snmpV1—this type of secret is used to store the community string (for example, public or private) that is required for interaction over the Simple Network Management Protocol.
  • snmpV3—this type of secret is used for storing data required for interaction over the Simple Network Management Protocol. If you select this type of secret, you must fill in the following fields:
    • User—user name indicated without a domain.
    • Security Level—security level of the user.
      • NoAuthNoPriv—messages are forwarded without authentication and without ensuring confidentiality.
      • AuthNoPriv—messages are forwarded with authentication but without ensuring confidentiality.
      • AuthPriv—messages are forwarded with authentication and ensured confidentiality.

      You may see additional settings depending on the selected level.

    • Password—SNMP user authentication password. This field becomes available when the AuthNoPriv or AuthPriv security level is selected.
    • Authentication Protocol—the following protocols are available: MD5, SHA, SHA224, SHA256, SHA384, SHA512. This field becomes available when the AuthNoPriv or AuthPriv security level is selected.
    • Privacy Protocol—protocol used for encrypting messages. Available protocols: DES, AES. This field becomes available when the AuthPriv security level is selected.
    • Privacy password—encryption password that was set when the SNMP user was created. This field becomes available when the AuthPriv security level is selected.
  • certificate—this secret type is used for storing certificate files. Files are uploaded to a resource by clicking the Upload certificate file button. X.509 certificate public keys in Base64 are supported.
Page top

[Topic 217688]

KUMA services

Services are the main components of KUMA that work with events: receiving, processing, analyzing, and storing them. Each service consists of two parts that work together:

  • One part of the service is created inside the KUMA web interface based on a set of resources for services.
  • The second part of the service is installed in the network infrastructure where the KUMA system is deployed as one of its components. The server part of a service can consist of several instances: for example, services of the same agent or storage can be installed on several computers at once.

    On the server side, KUMA services are located in the /opt/kaspersky/kuma directory.

Parts of services are connected to each other by using the IDs of services.

Service types:

  • Collectors are used to receive events and convert them to KUMA format.
  • Correlators are used to analyze events and search for defined patterns.
  • Storages are used to save events.
  • Agents are used to receive events on remote devices and forward them to KUMA collectors.

In the KUMA web interface, services are displayed in the Resources → Active services section in table format. The table of services can be updated using the Refresh button and sorted by columns by clicking the active headers.

Table columns:

  • Type—type of service: agent, collector, correlator, or storage.
  • Name—name of the service. Clicking on the name of the service opens its settings.
  • Version—service version.
  • Tenant—the name of the tenant that owns the service.
  • FQDN—fully qualified domain name of the service server.
  • IP address—IP address of the server where the service is installed.
  • API Port—Remote Procedure Call port number.
  • Status—service status:
    • Green means that the service is running.
    • Red means that the service is not running.
    • Yellow means that there is no connection with ClickHouse nodes (this status is applied only to storage services). The reason for this is indicated in the service log if logging was enabled.
  • Uptime—how long the service has been running.

Using the Add service button, you can create new services based on existing resource sets for services. In this section, you can also restart a service, delete its certificate, copy the service identifier, or delete the service, as well as view storage partitions and active correlator lists.

Services can be edited by clicking them in the Resources → Active services section. This opens a window containing the set of resources that was used to create the service. A service is edited by changing the settings of the resource set. Changes are saved by clicking the Save button and take effect after the service is restarted.

If, when changing the settings of a collector resource set, you change or delete conversions in a normalizer connected to it, the edits will not be saved, and the normalizer resource itself may be corrupted. If you need to modify conversions in a normalizer that is already part of a service, make the changes directly to the resource in the Resources → Normalizers section of the web interface.

In this Help topic

Services tools

Service resource sets

Creating a collector

Creating a correlator

Creating an agent

Creating a storage

Page top

[Topic 217948]

Services tools

This section describes the tools for working with services available in the ResourcesActive services section of the KUMA web interface.

In this section

Getting service identifier

Restarting the service

Deleting the service

Partitions window

Correlator active list window

Searching for related events

Page top

[Topic 217885]

Getting service identifier

The service identifier is used to bind parts of the service residing within KUMA and installed in the network infrastructure into a single complex. An identifier is assigned to a service when it is created in KUMA, and is then used when installing the service to the server.

To get the identifier of a service:

  1. Log in to the KUMA web interface and open ResourcesActive services.
  2. Select the check box next to the service whose ID you want to obtain, and click Copy ID.

The identifier of the service will be copied to the clipboard. It can be used, for example, for installing the service on a server.

Page top

[Topic 217977]

Restarting the service

To restart the service:

  1. Log in to the KUMA web interface and open ResourcesActive services.
  2. Select the check box next to the service and select the necessary option:
    • Reload—perform a hot update of a running service configuration. For example, you can change the field mapping settings or the destination point settings this way.
    • Restart—stop a service and start it again. This option is used to modify the port number or connector type.

      Restarting KUMA agents:

      • KUMA Windows Agent can be restarted as described above only if it is running on a remote computer. If the service on the remote computer is inactive, you will receive an error when trying to restart from KUMA. In that case you must restart KUMA Windows Agent service on the remote Windows machine. For information on restarting Windows services, refer to the documentation specific to the operating system version of your remote Windows computer.
      • KUMA Agent for Linux stops when this option is used. To start the agent again, you must execute the command that was used to start it.
    • Reset certificate—remove certificates that the service uses for internal communication. For example, this option can be used to renew the Core certificate.

      Special considerations for deleting Windows agent certificates:

      • If the agent has the green status and you select Reset certificate, KUMA deletes the current certificate and creates a new one, the agent continues working with the new certificate.
      • If the agent has the red status and you select Reset certificate, KUMA generates an error that the agent is not running. In the agent installation folder %APPDATA%\kaspersky\kuma\<Agent ID>\certificates, manually delete the internal.cert and internal.key files and start the agent manually. When the agent starts, a new certificate is created automatically.

      Special considerations for deleting Linux agent certificates:

      1. Regardless of the agent status, apply the Reset certificate option in the web interface to delete the certificate in the databases.
      2. In the agent installation folder /opt/kaspersky/agent/<Agent ID>/certificates, manually delete the internal.cert and internal.key files.
      3. Since the Reset certificate option stops the agent, to continue its operation, start the agent manually. When the agent starts, a new certificate is created automatically.
Page top

[Topic 217840]

Deleting the service

Before deleting the service get its ID. It will be required to remove the service for the server.

To delete the service:

  1. Log in to the KUMA web interface and open ResourcesActive services.
  2. Select the check box next to the service you want to delete, and click Delete.

    A confirmation window opens.

  3. Click OK.

The service has been deleted from KUMA.

To remove the service from the server:

Delete the file /usr/lib/systemd/system/kuma-<Service type: collector, correlator, or storage >-<ID of the service>.service from the server where the service was installed.

Page top

[Topic 217949]

Partitions window

If the Storage service was created and installed, you can view its partitions in the Partitions table.

To open Partitions table:

  1. Log in to the KUMA web interface and open ResourcesActive services.
  2. Select the check box next to the relevant storage and click Go to partitions.

The Partitions table opens.

The table has the following columns:

  • Tenant—the name of the tenant that owns the stored data.
  • Date—the date when the space was created.
  • Space—the name of the space.
  • Size—the size of the space.
  • Events—the number of stored events.
  • Expires—the date when this partition expires.

You can delete partitions.

To delete a partition:

  1. Open the Partitions table (see above).
  2. Open the More-DropDown drop-down list to the left from the required partition.
  3. Select Delete.

    A confirmation window opens.

  4. Click OK.

The partition has been deleted.

Page top

[Topic 217785]

Correlator active list window

The Correlator active list table displays the active lists that are used by a specific correlator.

To open the Correlator active list table:

  1. Log in to the KUMA web interface and open ResourcesActive services.
  2. Select the check box next to the relevant correlator and click Go to active lists.

The Correlator active list table opens.

The table has the following columns:

  • Name—the name of the correlator list.
  • Records—the number of record the active list contains.
  • Size on disk—the size of the active list.
  • Directory—the path to the active list on the KUMA Core server.

You can view, import, export, or clear active lists.

To view active list,

open the Correlator active list table (see above) and click the name of the relevant active list.

The table with active list records opens. If you want to view the contents of a record, click on the value of its key (the Key column). If you want to delete the entry, click on the delete-icon icon. You can also search records using the Search field.

To export active list:

  1. Open the Correlator active list table (see above).
  2. Open the More-DropDown drop-down list to the left from the required active list.
  3. Click Export.

Active list is downloaded in JSON format using your browsers settings. The name of the downloaded file reflects the name of active list.

To import active list:

  1. Open the Correlator active list table (see above).
  2. Open the More-DropDown drop-down list to the left from the required active list.
  3. Select Import.

    The active list import window opens.

  4. In the File field select the file you wan to import.
  5. In the Format drop-down list select the format of the file:
    • csv
    • tsv
    • internal
  6. Under Key field, enter the name of the column containing the active list record keys.
  7. Select Import.

The data from the file is imported into the active list.

Page top

[Topic 217989]

Searching for related events

You can search for events processed by the Correlator or the Collector services.

To search for events related to the Correlator or the Collector service:

  1. Log in to the KUMA web interface and open ResourcesActive services.
  2. Select the check box next to the required correlator or collector and click Go to Events.

    A new browser tab opens with the KUMA Events section open.

  3. To find events, click the magn-glass icon.

A table with events selected by the search expression ServiceID = <ID of the selected service> will be displayed.

Page top

[Topic 220557]

Service resource sets

Service resource sets are a resource type, a KUMA component, a set of settings based on which the KUMA services are created and operate. Resource sets for services are collections of resources.

Any resources added to a set of resources must be owned by the same tenant that owns the created set of resources. An exception is the shared tenant, whose owned resources can be used in the sets of resources of other tenants.

Resource sets for services are displayed in the Resources<Resource set type for the service> section of the KUMA web interface. Available types:

  • Collectors
  • Correlators
  • Storages
  • Agents

When you select the required type, a table opens with the available sets of resources for services of this type. The resource table contains the following columns:

  • Name—the name of a resource set. Can be used for searching and sorting.
  • Updated—the date and time of the last update of the resource set. Can be used for sorting.
  • Created by—the name of the user who created the resource set.
  • Description—the description of the resource set.

Page top

[Topic 217765]

Creating a collector

A collector consists of two parts: one part is created inside the KUMA web interface, and the other part is installed on a server in the network infrastructure intended for receiving events.

Actions in the KUMA web interface

The creation of a collector in the KUMA web interface is carried out by using the Installation Wizard. This Wizard combines the required resources into a set of resources for a collector. Upon completion of the Wizard, the service itself is automatically created based on this set of resources.

To create a collector in the KUMA web interface,

Start the Collector Installation Wizard:

  • In the KUMA web interface, in the Resources section, click Add event source button.
  • In the KUMA web interface in the ResourcesCollectors section click Add collector button.

As a result of completing the steps of the Wizard, a collector service is created in the KUMA web interface.

A resource set for a collector includes the following resources:

These resources can be prepared in advance, or you can create them while the Installation Wizard is running.

Actions on the KUMA Collector Server

For installing the collector on the server that you intend to use to receive events, you must on this server run the command displayed at the last step of the Installation Wizard. When installing, you must specify the identifier automatically assigned to the service in the KUMA web interface, as well as the port used for communication.

Testing the installation

After creating a collector, you are advised to make sure that it is working correctly.

In this section

Starting the Collector Installation Wizard

Installing a collector in a KUMA network infrastructure

Validating collector installation

Ensuring uninterrupted collector operation

Page top

[Topic 220707]

Starting the Collector Installation Wizard

A collector consists of two parts: one part is created inside the KUMA web interface, and the other part is installed on the network infrastructure server intended for receiving events. The Installation Wizard creates the first part of the collector.

To start the Collector Installation Wizard:

  • In the KUMA web interface, in the Resources section, click Add event source.
  • In the KUMA web interface in the ResourcesCollectors section click Add collector.

Follow the instructions of the Wizard.

Aside from the first and last steps of the Wizard, the steps of the Wizard can be performed in any order. You can switch between steps by using the Next and Previous buttons, as well as by clicking the names of the steps in the left side of the window.

After the Wizard completes, a resource set for a collector is created in the KUMA web interface under ResourcesCollectors, and a collector service is added under ResourcesActive services.

In this section

Step 1. Connect event sources

Step 2. Transportation

Step 3. Event parsing

Step 4. Filtering events

Step 5. Event aggregation

Step 6. Event enrichment

Step 7. Routing

Step 8. Setup validation

Page top

[Topic 220710]

Step 1. Connect event sources

This is a required step of the Installation Wizard. At this step, you specify the main settings of the collector: its name and the tenant that will own it.

To specify the basic settings of the collector:

  1. In the Collector name field, enter a unique name for the service you are creating. The name must contain from 1 to 128 Unicode characters.

    When certain types of collectors are created, agents named "agent: <Collector name>, auto created" are also automatically created together with the collectors. If this type of agent was previously created and has not been deleted, it will be impossible to create a collector named <Collector name>. If this is the case, you will have to either specify a different name for the collector or delete the previously created agent.

  2. In the Tenant drop-down list, select the tenant that will own the collector. The tenant selection determines what resources will be available when the collector is created.

    If you return to this window from another subsequent step of the Installation Wizard and select another tenant, you will have to manually edit all the resources that you have added to the service. Only resources from the selected tenant and shared tenant can be added to the service.

  3. If required, specify the number of processes that the service can run concurrently in the Workers field. By default, the number of worker processes is the same as the number of vCPUs on the server where the service is installed.
  4. If necessary, use the Debug drop-down list to enable logging of service operations.
  5. You can optionally add up to 256 Unicode characters describing the service in the Description field.

The main settings of the collector are specified. Proceed to the next step of the Installation Wizard.

Page top

[Topic 220711]

Step 2. Transportation

This is a required step of the Installation Wizard. On the Transport tab of the Installation Wizard, select or create a connector resource with the settings indicating from where the collector service should receive events.

To add an existing connector to a resource set,

select the name of the required connector from the Connector drop-down list.

The Transport tab of the Installation Wizard will display the settings of the selected connector. You can open the selected resource for editing in a new browser tab using the edit-grey button.

To create a new connector:

  1. Select Create new from the Connector drop-down list.
  2. In the Type drop-down list, select the connector type and define its settings on the Basic settings and Advanced settings tabs. The available settings depend on the selected type of connector:

    When using the tcp or upd connector type at the normalization stage, IP addresses of the assets from which the events were received will be written in the DeviceAddress event field if it is empty.

    When using a wmi or wec connector, agents will be automatically created for receiving Windows events.

    It is recommended to use the default encoding (UTF-8), and to apply other settings only if bit characters are received in the fields of events.

    Making KUMA collectors to listen on ports up to 1,000 requires running the service of the relevant collector with root privileges. To do this, after installing the collector, add the line AmbientCapabilities = CAP_NET_BIND_SERVICE to its systemd configuration file in the [Service] section.
    The systemd file is located in the /lib/systemd/system/kuma-collector-<collector ID>.service directory.

The connector resource has been added to the resource set of the collector. The created resource is only available in this resource set and is not displayed in the web interface ResourcesConnectors section.

Proceed to the next step of the Installation Wizard.

Page top

[Topic 220712]

Step 3. Event parsing

This is a required step of the Installation Wizard. On the Event parsing tab of the Installation Wizard, select or create a normalizer resource whose settings will define the rules for converting raw events into normalized events. You can add more than one normalizer to implement complex processing logic.

When creating a new normalizer in the Installation Wizard, by default it is saved in the set of resources for the collector and cannot be used in other collectors. You can use the Save normalizer check box to create a separate resource.

If, when changing the settings of a collector resource set, you change or delete conversions in a normalizer connected to it, the edits will not be saved, and the normalizer resource itself may be corrupted. If you need to modify conversions in a normalizer that is already part of a service, the changes must be made directly to the resource under ResourcesNormalizers in the web interface.

Adding a normalizer

To add an existing normalizer to a resource set:

  1. Click the Add event parsing button.

    The Event parsing window will open with the normalizer settings and an active Normalization scheme tab.

  2. In the Normalizer drop-down list, select the required normalizer.

    The Event parsing window will display the parameters of the selected normalizer. You can open the selected resource for editing in a new browser tab using the edit-grey button.

  3. Click OK.

The normalizer is displayed as a dark circle on the Event parsing tab of the Installation Wizard. Clicking on the circle will open the normalizer options for editing. When you hover over the circle, a plus sign is displayed: click on it to add more normalizers (see below).

To create a new normalizer:

  1. Select Create new from the Normalizer drop-down list.

    The Event parsing window will open with the normalizer settings and an active Normalization scheme tab.

  2. If you want to keep the normalizer as a separate resource, select the Save normalizer check box. This check box is cleared by default.
  3. In the Name field, enter a unique name for the normalizer. The name must contain from 1 to 128 Unicode characters.
  4. In the Parsing method drop-down list, select the type of events to receive. Depending on your choice, you can use the preconfigured rules for matching event fields or set your own rules. When you select some parsing methods, additional parameter fields required for filling in may become available.

    Available parsing methods:

    • json

      This parsing method is used to process JSON data.

      When processing files with hierarchically arranged data, you can access the fields of nested objects by specifying the names of the parameters dividing them by a period. For example, the username parameter from the string "user": {"username": "system: node: example-01"} can be accessed by using the user.username query.

    • cef

      This parsing method is used to process CEF data.

      When choosing this method, you can use the preconfigured rules for converting events to the KUMA format by clicking the Apply default mapping button.

    • regexp

      This parsing method is used to create custom rules for processing JSON data.

      In the Normalization parameter block field, add a regular expression (RE2 syntax) with named capture groups. The name of a group and its value will be interpreted as the field and the value of the raw event, which can be converted into an event field in KUMA format.

      To add event handling rules:

      1. Copy an example of the data you want to process to the Event examples field. This is an optional but recommended step.
      2. In the Normalization parameter block field add a regular expression with named capture groups in RE2 syntax, for example "(?P<name>regexp)".

        You can add multiple regular expressions by using the Add regular expression button. If you need to remove the regular expression, use the cross button.

      3. Click the Copy field names to the mapping table button.

        Capture group names are displayed in the KUMA field column of the Mapping table. Now you can select the corresponding KUMA field in the column next to each capture group. Otherwise, if you named the capture groups in accordance with the CEF format, you can use the automatic CEF mapping by selecting the Use CEF syntax for normalization check box.

      Event handling rules were added.

    • syslog

      This parsing method is used to process data in syslog format.

      When choosing this method, you can use the preconfigured rules for converting events to the KUMA format by clicking the Apply default mapping button.

    • csv

      This parsing method is used to create custom rules for processing CSV data.

      When choosing this method, you must specify the separator of values in the string in the Delimiter field. Any single-byte ASCII character can be used as a delimiter.

    • kv

      This parsing method is used to process data in key-value pair format.

      If you select this method, you must provide values in the following required fields:

      • Pair delimiter—specify a character that will serve as a delimiter for key-value pairs. You can specify any one-character (1 byte) value, provided that the character does not match the value delimiter.
      • Value delimiter—specify a character that will serve as a delimiter between the key and the value. You can specify any one-character (1 byte) value, provided that the character does not match the delimiter of key-value pairs.
    • xml

      This parsing method is used to process XML data.

      When this method is selected in the parameter block XML Attributes you can specify the key attributes to be extracted from tags. If an XML structure has several attributes with different values in the same tag, you can indicate the necessary value by specifying its key in the Source column of the Mapping table.

      To add key XML attributes,

      Click the Add field button, and in the window that appears, specify the path to the required attribute.

      You can add more than one attribute. Attributes can be removed one at a time using the cross icon or all at once using the Reset button.

      If XML key attributes are not specified, then in the course of field mapping the unique path to the XML value will be represented by a sequence of tags.

    • netflow5

      This parsing method is used to process data in the NetFlow v5 format.

      When choosing this method, you can use the preconfigured rules for converting events to the KUMA format by clicking the Apply default mapping button.

      In mapping rules, the protocol type for netflow5 is not indicated in the fields of KUMA events by default. When parsing data in NetFlow format on the Enrichment normalizer tab, you should create a constant data enrichment rule that adds the netflow value to the DeviceProduct target field.

    • netflow9

      This parsing method is used to process data in the NetFlow v9 format.

      When choosing this method, you can use the preconfigured rules for converting events to the KUMA format by clicking the Apply default mapping button.

      In mapping rules, the protocol type for netflow9 is not indicated in the fields of KUMA events by default. When parsing data in NetFlow format on the Enrichment normalizer tab, you should create a constant data enrichment rule that adds the netflow value to the DeviceProduct target field.

    • ipfix

      This parsing method is used to process IPFIX data.

      When choosing this method, you can use the preconfigured rules for converting events to the KUMA format by clicking the Apply default mapping button.

      In mapping rules, the protocol type for ipfix is not indicated in the fields of KUMA events by default. When parsing data in NetFlow format on the Enrichment normalizer tab, you should create a constant data enrichment rule that adds the netflow value to the DeviceProduct target field.

    • sql—this method becomes available only when using a sql type connector.

      This parsing method is used to process SQL data.

  5. In the Keep raw log drop-down list, specify whether the original raw event should be stored in the newly created normalized event. Available values:
    • Never—do not save the raw event This is the default setting.
    • Only errors—save the raw event in the Raw field of the normalized event if errors occurred when parsing it. This value is convenient to use when debugging a service. In this case, every time an event has a non-empty Raw field, you know there was a problem.
    • Always—always save the raw event in the Raw field of the normalized event.
  6. In the Keep extra fields drop-down list, choose whether you want to store the raw event fields in the normalized event if no mapping rules have been configured for them (see below). The data is stored in the Extra event field. By default, fields are not saved.
  7. Copy an example of the data you want to process to the Event examples field. This is an optional but recommended step.

    Event examples can also be loaded from a TSV, CSV, or TXT file by using the Load from file button.

  8. Configure the mapping of the raw event fields to event fields in KUMA format In the Mapping table:
    1. In the Source column, provide the name of the raw event field that you want to convert into the KUMA event field.

      Clicking the wrench-new button next to the field names in the Source column opens the Conversion window, in which you can use the Add conversion button to create rules for modifying the original data before they are written to the KUMA event fields.

      Available conversions

      Conversions are changes that can be applied to a value before it gets written to the event field. The conversion type is selected from a drop-down list.

      Available conversions:

      • lower—is used to make all characters of the value lowercase
      • upper—is used to make all characters of the value uppercase
      • regexp – used to convert a value using the regular expression RE2. When this conversion type is selected, the field appears where regular expression should be added.
      • substring—is used to extract characters in the position range specified in the Start and End fields. These fields appear when this conversion type is selected.
      • replace—is used to replace specified character sequence with the other character sequence. When this type of conversion is selected, new fields appear:
        • Replace chars—in this field you can specify the character sequence that should be replaced.
        • With chars—in this field you can specify the characters sequence should be used instead of replaced characters.
      • trim—used to simultaneously remove the characters specified in the Chars field from the leading and end positions of the value. The field appears when this type of conversion is selected. For example, a trim conversion with the Micromon value applied to Microsoft-Windows-Sysmon results in soft-Windows-Sys.
      • append is used to add the characters specified in the Constant field to the end of the event field value. The field appears when this type of conversion is selected.
      • prepend—used to prepend the characters specified in the Constant field to the start of the event field value. The field appears when this type of conversion is selected.
      • replace with regexp—is used to replace RE2 regular expression results with the character sequence.
        • Expression—in this field you can specify the regular expression which results that should be replaced.
        • With chars—in this field you can specify the characters sequence should be used instead of replaced characters.
    2. In the KUMA field column, select the required KUMA event field from the drop-down list. You can search for fields by entering their names in the field.
    3. If the name of the KUMA event field selected at the previous step begins with DeviceCustom*, you can add a unique custom label in the Label field if necessary.

    New table rows can be added by using the Add row button. Rows can be deleted individually using the cross button or all at once using the Clear all button.

    If you have loaded data into the Event examples field, the table will have an Examples column containing examples of values carried over from the raw event field to the KUMA event field.

  9. Click OK.

The normalizer is displayed as a dark circle on the Event parsing tab of the Installation Wizard. Clicking on the circle will open the normalizer options for editing. When you hover over the circle, a plus sign is displayed: click on it to add more normalizers (see below).

Enriching normalized events with additional data

You can add additional data to the newly created normalized events by creating enrichment rules in the normalizer similar to those in enrichment rule resources. These enrichment rules are stored in the normalizer resource where they were created. There can be more than one enrichment rule.

To add enrichment rules to the normalizer:

  1. Select the normalizer and go to the Enrichment tab in the Event parsing window.
  2. Click the Add enrichment button.

    The enrichment rule parameter block appears. Close the parameter block using the cross button.

  3. Select the enrichment type from the Source kind drop-down list. Depending on the selected type, you may see advanced settings that will also need to be completed.

    Available Enrichment rule source types:

    • constant

      This type of enrichment is used when a constant needs to be added to an event field. Settings of this type of enrichment:

      • In the Constant field, specify the value that should be added to the event field. The value should not be longer than 255 Unicode characters. If you leave this field blank, the existing event field value will be cleared.
      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.

    • dictionary

      This type of enrichment is used if you need to add a value from the dictionary to the event field.

      When this type is selected in the Dictionary name drop-down list, you must select the dictionary that will provide the values. In the Key fields settings block, you must use the Add field button to select the event fields whose values will be used for dictionary entry selection.

    • event

      This type of enrichment is used when you need to write a value from another event field to the current event field. Settings of this type of enrichment:

      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
      • In the Source field drop-down list, select the event field whose value will be written to the target field.
      • Clicking the wrench-new button opens the Conversion window in which you can, using the Add conversion button, create rules for modifying the original data before writing them to the KUMA event fields.

        Available conversions

        Conversions are changes that can be applied to a value before it gets written to the event field. The conversion type is selected from a drop-down list.

        Available conversions:

        • lower—is used to make all characters of the value lowercase.
        • upper—is used to make all characters of the value uppercase.
        • regexp—is used to convert a value using an RE2 regular expression. When this conversion type is selected, a field appears in which you must specify the regular expression.
        • substring—is used to extract the characters in the position range specified in the Start and End fields. These fields appear when this conversion type is selected.
        • replace—is used to replace a specified character sequence with another character sequence. When this type of conversion is selected, new fields appear:
          • Replace chars—in this field you can specify the character sequence that should be replaced.
          • With chars—in this field you can specify the character sequence that should be used instead of the replaced characters.
        • trim—is used to remove the characters specified in the Chars field from both the beginning and the end of the value. This field appears when this type of conversion is selected. For example, a trim conversion with the value Micromon applied to Microsoft-Windows-Sysmon results in soft-Windows-Sys.
        • append—is used to add the characters specified in the Constant field to the end of the event field value. This field appears when this type of conversion is selected.
        • prepend—is used to add the characters specified in the Constant field to the beginning of the event field value. This field appears when this type of conversion is selected.
        • replace with regexp—is used to replace the results of an RE2 regular expression with a specified character sequence. When this type of conversion is selected, new fields appear:
          • Expression—in this field you can specify the regular expression whose results should be replaced.
          • With chars—in this field you can specify the character sequence that should be used instead of the replaced characters.
    • template

      This type of enrichment is used when you need to write a value obtained by processing Go templates into the event field. Settings of this type of enrichment:

      • Put the Go template into the Template field.

        Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field whose value must be passed to the template.

        Example: Attack on {{.DestinationAddress}} from {{.SourceAddress}}.

      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
  4. Click OK.

Enrichment rules are added to the normalizer, and the Event parsing window is closed.

Creating a structure of normalizers

You can create several extra normalizers within a normalizer. This allows you to customize complex event handling logic.

The sequence in which normalizers are created matters: events are processed sequentially, and their path is shown using arrows.

To create an extra normalizer:

  • Create the initial normalizer (see above).

    The created normalizer will be displayed in the window as a dark circle.

  • Hover over the initial normalizer and click the plus sign button that appears.
  • In the Add normalizer to normalization scheme window, specify the conditions under which the data will be sent to the extra normalizer:
    • If you want to send only events with specific fields to the extra normalizer, list them in the Fields to pass into normalizer field.
    • If you want to send only events in which certain fields have specific values to the extra normalizer, specify the name of the event field in the Use normalizer for events with specific event field values field, and the required value in the Condition value field.

    The data processed by these conditions can be preconverted by clicking the wrench-new button. This opens the Conversion window, in which you can use the Add conversion button to create rules for modifying the original data before it is written to the KUMA event fields.

    Available conversions

    Conversions are changes that can be applied to a value before it gets written to the event field. The conversion type is selected from a drop-down list.

    Available conversions:

    • lower—is used to make all characters of the value lowercase.
    • upper—is used to make all characters of the value uppercase.
    • regexp—is used to convert a value using an RE2 regular expression. When this conversion type is selected, a field appears in which you must specify the regular expression.
    • substring—is used to extract the characters in the position range specified in the Start and End fields. These fields appear when this conversion type is selected.
    • replace—is used to replace a specified character sequence with another character sequence. When this type of conversion is selected, new fields appear:
      • Replace chars—in this field you can specify the character sequence that should be replaced.
      • With chars—in this field you can specify the character sequence that should be used instead of the replaced characters.
    • trim—is used to remove the characters specified in the Chars field from both the beginning and the end of the value. This field appears when this type of conversion is selected. For example, a trim conversion with the value Micromon applied to Microsoft-Windows-Sysmon results in soft-Windows-Sys.
    • append—is used to add the characters specified in the Constant field to the end of the event field value. This field appears when this type of conversion is selected.
    • prepend—is used to add the characters specified in the Constant field to the beginning of the event field value. This field appears when this type of conversion is selected.
    • replace with regexp—is used to replace the results of an RE2 regular expression with a specified character sequence. When this type of conversion is selected, new fields appear:
      • Expression—in this field you can specify the regular expression whose results should be replaced.
      • With chars—in this field you can specify the character sequence that should be used instead of the replaced characters.
  • Click OK.

    This will open the Event parsing window, in which you can configure the rules for processing events as you did in the initial normalizer (see above). The Keep raw log parameter is not available. The Event examples field displays the values specified when the initial normalizer was created.

  • Specify the extra normalizer settings in the same way as for the initial normalizer (see above).
  • Click OK.

The extra normalizer is displayed as a dark block that indicates the conditions under which this normalizer will be used. The conditions can be changed by moving your mouse cursor over the extra normalizer and clicking the button showing the pencil image. If you hover the mouse pointer over the extra normalizer, a plus button appears, which you can use to create a new extra normalizer. To delete a normalizer, use the button with the trash icon.
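The routing condition described above (pass an event to the extra normalizer only when a given field has a given value) can be sketched in a few lines of Go. This is a conceptual illustration only; the event representation and the field and value names are hypothetical:

```go
package main

import "fmt"

// Event is a simplified map of event fields to values.
type Event map[string]string

// useExtraNormalizer reports whether an event should be passed to an
// extra normalizer configured with a condition field and condition value.
func useExtraNormalizer(e Event, condField, condValue string) bool {
	return e[condField] == condValue
}

func main() {
	e := Event{"DeviceProduct": "Sysmon", "DeviceVendor": "Microsoft"}
	// Condition: use the extra normalizer only for Sysmon events.
	fmt.Println(useExtraNormalizer(e, "DeviceProduct", "Sysmon")) // true
	fmt.Println(useExtraNormalizer(e, "DeviceProduct", "Squid"))  // false
}
```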

Proceed to the next step of the Installation Wizard.

Page top

[Topic 220713]

Step 4. Filtering events

This is an optional step of the Installation Wizard. The Event filtering tab of the Installation Wizard allows you to select or create a filter resource whose settings specify the conditions for filtering out irrelevant events. You can add more than one filter to a collector. You can reorder the filters by dragging them by the DragIcon icon, and you can delete them. Filters are combined by the AND operator.

To add an existing filter to a collector resource set,

Click the Add filter button and select the required filter from the Filter drop-down menu.

To add a new filter to the collector resource set:

  1. Click the Add filter button and select Create new from the Filter drop-down menu.
  2. If you want to keep the filter as a separate resource, select the Save filter check box. This can be useful if you decide to reuse the same filter across different services. This check box is cleared by default.
  3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
  4. In the Conditions section, specify the conditions that must be met by the filtered events:
    • The Add condition button is used to add filtering conditions. You can select two values (two operands, left and right) and assign the operation you want to perform with the selected values. The result of the operation is either True or False.
      • In the operator drop-down list, select the function to be performed by the filter.

        In this drop-down list, you can select the do not match case check box if the operator should ignore the case of values. This check box is ignored if the InSubnet, InActiveList, InCategory, or InActiveDirectoryGroup operator is selected. This check box is cleared by default.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by a key composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      • In the Left operand and Right operand drop-down lists, select where the data to be filtered will come from. As a result of the selection, Advanced settings will appear. Use them to determine the exact value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.
      • You can use the If drop-down list to choose whether you need to create a negative filter condition.

      Conditions can be deleted using the cross button.

    • The Add group button is used to add groups of conditions. The AND operator can be switched between AND, OR, and NOT.

      A condition group can be deleted using the cross button.

    • Using the Add filter button you can add existing filter resources selected in the Select filter drop-down list to the conditions. You can navigate to a nested filter resource using the edit-grey button.

      A nested filter can be deleted using the cross button.

The filter has been added.
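Several of the filter operators listed above have semantics that can be illustrated with standard Go library calls. The sketch below is an independent illustration of the documented behavior, not KUMA code; the sample addresses and values are assumptions:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// inSubnet: the left operand (IP address) is in the subnet of the right operand.
func inSubnet(ip, cidr string) bool {
	_, network, err := net.ParseCIDR(cidr)
	if err != nil {
		return false
	}
	return network.Contains(net.ParseIP(ip))
}

// hasBit: the left operand contains a set bit at the given position.
func hasBit(value uint64, pos uint) bool {
	return value&(1<<pos) != 0
}

func main() {
	fmt.Println(inSubnet("192.168.1.42", "192.168.1.0/24")) // true
	fmt.Println(inSubnet("10.0.0.1", "192.168.1.0/24"))     // false
	fmt.Println(hasBit(5, 2))                               // true: bit 2 is set in 0b101
	fmt.Println(strings.HasPrefix("Sysmon", "Sys"))         // startsWith: true
}
```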

Proceed to the next step of the Installation Wizard.

Page top

[Topic 220714]

Step 5. Event aggregation

This is an optional step of the Installation Wizard. The Event aggregation tab of the Installation Wizard allows you to select or create an aggregation rule resource whose settings specify the conditions for aggregating events of the same type. More than one aggregation rule can be added to a collector.

To add an existing aggregation rule to a set of collector resources:

Click the Add aggregation rule button and select the required resource from the Aggregation rule drop-down menu.

To add a new aggregation rule to a set of collector resources:

  1. Click the Add aggregation rule button and select Create new from the Aggregation rule drop-down menu.
  2. Enter the name of the newly created aggregation rule in the Name field. The name must contain from 1 to 128 Unicode characters.
  3. In the Threshold field, specify the number of events that should be received before the aggregation rule triggers and the events are aggregated. The default value is 100.
  4. In the Triggered rule lifetime field, specify the number of seconds during which the program waits for events to aggregate. When this time expires, the aggregation rule is triggered and a new event is created. The default value is 60.
  5. In the Identical fields section, use the Add field button to select the fields that will be used to identify events of the same type. Selected fields can be deleted using the buttons with a cross icon.
  6. In the Unique fields section, you can use the Add field button to select the fields that will disqualify events from aggregation even if they contain the fields listed in the Identical fields section. Selected fields can be deleted using the buttons with a cross icon.
  7. In the Sum fields section, you can use the Add field button to select the fields whose values will be summed during aggregation. Selected fields can be deleted using the buttons with a cross icon.
  8. In the Filter section, you can specify the conditions to define events that will be processed by this resource. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by a key composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the InSubnet, InActiveList, InCategory or InActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit-grey button.

The aggregation rule is added. You can delete it using the cross button.
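The threshold-based aggregation configured in the steps above can be sketched as a simplified in-memory model. This is a conceptual illustration only: the field names and keys are hypothetical, and the real collector also applies the triggered rule lifetime and the unique fields check:

```go
package main

import "fmt"

// rawEvent is one incoming event, reduced to what aggregation needs.
type rawEvent struct {
	Key   string  // concatenation of the identical fields, e.g. "SourceAddress|DestinationAddress"
	Bytes float64 // a "sum field", e.g. BytesOut
}

// aggEvent is one emitted aggregated event.
type aggEvent struct {
	Key   string
	Count int
	Sum   float64
}

// aggregate emits one aggregated event each time `threshold` events with the
// same identical-field key have been seen, summing the sum field along the way.
func aggregate(events []rawEvent, threshold int) []aggEvent {
	type bucket struct {
		count int
		sum   float64
	}
	buckets := map[string]*bucket{}
	var out []aggEvent
	for _, ev := range events {
		b := buckets[ev.Key]
		if b == nil {
			b = &bucket{}
			buckets[ev.Key] = b
		}
		b.count++
		b.sum += ev.Bytes
		if b.count >= threshold {
			// Threshold reached: emit one aggregated event and reset the bucket.
			out = append(out, aggEvent{Key: ev.Key, Count: b.count, Sum: b.sum})
			delete(buckets, ev.Key)
		}
	}
	return out
}

func main() {
	events := []rawEvent{
		{"10.0.0.5|172.16.1.20", 100},
		{"10.0.0.5|172.16.1.20", 250},
		{"10.0.0.5|172.16.1.20", 150},
	}
	for _, a := range aggregate(events, 3) {
		fmt.Printf("aggregated: key=%s count=%d sum=%.0f\n", a.Key, a.Count, a.Sum)
	}
}
```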

Proceed to the next step of the Installation Wizard.

Page top

[Topic 220715]

Step 6. Event enrichment

This is an optional step of the Installation Wizard. On the Event enrichment tab of the Installation Wizard, you can specify which data from which sources should be added to events processed by the collector. You can enrich events with data received using LDAP or via enrichment rules.

LDAP enrichment

To enable enrichment using LDAP:

  1. Click Add enrichment with LDAP data.

    This opens the settings block for LDAP enrichment.

  2. In the LDAP accounts mapping settings block, use the New domain button to specify the domain of the user accounts. You can specify multiple domains.
  3. In the LDAP mapping table, define the rules for mapping KUMA fields to LDAP attributes:
    • In the KUMA field column, specify the KUMA event field whose data should be compared to the LDAP attribute.
    • In the LDAP attribute column, specify the LDAP attribute with which you want to compare the KUMA event field.
    • In the KUMA event field to write to column, specify in which field of the KUMA event the ID of the user account imported from LDAP should be placed if the mapping was successful.

    You can use the Add row button to add a row to the table, and the cross button to remove a row. You can use the Apply default mapping button to fill the mapping table with standard values.

Event enrichment rules for data received from LDAP were added to the group of resources for the collector.

If you add an enrichment to an existing collector using LDAP or change the enrichment settings, you must stop and restart the service.
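The mapping logic described above can be sketched as follows. This is a conceptual illustration only; the attribute names, field names, and account ID are hypothetical:

```go
package main

import "fmt"

// mappingRow mirrors one row of the LDAP mapping table.
type mappingRow struct {
	kumaField  string // KUMA field column: event field whose value is compared
	ldapAttr   string // LDAP attribute column: attribute to compare against
	writeField string // KUMA event field to write to: receives the account ID
}

// applyMapping writes the imported account ID to the target event field for
// every row whose event field value matches the LDAP attribute value.
func applyMapping(event, account map[string]string, accountID string, rows []mappingRow) {
	for _, r := range rows {
		if v := event[r.kumaField]; v != "" && v == account[r.ldapAttr] {
			event[r.writeField] = accountID
		}
	}
}

func main() {
	// A hypothetical LDAP account: attribute name -> value, plus its ID.
	account := map[string]string{"sAMAccountName": "jdoe"}
	accountID := "S-1-5-21-1111111111-2222222222-3333333333-1001"

	event := map[string]string{"SourceUserName": "jdoe"}
	rows := []mappingRow{{"SourceUserName", "sAMAccountName", "SourceUserID"}}

	applyMapping(event, account, accountID, rows)
	fmt.Println(event["SourceUserID"]) // the imported account ID
}
```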

Rule-based enrichment

There can be more than one enrichment rule. You can add them by clicking the Add enrichment button and can remove them by clicking the cross button. You can use existing resources of enrichment rules or create rules directly in the Installation Wizard.

To add an existing enrichment rule to a set of resources:

  1. Click Add enrichment.

    This opens the enrichment rules settings block.

  2. In the Enrichment rule drop-down list, select the relevant resource.

The enrichment rule is added to the set of resources for the collector.

To create a new enrichment rule in a set of resources:

  1. Click Add enrichment.

    This opens the enrichment rules settings block.

  2. In the Enrichment rule drop-down list, select Create new.
  3. In the Source kind drop-down list, select the source of data for enrichment and define its corresponding settings:
    • constant

      This type of enrichment is used when a constant needs to be added to an event field. Settings of this type of enrichment:

      • In the Constant field, specify the value that should be added to the event field. The value should not be longer than 255 Unicode characters. If you leave this field blank, the existing event field value will be cleared.
      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.

    • dictionary

      This type of enrichment is used if you need to add a value from the dictionary to the event field.

      When this type is selected in the Dictionary name drop-down list, you must select the dictionary that will provide the values. In the Key fields settings block, you must use the Add field button to select the event fields whose values will be used for dictionary entry selection.

    • event

      This type of enrichment is used when you need to write a value from another event field to the current event field. Settings of this type of enrichment:

      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
      • In the Source field drop-down list, select the event field whose value will be written to the target field.
      • In the Conversion settings block, you can create rules for modifying the original data before it is written to the KUMA event fields. The conversion type can be selected from the drop-down list. You can use the Add conversion and Delete buttons to add or delete a conversion, respectively. The order of conversions is important.

        Available conversions

        Conversions are changes that can be applied to a value before it gets written to the event field. The conversion type is selected from a drop-down list.

        Available conversions:

        • lower—is used to make all characters of the value lowercase.
        • upper—is used to make all characters of the value uppercase.
        • regexp—is used to convert a value using an RE2 regular expression. When this conversion type is selected, a field appears in which you must specify the regular expression.
        • substring—is used to extract the characters in the position range specified in the Start and End fields. These fields appear when this conversion type is selected.
        • replace—is used to replace a specified character sequence with another character sequence. When this type of conversion is selected, new fields appear:
          • Replace chars—in this field you can specify the character sequence that should be replaced.
          • With chars—in this field you can specify the character sequence that should be used instead of the replaced characters.
        • trim—is used to remove the characters specified in the Chars field from both the beginning and the end of the value. This field appears when this type of conversion is selected. For example, a trim conversion with the value Micromon applied to Microsoft-Windows-Sysmon results in soft-Windows-Sys.
        • append—is used to add the characters specified in the Constant field to the end of the event field value. This field appears when this type of conversion is selected.
        • prepend—is used to add the characters specified in the Constant field to the beginning of the event field value. This field appears when this type of conversion is selected.
        • replace with regexp—is used to replace the results of an RE2 regular expression with a specified character sequence. When this type of conversion is selected, new fields appear:
          • Expression—in this field you can specify the regular expression whose results should be replaced.
          • With chars—in this field you can specify the character sequence that should be used instead of the replaced characters.
    • template

      This type of enrichment is used when you need to write a value obtained by processing Go templates into the event field. Settings of this type of enrichment:

      • Put the Go template into the Template field.

        Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field whose value must be passed to the template.

        Example: Attack on {{.DestinationAddress}} from {{.SourceAddress}}.

      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
    • dns

      This type of enrichment is used to send requests to a private network DNS server to convert IP addresses into domain names or vice versa.

      Available settings:

      • URL—in this field, you can specify the URL of a DNS server to which you want to send requests. You can use the Add URL button to specify multiple URLs.
      • RPS—maximum number of requests sent to the server per second. The default value is 1000.
      • Workers—maximum number of requests processed at any one time. The default value is 1.
      • Max tasks—maximum number of simultaneously fulfilled requests. By default, this value is equal to the number of vCPUs of the KUMA Core server.
      • Cache TTL—the lifetime of the values stored in the cache. The default value is 60.
      • Cache disabled—you can use this drop-down list to enable or disable caching. Caching is enabled by default.
    • cybertrace

      This type of enrichment is used to add information from CyberTrace data streams to event fields.

      Available settings:

      • URL (required)—in this field, you can specify the URL of a CyberTrace server to which you want to send requests.
      • Number of connections—maximum number of connections to the CyberTrace server that can be simultaneously established by KUMA. By default, this value is equal to the number of vCPUs of the KUMA Core server.
      • RPS—maximum number of requests sent to the server per second. The default value is 1000.
      • Timeout—amount of time to wait for a response from the CyberTrace server, in seconds. The default value is 30.
      • Mapping (required)—this settings block contains the mapping table for mapping KUMA event fields to CyberTrace indicator types. The KUMA field column shows the names of KUMA event fields, and the CyberTrace indicator column shows the types of CyberTrace indicators.

        Available types of CyberTrace indicators:

        • ip
        • url
        • hash

        In the mapping table, you must provide at least one row. You can use the Add row button to add a row, and the cross button to remove a row.

    • timezone

      This type of enrichment is used in collectors and correlators to assign a specific timezone to an event. Timezone information may be useful when searching for events that occurred at unusual times, such as nighttime.

      When this type of enrichment is selected, the required timezone must be selected from the Timezone drop-down list.

      Make sure that the required time zone is set on the server hosting the service that uses this enrichment. For example, you can use the timedatectl list-timezones command, which lists all time zones available on the server. For more details on setting time zones, please refer to your operating system documentation.

      When an event is enriched, the time offset of the selected timezone relative to Coordinated Universal Time (UTC) is written to the DeviceTimeZone event field in the +-hh:mm format. For example, if you select the Asia/Yekaterinburg timezone, the value +05:00 will be written to the DeviceTimeZone field. If the enriched event already has a value in the DeviceTimeZone field, it will be overwritten.

      By default, if the timezone is not specified in the event being processed and enrichment rules by timezone are not configured, the event is assigned the timezone of the server hosting the service (collector or correlator) that processes the event. If the server time is changed, the service must be restarted.

      Permissible time formats when enriching the DeviceTimeZone field

      When processing incoming raw events in the collector, the following time formats can be automatically converted to the +-hh:mm format:

      • +-hh:mm (for example, -07:00)
      • +-hhmm (for example, -0700)
      • +-hh (for example, -07)

      If the date format in the DeviceTimeZone field differs from the formats listed above, the collector server timezone is written to the field when an event is enriched with timezone information. You can create custom normalization rules for non-standard time formats.

    • geographic data

      This type of enrichment is used to add IP address geographic data to event fields. Learn more about linking IP addresses to geographic data.

      When this type is selected, in the Mapping geographic data to event fields settings block, you must specify from which event field the IP address will be read, select the required attributes of geographic data, and define the event fields in which geographic data will be written:

      1. In the Event field with IP address drop-down list, select the event field from which the IP address is read. Geographic data uploaded to KUMA is matched against this IP address.

        You can use the Add event field with IP address button to specify multiple event fields with IP addresses that require geographic data enrichment. You can delete event fields added in this way by clicking the Delete event field with IP address button.

        When the SourceAddress, DestinationAddress, and DeviceAddress event fields are selected, the Apply default mapping button becomes available. You can use this button to add preconfigured mapping pairs of geographic data attributes and event fields.

      2. For each event field you need to read the IP address from, select the type of geographic data and the event field to which the geographic data should be written.

        You can use the Add geodata attribute button to add pairs of Geodata attribute and Event field to write to fields. You can also configure different types of geographic data for one IP address to be written to different event fields. To delete a field pair, click cross-red.

        • In the Geodata attribute field, select which geographic data corresponding to the read IP address should be written to the event. Available geographic data attributes: Country, Region, City, Longitude, Latitude.
        • In the Event field to write to drop-down list, select the event field to which the selected geographic data attribute must be written.

        You can write identical geographic data attributes to different event fields. If you configure multiple geographic data attributes to be written to the same event field, the event will be enriched with the last mapping in the sequence.

  4. Use the Debug drop-down list to indicate whether or not to enable logging of service operations. Logging is disabled by default.
  5. In the Filter section, you can specify conditions to identify events that will be processed by the enrichment rule resource. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by a key composed of the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, and inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit button.

The new enrichment rule was added to the set of resources for the collector.

Proceed to the next step of the Installation Wizard.

Page top

[Topic 220716]

Step 7. Routing

This is an optional step of the Installation Wizard. On the Routing tab of the Installation Wizard, you can select or create destination resources with parameters indicating where the events processed by the collector should be redirected. Typically, events from the collector are routed to two destinations: to the correlator for analysis and threat hunting, and to the storage for retention and later viewing of processed events. Events can be sent to other locations as needed. There can be more than one destination point.

To add an existing destination to a collector resource set:

  1. In the Add destination drop-down list, select the type of destination resource you want to add:
    • Select Storage if you want to configure forwarding of processed events to the storage.
    • Select Correlator if you want to configure forwarding of processed events to a correlator.
    • Select Other if you want to send events to other locations.

      This type of resource includes correlator and storage services that were created in previous versions of the program.

    The Add destination window opens where you can specify parameters for event forwarding.

  2. In the Destination drop-down list, select the necessary destination.

    The window name changes to Edit destination, and it displays the settings of the selected resource. The resource can be opened for editing in a new browser tab by using the edit button.

  3. Click Save.

The selected destination is displayed on the Installation Wizard tab. A destination resource can be removed from the resource set by selecting it and clicking Delete in the opened window.

To add a new destination resource to a collector resource set:

  1. In the Add destination drop-down list, select the type of destination resource you want to add:
    • Select Storage if you want to configure forwarding of processed events to the storage.
    • Select Correlator if you want to configure forwarding of processed events to a correlator.
    • Select Other if you want to send events to other locations.

      This type of resource includes correlator and storage services that were created in previous versions of the program.

    The Add destination window opens where you can specify parameters for event forwarding.

  2. Specify the settings on the Basic settings tab:
    • In the Destination drop-down list, select Create new.
    • In the Name field, enter a unique name for the destination resource. The name must contain from 1 to 128 Unicode characters.
    • Use the Disabled toggle button to specify whether events will be sent to this destination. By default, sending events is enabled.
    • Select the Type for the destination resource:
      • Select storage if you want to configure forwarding of processed events to the storage.
      • Select correlator if you want to configure forwarding of processed events to a correlator.
      • Select nats, tcp, http, kafka, or file if you want to configure sending events to other locations.
    • Specify the URL to which events should be sent, in the <hostname>:<API port> format.

      If your KUMA license includes the High Level Availability module, you can specify multiple destination addresses by using the URL button for all types except nats, file, and diode.

      If you have selected storage or correlator as the destination type, the URL field can be populated automatically using the Copy service URL drop-down list that displays active services of the selected type.

    • For the nats and kafka types, use the Topic field to specify which topic the data should be written to. The topic name must contain from 1 to 255 Unicode characters.
  3. If required, define the settings on the Advanced settings tab. The available settings vary based on the selected destination resource type:
    • Compression is a drop-down list where you can enable Snappy compression. By default, compression is disabled.
    • Proxy is a drop-down list for proxy server resource selection.
    • Buffer size field is used to set buffer size (in bytes) for the destination resource. The default value is 1 MB, and the maximum value is 64 MB.
    • Timeout field is used to set the timeout (in seconds) for another service or component response. The default value is 30.
    • Disk buffer size limit field is used to specify the size of the disk buffer in bytes. The default size is 10 GB.
    • Storage ID is a NATS storage identifier.
    • TLS mode is a drop-down list where you can specify the conditions for using TLS encryption:
      • Disabled (default)—do not use TLS encryption.
      • Enabled—encryption is enabled, but without verification.
      • With verification—use encryption with verification that the certificate was signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during program installation and are stored on the KUMA Core server in the folder /opt/kaspersky/kuma/core/certificates/.

      When using TLS, it is impossible to specify an IP address as a URL.

    • URL selection policy is a drop-down list in which you can select a method for determining which URL to send events to if several URLs have been specified:
      • Any. Events are sent to one of the available URLs as long as this URL receives events. If the connection is broken (for example, the receiving node is disconnected), a different URL is selected as the events destination.
      • Prefer first. Events are sent to the first URL in the list of added addresses. If it becomes unavailable, events are sent to the next available node in sequence. When the first URL becomes available again, events start to be sent to it again.
      • Round robin. Packets with events will be evenly distributed among available URLs from the list. Because packets are sent either on a destination buffer overflow or on the flush timer, this URL selection policy does not guarantee an equal distribution of events to destinations.
    • Delimiter is used to specify the character delimiting the events. By default, \n is used.
    • Path—the file path if the file destination type is selected.
    • Buffer flush interval—this field is used to set the time interval (in seconds) at which the data is sent to the destination. The default value is 100.
    • Workers—this field is used to set the number of services processing the queue. By default, this value is equal to the number of vCPUs of the KUMA Core server.
    • You can set health checks using the Health check path and Health check timeout fields. You can also disable health checks by selecting the Health Check Disabled check box.
    • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
    • The Disk buffer disabled drop-down list is used to enable or disable the use of a disk buffer. By default, the disk buffer is disabled.
    • In the Filter section, you can specify the conditions to define events that will be processed by this resource. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

      Creating a filter in resources

      1. In the Filter drop-down list, select Create new.
      2. If you want to keep the filter as a separate resource, select the Save filter check box.

        In this case, you will be able to use the created filter in various services.

        This check box is cleared by default.

      3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
      4. In the Conditions settings block, specify the conditions that the events must meet:
        1. Click the Add condition button.
        2. In the Left operand and Right operand drop-down lists, specify the search parameters.

          Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

        3. In the operator drop-down list, select the relevant operator.

          Filter operators

          • =—the left operand equals the right operand.
          • <—the left operand is less than the right operand.
          • <=—the left operand is less than or equal to the right operand.
          • >—the left operand is greater than the right operand.
          • >=—the left operand is greater than or equal to the right operand.
          • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
          • contains—the left operand contains values of the right operand.
          • startsWith—the left operand starts with one of the values of the right operand.
          • endsWith—the left operand ends with one of the values of the right operand.
          • match—the left operand matches the regular expression of the right operand. RE2 regular expressions are used.
          • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
          • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
          • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
          • inDictionary—checks whether the specified dictionary contains an entry defined by a key composed of the concatenated values of the selected event fields.
          • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
          • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
          • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
        4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

          The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, and inActiveDirectoryGroup operators.

          This check box is cleared by default.

        5. If you want to add a negative condition, select If not from the If drop-down list.
        6. You can add multiple conditions or a group of conditions.
      5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
      6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

        You can view the nested filter settings by clicking the edit button.

  4. Click Save.

The created destination resource is displayed on the Installation Wizard tab. A destination resource can be removed from the resource set by selecting it and clicking Delete in the opened window.
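If several URLs are specified for a destination, the URL selection policy from the Advanced settings tab decides where events go. As a toy illustration only, the sketch below models the Prefer first behavior; the hostnames, port, and the is_up stub are hypothetical and stand in for a real availability check.

```shell
# Toy model of the "Prefer first" URL selection policy (hostnames, port,
# and the is_up stub are illustrative, not real KUMA components).
URLS="kuma-storage-01.example.com:7230 kuma-storage-02.example.com:7230"

is_up() {
    # Stub availability check: pretend the first node is down.
    [ "$1" != "kuma-storage-01.example.com:7230" ]
}

prefer_first() {
    # Walk the list in order and pick the first URL that responds.
    for u in $URLS; do
        if is_up "$u"; then printf '%s\n' "$u"; return 0; fi
    done
    return 1
}

prefer_first    # prints kuma-storage-02.example.com:7230 while the first node is "down"
```

When the first URL becomes available again, a real Prefer first policy switches back to it; the Any and Round robin policies differ only in how the list is walked.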

Proceed to the next step of the Installation Wizard.

Page top

[Topic 220717]

Step 8. Setup validation

This is the required final step of the Installation Wizard. At this step, KUMA creates a set of resources for the service, and services are created automatically based on this set:

  • The set of resources for the collector is displayed under Resources → Collectors. It can be used to create new collector services. When this set of resources changes, all services that operate based on it will start using the new parameters after the services are restarted. To do so, you can use the Save and restart services and Save and update service configurations buttons.

    A set of resources can be modified, copied, moved from one folder to another, deleted, imported, and exported, like other resources.

  • Services are displayed under Resources → Active services. The services created using the Installation Wizard perform functions inside the KUMA program. To communicate with external parts of the network infrastructure, you need to install similar external services on the servers and assets intended for them. For example, an external collector service should be installed on a server intended as an events recipient, external storage services should be installed on servers that have a deployed ClickHouse service, and external agent services should be installed on Windows assets that must both receive and forward Windows events.

To finish the Installation Wizard:

  1. Click Create and save service.

    The Setup validation tab of the Installation Wizard displays a table of services created based on the set of resources selected in the Installation Wizard. The lower part of the window shows examples of commands that you must use to install external equivalents of these services on their intended servers and assets.

    For example:

    /opt/kaspersky/kuma/kuma collector --core https://kuma-example:<port used for communication with the KUMA Core> --id <service ID> --api.port <port used for communication with the service> --install

    The "kuma" file can be found inside the installer in the /kuma-ansible-installer/roles/kuma/files/ directory.

    The port for communication with the KUMA Core, the service ID, and the port for communication with the service are added to the command automatically. You should also ensure the network connectivity of the KUMA system and open the ports used by its components if necessary.

  2. Close the Wizard by clicking Save collector.

The collector service is created in KUMA. Now you will install a similar service on the server intended for receiving events.

If a wmi or wec connector was selected for collectors, you must also install the automatically created KUMA agents.

Page top

[Topic 220708]

Installing a collector in a KUMA network infrastructure

A collector consists of two parts: one part is created inside the KUMA web interface, and the other part is installed on the network infrastructure server intended for receiving events.

To install a collector:

  1. Log in to the server where you want to install the service.
  2. Create the /opt/kaspersky/kuma/ folder.
  3. Copy the "kuma" file to the /opt/kaspersky/kuma/ folder. The file is located in the installer in the /kuma-ansible-installer/roles/kuma/files/ folder.

    Make sure the kuma file has sufficient rights to run.

  4. Execute the following command:

    sudo /opt/kaspersky/kuma/kuma collector --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 is used by default)> --id <service ID copied from the KUMA web interface> --api.port <port used for communication with the installed component>

    Example: sudo /opt/kaspersky/kuma/kuma collector --core https://test.kuma.com:7210 --id XXXX --api.port YYYY

    If errors are detected as a result of the command execution, make sure that the settings are correct: for example, that you have the required access level, that there is network connectivity between the collector service and the Core, and that the selected API port is unique. After fixing the errors, continue installing the collector.

    If no errors were found, and the collector status in the KUMA web interface is changed to green, stop the command execution and proceed to the next step.

    The command can be copied at the last step of the installer wizard. It automatically specifies the address and port of the KUMA Core server, the identifier of the collector to be installed, and the port that the collector uses for communication.

    When deploying several KUMA services on the same host, you must specify unique ports for each component during installation by using the --api.port <port> parameter. The following value is used by default: --api.port 7221.

    Before installation, ensure the network connectivity of KUMA components.

  5. Run the command again by adding the --install key:

    sudo /opt/kaspersky/kuma/kuma collector --core https://<KUMA Core server FQDN>:<port used by KUMA Core server for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --api.port <port used for communication with the installed component> --install

    Example: sudo /opt/kaspersky/kuma/kuma collector --core https://kuma.example.com:7210 --id XXXX --api.port YYYY --install

  6. Add the KUMA collector port to firewall exclusions.

    For the program to run correctly, ensure that the KUMA components are able to interact with other components and programs over the network via the protocols and ports specified during the installation of the KUMA components.

The collector is installed. You can use it to receive data from an event source and forward it for processing.
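The two runs of the kuma binary above can be sketched as a single script. Everything here is a placeholder (Core FQDN, service ID, API port); in practice, copy the exact command from the last step of the Installation Wizard rather than assembling it by hand.

```shell
# Placeholders - substitute the values shown by the Installation Wizard.
CORE_FQDN="kuma-core.example.com"   # illustrative KUMA Core FQDN
SERVICE_ID="XXXX"                   # service ID copied from the web interface
API_PORT=7221                       # default API port; must be unique per host

CMD="/opt/kaspersky/kuma/kuma collector --core https://${CORE_FQDN}:7210 --id ${SERVICE_ID} --api.port ${API_PORT}"

# Phase 1: run without --install to validate the settings; wait for the
# collector status in the web interface to turn green, then stop the process.
echo "sudo $CMD"

# Phase 2: run again with --install to register the service.
echo "sudo $CMD --install"
```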

Page top

[Topic 221402]

Validating collector installation

To verify that the collector is ready to receive events:

  1. In the KUMA web interface, open Resources → Active services.
  2. Make sure that the collector you installed has the green status.

If the collector is installed correctly and you are sure that data is coming from the event source, searching for events associated with the collector should return results in the events table.

To check for normalization errors using the Events section of the KUMA web interface:

  1. Make sure that the Collector service is running.
  2. Make sure that the event source is providing events to KUMA.
  3. Make sure that you selected Only errors in the Keep raw event drop-down list of the Normalizer resource in the Resources section of the KUMA web interface.
  4. In the Events section of KUMA, search for events with the following parameters:

If any events are found with this search, it means that there are normalization errors and they should be investigated.

To check for normalization errors using the Grafana Dashboard:

  1. Make sure that the Collector service is running.
  2. Make sure that the event source is providing events to KUMA.
  3. Open the Metrics section and follow the KUMA Collectors link.
  4. See if the Errors section of the Normalization widget displays any errors.

If any errors are displayed, there are normalization errors that should be investigated.

For WEC and WMI collectors, you must ensure that unique ports are used to connect to their agents. This port is specified in the Transport section of the Collector Installation Wizard.

Page top

[Topic 238522]

Ensuring uninterrupted collector operation

An uninterrupted event stream from the event source to KUMA is important for protecting the network infrastructure. Continuity can be ensured through automatic forwarding of the event stream to a larger number of collectors:

  • On the KUMA side, two or more identical collectors must be installed.
  • On the event source side, you must configure control of event streams between collectors using third-party server load management tools, such as rsyslog or nginx.

With this configuration of the collectors in place, no incoming events will be lost if a collector server becomes unavailable for any reason.

Please keep in mind that when the event stream switches between collectors, each collector will aggregate events separately.

In this section

Event stream control using rsyslog

Event stream control using nginx

Page top

[Topic 238527]

Event stream control using rsyslog

To enable rsyslog event stream control on the event source server:

  1. Create two or more identical collectors that you want to use to ensure uninterrupted reception of events.
  2. Install rsyslog on the event source server (see the rsyslog documentation).
  3. Add rules for forwarding the event stream between collectors to the configuration file /etc/rsyslog.conf:

    *.* @@<main collector server FQDN>:<port for incoming events>

    $ActionExecOnlyWhenPreviousIsSuspended on

    *.* @@<backup collector server FQDN>:<port for incoming events>

    $ActionExecOnlyWhenPreviousIsSuspended off

    Example configuration file

    Example configuration file specifying one primary and two backup collectors. The collectors are configured to receive events on TCP port 5140.

    *.* @@kuma-collector-01.example.com:5140

    $ActionExecOnlyWhenPreviousIsSuspended on

    & @@kuma-collector-02.example.com:5140

    & @@kuma-collector-03.example.com:5140

    $ActionExecOnlyWhenPreviousIsSuspended off

  4. Restart rsyslog by running the systemctl restart rsyslog command.

Event stream control is now enabled on the event source server.
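The forwarding rules can also be generated from variables, which keeps the primary and backup entries consistent. This is a sketch with placeholder FQDNs and port; review the output before appending it to /etc/rsyslog.conf and restarting rsyslog.

```shell
# Generate failover forwarding rules for rsyslog (FQDNs and port are placeholders).
MAIN="kuma-collector-01.example.com"
BACKUP="kuma-collector-02.example.com"
PORT=5140

RULES="*.* @@${MAIN}:${PORT}
\$ActionExecOnlyWhenPreviousIsSuspended on
*.* @@${BACKUP}:${PORT}
\$ActionExecOnlyWhenPreviousIsSuspended off"

printf '%s\n' "$RULES"
# Append the output to /etc/rsyslog.conf, then: sudo systemctl restart rsyslog
```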

Page top

[Topic 238530]

Event stream control using nginx

To control the event stream using nginx, you need to create and configure an nginx server to receive events from the event source and then forward them to collectors.

To enable nginx event stream control on the event source server:

  1. Create two or more identical collectors that you want to use to ensure uninterrupted reception of events.
  2. Install nginx on the server intended for event stream control.
    • Installation command in Oracle Linux 8.6:

      $ sudo dnf install nginx

    • Installation command in Ubuntu 20.04:

      $ sudo apt-get install nginx

      When installing nginx from source, you must compile it with the --with-stream option:
      $ sudo ./configure --with-stream --without-http_rewrite_module --without-http_gzip_module

  3. On the nginx server, add the stream module to the nginx.conf configuration file that contains the rules for forwarding the stream of events between collectors.

    Example stream module

    Example module in which the event stream is distributed between the collectors kuma-collector-01.example.com and kuma-collector-02.example.com, which receive events via TCP on port 5140 and via UDP on port 5141. Balancing is performed by the nginx server nginx.example.com.

    stream {
        upstream syslog_tcp {
            server kuma-collector-01.example.com:5140;
            server kuma-collector-02.example.com:5140;
        }
        upstream syslog_udp {
            server kuma-collector-01.example.com:5141;
            server kuma-collector-02.example.com:5141;
        }
        server {
            listen nginx.example.com:5140;
            proxy_pass syslog_tcp;
        }
        server {
            listen nginx.example.com:5141 udp;
            proxy_pass syslog_udp;
            proxy_responses 0;
        }
    }

    worker_rlimit_nofile 1000000;

    events {
        worker_connections 20000;
    }

    # worker_rlimit_nofile is the limit on the number of open files (RLIMIT_NOFILE) for worker processes. It is used to raise the limit without restarting the main process.

    # worker_connections is the maximum number of connections that a worker process can open simultaneously.

  4. Restart nginx by running the systemctl restart nginx command.
  5. On the event source server, forward events to the nginx server.

Event stream control is now enabled on the event source server.
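Before restarting the live service, it can help to stage the stream block in a scratch file and sanity-check it. The sketch below uses placeholder hostnames and a single TCP listener; the commented-out nginx -t call validates the merged configuration.

```shell
# Stage a minimal stream block in a scratch file (hostnames and port are placeholders).
CONF=/tmp/kuma-stream.conf
cat > "$CONF" <<'EOF'
stream {
    upstream syslog_tcp {
        server kuma-collector-01.example.com:5140;
        server kuma-collector-02.example.com:5140;
    }
    server {
        listen 5140;
        proxy_pass syslog_tcp;
    }
}
EOF

grep -c 'server kuma-collector' "$CONF"    # prints 2: both upstream servers present
# After merging the block into nginx.conf, validate and restart:
# sudo nginx -t && sudo systemctl restart nginx
```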

Nginx Plus may be required to fine-tune balancing, but certain balancing methods, such as Round Robin and Least Connections, are available in the base version of nginx.

For more details on configuring nginx, please refer to the nginx documentation.

Page top

[Topic 217787]

Creating a correlator

A correlator consists of two parts: one part is created inside the KUMA web interface, and the other part is installed on the network infrastructure server intended for processing events.

Actions in the KUMA web interface

A correlator is created in the KUMA web interface by using the Installation Wizard, which combines the necessary resources into a set of resources for the correlator. Upon completion of the Wizard, the service is automatically created based on this set of resources.

To create a correlator in the KUMA web interface:

Start the Correlator Installation Wizard:

  • In the KUMA web interface, under Resources, click Create correlator.
  • In the KUMA web interface, under Resources → Correlators, click Add correlator.

As a result of completing the steps of the Wizard, a correlator service is created in the KUMA web interface.

A resource set for a correlator includes the following resources:

These resources can be prepared in advance, or you can create them while the Installation Wizard is running.

Actions on the KUMA correlator server

If you are installing the correlator on a server that you intend to use for event processing, you need to run the command displayed at the last step of the Installation Wizard on the server. When installing, you must specify the identifier automatically assigned to the service in the KUMA web interface, as well as the port used for communication.

Testing the installation

After creating a correlator, it is recommended to make sure that it is working correctly.

In this section

Starting the Correlator Installation Wizard

Installing a correlator in a KUMA network infrastructure

Validating correlator installation

Page top

[Topic 221166]

Starting the Correlator Installation Wizard

To start the Correlator Installation Wizard:

  • In the KUMA web interface, under Resources, click Add correlator.
  • In the KUMA web interface, under Resources → Correlators, click Add correlator.

Follow the instructions of the Wizard.

Aside from the first and last steps, the steps of the Wizard can be performed in any order. You can switch between steps by using the Next and Previous buttons, as well as by clicking the names of the steps in the left part of the window.

After the Wizard completes, a resource set for the correlator is created in the KUMA web interface under Resources → Correlators, and a correlator service is added under Resources → Active services.

In this section

Step 1. General correlator settings

Step 2. Global variables

Step 3. Correlation

Step 4. Enrichment

Step 5. Response

Step 6. Routing

Step 7. Setup validation

Page top

[Topic 221167]

Step 1. General correlator settings

This is a required step of the Installation Wizard. At this step, you specify the main settings of the correlator: the correlator name and the tenant that will own it.

To define the main settings of the correlator:

  • In the Name field, enter a unique name for the service you are creating. The name must contain from 1 to 128 Unicode characters.
  • In the Tenant drop-down list, select the tenant that will own the correlator. The tenant selection determines what resources will be available when the correlator is created.

    If you return to this window from another subsequent step of the Installation Wizard and select another tenant, you will have to manually edit all the resources that you have added to the service. Only resources from the selected tenant and shared tenant can be added to the service.

  • If required, specify the number of processes that the service can run concurrently in the Workers field. By default, the number of worker processes is the same as the number of vCPUs on the server where the service is installed.
  • If necessary, use the Debug drop-down list to enable logging of service operations.
  • You can optionally add up to 256 Unicode characters describing the service in the Description field.

The main settings of the correlator are defined. Proceed to the next step of the Installation Wizard.

Page top

[Topic 233900]

Step 2. Global variables

If tracking values in event fields, active lists, or dictionaries is not enough to cover some specific security scenarios, you can use global and local variables. You can use them to take various actions on the values received by the correlators by implementing complex logic for threat detection. Variables can be assigned a specific function and then queried from correlation rules as if they were ordinary event fields, with the triggered function result received in response.

To add a global variable in the correlator,

click the Add variable button and specify the following parameters:

  • In the Variable window, enter the name of the variable.

    Variable naming requirements

    • Must be unique within the correlator.
    • Must contain from 1 to 128 Unicode characters.
    • Must not begin with the character $.
    • Must be written in camelCase or CamelCase.
  • In the Value window, enter the variable function.

    Description of variable functions.

The global variable is added. It can be queried from correlation rules by adding the $ character in front of the variable name. There can be multiple variables. Added variables can be edited or deleted by using the cross icon.

Proceed to the next step of the Installation Wizard.

Page top

[Topic 221168]

Step 3. Correlation

This is an optional but recommended step of the Installation Wizard. On the Correlation tab of the Installation Wizard, you should select or create correlation rule resources. These resources define the sequences of events that indicate security-related incidents. When such a sequence is detected, the correlator creates a correlation event and an alert.

If you have added global variables to the correlator, all added correlation rules can query them.

Correlation rules that are added to the set of resources for the correlator are displayed in the table with the following columns:

  • Correlation rules—name of the correlation rule resource.
  • Type—type of correlation rule: standard, simple, operational. The table can be filtered based on the values of this column by clicking the column header and selecting the relevant values.
  • Actions—list of actions that will be performed by the correlator when the correlation rule is triggered. These actions are indicated in the correlation rule settings. The table can be filtered based on the values of this column by clicking the column header and selecting the relevant values.

You can use the Search field to search for a correlation rule. Added correlation rules can be removed from the set of resources by selecting the relevant rules and clicking Delete.

When a correlation rule is selected, a window opens to show its settings. The resource settings can be edited and then saved by clicking the Save button. If you click Delete in this window, the correlation rule is unlinked from the set of resources.

To link the existing correlation rules to the set of resources for the correlator:

  1. Click Link.

    The resource selection window opens.

  2. Select the relevant correlation rules and click OK.

The correlation rules will be linked to the set of resources for the correlator and will be displayed in the rules table.

To create a new correlation rule in a set of resources for a correlator:

  1. Click Add.

    The correlation rule creation window opens.

  2. Specify the correlation rule settings and click Save.

The correlation rule will be created and linked to the set of resources for the correlator. It is displayed in the correlation rules table and in the list of resources under Resources → Correlation rules.

Proceed to the next step of the Installation Wizard.

Page top

[Topic 221169]

Step 4. Enrichment

This is an optional step of the Installation Wizard. On the Enrichment tab of the Installation Wizard, you can select or create a resource for enrichment rules and indicate which data from which sources should be added to correlation events created by the correlator. There can be more than one enrichment rule. You can add them by clicking the Add button and can remove them by clicking the cross button.

To add an existing enrichment rule to a set of resources:

  1. Click Add.

    This opens the enrichment rule settings block.

  2. In the Enrichment rule drop-down list, select the relevant resource.

The enrichment rule is added to the set of resources for the correlator.

To create a new enrichment rule in a set of resources:

  1. Click Add.

    This opens the enrichment rule settings block.

  2. In the Enrichment rule drop-down list, select Create new.
  3. In the Source kind drop-down list, select the source of data for enrichment and define its corresponding settings:
    • constant

      This type of enrichment is used when a constant needs to be added to an event field. Settings of this type of enrichment:

      • In the Constant field, specify the value that should be added to the event field. The value should not be longer than 255 Unicode characters. If you leave this field blank, the existing event field value will be cleared.
      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.

    • dictionary

      This type of enrichment is used if you need to add a value from the dictionary to the event field.

      When this type is selected in the Dictionary name drop-down list, you must select the dictionary that will provide the values. In the Key fields settings block, you must use the Add field button to select the event fields whose values will be used for dictionary entry selection.

    • event

      This type of enrichment is used when you need to write a value from another event field to the current event field. Settings of this type of enrichment:

      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
      • In the Source field drop-down list, select the event field whose value will be written to the target field.
      • In the Conversion settings block, you can create rules for modifying the original data before it is written to the KUMA event fields. The conversion type can be selected from the drop-down list. You can use the Add conversion and Delete buttons to add or delete a conversion, respectively. The order of conversions is important.

        Available conversions

        Conversions are changes that can be applied to a value before it gets written to the event field. The conversion type is selected from a drop-down list.

        Available conversions:

        • lower—is used to make all characters of the value lowercase
        • upper—is used to make all characters of the value uppercase
        • regexp—is used to convert a value using an RE2 regular expression. When this conversion type is selected, a field appears in which you must specify the regular expression.
        • substring—is used to extract characters in the position range specified in the Start and End fields. These fields appear when this conversion type is selected.
        • replace—is used to replace a specified character sequence with another character sequence. When this type of conversion is selected, new fields appear:
          • Replace chars—in this field you can specify the character sequence that should be replaced.
          • With chars—in this field you can specify the character sequence that should be used instead of the replaced characters.
        • trim—is used to simultaneously remove the characters specified in the Chars field from the leading and trailing positions of the value. The field appears when this type of conversion is selected. For example, a trim conversion with the Micromon value applied to Microsoft-Windows-Sysmon results in soft-Windows-Sys.
        • append—is used to add the characters specified in the Constant field to the end of the event field value. The field appears when this type of conversion is selected.
        • prepend—is used to add the characters specified in the Constant field to the start of the event field value. The field appears when this type of conversion is selected.
        • replace with regexp—is used to replace the results of an RE2 regular expression with a specified character sequence. When this type of conversion is selected, new fields appear:
          • Expression—in this field you can specify the regular expression whose results should be replaced.
          • With chars—in this field you can specify the character sequence that should be used instead of the replaced characters.
    • template

      This type of enrichment is used when you need to write a value obtained by processing Go templates into the event field. Settings of this type of enrichment:

      • Put the Go template into the Template field.

        Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field whose value must be passed to the template.

        Example: Attack on {{.DestinationAddress}} from {{.SourceAddress}}.

      • In the Target field drop-down list, select the KUMA event field to which you want to write the data.
    • dns

      This type of enrichment is used to send requests to a private network DNS server to convert IP addresses into domain names or vice versa.

      Available settings:

      • URL—in this field, you can specify the URL of a DNS server to which you want to send requests. You can use the Add URL button to specify multiple URLs.
      • RPS—maximum number of requests sent to the server per second. The default value is 1000.
      • Workers—maximum number of requests processed at the same time. The default value is 1.
      • Max tasks—maximum number of simultaneously fulfilled requests. By default, this value is equal to the number of vCPUs of the KUMA Core server.
      • Cache TTL—the lifetime of the values stored in the cache. The default value is 60.
      • Cache disabled—you can use this drop-down list to enable or disable caching. Caching is enabled by default.
    • cybertrace

      This type of enrichment is used to add information from CyberTrace data streams to event fields.

      Available settings:

      • URL (required)—in this field, you can specify the URL of a CyberTrace server to which you want to send requests.
      • Number of connections—maximum number of connections to the CyberTrace server that can be simultaneously established by KUMA. By default, this value is equal to the number of vCPUs of the KUMA Core server.
      • RPS—maximum number of requests sent to the server per second. The default value is 1000.
      • Timeout—amount of time to wait for a response from the CyberTrace server, in seconds. The default value is 30.
      • Mapping (required)—this settings block contains the mapping table for mapping KUMA event fields to CyberTrace indicator types. The KUMA field column shows the names of KUMA event fields, and the CyberTrace indicator column shows the types of CyberTrace indicators.

        Available types of CyberTrace indicators:

        • ip
        • url
        • hash

        In the mapping table, you must provide at least one row. You can use the Add row button to add a row, and can use the cross button to remove a row.

    • timezone

      This type of enrichment is used in collectors and correlators to assign a specific timezone to an event. Timezone information may be useful when searching for events that occurred at unusual times, such as nighttime.

      When this type of enrichment is selected, the required timezone must be selected from the Timezone drop-down list.

      Make sure that the required time zone is available on the server hosting the service that uses the enrichment. For example, you can check this by using the timedatectl list-timezones command, which lists the time zones available on the server. For more details on setting time zones, please refer to your operating system documentation.

      When an event is enriched, the time offset of the selected timezone relative to Coordinated Universal Time (UTC) is written to the DeviceTimeZone event field in the +-hh:mm format. For example, if you select the Asia/Yekaterinburg timezone, the value +05:00 will be written to the DeviceTimeZone field. If the enriched event already has a value in the DeviceTimeZone field, it will be overwritten.

      By default, if the timezone is not specified in the event being processed and enrichment rules by timezone are not configured, the event is assigned the timezone of the server hosting the service (collector or correlator) that processes the event. If the server time is changed, the service must be restarted.

      Permissible time formats when enriching the DeviceTimeZone field

      When processing incoming raw events in the collector, the following time formats can be automatically converted to the +-hh:mm format:

      • +-hh:mm (for example, -07:00)
      • +-hhmm (for example, -0700)
      • +-hh (for example, -07)

      If the date format in the DeviceTimeZone field differs from the formats listed above, the collector server timezone is written to the field when an event is enriched with timezone information. You can create custom normalization rules for non-standard time formats.

  4. Use the Debug drop-down list to indicate whether or not to enable logging of service operations. Logging is disabled by default.
  5. In the Filter section, you can specify conditions to identify events that will be processed by the enrichment rule resource. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit-grey button.

The new enrichment rule is added to the set of resources for the correlator.
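To make the conversion and timezone semantics above concrete, here is a rough Python sketch. It assumes that trim removes a set of characters from both ends (which matches the Microsoft-Windows-Sysmon example), that substring positions are zero-based and end-exclusive, and it uses Python's re module in place of RE2; these are illustrative assumptions, not KUMA's implementation.

```python
import re
from datetime import datetime
from zoneinfo import ZoneInfo

value = "Microsoft-Windows-Sysmon"

# trim with the Micromon value: removes those characters from the
# leading and trailing positions, like Python's str.strip with a set.
print(value.strip("Micromon"))          # soft-Windows-Sys

# substring with Start=0, End=9 (zero-based, end-exclusive assumed)
print(value[0:9])                       # Microsoft

# replace: one character sequence with another
print(value.replace("Windows", "Win"))  # Microsoft-Win-Sysmon

# regexp: convert a value using a regular expression
print(re.sub(r"-.*$", "", value))       # Microsoft

# DeviceTimeZone enrichment: the UTC offset of the selected timezone
# is written in the +-hh:mm format, e.g. Asia/Yekaterinburg -> +05:00.
offset = datetime(2024, 1, 1, tzinfo=ZoneInfo("Asia/Yekaterinburg")).strftime("%z")
formatted = offset[:3] + ":" + offset[3:]
print(formatted)                        # +05:00
```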

Proceed to the next step of the Installation Wizard.

Page top

[Topic 221170]

Step 5. Response

This is an optional step of the Installation Wizard. On the Response tab of the Installation Wizard, you can select or create a resource for response rules and indicate which actions must be performed when the correlation rules are triggered. There can be multiple response rules. You can add them by clicking the Add button and can remove them by clicking the cross button.

To add an existing response rule to a set of resources:

  1. Click Add.

    The response rule settings window opens.

  2. In the Response rule drop-down list, select the relevant resource.

The response rule is added to the set of resources for the correlator.

To create a new response rule in a set of resources:

  1. Click Add.

    The response rule settings window opens.

  2. In the Response rule drop-down list, select Create new.
  3. In the Type drop-down list, select the type of response rule and define its corresponding settings:
    • ksctasks—response rules for automatically starting tasks on Kaspersky Security Center assets. For example, you can configure automatic startup of a virus scan or database update.

      Tasks are automatically started when KUMA is integrated with Kaspersky Security Center. Tasks are run only on assets that were imported from Kaspersky Security Center.

      Settings of ksctasks responses

      • Kaspersky Security Center task (required)—name of the Kaspersky Security Center task that you need to start. Tasks must be created beforehand, and their names must begin with "KUMA ". For example, "KUMA antivirus check".
      • Event field (required)—defines the event field of the asset for which the Kaspersky Security Center task should be started. Possible values:
        • SourceAssetID
        • DestinationAssetID
        • DeviceAssetID

      To send requests to Kaspersky Security Center, you must ensure that Kaspersky Security Center is available over the UDP protocol.

    • script—response rules for automatically running a script. For example, you can create a script containing commands to be executed on the KUMA server when selected events are detected.

      The script file is stored on the server where the correlator service using the response resource is installed: /opt/kaspersky/kuma/correlator/<Correlator ID>/scripts.

      The kuma user of this server requires permission to run the script.

      Settings of script responses

      • Timeout—the number of seconds the system will wait before running the script.
      • Script name (required)—the name of the script file.

        If the script Response resource is linked to the Correlator service, but there is no script file in the /opt/kaspersky/kuma/correlator/<Correlator ID>/scripts folder, the service will not start.

      • Script arguments—parameters or event field values that must be passed to the script.

        If the script includes actions taken on files, you should specify the absolute path to these files.

        Parameters can be written with quotation marks (").

        Event field names are passed in the {{.EventField}} format, where EventField is the name of the event field whose value must be passed to the script.

        Example: -n "\"usr\": {{.SourceUserName}}"

    • kata/edr—response rules for automatically creating prevention rules, starting network isolation, or starting the application on Kaspersky Endpoint Detection and Response and Kaspersky Security Center assets.

      Automatic response actions are carried out when KUMA is integrated with Kaspersky Endpoint Detection and Response.

      Settings of kata/edr-type responses

      • Event field (required)—event field containing the asset for which the response actions are needed. Possible values:
        • SourceAssetID
        • DestinationAssetID
        • DeviceAssetID
      • Task type—response action to be performed when data matching the filter is received. The following types of response actions are available:
        • Enable network isolation.

          When selecting this type of response, you need to define values for the following settings:

          • Isolation timeout—the number of hours during which the network isolation of an asset will be active. You can indicate from 1 to 9999 hours.

            If necessary, you can add an exclusion for network isolation.

            To add an exclusion for network isolation:

            1. Click the Add exclusion button.
            2. Select the direction of network traffic that must not be blocked:
              • Inbound.
              • Outbound.
              • Inbound/Outbound.
            3. In the Asset IP field, enter the IP address of the asset whose network traffic must not be blocked.
            4. If you selected Inbound or Outbound, specify the connection ports in the Remote ports and Local ports fields.
            5. If you want to add more than one exclusion, click Add exclusion and repeat the steps to fill in the Traffic direction, Asset IP, Remote ports and Local ports fields.
            6. If you want to delete an exclusion, click the Delete button under the relevant exclusion.

            When adding exclusions to a network isolation rule, Kaspersky Endpoint Detection and Response may incorrectly display the port values in the rule details. This does not affect application performance. For more details on viewing a network isolation rule, please refer to the Kaspersky Anti Targeted Attack Platform Help Guide.

        • Disable network isolation.
        • Add prevention rule.

          When selecting this type of response, you need to define values for the following settings:

          • Event fields to extract hash from—event fields from which KUMA extracts SHA256 or MD5 hashes of the files that must be prevented from starting.

            The selected event fields and the values selected in the Event field must be added to the inherited fields of the correlation rule.

          • File hash #1—SHA256 or MD5 hash of the file to be blocked.

          At least one of the above fields must be completed.

        • Delete prevention rule.
        • Run program.

          When selecting this type of response, you need to define values for the following settings:

          • File path—path to the file of the process that you want to start.
          • Command line parameters—parameters with which you want to start the file.
          • Working directory—directory in which the file is located at the time of startup.

          When a response rule is triggered for users with the General Administrator role, the Run program task will be displayed in the Task manager section of the program web interface. For this task, Scheduled is displayed in the Created column of the task table. You can view task completion results.

          All of the listed operations can be performed on assets that have Kaspersky Endpoint Agent for Windows. On assets that have Kaspersky Endpoint Agent for Linux, the program can only be started.

          At the software level, nothing prevents you from creating prevention rules and network isolation rules for assets with Kaspersky Endpoint Agent for Linux; however, KUMA and Kaspersky Endpoint Detection and Response do not provide any notifications if these rules fail to apply.

    • kics—response rules for automatically starting tasks on KICS for Networks assets. For example, you can change the asset status in KICS for Networks.

      Tasks are automatically started when KUMA is integrated with KICS for Networks.

      Settings of kics responses

      • Event field (required)—event field containing the asset for which the response actions are needed. Possible values:
        • SourceAssetID
        • DestinationAssetID
        • DeviceAssetID
      • KICS for Networks task—response action to be performed when data matching the filter is received. The following types of response actions are available:
        • Change asset status to Authorized.
        • Change asset status to Unauthorized.

        When a response rule is triggered, KUMA will send KICS for Networks an API request to change the status of the specified device to Authorized or Unauthorized.

  4. In the Workers field, specify the number of processes that the service can run simultaneously.

    By default, the number of workers is the same as the number of virtual processors on the server where the service is installed.

    This field is optional.

  5. In the Filter section, you can specify conditions to identify events that will be processed by the response rule resource. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

    Creating a filter in resources

    1. In the Filter drop-down list, select Create new.
    2. If you want to keep the filter as a separate resource, select the Save filter check box.

      In this case, you will be able to use the created filter in various services.

      This check box is cleared by default.

    3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
    4. In the Conditions settings block, specify the conditions that the events must meet:
      1. Click the Add condition button.
      2. In the Left operand and Right operand drop-down lists, specify the search parameters.

        Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

      3. In the operator drop-down list, select the relevant operator.

        Filter operators

        • =—the left operand equals the right operand.
        • <—the left operand is less than the right operand.
        • <=—the left operand is less than or equal to the right operand.
        • >—the left operand is greater than the right operand.
        • >=—the left operand is greater than or equal to the right operand.
        • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
        • contains—the left operand contains values of the right operand.
        • startsWith—the left operand starts with one of the values of the right operand.
        • endsWith—the left operand ends with one of the values of the right operand.
        • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
        • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
        • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
        • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
        • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed with the concatenated values of the selected event fields.
        • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
        • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
        • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
      4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

        The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

        This check box is cleared by default.

      5. If you want to add a negative condition, select If not from the If drop-down list.
      6. You can add multiple conditions or a group of conditions.
    5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
    6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

      You can view the nested filter settings by clicking the edit-grey button.

The new response rule is added to the set of resources for the correlator.
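The {{.EventField}} substitution used in script arguments can be sketched as follows. This is a hypothetical rendering function for illustration only, not KUMA's actual implementation; in particular, replacing unknown fields with an empty string is an assumption.

```python
import re

def render_arguments(template: str, event: dict) -> str:
    """Replace {{.EventField}} placeholders with event field values.

    Fields missing from the event are replaced with an empty string
    (an assumption made for this sketch).
    """
    return re.sub(
        r"\{\{\.(\w+)\}\}",
        lambda m: str(event.get(m.group(1), "")),
        template,
    )

event = {"SourceUserName": "jdoe"}
# The example from the script response settings above:
print(render_arguments('-n "\\"usr\\": {{.SourceUserName}}"', event))
# -n "\"usr\": jdoe"
```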

Proceed to the next step of the Installation Wizard.

Page top

[Topic 221171]

Step 6. Routing

This is an optional step of the Installation Wizard. On the Routing tab of the Installation Wizard, you can select or create destination resources with parameters indicating the forwarding destination of events created by the correlator. Events from a correlator are usually redirected to storage so that they can be saved and later viewed if necessary. Events can be sent to other locations as needed. There can be more than one destination point.

To add an existing destination to a set of resources for a correlator:

  1. In the Add destination drop-down list, select the type of destination resource you want to add:
    • Select Storage if you want to configure forwarding of processed events to the storage.
    • Select Correlator if you want to configure forwarding of processed events to a correlator.
    • Select Other if you want to send events to other locations.

      This type of resource includes correlator and storage services that were created in previous versions of the program.

    The Add destination window opens where you can specify parameters for event forwarding.

  2. In the Destination drop-down list, select the necessary destination.

    The window name changes to Edit destination, and it displays the settings of the selected resource. The resource can be opened for editing in a new browser tab using the edit-grey button.

  3. Click Save.

The selected destination is displayed on the Installation Wizard tab. A destination resource can be removed from the resource set by selecting it and clicking Delete in the opened window.

To add a new destination to a set of resources for a correlator:

  1. In the Add destination drop-down list, select the type of destination resource you want to add:
    • Select Storage if you want to configure forwarding of processed events to the storage.
    • Select Correlator if you want to configure forwarding of processed events to a correlator.
    • Select Other if you want to send events to other locations.

      This type of resource includes correlator and storage services that were created in previous versions of the program.

    The Add destination window opens where you can specify parameters for event forwarding.

  2. Specify the settings on the Basic settings tab:
    • In the Destination drop-down list, select Create new.
    • In the Name field, enter a unique name for the destination resource. The name must contain from 1 to 128 Unicode characters.
    • Use the Disabled toggle button to specify whether events will be sent to this destination. By default, sending events is enabled.
    • Select the Type for the destination resource:
      • Select storage if you want to configure forwarding of processed events to the storage.
      • Select correlator if you want to configure forwarding of processed events to a correlator.
      • Select nats, tcp, http, kafka, or file if you want to configure sending events to other locations.
    • Specify the URL to which events should be sent in the hostname:<API port> format.

      You can specify multiple destination URLs using the URL button for all types except nats and file, if your KUMA license includes the High Level Availability module.

      If you have selected storage or correlator as the destination type, the URL field can be populated automatically using the Copy service URL drop-down list that displays active services of the selected type.

    • For the nats and kafka types, use the Topic field to specify which topic the data should be written to. The topic name must contain from 1 to 255 Unicode characters.
  3. If required, define the settings on the Advanced settings tab. The available settings vary based on the selected destination resource type:
    • Compression is a drop-down list where you can enable Snappy compression. By default, compression is disabled.
    • Proxy is a drop-down list for proxy server resource selection.
    • Buffer size field is used to set buffer size (in bytes) for the destination resource. The default value is 1 MB, and the maximum value is 64 MB.
    • Timeout field is used to set the timeout (in seconds) for another service or component response. The default value is 30.
    • Disk buffer size limit field is used to specify the size of the disk buffer in bytes. The default size is 10 GB.
    • Storage ID is a NATS storage identifier.
    • TLS mode is a drop-down list where you can specify the conditions for using TLS encryption:
      • Disabled (default)—do not use TLS encryption.
      • Enabled—encryption is enabled, but without verification.
      • With verification—use encryption with verification that the certificate was signed with the KUMA root certificate. The root certificate and key of KUMA are created automatically during program installation and are stored on the KUMA Core server in the folder /opt/kaspersky/kuma/core/certificates/.

      When using TLS, it is impossible to specify an IP address as a URL.

    • URL selection policy is a drop-down list in which you can select a method for determining which URL to send events to if several URLs have been specified:
      • Any. Events are sent to one of the available URLs as long as this URL receives events. If the connection is broken (for example, the receiving node is disconnected), a different URL is selected as the events destination.
      • Prefer first. Events are sent to the first URL in the list of added addresses. If it becomes unavailable, events are sent to the next available node in sequence. When the first URL becomes available again, events start to be sent to it again.
      • Round robin. Packets with events will be evenly distributed among available URLs from the list. Because packets are sent either on a destination buffer overflow or on the flush timer, this URL selection policy does not guarantee an equal distribution of events to destinations.
    • Delimiter is used to specify the character delimiting the events. By default, \n is used.
    • Path—the file path if the file destination type is selected.
    • Buffer flush interval—this field is used to set the time interval (in seconds) at which the data is sent to the destination. The default value is 100.
    • Workers—this field is used to set the number of services processing the queue. By default, this value is equal to the number of vCPUs of the KUMA Core server.
    • You can set health checks using the Health check path and Health check timeout fields. You can also disable health checks by selecting the Health Check Disabled check box.
    • Debug—a drop-down list where you can specify whether resource logging should be enabled. By default it is Disabled.
    • The Disk buffer disabled drop-down list is used to enable or disable the use of a disk buffer. By default, the disk buffer is disabled.
    • In the Filter section, you can specify the conditions to define events that will be processed by this resource. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

      Creating a filter in resources

      1. In the Filter drop-down list, select Create new.
      2. If you want to keep the filter as a separate resource, select the Save filter check box.

        In this case, you will be able to use the created filter in various services.

        This check box is cleared by default.

      3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
      4. In the Conditions settings block, specify the conditions that the events must meet:
        1. Click the Add condition button.
        2. In the Left operand and Right operand drop-down lists, specify the search parameters.

          Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

        3. In the operator drop-down list, select the relevant operator.

          Filter operators

          • =—the left operand equals the right operand.
          • <—the left operand is less than the right operand.
          • <=—the left operand is less than or equal to the right operand.
          • >—the left operand is greater than the right operand.
          • >=—the left operand is greater than or equal to the right operand.
          • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
          • contains—the left operand contains values of the right operand.
          • startsWith—the left operand starts with one of the values of the right operand.
          • endsWith—the left operand ends with one of the values of the right operand.
          • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
          • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
          • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
          • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
          • inDictionary—checks whether the specified dictionary contains an entry defined by the key composed of the concatenated values of the selected event fields.
          • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
          • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
          • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
        4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

          The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

          This check box is cleared by default.

        5. If you want to add a negative condition, select If not from the If drop-down list.
        6. You can add multiple conditions or a group of conditions.
      5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
      6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

        You can view the nested filter settings by clicking the edit button.

  4. Click Save.

The created destination resource is displayed on the Installation Wizard tab. A destination resource can be removed from the resource set by selecting it and clicking Delete in the opened window.

Proceed to the next step of the Installation Wizard.
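The match operator in the filter conditions above applies RE2 regular expressions. As an illustration only, a pattern can be sanity-checked against a sample event field value from the command line; grep -E uses POSIX ERE, which behaves the same as RE2 for simple patterns such as this one (the field value below is an arbitrary example):

```shell
# Illustration: check an RE2-style pattern against a sample field value.
# grep -E (POSIX ERE) matches the same way as RE2 for simple patterns.
echo "DeviceVendor=Kaspersky" | grep -Eq '^DeviceVendor=.*' && echo "pattern matches"
```

This only approximates how the correlator evaluates the operator; complex RE2 constructs should be verified against the RE2 syntax reference.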

Page top

[Topic 221172]

Step 7. Setup validation

This is the required, final step of the Installation Wizard. At this step, KUMA creates a set of resources for the service, and services are created automatically based on this set:

  • The set of resources for the correlator is displayed under Resources → Correlators. It can be used to create new correlator services. When this set of resources changes, all services that operate based on it will start using the new parameters after the services are restarted. To do so, you can use the Save and restart services and Save and update service configurations buttons.

    A set of resources can be modified, copied, moved from one folder to another, deleted, imported, and exported, like other resources.

  • Services are displayed under Resources → Active services. The services created using the Installation Wizard perform functions inside the KUMA program. To communicate with external parts of the network infrastructure, you need to install similar external services on the servers and assets intended for them. For example, an external correlator service should be installed on a server intended to process events, external storage services should be installed on servers with a deployed ClickHouse service, and external agent services should be installed on Windows assets that must both receive and forward Windows events.

To finish the Installation Wizard:

  1. Click Create and save service.

    The Setup validation tab of the Installation Wizard displays a table of services created based on the set of resources selected in the Installation Wizard. The lower part of the window shows examples of commands that you must use to install external equivalents of these services on their intended servers and assets.

    For example:

    /opt/kaspersky/kuma/kuma correlator --core https://kuma-example:<port used for communication with the KUMA Core> --id <service ID> --api.port <port used for communication with the service> --install

    The "kuma" file can be found inside the installer in the /kuma-ansible-installer/roles/kuma/files/ directory.

    The port for communication with the KUMA Core, the service ID, and the port for communication with the service are added to the command automatically. You should also ensure the network connectivity of the KUMA system and open the ports used by its components if necessary.

  2. Close the Wizard by clicking Save.

The correlator service is created in KUMA. Now the equivalent service must be installed on the server intended for processing events.

Page top

[Topic 221173]

Installing a correlator in a KUMA network infrastructure

A correlator consists of two parts: one part is created inside the KUMA web interface, and the other part is installed on a network infrastructure server intended for processing events.

To install a correlator:

  1. Log in to the server where you want to install the service.
  2. Create the /opt/kaspersky/kuma/ folder.
  3. Copy the "kuma" file to the /opt/kaspersky/kuma/ folder. The file is located in the installer in the /kuma-ansible-installer/roles/kuma/files/ folder.

    Make sure the kuma file has sufficient rights to run.

  4. Execute the following command:

    sudo /opt/kaspersky/kuma/kuma correlator --core https://<KUMA Core server FQDN>:<port used by KUMA Core server for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --api.port <port used for communication with the installed component> --install

    Example: sudo /opt/kaspersky/kuma/kuma correlator --core https://kuma.example.com:7210 --id XXXX --api.port YYYY --install

    You can copy the correlator installation command at the last step of the Installation Wizard. It automatically specifies the address and port of the KUMA Core server, the identifier of the correlator to be installed, and the port that the correlator uses for communication. Before installation, ensure the network connectivity of KUMA components.

    When deploying several KUMA services on the same host, you must specify a unique port for each component using the --api.port <port> parameter. By default, --api.port 7221 is used.

The correlator is installed. You can use it to analyze events for threats.
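When picking a unique --api.port value for a host that runs several KUMA services, it helps to confirm the port is not already bound before installing. A minimal sketch, using the default port 7221 mentioned above (falls back to reporting the port as free if the ss utility is unavailable):

```shell
# Check whether the chosen --api.port is already bound on this host
PORT=7221
if ss -tln 2>/dev/null | grep -Eq ":${PORT}(\s|$)"; then
  echo "port ${PORT} is in use - pick another --api.port"
else
  echo "port ${PORT} is free"
fi
```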

Page top

[Topic 221404]

Validating correlator installation

To verify that the correlator is ready to receive events:

  1. In the KUMA web interface, open Resources → Active services.
  2. Make sure that the correlator you installed has the green status.

If the stream fed into the correlator contains events that meet the conditions of the correlation rule filter, the Events section will show correlation events with the DeviceVendor=Kaspersky and DeviceProduct=KUMA parameters. The name of the triggered correlation rule is displayed as the name of these correlation events.

If correlation events were not found

You can create a simpler version of your correlation rule to find possible errors. Use a simple correlation rule and a single Output action. It is recommended to create a filter to find events that are regularly received by KUMA.

When updating, adding, or removing a correlation rule, you must reload the correlator to update its configuration.

When you finish testing your correlation rules, you must remove all testing and temporary correlation rules from KUMA and reload the correlator.

Page top

[Topic 217720]

Creating an agent

A KUMA agent consists of two parts: one part is created inside the KUMA web interface, and the second part is installed on a server or on an asset in the network infrastructure.

An agent is created in several steps:

  1. Creating a set of resources for the agent in the KUMA web interface
  2. Creating an agent service in the KUMA web interface
  3. Installing the server portion of the agent to the asset that will forward messages

A KUMA agent for Windows assets can be created automatically when you create a collector with the wmi or wec transport type. Although the set of resources and service of these agents are created in the Collector Installation Wizard, they must still be installed to the asset that will be used to forward messages.

In this section

Creating a set of resources for an agent

Creating an agent service in the KUMA web interface

Installing an agent in a KUMA network infrastructure

Automatically created agents

Update agents

Page top

[Topic 217718]

Creating a set of resources for an agent

In the KUMA web interface, an agent service is created based on the set of resources for an agent that unites connectors and destinations.

To create a set of resources for an agent in the KUMA web interface:

  1. In the KUMA web interface, under Resources → Agents, click Add agent.

    This opens a window for creating an agent with the Base settings tab active.

  2. Fill in the settings on the Base settings tab:
    • In the Agent name field, enter a unique name for the created service. The name must contain from 1 to 128 Unicode characters.
    • In the Tenant drop-down list, select the tenant that will own the agent.
    • If you want, select the Debug check box to log service operations.
    • You can optionally add up to 256 Unicode characters describing the service in the Description field.
  3. Create a connection for the agent by clicking the Add connection button, and switch to the added Connection <number> tab.

    You can delete tabs by using the cross button.

  4. In the Connector settings block, add a connector resource:
    • If you want to select an existing resource, select it from the drop-down list.
    • If you want to create a new resource, select it in the Create new drop-down list and define its settings:
      • Specify the connector name in the Name field. The name must contain from 1 to 128 Unicode characters.
      • In the Type drop-down list, select the connector type and define its settings on the Basic settings and Advanced settings tabs. The available settings depend on the selected type of connector:

        The agent type is determined by the connector that is used in the agent. The only exception is for agents with a destination of the diode type. These agents are considered to be diode agents.

        When the tcp or udp connector type is used, at the normalization stage the IP addresses of the assets from which the events were received are written to the DeviceAddress event field if it is empty.

    • You can optionally add up to 256 Unicode characters describing the resource in the Description field.

    The connector resource is added to the selected connection of the agent's set of resources. The created resource is only available in this resource set and is not displayed in the Resources → Connectors section of the web interface.

  5. In the Destinations settings block, add resources of destinations.
    • If you want to select an existing resource, select it from the drop-down list.
    • If you want to create a new resource, select it in the Create new drop-down list and define its settings:
      • Specify the destination name in the Name field. The name must contain from 1 to 128 Unicode characters.
      • In the Type drop-down list, select the destination type and define its settings on the Basic settings and Advanced settings tabs. The available settings depend on the selected type of destination:
        • nats—used for NATS communications.
        • tcp—used for communications over TCP.
        • http—used for HTTP communications.
        • diode—used to transmit events using a data diode.
        • kafka—used for Kafka communications.
        • file—used for writing to a file.
    • You can optionally add up to 256 Unicode characters describing the resource in the Description field.

      The advanced settings for an agent destination (such as TLS mode and compression) must match the advanced destination settings for the collector that you want to link to the agent.

    There can be more than one destination point. You can add them by clicking the Add destination button and can remove them by clicking the cross button.

  6. Repeat steps 3–5 for each agent connection that you want to create.
  7. Click Save.

The set of resources for the agent is created and displayed under Resources → Agents. Now you can create an agent service in KUMA.

Page top

[Topic 221392]

Creating an agent service in the KUMA web interface

When a set of resources is created for an agent, you can proceed to create an agent service in KUMA.

To create an agent service in the KUMA web interface:

  1. In the KUMA web interface, under Resources → Active services, click Add service.
  2. In the opened Choose a service window, select the set of resources that was just created for the agent and click Create service.

The agent service is created in the KUMA web interface and is displayed under Resources → Active services. Now agent services must be installed to each asset from which you want to forward data to the collector. A service ID is used during installation.

Page top

[Topic 217719]

Installing an agent in a KUMA network infrastructure

When an agent service is created in KUMA, you can proceed to installation of the agent to the network infrastructure assets that will be used to forward data to a collector.

Prior to installation, verify the network connectivity of the system and open the ports used by its components.
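A quick reachability probe from the asset toward the KUMA Core can catch routing and firewall problems before installation. A hedged sketch; the FQDN kuma.example.com and port 7210 are placeholders taken from the examples in this Help, so substitute your own Core address:

```shell
# Probe the KUMA Core port from the asset (FQDN and port are placeholders)
CORE=kuma.example.com
PORT=7210
if timeout 3 bash -c "exec 3<>/dev/tcp/${CORE}/${PORT}" 2>/dev/null; then
  echo "KUMA Core is reachable"
else
  echo "cannot reach ${CORE}:${PORT} - check DNS, routing, and firewall rules"
fi
```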

In this section

Installing a KUMA agent on Linux assets

Installing a KUMA agent on Windows assets

Page top

[Topic 221396]

Installing a KUMA agent on Linux assets

To install a KUMA agent to a Linux asset:

  1. Log in to the server where you want to install the service.
  2. Create the following directories:
    • /opt/kaspersky/kuma/
    • /opt/kaspersky/agent/
  3. Copy the "kuma" file to the /opt/kaspersky/kuma/ folder. The file is located in the installer in the /kuma-ansible-installer/roles/kuma/files/ folder.

    Make sure the kuma file has sufficient rights to run.

  4. Execute the following command:

    sudo /opt/kaspersky/kuma/kuma agent --core https://<KUMA Core server FQDN>:<port used by KUMA Core server for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --wd <path to the directory that will contain the files of the installed agent. If this flag is not specified, the files will be stored in the directory where the kuma file is located>

    Example: sudo /opt/kaspersky/kuma/kuma agent --core https://kuma.example.com:7210 --id XXXX --wd /opt/kaspersky/kuma/agent/XXXX

The KUMA agent is installed on the Linux asset. The agent forwards data to KUMA, and you can set up a collector to receive this data.
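The run permission mentioned in step 3 can be granted with chmod. The sketch below operates on a stand-in path so it is runnable anywhere; in production, apply the same chmod to /opt/kaspersky/kuma/kuma:

```shell
# Stand-in demonstration: in production, apply chmod to /opt/kaspersky/kuma/kuma
install -D /dev/null /tmp/kuma-demo/kuma   # placeholder for the copied binary
chmod +x /tmp/kuma-demo/kuma               # grant execute permission
test -x /tmp/kuma-demo/kuma && echo "binary is executable"
```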

Page top

[Topic 221395]

Installing a KUMA agent on Windows assets

Prior to installing a KUMA agent to a Windows asset, the server administrator must create a user account with the EventLogReaders and Log on as a service permissions on the Windows asset. This user account must be used to start the agent.

To install a KUMA agent to a Windows asset:

  1. Copy the kuma.exe file to a folder on the Windows asset. The C:\Users\<User name>\Desktop\KUMA folder is recommended for installation.

    The kuma.exe file is located inside the installer in the /kuma-ansible-installer/roles/kuma/files/ folder.

  2. Start the Command Prompt on the Windows asset with Administrator privileges and locate the folder containing the kuma.exe file.
  3. Execute the following command:

    kuma agent --core https://<fully qualified domain name of the KUMA Core server>:<port used by the KUMA Core server for internal communications (port 7210 by default)> --id <ID of the agent service that was created in KUMA> --user <name of the user account used to run the agent, including the domain> --install

    Example: kuma agent --core https://kuma.example.com:7210 --id XXXXX --user domain\username --install

    You can get help information by executing the kuma help agent command.

  4. Enter the password of the user account used to run the agent.

The C:\Program Files\Kaspersky Lab\KUMA\agent\<Agent ID> folder is created in which the KUMA agent service is installed. The agent forwards Windows events to KUMA, and you can set up a collector to receive them.

When the agent service is installed, it starts automatically. The service is also configured to restart in case of any failures. The agent can be restarted from the KUMA web interface, but only when the service is active. Otherwise, the service needs to be manually restarted on the Windows asset.

Removing a KUMA agent from Windows assets

To remove a KUMA agent from a Windows asset:

  1. Start the Command Prompt on the Windows machine with Administrator privileges and locate the folder containing the kuma.exe file.
  2. Run any of the commands below:

The specified KUMA agent is removed from the Windows asset. Windows events are no longer sent to KUMA.

When configuring services, you can test the configuration for errors before installation by running the agent with the following command: kuma agent --core https://<fully qualified domain name of the KUMA Core server>:<port used by the KUMA Core server for internal communications (port 7210 by default)> --id <ID of the agent service that was created in KUMA> --user <name of the user account used to run the agent, including the domain>.

Page top

[Topic 221407]

Automatically created agents

When creating a collector with wec or wmi connectors, agents are automatically created for receiving Windows events.

Automatically created agents have the following special conditions:

  • Automatically created agents can have only one connection.
  • Automatically created agents are displayed under Resources → Agents, and auto created is indicated at the end of their name. Agents can be reviewed or deleted.
  • The settings of automatically created agents are defined automatically based on the collector settings from the Connect event sources and Transport sections. You can change these settings only by editing the collector that the agent was created for.
  • The description of an automatically created agent is taken from the collector description in the Connect event sources section.
  • Debugging of an automatically created agent is enabled and disabled in the Connect event sources section of the collector.
  • When deleting a collector with an automatically created agent, you will be prompted to choose whether to delete the collector together with the agent or to just delete the collector. When deleting only the collector, the agent will become available for editing.
  • When deleting an automatically created agent, the collector type changes to http, and the connection address is deleted from the URL field of the collector.

In the KUMA web interface, automatically created agents appear at the same time as the collector is created. However, they must still be installed on the asset that will be used to forward messages.

Page top

[Topic 222245]

Update agents

When updating KUMA versions, the WMI and WEC agents installed on remote machines must also be updated.

To update the agent:

  1. Install the new agent on a remote machine.

    The agent has been updated, but no data is coming from it due to an invalid certificate.

  2. In the KUMA web interface, under Resources → Active services, reset the certificate of the agent being upgraded.
  3. On the remote machine with the installed agent, start the "KUMA Windows Agent <service ID>" service.

    For more information on Windows services, see the documentation for your version of Windows.

The agent and its certificates have been updated.

Page top

[Topic 218011]

Creating a storage

A storage consists of two parts: one part is created inside the KUMA web interface, and the other part is installed on network infrastructure servers intended for storing events. The server part of a KUMA storage consists of ClickHouse nodes collected into a cluster.

For each ClickHouse cluster, a separate storage must be installed.

Prior to storage creation, carefully plan the structure of the cluster and deploy the necessary network infrastructure. When choosing a ClickHouse cluster configuration, consider the specific event storage requirements of your organization.

It is recommended to use ext4 as the file system.

A storage is created in several steps:

  1. Creating a set of resources for a storage in the KUMA web interface
  2. Creating a storage service in the KUMA web interface
  3. Installing storage nodes in the KUMA network infrastructure

When creating storage cluster nodes, verify the network connectivity of the system and open the ports used by the components.
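If the cluster hosts run firewalld, the required ports can be opened as sketched below. The port numbers shown (8123 for HTTP and 9000 for the native protocol, common ClickHouse defaults) are assumptions; use the ports from your own deployment plan:

```shell
# Open assumed ClickHouse ports if firewalld is present; otherwise just report.
# 8123 (HTTP) and 9000 (native) are common ClickHouse defaults - verify yours.
if command -v firewall-cmd >/dev/null 2>&1; then
  sudo firewall-cmd --permanent --add-port=8123/tcp --add-port=9000/tcp \
    && sudo firewall-cmd --reload \
    || echo "firewall-cmd failed - run with root privileges"
else
  echo "firewalld not found - open ports 8123/tcp and 9000/tcp with your firewall tool"
fi
```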

In this section

Creating a set of resources for a storage

Creating a storage service in the KUMA web interface

Installing a storage in the KUMA network infrastructure

Page top

[Topic 221257]

Creating a set of resources for a storage

In the KUMA web interface, a storage service is created based on the set of resources for the storage.

To create a set of resources for a storage in the KUMA web interface:

  1. In the KUMA web interface, under Resources → Storages, click Add storage.

    The storage creation window opens.

  2. In the Storage name field, enter a unique name for the service you are creating. The name must contain from 1 to 128 Unicode characters.
  3. In the Tenant drop-down list, select the tenant that will own the storage.
  4. You can optionally add up to 256 Unicode characters describing the service in the Description field.
  5. In the Default retention period, days field, enter the necessary time period for storing events in the cluster.
  6. In the Audit retention period, days field, enter the necessary time period for storing audit events. The minimum and default value is 365.
  7. If necessary, use the Add space button to add space to the storage. There can be multiple spaces. You can delete spaces by clicking the Delete space button. After creating the space, you will be able to view and delete spaces in the storage resource settings.

    Available settings:

    • In the Name field, specify a name for the space. This name can contain from 1 to 128 Unicode characters.
    • In the Retention period, days field, specify the number of days to store events in the cluster.
    • In the Filter section, you can specify conditions to identify events that will be put into this space. You can select an existing filter resource from the drop-down list, or select Create new to create a new filter.

      Creating a filter in resources

      1. In the Filter drop-down list, select Create new.
      2. If you want to keep the filter as a separate resource, select the Save filter check box.

        In this case, you will be able to use the created filter in various services.

        This check box is cleared by default.

      3. If you selected the Save filter check box, enter a name for the created filter resource in the Name field. The name must contain from 1 to 128 Unicode characters.
      4. In the Conditions settings block, specify the conditions that the events must meet:
        1. Click the Add condition button.
        2. In the Left operand and Right operand drop-down lists, specify the search parameters.

          Depending on the data source selected in the Right operand field, you may see fields of additional parameters that you need to use to define the value that will be passed to the filter. For example, when choosing active list you will need to specify the name of the active list, the entry key, and the entry key field.

        3. In the operator drop-down list, select the relevant operator.

          Filter operators

          • =—the left operand equals the right operand.
          • <—the left operand is less than the right operand.
          • <=—the left operand is less than or equal to the right operand.
          • >—the left operand is greater than the right operand.
          • >=—the left operand is greater than or equal to the right operand.
          • inSubnet—the left operand (IP address) is in the subnet of the right operand (subnet).
          • contains—the left operand contains values of the right operand.
          • startsWith—the left operand starts with one of the values of the right operand.
          • endsWith—the left operand ends with one of the values of the right operand.
          • match—the left operand matches the regular expression of the right operand. The RE2 regular expressions are used.
          • hasBit—checks whether the left operand (string or number) contains bits whose positions are listed in the right operand (in a constant or in a list).
          • hasVulnerability—checks whether the left operand contains an asset with the vulnerability and vulnerability severity specified in the right operand.
          • inActiveList—this operator has only one operand. Its values are selected in the Key fields field and are compared with the entries in the active list selected from the Active List drop-down list.
          • inDictionary—checks whether the specified dictionary contains an entry defined by a key composed of the concatenated values of the selected event fields.
          • inCategory—the asset in the left operand is assigned at least one of the asset categories of the right operand.
          • inActiveDirectoryGroup—the Active Directory account in the left operand belongs to one of the Active Directory groups in the right operand.
          • TIDetect—this operator is used to find events using CyberTrace Threat Intelligence (TI) data. This operator can be used only on events that have completed enrichment with data from CyberTrace Threat Intelligence. In other words, it can only be used in collectors at the destination selection stage and in correlators.
        4. If necessary, select the do not match case check box. When this check box is selected, the operator ignores the case of the values.

          The selection of this check box does not apply to the inSubnet, inActiveList, inCategory, or inActiveDirectoryGroup operators.

          This check box is cleared by default.

        5. If you want to add a negative condition, select If not from the If drop-down list.
        6. You can add multiple conditions or a group of conditions.
      5. If you have added multiple conditions or groups of conditions, choose a search condition (and, or, not) by clicking the AND button.
      6. If you want to add existing filters that are selected from the Select filter drop-down list, click the Add filter button.

        You can view the nested filter settings by clicking the edit button.

The set of resources for the storage is created and is displayed under Resources → Storages. Now you can create a storage service.
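
The hasBit operator described above tests whether a specific bit position is set in a numeric value. As a language-neutral illustration of that semantics (a sketch, not KUMA code; KUMA evaluates the operator internally), the following shell snippet checks whether bit position 2 is set in the value 5:

```shell
value=5      # binary 101
position=2   # bit positions are counted from the least significant bit, starting at 0

# Shift the value right by the position and mask the lowest bit
if [ $(( (value >> position) & 1 )) -eq 1 ]; then
  echo "bit $position is set in $value"
else
  echo "bit $position is not set in $value"
fi
```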

Page top

[Topic 221258]

Creating a storage service in the KUMA web interface

When a set of resources is created for a storage, you can proceed to create a storage service in KUMA.

To create a storage service in the KUMA web interface:

  1. In the KUMA web interface, under Resources → Active services, click Add service.
  2. In the opened Choose a service window, select the set of resources that you just created for the storage and click Create service.

The storage service is created in the KUMA web interface and is displayed under Resources → Active services. Now the storage service must be installed on each node of the ClickHouse cluster by using the service ID.

Page top

[Topic 217905]

Installing a storage in the KUMA network infrastructure

To create a storage:

  1. Log in to the server where you want to install the service.
  2. Create the /opt/kaspersky/kuma/ folder.
  3. Copy the "kuma" file to the /opt/kaspersky/kuma/ folder. The file is located in the installer in the /kuma-ansible-installer/roles/kuma/files/ folder.

    Make sure the kuma file has sufficient rights to run.

  4. Execute the following command:

    sudo /opt/kaspersky/kuma/kuma storage --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install

    Example: sudo /opt/kaspersky/kuma/kuma storage --core https://kuma.example.com:7210 --id XXXXX --install

    When deploying several KUMA services on the same host, you must specify a unique port for each component during installation by using the --api.port <port> parameter. The default value is --api.port 7221.

  5. Repeat steps 1–4 for each storage node.

The storage is installed.
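
The steps above can be sketched as a shell session. The sketch below is illustrative only: it uses a scratch directory and a stub file in place of the real binary so it can run anywhere, and it prints the installation commands instead of executing them. On a real node, the target folder is /opt/kaspersky/kuma/, the kuma file comes from the installer, and the commands are run with sudo.

```shell
# Scratch directory standing in for /opt/kaspersky/kuma/ (illustration only)
TARGET_DIR=$(mktemp -d)/kaspersky/kuma
mkdir -p "$TARGET_DIR"

# Stand-in for the kuma binary shipped in /kuma-ansible-installer/roles/kuma/files/
printf '#!/bin/sh\necho "kuma stub"\n' > "$TARGET_DIR/kuma"

# Make sure the kuma file has sufficient rights to run
chmod +x "$TARGET_DIR/kuma"

# The installation command (printed rather than executed here);
# XXXXX is the service ID copied from the KUMA web interface.
echo "$TARGET_DIR/kuma storage --core https://kuma.example.com:7210 --id XXXXX --install"

# When several KUMA services share one host, give each component a unique port:
echo "$TARGET_DIR/kuma storage --core https://kuma.example.com:7210 --id YYYYY --api.port 7222 --install"
```

When performing a real installation, replace the scratch path with /opt/kaspersky/kuma/, drop the echo, and prefix the commands with sudo.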

Page top

[Topic 217736]

Analytics

KUMA provides extensive analytics on the data available to the program from the following sources:

  • Events in storage
  • Alerts
  • Assets
  • Accounts imported from Active Directory
  • Data from collectors on the number of processed events
  • Metrics

You can configure and receive analytics in the Dashboard, Reports, and Source status sections of the KUMA web interface. Analytics are built by using only the data from tenants that the user can access.

Displayed date format:

  • English localization: YYYY-MM-DD.
  • Russian localization: DD.MM.YYYY.
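
For reference, the two display formats correspond to the following date format patterns (shown here with GNU date; the -d option is a GNU coreutils extension):

```shell
# English localization: YYYY-MM-DD
date -d "2024-03-05" +%Y-%m-%d

# Russian localization: DD.MM.YYYY
date -d "2024-03-05" +%d.%m.%Y
```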

In this Help topic

Dashboard

Reports

Source status

Widgets

Page top

[Topic 217827]

Dashboard

In KUMA, you can configure the Dashboard to display the most recent information (or analytics) about KUMA processes. Analytics are generated using widgets, which are specialized tools that can display specific types of information. If a widget displays data on events, alerts, incidents, or active lists, you can click its header to open the corresponding section of the KUMA web interface with an active filter and/or search query that is used to display data from the widget.

The collections of widgets are called layouts. Administrators and analysts can create, edit, and delete layouts. You can also assign any layout as the default layout so that it is displayed when you open the Dashboard section.

The information in the Dashboard section is updated regularly according to the layout configuration, but you can force an update by clicking the update button at the top of the window. The time of the last update is displayed near the window title.

The data displayed on the dashboard depends on the tenants that you can access.

For convenient presentation of analytical data, you can enable TV mode. This lets you hide the left pane containing sections of the KUMA interface and switch to full-screen mode in Full HD resolution. In TV mode, you can also configure a slide show display for the selected layouts.

In this section

Creating a dashboard layout

Selecting a dashboard layout

Selecting a dashboard layout as the default

Editing a dashboard layout

Deleting a dashboard layout

Preconfigured widgets

Enabling and disabling TV mode

Page top

[Topic 217806]

Creating a dashboard layout

To create a layout:

  1. Open the KUMA web interface and select the Dashboard section.
  2. Open the drop-down list in the top right corner of the Dashboard window and select Create layout.

    The New layout window opens.

  3. In the Tenants drop-down list, select the tenants that will own the layout being created.
  4. In the Time period drop-down list, select the time period from which you require analytics:
    • 1 hour
    • 1 day (this value is selected by default)
    • 7 days
    • 30 days
    • In period—receive analytics for a custom time period. The time period is set using the calendar that is displayed when this option is selected.

      The upper boundary of the period is not included in the time slice defined by it. In other words, to receive analytics for a 24-hour period, you should configure the period as Day 1, 00:00:00 – Day 2, 00:00:00 instead of Day 1, 00:00:00 – Day 1, 23:59:59.

  5. In the Refresh every drop-down list, select how often data should be updated in layout widgets:
    • 1 minute
    • 5 minutes
    • 15 minutes
    • 1 hour (this value is selected by default)
    • 24 hours
  6. In the Add widget drop-down list, select the required widget and configure its settings.

    You can add multiple widgets to the layout.

    You can also drag widgets around the window and resize them by using the resize handle that appears when you hover the mouse over a widget.

    You can edit or delete widgets added to the layout by clicking the gear icon and selecting Edit to change their configuration or Delete to delete them from the layout.

    • Adding widgets

      To add a widget:

      1. Click the Add widget drop-down list and select the required widget.

        The window with widget parameters opens. You can preview the widget by clicking the Preview button.

      2. Configure the widget parameters and click the Add button.
    • Editing widgets

      To edit a widget:

      1. Hover the mouse over the required widget and click the gear icon that appears.
      2. In the drop-down list, select Edit.

        The window with widget parameters opens. You can preview the widget by clicking the Preview button.

      3. Update the widget parameters and click the Save button.
  7. In the Layout name field, enter a unique name for this layout. The name must contain from 1 to 128 Unicode characters.
  8. Click Save.

The new layout is created and is displayed in the Dashboard section of the KUMA web interface.

Page top

[Topic 217992]

Selecting a dashboard layout

To select a layout:

  1. Open the KUMA web interface and select the Dashboard section.
  2. Open the drop-down list in the top right corner of the Dashboard window and select the required layout.

The selected layout is displayed in the Dashboard section of the KUMA web interface.

Page top

[Topic 217993]

Selecting a dashboard layout as the default

To set a dashboard layout as the default:

  1. Open the KUMA web interface and select the Dashboard section.
  2. Open the drop-down list in the top right corner of the Dashboard window and hover the mouse over the required layout.
  3. Click the star icon.

The selected layout becomes the default layout.

Page top

[Topic 217855]

Editing a dashboard layout

To edit a layout:

  1. Open the KUMA web interface and select the Dashboard section.
  2. Open the drop-down list in the top right corner of the Dashboard window and hover the mouse over the required layout.
  3. Click the edit icon.

    The Customizing layout window opens.

  4. Make the necessary changes. The settings that are available for editing are the same as the settings available when creating a layout.
  5. Click Save.

The layout is updated and is displayed in the Dashboard section of the KUMA web interface.

If the layout was deleted or assigned to a different tenant while you were making changes to it, an error will be displayed when you click Save. In this case, the layout will not be saved. Reload the page in your web browser to view the list of available layouts in the drop-down list in the upper-right corner.

Page top

[Topic 217835]

Deleting a dashboard layout

To delete a layout:

  1. Open the KUMA web interface and select the Dashboard section.
  2. Open the drop-down list in the top right corner of the Dashboard window and hover the mouse over the required layout.
  3. Click the delete icon and confirm this action.

The layout is deleted.

Page top

[Topic 222445]

Preconfigured widgets

KUMA comes with a set of preconfigured layouts with widgets:

  • Alerts Overview layout (Alert overview):
    • Active Alerts
    • Unassigned Alerts
    • Latest Alerts
    • Alerts distribution
    • Alerts by Priority
    • Alerts by Assignee
    • Alerts by Status
    • Affected users in alerts
    • Affected Assets
    • Affected Assets Categories
    • Top event source by alerts number
    • Alerts count by rule
  • Incidents Overview layout (Incidents overview):
    • Active incidents
    • Unassigned Incidents
    • Latest Incidents
    • Incidents distribution
    • Incidents by Priority
    • Incidents by assignee
    • Incidents by Status
    • Affected Assets in Incidents
    • Affected Users in Incidents
    • Affected Assets Categories in Incidents
    • Incidents by Tenant
  • Network Overview layout (Network activity overview):
    • Netflow top internal IPs
    • Netflow top external IPs
    • Netflow top hosts for remote control — requests to ports 3389, 22, 135 are monitored.
    • Netflow total bytes by internal ports
    • Top Log Sources by Events count
Page top

[Topic 230361]

Enabling and disabling TV mode

It is recommended to create a separate user account with the minimum required set of rights to display analytics in TV mode.

To enable TV mode:

  1. Open the KUMA web interface and select the Dashboard section.
  2. Click the gear button in the upper-right corner.

    The Settings window opens.

  3. Move the TV mode toggle switch to the Enabled position.
  4. If you want to configure the slideshow display of widgets, do the following:
    1. Move the Slideshow toggle switch to the Enabled position.
    2. In the Timeout field, indicate how many seconds to wait before switching widgets. If the value 00:00 is selected, widgets will not switch.
    3. In the Queue drop-down list, select the widgets to view.
    4. If necessary, change the order in which the widgets are displayed by dragging and dropping them into the required order.
  5. Click the Save button.

TV mode will be enabled. To return to working with the KUMA web interface, disable TV mode.

To disable TV mode:

  1. Open the KUMA web interface and select the Dashboard section.
  2. Click the gear button in the upper-right corner.

    The Settings window opens.

  3. Move the TV mode toggle switch to the Disabled position.
  4. Click the Save button.

TV mode will be disabled. The left part of the screen shows a pane containing sections of the KUMA web interface.

Page top

[Topic 217966]

Reports

You can configure KUMA to regularly generate reports about KUMA processes.

Reports are generated using report templates that are created and stored on the Templates tab of the Reports section.

Generated reports are stored on the Generated reports tab of the Reports section.

In this section

Report template

Generated reports

Page top

[Topic 217965]

Report template

Report templates are used to specify the analytical data to include in the report, and to configure how often reports must be generated. Administrators and analysts can create, edit, and delete report templates. Reports that were generated using report templates are displayed in the Generated reports tab.

Report templates are available on the Templates tab of the Reports section, where the table of existing templates is displayed.

You can configure the set of table columns and their order, as well as change data sorting:

  • You can enable or disable the display of columns in the menu that can be opened by clicking the gear icon.
  • You can change the order of columns by dragging the column headers.
  • If a table column header is green, you can click it to sort the table based on that column's data.

The table has the following columns:

  • Name—the name of the report template.

    You can sort the table by this column by clicking the title and selecting Ascending or Descending.

    You can also search report templates by using the Search field that opens when you click the Name column title.

  • Schedule—the rate at which reports must be generated using the template. If the report schedule was not configured, the disabled value is displayed.
  • Created by—the name of the user who created the report template.
  • Updated—the date when the report template was last updated.

    You can sort the table by this column by clicking the title and selecting Ascending or Descending.

  • Last report—the date and time when the last report was generated based on the report template.
  • Send by email—the check mark is displayed in this column for the report templates that notify users about generated reports via email notifications.
  • Tenant—the name of the tenant that owns the report template.

You can click the name of the report template to open the drop-down list with available commands:

  • Run report—use this option to generate a report immediately. The generated reports are displayed in the Generated reports tab.
  • Edit schedule—use this command to configure the schedule for generating reports and to define users that must receive email notifications about generated reports.
  • Edit report template—use this command to configure widgets and the time period for extracting analytics.
  • Duplicate report template—use this command to create a copy of the existing report template.
  • Delete report template—use this command to delete the report template.

In this section

Creating a report template

Configuring a report schedule

Editing a report template

Copying a report template

Deleting a report template

Page top

[Topic 217811]

Creating a report template

To create a report template:

  1. Open the KUMA web interface and select Reports → Templates.
  2. Click the New template button.

    The New report template window opens.

  3. In the Tenants drop-down list, select one or more tenants that will own the report template being created.
  4. In the Time period drop-down list, select the time period from which you require analytics:
    • This day (this value is selected by default)
    • This week
    • This month
    • In period—receive analytics for a custom time period.

      The upper boundary of the period is not included in the time slice defined by it. In other words, to receive analytics for a 24-hour period, you should configure the period as Day 1, 00:00:00 – Day 2, 00:00:00 instead of Day 1, 00:00:00 – Day 1, 23:59:59.

    • Custom—receive analytics for the last N days/weeks/months/years.
  5. In the Retention field, specify how long you want to store reports that are generated according to this template.
  6. In the Template name field, enter a unique name for the report template. The name must contain from 1 to 128 Unicode characters.
  7. In the Add widget drop-down list, select the required widget and configure its settings.

    You can add multiple widgets to the report template.

    You can also drag widgets around the window and resize them by using the resize handle that appears when you hover the mouse over a widget.

    You can edit or delete widgets added to the layout by hovering the mouse over them, clicking the gear icon that appears, and selecting Edit to change their configuration or Delete to delete them from the layout.

    • Adding widgets

      To add a widget:

      1. Click the Add widget drop-down list and select the required widget.

        The window with widget parameters opens. You can preview the widget by clicking the Preview button.

      2. Configure the widget parameters and click the Add button.
    • Editing widgets

      To edit a widget:

      1. Hover the mouse over the required widget and click the gear icon that appears.
      2. In the drop-down list, select Edit.

        The window with widget parameters opens. You can preview the widget by clicking the Preview button.

      3. Update the widget parameters and click the Save button.
  8. You can change the logo in the report template by clicking the Upload logo button.

    When you click the Upload logo button, the Upload window opens where you can specify the image file for the logo. The image must be a .jpg, .png, or .gif file no larger than 3 MB.

    The added logo is displayed in the report instead of the KUMA logo.

  9. Click Save.

The new report template is created and is displayed on the Reports → Templates tab of the KUMA web interface. You can run this report manually. If you want reports to be generated automatically, you must configure a schedule for that.

Page top

[Topic 217771]

Configuring a report schedule

To configure a report schedule:

  1. Open the KUMA web interface and select Reports → Templates.
  2. In the report templates table, click the name of an existing report template and select Edit schedule in the drop-down list.

    The Report settings window opens.

  3. If you want the report to be generated regularly:
    1. Turn on the Schedule toggle switch.

      In the Recur every group of settings, define how often the report must be generated.

      You can specify the frequency of generating reports by days, weeks, months, or years. Depending on the selected period, you should specify the time, day of the week, day of the month or the date of the report generation.

    2. In the Time field, enter the time when the report must be generated. You can enter the value manually or by using the clock icon.
  4. If necessary, in the Send to drop-down list, select the KUMA users who should receive a link to the generated reports by email.

    You should configure an SMTP connection so that generated reports can be forwarded by email.

  5. Click Save.

The report schedule is configured.

Page top

[Topic 217856]

Editing a report template

To edit a report template:

  1. Open the KUMA web interface and select Reports → Templates.
  2. In the report templates table, click the name of the report template and select Edit report template in the drop-down list.

    The Edit report template window opens.

    You can also open this window on the Reports → Generated reports tab by clicking the name of a generated report and selecting Edit report template in the drop-down list.

  3. Make the necessary changes:
    • Change the list of tenants that own the report template.
    • Update the time period from which you require analytics.
    • Add widgets

      To add a widget:

      1. Click the Add widget drop-down list and select the required widget.

        The window with widget parameters opens. You can preview the widget by clicking the Preview button.

      2. Configure the widget parameters and click the Add button.
    • Change widget positions by dragging them.
    • Resize widgets by using the resize handle that appears when you hover the mouse over a widget.
    • Edit widgets

      To edit a widget:

      1. Hover the mouse over the required widget and click the gear icon that appears.
      2. In the drop-down list, select Edit.

        The window with widget parameters opens. You can preview the widget by clicking the Preview button.

      3. Update the widget parameters and click the Save button.
    • Delete widgets by hovering the mouse over them, clicking the gear icon that appears, and selecting Delete.
    • In the field to the right of the Add widget drop-down list, enter a new name for the report template. The name must contain from 1 to 128 Unicode characters.
    • Change the report logo by uploading it using the Upload logo button. If the template already contains a logo, you must first delete it.
    • Change how long reports generated using this template must be stored.
  4. Click Save.

The report template is updated and is displayed on the Reports → Templates tab of the KUMA web interface.

Page top

[Topic 217778]

Copying a report template

To create a copy of a report template:

  1. Open the KUMA web interface and select Reports → Templates.
  2. In the report templates table, click the name of an existing report template, and select Duplicate report template in the drop-down list.

    The New report template window opens. The name of the template is changed to <Report template> - copy.

  3. Make the necessary changes:
    • Change the list of tenants that own the report template.
    • Update the time period from which you require analytics.
    • Add widgets

      To add a widget:

      1. Click the Add widget drop-down list and select the required widget.

        The window with widget parameters opens. You can preview the widget by clicking the Preview button.

      2. Configure the widget parameters and click the Add button.
    • Change widget positions by dragging them.
    • Resize widgets by using the resize handle that appears when you hover the mouse over a widget.
    • Edit widgets

      To edit a widget:

      1. Hover the mouse over the required widget and click the gear icon that appears.
      2. In the drop-down list, select Edit.

        The window with widget parameters opens. You can preview the widget by clicking the Preview button.

      3. Update the widget parameters and click the Save button.
    • Delete widgets by hovering the mouse over them, clicking the gear icon that appears, and selecting Delete.
    • In the field to the right of the Add widget drop-down list, enter a new name for the report template. The name must contain from 1 to 128 Unicode characters.
    • Change the report logo by uploading it using the Upload logo button. If the template already contains a logo, you must first delete it.
  4. Click Save.

The report template is created and is displayed on the Reports → Templates tab of the KUMA web interface.

Page top

[Topic 217838]

Deleting a report template

To delete a report template:

  1. Open the KUMA web interface and select Reports → Templates.
  2. In the report templates table, click the name of the report template, and select Delete report template in the drop-down list.

    A confirmation window opens.

  3. If you want to delete only the report template, click the Delete button.
  4. If you want to delete a report template and all the reports that were generated using that template, click the Delete with reports button.

The report template is deleted.

Page top

[Topic 217882]

Generated reports

All reports are generated using report templates. Generated reports are available on the Generated reports tab of the Reports section.

You can configure the set of table columns and their order, as well as change data sorting:

  • You can enable or disable the display of columns in the menu that can be opened by clicking the gear icon.
  • You can change the order of columns by dragging the column headers.
  • If a table column header is green, you can click it to sort the table based on that column's data.

The table of generated reports has the following columns:

  • Name—the name of the report template.

    You can sort the table by this column by clicking the title and selecting Ascending or Descending.

  • Time period—the time period for which the report analytics were extracted.
  • Last report—date and time when the report was generated.

    You can sort the table by this column by clicking the title and selecting Ascending or Descending.

  • Tenant—name of the tenant that owns the report.

You can click the name of a report to open the drop-down list with available commands:

  • Open report—use this command to open the report data window.
  • Save as HTML—use this command to save the report as an HTML file.
  • Run report—use this option to generate a report immediately. Refresh the browser window to see the newly generated report in the table.
  • Edit report template—use this command to configure widgets and the time period for extracting analytics.
  • Delete report—use this command to delete the report.

In this section

Viewing reports

Generating reports

Saving report as HTML

Deleting reports

Page top

[Topic 217945]

Viewing reports

To open a report:

  1. Open the KUMA web interface and select Reports → Generated reports.
  2. In the report table, click the name of the generated report and select Open report in the drop-down list.

    The new browser window opens with the widgets displaying report analytics. If a widget displays data on events, alerts, incidents, or active lists, you can click its header to open the corresponding section of the KUMA web interface with an active filter and/or search query that is used to display data from the widget.

  3. If necessary, you can save the report to an HTML file by using the Save as HTML button.
Page top

[Topic 217883]

Generating reports

You can generate reports manually or configure a schedule to generate them automatically.

To generate a report manually:

  1. Open the KUMA web interface and select Reports → Templates.
  2. In the report templates table, click a report template name and select Run report in the drop-down list.

    You can also generate a report from the Reports → Generated reports tab by clicking the name of an existing report and selecting Run report in the drop-down list.

The report is generated and is displayed on the Reports → Generated reports tab.

To generate reports automatically:

Configure the report schedule.

Page top

[Topic 217985]

Saving report as HTML

To save the report as HTML:

  1. Open the KUMA web interface and select Reports → Generated reports.
  2. In the report table, click the name of a generated report and select Save as HTML in the drop-down list.

The report is saved as an HTML file using your browser settings.

You can also save the report in HTML format when you view it.

Page top

[Topic 217837]

Deleting reports

To delete a report:

  1. Open the KUMA web interface and select Reports → Generated reports.
  2. In the report table, click the name of the generated report and select Delete report in the drop-down list.

    A confirmation window opens.

  3. Click OK.
Page top

[Topic 221645]

Source status

In KUMA, you can monitor the state of the sources of data received by collectors. There can be multiple sources of events on one server, and data from multiple sources can be received by one collector. Sources of events are identified based on the following fields of events (the data in these fields is case sensitive):

  • DeviceProduct
  • DeviceAddress and/or DeviceHostName

Lists of sources are generated in collectors, merged in the KUMA Core, and displayed in the program web interface under Source status on the List of event sources tab. Data is updated every minute.

The rate and number of incoming events serve as an important indicator of the state of the observed system. You can configure monitoring policies such that changes are tracked automatically and notifications are automatically created when indicators reach specific boundary values. Monitoring policies are displayed in the KUMA web interface under Source status on the Monitoring policies tab.

When monitoring policies are triggered, monitoring events are created and include data about the source of events.

In this section

List of event sources

Monitoring policies

Page top

[Topic 221773]

List of event sources

Sources of events are displayed in the table under Source status → List of event sources. One page can display up to 250 sources. You can sort the table by clicking the column header of the relevant setting. Clicking on a source of events opens an incoming data graph.

You can use the Search field to search for event sources. The search is performed using regular expressions (RE2).

If necessary, you can configure the interval for updating data in the table. Available update periods: 1 minute, 5 minutes, 15 minutes, 1 hour. The default value is No refresh. You may need to configure the update period to track changes made to the list of sources.

The following columns are available:

  • Status—status of the event source:
    • Green—events are being received within the limits of the assigned monitoring policy.
    • Red—the frequency or number of incoming events goes beyond the boundaries defined in the monitoring policy.
    • Gray—a monitoring policy has not been assigned to the source of events.

    The table can be filtered by this setting.

  • Name—name of the event source. The name is generated automatically from the following fields of events:
    • DeviceProduct
    • DeviceAddress and/or DeviceHostName
    • DeviceProcessName
    • Tenant

    You can change the name of an event source. The name can contain no more than 128 Unicode characters.

  • Host name or IP address—host name or IP address from which the events were forwarded.
  • Monitoring policy—name of the monitoring policy assigned to the event source.
  • Stream—frequency at which events are received from the event source.
  • Lower limit—lower boundary of the permissible number of incoming events as indicated in the monitoring policy.
  • Upper limit—upper boundary of the permissible number of incoming events as indicated in the monitoring policy.
  • Tenant—the tenant that owns the events received from the event source.

If you select sources of events, the following buttons become available:

  • Save to CSV—you can use this button to export data of the selected event sources to a file named event-source-list.csv in UTF-8 encoding.
  • Apply policy and Disable policy—you can use these buttons to enable or disable a monitoring policy for a source of events. When enabling a policy, you must select the policy from the drop-down list. When disabling a policy, you must select how long you want to disable the policy: temporarily or forever.

    If there is no policy for the selected event source, the Apply policy button is inactive. This button will also be inactive if sources from different tenants are selected, but the user has no available policies in the shared tenant.

    In some rare cases, the status of a disabled policy may change from gray to green a few seconds after it is disabled due to overlapping internal processes of KUMA. If this happens, you need to disable the monitoring policy again.

  • Remove event source from the list—you can use this button to remove an event source from the table. The statistics on this source will also be removed. If a collector continues to receive data from the source, the event source will re-appear in the table but its old statistics will not be taken into account.

By default, no more than 250 event sources are displayed and, therefore, available for selection. If there are more event sources, to select them you must load additional event sources by clicking the Show next 250 button in the lower part of the window.

Page top

[Topic 221775]

Monitoring policies

Policies for monitoring the sources of events are displayed in the table under Source status → Monitoring policies. You can sort the table by clicking the column header of the relevant setting. Clicking a policy opens an information pane where its settings can be edited.

The following columns are available:

  • Name—name of the monitoring policy.
  • Lower limit—lower boundary of the permissible number of incoming events as indicated in the monitoring policy.
  • Upper limit—upper boundary of the permissible number of incoming events as indicated in the monitoring policy.
  • Interval—period taken into account by the monitoring policy.
  • Type—type of monitoring policy:
    • byCount—the monitoring policy tracks the number of incoming events.
    • byEPS—the monitoring policy tracks the rate of incoming events.
  • Tenant—the tenant that owns the monitoring policy.

To add a monitoring policy:

  1. In the KUMA web interface, under Source status → Monitoring policies, click Add policy and define the settings in the opened window:
    • In the Policy name field, enter a unique name for the policy you are creating. The name must contain from 1 to 128 Unicode characters.
    • In the Tenant drop-down list, select the tenant that will own the policy. Your tenant selection determines which sources of events can be covered by the monitoring policy.
    • In the Policy type drop-down list, select the method used to track incoming events: by rate or by number.
    • In the Lower limit and Upper limit fields, define the boundaries of normal behavior. Deviations beyond these boundaries trigger the monitoring policy, which creates an alert and forwards notifications.
    • In the Count interval field, specify the period during which the monitoring policy must take into account the data from the monitoring source. The maximum value is 14 days.
    • If necessary, specify the email addresses to which notifications about the activation of the KUMA monitoring policy should be sent. To add each address, click the Email button.

      To forward notifications, you must configure a connection to the SMTP server.

  2. Click Add.

The monitoring policy will be added.

To remove a monitoring policy,

select one or more policies, then click Delete policy and confirm the action.

You cannot remove preinstalled monitoring policies or policies that have been assigned to data sources.

Page top

[Topic 218042]

Widgets

Widgets in KUMA are used to obtain analytics for the Dashboard and Reports.

Click the title or legend of a widget for events, alerts, incidents, or active lists to open the corresponding section of the KUMA web interface, which displays the widget data obtained using the section's filters and/or a search query. See below for more details. This functionality is not available while creating or editing layouts.

If the widget is configured to divide the analytics period into segments, the values or charts will be displayed in pairs: the analytics for the current segment of the period (custom color) and the analytics for the previous segment of the period (gray).

Widgets are organized into widget groups, each one related to the analytics type they provide. The following widget groups and widgets are available in KUMA:

  • Events—widget for creating analytics based on events.

    Click the title of this widget to go to the Events section of the KUMA web interface, where the events are requested using the SQL query specified in the widget. The query is applied without grouping (except for table graphs) but retains the conditions specified in the WHERE clause. The LIMIT parameter of the query is set to 250.

  • Active lists—widget for creating analytics based on active lists of correlators.

    Click the title of this widget to go to the section of the active list used to build the analytics of the widget.

  • Alerts—group for analytics related to alerts. Click on the title or legend of widgets in this group to go to the Alerts section of the KUMA web interface and view the widget data in detail.

    The group includes the following widgets:

    • Active alerts—number of alerts that have not been closed.
    • Active alerts by tenant—number of unclosed alerts grouped by tenant.
    • Alerts by tenant—number of alerts of all statuses, grouped by tenant.
    • Unassigned alerts—number of alerts that have the New status.
    • Alerts by assignee—number of assigned alerts grouped by their executor.
    • Alerts by status—number of alerts grouped by status.
    • Alerts by severity—number of unclosed alerts grouped by their severity.
    • Alerts by rule—number of unclosed alerts grouped by correlation rule. For this widget, you cannot obtain detailed information by clicking on the widget title.
    • Latest alerts—table containing the last 10 unclosed alerts. If there are more than 10 alerts in tenants selected in the widget, some of them will not be displayed.
    • Alerts distribution—number of alerts created during the period indicated in the widget.
  • Assets—group for analytics related to assets from processed events. This group includes the following widgets:
    • Affected assets—table of alert-related assets showing the severity of the asset and the number of unclosed alerts related to it.
    • Affected asset categories—categories of assets linked to unclosed alerts.
    • Number of assets—number of assets that were added to KUMA.
    • Assets in incidents by tenant—number of assets in unclosed incidents, grouped by tenant.
    • Assets in alerts by tenant—number of assets in unclosed alerts, grouped by tenant.
  • Incidents—group for analytics related to incidents. Click on the title or legend of widgets in this group to go to the Incidents section of the KUMA web interface and view the widget data in detail.

    The group includes the following widgets:

    • Active incidents—number of incidents that have not been closed.
    • Unassigned incidents—number of incidents that have the Opened status.
    • Incidents distribution—number of incidents created during the period indicated in the widget.
    • Incidents by assignee—number of incidents that have the Assigned status grouped by KUMA user.
    • Incidents by status—number of incidents grouped by status.
    • Incidents by severity—number of unclosed incidents grouped by their severity. Available types of diagrams: pie chart, bar graph.
    • Active incidents by tenant—number of unclosed incidents grouped by tenant available to the user.
    • All incidents—number of incidents of all statuses.
    • All incidents by tenant—number of incidents of all statuses, grouped by tenant.
    • Affected assets in incidents—number of assets in unclosed incidents. For this widget, you cannot obtain detailed information by clicking on the widget title.
    • Affected assets categories in incidents—categories of the assets affected by unclosed incidents. Available types of diagrams: pie chart, bar graph. For this widget, you cannot obtain detailed information by clicking on the widget title.
    • Affected users in incidents—users affected by incidents. Available types of diagrams: table, pie chart, bar graph. For this widget, you cannot obtain detailed information by clicking on the widget title.
    • Latest incidents—last 10 unclosed incidents. If there are more than 10 incidents in tenants selected in the widget, some of them will not be displayed.
  • Event sources—group for analytics related to sources of events. The group includes the following widgets:
    • Top event sources by alerts number—number of unclosed alerts grouped by event source.
    • Top event sources by convention rate—number of events associated with unclosed alerts, grouped by event source.

      Due to optimized storage of events in alerts, the number of alerts created by event sources may be distorted in some cases. To obtain accurate statistics, it is recommended to specify the Device Product event field as unique in the correlation rule, and enable storage of all base events in a correlation event. However, correlation rules with these settings consume more resources.

  • Users—group for analytics related to users from processed events. The group includes the following widgets:
    • Affected users in alerts—number of users related to unclosed alerts.
    • Number of AD users—number of Active Directory accounts received via LDAP during the period indicated in the widget.

In this section

Standard widgets

Customizable event-based analytics

Customizable active lists analytics

Page top

[Topic 221919]

Standard widgets

This section describes the settings of all widgets except the Events widget and Active lists widget.

The available settings of widgets depend on the selected type of widget. The widget type is determined by its icon:

  • pie chart
  • counter
  • table
  • bar chart
  • date histogram

Settings of pie charts, counters, and tables

The settings of pie charts, counters, and tables are located on the same tab. The available settings depend on the selected widget:

  • Name—the field for the name of the widget. Must contain from 1 to 128 Unicode characters.
  • Description—the field for the widget description. You can add up to 4000 Unicode characters describing the widget.
  • Tenant—drop-down list for selecting the tenant whose data will be used to display analytics. The As layout setting is used by default.
  • Period—drop-down list for configuring the time period for which the analytics must be displayed. Available options:
    • As layout—when this option is selected, the widget time period value reflects the period that was configured for the layout. This option is selected by default.
    • 1 hour—receive analytics for the previous hour.
    • 1 day—receive analytics for the previous day.
    • 7 days—receive analytics for the previous 7 days.
    • 30 days—receive analytics for the previous 30 days.
    • In period—receive analytics for the custom time period. The time period is set using the calendar that is displayed when this option is selected.

      The upper boundary of the period is not included in the time slice defined by it. In other words, to receive analytics for a 24-hour period, you should configure the period as Day 1, 00:00:00 – Day 2, 00:00:00 instead of Day 1, 00:00:00 – Day 1, 23:59:59.

  • Storage—drop-down list for selecting the storage whose events will be used to create analytics.
  • Color—the drop-down list to select the color in which the information is displayed:
    • Default—use your browser's default font color.
    • green
    • red
    • blue
    • yellow
  • Horizontal—turn on this toggle switch if you want to use a horizontal histogram instead of a vertical one. This toggle switch is turned off by default.
  • Show legend—turn off this toggle switch if you don't want the widget to display the legend for the widget analytics. This toggle switch is turned on by default.
  • Show nulls in legend—turn on this toggle switch if you want the legend for the widget analytics to include parameters with zero values. This toggle switch is turned off by default.
  • Decimals—this field is used to specify how values are rounded off. The default value is Auto.

Settings of bar graphs

The settings of bar graphs are located on two tabs. The available settings depend on the selected widget:

  • Actions —this tab is used to configure the chart scale. Available settings:
    • The Y-min and Y-max fields are used to define the scale of the Y-axis. The Decimals field on the left is used to set the rounding parameter for the Y-axis values.
    • The X-min and X-max fields are used to define the scale of the X-axis. The Decimals field on the right is used to control rounding of the X-axis values.

      Negative values may be displayed on chart axes due to the scaling of charts on the widget. You can fix this by setting the minimum chart values to zero instead of Auto.

  • wrench —this tab is used to configure the widget analytics display.
    • Name—the field for the name of the widget. Must contain from 1 to 128 Unicode characters.
    • Description—the field for the widget description. You can add up to 512 Unicode characters describing the widget.
    • Tenant—drop-down list for selecting the tenant whose data will be used to display analytics.
    • Period—drop-down list for configuring the time period for which the analytics must be displayed. Available options:
      • As layout—when this option is selected, the widget time period value reflects the period that was configured for the layout. This option is selected by default.
      • 1 hour—receive analytics for the previous hour.
      • 1 day—receive analytics for the previous day.
      • 7 days—receive analytics for the previous 7 days.
      • 30 days—receive analytics for the previous 30 days.
      • In period—receive analytics for the custom time period. The time period is set using the calendar that is displayed when this option is selected.
    • Storage—drop-down list for selecting the storage whose events will be used to create analytics.
    • Color—the drop-down list to select the color in which the information is displayed:
      • default—use your browser's default font color.
      • green
      • red
      • blue
      • yellow
    • Horizontal—turn on this toggle switch if you want to use a horizontal histogram instead of a vertical one. This toggle switch is turned off by default.

      When this option is enabled and the widget displays a large amount of data, horizontal scrolling is not available and all data is fit into the widget window. If there is a lot of data to display, it is recommended to increase the widget size.

    • Show legend—turn off this toggle switch if you don't want the widget to display the legend for the widget analytics. This toggle switch is turned on by default.
    • Show nulls in legend—turn on this toggle switch if you want the legend for the widget analytics to include parameters with zero values. This toggle switch is turned off by default.
    • Decimals—this field is used to specify how values are rounded off. The default value is Auto.
Page top

[Topic 217867]

Customizable event-based analytics

You can use the Events widget to get the necessary event-based analytics based on SQL queries. Depending on the selected value of the graph type, two or three parameter tabs are available:

  • Selectors —this tab is used to define the widget type and to compose the search for the analytics.
  • Actions —this tab is used to configure the chart scale. This tab is only available for the Bar chart, Line chart, and Date Histogram graph types (see below).
  • wrench —this tab is used to configure the widget analytics display.

The following parameters are available for the Selectors tab:

  • Graph—this drop-down list is used to select widget graph type. Available options:
    • Pie chart
    • Bar chart
    • Counter
    • Line chart
    • Table
    • Date Histogram
  • Tenant—drop-down list for selecting the tenant whose data will be used to display analytics. The As layout setting is used by default.
  • Period—drop-down list for configuring the time period for which the analytics must be displayed. Available options:
    • As layout—when this option is selected, the widget time period value reflects the period that was configured for the layout. This option is selected by default.
    • 1 hour—receive analytics for the previous hour.
    • 1 day—receive analytics for the previous day.
    • 7 days—receive analytics for the previous 7 days.
    • 30 days—receive analytics for the previous 30 days.
    • In period—receive analytics for the custom time period. The time period is set using the calendar that is displayed when this option is selected.

      The upper boundary of the period is not included in the time slice defined by it. In other words, to receive analytics for a 24-hour period, you should configure the period as Day 1, 00:00:00 – Day 2, 00:00:00 instead of Day 1, 00:00:00 – Day 1, 23:59:59.

  • Show data for previous period—a toggle button that lets you enable the display of data for two periods at the same time, including data for the current period and for the previous one. This can be useful for assessing change dynamics.
  • Storage—the storage where the search should be performed.
  • SQL query field—here you can enter a search query that is equivalent to filtering events using SQL syntax.

    For Event widgets, you can use the query builder button to open a query builder whose parameters are equivalent to those of the event filter builder:

    Description of query builder parameters

    • SELECT—use these fields to define the event fields that must be extracted for analytics. The number of available fields depends on the selected widget graph type (see above).

      In the left drop-down list you can select the event fields required for analytics.

      The middle field displays what the selected field is used for in the widget: metric or value.

      When the Table widget type is selected, the values in the middle fields become available for editing and are displayed as the names of columns. Only ASCII characters can be used for these values.

      In the right drop-down list you can select how the metric type event field values must be processed for the widget:

      • count—select this option to count events. This option is available only for the ID event field.
      • max—select this option to display the maximum event field value from the event selection.
      • min—select this option to display the minimum event field value from the event selection.
      • avg—select this option to display the average event field value from the event selection.
      • sum—select this option to display the sum of event field values from the event selection.
    • SOURCE—this drop-down list is used to select the data source type. Only the events option is available for selection.
    • WHERE—this group of settings is used to create search conditions:

      In the left drop-down list you can select the event field you want to use as a filter.

      In the middle drop-down list you can select the required operator. Available operators vary based on the chosen event field's value type.

      In the right field you can select or enter the value of the event field. Depending on the selected event field's value type, you may have to enter the value manually, select it in the drop-down list, or select it on the calendar.

      You can add search conditions using the Add condition button or delete them using the button with the cross icon.

      You can also add group conditions using the Add group button. By default, groups of conditions are added with the AND operator. However, you can switch the operator by clicking the operator name. Available values: AND, OR, NOT. Group conditions are deleted using the Delete group button.

    • GROUP BY—this drop-down list is used to select the event fields for grouping events. This parameter is not available for Counter graph type.
    • ORDER BY—this drop-down list is used to define how the information from search results is sorted in the widget. This parameter is not available for Date Histogram and Counter graph types.

      In the left drop-down list you can select the value, metric or event field to use for sorting.

      In the drop-down list on the right, you can select the sorting order: ASC for ascending or DESC for descending.

      For Table-type graphs, you can add sorting conditions by using the Add column button.

    • LIMIT—this field is used to set the maximum number of data points for the widget. This parameter is not available for Date Histogram and Counter graph types.

    Example of search conditions in the query builder

    Search condition parameters for the widget showing average bytes received per host

    Aliases must not contain spaces.
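    A query equivalent to this example, as it would appear in the SQL query field, might look as follows. This is a sketch: the BytesIn and DeviceHostName event fields are taken from the standard KUMA event field set, and the WHERE condition is an assumption chosen for illustration.

```sql
-- Hypothetical query for a widget showing average bytes received per host:
-- avg() is computed over the BytesIn field and grouped by DeviceHostName.
-- The WHERE condition is illustrative.
SELECT avg(BytesIn) AS metric, DeviceHostName AS value
FROM `events`
WHERE DeviceProduct = 'netflow'
GROUP BY value
ORDER BY metric DESC
LIMIT 250
```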

The following parameters are available for the Actions tab:

  • The Y-min and Y-max fields are used to define the scale of the Y-axis. The Decimals field on the left is used to set the rounding parameter for the Y-axis values.
  • The X-min and X-max fields are used to define the scale of the X-axis. The Decimals field on the right is used to control rounding of the X-axis values.

    Negative values may be displayed on chart axes due to the scaling of charts on the widget. You can fix this by setting the minimum chart values to zero instead of Auto.

  • The Line-width and Point size fields are available for the Line chart graph type and are used to configure the plot line.

The following parameters are available for the wrench tab:

  • Name—the field for the name of the widget. Must contain from 1 to 128 Unicode characters.
  • Description—the field for the widget description. You can add up to 512 Unicode characters describing the widget.
  • Color—the drop-down list to select the color in which the information is displayed:
    • Default—use your browser's default font color.
    • green
    • red
    • blue
    • yellow
  • Horizontal—turn on this toggle switch if you want to use a horizontal histogram instead of a vertical one. This toggle switch is turned off by default.

    When this option is enabled and the widget displays a large amount of data, horizontal scrolling is not available and all data is fit into the widget window. If there is a lot of data to display, it is recommended to increase the widget size.

  • Show legend—turn off this toggle switch if you don't want the widget to display the legend for the widget analytics. This toggle switch is turned on by default.
  • Show nulls in legend—turn on this toggle switch if you want the legend for the widget analytics to include parameters with zero values. This toggle switch is turned off by default.
  • Decimals—the field to enter the number of decimals to which the displayed value must be rounded off.
  • Period segments length (available for Date Histogram graph types)—drop-down list for selecting the duration of the segments into which you want to divide the period.
Page top

[Topic 234198]

Customizable active lists analytics

You can use Active lists widgets to get the necessary analytics based on SQL queries sent to the active lists. Depending on the selected value of the graph type, two or three parameter tabs are available:

  • Selectors —this tab is used to define the widget type and to compose the search for the analytics.
  • Actions —this tab is used to configure the chart scale. This tab is available only for the Bar chart graph type (see below).
  • wrench —this tab is used to configure the widget analytics display.

The following parameters are available for the Selectors tab:

  • Graph—this drop-down list is used to select widget graph type. Available options:
    • Pie chart
    • Bar chart
    • Counter
    • Table
  • Tenant—drop-down list for selecting the tenant whose data will be used to display analytics. The As layout setting is used by default.
  • Correlator—name of the correlator service whose active list should be queried for analytics.
  • Active list—name of the active list that should be searched.

    The same resource of an active list can be used by different correlator services. However, a separate entity of the active list is created for each correlator. Therefore, the contents of the active lists used by different correlators differ even if the active lists have the same names and IDs.

  • SQL query field—here you can enter a search query that is equivalent to searching events using SQL syntax.

    In contrast to an event search, in search queries to active lists the FROM parameter must be set to `records`.

    The service fields _key (the keys of the active list records) and _count (the number of times a record has been added to the active list), as well as custom fields, are available for queries.

    Examples:

    • SELECT count(_key) AS metric, Status AS value FROM `records` GROUP BY value ORDER BY metric DESC LIMIT 250—Query for a pie chart that returns the number of keys of the active list (count aggregation based on the _key field) and all options for values of the custom field Status. The widget displays a pie chart with the total number of records in the active list, divided proportionally by the number of possible values for the Status field.
    • SELECT Name, Status, _count AS Number FROM `records` WHERE Description ILIKE '%ftp%' ORDER BY Name DESC LIMIT 250—Query for the table that returns the values of the Name and Status custom fields and the _count service field for those active list records in which the value of the Description custom field matches the query ILIKE '%ftp%'. The widget displays a table with the Status, Name, and Number columns.

    If a date and time conversion function is used in an SQL query (for example, fromUnixTimestamp64Milli) and the field being processed does not contain a date and time, an error will be displayed in the widget. To avoid this, use functions that can handle a null value. Example: SELECT _key, fromUnixTimestamp64Milli(toInt64OrNull(DateTime)) as Date FROM `records` LIMIT 250.
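    For instance, the service fields alone can be used to rank records by how many times each key has been added. This is a sketch following the pattern of the examples above:

```sql
-- Rank active list records by the number of additions (_count),
-- returning the record key (_key) and the counter.
SELECT _key AS value, _count AS metric
FROM `records`
ORDER BY metric DESC
LIMIT 250
```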

The following parameters are available for the Actions tab:

  • The Y-min and Y-max fields are used to define the scale of the Y-axis. The Decimals field on the left is used to set the rounding parameter for the Y-axis values.
  • The X-min and X-max fields are used to define the scale of the X-axis. The Decimals field on the right is used to control rounding of the X-axis values.

    Negative values may be displayed on chart axes due to the scaling of charts on the widget. You can fix this by setting the minimum chart values to zero instead of Auto.

The following parameters are available for the wrench tab:

  • Name—the field for the name of the widget. Must contain from 1 to 128 Unicode characters.
  • Description—the field for the widget description. You can add up to 512 Unicode characters describing the widget.
  • Color—the drop-down list to select the color in which the information is displayed:
    • default—use your browser's default font color.
    • green
    • red
    • blue
    • yellow
  • Horizontal—turn on this toggle switch if you want to use a horizontal histogram instead of a vertical one. This toggle switch is turned off by default.

    When this option is enabled and the widget displays a large amount of data, horizontal scrolling is not available and all data is fit into the widget window. If there is a lot of data to display, it is recommended to increase the widget size.

  • Show legend—turn off this toggle switch if you don't want the widget to display the legend for the widget analytics. This toggle switch is turned on by default.
  • Show nulls in legend—turn on this toggle switch if you want the legend for the widget analytics to include parameters with zero values. This toggle switch is turned off by default.
  • Decimals—the field to enter the number of decimals to which the displayed value must be rounded off.
Page top

[Topic 221499]

Working with tenants

Access to tenants is regulated in the settings of users. The general administrator has access to the data of all tenants. Only a user with this role can create and disable tenants.

Tenants are displayed in the table under Settings → Tenants in the KUMA web interface. You can sort the table by clicking on columns.

Available columns:

  • Name—tenant name. The table can be filtered by this column.
  • EPS limit—quota size for EPS (events processed per second) allocated to the tenant out of the overall EPS quota determined by the license.
  • Description—description of the tenant.
  • Disabled—indicates that the tenant is inactive.

    By default, inactive tenants are not displayed in the table. You can view them by selecting the Show disabled check box.

  • Created—tenant creation date.

To create a tenant:

  1. In the KUMA web interface under Settings → Tenants, click Add.

    The Add tenant window opens.

  2. Specify the tenant name in the Name field. The name must contain from 1 to 128 Unicode characters.
  3. In the EPS limit field, specify the EPS quota for the tenant. The cumulative EPS of all tenants cannot exceed the EPS of the license.
  4. If necessary, add a Description of the tenant. The description can contain no more than 256 Unicode characters.
  5. Click Save.

The tenant will be added and displayed in the tenants table.

To disable or enable a tenant:

  1. In the KUMA web interface under Settings → Tenants, select the relevant tenant.

    If the tenant is disabled and not displayed in the table, select the Show disabled check box.

  2. Click Disable or Enable.

When a tenant is disabled, its services are automatically stopped, it no longer receives or processes events, and the EPS of the tenant is no longer taken into account for the cumulative EPS of the license.

When a tenant is enabled, its services must be manually started.

In this Help topic

Selecting a tenant

Tenant affiliation rules

Page top

[Topic 221455]

Selecting a tenant

If you have access to multiple tenants, KUMA lets you select which tenants' data will be displayed in the KUMA web interface.

To select a tenant for displaying data:

  1. In the KUMA web interface, click Selected tenants.

    The tenant selection area opens.

  2. Select the check boxes next to the tenants whose data you want to see in sections of the KUMA web interface. You must select at least one tenant. You can use the Search field to search for tenants.
  3. Close the tenant selection area by clicking Selected tenants.

Sections of the KUMA web interface will display only the data and analytics related to the selected tenants.

Your selection of tenants for data display will determine which tenants can be specified when creating resources, services, layouts, report templates, widgets, incidents, assets, and other KUMA settings that let you select a tenant.

Page top

[Topic 221469]

Tenant affiliation rules

Tenant inheritance rules

It is important to track which tenant owns specific objects created in KUMA because this determines who will have access to the objects and whether or not interaction with specific objects can be configured. Tenant identification rules:

  • The tenant of an object (such as a service or resource) is determined by the user when the object is created.

    After the object is created, the tenant selected for that object cannot be changed. However, resources can be exported then imported into another tenant.

  • The tenant of an alert and correlation event is inherited from the correlator that created them.

    The tenant name is indicated in the TenantId event field.

  • If events of different tenants that are processed by the same correlator are not merged, the correlation events created by the correlator inherit the tenant of the event.
  • The incident tenant is inherited from the alert.

Examples of multitenant interactions

Multitenancy in KUMA provides the capability to centrally investigate alerts and incidents that occur in different tenants. Below are some examples that illustrate which tenants own certain objects that are created.

When correlating events from different tenants in a common stream, you should not group events by tenant; in other words, do not specify the TenantId event field in the Identical fields field of correlation rules. Group events by tenant only if events from different tenants must not be merged.
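
The effect of the Identical fields setting can be sketched as follows (a simplified model, not the actual correlator implementation):

```python
# Simplified illustration of grouping events by the "Identical fields"
# of a correlation rule; adding TenantId keeps each tenant's events
# in a separate correlation bucket.
from collections import defaultdict

def group_events(events, identical_fields):
    buckets = defaultdict(list)
    for event in events:
        key = tuple(event[field] for field in identical_fields)
        buckets[key].append(event)
    return buckets

events = [
    {"TenantId": 2, "SourceAddress": "10.0.0.1"},
    {"TenantId": 3, "SourceAddress": "10.0.0.1"},
]

# Without TenantId, events of both tenants fall into one bucket (merged).
merged = group_events(events, ["SourceAddress"])
# With TenantId, each tenant's events are processed separately.
separate = group_events(events, ["SourceAddress", "TenantId"])
```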

Services that must be hosted on the capacities of the main tenant can be deployed only by a user with the general administrator role.

  • Correlation of events for one tenant; the correlator is allocated to this tenant and deployed at the tenant

    Condition:

    The collector and correlator are owned by tenant 2 (tenantID=2)

    Scenario:

    1. The collector of tenant 2 receives and forwards events to the correlator of tenant 2.
    2. When correlation rules are triggered, the correlator creates correlation events with tenantID=2.
    3. The correlator forwards the correlation events to the storage partition for tenant 2.
    4. An alert is created and linked to the tenant with tenantID=2.
    5. The events that triggered the alert are appended to the alert.

    An incident is created manually by the user. The incident tenant is determined by the tenant of the user. An alert is linked to an incident either manually or automatically.

  • Correlation of events for one tenant; the correlator is allocated to this tenant but deployed at the main tenant

    Condition:

    • The collector is deployed at tenant 2 and is owned by this tenant (tenantID=2).
    • The correlator is deployed at the main tenant.

      The owner of the correlator is determined by the general administrator depending on who will investigate incidents of tenant 2: employees of the main tenant or employees of tenant 2. The owner of the alert and incident depends on the owner of the correlator.

    Scenario 1. The correlator belongs to tenant 2 (tenantID=2):

    1. The collector of tenant 2 receives and forwards events to the correlator.
    2. When correlation rules are triggered, the correlator creates correlation events with tenantID=2.
    3. The correlator forwards the correlation events to the storage partition of tenant 2.
    4. An alert is created and linked to the tenant with tenantID=2.
    5. The events that triggered the alert are appended to the alert.

    Result 1:

    • The created alert and its linked events can be accessed by employees of tenant 2.

    Scenario 2. The correlator belongs to the main tenant (tenantID=1):

    1. The collector of tenant 2 receives and forwards events to the correlator.
    2. When correlation rules are triggered, the correlator creates correlation events with tenantID=1.
    3. The correlator forwards the correlation events to the storage partition of the main tenant.
    4. An alert is created and linked to the tenant with tenantID=1.
    5. The events that triggered the alert are appended to the alert.

    Result 2:

    • The alert and its linked events cannot be accessed by employees of tenant 2.
    • The alert and its linked events can be accessed by employees of the main tenant.
  • Centralized correlation of events received from different tenants

    Condition:

    • Two collectors are deployed: one at tenant 2 and one at tenant 3. Both collectors forward events to the same correlator.
    • The correlator is owned by the main tenant. A correlation rule waits for events from both tenants.

    Scenario:

    1. The collector of tenant 2 receives and forwards events to the correlator of the main tenant.
    2. The collector of tenant 3 receives and forwards events to the correlator of the main tenant.
    3. When a correlation rule is triggered, the correlator creates correlation events with tenantID=1.
    4. The correlator forwards the correlation events to the storage partition of the main tenant.
    5. An alert is created and linked to the main tenant with tenantID=1.
    6. The events that triggered the alert are appended to the alert.

    Result:

    • The alert and its linked events cannot be accessed by employees of tenant 2.
    • The alert and its linked events cannot be accessed by employees of tenant 3.
    • The alert and its linked events can be accessed by employees of the main tenant.
  • The tenant correlates its own events, but the main tenant additionally provides centralized correlation of events.

    Condition:

    • Two collectors are deployed: one on the main tenant and one on tenant 2.
    • Two correlators are deployed:
      • Correlator 1 is owned by the main tenant and receives events from the collector of the main tenant and correlator 2.
      • Correlator 2 is owned by tenant 2 and receives events from the collector of tenant 2.

    Scenario:

    1. The collector of tenant 2 receives and forwards events to correlator 2.
    2. When a correlation rule is triggered, the correlator of tenant 2 creates correlation events with tenantID=2.
      • Correlator 2 forwards the correlation events to the storage partition of tenant 2.
      • Alert 1 is created and linked to the tenant with tenantID=2.
      • The events that triggered the alert are appended to the alert.
      • Correlation events from the correlator of tenant 2 are forwarded to correlator 1.
    3. The collector of the main tenant receives and forwards events to correlator 1.
    4. Correlator 1 processes events of both tenants. When a correlation rule is triggered, correlation events with tenantID=1 are created.
      • Correlator 1 forwards the correlation events to the storage partition of the main tenant.
      • Alert 2 is created and linked to the tenant with tenantID=1.
      • The events that triggered the alert are appended to the alert.

    Result:

    • Alert 2 and its linked events cannot be accessed by employees of tenant 2.
    • Alert 2 and its linked events can be accessed by employees of the main tenant.
  • One correlator for two tenants

    If you do not want events from different tenants to be merged during correlation, you should specify the TenantId event field in the Identical fields field in correlation rules. In this case, the alert inherits the tenant from the correlator.

    Condition:

    • Two collectors are deployed: one at tenant 2 and one at tenant 3.
    • One correlator owned by the main tenant (tenantID=1) is deployed. It receives events from both tenants but processes them independently of each other.

    Scenario:

    1. The collector of tenant 2 receives and forwards events to the correlator.
    2. The collector of tenant 3 receives and forwards events to the correlator.
    3. When a correlation rule is triggered, the correlator creates correlation events with tenantID=1.
      • The correlator forwards the correlation events to the storage partition of the main tenant.
      • An alert is created and linked to the main tenant with tenantID=1.
      • The events that triggered the alert are appended to the alert.

    Result:

    • Alerts that were created based on events from tenants 2 and 3 are not available to employees of these tenants.
    • Alerts and their linked events can be accessed by employees of the main tenant.
Page top

[Topic 220213]

Working with incidents

In the Incidents section of the KUMA web interface, you can create, view, and process incidents. You can also filter incidents if needed. Clicking the name of an incident opens a window containing information about the incident.

Displayed date format:

  • English localization: YYYY-MM-DD.
  • Russian localization: DD.MM.YYYY.
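
For example, the same date is rendered as follows in each localization (a sketch using Python's strftime, shown for illustration only):

```python
from datetime import date

d = date(2024, 3, 7)
english = d.strftime("%Y-%m-%d")   # English localization: 2024-03-07
russian = d.strftime("%d.%m.%Y")   # Russian localization: 07.03.2024
```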

In this Help topic

About the incidents table

Saving and selecting incident filter configuration

Deleting incident filter configurations

Viewing information about an incident

Incident creation

Incident processing

Changing incidents

Automatic linking of alerts to incidents

Categories and types of incidents

Exporting incidents to RuCERT

See also:

About incidents

Page top

[Topic 220214]

About the incidents table

The main part of the Incidents section shows a table containing information about registered incidents. If required, you can change the set of columns and the order in which they are displayed in the table.

How to customize the incidents table

  1. Click the gear icon in the top right corner of the incidents table.

    The table customization window opens.

  2. Select the check boxes opposite the settings that you want to view in the table.

    When you select a check box, the incidents table is updated and a new column is added. When a check box is cleared, the column disappears.

    You can search for table parameters using the Search field.

    Clicking the Default button selects the following columns for display:

    • Name.
    • Threat duration.
    • Assigned to.
    • Created.
    • Tenant.
    • Status.
    • Alerts number.
    • Priority.
    • Affected asset categories.
  3. Change the display order of the columns as needed by dragging the column headings.
  4. If you want to sort the incidents by a specific column, click its title and select one of the available options in the drop-down list: Ascending or Descending.
  5. To filter incidents by a specific parameter, click on the column header and select the required filters from the drop-down list. The set of filters available in the drop-down list depends on the selected column.
  6. To remove filters, click the relevant column heading and select Clear filter.

Available columns of the incidents table:

  • Name—the name of the incident.
  • Threat duration—the time span during which the incident occurred (the time between the first and the last event related to the incident).
  • Assigned to—the name of the security officer to whom the incident was assigned for investigation or response.
  • Created—the date and time when the incident was created. This column allows you to filter incidents by the time they were created.
    • The following preset periods are available: Today, Yesterday, This week, Previous week.
    • If required, you can set an arbitrary period by using the calendar that opens when you select Before date, After date, or In period.
  • Tenant—the name of the tenant that owns the incident.
  • Status—current status of the incident:
    • Opened—new incident that has not been processed yet.
    • Assigned—the incident has been processed and assigned to a security officer for investigation or response.
    • Closed—the incident is closed; the security threat has been resolved.
  • Alerts number—the number of alerts included in the incident. Only the alerts of those tenants to which you have access are taken into account.
  • Priority—shows how important a possible security threat is: Critical, High, Medium, or Low.
  • Affected asset categories—categories of alert-related assets with the highest severity. No more than three categories are displayed.
  • Updated—the date and time of the last change made in the incident.
  • First event time and Last event time—dates and times of the first and last events in the incident.
  • Incident category and Incident type—category and type of the threat assigned to the incident.
  • Export to RuCERT—the status of the export of the incident data to the National Coordinating Center for Computer Incidents (also known as RuCERT):
    • Not exported—the data was not forwarded to RuCERT.
    • Export failed—an attempt to forward data to RuCERT ended with an error, and the data was not transmitted.
    • Exported—data on the incident has been successfully transmitted to RuCERT.
  • Branch—data on the specific node where the incident was created. Incidents of your node are displayed by default. This column is displayed only when hierarchy mode is enabled.

In the Search field, you can enter a regular expression for searching incidents based on their related assets, users, tenants, and correlation rules. Parameters that can be used for a search:

  • Assets: name, FQDN, IP address.
  • Active Directory accounts: attributes displayName, SAMAccountName, and UserPrincipalName.
  • Correlation rules: name.
  • KUMA users who were assigned alerts: name, login, email address.
  • Tenants: name.

When filtering incidents based on a specific parameter, the corresponding column in the incidents table is highlighted in yellow.
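
As an illustration, a regular-expression search over the parameters listed above could be modeled like this (hypothetical incident records; the actual search is performed server-side by KUMA):

```python
import re

# Hypothetical incident records with the searchable parameters listed above.
incidents = [
    {"name": "Suspicious login", "assets": ["srv-01.example.com", "10.1.2.3"],
     "users": ["j.doe"], "tenants": ["Main"], "rules": ["R001 Brute force"]},
    {"name": "Malware outbreak", "assets": ["ws-042"], "users": ["a.smith"],
     "tenants": ["Tenant 2"], "rules": ["R017 Malware detected"]},
]

def search_incidents(pattern, incidents):
    """Return names of incidents where any related parameter matches the regex."""
    regex = re.compile(pattern)
    matches = []
    for inc in incidents:
        haystack = inc["assets"] + inc["users"] + inc["tenants"] + inc["rules"]
        if any(regex.search(value) for value in haystack):
            matches.append(inc["name"])
    return matches
```

For example, the pattern `srv-\d+` would match the first incident through its asset FQDN.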

Page top

[Topic 220215]

Saving and selecting incident filter configuration

In KUMA, you can save changes to incident table settings as filters. Filter configurations are saved on the KUMA Core server and are available to all KUMA users of the tenant for which they were created.

To save the current filter configuration settings:

  1. In the Incidents section of KUMA, open the Select filter drop-down list.
  2. Select Save current filter.

    A window will open for entering the name of the new filter and selecting the tenant that will own the filter.

  3. Enter a name for the filter configuration. The name must be unique for alert filters, incident filters, and event filters.
  4. In the Tenant drop-down list, select the tenant that will own the filter and click Save.

The filter configuration is now saved.

To select a previously saved filter configuration:

  1. In the Incidents section of KUMA, open the Select filter drop-down list.
  2. Select the configuration you want.

The filter configuration is now active.

You can select the default filter by putting an asterisk to the left of the required filter configuration name in the Filters drop-down list.

To reset the current filter settings,

open the Filters drop-down and select Clear filter.

Page top

[Topic 220216]

Deleting incident filter configurations

To delete a previously saved filter configuration:

  1. In the Incidents section of KUMA, open the Filters drop-down list.
  2. Click the delete-icon button next to the configuration you want to delete.
  3. Click OK.

The filter configuration is now deleted for all KUMA users.

Page top

[Topic 220362]

Viewing information about an incident

To view information about an incident:

  1. In the program web interface window, select the Incidents section.
  2. Select the incident whose information you want to view.

This opens a window containing information about the incident.

Some incident parameters are editable.

The upper part of the incident information window displays a toolbar and the name of the user to whom the incident was assigned. In this window, you can process the incident: assign it to a user, combine it with another incident, or close it.

The Description section contains the following data:

  • Created—the date and time when the incident was created.
  • Name—the name of the incident.

    You can change the name of an incident by entering a new name in the field and clicking Save. The name must contain from 1 to 128 Unicode characters.

  • Tenant—the name of the tenant that owns the incident.

    The tenant can be changed by selecting the required tenant from the drop-down list and clicking Save.

  • Status—current status of the incident:
    • Opened—new incident that has not been processed yet.
    • Assigned—the incident has been processed and assigned to a security officer for investigation or response.
    • Closed—the incident is closed; the security threat has been resolved.
  • Priority—the severity of the threat posed by the incident. Possible values:
    • Critical
    • High
    • Medium
    • Low

    Priority can be changed by selecting the required value from the drop-down list and clicking Save.

  • Affected asset categories—the assigned categories of assets associated with the incident.
  • First event time and Last event time—dates and times of the first and last events in the incident.
  • Type and Category—type and category of the threat assigned to the incident. You can change these values by selecting the relevant value from the drop-down list and clicking Save.
  • Export to RuCERT—information on whether or not this incident was exported to RuCERT.
  • Description—description of the incident.

    To change the description, edit the text in the field and click Save. The description can contain no more than 256 Unicode characters.

  • Related tenants—tenants associated with incident-related alerts, assets, and users.
  • Available tenants—tenants whose alerts can be linked to the incident automatically.

    The list of available tenants can be changed by checking the boxes next to the required tenants in the drop-down list and clicking Save.

The Related alerts section contains a table of alerts related to the incident. When you click on the alert name, a window opens with detailed information about this alert.

The Related endpoints and Related users sections contain tables with data on assets and users related to the incident. This information comes from alerts that are related to the incident.

You can add data to the tables in the Related alerts, Related endpoints and Related users sections by clicking the Link button in the appropriate section and selecting the object to be linked to the incident in the opened window. If required, you can unlink objects from the incident. To do this, select the objects as required, click Unlink in the section to which they belong, and save the changes. If objects were automatically added to the incident, they cannot be unlinked until the alert mentioning those objects is unlinked.

The Change log section contains a record of the changes that you and other users made to the incident. Changes are automatically logged, but it is also possible to add comments manually.

Page top

[Topic 220361]

Incident creation

To create an incident:

  1. Open the KUMA web interface and select the Incidents section.
  2. Click Create incident.

    The window for creating an incident will open.

  3. Fill in the mandatory parameters of the incident:
    • In the Name field enter the name of the incident. The name must contain from 1 to 128 Unicode characters.
    • In the Tenant drop-down list, select the tenant that owns the created incident.
  4. If necessary, provide other parameters for the incident:
    • In the Priority drop-down list, select the severity of the incident. Available options: Low, Medium, High, Critical.
    • In the First event time and Last event time fields, specify the time range in which events related to the incident were received.
    • In the Category and Type drop-down lists, select the category and type of the incident. The available incident types depend on the selected category.
    • Add the incident Description. The description can contain no more than 256 Unicode characters.
    • In the Available tenants drop-down list, select the tenants whose alerts can be linked to the incident automatically.
    • In the Related alerts section, add alerts related to the incident.

      Linking alerts to incidents

      To link an alert to an incident:

      1. In the Related alerts section of the incident window click Link.

        A window with a list of alerts not linked to incidents will open.

      2. Select the required alerts.

        PCRE regular expressions can be used to search alerts by user, asset, tenant, and correlation rule.

      3. Click Link.

      Alerts are now related to the incident and displayed in the Related alerts section.

      To unlink alerts from an incident:

      1. Select the relevant alerts in the Related alerts section and click Unlink.
      2. Click Save.

      The alerts are now unlinked from the incident. You can also unlink an alert from an incident in the alert window by using the Unlink button.

    • In the Related endpoints section, add assets related to the incident.

      Linking assets to incidents

      To link an asset to an incident:

      1. In the Related endpoints section of the incident window, click Link.

        A window containing a list of assets will open.

      2. Select the relevant assets.

        You can use the Search field to look for assets.

      3. Click Link.

      Assets are now linked to the incident and are displayed in the Related endpoints section.

      To unlink assets from an incident:

      1. Select the relevant assets in the Related endpoints section and click Unlink.
      2. Click Save.

      The assets are now unlinked from the incident.

    • In the Related users section, add users related to the incident.

      Linking users to incidents

      To link a user to an incident:

      1. In the Related users section of the incident window, click Link.

        The user list window opens.

      2. Select the required users.

        You can use the Search field to look for users.

      3. Click Link.

      Users are now linked to the incident and appear in the Related users section.

      To unlink users from the incident:

      1. Select the required users in the Related users section and click the Unlink button.
      2. Click Save.

      Users are unlinked from the incident.

    • Add a Comment to the incident.
  5. Click Save.

The incident has been created.

Page top

[Topic 220419]

Incident processing

You can assign an incident to a user, aggregate it with other incidents, or close it.

To process an incident:

  1. Select required incidents using one of the methods below:
    • In the Incidents section of the KUMA web interface, click on the incident to be processed.

      The incident window will open, displaying a toolbar on the top.

    • In the Incidents section of the KUMA web interface, select the check box next to the required incidents.

      A toolbar will appear at the bottom of the window.

  2. In the Assign to drop-down list, select the user to whom you want to assign the incident.

    You can assign the incident to yourself by selecting Me.

    The status of the incident changes to Assigned, and the name of the selected user is displayed in the Assign to drop-down list.

  3. If required, edit the incident parameters.
  4. After investigating, close the incident:
    1. Click Close.

      A confirmation window opens.

    2. Select the reason for closing the incident:
      • Approved. This means the appropriate measures were taken to eliminate the security threat.
      • Not approved. This means the incident was a false positive and the received events do not indicate a security threat.
    3. Click Close.

    The Closed status will be assigned to the incident. Incidents with this status cannot be edited, and they are displayed in the incidents table only if you selected the Closed check box in the Status drop-down list when filtering the table. You cannot change the status of a closed incident or assign it to another user, but you can aggregate it with another incident.

  5. If required, aggregate the selected incidents with another incident:
    1. Click Merge. In the opened window, select the incident in which all data from the selected incidents should be placed.
    2. Confirm your selection by clicking Merge.

    The incidents will be aggregated.

The incident has been processed.

Page top

[Topic 220444]

Changing incidents

To change the parameters of an incident:

  1. In the Incidents section of the KUMA web interface, click on the incident you want to modify.

    The Incident window opens.

  2. Make the necessary changes to the parameters. All incident parameters that can be set when creating it are available for editing.
  3. Click Save.

The incident will be modified.

Page top

[Topic 220446]

Automatic linking of alerts to incidents

In KUMA, you can configure automatic linking of generated alerts to existing incidents if the alerts and incidents have assets or users in common. If this setting is enabled, when an alert is created, the program searches for incidents created within a specified time interval that include assets or users from the alert. The program also checks whether the generated alert pertains to the tenants specified in the incidents' Available tenants parameter. If a matching incident is found, the program links the generated alert to that incident.
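
The matching logic described above can be sketched as follows (field names are hypothetical; the actual check is performed by the KUMA Core):

```python
from datetime import datetime, timedelta

# Hypothetical alert/incident records; only the fields needed for the check.
def find_incident_for_alert(alert, incidents, max_age_hours):
    """Return the first incident the alert can be linked to, or None."""
    now = datetime.now()
    for incident in incidents:
        # Incidents must not be older than the configured interval.
        if now - incident["created"] > timedelta(hours=max_age_hours):
            continue
        # The alert's tenant must be listed in the incident's "Available tenants".
        if alert["tenant"] not in incident["available_tenants"]:
            continue
        # Link by assets and/or link by accounts: any shared asset or user matches.
        if (set(alert["assets"]) & set(incident["assets"])
                or set(alert["users"]) & set(incident["users"])):
            return incident
    return None
```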

To set up automatic linking of alerts to incidents:

  1. In the KUMA web interface, open Settings → Incidents → Automatic linking of alerts to incidents.
  2. Select the Enable check box in the Link by assets and/or Link by accounts parameter blocks depending on the types of connections between incidents and alerts that you are looking for.
  3. Define the Incidents must not be older than value for the parameters that you want to use when searching links. The generated alerts will be compared with incidents no older than the specified interval.

Automatic linking of alerts to incidents is configured.

To disable automatic linking of alerts to incidents,

In the KUMA web interface, under Settings → Incidents → Automatic linking of alerts to incidents, select the Disabled check box.

Page top

[Topic 220450]

Categories and types of incidents

For your convenience, you can assign categories and types to incidents. If an incident has been assigned a RuCERT category, it can be exported to RuCERT.

Categories and types of incidents that can be exported to RuCERT

The categories and types of incidents that can be exported to RuCERT are listed below:

  • Computer incident notification:
    • Involvement of a controlled resource in malicious software infrastructure
    • Slowed operation of the resource due to a DDoS attack
    • Malware infection
    • Network traffic interception
    • Use of a controlled resource for phishing
    • Compromised user account
    • Unauthorized data modification
    • Unauthorized disclosure of information
    • Publication of illegal information on the resource
    • Distribution of spam messages from the controlled resource
    • Successful exploitation of a vulnerability
  • Notification about a computer attack:
    • DDoS attack
    • Unsuccessful authorization attempts
    • Malware injection attempts
    • Attempts to exploit a vulnerability
    • Publication of fraudulent information
    • Network scanning
    • Social engineering
  • Notification about a detected vulnerability:
    • Vulnerable resource

The categories of incidents can be viewed or changed under Settings → Incidents → Incident types, where they are displayed as a table. By clicking the column headers, you can change the table sorting options. The incident types table contains the following columns:

  • Category—a common characteristic of an incident or cyberattack. The table can be filtered by the values in this column.
  • Type—the class of the incident or cyberattack.
  • RuCERT category—incident type according to RuCERT nomenclature. Incidents that have been assigned custom types and categories cannot be exported to RuCERT. The table can be filtered by the values in this column.
  • Vulnerability—specifies whether the incident type indicates a vulnerability.
  • Created—the date the incident type was created.
  • Updated—the date the incident type was modified.

To add an incident type:

  1. In the KUMA web interface, under Settings → Incidents → Incident types, click Add.

    The incident type creation window will open.

  2. Fill in the Type and Category fields.
  3. If the created incident type matches the RuCERT nomenclature, select the RuCERT category check box.
  4. If the incident type indicates a vulnerability, select the Vulnerability check box.
  5. Click Save.

The incident type has been created.

Page top

[Topic 221855]

Exporting incidents to RuCERT

Incidents created in KUMA can be exported to the National Coordinating Center for Computer Incidents (also known as RuCERT). Prior to exporting incidents, you must configure integration with RuCERT. An incident can be exported only once. Incidents can be exported to RuCERT by users that have the Can interact with RuCERT check box selected in the user settings.

You can export incidents to RuCERT only if your application license includes the GosSOPKA module (GosSOPKA is a Russian government system for the detection, prevention, and mitigation of computer attacks).

To export an incident to RuCERT:

  1. In the Incidents section of the KUMA web interface, select the incident that you want to export.
  2. Click Export to RuCERT.

    This opens the export settings window.

  3. Specify the settings on the Basic tab of the Export to RuCERT window:
    • Category and Type—specify the type and category of the incident. Only incidents of specific categories and types can be exported to RuCERT.


    • TLP (required)—assign a Traffic Light Protocol marker to an incident to define the nature of information about the incident. The default value is RED. Available values:
      • WHITE—disclosure is not restricted.
      • GREEN—disclosure is only for the community.
      • AMBER—disclosure is only for organizations.
      • RED—disclosure is only for a specific group of people.
    • Affected system name (required)—specify the name of the information resource where the incident occurred. You can enter up to 500,000 characters in the field.
    • Affected system category (required)—specify the critical information infrastructure (CII) category of your organization. If your organization does not have a CII category, select Information resource is not a CII object.
    • Affected system function (required)—specify the scope of activity of your organization. The value specified in RuCERT integration settings is used by default.

      Available company business sectors

      • Nuclear energy
      • Banking and other financial market sectors
      • Mining
      • Federal/municipal government
      • Healthcare
      • Metallurgy
      • Science
      • Defense industry
      • Education
      • Aerospace industry
      • Communication
      • Mass media
      • Fuel and power
      • Transportation
      • Chemical industry
      • Other
    • Location (required)—select the location of your organization from the drop-down list.
    • Affected system has Internet connection—select this check box if the assets related to this incident have an Internet connection. In addition, after completing the export, provide technical information about the computer incident, computer attack, or vulnerability in the notification card in your GosSOPKA account dashboard. By default, this check box is cleared.
    • Product info (required)—this table becomes available if you selected Notification about a detected vulnerability as the incident category.

      You can use the Add new element button to add a string to the table. In the Name column, you must indicate the name of the application (for example, MS Office). Specify the application version in the Version column (for example, 2.4).

    • Vulnerability ID—if necessary, specify the identifier of the detected vulnerability. For example, CVE-2020-1231.

      This field becomes available if you selected Notification about a detected vulnerability as the incident category.

    • Product category—if necessary, specify the name and version of the vulnerable product. For example, Microsoft operating systems and their components.

      This field becomes available if you selected Notification about a detected vulnerability as the incident category.

  4. If required, define the settings on the Advanced tab of the Export to RuCERT window.

    The available settings on the tab depend on the selected category and type of incident:

    • Detection tool—specify the name of the product that was used to register the incident. For example, KUMA 1.5.
    • Assistance required—select this check box if you need help from GosSOPKA employees.
    • Incident end time—specify the date and time when normal operation of the critical information infrastructure (CII object) was restored after a computer incident, when a computer attack ended, or when a vulnerability was fixed.
    • Availability impact—assess the degree of impact that the incident had on system availability:
      • High
      • Low
      • None
    • Integrity impact—assess the degree of impact that the incident had on system integrity:
      • High
      • Low
      • None
    • Confidentiality impact—assess the degree of impact that the incident had on data confidentiality:
      • High
      • Low
      • None
    • Custom impact—specify other significant impacts from the incident.
    • City—indicate the city where your organization is located.
  5. Click Export.
  6. Confirm the export.

Information about the incident is submitted to RuCERT, and the Export to RuCERT incident parameter is changed to Exported successfully. If changes need to be made to the exported incident, you should do this in your GosSOPKA account dashboard.

Page top

[Topic 260687]

Sending incidents involving personal information leaks to RuCERT

KUMA 2.1.x does not have a separate section with incident parameters for submitting information about personal information leaks to RuCERT. Because such incidents occur and information about them must be submitted to RuCERT, use the following workaround.

To submit incidents involving personal information leaks:

  1. In the KUMA web interface, in the Incidents section, when creating an incident involving a personal information leak, in the Category field, select Notification about a computer incident.
  2. In the Type field, select one of the options that involves submission of information about personal information leak:
    • Malware infection
    • Compromised user account
    • Unauthorized disclosure of information
    • Successful exploitation of a vulnerability
    • Event is not related to a computer attack
  3. In the Description field, enter the following text: "The incident involves a leak of personal information. Please set the status to 'More information required'."
  4. Click Save.
  5. Export the incident to RuCERT.

After RuCERT employees set the status to "More information required" and return the incident for further editing, in your RuCERT account, you can provide additional information about the incident in the "Details of the personal information leak" section.

Page top

[Topic 229568]

Working in hierarchy mode

When multiple KUMA instances are deployed in various organizations, they may be merged into a hierarchical structure. Interaction between parent and child instances of KUMA (or nodes) provides the following capabilities:

  • Child KUMA nodes relay data on other descendant nodes to parent KUMA nodes. This enables the parent node to see its entire branch of the hierarchical tree.
  • Parent KUMA nodes receive data on incidents from descendant nodes, and can also receive data on incident-related alerts and events if the corresponding settings are enabled in a child node.
  • Child KUMA nodes do not receive information about upstream nodes, except information about their parent KUMA node.

Parent and child nodes interact via API. Authentication relies on self-signed certificates, which the administrators of the parent and child organization must exchange over an encrypted channel when they connect to each other.
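Conceptually, this authentication scheme means each node uses the peer's exchanged self-signed certificate as its sole trust anchor when connecting. The following Python sketch illustrates the idea with the standard ssl module; the function name and certificate path are hypothetical and do not represent KUMA's actual implementation:

```python
import ssl

# Sketch: a client-side TLS context that trusts only a peer node's
# self-signed certificate. The PEM path is hypothetical; this is an
# illustration of the scheme, not KUMA's actual code.
def pinned_context(peer_cert_pem=None):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = True            # the peer's FQDN must match its certificate
    ctx.verify_mode = ssl.CERT_REQUIRED  # unauthenticated peers are rejected
    if peer_cert_pem:
        # Use the exchanged self-signed certificate as the only trust anchor.
        ctx.load_verify_locations(cafile=peer_cert_pem)
    return ctx

print(pinned_context().verify_mode == ssl.CERT_REQUIRED)  # True
```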

One parent node can have more than one child node. A child node can be connected to only one parent node. A parent node cannot be a child node of its descendants.
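These constraints amount to requiring that the hierarchy remain a tree. A minimal sketch of such a validation, with hypothetical node names and data structures (KUMA's internal checks are not documented here):

```python
# Sketch: validate that linking child -> parent keeps the hierarchy a tree.
# Node names and the parent map are hypothetical, not part of any KUMA API.
def can_connect(parent_of: dict, child: str, parent: str) -> bool:
    """Return True if making `parent` the parent of `child` is allowed."""
    if child == parent:
        return False              # a node cannot be its own parent
    if child in parent_of:
        return False              # a child node may have only one parent
    # Walk up from `parent`; if we reach `child`, the link would create a
    # cycle, i.e. the parent would become a child of its own descendant.
    node = parent
    while node in parent_of:
        node = parent_of[node]
        if node == child:
            return False
    return True

parent_of = {"branch-a": "hq", "leaf-1": "branch-a"}
print(can_connect(parent_of, "leaf-2", "branch-a"))  # True: a new leaf node
print(can_connect(parent_of, "hq", "leaf-1"))        # False: would create a cycle
```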

General administrator users can configure hierarchy mode in the KUMA web interface under Settings → Hierarchy:

  • On the Node profile tab, you can configure the profile of your node, create a certificate, and enable or disable hierarchy mode.
  • On the Structure tab, you can view your available branch of the hierarchical tree, change the connected nodes, or disconnect them.
  • You can connect parent and child nodes on either of these tabs.

Incidents of child nodes can be viewed by users of all roles in the KUMA web interface under Incidents. In incidents, you can obtain information about their related alerts, events, assets, and users.

In this Help topic

Enabling hierarchy mode for the first time

Creating a node certificate

Connecting nodes into a hierarchical structure

Viewing your own branch of the hierarchy and available nodes

Editing a node profile

Viewing incidents from child nodes

Enabling and disabling hierarchy mode

Page top

[Topic 230727]

Enabling hierarchy mode for the first time

When enabling hierarchy mode for the first time, you must complete the profile of your node.

To complete the profile of your node:

  1. In the KUMA web interface, open Settings → Hierarchy → Node profile.
  2. In the Organization name field, indicate the name of your company (1–128 characters). This name will be used for the name of your node in the hierarchy.

    To change the organization name, you will have to regenerate the certificate of your node and replace it on the nodes that you are connected to.

  3. In the FQDN field, specify the FQDN of your node.
  4. If necessary, use the Proxy drop-down list to select the proxy server resource that should be used to communicate with other nodes. You can create a proxy server by using the AddResource button. The selected proxy server can be changed by clicking the EditResource button.

    The user account credentials entered into the proxy server URL can contain only the following characters: letters of the English alphabet, numbers, and special characters ("-", ".", "_", ":", "~", "!", "$", "&", "\", "(", ")", "*", "+", ",", ";", "=", "%", "@"). The URL in the proxy server resource is indicated by using the secret resource, which is selected from the Use URL from the secret drop-down list.

  5. Click Generate certificate.

The profile of your KUMA node is complete and hierarchy mode is enabled. When hierarchy mode is enabled, a certificate is automatically created for authentication of your node. You can use the SaveButton icon to download the certificate and then forward it to other nodes over an encrypted channel to create a connection between these nodes.
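The character restriction on proxy URL credentials (step 4 above) can be expressed as a simple validation. The following Python sketch is purely illustrative; the function name is not a KUMA API:

```python
import re

# Sketch: check proxy URL credentials against the permitted character set
# described above (English letters, digits, and the listed special characters).
ALLOWED = re.compile(r'[A-Za-z0-9\-._:~!$&\\()*+,;=%@]*')

def credentials_allowed(value: str) -> bool:
    return ALLOWED.fullmatch(value) is not None

print(credentials_allowed("svc_user:p%40ss"))  # True
print(credentials_allowed("pass word"))        # False: space is not permitted
```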

Page top

[Topic 229603]

Creating a node certificate

Nodes in the hierarchy are authenticated using self-signed certificates of the nodes. A certificate contains the name of the organization and its FQDN.

The certificate is created when hierarchy mode is enabled, but you can also recreate a certificate if necessary. The certificate must be recreated whenever you change the name of a node or its FQDN.

To create a node certificate:

  1. In the KUMA web interface, open Settings → Hierarchy → Node profile.

    You will see a window containing the settings of your node in the hierarchy.

  2. Click the Generate certificate button.

    The certificate creation window opens.

  3. In the FQDN field, specify the FQDN of your node.
  4. In the Organization name field, indicate the name of your company (1–128 characters). This name will be used for the name of your node in the hierarchy.
  5. Close the window by clicking Save.

The node certificate will be created and can be downloaded by clicking the SaveButton icon. Then it can be transferred to other nodes over an encrypted channel to create a connection between these nodes.

Page top

[Topic 229596]

Connecting nodes into a hierarchical structure

Prior to connecting nodes, you should make sure that they have hierarchy mode enabled, their node profiles have been configured, and certificates have been created for the nodes. Parent and child nodes must exchange their certificates over encrypted communication channels.

Connection of nodes in a hierarchy consists of the following steps:

  • The child node connects to the parent node.
  • The parent node connects the child node.

Prior to connecting nodes, make sure that the system time on the machines is synchronized with the NTP server. For more details, please refer to the appropriate documentation for Oracle Linux and for Astra Linux Special Edition.

When a connection is established, the parent node polls its child nodes for their available hierarchy data every 5 minutes, thereby identifying the structure of its available branch of the hierarchical tree. This data is displayed in the KUMA web interface under Settings → Hierarchy → Structure after the web page is refreshed.

Information about the hierarchical structure can be manually refreshed by using the Update structure button. To display the updated data, you must refresh the page of your web browser.

In this section

Connecting to a parent node

Connecting a child node

Disconnecting a node

Changing a node

Errors when connecting nodes

Page top

[Topic 229673]

Connecting to a parent node

To connect to a parent node:

  1. In the KUMA web interface, open Settings → Hierarchy and click the Add parent node button.

    The Connect to parent node window opens.

  2. Use the Upload certificate button to upload the certificate to KUMA.

    The window will display a description of the certificate and indicate the organization that issued it and its FQDN.

  3. If necessary, use the Port field to specify the port used for accessing the parent node.
  4. Click Save.

You are now connected to the parent node. It can now add your node as a child node so that it will receive data on your child nodes and view your incidents.

Page top

[Topic 229674]

Connecting a child node

If you connected a parent node, you will be able to add child nodes only after your parent node adds you as a child node. Prior to connecting a child node, make sure that it has added your node as the parent node.

To connect a child node:

  1. In the KUMA web interface, open Settings → Hierarchy and click the Add child node button.

    The Connect to child node window opens.

  2. Use the Upload certificate button to upload the certificate of the child node to KUMA.

    The window will display a description of the certificate and indicate the organization that issued it and its FQDN.

  3. If necessary, use the Port field to specify the port used for accessing the child node.
  4. Click Save.

The child node is added and displayed on the Settings → Hierarchy → Structure tab. This tab also displays the descendants of the child node. You can view the incidents of your child nodes and their descendants.

Page top

[Topic 229676]

Disconnecting a node

You can disconnect from a parent node or child node. However, it is impossible to disconnect from nodes that are descendants of your child nodes.

To disconnect from a node:

  1. In the KUMA web interface, open Settings → Hierarchy and select the Structure tab.

    The hierarchical structure will be displayed.

  2. Select the node that you want to disconnect from.

    The right side of the window will display the details area containing information about this node.

  3. Click Disconnect.

You have disconnected from the node. If you have disconnected from a parent node, it will no longer receive information about your child nodes and incidents. If you have disconnected from a child node, you will no longer receive information about its child nodes and its incidents.

Page top

[Topic 229677]

Changing a node

If the name and/or FQDN of a node has changed, this node must reissue a certificate. Then the procedure for connecting the nodes must be repeated. Outdated nodes must be disconnected.

The port for connecting to nodes can be changed in the details area of the node without reissuing a certificate.

To change the settings for connecting to a node:

  1. In the KUMA web interface, open the Structure tab under Settings → Hierarchy and select the relevant node.

    The right side of the window will display the details area of the node.

  2. In the Port field, enter the required port.
  3. Change the settings for email notifications regarding incidents on the child node:
    • If you need to disable notifications, clear the Monitoring incidents check box.
    • If you need to enable notifications, select the Monitoring incidents check box and use the input field to add the necessary email addresses.

      To send email notifications, you need to configure a connection to the SMTP server.

  4. Click Save.

The node connection settings have been changed.

Page top

[Topic 230313]

Errors when connecting nodes

Errors that occur when connecting nodes may be incompletely displayed in the KUMA web interface. You can use the developer's console of your browser to view the full server report.

The list below describes the errors that may arise when connecting KUMA nodes into a hierarchy, and includes recommendations on resolving those errors.

Errors that occur when establishing a connection to a node are displayed in pop-up windows in the lower part of the screen. Errors in already connected nodes can be viewed in the KUMA web interface under Settings → Hierarchy → Structure. The error text is displayed when you move your mouse cursor over the red triangle icon next to the node that encountered the error.

  • Error message: failed to exchange settings with child: <Post request for child node address>: connect: connection refused

    Possible cause: connection refused; there was an attempt to add a child node that did not add the certificate of the parent node.

    Recommended remediation: first connect the parent node on the child node, then add the child node on the parent node. Also verify that hierarchy mode is enabled on the child node.

  • Error message: You cannot add your own KUMA node as a parent node or child node.

    Possible cause: a cyclical structure of KUMA nodes cannot be created.

    Recommended remediation: make sure that the hierarchical structure you are creating is a tree structure.

  • Error message: corrupted certificate

    Possible cause: invalid certificate.

    Recommended remediation: check the certificate file.

  • Error message: failed to exchange settings with child: <Post request for child node address>: context deadline exceeded

    Possible cause: the connection could not be established because the response timeout was exceeded.

    Recommended remediation: verify that the child node machine is running.

  • Error message: failed to exchange settings with child: <Post request for child node address>: x509: certificate has expired or is not yet valid

    Possible cause: the connection was refused due to an invalid certificate.

    Recommended remediation: make sure that the child node certificate is valid and that the system time of the nodes is synchronized with the NTP server.

  • Error message: failed to exchange settings with child: <Post request for child node address>: certificate signed by unknown authority (possibly because of "x509: invalid signature: parent certificate cannot sign this kind of certificate" while trying to verify candidate authority certificate "<node name>")

    Possible cause: the connection was refused due to an invalid certificate.

    Recommended remediation: make sure that the parent node certificate is valid.

  • Error message: failed to exchange settings with child: <Post request for child node address>: dial tcp: lookup <child node address> on <IP address of child node>: no such host

    Possible cause: the child node certificate contains a non-existent FQDN.

    Recommended remediation: make sure that the child node certificate is valid.

  • Error message: Already exists

    Possible cause: this node already exists within the structure.

    Recommended remediation: check the hierarchical structure that you are trying to build.

  • Error message: The parent node is already indicated as a child node.

    Recommended remediation: do not connect a parent node that is already a child node within this hierarchy.

  • Error message: Failed to query branch {"branchID":"<branch ID>","branchName":"<node name>","branchFQDN":"<node FQDN>","error":"Get \"<URL of child node>/children\": remote error: tls: bad certificate"}

    Possible cause: the child node deleted the parent node.

    Recommended remediation: the child node must connect the parent node again.

  • Error message: error: <Post request for child node address>: read tcp <IP addresses of nodes>: read: connection reset by peer

    Possible cause: invalid ports are indicated in the node connection settings.

    Recommended remediation: make sure that the correct port is indicated in the node settings and that a valid certificate is being used.

  • Error message: "error":"Get \"<Child node URL>/children\": proxyconnect tcp: x509: certificate signed by unknown authority"

    Possible cause: incorrect proxy server settings were used when connecting to the node.

    Recommended remediation: make sure that correct proxy server settings are used.

Page top

[Topic 229678]

Viewing your own branch of the hierarchy and available nodes

In the KUMA web interface, under Settings → Hierarchy, select the Structure tab to view your branch of the hierarchical tree extending from the parent node to all descendants of its child nodes. Your node in the hierarchy is highlighted in green.

When you click a node of the branch, the right side of the window shows the node details area in which you can do the following:

  • Change the port for connecting to the parent node or child node.
  • Disable a parent node or child node.
  • Change the settings of email notifications regarding incidents for child nodes and their descendants.
Page top

[Topic 229679]

Editing a node profile

You can modify the profile settings of your node.

To change the settings of your node:

  1. In the KUMA web interface, open Settings → Hierarchy → Node profile.
  2. If necessary, use the Proxy drop-down list to select the proxy server resource that should be used to communicate with other nodes. You can create a proxy server by using the AddResource button. The selected proxy server can be changed by clicking the EditResource button.

    The user account credentials entered into the proxy server URL can contain only the following characters: letters of the English alphabet, numbers, and special characters ("-", ".", "_", ":", "~", "!", "$", "&", "\", "(", ")", "*", "+", ",", ";", "=", "%", "@"). The URL in the proxy server resource is indicated by using the secret resource, which is selected from the Use URL from the secret drop-down list.

  3. If necessary, use the Port field to enter the port used for accessing your node. Make sure that access to the port is open.
  4. If necessary, use the Timeout field to indicate how many seconds to wait for a response from nodes when attempting a connection. The default value is 60.
  5. If necessary, select or clear the following check boxes: Do not include events to the incidents relayed to parent nodes or Do not include alerts to the incidents relayed to parent nodes. These check boxes are cleared by default.
  6. Click Save.

The settings of your node are changed.

If you want to change the FQDN or name of your node, regenerate a certificate for the node.

Page top

[Topic 229682]

Viewing incidents from child nodes

If hierarchy mode is enabled, you can view the Incidents section to inspect the incidents that were created on child nodes and their descendants. The incidents table displays the Branch column, which can be used to filter incidents based on the nodes in which they were created. By default, the incidents table displays the incidents that were created on your node.

To select the nodes whose incidents you want to view:

  1. In the KUMA web interface, open the Incidents section.
  2. Click the header of the Branch column and click the parent-category icon in the opened window.

    The right side of the window will display the details area containing the hierarchical structure of the organization. You can use the more button to expand or collapse all branches of the structure, or select all KUMA nodes.

  3. Select the relevant nodes and click Save.

The incidents table displays the incidents that were created on the nodes that you selected.

When you click an incident, a window opens with detailed information about the incident. The data is read-only. An incident from another node cannot be edited or processed.

Special considerations when viewing data on an incident created on a different node:

  • The Related alerts section of the incident window contains information only if the child node is configured to forward data on incident-related alerts to the parent node.

    When you click on the name of an incident-related alert, a window opens with detailed information about this alert. This data is also read-only. An alert from another node cannot be edited or processed.

  • The Related events section in the window of an alert related to an incident of another node contains information only if the child node is configured to forward data on incident-related events to the parent node.

    In this case, you can use the Find in events button to open the events table and search for relevant events. However, you cannot select the storage, and there are limitations applied to SQL queries when searching events in drilldown analysis mode. This mode employs data enrichment (for example, using Kaspersky Threat Intelligence Portal, Kaspersky CyberTrace or Active Directory). The results of Kaspersky Threat Intelligence Portal data enrichment performed on child nodes are not available on parent nodes.

See also:

About incidents

About alerts

About events

Exporting incidents to RuCERT

Page top

[Topic 229681]

Enabling and disabling hierarchy mode

To enable or disable hierarchy mode:

  1. In the KUMA web interface, open Settings → Hierarchy → Node profile.
  2. Enable or disable hierarchy mode:
    • If you want to enable hierarchy mode, clear the Disabled check box.
    • If you want to disable hierarchy mode, select the Disabled check box.
  3. Click Save.

Hierarchy mode will be enabled or disabled according to your selection.

Page top

[Topic 218046]

Working with alerts

In the Alerts section of the KUMA web interface, you can view and process the alerts registered by the program. Alerts can be filtered. When you click the alert name, a window with its details opens.

Displayed date format:

  • English localization: YYYY-MM-DD.
  • Russian localization: DD.MM.YYYY.

Alert overflow

An alert together with its related events cannot exceed 16 MB in size. When this limit is reached:

  • New events can no longer be linked to the alert.
  • The alert has an Overflowed tag displayed in the Detected column. The same tag is displayed in the Details on alert section of the alert details window.

Overflowed alerts should be processed as soon as possible.
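The overflow behavior described above can be sketched as follows; the Alert class here is hypothetical and only illustrates the 16 MB cap, not KUMA's internal implementation:

```python
# Sketch: how a cap like the 16 MB alert size limit can be enforced.
# The Alert class is hypothetical, for illustration only.
ALERT_SIZE_LIMIT = 16 * 1024 * 1024  # 16 MB

class Alert:
    def __init__(self):
        self.size = 0          # combined size of the alert and linked events
        self.overflowed = False

    def try_link_event(self, event_size: int) -> bool:
        """Link an event unless doing so would exceed the size limit."""
        if self.overflowed or self.size + event_size > ALERT_SIZE_LIMIT:
            self.overflowed = True  # the alert gets the Overflowed tag
            return False            # new events are no longer linked
        self.size += event_size
        return True

alert = Alert()
print(alert.try_link_event(10 * 1024 * 1024))  # True: 10 MB fits
print(alert.try_link_event(10 * 1024 * 1024))  # False: 20 MB would exceed the cap
print(alert.overflowed)                        # True
```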

In this Help topic

Filtering alerts

Viewing details on an alert

Processing alerts

Drilldown analysis

Alert storage period

Alert segmentation rules

Alert notifications

Page top

[Topic 217874]

Filtering alerts

In KUMA, you can filter and sort alerts by using the tools in the Alerts section.

The filter configuration can be saved. Existing filter configurations can be deleted.

In this section

Configuring alerts table

Saving and selecting alert filter configurations

Deleting alert filter configurations

Page top

[Topic 217769]

Configuring alerts table

The main part of the Alerts section shows a table containing information about registered alerts. You can click column titles to open drop-down lists with tools for filtering alerts and configuring the alerts table:

  • Priority—shows the importance of a possible security threat: Critical, High, Medium, or Low.
  • Name—alert name.

    If the Overflowed tag is displayed next to the alert name, the alert size has reached or is about to reach the limit, and the alert should be processed as soon as possible.

  • Status—current status of an alert:
    • New—a new alert that hasn't been processed yet.
    • Assigned—the alert has been processed and assigned to a security officer for investigation or response.
    • Closed—the alert was closed. Either it was a false alert, or the security threat was eliminated.
    • Escalated—an incident was generated based on this alert.
  • Assigned to—the name of the security officer the alert was assigned to for investigation or response.
  • Incident—name of the incident to which this alert is linked.
  • First seen—the date and time when the first correlation event of the event sequence was created, triggering creation of the alert.
  • Last seen—the date and time when the last correlation event of the event sequence was created, triggering creation of the alert.
  • Categories—categories of alert-related assets with the highest severity. No more than three categories are displayed.
  • Tenant—the name of the tenant that owns the alert.

In the Search field, you can enter a regular expression for searching alerts based on their related assets, users, tenants, and correlation rules. Parameters that can be used for a search:

  • Assets: name, FQDN, IP address.
  • Active Directory accounts: attributes displayName, SAMAccountName, and UserPrincipalName.
  • Correlation rules: name.
  • KUMA users who were assigned alerts: name, login, email address.
  • Tenants: name.

When filtering alerts based on a specific parameter, the corresponding header of the alerts table is highlighted in yellow.
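The kind of regular-expression search described above can be illustrated with a short Python sketch; the alert records and field names are hypothetical sample data, not KUMA's actual schema:

```python
import re

# Sketch: regex search across several alert attributes, mirroring the Search
# field described above. The records below are made-up sample data.
alerts = [
    {"asset_fqdn": "dc01.example.local", "user": "j.smith", "rule": "Bruteforce"},
    {"asset_fqdn": "web02.example.local", "user": "svc_backup", "rule": "Port scan"},
]

def search(pattern: str, records):
    """Return records where any attribute matches the regular expression."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [r for r in records if any(rx.search(v) for v in r.values())]

print([r["rule"] for r in search(r"^dc\d+", alerts)])  # ['Bruteforce']
```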

Page top

[Topic 217983]

Saving and selecting alert filter configurations

In KUMA, you can save changes to the alert table settings as filters. Filter configurations are saved on the KUMA Core server and are available to all KUMA users of the tenant for which they were created.

To save the current filter configuration settings:

  1. In the Alerts section of KUMA, open the Filters drop-down list.
  2. Select Save current filter.

    A field will appear for entering the name of the new filter and selecting the tenant that will own it.

  3. Enter a name for the filter configuration. The name must be unique for alert filters, incident filters, and event filters.
  4. In the Tenant drop-down list, select the tenant that will own the filter and click Save.

The filter configuration is now saved.

To select a previously saved filter configuration:

  1. In the Alerts section of KUMA, open the Filters drop-down list.
  2. Select the configuration you want.

The filter configuration is now active.

You can select the default filter by putting an asterisk to the left of the required filter configuration name in the Filters drop-down list.

To reset the current filter settings:

Open the Filters drop-down list and select Clear filters.

Page top

[Topic 217831]

Deleting alert filter configurations

To delete a previously saved filter configuration:

  1. In the Alerts section of KUMA, open the Filters drop-down list.
  2. Click the delete-icon button next to the configuration you want to delete.
  3. Click OK.

The filter configuration is now deleted for all KUMA users.

Page top

[Topic 217723]

Viewing details on an alert

To view details on an alert:

  1. In the program web interface window, select the Alerts section.

    The alerts table is displayed.

  2. Click the name of the alert whose details you want to view.

    This opens a window containing information about the alert.

The upper part of the alert details window contains a toolbar and shows the alert severity and the name of the user to whom the alert is assigned. In this window, you can process the alert: change its severity, assign it to a user, close it, or create an incident based on it.

Details on alert section

This section lets you view basic information about an alert. It contains the following data:

  • Correlation rule priority—the priority of the correlation rule that triggered the creation of the alert.
  • Max asset category priority—the highest priority of an asset category assigned to assets related to this alert. If multiple assets are related to the alert, the largest value is displayed.
  • Linked to incident—if the alert is linked to an incident, the name and status of the incident are displayed.
  • First seen—the date and time of creation of the first correlation event in the event sequence that triggered creation of the alert.
  • Last seen—the date and time when the last correlation event of the event sequence was created, triggering creation of the alert.
  • Alert ID—the unique identifier of an alert in KUMA.
  • Tenant—the name of the tenant that owns the alert.
  • Correlation rule—the name of the correlation rule that triggered the creation of the alert. The rule name is represented as a link that can be used to open the settings of this correlation rule.
  • Overflowed—this tag means that the alert size has reached or will soon reach the limit of 16 MB and the alert must be processed as soon as possible. New events are not added to the overflowed alerts, but you can click the All possible related events link to filter all events that could be related to the alert if there were no overflow.

Related events section

This section contains a table of events related to the alert. If you click the arrow icon next to a correlation rule, the base events from this correlation rule are displayed. Events can be sorted by priority and time.

Selecting an event in the table opens the details area containing information about the selected event. The details area also displays the Detailed view button, which opens a window containing information about the correlation event.

The Find in events links below correlation events and the Find in events button to the right of the section header are used for drilldown analysis.

You can use the Download events button to download information about related events into a CSV file (in UTF-8 encoding). The file contains columns that are populated in at least one related event.

Some CSV file editors interpret the separator value (for example, \n) in the CSV file exported from KUMA as a line break, not as a separator. This may disrupt the line division of the file. If you encounter a similar issue, you may need to additionally edit the CSV file received from KUMA.
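A CSV parser that honors quoting, such as Python's standard csv module, reads such a file correctly even when a field contains an embedded newline. A minimal illustration with made-up column names and values:

```python
import csv
import io

# Sketch: parse a KUMA-style CSV export where a quoted field contains an
# embedded newline. csv.reader keeps the quoted field intact instead of
# splitting the row, avoiding the editor issue described above.
exported = 'Name,Message\r\nalert-1,"line one\nline two"\r\n'
rows = list(csv.reader(io.StringIO(exported)))
print(len(rows))   # 2: the header plus one data row
print(rows[1][0])  # the first data field, 'alert-1'
```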

Related endpoints section

This section contains a table of hosts related to the alert. Host information comes from events that are related to the alert. You can search for endpoints by using the Search for IP addresses or FQDN field. Assets can be sorted using the Count and Endpoint columns.

This section also displays the assets related to the alert. Clicking the name of the asset opens the Asset details window.

You can use the Download assets button to download information about related assets into a CSV file (in UTF-8 encoding). The following columns are available in the file: Count, Name, IP address, FQDN, Categories.

Related users section

This section contains a table of users related to the alert. User information comes from events that are related to the alert. You can search for users using the Search for users field. Users can be sorted by the Count, User, User principal name and Email columns.

You can use the Download users button to download information about related users into a CSV file (in UTF-8 encoding). The following columns are available in the file: Count, User, User principal name, Email, Domain.

Change log section

This section contains entries about changes made to the alert by users. Changes are automatically logged, but it is also possible to add comments manually. Comments can be sorted by using the Time column.

If necessary, you can enter a comment for the alert in the Comment field and click Add to save it.

Page top

[Topic 217956]

Processing alerts

You can change the alert severity, assign an alert to a user, close the alert, or create an incident based on the alert.

To process an alert:

  1. Select required alerts using one of the methods below:
    • In the Alerts section of the KUMA web interface, click the alert whose information you want to view.

      The Alert window opens with the alert processing toolbar at the very top.

    • In the Alerts section of the KUMA web interface, select the check box next to the required alert. It is possible to select more than one alert.

      Alerts with the closed status cannot be selected for processing.

      The action toolbar appears at the bottom of the window.

  2. If you want to change the priority of an alert, select the required value in the Priority drop-down list:
    • Low
    • Medium
    • High
    • Critical

    The priority of the alert changes to the selected value.

  3. If you want to assign an alert to a user, select the relevant user from the Assign to drop-down list.

    You can assign the alert to yourself by selecting Me.

    The status of the alert changes to Assigned and the name of the selected user is displayed in the Assign to drop-down list.

  4. If you want to create an incident based on the alert:
    1. Click Create incident.

      The window for creating an incident will open. The alert name is used as the incident name.

    2. Update the desired incident parameters and click the Save button.

    The incident is created, and the alert status is changed to Escalated. An alert can be unlinked from an incident by selecting it and clicking Unlink.

  5. If you want to close the alert:
    1. Click Close alert.

      A confirmation window opens.

    2. Select the reason for closing the alert:
      • Responded. This means the appropriate measures were taken to eliminate the security threat.
      • Incorrect data. This means the alert was a false positive and the received events do not indicate a security threat.
      • Incorrect correlation rule. This means the alert was a false positive and the received events do not indicate a security threat. The correlation rule may need to be updated.
    3. Click OK.

    The status of the alert changes to Closed. Alerts with this status are no longer updated with new correlation events and are not displayed in the alerts table unless the Closed check box is selected in the Status drop-down list in the alerts table. You cannot change the status of a closed alert or assign it to another user.

Page top

[Topic 217847]

Drilldown analysis

Drilldown analysis is used when you need to find more information about the threat that an alert is warning you about: whether the threat is real, where it is coming from, which elements of the network environment are affected by it, and how the threat should be dealt with. Studying the events related to the correlation events that triggered an alert can help you determine the course of action.

Drilldown mode is enabled in KUMA when you click the Find in events link in the alert window or the correlation event window. When drilldown mode is enabled, the events table is shown with filters automatically set to match the events from the alert or correlation event. The filters also match the time period of the alert duration or the time when the correlation event was registered. You can change these filters to find other events and learn more about the processes related to the threat.

An additional EventSelector drop-down list becomes available in drilldown mode:

  • All events—view all events.
  • Related to alert (selected by default)—view only events related to the alert.

    When filtering events related to an alert, there are limitations on the complexity of SQL search queries.

You can manually link events to alerts. Only events that are not related to the alert can be linked to it.

You can create and save an event filter configuration in drilldown mode. When this filter is used outside of drilldown mode, all events that match the filter criteria are selected, regardless of whether they are related to the alert that was selected for drilldown analysis.

To link a base event to an alert:

  1. In the Alerts section of the KUMA web interface, click the alert that you want to link to the event.

    The Alert window opens.

  2. In the Related events section click the Find in events button.

    The events table opens with active filters matching the data and time period of the events related to the alert, and the table columns show the settings used by the correlation rule that created the alert. A Link to alert column is also added to the events table, marking the events that are linked to the alert.

  3. In the EventSelector drop-down list select All events.
  4. Modify the filters to find the event you want to link to the alert.
  5. Select the event you want, and click the Link to alert button at the bottom of the event details area.

The event is linked to the alert. You can unlink the event from the alert by clicking the Unlink from alert button in the event details area.

When an event is linked to or unlinked from the alert, a corresponding entry is added to the Change log in the Alert window. You can click the link in this entry to open the event details area, where you can link or unlink the event by using the Link to alert and Unlink from alert buttons.

Page top

[Topic 222206]

Alert storage period

Alerts are stored in KUMA for a year by default. This period can be changed by editing the application startup parameters in the /usr/lib/systemd/system/kuma-core.service file on the KUMA Core server.

To change the storage period for alerts:

  1. Log in to the OS of the server where the KUMA Core is installed.
  2. In the /usr/lib/systemd/system/kuma-core.service file, edit the following string by inserting the necessary number of days:

    ExecStart=/opt/kaspersky/kuma/kuma core --alerts.retention <number of days to keep alerts> --external :7220 --internal :7210 --mongo mongodb://localhost:27017

  3. Restart KUMA by running the following commands in sequence:
    1. systemctl daemon-reload
    2. systemctl restart kuma-core

The storage period for alerts has been changed.
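
The substitution in step 2 can be checked before applying it to the unit file. A sketch of the edit on a sample ExecStart line (the current value 365, matching the default one-year period, and the new value 180 are illustrative):

```python
# Sketch: rewriting the --alerts.retention value in a sample ExecStart
# line. The values 365 (current) and 180 (new) are illustrative.
import re

line = ("ExecStart=/opt/kaspersky/kuma/kuma core --alerts.retention 365 "
        "--external :7220 --internal :7210 --mongo mongodb://localhost:27017")
updated = re.sub(r"--alerts\.retention \d+", "--alerts.retention 180", line)
print(updated)
```

On the server, the same change is made by editing the file and restarting kuma-core as described above.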

Page top

[Topic 222426]

Alert segmentation rules

In KUMA, you can configure segmentation rules for alerts, that is, define conditions under which separate alerts are created. This can be useful when the correlator groups the same type of correlation events into one common alert, but you want separate alerts to be generated based on some of these events, which differ from the others in some important way.

Segmentation rules are created separately for each tenant. They are displayed in the KUMA web interface under Settings → Alerts → Segmentation rules in a table containing the following columns:

  • Tenant—the name of the tenant that owns the segmentation rules.
  • Updated—date and time of the last update of the segmentation rules.
  • Disabled—this column displays a label if the segmentation rules are turned off.

To create an alert segmentation rule:

  1. In the KUMA web interface, go to Settings → Alerts → Segmentation rules.
  2. Select the tenant for which you would like to create a segmentation rule:
    • If the tenant already has segmentation rules, select it in the table.
    • If the tenant has no segmentation rules, click Add tenant and select the relevant tenant from the Tenant drop-down list.
  3. In the Segmentation rules settings block, click Add and specify the segmentation rule settings:
    • Name (required)—specify the segmentation rule name in this field.
    • Correlation rule (required)—in this drop-down list, select the correlation rule whose events you want to highlight in a separate alert.
    • Selector (required)—in this settings block, you need to specify a condition under which the segmentation rule will be triggered. The conditions are specified in a way similar to filters.
  4. Click Save.

The alert segmentation rule is created. Events matching these rules will be combined into a separate alert with the name of the segmentation rule.

To turn off the segmentation rules:

  1. Open the Settings → Alerts section of the KUMA web interface and select the tenant whose segmentation rules you want to disable.
  2. Select the Disabled check box.
  3. Click Save.

The segmentation rules for the alerts of the selected tenant are disabled.

Page top

[Topic 233518]

Alert notifications

Standard KUMA notifications are sent by email when alerts are generated and assigned. You can configure delivery of alert generation notifications based on a custom email template.

To configure delivery of alert generation notifications based on a custom template:

  1. In the KUMA web interface, open Settings → Alerts → Notification rules.
  2. Select the tenant for which you want to create a notification rule:
    • If the tenant already has notification rules, select it in the table.
    • If the tenant has no notification rules, click Add tenant and select the relevant tenant from the Tenant drop-down list.
  3. In the Notification rules settings block, click Add and specify the notification rule settings:
    • Name (required)—specify the notification rule name in this field.
    • Recipient emails (required)—in this settings block, you can use the Email button to add the email addresses to which you need to send notifications about alert generation. Addresses are added one at a time.

      Cyrillic domains are not supported. For example, a notification cannot be sent to login@домен.us.

    • Correlation rules (required)—in this settings block, you must select one or more correlation rules that, when triggered, will cause notification sending.

      The window displays a tree structure representing the correlation rules from the shared tenant and the user-selected tenant. To select a rule, select the check box next to it. You can select the check box next to a folder to select all correlation rules in that folder and its subfolders.

    • Template (required)—in this settings block, you must select an email template that will be used to create the notifications. To select a template, click the parent-category icon, select the required template in the opened window, and click Save.

      You can create a template by clicking the plus icon or edit the selected template by clicking the pencil icon.

    • Disabled—by selecting this check box, you can disable the notification rule.
  4. Click Save.

The notification rule is created. When an alert is created based on the selected correlation rules, notifications created based on custom email templates will be sent to the specified email addresses. Standard KUMA notifications about the same event will not be sent to the specified addresses.

To disable notification rules for a tenant:

  1. In the KUMA web interface, open Settings → Alerts → Notification rules and select the tenant whose notification rules you want to disable.
  2. Select the Disabled check box.
  3. Click Save.

The notification rules of the selected tenant are disabled.

Page top

[Topic 228267]

Working with events

In the Events section of the KUMA web interface, you can inspect events received by the program to investigate security threats or create correlation rules. The events table displays the data received after the SQL query is executed.

Events can be sent to the correlator for a retroscan.

Displayed date format:

  • English localization: YYYY-MM-DD.
  • Russian localization: DD.MM.YYYY.
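
The two localized formats correspond to the following strftime patterns (a sketch; the sample date is arbitrary):

```python
# Sketch: the two displayed date formats listed above, expressed
# as strftime patterns. The sample date is arbitrary.
from datetime import date

d = date(2024, 3, 7)
print(d.strftime("%Y-%m-%d"))  # English localization: 2024-03-07
print(d.strftime("%d.%m.%Y"))  # Russian localization: 07.03.2024
```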

In this Help topic

Filtering and searching events

Viewing event detail areas

Exporting events

Selecting Storage

Getting events table statistics

Configuring the table of events

Refreshing events table

Opening the correlation event window

See also:

About events

Program architecture

Normalized event data model

Page top

[Topic 228277]

Filtering and searching events

The Events section of the KUMA web interface does not show any data by default. To view events, you need to define an SQL query in the search field and click the SearchField button. The SQL query can be entered manually or it can be generated using a query builder.

Data aggregation and grouping are supported in SQL queries.

You can add filter conditions to an already generated SQL query in the window for viewing statistics, the events table, and the event details area:

  • Changing a query from the Statistics window

    To change the filtering settings in the Statistics window:

    1. Open Statistics details area by using one of the following methods:
      • In the MoreButton drop-down list in the top right corner of the events table select Statistics.
      • In the events table click any value and in the opened context menu select Statistics.

      The Statistics details area appears in the right part of the web interface window.

    2. Open the drop-down list of the relevant parameter and hover your mouse cursor over the necessary value.
    3. Use the plus and minus signs to change the filter settings by doing one of the following:
      • If you want the events selection to include only events with the selected value, click the filter-plus icon.
      • If you want the events selection to exclude all events with the selected value, click the filter-minus icon.

    As a result, the filter settings and the events table will be updated, and the new search query will be displayed in the upper part of the screen.

  • Changing a query from the events table

    To change the filtering settings in the events table:

    1. In the Events section of the KUMA web interface, click any event parameter value in the events table.
    2. In the opened menu, select one of the following options:
      • If you want the table to show only events with the selected value, select Filter by this value.
      • If you want to exclude all events with the selected value from the table, select Exclude from filter.

    As a result, the filter settings and the events table are updated, and the new search query is displayed in the upper part of the screen.

  • Changing a query from the Event details area

    To change the filter settings in the event details area:

    1. In the Events section of the KUMA web interface, click the relevant event.

      The Event details area appears in the right part of the window.

    2. Change the filter settings by using the plus or minus icons next to the relevant settings:
      • If you want the events selection to include only events with the selected value, click the filter-plus icon.
      • If you want the events selection to exclude all events with the selected value, click the filter-minus icon.

    As a result, the filter settings and the events table will be updated, and the new search query will be displayed in the upper part of the screen.

After modifying a query, all query parameters, including the added filter conditions, are transferred to the query builder and the search field.

When you switch to the query builder, the parameters of a query entered manually in the search field are not transferred to the builder, so you will need to create your query again. Also, the query created in the builder does not overwrite the query that was entered into the search string until you click the Apply button in the builder window.

In the SQL query input field, you can enable the display of control characters.

You can also filter events by time period. Search results can be automatically updated.

The filter configuration can be saved. Existing filter configurations can be deleted.

Filter functions are available for users regardless of their roles.

For more details on SQL, refer to the ClickHouse documentation. See also KUMA operator usage and supported functions.

In this section

Generating an SQL query using a builder

Manually creating an SQL query

Limited complexity of queries in drilldown analysis mode

Filtering events by period

Saving and selecting events filter configuration

Deleting event filter configurations

Supported ClickHouse functions

See also:

About events

Storage

Page top

[Topic 228337]

Generating an SQL query using a builder

In KUMA, you can use a query builder to generate an SQL query for filtering events.

To generate an SQL query using a builder:

  1. In the Events section of the KUMA web interface, click the parent-category button.

    The filter constructor window opens.

  2. Generate a search query by providing data in the following parameter blocks:

    SELECT—event fields that should be returned. The * value is selected by default, which means that all available event fields must be returned. To make viewing the search results easier, select the necessary fields in the drop-down list. In this case, the data only for the selected fields is displayed in the table. Note that Select * increases the duration of the request execution, but eliminates the need to manually indicate the fields in the request.

    When selecting an event field, you can use the field on the right of the drop-down list to specify an alias for the column of displayed data, and you can use the right-most drop-down list to select the operation to perform on the data: count, max, min, avg, sum.

    If you are using aggregation functions in a query, you cannot customize the events table display, sort events in ascending or descending order, or receive statistics.

    When filtering by alert-related events in drilldown analysis mode, you cannot perform operations on the data of event fields or assign names to the columns of displayed data.

    • FROM—data source. Select the events value.
    • WHERE—conditions for filtering events.

      Conditions and groups of conditions can be added by using the Add condition and Add group buttons. The AND operator value is selected by default in a group of conditions, but the operator can be changed by clicking on this value. Available values: AND, OR, NOT. The structure of conditions and condition groups can be changed by using the DragIcon icon to drag and drop expressions.

      Adding filter conditions:

      1. In the drop-down list on the left, select the event field that you want to use for filtering.
      2. Select the necessary operator from the middle drop-down list. The available operators depend on the type of value of the selected event field.
      3. Enter the value of the condition. Depending on the selected type of field, you may have to manually enter the value, select it from the drop-down list, or select it on the calendar.

      Filter conditions can be deleted by using the cross button. Group conditions are deleted using the Delete group button.

    • GROUP BY—event fields or aliases to be used for grouping the returned data.

      If you are using data grouping in a query, you cannot customize the events table display, sort events in ascending or descending order, receive statistics, or perform a retroscan.

      When filtering by alert-related events in drilldown analysis mode, you cannot group the returned data.

    • ORDER BY—columns used as the basis for sorting the returned data. In the drop-down list on the right, you can select the necessary order: DESC—descending, ASC—ascending.
    • LIMIT—number of rows displayed in the table.

      The default value is 250.

      If you are filtering events by a user-defined period and the number of rows in the search results exceeds the defined value, you can click the Show next records button to display additional rows in the table. This button is not displayed when filtering events by a standard period.

  3. Click the Apply button.

    The current SQL query will be overwritten. The generated SQL query is displayed in the search field.

    If you want to reset the builder settings, click the Default query button.

    If you want to close the builder without overwriting the existing query, click the parent-category button.

  4. Click the SearchField button to display the data in the table.

The table will display the search results based on the generated SQL query.

When switching to another section of the web interface, the query generated in the builder is not preserved. If you return to the Events section from another section, the builder will display the default query.

For more details on SQL, refer to the ClickHouse documentation. See also KUMA operator usage and supported functions.

See also:

Manually creating an SQL query

About events

Storage

Page top

[Topic 228356]

Manually creating an SQL query

You can use the search string to manually create SQL queries of any complexity for filtering events.

To manually generate an SQL query:

  1. Go to the Events section of the KUMA web interface.

    An input form opens.

  2. Enter your SQL query into the input field.
  3. Click the SearchField button.

You will see a table of events that satisfy the criteria of your query. If necessary, you can filter events by period.

Supported functions and operators

  • SELECT—event fields that should be returned.

    For SELECT fields, the program supports the following functions and operators:

    • Aggregation functions: count, avg, max, min, sum.
    • Arithmetic operators: +, -, *, /, <, >, =, !=, >=, <=.

      You can combine these functions and operators.

      If you are using aggregation functions in a query, you cannot customize the events table display, sort events in ascending or descending order, or receive statistics.

  • FROM—data source.

    When creating a query, you need to specify the events value as the data source.

  • WHERE—conditions for filtering events.
    • AND, OR, NOT, =, !=, >, >=, <, <=
    • IN
    • BETWEEN
    • LIKE
    • ILIKE
    • inSubnet
    • match (queries use the re2 regular expression syntax; special characters must be escaped with a backslash (\))
  • GROUP BY—event fields or aliases to be used for grouping the returned data.

    If you are using data grouping in a query, you cannot customize the events table display, sort events in ascending or descending order, receive statistics, or perform a retroscan.

  • ORDER BY—columns used as the basis for sorting the returned data.

    Possible values:

    • DESC—descending order.
    • ASC—ascending order.
  • OFFSET—skip the indicated number of lines before printing the query results output.
  • LIMIT—number of rows displayed in the table.

    The default value is 250.

    If you are filtering events by a user-defined period and the number of rows in the search results exceeds the defined value, you can click the Show next records button to display additional rows in the table. This button is not displayed when filtering events by a standard period.

    Example queries:

    • SELECT * FROM `events` WHERE Type IN ('Base', 'Audit') ORDER BY Timestamp DESC LIMIT 250

      In the events table, all events of the Base and Audit types are sorted by the Timestamp column in descending order. The number of rows that can be displayed in the table is 250.

    • SELECT * FROM `events` WHERE BytesIn BETWEEN 1000 AND 2000 ORDER BY Timestamp ASC LIMIT 250

      All events of the events table for which the BytesIn field contains a received traffic value in the range from 1,000 to 2,000 bytes are sorted by the Timestamp column in ascending order. The number of rows that can be displayed in the table is 250.

    • SELECT * FROM `events` WHERE Message LIKE '%ssh:%' ORDER BY Timestamp DESC LIMIT 250

      In the events table, all events whose Message field contains the substring ssh: (the LIKE pattern is case-sensitive) are sorted by the Timestamp column in descending order. The number of rows that can be displayed in the table is 250.

    • SELECT * FROM `events` WHERE inSubnet(DeviceAddress, '00.0.0.0/00') ORDER BY Timestamp DESC LIMIT 250

      In the events table, all events for the hosts that are in the 00.0.0.0/00 subnet are sorted by the Timestamp column in descending order. The number of rows that can be displayed in the table is 250.

    • SELECT * FROM `events` WHERE match(Message, 'ssh.*') ORDER BY Timestamp DESC LIMIT 250

      In the events table, all events whose Message field contains text matching the ssh.* regular expression are sorted by the Timestamp column in descending order. The number of rows that can be displayed in the table is 250.

    • SELECT max(BytesOut) / 1024 FROM `events`

      Maximum amount of outbound traffic (KB) for the selected time period.

    • SELECT count(ID) AS "Count", SourcePort AS "Port" FROM `events` GROUP BY SourcePort ORDER BY Port ASC LIMIT 250

      Number of events and port number. Events are grouped by port number and sorted by the Port column in ascending order. The number of rows that can be displayed in the table is 250.

      The ID column in the events table is named Count, and the SourcePort column is named Port.
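
The inSubnet() check used in the fourth example above can be sketched with the standard library; Python's ipaddress module is used here as a stand-in for the ClickHouse function, and the sample addresses are illustrative:

```python
# Sketch of inSubnet(address, cidr) semantics: true when the address
# lies inside the given CIDR range. Sample addresses are illustrative.
import ipaddress

def in_subnet(address: str, cidr: str) -> bool:
    return ipaddress.ip_address(address) in ipaddress.ip_network(cidr, strict=False)

print(in_subnet("10.0.0.17", "10.0.0.0/24"))    # True
print(in_subnet("192.168.1.5", "10.0.0.0/24"))  # False
```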

If you want to use a special character in a query, you need to escape this character by placing a backslash (\) character in front of it.

Example:

SELECT * FROM `events` WHERE match(Message, 'ssh:\'connection.*') ORDER BY Timestamp DESC LIMIT 250

In the events table, all events whose Message field contains text matching the ssh:\'connection.* pattern are sorted by the Timestamp column in descending order. The number of rows that can be displayed in the table is 250.
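
The same escaping idea can be sketched in Python, whose re module accepts a close cousin of re2 syntax; re.escape produces the escaped form of a fixed prefix before a wildcard is appended (the sample strings are hypothetical):

```python
# Sketch: escaping regex metacharacters in a fixed prefix before
# appending a wildcard, mirroring the match() example above.
import re

prefix = "ssh:'connection"
pattern = re.escape(prefix) + ".*"
print(bool(re.match(pattern, "ssh:'connection established")))  # True
print(bool(re.match(pattern, "http: request")))                # False
```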

When creating a normalizer for events, you can choose whether to retain the field values of the raw event. The data is stored in the Extra event field. You can search this field by using the LIKE operator.

Example:

SELECT * FROM `events` WHERE DeviceAddress = '00.00.00.000' AND Extra LIKE '%"app":"example"%' ORDER BY Timestamp DESC LIMIT 250

In the events table, all events for hosts with the IP address 00.00.00.000 where the example process is running are sorted by the Timestamp column in descending order. The number of rows that can be displayed in the table is 250.
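
Because Extra holds the raw-event fields as JSON text, the LIKE pattern above is effectively a substring test. A sketch (the field names and values are hypothetical):

```python
# Sketch: the Extra field stores key/value pairs as JSON text, so
# LIKE '%"app":"example"%' reduces to a substring check on that text.
import json

extra = json.dumps({"app": "example", "pid": 412}, separators=(",", ":"))
print(extra)                        # {"app":"example","pid":412}
print('"app":"example"' in extra)   # True
```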

When switching to the query builder, the query parameters that were manually entered into the search string are not transferred to the builder so you will need to create your query again. Also, the query created in the builder does not overwrite the query that was entered into the search string until you click the Apply button in the builder window.

Aliases must not contain spaces.

For more details on SQL, refer to the ClickHouse documentation. See also the supported ClickHouse functions.

See also:

Generating an SQL query using a builder

Limited complexity of queries in drilldown analysis mode

About events

Storage

Page top

[Topic 230248]

Limited complexity of queries in drilldown analysis mode

During a drilldown analysis, the complexity of SQL queries for event filtering is limited if the Related to alert option is selected from the EventSelector drop-down list when investigating an alert. If this is the case, only the functions and operators listed below are available for event filtering.

If the All events option is selected from the EventSelector drop-down list, these limitations are not applied.

  • SELECT
    • Only the * value is supported, which means that all event fields are returned.
  • WHERE
    • AND, OR, NOT, =, !=, >, >=, <, <=
    • IN
    • BETWEEN
    • LIKE
    • inSubnet

    Examples:

    • WHERE Type IN ('Base', 'Correlated')
    • WHERE BytesIn BETWEEN 1000 AND 2000
    • WHERE Message LIKE '%ssh:%'
    • WHERE inSubnet(DeviceAddress, '10.0.0.1/24')
  • ORDER BY

    Sorting can be done by column.

  • OFFSET

    Skip the indicated number of lines before printing the query results output.

  • LIMIT

    The default value is 250.

    If you are filtering events by a user-defined period and the number of rows in the search results exceeds the defined value, you can click the Show next records button to display additional rows in the table. This button is not displayed when filtering events by a standard period.

When filtering by alert-related events in drilldown analysis mode, you cannot group the returned data, perform operations on the data of event fields, or assign names to the columns of displayed data.

Page top

[Topic 217877]

Filtering events by period

In KUMA, you can specify the time period to display events from.

To filter events by period:

  1. In the Events section of the KUMA web interface, open the Period drop-down list in the upper part of the window.
  2. If you want to filter events based on a standard period, select one of the following:
    • 5 minutes
    • 15 minutes
    • 1 hour
    • 24 hours
    • In period

      If you select this option, use the opened calendar to select the start and end dates of the period and click Apply Filter. The date and time format depends on your operating system's settings. You can also manually change the date values if necessary.

  3. Click the SearchField button.

When the period filter is set, only events registered during the specified time interval will be displayed. The period will be displayed in the upper part of the window.

You can also configure the display of events by using the events histogram that is displayed when you click the Histogram icon button in the upper part of the Events section. Events are displayed if you click the relevant data series or select the relevant time period and click the Show events button.

Page top

[Topic 228358]

Saving and selecting events filter configuration

In KUMA, you can save a filter configuration and use it in the future. Other users can also use the saved filters if they have the appropriate access rights. When saving a filter, you are saving the configured settings of all the active filters at the same time, including the time-based filter, query builder, and the events table settings. Search queries are saved on the KUMA Core server and are available to all KUMA users of the selected tenant.

To save the current settings of the filter, query, and period:

  1. In the Events section of the KUMA web interface, click the SaveButton icon next to the filter expression and select Save current filter.
  2. In the window that opens, enter the name of the filter configuration in the Name field. The name can contain no more than 128 Unicode characters.
  3. In the Tenant drop-down list, select the tenant that will own the created filter.
  4. Click Save.

The filter configuration is now saved.

To select a previously saved filter configuration:

In the Events section of the KUMA web interface, click the SaveButton icon next to the filter expression and select the relevant filter.

The selected configuration is active, which means that the search field is displaying the search query, and the upper part of the window is showing the configured settings for the period and frequency of updating the search results. Click the SearchField button to submit the search query.

You can click the StarOffIcon icon near the filter configuration name to make it a default filter.

Page top

[Topic 228359]

Deleting event filter configurations

To delete a previously saved filter configuration:

  1. In the Events section of the KUMA web interface, click the SaveButton icon next to the filter search query and click the delete-icon icon next to the configuration that you need to delete.
  2. Click OK.

The filter configuration is now deleted for all KUMA users.

Page top

[Topic 235093]

Supported ClickHouse functions

The following ClickHouse functions are supported in KUMA:

  • Arithmetic functions.
  • Arrays—all functions except:
    • has
    • range
    • functions in which higher-order functions must be used (lambda expressions (->))
  • Comparison functions: all operators except == and less.
  • Logical functions: "not" function only.
  • Type conversion functions.
  • Date/time functions: all except date_add and date_sub.
  • String functions.
  • String search functions—all functions except:
    • position
    • multiSearchAllPositions, multiSearchAllPositionsUTF8, multiSearchFirstPosition, multiSearchFirstIndex, multiSearchAny
    • like and ilike
  • Conditional functions: simple if operator only (the ternary if and multiIf operators are not supported).
  • Mathematical functions.
  • Rounding functions.
  • Functions for splitting and merging strings and arrays.
  • Bit functions.
  • Functions for working with UUIDs.
  • Functions for working with URLs.
  • Functions for working with IP addresses.
  • Functions for working with Nullable arguments.
  • Functions for working with geographic coordinates.

Search and replace functions in strings, and functions from other sections are not supported.

For more details on SQL, refer to the ClickHouse documentation.
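As an illustration of these restrictions, a hypothetical pre-check (not part of KUMA) could scan a query for the unsupported function names listed above before the query is submitted:

```python
import re

# Hypothetical helper, not a KUMA API: flags a few of the unsupported
# function names listed above (all names compared case-insensitively).
UNSUPPORTED = {"has", "range", "position", "like", "ilike",
               "multiif", "date_add", "date_sub"}

def uses_unsupported(query: str) -> set:
    """Return the set of unsupported function names found in a query."""
    tokens = {t.lower() for t in re.findall(r"[A-Za-z_]\w*", query)}
    return tokens & UNSUPPORTED

print(uses_unsupported("SELECT * FROM events WHERE Name ILIKE '%scan%'"))  # {'ilike'}
```

A query that stays within the supported subset returns an empty set from this check.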

Page top

[Topic 218039]

Viewing event detail areas

To view information about an event:

  1. In the program web interface window, select the Events section.
  2. Search for events by using the query builder or by entering a query in the search field.

    The event table is displayed.

  3. Select the event whose information you want to view.

    The event details window opens.

The Event details area appears in the right part of the web interface window and contains a list of the event's parameters with values. In this area you can:

  • Include the selected field in the search or exclude it from the search by clicking filter-plus or filter-minus next to the setting value.
  • Click a file hash in the FileHash field to open a list of available actions for that hash.
  • Open a window containing information about the asset if it is mentioned in the event fields and registered in the program.
  • Click the link containing the collector name in the Service field to view the settings of the service that registered the event.

    You can also link an event to an alert if the program is in detailed analysis mode and open the Correlation event details window if the selected event is a correlation event.

Page top

[Topic 217871]

Exporting events

In KUMA, you can export information about events to a TSV file. The selection of events exported to the TSV file depends on the filter settings. The information is exported from the columns that are currently displayed in the events table. The columns in the exported file are populated with the available data even if they were not displayed in the events table in the KUMA web interface due to the special features of the SQL query.

To export information about events:

  1. In the Events section of the KUMA web interface, open the MoreButton drop-down list and choose Export TSV.

    The new export TSV file task is created in the Task manager section.

  2. Find the task you created in the Task manager section.

    When the file is ready to download, the DoneIcon icon will appear in the Status column of the task.

  3. Click the task type name and select Upload from the drop-down list.

    The TSV file will be downloaded. By default, the file name is event-export-<date>_<time>.tsv.

The file is saved based on your web browser's settings.
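The exported file can then be processed with standard tools. A minimal Python sketch is shown below; the column names are examples, since the actual column set matches whatever columns were displayed at export time:

```python
import csv
import io

# Minimal parse of an exported file; the column names here are examples.
sample = (
    "Timestamp\tName\tDeviceVendor\n"
    "2024-01-01T00:00:00Z\tLogin failed\tExampleVendor\n"
)

with io.StringIO(sample) as f:
    events = list(csv.DictReader(f, delimiter="\t"))

print(events[0]["Name"])  # Login failed
```

For a file on disk, replace the StringIO object with open("event-export-....tsv", encoding="utf-8").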

Page top

[Topic 217994]

Selecting Storage

Events that are displayed in the Events section of the KUMA web interface are retrieved from storage (from the ClickHouse cluster). Depending on the demands of your company, you may have more than one Storage. However, you can only receive events from one Storage at a time, so you must specify which one you want to use.

To select the Storage you want to receive events from,

In the Events section of the KUMA web interface, open the cluster drop-down list and select the relevant storage cluster.

Now events from the selected storage are displayed in the events table. The name of the selected storage is displayed in the cluster drop-down list.

The cluster drop-down list displays only the clusters of tenants available to the user, and the cluster of the main tenant.

See also:

Storage

Page top

[Topic 228360]

Getting events table statistics

You can get statistics for the current events selection displayed in the events table. The selected events depend on the filter settings.

To obtain statistics:

Select Statistics from the MoreButton drop-down list in the upper-right corner of the events table, or click on any value in the events table and select Statistics from the opened context menu.

The Statistics details area appears with the list of parameters from the current event selection. The numbers near each parameter indicate the number of events with that parameter in the selection. If a parameter is expanded, you can also see its five most frequently occurring values. Relevant parameters can be found by using the Search field.

The Statistics window allows you to modify the events filter.

When using SQL queries with data grouping and aggregation for filtering events, statistics are not available.

Page top

[Topic 228361]

Configuring the table of events

Responses to user SQL queries are presented as a table in the Events section. This table can be updated.

Default column configuration of the events table:

  • Tenant.
  • Timestamp.
  • Name.
  • DeviceProduct.
  • DeviceVendor.
  • DestinationAddress.
  • DestinationUserName.

In KUMA, you can customize the displayed set of event fields and their display order. The selected configuration can be saved.

When using SQL queries with data grouping and aggregation for filtering events, statistics are not available and the order of displayed columns depends on the specific SQL query.

To configure the fields displayed in the events table:

  1. Click the gear icon in the top right corner of the events table.

    You will see a window for selecting the event fields that should be displayed in the events table.

  2. Select the check boxes opposite the fields that you want to view in the table. You can search for relevant fields by using the Search field.

    You can configure the table to display any event field from the KUMA event data model. The Timestamp and Name parameters are always displayed in the table. Click the Default button to display only default event parameters in the events table.

    When you select a check box, the events table is updated and a new column is added. When a check box is cleared, the column disappears.

    You can also remove columns from the events table by clicking the column title and selecting Hide column from the drop-down list.

  3. If necessary, change the display order of the columns by dragging the column headers in the event tables.
  4. If you want to sort the events by a specific column, click its title and in the drop-down list select one of the available options: Ascending or Descending.

The selected event fields will be displayed as columns in the table of the Events section in the order you specified.

Page top

[Topic 217961]

Refreshing events table

You can update the displayed event selection with the most recent entries by refreshing the web browser page. You can also refresh the events table automatically and set the frequency of updates. Automatic refresh is disabled by default.

To enable automatic refresh,

select the update frequency in the refresh drop-down list:

  • 5 seconds
  • 15 seconds
  • 30 seconds
  • 1 minute
  • 5 minutes
  • 15 minutes

The events table now refreshes automatically.

To disable automatic refresh:

Select No refresh in the refresh drop-down list.

Page top

[Topic 217946]

Opening the correlation event window

You can view the details of a correlation event in the Correlation event details window.

To open the correlation event window:

  1. In the Events section of the KUMA web interface, click a correlation event.

    You can use filters to find correlation events by assigning the correlated value to the Type parameter.

    The details area of the selected event will open. If the selected event is a correlation event, the Detailed view button will be displayed at the bottom of the details area.

  2. Click the Detailed view button.

The correlation event window will open. The event name is displayed in the upper left corner of the window.

The Correlation event details section of the correlation event window contains the following data:

  • Correlation event severity—the importance of the correlation event.
  • Correlation rule—the name of the correlation rule that triggered the creation of this correlation event. The rule name is represented as a link that can be used to open the settings of this correlation rule.
  • Correlation rule severity—the importance of the correlation rule that triggered the correlation event.
  • Correlation rule ID—the identifier of the correlation rule that triggered the creation of this correlation event.
  • Tenant—the name of the tenant that owns the correlation event.

The Related events section of the correlation event window contains the table of events related to the correlation event. These are base events that actually triggered the creation of the correlation event. When an event is selected, the details area opens in the right part of the web interface window.

The Find in events link to the right of the section header is used for drilldown analysis.

The Related endpoints section of the correlation event window contains the table of hosts related to the correlation event. This information comes from the base events related to the correlation event. Clicking the name of the asset opens the Asset details window.

The Related users section of the correlation event window contains the table of users related to the correlation event. This information comes from the base events related to the correlation event.

See also:

About alerts

Correlator

Drilldown analysis

Page top

[Topic 217979]

Retroscan

You can use the Retroscan feature to "replay" events in KUMA by feeding a sample of events into a correlator so that they can be processed by specific correlation rules. You can also choose to have alerts created while events are retroscanned. Retroscan can be useful when refining the correlation rule resources or analyzing historical data.

Retroscanned events are not enriched with data from CyberTrace or the Kaspersky Threat Intelligence Portal.

Active lists are updated during retroscanning.

A retroscan cannot be performed on selections of events obtained using SQL queries that group data and contain arithmetic expressions.

To use Retroscan:

  1. In the Events section of KUMA, create the required event selection:
    • Select the storage.
    • Configure search expression using the constructor or search query.
    • Select the required period.
  2. Open the MoreButton drop-down list and choose Retroscan.

    The Retroscan window opens.

  3. In the Correlator drop-down list, select the Correlator to feed selected events to.
  4. In the Correlation rules drop-down list, select the Correlation rules that must be used when processing events.
  5. If you want responses to be executed when processing events, turn on the Execute responses toggle switch.
  6. If you want alerts to be generated during event processing, turn on the Create alerts toggle switch.
  7. Click the Create task button.

The retroscan task is created in the Task manager section.

To view results of replay:

In the Task manager section of the KUMA web interface, click the task you created and select Go to Events from the drop-down list.

This opens a new browser tab containing a table of events that were processed during the retroscan and the aggregation and correlation events that were created during event processing.

Depending on your browser settings, you may be prompted for confirmation before your browser can open the new tab containing the retroscan results. For more details, please refer to the documentation for your specific browser.

Page top

[Topic 233257]

Working with geographic data

A list of mappings of IP addresses or ranges of IP addresses to geographic data can be uploaded to KUMA for use in event enrichment.

In this Help topic

Geodata format

Converting geographic data from MaxMind to IP2Location

Importing and exporting geographic data

Default mapping of geographic data

Page top

[Topic 233258]

Geodata format

Geodata can be uploaded to KUMA as a CSV file in UTF-8 encoding. A comma is used as the delimiter. The first line of the file contains the field headers: Network,Country,Region,City,Latitude,Longitude.

CSV file description:

  • Network—IP address in one of the following formats:
    • Single IP address
    • Range of IP addresses
    • IP address in CIDR format

    Mixing of IPv4 and IPv6 addresses is allowed. Required field.

    Examples: 192.168.2.24; 192.168.2.25-192.168.2.35; 131.10.55.70/8; 2001:DB8::0/120

  • Country—country designation used by your organization. For example, this could be its name or code. Required field.

    Examples: Russia; RU

  • Region—regional designation used by your organization. For example, this could be its name or code.

    Examples: Sverdlovsk Oblast; RU-SVE

  • City—city designation used by your organization. For example, this could be its name or code.

    Examples: Yekaterinburg; 65701000001

  • Latitude—latitude of the described location in decimal format. This field can be empty, in which case the value 0 will be used when importing data into KUMA.

    Example: 56.835556

  • Longitude—longitude of the described location in decimal format. This field can be empty, in which case the value 0 will be used when importing data into KUMA.

    Example: 60.612778
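A file in this format can be produced with any CSV tool. A minimal Python sketch using the example values above (for a file on disk, open it with encoding="utf-8"):

```python
import csv
import io

# The header row must be exactly: Network,Country,Region,City,Latitude,Longitude
FIELDS = ["Network", "Country", "Region", "City", "Latitude", "Longitude"]
rows = [
    {"Network": "192.168.2.24", "Country": "Russia",
     "Region": "Sverdlovsk Oblast", "City": "Yekaterinburg",
     "Latitude": "56.835556", "Longitude": "60.612778"},
    # Latitude/Longitude may be left empty; KUMA substitutes 0 on import.
    {"Network": "2001:DB8::0/120", "Country": "RU",
     "Region": "", "City": "", "Latitude": "", "Longitude": ""},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```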

Page top

[Topic 233259]

Converting geographic data from MaxMind to IP2Location

Geographic data obtained from MaxMind and IP2Location can be used in KUMA if this data is first converted to a format supported by KUMA. Conversion can be done using the script below.

Download script

Python 2.7 or later is required to run the script.

Script start command:

python converter.py --type <type of geographic data being processed: "maxmind" or "ip2location"> --out <directory where a CSV file containing geographic data in KUMA format will be placed> --input <path to the ZIP archive containing geographic data from MaxMind or IP2location>

When the script is run with the --help flag, help is displayed for the available script parameters: python converter.py --help

Command for converting a file containing a Russian database of IP address ranges from a MaxMind ZIP archive:

python converter.py --type maxmind --lang ru --input MaxMind.zip --out geoip_maxmind_ru.csv

If the --lang parameter is not specified, the script receives information from the GeoLite2-City-Locations-en.csv file from the ZIP archive by default.

Absence of the --lang parameter for MaxMind is equivalent to the following command:

python converter.py --type maxmind --input MaxMind.zip --out geoip_maxmind.csv

Command for converting a file from an IP2Location ZIP archive:

python converter.py --type ip2location --input IP2LOCATION-LITE-DB11.CSV.ZIP --out geoip_ip2location.csv

Command for converting a file from several IP2Location ZIP archives:

python converter.py --type ip2location --input IP2LOCATION-LITE-DB11.CSV.ZIP IP2LOCATION-LITE-DB11.IPV6.CSV.ZIP --out geoip_ip2location_ipv4_ipv6.csv

The --lang parameter is not used for IP2Location.

Page top

[Topic 233260]

Importing and exporting geographic data

If necessary, you can manually import and export geographic data into KUMA. Geographic data is imported and exported in a CSV file. If the geographic data import is successful, the previously added data is overwritten and an audit event is generated in KUMA.

To import geographic data into KUMA:

  1. Prepare a CSV file containing geographic data.

    Geographic data received from MaxMind and IP2Location must be converted to a format supported by KUMA.

  2. In the KUMA web interface, open Settings → General.
  3. In the Geographic data settings block, click the Import from file button and select a CSV file containing geographic data.

    Wait for the geographic data import to finish. The data import is interrupted if the page is refreshed.

The geographic data is uploaded to KUMA.

To export geographic data from KUMA:

  1. In the KUMA web interface, open Settings → General.
  2. In the Geographic data settings block, click the Export button.

Geographic data will be downloaded as a CSV file named geoip.csv (in UTF-8 encoding) based on the settings of your browser.

The data is exported in the same format as it was uploaded, with the exception of IP address ranges. If a range of addresses was indicated in the format 1.0.0.0/24 in a file imported into KUMA, the range will be displayed in the format 1.0.0.0-1.0.0.255 in the exported file.
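This range rendering corresponds to the first and last addresses of the network, which can be reproduced with the standard ipaddress module:

```python
import ipaddress

def cidr_to_range(cidr: str) -> str:
    """Render a network the way KUMA exports it: first-last address."""
    net = ipaddress.ip_network(cidr)
    return f"{net[0]}-{net[-1]}"

print(cidr_to_range("1.0.0.0/24"))  # 1.0.0.0-1.0.0.255
```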

Page top

[Topic 233399]

Default mapping of geographic data

If you select the SourceAddress, DestinationAddress, or DeviceAddress event field as the IP address source when configuring a geographic data enrichment rule, the Apply default mapping button becomes available. You can use this button to add preconfigured mapping pairs of geographic data attributes and event fields as described below.

Default mappings for the SourceAddress event field:

  • Country—SourceCountry
  • Region—SourceRegion
  • City—SourceCity
  • Latitude—SourceLatitude
  • Longitude—SourceLongitude

Default mappings for the DestinationAddress event field:

  • Country—DestinationCountry
  • Region—DestinationRegion
  • City—DestinationCity
  • Latitude—DestinationLatitude
  • Longitude—DestinationLongitude

Default mappings for the DeviceAddress event field:

  • Country—DeviceCountry
  • Region—DeviceRegion
  • City—DeviceCity
  • Latitude—DeviceLatitude
  • Longitude—DeviceLongitude
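The three mapping sets follow one pattern: the event field name is the address field's prefix plus the geodata attribute. A compact sketch (an illustrative helper, not a KUMA API):

```python
# The event field name is the address field's prefix plus the geodata
# attribute; default_mapping is an illustrative helper, not a KUMA API.
GEO_ATTRS = ["Country", "Region", "City", "Latitude", "Longitude"]

def default_mapping(address_field: str) -> dict:
    prefix = address_field.removesuffix("Address")  # Source, Destination, Device
    return {attr: f"{prefix}{attr}" for attr in GEO_ATTRS}

print(default_mapping("SourceAddress"))
```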

Page top

[Topic 232930]

Transferring events from isolated network segments to KUMA

Data transfer scenario

Data diodes can be used to transfer events from isolated network segments to KUMA. Data transfer is organized as follows:

  1. A KUMA agent that is installed on a standalone server and has a diode-type destination receives events and moves them to a directory from which the data diode will pick up the events.

    The agent accumulates events in a buffer until the buffer overflows or until a user-defined period elapses after the last write to disk. The events are then written to a file in the temporary directory of the agent. The file is moved to the directory processed by the data diode; its name is a combination of the file contents hash (SHA-256) and the file creation time.

  2. The data diode moves files from the isolated server directory to the external server directory.
  3. A KUMA collector with a diode connector installed on an external server reads and processes events from the files of the directory where the data diode places files.

    After all events are read from a file, it is automatically deleted. Before reading events, the contents of files are verified based on the hash in the file name. If the contents fail verification, the file is deleted.

In the described scenario, the KUMA components are responsible for moving events to a specific directory within the isolated segment and for receiving events from a specific directory in the external network segment. The data diode transfers files containing events from the directory of the isolated network segment to the directory of the external network segment.

For each data source within an isolated network segment, you must create its own KUMA collector and agent, and configure the data diode to work with separate directories.
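The naming and verification scheme from steps 1 and 3 can be sketched as follows. The Help only states that the file name combines the SHA-256 hash of the contents with the creation time; the exact "<hash>_<unix time>" layout used here is an assumption:

```python
import hashlib

# Assumption: "<sha256 hex>_<unix time>" layout; the Help does not
# document the exact separator between hash and creation time.
def diode_file_name(payload: bytes, created_at: int) -> str:
    return f"{hashlib.sha256(payload).hexdigest()}_{created_at}"

def contents_match_name(payload: bytes, file_name: str) -> bool:
    """Collector-side check: the contents must match the hash in the name."""
    return file_name.split("_", 1)[0] == hashlib.sha256(payload).hexdigest()

name = diode_file_name(b"event batch", 1700000000)
print(contents_match_name(b"event batch", name))  # True
print(contents_match_name(b"tampered", name))     # False
```

A file whose contents fail this check is deleted by the collector without being read.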

Configuring KUMA components

Configuring KUMA components for transferring data from isolated network segments consists of the following steps:

  1. Creating a collector service in the external network segment.

    At this step, you must create and install a collector to receive and process the files that the data diode will transfer from the isolated network segment. You can use the Collector Installation Wizard to create the collector and all the resources it requires.

    At the Transport step, you must select or create a connector of the diode type. In the connector, you must specify the directory to which the data diode will move files from the isolated network segment.

    The user "kuma" that runs the collector must have read/write/delete permissions in the directory to which the data diode moves data from the isolated network segment.

  2. Creating a set of resources for a KUMA agent.

    At this step, you must create a set of resources for the KUMA agent that will receive events in an isolated network segment and prepare them for transferring to the data diode. The diode agent resource set has the following requirements:

    • The destination resource in the agent must have the diode type. In this resource, you must specify the directory from which the data diode will move files to the external network segment.
    • You cannot select connectors of the sql or netflow types for the diode agent.
  3. Downloading the agent configuration file as JSON file.
    1. The set of agent resources from a diode-type destination must be downloaded as a JSON file.
    2. If secret resources were used in the agent resource set, you must manually add the secret data to the configuration file.
  4. Installing the KUMA agent service in the isolated network segment.

    At this step, you must install the agent in an isolated network segment based on the agent configuration file that was created at the previous step. It can be installed to Linux and Windows devices.

Configuring a data diode

The data diode must be configured as follows:

  • Data must be transferred atomically from the directory of the isolated server (where the KUMA agent places the data) to the directory of the external server (where the KUMA collector reads the data).
  • The transferred files must be deleted from the isolated server.

For information on configuring the data diode, please refer to the documentation for the data diode used in your organization.

Special considerations

When working with isolated network segments, operations with SQL and NetFlow are not supported.

When using the scenario described above, the agent cannot be administered through the KUMA web interface because it resides in an isolated network segment. Such agents are not displayed in the list of active KUMA services.

In this Help topic

Diode agent configuration file

Description of secret fields

Installing Linux Agent in an isolated network segment

Installing Windows Agent in an isolated network segment

See also:

About agents

Collector

Service resource sets

Page top

[Topic 233138]

Diode agent configuration file

A created set of agent resources with a diode-type destination can be downloaded as a configuration file. This file is used when installing the agent in an isolated network segment.

To download the configuration file:

In the KUMA web interface, under Resources → Agents, select the required set of agent resources with a diode destination and click Download config.

The agent settings configuration is downloaded as a JSON file based on the settings of your browser. Secret resources used in the agent resource set are downloaded empty. Their IDs are specified in the file in the "secrets" section. To use a configuration file to install an agent in an isolated network segment, you must manually add secrets to the configuration file (for example, specify the URL and passwords used in the agent connector to receive events).

You must use an access control list (ACL) to configure permissions to access the file on the server where the agent will be installed. File read access must be available to the user account that will run the diode agent.

Below is an example of a diode agent configuration file with a kafka connector.

{
  "config": {
    "id": "<ID of the set of agent resources>",
    "name": "<name of the set of agent resources>",
    "proxyConfigs": [
      {
        "connector": {
          "id": "<ID of the connector resource. This example shows a kafka-type connector, but other types of connectors can also be used in a diode agent. If a connector resource is created directly in the set of agent resources, there is no ID value.>",
          "name": "<name of the connector resource>",
          "kind": "kafka",
          "connections": [
            {
              "kind": "kafka",
              "urls": ["localhost:9093"],
              "host": "",
              "port": "",
              "secretID": "<ID of the secret resource>",
              "clusterID": "",
              "tlsMode": "",
              "proxy": null,
              "rps": 0,
              "maxConns": 0,
              "urlPolicy": "",
              "version": "",
              "identityColumn": "",
              "identitySeed": "",
              "pollInterval": 0,
              "query": "",
              "stateID": "",
              "certificateSecretID": "",
              "authMode": "pfx",
              "secretTemplateKind": "",
              "certSecretTemplateKind": ""
            }
          ],
          "topic": "<kafka topic name>",
          "groupID": "<kafka group ID>",
          "delimiter": "",
          "bufferSize": 0,
          "characterEncoding": "",
          "query": "",
          "pollInterval": 0,
          "workers": 0,
          "compression": "",
          "debug": false,
          "logs": [],
          "defaultSecretID": "",
          "snmpParameters": [
            {
              "name": "",
              "oid": "",
              "key": ""
            }
          ],
          "remoteLogs": null,
          "defaultSecretTemplateKind": ""
        },
        "destinations": [
          {
            "id": "<ID of the destination resource. If a destination resource is created directly in the set of agent resources, there is no ID value.>",
            "name": "<destination resource name>",
            "kind": "diode",
            "connection": {
              "kind": "file",
              "urls": [
                "<path to the directory where the destination should place events that the data diode will transmit from the isolated network segment>",
                "<path to the temporary directory in which events are placed to prepare for data transmission by the diode>"
              ],
              "host": "",
              "port": "",
              "secretID": "",
              "clusterID": "",
              "tlsMode": "",
              "proxy": null,
              "rps": 0,
              "maxConns": 0,
              "urlPolicy": "",
              "version": "",
              "identityColumn": "",
              "identitySeed": "",
              "pollInterval": 0,
              "query": "",
              "stateID": "",
              "certificateSecretID": "",
              "authMode": "",
              "secretTemplateKind": "",
              "certSecretTemplateKind": ""
            },
            "topic": "",
            "bufferSize": 0,
            "flushInterval": 0,
            "diskBufferDisabled": false,
            "diskBufferSizeLimit": 0,
            "healthCheckPath": "",
            "healthCheckTimeout": 0,
            "healthCheckDisabled": false,
            "timeout": 0,
            "workers": 0,
            "delimiter": "",
            "debug": false,
            "disabled": false,
            "compression": "",
            "filter": null,
            "path": ""
          }
        ]
      }
    ],
    "workers": 0,
    "debug": false
  },
  "secrets": {
    "<ID of the secret resource>": {
      "pfx": "<encrypted pfx key>",
      "pfxPassword": "<password to the encrypted pfx key. The changeit value is exported from KUMA instead of the actual password. In the configuration file, you must manually specify the contents of secrets>"
    }
  },
  "tenantID": "<ID of the tenant>"
}
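Adding the secret data described above can be done with any JSON tool. A minimal Python sketch, using a placeholder secret ID and placeholder values (a real downloaded file contains the full agent configuration, not just the "secrets" section):

```python
import json

# Fragment only; the secret ID and replacement values are placeholders.
exported = """
{
  "secrets": {
    "secret-id-1": {"pfx": "", "pfxPassword": "changeit"}
  }
}
"""

config = json.loads(exported)
# Replace the exported placeholder values with the real secret contents.
config["secrets"]["secret-id-1"]["pfx"] = "<base64-encoded pfx>"
config["secrets"]["secret-id-1"]["pfxPassword"] = "<actual password>"

print(config["secrets"]["secret-id-1"]["pfxPassword"])  # <actual password>
```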

Page top

[Topic 233147]

Description of secret fields

Secret fields:

  • user (string)—user name.
  • password (string)—password.
  • token (string)—token.
  • urls (array of strings)—URL list.
  • publicKey (string)—public key (used in PKI).
  • privateKey (string)—private key (used in PKI).
  • pfx (string containing the base64-encoded pfx file)—Base64-encoded PFX file. On Linux, you can get the base64 encoding of a file by using the base64 -w0 src > dst command.
  • pfxPassword (string)—password of the PFX.
  • securityLevel (string)—used in snmp3. Possible values: NoAuthNoPriv, AuthNoPriv, AuthPriv.
  • community (string)—used in snmp1.
  • authProtocol (string)—used in snmp3. Possible values: MD5, SHA, SHA224, SHA256, SHA384, SHA512.
  • privacyProtocol (string)—used in snmp3. Possible values: DES, AES.
  • privacyPassword (string)—used in snmp3.
  • certificate (string containing the base64-encoded pem file)—Base64-encoded PEM file. On Linux, you can get the base64 encoding of a file by using the base64 -w0 src > dst command.
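The base64 -w0 command used for the pfx and certificate fields has a direct Python equivalent, since b64encode produces a single line with no wrapping:

```python
import base64

# Equivalent of `base64 -w0 src > dst`: single-line output, no wrapping.
def encode_file_bytes(data: bytes) -> str:
    return base64.b64encode(data).decode("ascii")

print(encode_file_bytes(b"pfx bytes"))  # cGZ4IGJ5dGVz
```

For a real file, pass the result of open(path, "rb").read() to the function.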

Page top

[Topic 233143]

Installing Linux Agent in an isolated network segment

To install a KUMA agent to a Linux device in an isolated network segment:

  1. Place the following files on the Linux server in an isolated network segment that will be used by the agent to receive events and from which the data diode will move files to the external network segment:
    • Agent configuration file.

      You must use an access control list (ACL) to configure access permissions for the configuration file so that only the KUMA user will have file read access.

    • Executable file /opt/kaspersky/kuma/kuma (the "kuma" file can be found in the installer in the /kuma-ansible-installer/roles/kuma/files/ folder).
  2. Execute the following command:

    sudo ./kuma agent --cfg <path to the agent configuration file> --wd <path to the directory where the files of the agent being installed will reside. If this flag is not specified, the files will be stored in the directory where the kuma file is located>

The agent service is installed and running on the server in an isolated network segment. It receives events and relays them to the data diode so that they can be sent to an external network segment.

Page top

[Topic 233215]

Installing Windows Agent in an isolated network segment

Prior to installing a KUMA agent to a Windows asset, the server administrator must create a user account with the EventLogReaders and Log on as a service permissions on the Windows asset. This user account must be used to start the agent.

To install a KUMA agent to a Windows device in an isolated network segment:

  1. Place the following files on the Windows server in an isolated network segment that will be used by the agent to receive events and from which the data diode will move files to the external network segment:
    • Agent configuration file.

      You must use an access control list (ACL) to configure access permissions for the configuration file so that the file can only be read by the user account that will run the agent.

    • Kuma.exe executable file. This file can be found inside the installer in the /kuma-ansible-installer/roles/kuma/files/ directory.

    It is recommended to use the C:\Users\<user name>\Desktop\KUMA folder.

  2. Start the Command Prompt on the Windows asset with Administrator privileges and locate the folder containing the kuma.exe file.
  3. Execute the following command:

    kuma.exe agent --cfg <path to the agent configuration file> --user <user name that will run the agent, including the domain> --install

    You can get help information by running the kuma.exe help agent command.

  4. Enter the password of the user account used to run the agent.

The C:\Program Files\Kaspersky Lab\KUMA\agent\<Agent ID> folder is created in which the KUMA agent service is installed. The agent moves events to the folder so that they can be processed by the data diode.

When installing the agent, the agent configuration file is moved to the directory C:\Program Files\Kaspersky Lab\KUMA\agent\<agent ID specified in the configuration file>. The kuma.exe file is moved to the C:\Program Files\Kaspersky Lab\KUMA directory.

When installing an agent, its configuration file must not be located in the directory where the agent is installed.

When the agent service is installed, it starts automatically. The service is also configured to restart in case of any failures.

Removing a KUMA agent from Windows assets

To remove a KUMA agent from a Windows asset:

  1. Start the Command Prompt on the Windows machine with Administrator privileges and locate the folder with kuma.exe file.
  2. Run any of the commands below:

The specified KUMA agent is removed from the Windows asset. Windows events are no longer sent to KUMA.

When configuring services, you can check the configuration for errors before installation by running the agent with the command kuma.exe agent --cfg <path to the agent configuration file>.

Page top

[Topic 217738]

Asset categories

In KUMA, assets are assigned to tree-structured categories. You can view the category tree in the KUMA web interface under Assets → All assets. When a tree node is selected, the assets assigned to it are displayed in the right part of the window. Assets from the subcategories of the selected category are not displayed unless you specify that you want to show assets recursively.

Categories can be assigned to assets either manually or automatically. Automatic categorization can be reactive, which means that categories are populated with assets by using correlation rules. Alternatively, automatic categorization can be active, which means that all assets that meet specific conditions are assigned to a category. The categorization method can be specified in the category settings when you create or edit a category.

If you hover the mouse over a category, the ellipsis icon will appear to the right of the category name. Clicking this icon opens a category context menu in which you can select the following options:

  • Show assets—display assets of the selected category in the right part of the window.
  • Show assets recursively—display assets from the subcategories of the selected category. If you want to exit recursive viewing mode, select another category to view.
  • Show info—view information about the selected category in the Category information details area displayed in the right part of the web interface window.
  • Start categorization—start automatically linking assets to the selected category. This option is available for categories that have active categorization.
  • Add subcategory—add a subcategory to the selected category.
  • Edit category—edit the selected category.
  • Delete category—remove the selected category. You can remove only the categories that have no assets or subcategories. Otherwise the Delete category option will be inactive.
  • Pin as tab—display the selected category in a separate tab. You can undo this action by selecting Unpin as tab in the context menu of the relevant category.
Page top

[Topic 217710]

Adding an asset category

To add an asset category:

  1. Open the Assets section in the KUMA web interface.
  2. Open the category creation window:
    • Click the Add category button.
    • If you want to create a subcategory, select Add subcategory in the context menu of the parent category.

    The Add category details area appears in the right part of the web interface window.

  3. Add information about the category:
    • In the Name field, enter the name of the category. The name must contain from 1 to 128 Unicode characters.
    • In the Parent field, indicate the position of the category within the categories tree hierarchy:
      1. Click the parent-category button.

        This opens the Select categories window showing the categories tree. If you are creating a new category and not a subcategory, the window may show multiple asset category trees, one for each tenant that you can access. Your tenant selection in this window cannot be undone.

      2. Select the parent category for the category you are creating.
      3. Click Save.

      The selected category appears in the Parent field.

    • The Tenant field displays the tenant whose structure contains your selected parent category. The tenant category cannot be changed.
    • Assign a severity to the category in the Priority drop-down list.
    • If necessary, in the Description field, you can add a note consisting of up to 256 Unicode characters.
  4. In the Categorization kind drop-down list, select how the category will be populated with assets. Depending on your selection, you may need to specify additional settings:
    • Manually—assets can only be manually linked to a category.
    • Active—assets will be assigned to a category at regular intervals if they satisfy the defined filter.

      Active category of assets

      1. In the Repeat categorization every drop-down list, specify how often assets will be linked to a category. You can select values ranging from once per hour to once per 24 hours.

        You can forcibly start categorization by selecting Start categorization in the category context menu.

      2. In the Conditions settings block, specify the filter for matching assets to attach to an asset category.

        You can add conditions by clicking the Add condition buttons. Groups of conditions can be added by using the Add group buttons. Group operators can be switched between AND, OR, and NOT values.

        Categorization filter operands and operators:

        • Build number—operators: >, >=, =, <=, <.
        • OS—operators: =, like. The "like" operator ensures that the search is not case sensitive.
        • IP address—operators: inSubnet, inRange. The IP address is indicated in CIDR notation (for example: 192.168.0.0/24). When the inRange operator is selected, you can indicate only addresses from private ranges of IP addresses (for example: 10.0.0.0–10.255.255.255). Both addresses must be in the same range.
        • FQDN—operators: =, like. The "like" operator ensures that the search is not case sensitive.
        • CVE—operators: =, in. The "in" operator lets you specify an array of values.

      3. Use the Test conditions button to make sure that the specified filter is correct. When you click the button, you should see the Assets for given conditions window containing a list of assets that satisfy the search conditions.
    • Reactive—the category will be filled with assets by using correlation rules.
  5. Click Save.

The new category will be added to the asset categories tree.
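The condition evaluation described in step 4 can be sketched in Python. This is an illustrative model only: the condition structure, field names, and function names are assumptions rather than KUMA internals; the operators follow the table above (inSubnet and inRange via the standard ipaddress module, a case-insensitive "like", and array membership for "in").

```python
# Illustrative sketch of active-categorization filter evaluation.
# Operators follow the table above; everything else is hypothetical.
import ipaddress

def match_condition(asset: dict, cond: dict) -> bool:
    """Apply a single operand/operator/value condition to an asset."""
    value = asset.get(cond["operand"])
    op = cond["operator"]
    if op == "=":
        return value == cond["value"]
    if op == "like":          # case-insensitive substring search
        return cond["value"].lower() in str(value).lower()
    if op == "in":            # value must be one of an array of values
        return value in cond["value"]
    if op == "inSubnet":      # CIDR notation, e.g. 192.168.0.0/24
        return ipaddress.ip_address(value) in ipaddress.ip_network(cond["value"])
    if op == "inRange":       # inclusive range, e.g. 10.0.0.0-10.255.255.255
        lo, hi = (ipaddress.ip_address(a) for a in cond["value"])
        return lo <= ipaddress.ip_address(value) <= hi
    raise ValueError(f"unknown operator: {op}")

def match_group(asset: dict, group: dict) -> bool:
    """Evaluate a group of conditions joined by AND, OR, or NOT."""
    results = [
        match_group(asset, item) if "conditions" in item else match_condition(asset, item)
        for item in group["conditions"]
    ]
    if group["operator"] == "AND":
        return all(results)
    if group["operator"] == "OR":
        return any(results)
    return not any(results)   # NOT

asset = {"OS": "Windows Server 2019", "IP address": "192.168.0.15"}
filt = {"operator": "AND", "conditions": [
    {"operand": "OS", "operator": "like", "value": "windows"},
    {"operand": "IP address", "operator": "inSubnet", "value": "192.168.0.0/24"},
]}
print(match_group(asset, filt))  # True: the asset satisfies the filter
```

An asset that satisfies such a filter would then be linked to the category on the next categorization run.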

Page top

[Topic 217772]

Configuring the table of assets

In KUMA, you can configure the contents and order of columns displayed in the assets table. These settings are stored locally on your machine.

To configure the settings for displaying the assets table:

  1. Click the gear icon in the upper-right corner of the assets table.
  2. In the drop-down list, select the check boxes next to the parameters that you want to view in the table:
    • FQDN
    • IP address
    • Asset source
    • Owner
    • MAC address
    • Created by
    • Updated
    • Tenant

    When you select a check box, the assets table is updated and a new column is added. When a check box is cleared, the column disappears. The table can be sorted based on multiple columns.

  3. If you need to change the order of columns, click the left mouse button on the column name and drag it to the desired location in the table.

The assets table display settings are configured.

Page top

[Topic 217987]

Searching assets

KUMA has a full-text search function to find assets based on their parameters. The search uses Name, FQDN, IP address, MAC address, and Owner asset parameters.

To find the relevant asset:

In the Assets section of the KUMA web interface, enter your search query in the Search field and press ENTER or click the magnifying glass icon.

The table displays all assets whose parameters satisfy the search criteria.
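As an illustration, a case-insensitive full-text search over the parameters listed above could be modeled as follows; the data structures are hypothetical, not KUMA's implementation.

```python
# Illustrative sketch of full-text asset search over the parameters listed
# above; data structures are hypothetical, not KUMA's implementation.
SEARCHED_FIELDS = ("Name", "FQDN", "IP address", "MAC address", "Owner")

def search_assets(assets: list, query: str) -> list:
    """Return assets where any searched field contains the query (case-insensitive)."""
    q = query.lower()
    return [
        asset for asset in assets
        if any(q in str(asset.get(field, "")).lower() for field in SEARCHED_FIELDS)
    ]

inventory = [
    {"Name": "web-01", "FQDN": "web-01.example.com", "IP address": "10.0.0.5"},
    {"Name": "db-01", "FQDN": "db-01.example.com", "Owner": "dba-team"},
]
print([a["Name"] for a in search_assets(inventory, "dba")])  # ['db-01']
```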

Page top

[Topic 235166]

Viewing asset details

To view information about an asset, open the asset information window in one of the following ways:

  • In the KUMA web interface, select Assets → select a category with the relevant assets → select an asset.
  • In the KUMA web interface, select Alerts → click the link with the relevant alert → select the asset in the Related endpoints section.
  • In the KUMA web interface, select Events → search and filter events → select the relevant event → click the link in one of the following fields: SourceAssetID, DestinationAssetID, or DeviceAssetID.

The following information may be displayed in the asset details window:

  • Name—asset name.

    Assets imported into KUMA retain the names that were assigned to them at the source. You can change these names in the KUMA web interface.

  • Tenant—the name of the tenant that owns the asset.
  • Asset source—source of information about the asset. There may be several sources. For instance, information can be added in the KUMA web interface or by using the API, or it can be imported from Kaspersky Security Center, KICS for Networks, and MaxPatrol reports.

    When using multiple sources to add information about the same asset to KUMA, you should take into account the rules for merging asset data.

  • Created—date and time when the asset was added to KUMA.
  • Updated—date and time when the asset information was most recently modified.
  • Owner—owner of the asset, if provided.
  • IP address—IP address of the asset (if any).

    If there are several assets with identical IP addresses in KUMA, the asset that was added the latest is returned in all cases when assets are searched by IP address. If assets with identical IP addresses can coexist in your organization's network, plan accordingly and use additional attributes to identify the assets. For example, this may become important during correlation.

  • FQDN—Fully Qualified Domain Name of the asset, if provided.
  • MAC address—MAC address of the asset (if any).
  • Operating system—operating system of the asset.
  • Related alerts—alerts associated with the asset (if any).

    To view the list of alerts related to an asset, click the Find in Alerts link. The Alerts tab opens with the search expression set to filter all assets with the corresponding asset ID.

  • Categories—categories associated with the asset (if any).
  • Software info and Hardware info—if the asset software and hardware parameters are provided, they are displayed in this section.
  • Asset vulnerability information:
    • Kaspersky Security Center vulnerabilities—vulnerabilities of the asset, if provided. This information is available for the assets imported from Kaspersky Security Center.

      You can learn more about a vulnerability by clicking the icon next to it, which opens the Kaspersky Threats portal. You can also update the vulnerabilities list by clicking the Update link and requesting updated information from Kaspersky Security Center.

    • KICS for Networks vulnerabilities—vulnerabilities of the asset, if provided. This information is available for the assets imported from KICS for Networks.
  • Asset source information:
    • Last connection to Kaspersky Security Center—the time when information about the asset was last received from Kaspersky Security Center. This information is available for the assets imported from Kaspersky Security Center.
    • Host ID—ID of the Kaspersky Security Center Network Agent from which the asset information was received. This information is available for the assets imported from Kaspersky Security Center. This ID is used to determine the uniqueness of the asset in Kaspersky Security Center.
    • KICS for Networks server IP address and KICS for Networks connector ID—data on the KICS for Networks instance from which the asset was imported.

By clicking the KSC response button, you can start a Kaspersky Security Center task on the asset.

This is available if KUMA is integrated with Kaspersky Security Center.

Page top

[Topic 233855]

Adding assets

You can add asset information in the following ways:

When assets are added, assets that already exist in KUMA can be merged with the assets being added.

Asset merging algorithm:

  1. Checking uniqueness of Kaspersky Security Center or KICS for Networks assets.
    • The uniqueness of an asset imported from Kaspersky Security Center is determined by the Host ID parameter, which contains the Kaspersky Security Center Network Agent identifier. If the IDs of two assets differ, they are considered to be separate assets and are not merged.
    • The uniqueness of an asset imported from KICS for Networks is determined by the combination of the IP address, KICS for Networks server IP address, and KICS for Networks connector ID parameters. If any of these parameters of two assets differ, they are considered to be separate assets and are not merged.

    If the compared assets match, the algorithm is performed further.

  2. Checking whether the values in the IP, MAC, and FQDN fields match.

    If at least two of the specified fields match, the assets are combined, provided that the other fields are blank.

    Possible matches:

    • The FQDN and IP address of the assets match. The MAC field is blank.

      The check is performed against the entire array of IP address values. If the IP address of an asset is included in the FQDN, the values are considered to match.

    • The FQDN and MAC address of the assets match. The IP field is blank.

      The check is performed against the entire array of MAC address values. If at least one value of the array fully matches the FQDN, the values are considered to match.

    • The IP address and MAC address of the assets match. The FQDN field is blank.

      The check is performed against the entire array of IP- and MAC address values. If at least one value in the arrays is fully matched, the values are considered to match.

  3. Checking whether the value of at least one of the IP, MAC, and FQDN fields matches, provided that the other two fields are not filled in for one or both assets.

    Assets are merged if the values in the field match. For example, if the FQDN and IP address are specified for a KUMA asset, but only the IP address with the same value is specified for an imported asset, the fields match. In this case, the assets are merged.

    For each field, verification is performed separately and ends on the first match.

You can see examples of asset field comparison here.
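The merge checks above can be sketched as follows. This is an illustrative simplification (single values instead of address arrays, and hypothetical field names), not KUMA's actual implementation.

```python
# Illustrative simplification of the merge checks above: single values
# instead of address arrays, hypothetical field names, not KUMA internals.
FIELDS = ("FQDN", "IP", "MAC")

def should_merge(kuma: dict, imported: dict) -> bool:
    """Return True if the two assets would be merged by the checks above."""
    matching = []
    for field in FIELDS:
        a, b = kuma.get(field), imported.get(field)
        if a and b and a != b:
            return False      # a field filled in on both assets disagrees
        if a and b:
            matching.append(field)
    # Step 2: two or more fields match. Step 3: a single field matches while
    # the remaining fields are blank on one or both assets. Both reduce to:
    return len(matching) >= 1

kuma_asset = {"FQDN": "host.example.com", "IP": "10.0.0.5"}      # MAC is blank
print(should_merge(kuma_asset, {"IP": "10.0.0.5"}))              # True: IP matches
print(should_merge(kuma_asset, {"MAC": "00:1b:44:11:3a:b7"}))    # False: nothing matches
```

Running the sketch against the documented comparison examples reproduces the merge decisions listed in the "Examples of asset field comparison during import" section.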

Information about assets can be generated from various sources. If the added asset and the KUMA asset contain data received from the same source, this data is overwritten. For example, a Kaspersky Security Center asset receives a fully qualified domain name, software information, and host ID when imported into KUMA. When importing an asset from Kaspersky Security Center with an equivalent fully qualified domain name, all this data will be overwritten (if it has been defined for the added asset). All fields in which the data can be refreshed are listed in the Updatable data table.

Updatable data:

  • Name—selected according to the following priority: manually defined; received from Kaspersky Security Center; received from KICS for Networks.
  • Owner—the first value from the sources is selected according to the following priority: received from Kaspersky Security Center; manually defined.
  • IP address—the data is merged. If the array of addresses contains identical addresses, the duplicate address is deleted.
  • FQDN—the first value from the sources is selected according to the following priority: received from Kaspersky Security Center; received from KICS for Networks; manually defined.
  • MAC address—the data is merged. If the array of addresses contains identical addresses, one of the duplicate addresses is deleted.
  • Operating system—the first value from the sources is selected according to the following priority: received from Kaspersky Security Center; received from KICS for Networks; manually defined.
  • Vulnerabilities—KUMA asset data is supplemented with information from the added assets. In the asset details, data is grouped by the name of the source. Vulnerabilities are eliminated for each source separately.
  • Software info—data from KICS for Networks is always recorded (if available). For other sources, the first value is selected according to the following priority: received from Kaspersky Security Center; manually defined.
  • Hardware info—the first value from the sources is selected according to the following priority: received from Kaspersky Security Center; defined via the API.
The updated data is displayed in the asset details. You can view asset details in the KUMA web interface.

This data may be overwritten when new assets are added. If the data used to generate asset information is not updated from sources for more than 30 days, the asset is deleted. The next time you add an asset from the same sources, a new asset is created.
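The "first value by source priority" rule in the Updatable data table can be sketched like this; the source labels and the function are illustrative assumptions, not KUMA internals.

```python
# Illustrative sketch of the "first value by source priority" rule from the
# Updatable data table; source labels and the function are assumptions.
FIELD_PRIORITY = {
    "Name": ["manual", "ksc", "kics"],
    "Owner": ["ksc", "manual"],
    "FQDN": ["ksc", "kics", "manual"],
    "Operating system": ["ksc", "kics", "manual"],
}

def resolve_field(field: str, values_by_source: dict):
    """Pick the first non-empty value following the field's source priority."""
    for source in FIELD_PRIORITY[field]:
        value = values_by_source.get(source)
        if value:
            return value
    return None

sources = {"kics": "host-a.ot.example.com", "manual": "host-a"}
print(resolve_field("FQDN", sources))  # host-a.ot.example.com (KICS outranks manual)
```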

If the KUMA web interface is used to edit asset information that was received from Kaspersky Security Center or KICS for Networks, you can edit the following asset data:

  • Name.
  • Category.

If asset information was added manually, you can edit the following asset data when editing these assets in the KUMA web interface:

  • Name.
  • Name of the tenant that owns the asset.
  • IP address.
  • Fully qualified domain name.
  • MAC address.
  • Owner.
  • Category.
  • Operating system.
  • Hardware info.

Asset data cannot be edited via the REST API. When importing from the REST API, the data is updated according to the rules for merging asset details provided above.

In this Help topic

Adding asset information in the KUMA web interface

Importing asset information from Kaspersky Security Center

Importing asset information from MaxPatrol

Importing asset information from KICS for Networks

Examples of asset field comparison during import

Page top

[Topic 217798]

Adding asset information in the KUMA web interface

To add an asset in the KUMA web interface:

  1. In the Assets section of the KUMA web interface, click the Add asset button.

    The Add asset details area opens in the right part of the window.

  2. Enter the asset parameters:
    • Asset name (required)
    • Tenant (required)
    • IP address and/or FQDN (required)
    • MAC address
    • Owner
  3. If required, assign one or multiple categories to the asset:
    1. Click the button with the parent-category icon.

      The Select categories window opens.

    2. Select the check boxes next to the categories that should be assigned to the asset. You can use the plus and minus icons to expand or collapse the lists of categories.
    3. Click Save.

    The selected categories appear in the Categories fields.

  4. If required, add information about the operating system installed on the asset in the Software section.
  5. If required, add information about asset hardware in the Hardware info section.
  6. Click Add.

The asset is created and displayed in the assets table in the category assigned to it or in the Uncategorized assets category.

Page top

[Topic 217893]

Importing asset information from Kaspersky Security Center

All assets protected by Kaspersky Security Center applications are registered in Kaspersky Security Center. Information about these assets can be imported into KUMA. To do so, you need to configure integration between the applications in advance.

KUMA supports the following types of asset imports from KSC:

  • Import of information about all assets of all KSC servers.
  • Import of information about assets of the selected KSC server.

To import information about all assets of all KSC servers:

  1. In the KUMA web interface, select the Assets section.
  2. Click the Import assets button.

    The Import Kaspersky Security Center assets window opens.

  3. In the drop-down list, select the tenant for which you want to perform the import.

    In this case, the program downloads information about all assets of all KSC servers that have been configured to connect to the selected tenant.

    If you want to import information about all assets of all KSC servers for all tenants, select All tenants.

  4. Click OK.

The asset information will be imported.

To import information about the assets of one KSC server:

  1. Open the KUMA web interface and select Settings → Kaspersky Security Center.

    The Kaspersky Security Center integration by tenant window opens.

  2. Select the tenant for which you want to import assets.

    The Kaspersky Security Center integration window opens.

  3. Click the connection for the relevant Kaspersky Security Center server.

    This opens a window containing the settings of this connection to Kaspersky Security Center.

  4. Do one of the following:
    • If you want to import all assets connected to the selected KSC server, click the Import assets button.
    • If you want to import only assets that are connected to a secondary server or included in one of the groups (for example, the Unassigned devices group), do the following:
      1. Click the Load hierarchy button.
      2. Select the check boxes next to the names of the secondary servers or groups from which you want to import asset information.
      3. Select the Import assets from new groups check box if you want to import assets from new groups.

        If no check boxes are selected, information about all assets of the selected KSC server is uploaded during the import.

      4. Click the Save button.
      5. Click the Import assets button.

The asset information will be imported.

Page top

[Topic 228184]

Importing asset information from MaxPatrol

In KUMA, you can import asset information from reports on the results of network device scans using MaxPatrol, which is a system for monitoring the state of security and compliance with standards. The import is performed through the API using the maxpatrol-tool on the server where the KUMA Core is installed. Imported assets are displayed in the KUMA web interface in the Assets section. If necessary, you can edit the settings of assets.

This tool is provided upon request.

Imports from MaxPatrol 8 are supported.

To import asset information from a MaxPatrol report:

  1. In MaxPatrol, generate a network asset scan report in XML file format and copy the report file to the KUMA Core server. For more details about scan tasks and output file formats, refer to the MaxPatrol documentation.

    Data cannot be imported from reports in SIEM integration file format. The XML file format must be selected.

  2. Create a file with the token for accessing the KUMA REST API. For convenience, it is recommended to place it into the MaxPatrol report folder. The file must not contain anything except the token.

    Requirements imposed on accounts for which the API token is generated:

  3. Copy the maxpatrol-tool to the server hosting the KUMA Core and make the tool's file executable by running the command chmod +x <path to maxpatrol-tool file on the server with the KUMA Core>.
  4. Run the maxpatrol-tool:

    ./maxpatrol-tool --kuma-rest <KUMA REST API server address and port> --token <path and name of API token file> --tenant <name of tenant where assets will reside> <path and name of MaxPatrol report file>

    Example: ./maxpatrol-tool --kuma-rest example.kuma.com:7223 --token token.txt --tenant Main example.xml

You can use additional flags and commands for import operations. For example, the --verbose (-v) flag displays a full report on the received assets. A detailed description of the available flags and commands is provided in the table titled Flags and commands of maxpatrol-tool. You can also use the --help command to view information on the available flags and commands.

The asset information will be imported from the MaxPatrol report to KUMA. The console displays information on the number of new and updated assets.

Example:

inserted 2 assets;

updated 1 asset;

errors occurred: []

The tool works as follows when importing assets:

  • The data of assets imported into KUMA through the API is overwritten, and information about their resolved vulnerabilities is deleted.
  • Assets with invalid data are skipped. Error information is displayed when using the --verbose flag.
  • If there are assets with identical IP addresses and fully qualified domain names (FQDN) in the same MaxPatrol report, these assets are merged. The information about their vulnerabilities and software is also merged into one asset.

    When uploading assets from MaxPatrol, assets that have equivalent IP addresses and fully qualified domain names (FQDN) that were previously imported from Kaspersky Security Center are overwritten.

    To avoid this problem, you must configure range-based asset filtering by running the command --ignore <IP address ranges> or -i <IP address ranges>. Assets that satisfy the filtering criteria are not uploaded. For a description of this command, please refer to the table titled Flags and commands of maxpatrol-tool.
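As a sketch, the parsing of an --ignore style value and the resulting skip decision could look like the following; the parsing logic is an assumption for illustration, not the tool's actual code.

```python
# Illustrative sketch of --ignore style range filtering; the parsing is an
# assumption for illustration, not the tool's actual code.
import ipaddress

def parse_ranges(spec: str) -> list:
    """Turn '8.8.0.0-8.8.255.255,10.10.0.1' into (low, high) address pairs."""
    ranges = []
    for part in spec.replace(" ", "").split(","):
        lo, _, hi = part.partition("-")
        # A single IP such as 10.10.0.1 becomes the range (10.10.0.1, 10.10.0.1).
        ranges.append((ipaddress.ip_address(lo), ipaddress.ip_address(hi or lo)))
    return ranges

def is_ignored(ip: str, ranges: list) -> bool:
    """True if the asset's IP falls inside any skipped range."""
    addr = ipaddress.ip_address(ip)
    return any(lo <= addr <= hi for lo, hi in ranges)

ignored = parse_ranges("8.8.0.0-8.8.255.255, 10.10.0.1")
print(is_ignored("8.8.4.4", ignored))    # True: inside 8.8.0.0-8.8.255.255
print(is_ignored("10.10.0.2", ignored))  # False: only 10.10.0.1 is listed
```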

Flags and commands of maxpatrol-tool:

  • --kuma-rest <KUMA REST API server address and port>, -a <KUMA REST API server address and port>—address (with the port) of the KUMA Core server to which assets will be imported, for example, example.kuma.com:7223. Port 7223 is used for API requests by default. You can change the port if necessary.
  • --token <path and name of API token file>, -t <path and name of API token file>—path and name of the file containing the token used to access the REST API. This file must contain only the token. The Administrator or Analyst role must be assigned to the user account for which the API token is being generated.
  • --tenant <tenant name>, -T <tenant name>—name of the KUMA tenant into which the assets from the MaxPatrol report will be imported.
  • --dns <IP address ranges>, -d <IP address ranges>—uses DNS to enrich IP addresses from the specified ranges with FQDNs if the FQDNs for these addresses were not already specified. Example: --dns 0.0.0.0-9.255.255.255,11.0.0.0-255.255.255.255,10.0.0.2
  • --dns-server <DNS server IP address>, -s <DNS server IP address>—address of the DNS server that the tool must contact to receive FQDN information. Example: --dns-server 8.8.8.8
  • --ignore <IP address ranges>, -i <IP address ranges>—address ranges of assets that should be skipped during import. Example: --ignore 8.8.0.0-8.8.255.255,10.10.0.1
  • --verbose, -v—output the complete report on received assets and any errors that occurred during the import process.
  • --help, -h, help—get reference information on the tool or a command. Examples: ./maxpatrol-tool help, ./maxpatrol-tool <command> --help
  • version—get information about the version of the maxpatrol-tool.
  • completion—create an autocompletion script for the specified shell.

Examples:

  • ./maxpatrol-tool --kuma-rest example.kuma.com:7223 --token token.txt --tenant Main example.xml—import assets to KUMA from the MaxPatrol report example.xml.
  • ./maxpatrol-tool help—get reference information on the tool.

Possible errors:

  • must provide path to xml file to import assets—the path to the MaxPatrol report file was not specified.
  • incorrect IP address format—invalid IP address format. This error may arise when incorrect IP ranges are indicated.
  • no tenants match specified name—no suitable tenants were found for the specified tenant name using the REST API.
  • unexpected number of tenants (%v) match specified name. Tenants are: %v—KUMA returned more than one tenant for the specified tenant name.
  • could not parse file due to error: %w—error reading the XML file containing the MaxPatrol report.
  • error decoding token: %w—error reading the API token file.
  • error when importing files to KUMA: %w—error transferring asset information to KUMA.
  • skipped asset with no FQDN and IP address—one of the assets in the report did not have an FQDN or IP address. Information about this asset was not sent to KUMA.
  • skipped asset with invalid FQDN: %v—one of the assets in the report had an incorrect FQDN. Information about this asset was not sent to KUMA.
  • skipped asset with invalid IP address: %v—one of the assets in the report had an incorrect IP address. Information about this asset was not sent to KUMA.
  • KUMA response: %v—an error occurred with the specified report when importing asset information.
  • unexpected status code %v—an unexpected HTTP code was received when importing asset information from KUMA.

Page top

[Topic 233671]

Importing asset information from KICS for Networks

After configuring KICS for Networks integration, tasks to obtain data about KICS for Networks assets are created automatically. This occurs:

  • Immediately after creating a new integration.
  • Immediately after changing the settings of an existing integration.
  • According to a regular schedule every several hours. Every 12 hours by default. The schedule can be changed.

Asset data update tasks can also be created manually.

To start a task to update KICS for Networks asset data for a tenant:

  1. In the KUMA web interface, open Settings → Kaspersky Industrial CyberSecurity for Networks.
  2. Select the relevant tenant.

    The Kaspersky Industrial CyberSecurity for Networks integration window opens.

  3. Click the Import assets button.

A task to receive asset data from the selected tenant is added to the Task manager section of the KUMA web interface.

Page top

[Topic 243031]

Examples of asset field comparison during import

Each imported asset is compared to the matching KUMA asset.

Checking for two-field value match in the IP, MAC, and FQDN fields:

  • KUMA asset—FQDN: filled in; IP: filled in; MAC: empty.
  • Imported asset 1—FQDN: filled in, matching; IP: filled in, matching; MAC: filled in.
  • Imported asset 2—FQDN: filled in, matching; IP: filled in, matching; MAC: empty.
  • Imported asset 3—FQDN: filled in, matching; IP: empty; MAC: filled in.
  • Imported asset 4—FQDN: empty; IP: filled in, matching; MAC: filled in.
  • Imported asset 5—FQDN: filled in, matching; IP: empty; MAC: empty.
  • Imported asset 6—FQDN: empty; IP: empty; MAC: filled in.

Comparison results:

  • Imported asset 1 and KUMA asset: the FQDN and IP fields are filled in and match, no conflict in the MAC fields between the two assets. The assets are merged.
  • Imported asset 2 and KUMA asset: the FQDN and IP fields are filled in and match. The assets are merged.
  • Imported asset 3 and KUMA asset: the FQDN and MAC fields are filled in and match, no conflict in the IP fields between the two assets. The assets are merged.
  • Imported asset 4 and KUMA asset: the IP fields are filled in and match, no conflict in the FQDN and MAC fields between the two assets. The assets are merged.
  • Imported asset 5 and KUMA asset: the FQDN fields are filled in and match, no conflict in the IP and MAC fields between the two assets. The assets are merged.
  • Imported asset 6 and KUMA asset: no matching fields. The assets are not merged.

Checking for single-field value match in the IP, MAC, and FQDN fields:

  • KUMA asset—FQDN: empty; IP: filled in; MAC: empty.
  • Imported asset 1—FQDN: filled in; IP: filled in, matching; MAC: filled in.
  • Imported asset 2—FQDN: filled in; IP: filled in, matching; MAC: empty.
  • Imported asset 3—FQDN: filled in; IP: empty; MAC: filled in.
  • Imported asset 4—FQDN: empty; IP: empty; MAC: filled in.

Comparison results:

  • Imported asset 1 and KUMA asset: the IP fields are filled in and match, no conflict in the FQDN and MAC fields between the two assets. The assets are merged.
  • Imported asset 2 and KUMA asset: the IP fields are filled in and match, no conflict in the FQDN and MAC fields between the two assets. The assets are merged.
  • Imported asset 3 and KUMA asset: no matching fields. The assets are not merged.
  • Imported asset 4 and KUMA asset: no matching fields. The assets are not merged.
Page top

[Topic 235241]

Assigning a category to an asset

To assign a category to one asset:

  1. In the KUMA web interface, go to the Assets section.
  2. Select the category with the relevant assets.

    The assets table is displayed.

  3. Select an asset.
  4. In the opened window, click the Edit button.
  5. In the Categories field, click the parent-category button.
  6. Select a category.

    If you want to move an asset to the Uncategorized assets section, you must delete the existing categories for the asset by clicking the cross button.

  7. Click the Save button.

The category will be assigned.

To assign a category to multiple assets:

  1. In the KUMA web interface, go to the Assets section.
  2. Select the category with the relevant assets.

    The assets table is displayed.

  3. Select the check boxes next to the assets for which you want to change the category.
  4. Click the Link to category button.
  5. In the opened window, select a category.
  6. Click the Save button.

The category will be assigned.

Do not assign the Categorized assets category to assets.

Page top

[Topic 217852]

Editing the parameters of assets

In KUMA, you can edit asset parameters. All the parameters of manually added assets can be edited. For assets imported from Kaspersky Security Center, you can only change the name of the asset and its category.

To change the parameters of an asset:

  1. In the Assets section of the KUMA web interface, click the asset that you want to edit.

    The Asset details area opens in the right part of the window.

  2. Click the Edit button.

    The Edit asset window opens.

  3. Make the changes you need in the available fields:
    • Asset name (required). This is the only field available for editing if the asset was imported from Kaspersky Security Center or KICS for Networks.
    • IP address and/or FQDN (required)
    • MAC address
    • Owner
    • Software info:
      • OS name
      • OS build
    • Hardware info:

      Hardware parameters

      You can add information about asset hardware to the Hardware info section:

      Available fields for describing the asset CPU:

      • CPU name
      • CPU frequency
      • CPU core count

      You can add CPUs to the asset by using the Add CPU link.

      Available fields for describing the asset disk:

      • Disk free bytes
      • Disk volume

      You can add disks to the asset by using the Add disk link.

      Available fields for describing the asset RAM:

      • RAM frequency
      • RAM total bytes

      Available fields for describing the asset network card:

      • Network card name
      • Network card manufacturer
      • Network card driver version

      You can add network cards to the asset by using the Add network card link.

  4. Assign or change the category of the asset:
    1. Click the button with the parent-category icon.

      The Select categories window opens.

    2. Select the check boxes next to the categories that should be assigned to the asset.
    3. Click Save.

    The selected categories appear in the Categories field.

    You can also select the asset and then drag and drop it into the relevant category. This category will be added to the list of asset categories.

    Do not assign the Categorized assets category to assets.

  5. Click the Save button.

Asset parameters have been changed.

Page top

[Topic 217832]

Deleting assets

You can delete assets in KUMA.

To delete an asset:

  1. In the Assets section of the KUMA web interface, click the asset that you want to delete.

    The Asset details area opens in the right part of the window.

  2. Click the Delete button.

    A confirmation window opens.

  3. Click OK.

The asset is deleted.

Assets imported from Kaspersky Security Center are deleted automatically if their information has not been updated in 30 days. An absence of updated information may be caused by a lack of data on the asset in Kaspersky Security Center or due to disconnection of KUMA from the Kaspersky Security Center server. If an asset was deleted but KUMA starts receiving information about that asset again from Kaspersky Security Center, the asset is recreated with the same ID. If you manually recreate an automatically deleted asset, the new asset will have an ID that is different from the ID of the old asset.
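
The automatic deletion rule can be modeled as a simple staleness check. The sketch below is an illustration under the assumption that each asset stores a last-update timestamp; it is not KUMA's actual implementation:

```python
from datetime import datetime, timedelta

# Imported assets are deleted if their information has not been
# updated for 30 days (simplified model for illustration).
RETENTION = timedelta(days=30)

def stale_assets(assets, now):
    """Return imported assets whose data has not been updated within 30 days."""
    return [a for a in assets if now - a["last_updated"] > RETENTION]

now = datetime(2023, 5, 1)
assets = [
    {"id": "asset-1", "last_updated": datetime(2023, 4, 25)},  # updated recently, kept
    {"id": "asset-2", "last_updated": datetime(2023, 3, 1)},   # stale, deleted
]
print([a["id"] for a in stale_assets(assets, now)])  # → ['asset-2']
```

Note that if the same asset reappears in Kaspersky Security Center data later, KUMA recreates it with the same ID, whereas a manually recreated asset gets a new ID.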

Page top

[Topic 235047]

Updating third-party applications and fixing vulnerabilities on Kaspersky Security Center assets

You can update third-party applications (including Microsoft applications) that are installed on Kaspersky Security Center assets, and fix vulnerabilities in these applications.

First you need to create the Install required updates and fix vulnerabilities task on the selected Kaspersky Security Center Administration Server with the following settings:

  • Application—Kaspersky Security Center.
  • Task type—Install required updates and fix vulnerabilities.
  • Devices to which the task will be assigned—you need to assign the task to the root administration group.
  • Rules for installing updates:
    • Install approved updates only.
    • Fix vulnerabilities with a severity level equal to or higher than (optional setting).

      If this setting is enabled, updates fix only those vulnerabilities for which the severity level set by Kaspersky is equal to or higher than the value selected in the list (Medium, High, or Critical). Vulnerabilities with a severity level lower than the selected value are not fixed.

  • Scheduled start—the task run schedule.
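
The severity threshold acts as a lower-bound filter on the severity scale. A minimal sketch of this rule (the numeric ranks and sample IDs are assumptions for illustration, not Kaspersky Security Center internals):

```python
# Simplified model of the "Fix vulnerabilities with a severity level
# equal to or higher than" rule. Rank values are illustrative.
SEVERITY_RANK = {"Medium": 1, "High": 2, "Critical": 3}

def vulnerabilities_to_fix(vulnerabilities, threshold):
    """Return vulnerabilities whose severity meets or exceeds the threshold."""
    min_rank = SEVERITY_RANK[threshold]
    return [v for v in vulnerabilities
            if SEVERITY_RANK.get(v["severity"], 0) >= min_rank]

vulns = [
    {"id": "KLA-1001", "severity": "Medium"},
    {"id": "KLA-1002", "severity": "High"},
    {"id": "KLA-1003", "severity": "Critical"},
]
print([v["id"] for v in vulnerabilities_to_fix(vulns, "High")])
# → ['KLA-1002', 'KLA-1003']
```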

For details on how to create a task, please refer to the Kaspersky Security Center Help Guide.

The Install required updates and fix vulnerabilities task is available with a Vulnerability and Patch Management license.

Next, you need to install updates for third-party applications and fix vulnerabilities on assets in KUMA.

To install updates and fix vulnerabilities in third-party applications on an asset in KUMA:

  1. Open the asset details window in one of the following ways:
    • In the KUMA web interface, select Assets → select a category with the relevant assets → select an asset.
    • In the KUMA web interface, select Alerts → click the link with the relevant alert → select the asset in the Related endpoints section.
    • In the KUMA web interface, select Events, find the relevant event using the search and filter functions, and click the link in one of the following fields: SourceAssetID, DestinationAssetID, or DeviceAssetID.
  2. In the asset details window, expand the list of Kaspersky Security Center vulnerabilities.
  3. Select the check boxes next to the applications that you want to update.
  4. Click the Upload updates link.
  5. In the opened window, select the check box next to the ID of the vulnerability that you want to fix.
  6. If No is displayed in the EULA accepted column for the selected ID, click the Approve updates button.
  7. Click the link in the EULA URL column and carefully read the text of the End User License Agreement.
  8. If you agree to it, click the Accept selected EULAs button in the KUMA web interface.

    The ID of the vulnerability for which the EULA was accepted shows Yes in the EULA accepted column.

  9. Repeat steps 5–8 for each required vulnerability ID.
  10. Click OK.

Updates will be uploaded and installed on the assets managed by the Administration Server where the task was started, and on the assets of all secondary Administration Servers.

The terms of the End User License Agreement for updates and vulnerability patches must be accepted on each secondary Administration Server separately.

Updates are installed on assets where the vulnerability was detected.

You can update the list of vulnerabilities for an asset in the asset details window by clicking the Update link.

Page top

[Topic 235060]

Moving assets to a selected administration group

You can move assets to a selected administration group of Kaspersky Security Center. In this case, the group policies and tasks will be applied to the assets. For more details on Kaspersky Security Center tasks and policies, please refer to the Kaspersky Security Center Help Guide.

Administration groups are added to KUMA when the hierarchy is loaded during import of assets from Kaspersky Security Center. First, you need to configure KUMA integration with Kaspersky Security Center.

To move an asset to a selected administration group:

  1. Open the asset details window in one of the following ways:
    • In the KUMA web interface, select Assets → select a category with the relevant assets → select an asset.
    • In the KUMA web interface, select Alerts → click the link with the relevant alert → select the asset in the Related endpoints section.
    • In the KUMA web interface, select Events, find the event you need using search and filter functions, and click the link in the DeviceExternalID field of the event.
  2. In the asset details window, click the Move to KSC group button.
  3. Select the group in the opened window.

    The selected group must be owned by the same tenant as the asset.

  4. Click the Save button.

The selected asset will be moved.

To move multiple assets to a selected administration group:

  1. In the KUMA web interface, select the Assets section.
  2. Select the category with the relevant assets.
  3. Select the check boxes next to the assets that you want to move to the group.
  4. Click the Move to KSC group button.

    The button is active if all selected assets belong to the same Administration Server.

  5. Select the group in the opened window.
  6. Click the Save button.

The selected assets will be moved.

You can see the specific group of an asset in the asset details.

Information about Kaspersky Security Center assets is updated in KUMA only when assets are imported from Kaspersky Security Center. As a result, assets may have been moved between administration groups in Kaspersky Security Center before this change is displayed in KUMA. When an attempt is made to move such an asset to an administration group in which it is already located, KUMA returns the Failed to move assets to another KSC group error.

Page top

[Topic 233934]

Asset audit

KUMA can be configured to generate asset audit events under the following conditions:

  • Asset was added to KUMA. The application monitors manual asset creation, as well as creation during import via the REST API and during import from Kaspersky Security Center or KICS for Networks.
  • Asset parameters have been changed. A change in the value of the following asset fields is monitored:
    • Name
    • IP address
    • MAC address
    • FQDN
    • Operating system

    Fields may be changed when an asset is updated during import.

  • Asset was deleted from KUMA. The program monitors manual deletion of assets, as well as automatic deletion of assets imported from Kaspersky Security Center and KICS for Networks, whose data is no longer being received.
  • Vulnerability info was added to the asset. The program monitors the appearance of new vulnerability data for assets. Information about vulnerabilities can be added to an asset, for example, when importing assets from Kaspersky Security Center or KICS for Networks.
  • Asset vulnerability was resolved. The program monitors the removal of vulnerability information from an asset. A vulnerability is considered to be resolved if data about this vulnerability is no longer received from any sources from which information about its occurrence was previously obtained.
  • Asset was added to a category. The program monitors the assignment of an asset category to an asset.
  • Asset was removed from a category. The program monitors the deletion of an asset from an asset category.

Asset audit events can be sent to storage or to correlators, for example.
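
The vulnerability-resolution condition above can be modeled with sets: a vulnerability counts as resolved once no source that previously reported it still does. The sketch below is a simplified illustration, not KUMA's implementation:

```python
# previous/current map a vulnerability ID to the set of sources
# (e.g. Kaspersky Security Center, KICS for Networks) reporting it.
def resolved_vulnerabilities(previous, current):
    """Return IDs of vulnerabilities no longer reported by any source."""
    return {vuln for vuln in previous if not current.get(vuln)}

previous = {"CVE-2021-0001": {"KSC", "KICS"}, "CVE-2021-0002": {"KSC"}}
current = {"CVE-2021-0001": {"KICS"}, "CVE-2021-0002": set()}
print(sorted(resolved_vulnerabilities(previous, current)))
# → ['CVE-2021-0002']
```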

In this section

Configuring an asset audit

Storing and searching asset audit events

Enabling and disabling an asset audit

Page top

[Topic 233948]

Configuring an asset audit

To configure an asset audit:

  1. In the KUMA web interface, open SettingsAsset audit.
  2. Perform one of the following actions with the tenant for which you want to configure asset audit:
    • Add the tenant by using the Add tenant button if this is the first time you are configuring asset audit for the relevant tenant.

      In the opened Asset audit window, select a name for the new tenant.

    • Select an existing tenant in the table if asset audit has already been configured for the relevant tenant.

      In the opened Asset audit window, the tenant name is already defined and cannot be edited.

    • Clone the settings of an existing tenant to create a copy of the conditions configuration for the tenant for which you are configuring asset audit for the first time. To do so, select the check box next to the tenant whose configuration you need to copy and click Clone. In the opened Asset audit window, select the name of the tenant to use the copied configuration.
  3. For each condition for generating asset audit events, select the destination to where the created events will be sent:
    1. In the settings block of the relevant type of asset audit events, use the Add destination drop-down list to select the type of destination to which the created events should be sent:
      • Select Storage if you want events to be sent to storage.
      • Select Correlator if you want events to be sent to the correlator.
      • Select Other if you want to select a different destination.

        This type of resource includes correlator and storage services that were created in previous versions of the program.

      In the Add destination window that opens you must define the settings for event forwarding.

    2. Use the Destination drop-down list to select an existing destination or select Create if you want to create a new destination.

      If you are creating a new destination, fill in the settings as indicated in the destination resource description.

    3. Click Save.

    A destination has been added to the condition for generating asset audit events. Multiple destinations can be added for each condition. You can also disable a previously configured condition for creating asset audit events by selecting the Disabled check box next to the relevant condition.

  4. Click Save.

The asset audit has been configured. Asset audit events will be generated for those conditions for which destinations have been added. You can also disable asset audit for an existing tenant. To do so, click the relevant tenant and select the Disabled check box in the upper part of the opened Asset audit window. Click Save.

Page top

[Topic 233950]

Storing and searching asset audit events

Asset audit events are considered to be base events and do not replace audit events. Asset audit events can be searched based on the following parameters:

| Event field | Value |
|---|---|
| DeviceVendor | Kaspersky |
| DeviceProduct | KUMA |
| DeviceEventCategory | Audit assets |
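
For example, a minimal check for these field values could look like this (a local illustration of the match conditions, not KUMA's search syntax):

```python
def is_asset_audit_event(event):
    """True if the event carries the asset audit field values listed above."""
    return (event.get("DeviceVendor") == "Kaspersky"
            and event.get("DeviceProduct") == "KUMA"
            and event.get("DeviceEventCategory") == "Audit assets")

events = [
    {"DeviceVendor": "Kaspersky", "DeviceProduct": "KUMA",
     "DeviceEventCategory": "Audit assets", "Name": "Asset was added to KUMA"},
    {"DeviceVendor": "Kaspersky", "DeviceProduct": "KUMA",
     "DeviceEventCategory": "Audit", "Name": "User logged in"},
]
print(sum(1 for e in events if is_asset_audit_event(e)))  # → 1
```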

Page top

[Topic 233949]

Enabling and disabling an asset audit

You can enable or disable asset audits for a tenant or for a specific condition within a single tenant.

To enable or disable an asset audit for a tenant:

  1. In the KUMA web interface, open SettingsAsset audit and select the tenant for which you want to enable or disable an asset audit.

    The Asset audit window opens.

  2. Select or clear the Disabled check box in the upper part of the window.
  3. Click Save.

To enable or disable an individual condition for generating asset audit events:

  1. In the KUMA web interface, open SettingsAsset audit and select the tenant for which you want to enable or disable a condition for generating asset audit events.

    The Asset audit window opens.

  2. Select or clear the Disabled check box next to the relevant conditions.
  3. Click Save.
Page top

[Topic 217937]

Managing users

Multiple users can have access to KUMA. Users are assigned roles that determine which tasks they can perform. The same user can have different roles in different tenants.

You can create or edit user accounts under SettingsUsers in the KUMA web interface. Users are also created automatically in the program if KUMA integration with Active Directory is enabled and the user is logging in to the KUMA web interface for the first time using their domain account.

The table of user accounts is displayed in the Users window of the KUMA web interface. You can use the Search field to look for users. You can sort the table based on the User information column by clicking the column header and selecting Ascending or Descending.

User accounts can be created, edited, or disabled. When editing user accounts (your own or the accounts of others), you can generate an API token for them.

By default, disabled user accounts are not displayed in the users table. However, they can be viewed by clicking the User information column and selecting the Disabled users check box.

To disable a user:

In the KUMA web interface, under SettingsUsers, select the check box next to the relevant user and click Disable user.

In this Help topic

User roles

Creating a user

Editing a user

Editing your user account

Page top

[Topic 218031]

User roles

KUMA users may have the following roles:

  • General administrator—this role is designed for users who are responsible for the core functionality of KUMA systems. For example, they install system components, perform maintenance, work with services, create backups, and add users to the system. These users have full access to KUMA.
  • Administrator—this role is for users responsible for the core functionality of KUMA systems owned by specific tenants.
  • Analyst—this role is for users responsible for configuring the KUMA system to receive and process events of a specific tenant. They also create and tweak correlation rules.
  • Operator—this role is for users dealing with immediate security threats of a specific tenant. A user with the operator role sees resources in a shared tenant through the REST API.

    User roles rights

    Web interface section and actions

    General administrator

    Administrator

    Analyst

    Operator

    Comment

    Reports

     

     

     

     

     

    View and edit templates and reports

    filled in

    filled in

    filled in

    no

    Analysts can:

    View and edit templates and reports that they created themselves.

    View reports sent to them by email.

    View predefined templates.

    Generate reports

    filled in

    filled in

    filled in

    no

    Analysts can generate reports that they created themselves or that are predefined (from a template or report).

    Analysts cannot generate reports sent to them by email.

    Export generated reports

    filled in

    filled in

    filled in

    no

    Analysts can export the following:

    Reports that they created themselves.

    Predefined reports.

    Reports received by email.

    Delete templates and generated reports

    filled in

    filled in

    filled in

    no

    Analysts can delete the templates and reports that they generated themselves.

    Analysts should not delete:

    Predefined templates.

    Reports received by email.

    Only the general administrator can delete predefined templates and reports.

    Edit the settings for generating reports

    filled in

    filled in

    filled in

    no

    Analysts may change the settings for generating reports that they created themselves or that are predefined.

    Duplicate report template

    filled in

    filled in

    filled in

    no

    Analysts can duplicate predefined report templates and report templates that they created themselves.

    Dashboard

     

     

     

     

     

    View data on the dashboard and change layouts

    filled in

    filled in

    filled in

    filled in

     

    Add layouts

    filled in

    filled in

    filled in

    no

    This includes adding widgets to a layout.

    Edit and rename layouts

    filled in

    filled in

    filled in

    no

    This includes adding, editing, and deleting widgets.

    Analysts may change/rename predefined layouts and layouts that were created using their account.

    Delete layouts

    filled in

    filled in

    filled in

    no

    Tenant administrators may delete layouts in the tenants available to them.

    Analysts may delete layouts that were created using their account.

    Only the general administrator can delete predefined layouts.

    ResourcesServices and ResourcesServicesActive services

     

     

     

     

     

    View the list of active services

    filled in

    filled in

    filled in

    no

    Only the general administrator can view and delete storage spaces.

    Access rights do not depend on the tenants selected in the menu.

    View the contents of the active list 

    filled in

    filled in

    filled in

    no

     

    Import/export/clear the contents of the active list

    filled in

    filled in

    filled in

    no

     

    Create a set of resources for services

    filled in

    filled in

    filled in

    no

    Analysts cannot create storages.

    Create a service under Resources → Services → Active services 

    filled in

    filled in

    no

    no

     

    Delete services

    filled in

    filled in

    no

    no

     

    Restart services

    filled in

    filled in

    no

    no

     

    Update the settings of services

    filled in

    filled in

    filled in

    no

     

    Reset certificates

    filled in

    filled in

    no

    no

    A user with the administrator role can reset the certificates of services only in the tenants that are accessible to the user.

    ResourcesResources

     

     

     

     

     

    View the list of resources

    filled in

    filled in

    filled in

    no*

    Analysts cannot view the list of secret resources, but these resources are available to them when they create services.

    Add resources

    filled in

    filled in

    filled in

    no

    Analysts cannot add secret resources.

    Edit resources

    filled in

    filled in

    filled in

    no

    Analysts cannot change secret resources.

    Create/edit/delete resources in a shared tenant

    filled in

    no

    no

    no

     

    Delete resources

    filled in

    filled in

    filled in

    no

    Analysts cannot delete secret resources.

    Import resources

    filled in

    filled in

    filled in

    no

    Only the general administrator can import resources to a shared tenant.

    Export resources

    filled in

    filled in

    filled in

    no

    This includes resources from a shared tenant.

    View/edit collector or correlator drafts 

    filled in

    filled in

    filled in

    no

    The user may only access their own drafts, regardless of the selected tenant. The list of drafts is generated based on those that belong to the user.

    Sources statusList of event sources

     

     

     

     

     

    View sources of events

    filled in

    filled in

    filled in

    filled in

     

    Change sources of events

    filled in

    filled in

    filled in

    no

    Edit source name, assign monitoring policy, disable monitoring policy.

    Delete sources of events

    filled in

    filled in

    filled in

    no

     

    Sources statusMonitoring policies

     

     

     

     

     

    View monitoring policies

    filled in

    filled in

    filled in

    filled in

     

    Create monitoring policies

    filled in

    filled in

    filled in

    no

     

    Edit monitoring policies

    filled in

    filled in

    filled in

    no

    Only the general administrator can edit the predefined monitoring policies.

    Delete monitoring policies

    filled in

    filled in

    filled in

    no

    Predefined policies cannot be removed.

    Assets

     

     

     

     

     

    View assets and asset categories

    filled in

    filled in

    filled in

    filled in

    This includes shared tenant categories.

    Add/edit/delete asset categories

    filled in

    filled in

    filled in

    no

    Within the tenant available to the user.

    Add asset categories in a shared tenant

    filled in

    no

    no

    no

    This includes editing and deleting shared tenant categories.

    Link assets to an asset category of the shared tenant

    filled in

    filled in

    filled in

    no

     

    Add assets

    filled in

    filled in

    filled in

    no

     

    Edit assets

    filled in

    filled in

    filled in

    no

     

    Delete assets

    filled in

    filled in

    filled in

    no

     

    Import assets from Kaspersky Security Center

    filled in

    filled in

    filled in

    no

     

    Start tasks on assets in Kaspersky Security Center

    filled in

    filled in

    filled in

    no

     

    Run tasks on Kaspersky Endpoint Detection and Response assets

    filled in

    filled in

    filled in

    no

     

    Alerts

     

     

     

     

     

    View the list of alerts

    filled in

    filled in

    filled in

    filled in

     

    Change the severity of alerts

    filled in

    filled in

    filled in

    filled in

     

    Open the details of alerts

    filled in

    filled in

    filled in

    filled in

     

    Assign responsible users

    filled in

    filled in

    filled in

    filled in

     

    Close alerts

    filled in

    filled in

    filled in

    filled in

     

    Add comments to alerts

    filled in

    filled in

    filled in

    filled in

     

    Attach an event to alerts

    filled in

    filled in

    filled in

    filled in

     

    Detach an event from alerts

    filled in

    filled in

    filled in

    filled in

     

    Edit and delete someone else's filters

    filled in

    filled in

    no

    no

     

    Incidents

     

     

     

     

     

    View the list of incidents

    filled in

    filled in

    filled in

    filled in

     

    Create blank incidents

    filled in

    filled in

    filled in

    filled in

     

    Manually create incidents from alerts

    filled in

    filled in

    filled in

    filled in

     

    Change the severity of incidents

    filled in

    filled in

    filled in

    filled in

     

    Open the details of incidents

    filled in

    filled in

    filled in

    filled in

    Incident details display data from only those tenants to which the user has access.

    Assign executors

    filled in

    filled in

    filled in

    filled in

     

    Close incidents

    filled in

    filled in

    filled in

    filled in

     

    Add comments to incidents

    filled in

    filled in

    filled in

    filled in

     

    Attach alerts to incidents

    filled in

    filled in

    filled in

    filled in

     

    Detach alerts from incidents

    filled in

    filled in

    filled in

    filled in

     

    Edit and delete someone else's filters

    filled in

    filled in

    no

    no

     

    Export incidents to RuCERT

    filled in

    filled in

    filled in

    filled in

     

    Events

     

     

     

     

     

    View the list of events

    filled in

    filled in

    filled in

    filled in

     

    Search events

    filled in

    filled in

    filled in

    filled in

     

    Open the details of events

    filled in

    filled in

    filled in

    filled in

     

    Open statistics

    filled in

    filled in

    filled in

    filled in

     

    Conduct a retroscan

    filled in

    filled in

    filled in

    no

     

    Export events to a TSV file

    filled in

    filled in

    filled in

    filled in

     

    Edit and delete someone else's filters

    filled in

    filled in

    no

    no

     

    Start ktl enrichment

    filled in

    filled in

    filled in

    no

     

    Run tasks on Kaspersky Endpoint Detection and Response assets in event details

    filled in

    filled in

    filled in

    no

     

    SettingsUsers

     

     

     

     

    This section is available only to the general administrator.

    View the list of users

    filled in

    no

    no

    no

     

    Add a user

    filled in

    no

    no

    no

     

    Edit a user

    filled in

    no

    no

    no

     

    View the data of their own profile

    filled in

    filled in

    filled in

    filled in

     

    Edit the data of their own profile

    filled in

    filled in

    filled in

    filled in

    The user role is not available for change.

    SettingsLDAP server

     

     

     

     

     

    View the LDAP connection settings

    filled in

    filled in

    no

    no

     

    Edit the LDAP connection settings

    filled in

    filled in

    no

    no

     

    SettingsTenants

     

     

     

     

    This section is available only to the general administrator.

    View the list of tenants

    filled in

    no

    no

    no

     

    Add tenants

    filled in

    no

    no

    no

     

    Change tenants

    filled in

    no

    no

    no

     

    Disable tenants

    filled in

    no

    no

    no

     

    SettingsDomain authorization

     

     

     

     

    This section is available only to the general administrator.

    View the Active Directory connection settings

    filled in

    no

    no

    no

     

    Edit the Active Directory connection settings

    filled in

    no

    no

    no

     

    Add filters based on roles for tenants

    filled in

    no

    no

    no

     

    SettingsGeneral

     

     

     

     

    This section is available only to the general administrator.

    View the SMTP connection settings

    filled in

    no

    no

    no

     

    Edit the SMTP connection settings

    filled in

    no

    no

    no

     

    SettingsLicense

     

     

     

     

    This section is available only to the general administrator.

    View the list of added license keys

    filled in

    no

    no

    no

     

    Add license keys

    filled in

    no

    no

    no

     

    Delete license keys

    filled in

    no

    no

    no

     

    SettingsKaspersky Security Center

     

     

     

     

     

    View the list of successfully integrated Kaspersky Security Center servers

    filled in

    filled in

    no

    no

     

    Add Kaspersky Security Center connections

    filled in

    filled in

    no

    no

     

    Delete Kaspersky Security Center connections

    filled in

    filled in

    no

    no

     

    SettingsKaspersky CyberTrace

     

     

     

     

    This section is available only to the general administrator.

    View the CyberTrace integration settings

    filled in

    no

    no

    no

     

    Edit the CyberTrace integration settings

    filled in

    no

    no

    no

     

    SettingsIRP / SOAR

     

     

     

     

    This section is available only to the general administrator.

    View the settings for integration with IRP / SOAR

    filled in

    no

    no

    no

     

| Web interface section and action | General administrator | Administrator | Analyst | Operator | Comment |
|---|---|---|---|---|---|
| Edit the settings for integration with IRP / SOAR | filled in | no | no | no |  |
| Settings → Kaspersky Threat Lookup |  |  |  |  | This section is available only to the general administrator. |
| View the Threat Lookup integration settings | filled in | no | no | no |  |
| Edit the Threat Lookup integration settings | filled in | no | no | no |  |
| Settings → Alerts |  |  |  |  |  |
| View the parameters | filled in | filled in | filled in | no |  |
| Edit the parameters | filled in | filled in | filled in | no |  |
| Settings → Incidents → Automatic linking of alerts to incidents |  |  |  |  |  |
| See the settings | filled in | no | no | no |  |
| Edit the settings | filled in | no | no | no |  |
| Settings → Incidents → Incident types |  |  |  |  |  |
| View the categories reference | filled in | filled in | no | no |  |
| View the categories charts | filled in | filled in | no | no |  |
| Add categories | filled in | filled in | no | no | Available if the user has the administrator role in at least one tenant. |
| Edit categories | filled in | filled in | no | no | Available if the user has the administrator role in at least one tenant. |
| Delete categories | filled in | filled in | no | no | Available if the user has the administrator role in at least one tenant. |
| Settings → RuCERT |  |  |  |  |  |
| View the parameters | filled in | no | no | no |  |
| Edit the parameters | filled in | no | no | no |  |
| Settings → Hierarchy |  |  |  |  |  |
| View the parameters | filled in | no | no | no |  |
| Edit the parameters | filled in | no | no | no |  |
| View incidents from child nodes | filled in | filled in | filled in | filled in |  |
| Metrics |  |  |  |  |  |
| Open metrics | filled in | no | no | no |  |
| Task manager |  |  |  |  |  |
| View a list of your own tasks | filled in | filled in | filled in | filled in | The section and tasks are not tied to a tenant. The tasks are available only to the user who created them. |
| Finish your own tasks | filled in | filled in | filled in | filled in |  |
| Restart your own tasks | filled in | filled in | filled in | filled in |  |
| View a list of all tasks | filled in | no | no | no |  |
| Finish any task | filled in | no | no | no |  |
| Restart any task | filled in | no | no | no |  |
| CyberTrace |  |  |  |  | This section is not displayed in the web interface unless CyberTrace integration is configured under Settings → CyberTrace. |
| Open the section | filled in | no | no | no |  |
| Access to the data of tenants |  |  |  |  |  |
| Access to tenants | filled in | filled in | filled in | filled in | A user has access to the tenant if its name is indicated in the settings blocks of the roles assigned to the user account. The access level depends on which role is indicated for the tenant. |
| Shared tenant | filled in | filled in | filled in | filled in | A shared tenant is used to store shared resources that must be available to all tenants. Although services cannot be owned by the shared tenant, these services may use resources that are owned by it; such services are still owned by their respective tenants. Events, alerts, and incidents cannot be shared. Permissions to access the shared tenant: read/write for the general administrator only; read for all other users, including users that have permissions to access the main tenant. |
| Main tenant | filled in | filled in | filled in | filled in | A user has access to the main tenant if its name is indicated in the settings blocks of the roles assigned to the user account. The access level depends on which role is indicated for the tenant. Permissions to access the main tenant do not grant access to other tenants. |

Page top

[Topic 217796]

Creating a user

To create a user account:

  1. In the KUMA web interface, open Settings → Users.

    The Users table will be displayed in the right part of the Settings section.

  2. Click the Add user button and set the parameters as described below.
    • Name (required)—enter the user name. Must contain from 1 to 128 Unicode characters.
    • Login (required)—enter a unique login for the user account. Must contain from 3 to 64 characters (only a–z, A–Z, 0–9, . \ - _).
    • Email (required)—enter the unique email address of the user. Must be a valid email address.
    • New password (required)—enter the password to the user account. Password requirements:
      • 8 to 128 characters long.
      • At least one lowercase character.
      • At least one uppercase character.
      • At least one numeral.
      • At least one of the following special characters: !, @, #, %, ^, &, *.
    • Confirm password (required)—enter the password again for confirmation.
    • Disabled—select this check box if you want to disable a user account. By default, this check box is cleared.
    • In the Tenants for roles settings block, use the Add field buttons to specify which roles the user will perform on which tenants. Although a user can have different roles on different tenants, the user can have only one role on the same tenant.
    • Receive email notifications—select this check box if you want the user to receive SMTP notifications from KUMA.
    • Select the Can interact with RuCERT check box if you want the user to be able to export incidents to RuCERT. Only a user with the General Administrator role can select this check box.
    • Select the General administrators group check box if you want to assign the general administrator role to the user. Users with the general administrator role can change the settings of other user accounts. By default, this check box is cleared.
  3. Click Save.

The user account will be created and displayed in the Users table.
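The password policy described above can be checked mechanically. The following shell sketch is not part of KUMA (the `check_password` function name is ours); it simply encodes the same rules, which can be handy when preparing accounts in bulk:

```shell
# Sketch: validate a candidate password against the complexity rules listed above.
# check_password returns 0 only if every rule is satisfied.
check_password() {
  p="$1"
  [ "${#p}" -ge 8 ] && [ "${#p}" -le 128 ] || return 1   # 8 to 128 characters
  printf '%s' "$p" | grep -q '[a-z]'       || return 1   # at least one lowercase
  printf '%s' "$p" | grep -q '[A-Z]'       || return 1   # at least one uppercase
  printf '%s' "$p" | grep -q '[0-9]'       || return 1   # at least one numeral
  printf '%s' "$p" | grep -q '[!@#%^&*]'   || return 1   # at least one special character
}

check_password 'Str0ng!pass' && echo "password accepted"
```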

Page top

[Topic 217858]

Editing a user

To edit a user:

  1. In the KUMA web interface, open Settings → Users.

    The Users table will be displayed in the right part of the Settings section.

  2. Select the relevant user and change the necessary settings in the user details area that opens on the right.
    • Name (required)—edit the user name. Must contain from 1 to 128 Unicode characters.
    • Login (required)—enter a unique login for the user account. Must contain from 3 to 64 characters (only a–z, A–Z, 0–9, . \ - _).
    • Email (required)—enter the unique email address of the user. Must be a valid email address.
    • Disabled—select this check box if you want to disable a user account. By default, this check box is cleared.
    • In the Tenants for roles settings block, use the Add field buttons to specify which roles the user will perform on which tenants. Although a user can have different roles on different tenants, the user can have only one role on the same tenant.
    • Receive email notifications—select this check box if you want the user to receive SMTP notifications from KUMA.
    • Select the Can interact with RuCERT check box if you want the user to be able to export incidents to RuCERT. Only a user with the General Administrator role can select this check box.
    • Select the General administrators group check box if you want to assign the general administrator role to the user. Users with the general administrator role can change the settings of other user accounts. By default, this check box is cleared.
  3. If you need to change the password, click the Change password button and fill in the fields described below in the opened window. When finished, click OK.
    • Current password (required)—enter the current password of your user account. The field is available if you change your account password.
    • New password (required)—enter a new password to the user account. Password requirements:
      • 8 to 128 characters long.
      • At least one lowercase character.
      • At least one uppercase character.
      • At least one numeral.
      • At least one of the following special characters: !, @, #, %, ^, &, *.
    • Confirm password (required)—enter the password again for confirmation.
  4. If necessary, use the Generate token button to generate an API token. Clicking this button displays the token creation window.
  5. If necessary, configure the operations available to the user via the REST API by using the API access rights button.
  6. Click Save.

The user account will be changed.

Page top

[Topic 217861]

Editing your user account

To edit your user account:

  1. Open the KUMA web interface, click the name of your user account in the bottom-left corner of the window and click the Profile button in the opened menu.

    The User window with your user account parameters opens.

  2. Make the necessary changes to the parameters:
    • Name (required)—enter the user name. Must contain from 1 to 128 Unicode characters.
    • Login (required)—enter a unique login for the user account. Must contain from 3 to 64 characters (only a–z, A–Z, 0–9, . \ - _).
    • Email (required)—enter the unique email address of the user. Must be a valid email address.

    • Receive email notifications—select this check box if you want to receive SMTP notifications from KUMA.
    • Display non-printable characters—select this check box if you want the KUMA web interface to display non-printing characters such as spaces, tab characters, and line breaks.

      Spaces and tab characters are displayed in all input fields (except Description), in normalizers, correlation rules, filters and connectors, and in SQL queries for searching events in the Events section.

      Spaces are displayed as dots.

      A tab character is displayed as a dash in normalizers, correlation rules, filters and connectors. In other fields, a tab character is displayed as one or two dots.

      Line break characters are displayed in all input fields that support multi-line input, such as the event search field.

      If the Display non-printable characters check box is selected, you can press Ctrl/Command+* to enable and disable the display of non-printing characters.

  3. If you need to change the password, click the Change password button and fill in the fields described below in the opened window. When finished, click OK.
    • Current password (required)—enter the current password of your user account.
    • New password (required)—enter a new password to your account. Password requirements:
      • 8 to 128 characters long.
      • At least one lowercase character.
      • At least one uppercase character.
      • At least one numeral.
      • At least one of the following special characters: !, @, #, %, ^, &, *.
    • Confirm password (required)—enter the password again for confirmation.
  4. If necessary, use the Generate token button to generate an API token. Clicking this button displays the token creation window.
  5. If necessary, configure the operations that are available via the REST API by using the API access rights button.
  6. Click Save.

Your user account is changed.

Page top

[Topic 218007]

Logging in to the program web interface

To log in to the program web interface:

  1. Enter the following address in your browser:

    https://<IP address or FQDN of KUMA Core server>:7220

    The web interface authorization page will open and prompt you to enter your login and password.

  2. Enter the login of your account in the Login field.
  3. Enter the password for the specified account in the Password field.
  4. Click the Login button.

The main window of the program web interface opens.

In multitenancy mode, a user who is logging in to the program web interface for the first time will see the data only for those tenants that were selected for the user when their user account was created.

To log out of the program web interface,

open the KUMA web interface, click your user account name in the bottom-left corner of the window, and click the Logout button in the opened menu.

Page top

[Topic 218035]

Viewing KUMA metrics

Comprehensive information about the performance of the KUMA Core, storage, collectors, and correlators is available in the Metrics section of the KUMA web interface. Selecting this section opens the Grafana portal, which is deployed as part of the KUMA Core installation and is updated automatically.

The default Grafana user name and password are admin and admin.

Available metrics

Collector indicators:

  • IO—metrics related to the service input and output.
    • Processing EPS—the number of processed events per second.
    • Processing Latency—the time required to process a single event (the median is displayed).
    • Output EPS—the number of events sent to the destination per second.
    • Output Latency—the time required to send a batch of events to the destination and receive a response from it (the median is displayed).
    • Output Errors—the number of errors when sending event batches to the destination per second. Network errors and errors writing the disk buffer are displayed separately.
    • Output Event Loss—the number of lost events per second. Events can be lost due to network errors or errors writing the disk buffer. Events are also lost if the destination responded with an error code (for example, if the request was invalid).
  • Normalization—metrics related to the normalizers.
    • Raw & Normalized event size—the size of the raw event and size of the normalized event (the median is displayed).
    • Errors—the number of normalization errors per second.
  • Filtration—metrics related to the filters.
    • EPS—the number of events rejected by the Collector per second. The Collector only rejects events if the user has added a Filter resource into the Collector service configuration.
  • Aggregation—metrics related to the aggregation rules.
    • EPS—the number of events received and created by the aggregation rule per second. This metric helps determine the effectiveness of aggregation rules.
    • Buckets—the number of buckets in the aggregation rule.
  • Enrichment—metrics related to the enrichment rules.
    • Cache RPS—the number of requests to the local cache per second.
    • Source RPS—the number of requests to the enrichment source (for example, the Dictionary resource).
    • Source Latency—the time required to send a request to the enrichment source and receive a response from it (the median is displayed).
    • Queue—the enrichment requests queue size. This metric helps to find bottleneck enrichment rules.
    • Errors—the number of enrichment source request errors per second.

Correlator metrics

  • IO—metrics related to the service input and output.
    • Processing EPS—the number of processed events per second.
    • Processing Latency—the time required to process a single event (the median is displayed).
    • Output EPS—the number of events sent to the destination per second.
    • Output Latency—the time required to send a batch of events to the destination and receive a response from it (the median is displayed).
    • Output Errors—the number of errors when sending event batches to the destination per second. Network errors and errors writing the disk buffer are displayed separately.
    • Output Event Loss—the number of lost events per second. Events can be lost due to network errors or errors writing the disk buffer. Events are also lost if the destination responded with an error code (for example, if the request was invalid).
  • Correlation—metrics related to the correlation rules.
    • EPS—the number of correlation events created per second.
    • Buckets—the number of buckets in the correlation rule (only for the standard kind of correlation rules).
  • Active Lists—metrics related to the active lists.
    • RPS—the number of requests (and their type) to the Active list per second.
    • Records—the number of entries in the Active list.
    • WAL Size—the size of the Write-Ahead-Log. This metric helps determine the size of the Active list.

Storage indicators

  • IO—metrics related to the service input and output.
    • RPS—the number of requests to the Storage service per second.
    • Latency—the time of proxying a single request to the ClickHouse node (the median is displayed).

Core service metrics

  • IO—metrics related to the service input and output.
    • RPS—the number of requests to the Core service per second.
    • Latency—the time of processing a single request (the median is displayed).
    • Errors—the number of request errors per second.
  • Notification Feed—metrics related to user activity.
    • Subscriptions—the number of clients connected to the Core via SSE to receive server messages in real time. This number usually correlates with the number of clients using the KUMA web interface.
    • Errors—the number of message sending errors per second.
  • Schedulers—metrics related to Core tasks.
    • Active—the number of repeating active system tasks. The tasks created by the user are ignored.
    • Latency—the time of processing a single request (the median is displayed).
    • Position—the position (timestamp) of the alert creation task. The next ClickHouse scan for correlation events will start from this position.
    • Errors—the number of task errors per second.

General metrics common for all services

  • Process—general process metrics.
    • CPU—CPU usage.
    • Memory—RAM usage (RSS).
    • DISK IOPS—the number of disk read/write operations per second.
    • DISK BPS—the number of bytes read/written to the disk per second.
    • Network BPS—the number of bytes received/sent per second.
    • Network Packet Loss—the number of network packets lost per second.
    • GC Latency—the duration of a Go garbage collector cycle (the median is displayed).
    • Goroutines—the number of active goroutines. This number differs from the thread count.
  • OS—metrics related to the operating system.
    • Load—the average load.
    • CPU—CPU usage.
    • Memory—RAM usage (RSS).
    • Disk—disk space usage.

Metrics storage period

KUMA operation data is saved for 3 months by default. This storage period can be changed.

To change the storage period for KUMA metrics:

  1. Log in to the OS of the server where the KUMA Core is installed.
  2. In the file /etc/systemd/system/multi-user.target.wants/kuma-victoria-metrics.service, in the ExecStart parameter, edit the --retentionPeriod=<metrics storage period, in months> flag by inserting the necessary period. For example, --retentionPeriod=4 means that the metrics will be stored for 4 months.
  3. Restart KUMA by running the following commands in sequence:
    1. systemctl daemon-reload
    2. systemctl restart kuma-victoria-metrics

The storage period for metrics has been changed.
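The substitution in step 2 can be performed with sed. The sketch below applies the expression to a sample line (the sample ExecStart path is illustrative); on a real server you would run the same sed command with sudo and -i against the unit file named above, then reload and restart as in step 3:

```shell
# Sketch: rewrite the --retentionPeriod flag (shown here on a sample line).
line='ExecStart=/opt/kaspersky/kuma/victoria-metrics --retentionPeriod=3'
echo "$line" | sed 's/--retentionPeriod=[0-9]*/--retentionPeriod=4/'
# prints: ExecStart=/opt/kaspersky/kuma/victoria-metrics --retentionPeriod=4
```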

Page top

[Topic 234574]

Managing KUMA tasks

When working in the program web interface, you can use tasks to perform various operations. For example, you can import assets or export KUMA event information to a TSV file.

In this Help topic

Viewing the tasks table

Configuring the display of the tasks table

Viewing task run results

Restarting a task

Page top

[Topic 218036]

Viewing the tasks table

The tasks table contains a list of created tasks and is located in the Task manager section of the program web interface window. You can view only the tasks that you (the current user) created.

A user with the General Administrator role can view the tasks of all users.

The tasks table contains the following information:

  • State—the state of the task. One of the following statuses can be assigned to a task:
    • Green dot blinking—the task is active.
    • Completed—the task is complete.
    • Cancel—the task was canceled by the user.
    • Error—the task was not completed because of an error. The error message is displayed if you hover the mouse over the exclamation mark icon.
  • Task—the task type. The program provides the following types of tasks:
    • Events export—export KUMA events.
    • Threat Lookup—request data from the Kaspersky Threat Intelligence Portal.
    • Retroscan—task for replaying events.
    • KSC assets import—imports asset data from Kaspersky Security Center servers.
    • Accounts import—imports user data from Active Directory.
    • KICS for Networks assets import—imports asset data from KICS for Networks.
  • Created by—the user who created the task. If the task was created automatically, the column will show Scheduled task.

    This column is displayed only for users with the General Administrator and Administrator roles.

  • Created—task creation time.
  • Updated—time when the task was last updated.
  • Tenant—the name of the tenant in which the task was started.

Displayed date format:

  • English localization: YYYY-MM-DD.
  • Russian localization: DD.MM.YYYY.
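For reference, the two display formats correspond to the following date format strings (GNU date is used here purely as an illustration; it is not part of KUMA):

```shell
# Render one date in the two formats used by the web interface.
date -u -d '2024-05-01' +%F        # English localization: 2024-05-01
date -u -d '2024-05-01' +%d.%m.%Y  # Russian localization: 01.05.2024
```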
Page top

[Topic 234604]

Configuring the display of the tasks table

You can customize the display of columns and the order in which they appear in the tasks table.

To customize the display and order of columns in the tasks table:

  1. In the KUMA web interface, select the Task manager section.

    The tasks table is displayed.

  2. In the table header, click the gear button.
  3. In the opened window, do the following:
    • If you want to enable display of a column in the table, select the check box next to the name of the parameter that you want to display in the table.
    • If you do not want the parameter to be displayed in the table, clear the check box.

    At least one check box must be selected.

  4. If you want to reset the settings, click the Default link.
  5. If you want to change the order in which the columns are displayed in the table, move the mouse cursor over the name of the column, hold down the left mouse button and drag the column to the necessary position.

The display of columns in the tasks table will be configured.

Page top

[Topic 234598]

Viewing task run results

To view the results of a task:

  1. In the KUMA web interface, select the Task manager section.

    The tasks table is displayed.

  2. Click the link containing the task type in the Task column.

    A list of the operations available for this task type will be displayed.

  3. Select Show results.

The task results window opens.

Page top

[Topic 234601]

Restarting a task

To restart a task:

  1. In the KUMA web interface, select the Task manager section.

    The tasks table is displayed.

  2. Click the link containing the task type in the Task column.

    A list of the operations available for this task type will be displayed.

  3. Select Restart.

The task will be restarted.

Page top

[Topic 217936]

Connecting to an SMTP server

KUMA can be configured to send email notifications using an SMTP server. Users will receive notifications if the Receive email notifications check box is selected in their profile settings.

Only one SMTP server can be added to process KUMA notifications. An SMTP server connection is managed in the KUMA web interface under Settings → General → SMTP server settings.

To configure SMTP server connection:

  1. Open the KUMA web interface and select Settings → General.
  2. In the SMTP server settings block, change the relevant settings:
    • Disabled—select this check box if you want to disable connection to the SMTP server.
    • Host (required)—SMTP host in one of the following formats: hostname, IPv4, IPv6.
    • Port (required)—SMTP port. The value must be an integer from 1 to 65535.
    • From (required)—email address of the message sender. For example, kuma@company.com.
    • Alias for KUMA Core server—name of the KUMA Core server that is used in your network. Must be different from the FQDN.
    • If necessary, use the Secret drop-down list to select a secret resource of the credentials type that contains the account credentials for connecting to the SMTP server.

      Add secret

      1. If you previously created a secret, select it from the Secret drop-down list.

        If no secret was previously added, the drop-down list shows No data.

      2. If you want to add a new secret, click the + button on the right of the Secret list.

        The Secret window opens.

      3. In the Name field, enter the name that will be used to display the secret in the list of available secrets.
      4. In the User and Password fields, enter the credentials of the account that will be used to connect to the SMTP server.
      5. If necessary, add any other information about the secret in the Description field.
      6. Click the Save button.

      The secret will be added and displayed in the Secret list.

    • Select the necessary frequency of notifications in the Monitoring notifications interval drop-down list.
    • Turn on the Disable monitoring notifications toggle button if you do not want to receive notifications about the state of event sources. The toggle switch is turned off by default.
  3. Click Save.

The SMTP server connection is now configured, and users can receive email messages from KUMA.

Page top

[Topic 217944]

Opening Online Help for KUMA

Online Help is available on the Kaspersky web resource.

Online Help provides information regarding the following tasks:

  • Preparing to install and installing KUMA.
  • Configuring and using KUMA.

To open Online Help for KUMA,

log in to the KUMA web interface, click the name of your user account in the lower-left corner of the window, then click the Help button in the opened menu.

Page top

[Topic 217686]

KUMA logs

Some KUMA services and resources can log information related to their functioning. This feature is enabled by using the Debug drop-down list or check box in the settings of the service or the resource.

The logs are stored on the machine where the required service or the service using the required resource is installed:

  • Logs residing on Linux machines can be viewed using the journalctl command in the Linux console.

    Examples:

    • journalctl -u kuma-collector-* -u kuma-correlator-* -f will return the latest logs from the collectors and the correlators installed on the server where the command was executed.
    • journalctl -u kuma-collector-<service ID> will return the latest logs of the specific collector installed on the server where the command was executed.
  • Logs on Windows machines can be viewed in the file located at the path %PROGRAMDATA%\Kaspersky Lab\KUMA\<Agent ID>\agent.log. The activity of Agents on Windows machines is always logged if they are assigned the Log on as a service permission. More detailed data is logged when the Debug check box is selected.

Services where logging is available:

  • Correlators
  • Collectors
  • Agents

Resources where logging is available:

  • Connectors
  • Enrichment rules
  • Destinations

Logs on Windows machines are rotated monthly: when a month expires, the log is archived and events begin to be recorded to a new log. A maximum of three archived logs are stored on the server at a time; when a new log archive appears and there are more than three archives, the oldest one is deleted.

Page top

[Topic 222208]

KUMA backup

KUMA allows you to back up the KUMA Core database and certificates. Backups may be created using the executable file /opt/kaspersky/kuma/kuma.

Data can only be restored from a backup into a KUMA installation of the same version as the one in which the backup was created.

To perform a backup:

  1. Log in to the OS of the server where the KUMA Core is installed.
  2. Execute the following command:

    sudo /opt/kaspersky/kuma/kuma tools backup --dst <path to folder for backup copy> --certificates

    The flag --certificates is optional and is used to back up certificates.

The backup copy has been created.
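For recurring backups it helps to write each copy into a dated folder. The sketch below composes such a folder name; the /var/backups/kuma location is an assumption for illustration, not a KUMA default:

```shell
# Sketch: compose a dated destination folder for the backup command above.
# On the Core server you would then run, as root:
#   mkdir -p "$DST"
#   /opt/kaspersky/kuma/kuma tools backup --dst "$DST" --certificates
DST="/var/backups/kuma/$(date +%F)"
echo "$DST"   # e.g. /var/backups/kuma/2024-05-01
```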

To restore data from a backup:

  1. Log in to the OS of the server where the KUMA Core is installed.
  2. On the KUMA Core server, run the following command:

    sudo systemctl stop kuma-core

  3. Execute the following command:

    sudo /opt/kaspersky/kuma/kuma tools restore --src <path to folder containing backup copy> --certificates

    The --certificates flag is optional and is used to restore certificates.

  4. Start KUMA by running the following command:

    sudo systemctl start kuma-core

  5. Rebuild the services using the recovered service resource sets.

Data is restored from the backup.

What to do if KUMA malfunctions after restoring data from a backup copy

If the KUMA Core fails to start after data recovery, the recovery must be performed again but this time the kuma database in MongoDB must be reset.

To restore KUMA data and reset the MongoDB database:

  1. Log in to the OS of the server where the KUMA Core is installed.
  2. On the KUMA Core server, run the following command:

    sudo systemctl stop kuma-core

  3. Log in to MongoDB by running the following commands:
    1. cd /opt/kaspersky/kuma/mongodb/bin/
    2. ./mongo
  4. Reset the MongoDB database by running the following commands:
    1. use kuma
    2. db.dropDatabase()
  5. Log out of the MongoDB database by pressing Ctrl+C.
  6. Restore data from a backup copy by running the following command:

    sudo /opt/kaspersky/kuma/kuma tools restore --src <path to folder containing backup copy> --certificates

    The --certificates flag is optional and is used to restore certificates.

  7. Start KUMA by running the following command:

    sudo systemctl start kuma-core

  8. Rebuild the services using the recovered service resource sets.

Data is restored from the backup.

Backing up collectors is not required unless the collectors have an SQL connection. When restoring such collectors, you should revert the ID to its original initial value.

Page top

[Topic 233516]

KUMA notifications

Standard notifications

KUMA can be configured to send email notifications using an SMTP server. To do so, configure a connection to an SMTP server and select the Receive email notifications check box for users who should receive notifications.

KUMA automatically notifies users about the following events:

  • A report was created (the users listed in the report template receive a notification).
  • An alert was created (all users receive a notification).
  • An alert was assigned to a user (the user to whom the alert was assigned receives a notification).
  • A task was performed (the users who created the task receive a notification).

Custom notifications

Instead of the standard KUMA notifications about alert generation, you can send notifications based on custom email templates.

When an alert is created based on the selected correlation rules, notifications created based on custom email templates will be sent to the specified email addresses. Standard KUMA notifications about the same event will not be sent to the specified addresses.

Page top

[Topic 217777]

Contacting Technical Support

If you are unable to find a solution to your issue in the program documentation, please contact Kaspersky Technical Support.

Kaspersky provides technical support for this program throughout its lifecycle (please refer to the product support lifecycle page).

Page top

[Topic 217973]

REST API

You can access KUMA from third-party solutions using the API. The KUMA REST API operates over HTTP and consists of a set of request/response methods.

REST API requests must be sent to the following address:

https://<KUMA Core FQDN>/api/<API version>/<request>

Example:

https://kuma.example.com:7223/api/v1

By default, port 7223 is used for API requests. You can change the port.

To change the port used for REST API requests:

  1. Log in to the OS of the server where the KUMA Core is installed.
  2. In the file /etc/systemd/system/multi-user.target.wants/kuma-core.service, edit the following line by specifying the required port:

    ExecStart=/opt/kaspersky/kuma/kuma core --external :7220 --internal :7210 --mongo mongodb://localhost:27017 --rest <required port number for REST API requests>

  3. Restart KUMA by running the following commands in sequence:
    1. systemctl daemon-reload
    2. systemctl restart kuma-core

The new port will be used for REST API requests.

Make sure that the port is available and is not closed by the firewall.

Authentication header: Authorization: Bearer <token>

Default data format: JSON

Date and time format: RFC 3339

Intensity of requests: unlimited
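Putting the pieces together, a request URL is built from the Core FQDN, the API port, the API version, and the operation. The sketch below composes one; the host name is a placeholder, and the activeLists operation is the one described later in this section:

```shell
# Sketch: compose a REST API request URL (placeholder host).
# Send it with: curl -sS -H "Authorization: Bearer <token>" "$REQUEST"
KUMA_CORE="kuma.example.com"   # your KUMA Core FQDN
REQUEST="https://${KUMA_CORE}:7223/api/v1/activeLists"
echo "$REQUEST"   # prints: https://kuma.example.com:7223/api/v1/activeLists
```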

In this Help topic

Creating a token

Configuring permissions to access the API

Authorizing API requests

Standard error

Operations

Page top

[Topic 235379]

Creating a token

To generate a token for a user:

  1. In the KUMA web interface, open Settings → Users.

    The Users table will be displayed in the right part of the Settings section.

  2. Select the relevant user and click the Generate token button in the details area that opens on the right.

    The New token window opens.

  3. If necessary, set the token expiration date:
    1. Clear the No expiration date check box.
    2. In the Expiration date field, use the calendar to specify the date and time when the created token will expire.
  4. Click the Generate token button.

    When you click this button, the user details area displays a field containing the automatically created token. When the window is closed, the token is no longer displayed. If you did not copy the token before closing the window, you will have to generate a new token.

  5. Click Save.

The token is generated and can be used for API requests. These same steps can be taken to generate a token in your account profile.

Page top

[Topic 235388]

Configuring permissions to access the API

In KUMA, you can configure the specific operations that can be performed on behalf of each user. Permissions can be configured only for users created in KUMA.

To configure available operations for a user:

  1. In the KUMA web interface, open Settings → Users.

    In the right part of the Settings section, the Users table is displayed.

  2. Select the relevant user and click the API access rights button in the details area that opens on the right.

    This opens a window containing a list of available operations. By default, all API requests are available to a user.

  3. Select or clear the check box next to the relevant operation.
  4. Click Save.

Available operations for the user are configured.

The available operations can be configured in the same way in your account profile.

Page top

[Topic 217974]

Authorizing API requests

Each REST API request must include token-based authorization. The user whose token is used to make the API request must have the permissions to perform this type of request.

Each request must be accompanied by the following header:

Authorization: Bearer <token>

Possible errors:

HTTP code

Description

message field value

details field value

400

Invalid header

invalid authorization header

Example: <example>

403

The token does not exist or the owner user is disabled

access denied

 

Page top

[Topic 222250]

Standard error

Errors returned by KUMA have the following format:

type Error struct {
    Message    string      `json:"message"`
    Details    interface{} `json:"details"`
}

Page top

[Topic 222252]

Viewing a list of active lists on the correlator

GET /api/v1/activeLists

The target correlator must be running.

Access: administrator and analyst.

Query parameters

Name

Data type

Mandatory

Description

Value example

correlatorID

string

Yes

Correlator service ID

00000000-0000-0000-0000-000000000000

Response

HTTP code: 200

Format: JSON

type Response []ActiveListInfo

type ActiveListInfo struct {
    ID      string `json:"id"`
    Name    string `json:"name"`
    Dir     string `json:"dir"`
    Records uint64 `json:"records"`
    WALSize uint64 `json:"walSize"`
}
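A successful response can be processed as ordinary JSON. The sketch below parses a sample body in the documented ActiveListInfo format; all values are illustrative:

```python
import json

# A sample response body in the documented format; values are illustrative.
body = """[
  {"id": "00000000-0000-0000-0000-000000000000",
   "name": "Attackers",
   "dir": "data",
   "records": 42,
   "walSize": 1024}
]"""

active_lists = json.loads(body)
total_records = sum(item["records"] for item in active_lists)
```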

Possible errors

HTTP code

Description

message field value

details field value

400

Correlator service ID is not specified

query parameter required

correlatorID

403

The user does not have the required role in the correlator tenant

access denied

 

404

The service with the specified identifier (correlatorID) was not found

service not found

 

406

The service with the specified ID (correlatorID) is not a correlator

service is not correlator

 

406

The correlator has not yet performed its first start

service not paired

 

406

The correlator tenant is disabled

tenant disabled

 

50x

Failed to access the correlator API

correlator API request failed

variable

500

Failed to decode the response body received from the correlator

correlator response decode failed

variable

500

Any other internal errors

variable

variable

Page top

[Topic 222253]

Import entries to an active list

POST /api/v1/activeLists/import

The target correlator must be running.

Access: administrator and analyst.

Query parameters

Name

Data type

Mandatory

Description

Value example

correlatorID

string

Yes

Correlator service ID

00000000-0000-0000-0000-000000000000

activeListID

string

If activeListName is not specified

Active list ID

00000000-0000-0000-0000-000000000000

activeListName

string

If activeListID is not specified

Active list name

Attackers

format

string

Yes

Format of imported entries

csv, tsv, internal

keyField

string

For the CSV and TSV formats only

The name of the field in the header of the CSV or TSV file that will be used as the key field of the active list record. The values of this field must be unique

ip

clear

bool

No

Clear the active list before importing. If the parameter is present in the URL query, then its value is assumed to be true. The values specified by the user are ignored. Example: /api/v1/activeLists/import?clear

 

Request body

Format

Contents

csv

The first line is the header, which lists the comma-separated fields. The rest of the lines are the values corresponding to the comma-separated fields in the header. The number of fields in each line must be the same.

tsv

The first line is the header, which lists the TAB-separated fields. The remaining lines are the values corresponding to the TAB-separated fields in the header. The number of fields in each line must be the same.

internal

Each line contains one individual JSON object. Data in the internal format can be received by exporting the contents of the active list from the correlator in the KUMA web console.
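For example, a CSV request body and import URL can be prepared as follows. This is a minimal Python sketch; the correlator ID, active list name, and rows are illustrative:

```python
from urllib.parse import urlencode

# The first CSV line is the header; the "ip" column is passed as keyField,
# so its values must be unique within the file.
rows = [("ip", "comment"), ("192.168.1.1", "scanner"), ("10.0.0.5", "bruteforce")]
body = "\n".join(",".join(row) for row in rows)

params = urlencode({
    "correlatorID": "00000000-0000-0000-0000-000000000000",
    "activeListName": "Attackers",
    "format": "csv",
    "keyField": "ip",
})
url = "https://kuma.example.com:7223/api/v1/activeLists/import?" + params
```

The body is then sent as the content of the POST request.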

Response

HTTP code: 204

Possible errors

HTTP code

Description

message field value

details field value

400

Correlator service ID is not specified

query parameter required

correlatorID

400

Neither the activeListID parameter nor the activeListName parameter is specified

one of query parameters required

activeListID, activeListName

400

The format parameter is not specified

query parameter required

format

400

The format parameter is invalid

invalid query parameter value

format

400

The keyField parameter is not specified

query parameter required

keyField

400

The request body has zero length

request body required

 

400

The CSV or TSV file does not contain the field specified in the keyField parameter

correlator API request failed

line 1: header does not contain column <name>

400

Request body parsing error

correlator API request failed

line <number>: <message>

403

The user does not have the required role in the correlator tenant

access denied

 

404

The service with the specified identifier (correlatorID) was not found

service not found

 

404

No active list was found

active list not found

 

406

The service with the specified ID (correlatorID) is not a correlator

service is not correlator

 

406

The correlator has not yet performed its first start

service not paired

 

406

The correlator tenant is disabled

tenant disabled

 

406

A name search was conducted for the active list (activeListName), and more than one active list was found

more than one matching active lists found

 

50x

Failed to access the correlator API

correlator API request failed

variable

500

Failed to decode the response body received from the correlator

correlator response decode failed

variable

500

Any other internal errors

variable

variable

Page top

[Topic 222254]

Searching alerts

GET /api/v1/alerts

Access: administrator, analyst, and operator.

Query parameters

Name

Data type

Mandatory

Description

Value example

page

number

No

Page number. Starts with 1. The page size is 250 entries. If the parameter is not specified, the default value is 1.

1

id

string

No

Alert ID. If the parameter is specified several times, then a list is generated and the logical OR operator is applied.

00000000-0000-0000-0000-000000000000

TenantID

string

No

Alert tenant ID. If the parameter is specified several times, then a list is generated and the logical OR operator is applied. If the user does not have the required role in the specified tenant, then this tenant is ignored.

00000000-0000-0000-0000-000000000000

name

string

No

Alert name. Case-insensitive regular expression (PCRE).

alert
^My alert$

timestampField

string

No

The name of the alert field that is used to perform sorting (DESC) and search by period (from – to). lastSeen by default.

lastSeen, firstSeen

from

string

No

Lower bound of the period in RFC3339 format. <timestampField> >= <from>

2021-09-06T00:00:00Z (UTC)

2021-09-06T00:00:00.000Z (UTC, including milliseconds)

2021-09-06T00:00:00+03:00 (MSK)

to

string

No

Upper bound of the period in RFC3339 format. <timestampField> <= <to>

2021-09-06T00:00:00Z (UTC)

2021-09-06T00:00:00.000Z (UTC, including milliseconds)

2021-09-06T00:00:00+03:00 (MSK)

status

string

No

Alert status. If the parameter is specified several times, then a list is generated and the logical OR operator is applied.

new, assigned, escalated, closed

withEvents

bool

No

Include normalized KUMA events associated with found alerts in the response. If the parameter is present in the URL query, then its value is assumed to be true. The values specified by the user are ignored. Example: /api/v1/alerts?withEvents

 

withAffected

bool

No

Include information about the assets and accounts associated with the found alerts in the response. If the parameter is present in the URL query, then its value is assumed to be true. The values specified by the user are ignored. Example: /api/v1/alerts?withAffected
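For example, a query string that combines repeated parameters (joined with OR), an RFC 3339 period bound, and a presence-only flag can be built as follows. This is a Python sketch with illustrative values:

```python
from urllib.parse import urlencode

# Repeated "status" values are ORed together; timestamps use RFC 3339;
# the presence-only withAffected flag is appended without a value.
params = urlencode([
    ("status", "new"),
    ("status", "assigned"),
    ("from", "2021-09-06T00:00:00Z"),
    ("timestampField", "firstSeen"),
])
url = "https://kuma.example.com:7223/api/v1/alerts?" + params + "&withAffected"
```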

 

Response

HTTP code: 200

Format: JSON

type Response []Alert

type Alert struct {
    ID                string            `json:"id"`
    TenantID          string            `json:"tenantID"`
    TenantName        string            `json:"tenantName"`
    Name              string            `json:"name"`
    CorrelationRuleID string            `json:"correlationRuleID"`
    Priority          string            `json:"priority"`
    Status            string            `json:"status"`
    FirstSeen         string            `json:"firstSeen"`
    LastSeen          string            `json:"lastSeen"`
    Assignee          string            `json:"assignee"`
    ClosingReason     string            `json:"closingReason"`
    Overflow          bool              `json:"overflow"`
    Events            []NormalizedEvent `json:"events"`
    AffectedAssets    []AffectedAsset   `json:"affectedAssets"`
    AffectedAccounts  []AffectedAccount `json:"affectedAccounts"`
}

type NormalizedEvent map[string]interface{}

type AffectedAsset struct {
    ID               string          `json:"id"`
    TenantID         string          `json:"tenantID"`
    TenantName       string          `json:"tenantName"`
    Name             string          `json:"name"`
    FQDN             string          `json:"fqdn"`
    IPAddresses      []string        `json:"ipAddresses"`
    MACAddresses     []string        `json:"macAddresses"`
    Owner            string          `json:"owner"`
    OS               *OS             `json:"os"`
    Software         []Software      `json:"software"`
    Vulnerabilities  []Vulnerability `json:"vulnerabilities"`
    KSC              *KSCFields      `json:"ksc"`
    Created          string          `json:"created"`
    Updated          string          `json:"updated"`
}

type OS struct {
    Name    string `json:"name"`
    Version uint64 `json:"version"`
}

type Software struct {
    Name    string `json:"name"`
    Version string `json:"version"`
    Vendor  string `json:"vendor"`
}

type Vulnerability struct {
    KasperskyID           string   `json:"kasperskyID"`
    ProductName           string   `json:"productName"`
    DescriptionURL        string   `json:"descriptionURL"`
    RecommendedMajorPatch string   `json:"recommendedMajorPatch"`
    RecommendedMinorPatch string   `json:"recommendedMinorPatch"`
    SeverityStr           string   `json:"severityStr"`
    Severity              uint64   `json:"severity"`
    CVE                   []string `json:"cve"`
    ExploitExists         bool     `json:"exploitExists"`
    MalwareExists         bool     `json:"malwareExists"`
}

type AffectedAccount struct {
    Name             string `json:"displayName"`
    CN               string `json:"cn"`
    DN               string `json:"dn"`
    UPN              string `json:"upn"`
    SAMAccountName   string `json:"sAMAccountName"`
    Company          string `json:"company"`
    Department       string `json:"department"`
    Created          string `json:"created"`
    Updated          string `json:"updated"`
}

Possible errors

HTTP code

Description

message field value

details field value

400

Invalid value of the "page" parameter

invalid query parameter value

page

400

Invalid value of the "status" parameter

invalid status

<status>

400

Invalid value of the "timestampField" parameter

invalid timestamp field

 

400

Invalid value of the "from" parameter

cannot parse from

variable

400

Invalid value of the "to" parameter

cannot parse to

variable

400

The value of the "from" parameter is greater than the value of the "to" parameter

from cannot be greater than to

 

500

Any other internal errors

variable

variable

Page top

[Topic 222255]

Closing alerts

POST /api/v1/alerts/close

The target correlator must be running.

Access: administrator, analyst, and operator.

Request body

Format: JSON

Name

Data type

Mandatory

Description

Value example

id

string

Yes

Alert ID

00000000-0000-0000-0000-000000000000

reason

string

Yes

Reason for closing the alert

responded, incorrect data, incorrect correlation rule
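For example, the request body can be serialized as follows. This is a Python sketch; the alert ID is illustrative:

```python
import json

# Request body for POST /api/v1/alerts/close; both fields are mandatory
# and "reason" must be one of the documented values.
body = json.dumps({
    "id": "00000000-0000-0000-0000-000000000000",
    "reason": "responded",
})
```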

Response

HTTP code: 204

Possible errors

HTTP code

Description

message field value

details field value

400

Alert ID is not specified

id required

 

400

The reason for closing the alert is not specified

reason required

 

400

Invalid value of the "reason" parameter

invalid reason

 

403

The user does not have the required role in the alert tenant

access denied

 

404

Alert not found

alert not found

 

406

Alert tenant disabled

tenant disabled

 

406

Alert already closed

alert already closed

 

500

Any other internal errors

variable

variable

Page top

[Topic 222256]

Searching assets

GET /api/v1/assets

Access: administrator, analyst, and operator.

Query parameters

Name

Data type

Mandatory

Description

Value example

page

number

No

Page number. Starts with 1. The page size is 250 entries. If the parameter is not specified, the default value is 1.

1

id

string

No

Asset ID. If the parameter is specified several times, then a list is generated and the logical OR operator is applied.

00000000-0000-0000-0000-000000000000

TenantID

string

No

Asset tenant ID. If the parameter is specified several times, then a list is generated and the logical OR operator is applied. If the user does not have the required role in the specified tenant, then this tenant is ignored.

00000000-0000-0000-0000-000000000000

name

string

No

Asset name. Case-insensitive regular expression (PCRE).

asset

^My asset$

fqdn

string

No

Asset FQDN. Case-insensitive regular expression (PCRE).

^com$

ip

string

No

Asset IP address. Case-insensitive regular expression (PCRE).

10.10

^192.168.1.2$

mac

string

No

Asset MAC address. Case-insensitive regular expression (PCRE).

^00:0a:95:9d:68:16$

Response

HTTP code: 200

Format: JSON

type Response []Asset

type Asset struct {
    ID              string          `json:"id"`
    TenantID        string          `json:"tenantID"`
    TenantName      string          `json:"tenantName"`
    Name            string          `json:"name"`
    FQDN            string          `json:"fqdn"`
    IPAddresses     []string        `json:"ipAddresses"`
    MACAddresses    []string        `json:"macAddresses"`
    Owner           string          `json:"owner"`
    OS              *OS             `json:"os"`
    Software        []Software      `json:"software"`
    Vulnerabilities []Vulnerability `json:"vulnerabilities"`
    KSC             *KSCFields      `json:"ksc"`
    Created         string          `json:"created"`
    Updated         string          `json:"updated"`
}

type KSCFields struct {
    NAgentID          string `json:"nAgentID"`
    KSCInstanceID     string `json:"kscInstanceID"`
    KSCMasterHostname string `json:"kscMasterHostname"`
    LastVisible       string `json:"lastVisible"`
}

type OS struct {
    Name    string `json:"name"`
    Version uint64 `json:"version"`
}

type Software struct {
    Name    string `json:"name"`
    Version string `json:"version"`
    Vendor  string `json:"vendor"`
}

type Vulnerability struct {
    KasperskyID           string   `json:"kasperskyID"`
    ProductName           string   `json:"productName"`
    DescriptionUrl        string   `json:"descriptionUrl"`
    RecommendedMajorPatch string   `json:"recommendedMajorPatch"`
    RecommendedMinorPatch string   `json:"recommendedMinorPatch"`
    SeverityStr           string   `json:"severityStr"`
    Severity              uint64   `json:"severity"`
    CVE                   []string `json:"cve"`
    ExploitExists         bool     `json:"exploitExists"`
    MalwareExists         bool     `json:"malwareExists"`
}

Possible errors

HTTP code

Description

message field value

details field value

400

Invalid value of the "page" parameter

invalid query parameter value

page

500

Any other internal errors

variable

variable

Page top

[Topic 222258]

Importing assets

Details on identifying, creating, and updating assets

Assets are imported according to the asset data merging rules.

POST /api/v1/assets/import

Bulk creation or update of assets.

Access: administrator and analyst.

Request body

Format: JSON

type Request struct {
    TenantID string  `json:"tenantID"`
    Assets   []Asset `json:"assets"`
}

type Asset struct {
    Name            string          `json:"name"`
    FQDN            string          `json:"fqdn"`
    IPAddresses     []string        `json:"ipAddresses"`
    MACAddresses    []string        `json:"macAddresses"`
    Owner           string          `json:"owner"`
    OS              *OS             `json:"os"`
    Software        []Software      `json:"software"`
    Vulnerabilities []Vulnerability `json:"vulnerabilities"`
}

type OS struct {
    Name    string `json:"name"`
    Version uint64 `json:"version"`
}

type Software struct {
    Name    string `json:"name"`
    Version string `json:"version"`
    Vendor  string `json:"vendor"`
}

type Vulnerability struct {
    KasperskyID           string   `json:"kasperskyID"`
    ProductName           string   `json:"productName"`
    DescriptionUrl        string   `json:"descriptionUrl"`
    RecommendedMajorPatch string   `json:"recommendedMajorPatch"`
    RecommendedMinorPatch string   `json:"recommendedMinorPatch"`
    SeverityStr           string   `json:"severityStr"`
    Severity              uint64   `json:"severity"`
    CVE                   []string `json:"cve"`
    ExploitExists         bool     `json:"exploitExists"`
    MalwareExists         bool     `json:"malwareExists"`
}

Request mandatory fields

Name

Data type

Mandatory

Description

Value example

TenantID

string

Yes

Tenant ID

00000000-0000-0000-0000-000000000000

assets

[]Asset

Yes

Array of imported assets

 

Asset mandatory fields

Name

Data type

Mandatory

Description

Value example

fqdn

string

If the ipAddresses array is not specified

Asset FQDN. It is recommended that you specify the FQDN and not just the host name. Priority indicator for asset identification.

my-asset-1.example.com

my-asset-1

ipAddresses

[]string

If FQDN is not specified

Array of IP addresses for the asset. IPv4 or IPv6. The first element of the array is used as a secondary indicator for asset identification.

["192.168.1.1", "192.168.2.2"]

["2001:0db8:85a3:0000:0000:8a2e:0370:7334"]

Response

HTTP code: 200

Format: JSON

type Response struct {
    InsertedIDs  map[int64]interface{} `json:"insertedIDs"`
    UpdatedCount uint64                `json:"updatedCount"`
    Errors       []ImportError         `json:"errors"`
}

type ImportError struct {
    Index   uint64 `json:"index"`
    Message string `json:"message"`
}

Possible errors

HTTP code

Description

message field value

details field value

400

Tenant ID is not specified

tenantID required

 

400

Attempt to import assets into the shared tenant

import into shared tenant not allowed

 

400

Not a single asset was specified in the request body

at least one asset required

 

400

None of the mandatory fields is specified

one of fields required

asset[<index>]: fqdn, ipAddresses

400

Invalid FQDN

invalid value

asset[<index>].fqdn

400

Invalid IP address

invalid value

asset[<index>].ipAddresses[<index>]

400

IP address is repeated

duplicated value

asset[<index>].ipAddresses

400

Invalid MAC address

invalid value

asset[<index>].macAddresses[<index>]

400

MAC address is repeated

duplicated value

asset[<index>].macAddresses

403

The user does not have the required role in the specified tenant

access denied

 

404

The specified tenant was not found

tenant not found

 

406

The specified tenant was disabled

tenant disabled

 

500

Any other internal errors

variable

variable

Page top

[Topic 222296]

Deleting assets

POST /api/v1/assets/delete

Access: administrator and analyst.

Request body

Format: JSON

Name

Data type

Mandatory

Description

Value example

TenantID

string

Yes

Tenant ID

00000000-0000-0000-0000-000000000000

ids

[]string

If neither the ipAddresses array nor the FQDNs are specified

List of asset IDs

["00000000-0000-0000-0000-000000000000"]

fqdns

[]string

If neither the ipAddresses array nor the IDs are specified

Array of asset FQDNs

["my-asset-1.example.com", "my-asset-1"]

ipAddresses

[]string

If neither the IDs nor FQDNs are specified

Array of main IP addresses of the asset.

["192.168.1.1", "2001:0db8:85a3:0000:0000:8a2e:0370:7334"]

Response

HTTP code: 200

Format: JSON

type Response struct {
    DeletedCount uint64 `json:"deletedCount"`
}

Possible errors

HTTP code

Description

message field value

details field value

400

Tenant ID is not specified

tenantID required

 

400

Attempt to delete an asset from the shared tenant

delete from shared tenant not allowed

 

400

None of the mandatory fields is specified

one of fields required

ids, fqdns, ipAddresses

400

Invalid FQDN specified

invalid value

fqdns[<index>]

400

Invalid IP address specified

invalid value

ipAddresses[<index>]

403

The user does not have the required role in the specified tenant

access denied

 

404

The specified tenant was not found

tenant not found

 

406

The specified tenant was disabled

tenant disabled

 

500

Any other internal errors

variable

variable

Page top

[Topic 222297]

Searching events

POST /api/v1/events

Access: administrator, analyst, and operator.

Request body

Format: JSON

Request

Name

Data type

Mandatory

Description

Value example

period

Period

Yes

Search period

 

sql

string

Yes

SQL query

SELECT * FROM events WHERE Type = 3 ORDER BY Timestamp DESC LIMIT 1000

SELECT sum(BytesOut) as TotalBytesSent, SourceAddress FROM events WHERE DeviceVendor = 'netflow' GROUP BY SourceAddress LIMIT 1000

SELECT count(Timestamp) as TotalEvents FROM events LIMIT 1

ClusterID

string

No, if the cluster is the only one

Storage cluster ID. You can find it by requesting a list of services with kind = storage. The cluster ID will be in the resourceID field.

00000000-0000-0000-0000-000000000000

rawTimestamps

bool

No

Display timestamps in their original format (milliseconds since Epoch). False by default.

true or false

emptyFields

bool

No

Display empty fields for normalized events. False by default.

true or false

Period

Name

Data type

Mandatory

Description

Value example

from

string

Yes

Lower bound of the period in RFC3339 format. Timestamp >= <from>

2021-09-06T00:00:00Z (UTC)

2021-09-06T00:00:00.000Z (UTC, including milliseconds)

2021-09-06T00:00:00+03:00 (MSK)

to

string

Yes

Upper bound of the period in RFC3339 format.

Timestamp <= <to>

2021-09-06T00:00:00Z (UTC)

2021-09-06T00:00:00.000Z (UTC, including milliseconds)

2021-09-06T00:00:00+03:00 (MSK)
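For example, a request body combining a period and an SQL query can be serialized as follows. This is a Python sketch with illustrative values:

```python
import json

# Request body for POST /api/v1/events: both period bounds are mandatory,
# and the SQL query must include a LIMIT of at most 1000.
body = json.dumps({
    "period": {
        "from": "2021-09-06T00:00:00Z",
        "to": "2021-09-07T00:00:00Z",
    },
    "sql": "SELECT * FROM events WHERE Type = 3 ORDER BY Timestamp DESC LIMIT 1000",
})
```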

Response

HTTP code: 200

Format: JSON

Result of executing the SQL query

Possible errors

HTTP code

Description

message field value

details field value

400

The lower bound of the range is not specified

period.from required

 

400

The lower bound of the range is in an unsupported format

cannot parse period.from

variable

400

The lower bound of the range is equal to zero

period.from cannot be 0

 

400

The upper bound of the range is not specified

period.to required

 

400

The upper bound of the range is in an unsupported format

cannot parse period.to

variable

400

The upper bound of the range is equal to zero

period.to cannot be 0

 

400

The lower bound of the range is greater than the upper bound

period.from cannot be greater than period.to

 

400

Invalid SQL query

invalid sql

variable

400

An invalid table appears in the SQL query

the only valid table is `events`

 

400

The SQL query lacks a LIMIT

sql: LIMIT required

 

400

The LIMIT in the SQL query exceeds the maximum (1000)

sql: maximum LIMIT is 1000

 

404

Storage cluster not found

cluster not found

 

406

The clusterID parameter was not specified, and more than one cluster is registered in KUMA

multiple clusters found, please provide clusterID

 

500

No available cluster nodes

no nodes available

 

50x

Any other internal errors

event search failed

variable

Page top

[Topic 222298]

Viewing information about the cluster

GET /api/v1/events/clusters

Access: administrator, analyst, and operator.

The main tenant clusters are accessible to all users.

Query parameters (URL Query)

Name

Data type

Mandatory

Description

Value example

page

number

No

Page number. Starts with 1. The page size is 250 entries. If the parameter is not specified, the default value is 1.

1

id

string

No

Cluster ID. If the parameter is specified several times, then a list is generated and the logical OR operator is applied

00000000-0000-0000-0000-000000000000

TenantID

string

No

Tenant ID. If the parameter is specified several times, then a list is generated and the logical OR operator is applied. If the user does not have the required role in the specified tenant, then this tenant is ignored.

00000000-0000-0000-0000-000000000000

name

string

No

Cluster name. Case-insensitive regular expression (PCRE).

cluster
^My cluster$

Response

HTTP code: 200

Format: JSON

type Response []Cluster

type Cluster struct {
    ID         string `json:"id"`
    Name       string `json:"name"`
    TenantID   string `json:"tenantID"`
    TenantName string `json:"tenantName"`
}

Possible errors

HTTP code

Description

message field value

details field value

400

Invalid value of the "page" parameter

invalid query parameter value

page

500

Any other internal errors

variable

variable

Page top

[Topic 222299]

Resource search

GET /api/v1/resources

Access: administrator, analyst, and operator.

Query parameters (URL Query)

Name

Data type

Mandatory

Description

Value example

page

number

No

Page number. Starts with 1. The page size is 250 entries. If the parameter is not specified, the default value is 1.

1

id

string

No

Resource ID. If the parameter is specified several times, then a list is generated and the logical OR operator is applied.

00000000-0000-0000-0000-000000000000

TenantID

string

No

Resource tenant ID. If the parameter is specified several times, then a list is generated and the logical OR operator is applied. If the user does not have the required role in the specified tenant, then this tenant is ignored.

00000000-0000-0000-0000-000000000000

name

string

No

Resource name. Case-insensitive regular expression (PCRE).

resource
^My resource$

kind

string

No

Resource type. If the parameter is specified several times, then a list is generated and the logical OR operator is applied

collector, correlator, storage, activeList, aggregationRule, connector, correlationRule, dictionary, enrichmentRule, destination, filter, normalizer, responseRule, search, agent, proxy, secret

Response

HTTP code: 200

Format: JSON

type Response []Resource

type Resource struct {
    ID          string `json:"id"`
    Kind        string `json:"kind"`
    Name        string `json:"name"`
    Description string `json:"description"`
    TenantID    string `json:"tenantID"`
    TenantName  string `json:"tenantName"`
    UserID      string `json:"userID"`
    UserName    string `json:"userName"`
    Created     string `json:"created"`
    Updated     string `json:"updated"`
}

Possible errors

HTTP code

Description

message field value

details field value

400

Invalid value of the "page" parameter

invalid query parameter value

page

400

Invalid value of the "kind" parameter

invalid kind

<kind>

500

Any other internal errors

variable

variable

Page top

[Topic 222300]

Loading resource file

POST /api/v1/resources/upload

Access: administrator and analyst.

Request body

Encrypted contents of the resource file in binary format.

Response

HTTP code: 200

Format: JSON

File ID. It should be specified in the body of requests for viewing the contents of the file and for importing resources.

type Response struct {
    ID string `json:"id"`
}

Possible errors

HTTP code

Description

message field value

details field value

400

The file size exceeds the maximum allowable (64 MB)

maximum file size is 64 MB

 

403

The user does not have the required roles in any of the tenants

access denied

 

500

Any other internal errors

variable

variable

Page top

[Topic 222301]

Viewing the contents of a resource file

POST /api/v1/resources/toc

Access: administrator, analyst, and operator.

Request body

Format: JSON

Name

Data type

Mandatory

Description

Value example

fileID

string

Yes

The file ID obtained as a result of loading the resource file.

00000000-0000-0000-0000-000000000000

password

string

Yes

Resource file password.

SomePassword!88

Response

HTTP code: 200

Format: JSON

File version, list of resources, categories, and folders.

The ID of the retrieved resources must be used when importing.

type Package struct {
    Version         string                        `json:"version"`
    AssetCategories []*categories.Category        `json:"assetCategories"`
    Folders         []*folders.Folder             `json:"folders"`
    Resources       []*resources.ExportedResource `json:"resources"`
}

Page top

[Topic 222302]

Importing resources

POST /api/v1/resources/import

Access: administrator and analyst.

Request body

Name

Data type

Mandatory

Description

Value example

fileID

string

Yes

The file ID obtained as a result of loading the resource file.

00000000-0000-0000-0000-000000000000

password

string

Yes

Resource file password.

SomePassword!88

TenantID

string

Yes

ID of the target tenant

00000000-0000-0000-0000-000000000000

actions

map[string]uint8

Yes

Mapping of the resource ID to the action that must be taken in relation to it.

0—do not import (used when resolving conflicts)

1—import (should initially be assigned to each resource)

2—replace (used when resolving conflicts)

{
    "00000000-0000-0000-0000-000000000000": 0,
    "00000000-0000-0000-0000-000000000001": 1,
    "00000000-0000-0000-0000-000000000002": 2
}

 

Response

HTTP code

Body

204

 

409

The imported resources conflict with the existing ones by ID. In this case, you need to repeat the import operation while specifying the following actions for these resources:

0—do not import

2—replace

type ImportConflictsError struct {
    HardConflicts []string `json:"conflicts"`
}

 

Page top

[Topic 222303]

Exporting resources

POST /api/v1/resources/export

Access: administrator and analyst.

Request body

Format: JSON

Name

Data type

Mandatory

Description

Value example

ids

[]string

Yes

Resource IDs to be exported

["00000000-0000-0000-0000-000000000000"]

password

string

Yes

Exported resource file password

SomePassword!88

TenantID

string

Yes

ID of the tenant that owns the exported resources

00000000-0000-0000-0000-000000000000

Response

HTTP code: 200

Format: JSON

ID of the file with the exported resources. It should be used in a request to download the resource file.

type ExportResponse struct {
    FileID string `json:"fileID"`
}
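For example, the returned fileID is substituted into the download route as follows. This is a Python sketch; the ID is illustrative:

```python
# After POST /api/v1/resources/export returns a fileID, the encrypted
# resource file is fetched from the download route described below.
file_id = "00000000-0000-0000-0000-000000000000"
download_url = f"https://kuma.example.com:7223/api/v1/resources/download/{file_id}"
```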

Page top

[Topic 222304]

Downloading the resource file

GET /api/v1/resources/download/<id>

Here "id" is the file ID obtained as a result of executing a resource export request.

Access: administrator and analyst.

Response

HTTP code: 200

Encrypted contents of the resource file in binary format.

Possible errors

HTTP code

Description

message field value

details field value

400

File ID not specified

route parameter required

id

400

The file ID is not a valid UUID

id is not a valid UUID

 

403

The user does not have the required roles in any of the tenants

access denied

 

404

File not found

file not found

 

406

The file is a directory

not regular file

 

500

Any other internal errors

variable

variable

Page top

[Topic 222305]

Search for services

GET /api/v1/services

Access: administrator and analyst.

Query parameters (URL Query)

Name

Data type

Mandatory

Description

Value example

page

number

No

Page number. Starts with 1. The page size is 250 entries. If the parameter is not specified, the default value is 1.

1

id

string

No

Service ID. If the parameter is specified several times, then a list is generated and the logical OR operator is applied.

00000000-0000-0000-0000-000000000000

TenantID

string

No

Service tenant ID. If the parameter is specified several times, then a list is generated and the logical OR operator is applied. If the user does not have the required role in the specified tenant, then this tenant is ignored.

00000000-0000-0000-0000-000000000000

name

string

No

Service name. Case-insensitive regular expression (PCRE).

service
^My service$

kind

string

No

Service type. If the parameter is specified several times, then a list is generated and the logical OR operator is applied.

collector, correlator, storage, agent

fqdn

string

No

Service FQDN. Case-insensitive regular expression (PCRE).

hostname

^hostname.example.com$

paired

bool

No

Display only those services that have completed their first start. If the parameter is present in the URL query, then its value is assumed to be true. The values specified by the user are ignored. Example: /api/v1/services?paired

 

Response

HTTP code: 200

Format: JSON

type Response []Service

 

type Service struct {

ID string `json:"id"`

TenantID string `json:"tenantID"`

TenantName string `json:"tenantName"`

ResourceID string `json:"resourceID"`

Kind string `json:"kind"`

Name string `json:"name"`

Address string `json:"address"`

FQDN string `json:"fqdn"`

Status string `json:"status"`

Warning string `json:"warning"`

APIPort string `json:"apiPort"`

Uptime string `json:"uptime"`

Version string `json:"version"`

Created string `json:"created"`

Updated string `json:"updated"`

}

Possible errors

HTTP code

Description

message field value

details field value

400

Invalid value of the "page" parameter

invalid query parameter value

page

400

Invalid value of the "kind" parameter

invalid kind

<kind>

500

Any other internal errors

variable

variable

Page top

[Topic 222306]

Tenant search

GET /api/v1/tenants

Only tenants available to the user are displayed.

Access: administrator and analyst.

Query parameters (URL Query)

Name

Data type

Mandatory

Description

Value example

page

number

No

Page number. Starts with 1. The page size is 250 entries. If the parameter is not specified, the default value is 1.

1

id

string

No

Tenant ID. If the parameter is specified several times, then a list is generated and the logical OR operator is applied.

00000000-0000-0000-0000-000000000000

name

string

No

Tenant name. Case-insensitive regular expression (PCRE).

tenant
^My tenant$

main

bool

No

Only display the main tenant. If the parameter is present in the URL query, then its value is assumed to be true. The values specified by the user are ignored. Example: /api/v1/tenants?main

 

Response

HTTP code: 200

Format: JSON

type Response []Tenant

 

type Tenant struct {

ID string `json:"id"`

Name string `json:"name"`

Main bool `json:"main"`

Description string `json:"description"`

EPS uint64 `json:"eps"`

EPSLimit uint64 `json:"epsLimit"`

Created string `json:"created"`

Updated string `json:"updated"`

}
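Because results arrive in fixed pages of 250 entries, a client has to request successive pages until a short page is returned. A sketch of that loop; `fetch` stands in for an authenticated HTTP GET of /api/v1/tenants?page=N (a hypothetical helper):

```go
package main

import "fmt"

// The documented page size for tenant search.
const pageSize = 250

// allPages requests page 1, 2, … until a page shorter than pageSize
// signals that there are no further results.
func allPages(fetch func(page int) []string) []string {
	var all []string
	for page := 1; ; page++ {
		batch := fetch(page)
		all = append(all, batch...)
		if len(batch) < pageSize {
			return all
		}
	}
}

func main() {
	// Simulated backend with 300 tenants → pages of 250 and 50.
	total := 300
	fetch := func(page int) []string {
		start := (page - 1) * pageSize
		var out []string
		for i := start; i < total && i < start+pageSize; i++ {
			out = append(out, fmt.Sprintf("tenant-%d", i))
		}
		return out
	}
	fmt.Println(len(allPages(fetch))) // 300
}
```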

Possible errors

HTTP code

Description

message field value

details field value

400

Invalid value of the "page" parameter

invalid query parameter value

page

500

Any other internal errors

variable

variable

Page top

[Topic 222307]

View token bearer information

GET /api/v1/users/whoami

Response

HTTP code: 200

Format: JSON

type Response struct {

ID string `json:"id"`

Name string `json:"name"`

Login string `json:"login"`

Email string `json:"email"`

Tenants []TenantAccess `json:"tenants"`

}

 

type TenantAccess struct {

ID string `json:"id"`

Name string `json:"name"`

Role string `json:"role"`

}
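A small sketch of decoding the whoami response and looking up the caller's role in a given tenant (the `roleIn` helper is illustrative, not part of the API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Structures mirror the documented whoami response.
type TenantAccess struct {
	ID   string `json:"id"`
	Name string `json:"name"`
	Role string `json:"role"`
}

type WhoAmI struct {
	Login   string         `json:"login"`
	Tenants []TenantAccess `json:"tenants"`
}

// roleIn returns the caller's role in the given tenant, or "" if the
// token grants no role there.
func roleIn(w WhoAmI, tenantID string) string {
	for _, t := range w.Tenants {
		if t.ID == tenantID {
			return t.Role
		}
	}
	return ""
}

func main() {
	var w WhoAmI
	_ = json.Unmarshal([]byte(`{"login":"admin","tenants":[{"id":"t1","name":"Main","role":"administrator"}]}`), &w)
	fmt.Println(roleIn(w, "t1")) // administrator
}
```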

Page top

[Topic 234105]

Dictionary updating in services

POST /api/v1/dictionaries/update

You can update only dictionaries in dictionary resources of the table type.

Access: administrator and analyst.

Query parameters (URL Query)

Name

Data type

Mandatory

Description

Value example

collectorID

string

No

IDs of the collectors in which the dictionary will be updated. You can specify multiple values, in which case the dictionary will be updated in each of the services.

00000000-0000-0000-0000-000000000000

correlatorID

string

No

IDs of the correlators in which the dictionary will be updated. You can specify multiple values, in which case the dictionary will be updated in each of the services.

00000000-0000-0000-0000-000000000000

dictionaryID

string

Yes

ID of the dictionary that will be updated.

00000000-0000-0000-0000-000000000000

If an update in one of the services ends with an error, this does not interrupt updates in the other services.

Request body

Multipart field name

Data type

Mandatory

Description

Value example

file

CSV file

Yes

The request contains a CSV file. The existing dictionary data is replaced with the data from this file. The first line of the CSV file, which contains the column names, must not be changed.

key columns,column1,column2

key1,k1col1,k1col2

key2,k2col1,k2col2

Response

HTTP code: 200

Format: JSON

type Response struct {

ServicesFailedToUpdate []UpdateError `json:"servicesFailedToUpdate"`

}

type UpdateError struct {

ID string `json:"id"`

Err error `json:"err"`

}

Returns only errors for services in which the dictionaries have not been updated.
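A sketch of assembling the multipart request body with the single documented "file" field carrying the CSV data; the file name inside the form part is an arbitrary placeholder. The first CSV line (the column names) must match the existing dictionary columns:

```go
package main

import (
	"bytes"
	"fmt"
	"mime/multipart"
)

// dictionaryBody builds a multipart/form-data body containing the CSV
// payload in the "file" field and returns the matching Content-Type
// header value (which carries the boundary).
func dictionaryBody(csv []byte) (body *bytes.Buffer, contentType string, err error) {
	body = &bytes.Buffer{}
	w := multipart.NewWriter(body)
	part, err := w.CreateFormFile("file", "dictionary.csv")
	if err != nil {
		return nil, "", err
	}
	if _, err := part.Write(csv); err != nil {
		return nil, "", err
	}
	if err := w.Close(); err != nil {
		return nil, "", err
	}
	return body, w.FormDataContentType(), nil
}

func main() {
	csv := []byte("key columns,column1,column2\nkey1,k1col1,k1col2\n")
	body, ct, err := dictionaryBody(csv)
	if err != nil {
		panic(err)
	}
	fmt.Println(ct, body.Len() > 0)
}
```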

Possible errors

HTTP code

Description

message field value

details field value

400

Invalid request body

request body decode failed

Error

400

The request body is empty (zero dictionary lines)

request body required

 

400

Dictionary ID not specified

invalid value

dictionaryID

400

Incorrect value of dictionary line

invalid value

rows or rows[i]

400

Dictionary with the specified ID has an invalid type (not table)

can only update table dictionary

 

400

Attempt to change dictionary columns

columns must not change with update

 

403

No access to requested resource

access denied

 

404

Service not found

service not found

 

404

Dictionary not found

dictionary not found

Service ID

500

Any other internal errors

variable

variable

Page top

[Topic 234106]

Dictionary retrieval

GET /api/v1/dictionaries

You can get only dictionaries in dictionary resources of the table type.

Access: administrator and analyst.

Query parameters (URL Query)

Name

Data type

Mandatory

Description

Value example

dictionaryID

string

Yes

ID of the dictionary that will be received

00000000-0000-0000-0000-000000000000

Response

HTTP code: 200

Format: text/plain; charset=utf-8

A CSV file is returned with the dictionary data in the response body.
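Since the body is plain CSV, the standard encoding/csv package turns it into rows, with the first row holding the column names. A minimal sketch:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// parseDictionary splits the CSV response body into the header row
// (column names) and the data rows.
func parseDictionary(body string) (header []string, rows [][]string, err error) {
	records, err := csv.NewReader(strings.NewReader(body)).ReadAll()
	if err != nil {
		return nil, nil, err
	}
	if len(records) == 0 {
		return nil, nil, nil
	}
	return records[0], records[1:], nil
}

func main() {
	body := "key columns,column1,column2\nkey1,k1col1,k1col2\nkey2,k2col1,k2col2\n"
	header, rows, err := parseDictionary(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(header, len(rows)) // [key columns column1 column2] 2
}
```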

Page top

[Topic 217766]

Commands for manually starting and installing components

This section contains the parameters of the KUMA executable file /opt/kaspersky/kuma/kuma that can be used to manually start or install KUMA services. This may be useful when you need to see the output in the server operating system console.

Commands and their parameters

Commands

Description

tools

Start KUMA administration tools.

collector

Install, start, or remove a collector service.

core

Install, start, or uninstall a Core service.

correlator

Install, start, or remove a correlator service.

agent

Install, start, or remove an agent service.

help

Get information about available commands and parameters.

license

Get information about license.

storage

Start or install a Storage.

version

Get information about version of the program.

Flags:

-h, --help are used to get help about any kuma command. For example, kuma <component> --help.

Examples:

  • kuma version is used to get the version of the KUMA installer.
  • kuma core -h is used to get help about the core command of the KUMA installer.
  • kuma collector --core <address of the server where the collector should obtain its settings> --id <ID of the installed service> --api.port <port> is used to start collector service installation.
Page top

[Topic 238733]

Integrity check of KUMA files

The integrity of KUMA components is checked using a set of scripts based on the integrity_checker tool and located in the /opt/kaspersky/kuma/integrity/bin directory. An integrity check uses manifest xml files in the /opt/kaspersky/kuma/integrity/manifest/* directory, signed with a Kaspersky cryptographic signature.

Running the integrity check tool requires a user account with permissions at least matching those of the KUMA account.

The integrity check tool processes each KUMA component individually, and it must be run on servers that have the appropriate components installed. An integrity check also verifies the xml file that was used.

To check the integrity of component files:

  1. Run the following command to navigate to the directory that contains the set of scripts:

    cd /opt/kaspersky/kuma/integrity/bin

  2. Then pick the command that matches the KUMA component you want to check:
    • ./check_all.sh for KUMA Core and Storage components.
    • ./check_core.sh for KUMA Core components.
    • ./check_collector.sh for KUMA collector components.
    • ./check_correlator.sh for KUMA correlator components.
    • ./check_storage.sh for storage components.
    • ./check_kuma_exe.sh <full path to kuma.exe omitting file name> for KUMA Agent for Windows. The standard location of the agent executable file on the Windows device is: C:\Program Files\Kaspersky Lab\KUMA\.

The integrity of the component files is checked.

The result of checking each component is displayed in the following format:

  • The Summary section describes the number of scanned objects along with the scan status: integrity not confirmed / object skipped / integrity confirmed:
    • Manifests – the number of manifest files processed.
    • Files – the number of KUMA files processed.
    • Directories – not used by KUMA.
    • Registries – not used by KUMA.
    • Registry values – not used by KUMA.
  • Component integrity check result:
    • SUCCEEDED – integrity confirmed.
    • FAILED – integrity violated.
Page top

[Topic 217941]

Normalized event data model

This section presents the KUMA normalized event data model. All events that are processed by the KUMA correlator to detect alerts must comply with this model.

Events that do not comply with this data model must be converted to this format (normalized) using collectors.

Normalized event data model

Field name

Value type

Description

Internal standard fields

 

 

ID

String

Unique event ID of the UUID type. Its value never changes.

The collector generates the ID for the base event that is generated in the collector.

The correlator generates the ID of the correlation event.

Timestamp

Number, timestamp

Time when the base event was created in the collector.

Time when the correlation event was created in the correlator.

The time is specified in UTC0. In the KUMA web interface, the value is displayed based on the timezone of the user's browser.

TenantID

String

Tenant ID.

ServiceID

String

ID of the service instance: correlator, collector, storage.

ServiceName

String

Name of the service instance that was assigned by the KUMA administrator to the service when it was created.

AggregationRuleName

String

The name of the aggregation rule that processed the event.

AggregationRuleID

String

ID of the aggregation rule that processed the event.

CorrelationRuleName

String

Name of the correlation rule that triggered the creation of the correlation event. It is filled in only for the correlation event.

CorrelationRuleID

String

ID of the correlation rule that triggered the creation of the correlation event. It is filled in only for the correlation event.

GroupedBy

Nested list of strings

List of names of the fields that were used for grouping in the correlation rule. It is filled in only for the correlation event.

Priority

Number

Event severity level.

Code

String

In a base event, this is the code of a process, function or operation return from the source.

In a correlation event, the alert code for the first line support or the template code of the notification to be submitted is written to this field.

 

Tactic

String

Name of the tactic from MITRE.

Technique

String

Name of the technique from MITRE.

ReplayID

String

ID of the retroscan that generated the event.

Raw

String

Unalterable text of the source "raw" event.

SourceAssetID

String

ID of the source asset.

DestinationAssetID

String

ID of the destination asset.

DeviceAssetID

String

Asset ID.

SourceAccountID

String

ID of the source account.

DestinationAccountID

String

ID of the destination account.

SpaceID

String

ID of the space.

BaseEvents

Nested [Event] list

Nested structure containing a list of base events. This field can be filled in for correlation events.

TI

Nested [string:string] dictionary

Field that contains categories in a dictionary format received from an external Threat Intelligence source based on indicators from an event.

Extra

Nested [string:string] dictionary

During normalization of a raw event, this field can be used to place those fields that have not been mapped to KUMA event fields. This field can be filled in only for base events.

AffectedAssets

Nested [Affected] structure

Nested structure from which you can query alert-related assets and user accounts, and find out the number of times they appear in alert events.

CEF standard fields

 

 

DeviceVendor

String

Name of the log source producer. The value is taken from the raw event.

Together, the DeviceVendor, DeviceProduct, and DeviceVersion fields uniquely identify the log source.

DeviceProduct

String

Product name from the log source. The value is taken from the raw event.

Together, the DeviceVendor, DeviceProduct, and DeviceVersion fields uniquely identify the log source.

DeviceVersion

String

Product version from the log source. The value is taken from the raw event.

Together, the DeviceVendor, DeviceProduct, and DeviceVersion fields uniquely identify the log source.

DeviceEventClassID

String

Unique ID for the event type from the log source. Certain log sources categorize events.

Name

String

Event name in the raw event.

Severity

String

Error severity from the raw event.

DeviceAction

String

Action taken by a device or by a log source. For example, blocked, detected.

ApplicationProtocol

String

Application-layer protocol such as HTTP or Telnet.

DeviceCustomIPv6Address1

String

Field for displaying an IPv6 address value that cannot be mapped to any other element of the data model.

It can be used to process the logs of network devices where you need to distinguish between the IP addresses of various devices (for firewalls, for example).

This field is customizable.

DeviceCustomIPv6Address1Label

String

Description of the purpose of the DeviceCustomIPv6Address1 field.

DeviceCustomIPv6Address2

String

Field for displaying an IPv6 address value that cannot be mapped to any other element of the data model.

It can be used to process the logs of network devices where you need to distinguish between the IP addresses of various devices (for firewalls, for example).

This field is customizable.

DeviceCustomIPv6Address2Label

String

Description of the purpose of the DeviceCustomIPv6Address2 field.

DeviceCustomIPv6Address3

String

Field for displaying an IPv6 address value that cannot be mapped to any other element of the data model.

It can be used to process the logs of network devices where you need to distinguish between the IP addresses of various devices (for firewalls, for example).

This field is customizable.

DeviceCustomIPv6Address3Label

String

Description of the purpose of the DeviceCustomIPv6Address3 field.

DeviceCustomIPv6Address4

String

Field for displaying an IPv6 address value that cannot be mapped to any other element of the data model.

It can be used to process the logs of network devices where you need to distinguish between the IP addresses of various devices (for firewalls, for example).

This field is customizable.

DeviceCustomIPv6Address4Label

String

Description of the purpose of the DeviceCustomIPv6Address4 field.

DeviceEventCategory

String

Category of the raw event, according to the scheme that defines the categories of log source events.

DeviceCustomFloatingPoint1

Number

Field for the Float-type value that cannot be mapped to any other field of the data model.

This field is customizable.

DeviceCustomFloatingPoint1Label

String

Description of the purpose of the DeviceCustomFloatingPoint1 field.

DeviceCustomFloatingPoint2

Number

Field for the Float-type value that cannot be mapped to any other field of the data model.

This field is customizable.

DeviceCustomFloatingPoint2Label

String

Description of the purpose of the DeviceCustomFloatingPoint2 field.

DeviceCustomFloatingPoint3

Number

Field for the Float-type value that cannot be mapped to any other field of the data model.

This field is customizable.

DeviceCustomFloatingPoint3Label

String

Description of the purpose of the DeviceCustomFloatingPoint3 field.

DeviceCustomFloatingPoint4

Number

Field for the Float-type value that cannot be mapped to any other field of the data model.

This field is customizable.

DeviceCustomFloatingPoint4Label

String

Description of the purpose of the DeviceCustomFloatingPoint4 field.

DeviceCustomNumber1

Number

Field for the integer value that cannot be mapped to any other field of the data model.

This field is customizable.

 

DeviceCustomNumber1Label

String

Description of the purpose of the DeviceCustomNumber1 field.

DeviceCustomNumber2

Number

Field for the integer value that cannot be mapped to any other field of the data model.

This field is customizable.

 

DeviceCustomNumber2Label

String

Description of the purpose of the DeviceCustomNumber2 field.

DeviceCustomNumber3

Number

Field for the integer value that cannot be mapped to any other field of the data model.

This field is customizable.

 

DeviceCustomNumber3Label

String

Description of the purpose of the DeviceCustomNumber3 field.

BaseEventCount

Number

Number of base events combined into an aggregated event.

DeviceCustomString1

String

Field for the string value that cannot be mapped to any other field of the data model.

This field is customizable.

DeviceCustomString1Label

String

Description of the purpose of the DeviceCustomString1 field.

DeviceCustomString2

String

Field for the string value that cannot be mapped to any other field of the data model.

This field is customizable.

DeviceCustomString2Label

String

Description of the purpose of the DeviceCustomString2 field.

DeviceCustomString3

String

Field for the string value that cannot be mapped to any other field of the data model.

This field is customizable.

DeviceCustomString3Label

String

Description of the purpose of the DeviceCustomString3 field.

DeviceCustomString4

String

Field for the string value that cannot be mapped to any other field of the data model.

This field is customizable.

DeviceCustomString4Label

String

Description of the purpose of the DeviceCustomString4 field.

DeviceCustomString5

String

Field for the string value that cannot be mapped to any other field of the data model.

This field is customizable.

DeviceCustomString5Label

String

Description of the purpose of the DeviceCustomString5 field.

DeviceCustomString6

String

Field for the string value that cannot be mapped to any other field of the data model.

This field is customizable.

DeviceCustomString6Label

String

Description of the purpose of the DeviceCustomString6 field.

DestinationDnsDomain

String

The DNS domain portion of the fully qualified domain name (FQDN) of the destination, if the raw event contains the values of the traffic sender and recipient.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

DestinationServiceName

String

Service name on the traffic recipient's side. For example, "sshd".

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

DestinationTranslatedAddress

String

IP address of the traffic recipient asset (after the address is translated).

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

DestinationTranslatedPort

Number

Port number on the traffic recipient asset (after the recipient address is translated).

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

DeviceCustomDate1

Number, timestamp

Field for the Timestamp-type value that cannot be mapped to any other field of the data model.

This field is customizable.

The time is specified in UTC0. In the KUMA web interface, the value is displayed based on the timezone of the user's browser.

 

DeviceCustomDate1Label

String

Field for describing the purpose of the DeviceCustomDate1 field.

DeviceCustomDate2

Number, timestamp

Field for the Timestamp-type value that cannot be mapped to any other field of the data model.

This field is customizable.

The time is specified in UTC0. In the KUMA web interface, the value is displayed based on the timezone of the user's browser.

 

DeviceCustomDate2Label

String

Field for describing the purpose of the DeviceCustomDate2 field.

DeviceDirection

Number

Field for a description of the connection direction from the raw event.

  • 0—Inbound connection.
  • 1—Outbound connection.

DeviceDnsDomain

String

The DNS domain part of the fully qualified domain name (FQDN) of the asset IP address from which the raw event was received.

DeviceExternalID

String

External unique ID of the device if it is communicated in the raw event.

DeviceFacility

String

Facility from the raw event, if one exists. For example, the Facility field in the Syslog can be used to transmit the OS component name where an error occurred.

 

DeviceInboundInterface

String

Name of the incoming connection interface.

DeviceNtDomain

String

Windows Domain Name of the device.

DeviceOutboundInterface

String

Name of the outgoing connection interface.

DevicePayloadID

String

The payload's unique ID associated with the raw event.

DeviceProcessName

String

Name of the process from the raw event.

DeviceTranslatedAddress

String

Translated IPv4 address of the device from which the raw event was received.

DestinationHostName

String

Host name of the traffic receiver. FQDN of the traffic recipient, if available.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

DestinationMacAddress

String

MAC address of the traffic recipient asset.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

DestinationNtDomain

String

Windows Domain Name of the traffic recipient device.
This is used to process network traffic logs in which you need to distinguish between the source and destination.

DestinationProcessID

Number

ID of the system process that is associated with the traffic recipient in the raw event. For example, if Process ID 105 is specified in the event, DestinationProcessId=105.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

DestinationUserPrivileges

String

Names of user roles that identify user privileges at the destination. For example, "User", "Guest", or "Administrator".

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

DestinationProcessName

String

Name of the system process at the destination. For example, "sshd" or "telnet".

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

DestinationPort

Number

Port number at the destination.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

DestinationAddress

String

Destination IPv4 address.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

DeviceTimeZone

String

Timezone of the device where the event was generated.

The default timezone is the collector or correlator system time. If the event is configured to be enriched with timezone information, the field specifies the timezone from the enrichment rule. If the time zone of the event source was specified in the raw event and this data was saved during normalization, information about the time zone of the event source is saved in the event field.

The format of the field value is ±hh:mm.

DestinationUserID

String

User ID at the destination.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

DestinationUserName

String

User name at the destination. It may contain the email address of the user.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

DeviceAddress

String

IPv4 address of the asset from which the event was received.

DeviceHostName

String

Name of the asset host from which the event was received. FQDN of the asset, if available.

DeviceMacAddress

String

MAC address of the asset from which the event was received.

DeviceProcessID

Number

ID of the system process on the device that generated the event.

EndTime

Number

Timestamp when the activity associated with the event ended.

The time is specified in UTC0. In the KUMA web interface, the value is displayed based on the timezone of the user's browser.

ExternalID

String

ID of the device that generated the event.

FileCreateTime

Number

Time of file creation from the event.

The time is specified in UTC0. In the KUMA web interface, the value is displayed based on the timezone of the user's browser.

FileHash

String

Hash of the file.

FileID

String

File ID.

FileModificationTime

Number

Time when the file was last modified.

The time is specified in UTC0. In the KUMA web interface, the value is displayed based on the timezone of the user's browser.

FilePath

String

File path, including the file name.

FilePermission

String

List of file permissions.

FileType

String

File type. For example, application, pipe, or socket.

FlexDate1

Number, timestamp

Field for the Timestamp-type value that cannot be mapped to any other field of the data model.

This field is customizable.

The time is specified in UTC0. In the KUMA web interface, the value is displayed based on the timezone of the user's browser.

 

FlexDate1Label

String

Description of the purpose of the flexDate1 field.

FlexString1

String

Field for the String-type value that cannot be mapped to any other field of the data model.

This field is customizable.

FlexString1Label

String

Description of the purpose of the flexString1 field.

FlexString2

String

Field for the String-type value that cannot be mapped to any other field of the data model.

This field is customizable.

FlexString2Label

String

Description of the purpose of the flexString2 field.

FlexNumber1

Number

Field for the integer type that cannot be mapped to any other field of the data model.

This field is customizable.

FlexNumber1Label

String

Description of the purpose of the flexNumber1 field.

FlexNumber2

Number

Field for the integer type that cannot be mapped to any other field of the data model.

This field is customizable.

FlexNumber2Label

String

Description of the purpose of the flexNumber2 field.

FileName

String

Filename without specifying the file path.

FileSize

Number

File size.

BytesIn

Number

Number of inbound bytes.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

 

Message

String

Short description of the error or problem from the raw event.

OldFileCreateTime

Number

Time of OLD file creation from the event.

The time is specified in UTC0. In the KUMA web interface, the value is displayed based on the timezone of the user's browser.

OldFileHash

String

Hash code of the OLD file.

OldFileID

String

ID of the OLD file.

OldFileModificationTime

Number

Time when the OLD file was last modified.

The time is specified in UTC0. In the KUMA web interface, the value is displayed based on the timezone of the user's browser.

 

OldFileName

String

Name of the OLD file (without the file path).

OldFilePath

String

Path to the OLD file, including the file name.

OldFilePermission

String

List of permissions of the OLD file.

OldFileSize

Number

Size of the OLD file.

OldFileType

String

OLD file type. For example, application, pipe, or socket.

BytesOut

Number

Number of sent bytes.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

EventOutcome

String

Result of the action. For example, "success", "failure".

TransportProtocol

String

Name of the OSI Layer 4 protocol (such as TCP or UDP).

Reason

String

Short description of the audit reason in the audit messages.

RequestUrl

String

URL of the request.

RequestClientApplication

String

Agent that processed the request.

RequestContext

String

Description of the request context.

RequestCookies

String

Cookie files related to the request.

RequestMethod

String

Method that was used to access the URL (such as POST or GET).

DeviceReceiptTime

Number

Time when the event was received.

The time is specified in UTC0. In the KUMA web interface, the value is displayed based on the timezone of the user's browser.

SourceHostName

String

Name of the host of the traffic source. FQDN of the traffic source, if available.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

SourceDnsDomain

String

The DNS domain portion of the fully qualified domain name (FQDN) of the traffic source.
This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

SourceServiceName

String

Name of the service at the traffic source. For example, "sshd".

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

SourceTranslatedAddress

String

Source translated IPv4 address.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

SourceTranslatedPort

Number

Number of the translated port at the source.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

SourceMacAddress

String

MAC address of the traffic source asset.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

SourceNtDomain

String

Windows Domain Name of the traffic source device.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

SourceProcessID

Number

System process ID that is associated with the traffic source in the raw event. For example, if Process ID 105 is specified in the event, SourceProcessId=105.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

SourceUserPrivileges

String

Names of user roles that identify user privileges at the source. For example, "User", "Guest", or "Administrator".

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

SourceProcessName

String

Name of the system process at the source. For example, "sshd" or "telnet".

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

SourcePort

Number

Port number at the source.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

SourceAddress

String

Source IPv4 address.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

StartTime

Number

Time when the action associated with the event began.

The time is specified in UTC+0. In the KUMA web interface, the value is displayed based on the timezone of the user's browser.

SourceUserID

String

User ID at the source.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

SourceUserName

String

User name at the source. It may contain the email address of the user.

This is used to process network traffic logs in which you need to be able to distinguish between the source and destination.

Type

Number

Indicator of the event type. The following values are available:

  • 1—Base event
  • 2—Aggregated event
  • 3—Correlation event
  • 4—Audit event
  • 5—Monitoring event

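For scripting against exported events, the numeric codes above can be mapped to readable labels. The helper below is a hypothetical sketch for illustration, not part of KUMA:

```python
# Hypothetical mapping of the numeric Type codes to readable labels.
EVENT_TYPES = {
    1: "Base event",
    2: "Aggregated event",
    3: "Correlation event",
    4: "Audit event",
    5: "Monitoring event",
}

def type_label(code: int) -> str:
    # Fall back to "Unknown" for codes outside the documented range.
    return EVENT_TYPES.get(code, "Unknown")
```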
Fields containing geographic data

 

 

SourceCountry

String

Country matching the source IPv4 address from the SourceAddress field.

SourceRegion

String

Region matching the source IPv4 address from the SourceAddress field.

SourceCity

String

City matching the source IPv4 address from the SourceAddress field.

SourceLatitude

Number

Latitude matching the source IPv4 address from the SourceAddress field.

SourceLongitude

Number

Longitude matching the source IPv4 address from the SourceAddress field.

DestinationCountry

String

Country matching the destination IPv4 address from the DestinationAddress field.

DestinationRegion

String

Region matching the destination IPv4 address from the DestinationAddress field.

DestinationCity

String

City matching the destination IPv4 address from the DestinationAddress field.

DestinationLatitude

Number

Latitude matching the destination IPv4 address from the DestinationAddress field.

DestinationLongitude

Number

Longitude matching the destination IPv4 address from the DestinationAddress field.

DeviceCountry

String

Country matching the device IPv4 address from the DeviceAddress field.

DeviceRegion

String

Region matching the device IPv4 address from the DeviceAddress field.

DeviceCity

String

City matching the device IPv4 address from the DeviceAddress field.

DeviceLatitude

Number

Latitude matching the device IPv4 address from the DeviceAddress field.

DeviceLongitude

Number

Longitude matching the device IPv4 address from the DeviceAddress field.

 

Nested Affected structure

Field

Data type

Description

Assets

Nested [AffectedRecord] list

List and number of assets associated with the alert.

Accounts

Nested [AffectedRecord] list

List and number of user accounts associated with the alert.

Nested AffectedRecord structure

Field

Data type

Description

Value

String

ID of the asset or user account.

Count

Number

The number of times an asset or user account appears in alert-related events.

Page top

[Topic 233888]

Alert data model

This section describes the KUMA alert data model. Alerts are created by correlators whenever information security threats are detected using correlation rules. Alerts must be investigated to eliminate these threats.

Alert field

Data type

Description

ID

String

Unique ID of the alert.

TenantID

String

ID of the tenant that owns the alert. The value is inherited from the correlator that generated the alert.

TenantName

String

Tenant name.

CorrelationRuleID

String

ID of the rule used as the basis for generating the alert.

CorrelationRuleName

String

Name of the correlation rule used as the basis for generating the alert.

Status

String

Alert status. Possible values:

  • New—new alert.
  • Assigned—the alert is assigned to a user.
  • Closed—the alert was closed.
  • Exported to IRP—the alert was exported to the IRP system for further investigation.
  • Escalated—an incident was generated based on this alert.

Priority

Number

Alert severity. Possible values:

  • 1–4 — Low.
  • 5–8 — Medium.
  • 9–12 — High.
  • 13–16 — Critical.

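The numeric Priority bands above can be reduced to their severity labels with a small helper. This is a hypothetical sketch shown only to illustrate the mapping, not part of KUMA:

```python
def severity_label(priority: int) -> str:
    """Map a KUMA alert Priority (1-16) to its severity band."""
    if not 1 <= priority <= 16:
        raise ValueError("alert priority must be in the range 1-16")
    # Each band covers four consecutive values: 1-4, 5-8, 9-12, 13-16.
    return ["Low", "Medium", "High", "Critical"][(priority - 1) // 4]
```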
ManualPriority

TRUE/FALSE string

Parameter showing how the alert severity level was determined. Possible values:

  • true—defined by the user.
  • false (default value)—calculated automatically.

FirstSeen

Number

Time when the first correlation event associated with the alert was created.

LastSeen

Number

Time when the last correlation event associated with the alert was created.

UpdatedAt           

Number

Date of the last modification to the alert parameters.

UserID               

String

ID of the KUMA user assigned to examine the alert.

UserName

String

Name of the KUMA user assigned to examine the alert.

GroupedBy

Nested list of strings

List of event fields used to group events in the correlation rule.

ClosingReason

String

Reason for closing the alert. Possible values:

  • Incorrect Correlation Rule—the alert was a false positive and the received events do not indicate a real security threat. The correlation rule may need to be updated.
  • Incorrect Data—the alert was a false positive and the received events do not indicate a real security threat.
  • Responded—the appropriate measures were taken to eliminate the security threat.

Overflow             

TRUE/FALSE string

Indicator that the alert has overflowed, meaning that the size of the alert and its associated events exceeds 16 MB. Possible values:

  • true
  • false

MaxAssetsWeightStr   

String

Maximum severity of the asset categories associated with the alert.

IntegrationID

String

ID of the alert in the IRP / SOAR application, if integration with such an application is configured in KUMA.

ExternalReference

String

Link to a section in the IRP / SOAR application that displays information about an alert imported from KUMA.

IncidentID 

String

ID of the incident to which the alert is linked.

IncidentName

String

Name of the incident to which the alert is linked.

SegmentationRuleName

String

Name of the segmentation rule used to group correlation events in the alert.

BranchID      

String

ID of the hierarchy branch in which the alert was generated. Indicated for a hierarchical deployment of KUMA.

BranchName  

String

Name of the hierarchy branch in which the alert was generated. Indicated for a hierarchical deployment of KUMA.

Actions

Nested [Action] structure

Nested structure with lines indicating changes to alert statuses and assignments, and user comments.

Events

Nested [EventWrapper] structure

Nested structure from which you can query the correlation events associated with the alert.

Assets

Nested [Asset] structure

Nested structure from which you can query assets associated with the alert.

Accounts

Nested [Account] structure

Nested structure from which you can query the user accounts associated with the alert.

AffectedAssets

Nested [Affected] structure

Nested structure from which you can query alert-related assets and user accounts, and find out the number of times they appear in alert events.

Nested Affected structure

Field

Data type

Description

Assets

Nested [AffectedRecord] list

List and number of assets associated with the alert.

Accounts

Nested [AffectedRecord] list

List and number of user accounts associated with the alert.

Nested AffectedRecord structure

Field

Data type

Description

Value

String

ID of the asset or user account.

Count

Number

The number of times an asset or user account appears in alert-related events.

Nested EventWrapper structure

Field

Data type

Description

Event

Nested [Event] structure

Event fields.

Comment

String

Comment added when events were added to the alert.

LinkedAt

Number

Date when events were added to the alert.

Nested Action structure

Field

Data type

Description

CreatedAt

Number

Date when the action was taken on the alert.

UserID

String

User ID.

Kind

String

Type of action.

Value

String

Value.

Event

Nested [Event] structure

Event fields.

ClusterID

String

Cluster ID.

Page top

[Topic 234818]

Asset data model

The structure of an asset is represented by fields that contain values. Fields can also contain nested structures.

Asset field

Value type

Description

ID

String

Asset ID.

TenantName

String

Tenant name.

DeletedAt

Number

Asset deletion date.

CreatedAt

Number

Asset creation date.

TenantID

String

Tenant ID.

DirectCategories

Nested list of strings

Asset categories.

CategoryModels

Nested [Category] structure

Changes asset categories.

AffectedByIncidents

Nested dictionary:

[string:string TRUE/FALSE]

IDs of incidents.

IPAddress

Nested list of strings

Asset IP addresses.

FQDN

String

Asset FQDN.

Weight

Number

Asset importance.

Deleted

String with TRUE/FALSE values

Indicator of whether the asset has been marked for deletion from KUMA.

UpdatedAt

Number

Date of last update of the asset.

MACAddress

Nested list of strings

Asset MAC addresses.

IPAddressInt

Nested list of numbers

IP address in number format.

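The number format is presumably the conventional integer encoding of an IPv4 address (most significant octet first); Python's standard ipaddress module performs the same conversion:

```python
import ipaddress

# Convert a dotted IPv4 address to its integer form and back.
addr = ipaddress.IPv4Address("192.168.0.1")
as_int = int(addr)                          # integer encoding of the address
back = str(ipaddress.IPv4Address(as_int))   # "192.168.0.1"
```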
Owner

Nested [OwnerInfo] structure

Asset owner information.

OS

Nested [OS] structure

Asset operating system information.

displayName

String

Asset name.

APISoft

Nested [Software] structure

Software installed on the asset.

APIVulns

Nested [Vulnerability] structure

Asset vulnerabilities.

KICSServerIp

String

KICS for Networks server IP address.

KICSConnectorID

Number

KICS for Networks connector ID.

KICSDeviceID

Number

KICS for Networks asset ID.

KICSStatus

String

KICS for Networks asset status.

KICSHardware

Nested [KICSSystemInfo] structure

Asset hardware information received from KICS for Networks.

KICSSoft

Nested [KICSSystemInfo] structure

Asset software information received from KICS for Networks.

KICSRisks

Nested [KICSRisk] structure

Asset vulnerability information received from KICS for Networks.

Sources

Nested [Sources] structure

Basic information about the asset from various sources.

FromKSC

String with TRUE/FALSE values

Indicator that asset details have been imported from KSC.

NAgentID

String

ID of the KSC Agent from which the asset information was received.

KSCServerFQDN

String

FQDN of the KSC Server.

KSCInstanceID

String

KSC instance ID.

KSCMasterHostname

String

KSC Server host name.

KSCGroupID

Number

KSC group ID.

KSCGroupName

String

KSC group name.

LastVisible

Number

Date when information about the asset was last received from KSC.

Products

Nested dictionary:

[string:nested [ProductInfo] structure]

Information about Kaspersky applications installed on the asset received from KSC.

Hardware

Nested [Hardware] structure

Asset hardware information received from KSC.

KSCSoft

Nested [Software] structure

Asset software information received from KSC.

KSCVulns

Nested [Vulnerability] structure

Asset vulnerability information received from KSC.

Nested Category structure

Field

Value type

Description

ID

String

Category ID.

TenantID

String

Tenant ID.

TenantName

String

Tenant name.

Parent

String

Parent category.

Path

Nested list of strings

Structure of categories.

Name

String

Category name.

UpdatedAt

Number

Last update of the category.

CreatedAt

Number

Category creation date.

Description

String

Category description.

Weight

Number

Category importance.

CategorizationKind

String

Asset category assignment type.

CategorizationAt

Number

Categorization date.

CategorizationInterval

String

Category assignment interval.

Nested OwnerInfo structure

Field

Value type

Description

displayName

String

Name of the asset owner.

Nested OS structure

Field

Value type

Description

Name

String

Name of the operating system.

BuildNumber

Number

Operating system version.

Nested Software structure

Field

Value type

Description

displayName

String

Software name.

DisplayVersion

String

Software version.

Publisher

String

Software publisher.

InstallDate

String

Installation date.

HasMSIInstaller

TRUE/FALSE string

Indicates whether the software has an MSI installer.

Nested Vulnerability structure

Field

Value type

Description

KasperskyID

String

Vulnerability ID assigned by Kaspersky.

ProductName

String

Software name.

DescriptionURL

String

URL containing the vulnerability description.

RecommendedMajorPatch

String

Recommended update.

RecommendedMinorPatch

String

Recommended update.

SeverityStr

String

Vulnerability severity.

Severity

Number

Vulnerability severity.

CVE

Nested list of strings

CVE vulnerability ID.

ExploitExists

TRUE/FALSE string

Indicates whether an exploit exists.

MalwareExists

TRUE/FALSE string

Indicates whether malware exists.

Nested KICSSystemInfo structure

Field

Value type

Description

Model

String

Device model.

Version

String

Device version.

Vendor

String

Vendor.

Nested KICSRisk structure

Field

Value type

Description

ID

Number

KICS for Networks risk ID.

Name

String

Risk name.

Category

String

Risk type.

Description

String

Risk description.

DescriptionURL

String

Link to risk description.

Severity

Number

Risk severity.

Cvss

Number

CVSS score.

Nested Sources structure

Field

Value type

Description

KSC

Nested [SourceInfo] structure

Asset information received from KSC.

API

Nested [SourceInfo] structure

Asset information received through the REST API.

Manual

Nested [SourceInfo] structure

Manually entered information about the asset.

KICS

Nested [SourceInfo] structure

Asset information received from KICS for Networks.

Nested SourceInfo structure

Field

Value type

Description

MACAddress

Nested list of strings

Asset MAC addresses.

IPAddressInt

Nested list of numbers

IP address in number format.

Owner

Nested [OwnerInfo] structure

Asset owner information.

OS

Nested [OS] structure

Asset operating system information.

displayName

String

Asset name.

IPAddress

Nested list of strings

Asset IP addresses.

FQDN

String

Asset FQDN.

Weight

Number

Asset importance.

Deleted

String with TRUE/FALSE values

Indicator of whether the asset has been marked for deletion from KUMA.

UpdatedAt

Number

Date of last update of the asset.

Nested ProductInfo structure

Field

Value type

Description

ProductVersion

String

Software version.

ProductName

String

Software name.

Nested Hardware structure

Field

Value type

Description

NetCards

Nested [NetCard] structure

List of network cards of the asset.

CPU

Nested [CPU] structure

List of asset processors.

RAM

Nested [RAM] structure

Asset RAM list.

Disk

Nested [Disk] structure

List of asset drives.

Nested NetCard structure

Field

Value type

Description

ID

String

Network card ID.

MACAddresses

Nested list of strings

MAC addresses of the network card.

Name

String

Network card name.

Manufacture

String

Network card manufacturer.

DriverVersion

String

Driver version.

Nested RAM structure

Field

Value type

Description

Frequency

String

RAM frequency.

TotalBytes

Number

Amount of RAM, in bytes.

Nested CPU structure

Field

Value type

Description

ID

String

CPU ID.

Name

String

CPU name.

CoreCount

String

Number of cores.

CoreSpeed

String

Frequency.

Nested Disk structure

Field

Value type

Description

FreeBytes

Number

Available disk space.

TotalBytes

Number

Total disk space.

Page top

[Topic 234819]

User account data model

User account fields can be queried from email templates and during event correlation.

Field

Value type

Description

ID

String

User account ID.

ObjectGUID

String

Active Directory attribute. User account ID in Active Directory.

TenantID

String

Tenant ID.

TenantName

String

Tenant name.

UpdatedAt

Number

Last update of user account.

Domain

String

Domain.

CN

String

Active Directory attribute. User name.

displayName

String

Active Directory attribute. Displayed user name.

This attribute can be used for an event search during correlation.

DistinguishedName

String

Active Directory attribute. LDAP object name.

This attribute can be used for an event search during correlation.

employeeID

String

Active Directory attribute. Employee ID.

Mail

String

Active Directory attribute. User email address.

This attribute can be used for an event search during correlation.

mailNickname

String

Active Directory attribute. Alternate email address.

Mobile

String

Active Directory attribute. Mobile phone number.

ObjectSID

String

Active Directory attribute. Security ID.

SAMAccountName

String

Active Directory attribute. Login.

This attribute can be used for an event search during correlation.

TelephoneNumber

String

Active Directory attribute. Phone number.

UserPrincipalName

String

Active Directory attribute. User principal name (UPN).

This attribute can be used for an event search during correlation.

Archived

TRUE/FALSE string

Indicator that determines whether a user account is obsolete.

MemberOf

List of strings

Active Directory attribute. AD groups that the user belongs to.

This attribute can be used for an event search during correlation.

PreliminarilyArchived

TRUE/FALSE string

Indicator that determines whether a user account should be designated as obsolete.

CreatedAt

Number

User account creation date.

SN

String

Active Directory attribute. Last name of the user.

This attribute can be used for an event search during correlation.

SAMAccountType

String

Active Directory attribute. User account type.

Title

String

Active Directory attribute. Job title of the user.

Division

String

Active Directory attribute. User's division.

Department

String

Active Directory attribute. User's department.

Manager

String

Active Directory attribute. User's supervisor.

Location

String

Active Directory attribute. User's location.

Company

String

Active Directory attribute. User's company.

StreetAddress

String

Active Directory attribute. Company address.

PhysicalDeliveryOfficeName

String

Active Directory attribute. Delivery address.

managedObjects

List of strings

Active Directory attribute. Objects under control of the user.

UserAccountControl

Number

Active Directory attribute. AD account type.

This attribute can be used for an event search during correlation.

WhenCreated

Number

Active Directory attribute. User account creation date.

WhenChanged

Number

Active Directory attribute. User account modification date.

AccountExpires

Number

Active Directory attribute. User account expiration date.

BadPasswordTime

Number

Active Directory attribute. Date of last unsuccessful login attempt.

Page top

[Topic 217865]

Event fields with general information

Every audit event has the event fields described below.

Event field name

Field value

ID

Unique event ID in the form of a UUID.

Timestamp

Event time.

DeviceHostName

The event source host. For audit events, this is the host where the KUMA Core is installed, because the Core is the source of audit events.

DeviceTimeZone

Timezone of the system time of the server hosting the KUMA Core, in ±hh:mm format.

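The ±hh:mm offset can be produced from a UTC offset in minutes with a small helper. This is a hypothetical sketch shown only to illustrate the format, not part of KUMA:

```python
def format_offset(minutes: int) -> str:
    # Render a UTC offset, given in minutes, in the +hh:mm / -hh:mm
    # form used by the DeviceTimeZone field.
    sign = "+" if minutes >= 0 else "-"
    m = abs(minutes)
    return f"{sign}{m // 60:02d}:{m % 60:02d}"
```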
Type

Type of the audit event. For audit events, the value is 4.

TenantID

ID of the main tenant.

DeviceVendor

Kaspersky

DeviceProduct

KUMA

EndTime

Event creation time.

Page top

[Topic 218034]

User was successfully signed in or failed to sign in

Event field name

Field value

DeviceAction

user login

EventOutcome

succeeded or failed—the status depends on the success or failure of the operation.

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in using a proxy, there will be a proxy address.

SourcePort

Port from which the user logged in. If the user logged in using a proxy, there will be a port on the proxy side.

SourceUserName

User login.

SourceUserID

User ID.

Message

Description of the error; appears only if an error occurred during login. Otherwise, the field will be empty.

Page top

[Topic 218028]

User login successfully changed

Event field name

Field value

DeviceAction

user login changed

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in using a proxy, there will be a proxy address.

SourcePort

Port from which the user logged in. If the user logged in using a proxy, there will be a port on the proxy side.

SourceUserName

User login that was used to change data.

SourceUserID

User ID that was used to change data.

DestinationUserName

User login whose data was changed.

DestinationUserID

User ID whose data was changed.

DeviceCustomString1

Current value of the login.

DeviceCustomString1Label

new login

DeviceCustomString2

Value of the login before it was changed.

DeviceCustomString2Label

old login

Page top

[Topic 218030]

User role was successfully changed

Event field name

Field value

DeviceAction

user role changed

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in using a proxy, there will be a proxy address.

SourcePort

Port from which the user logged in. If the user logged in using a proxy, there will be a port on the proxy side.

SourceUserName

User login that was used to change data.

SourceUserID

User ID that was used to change data.

DestinationUserName

User login whose data was changed.

DestinationUserID

User ID whose data was changed.

DeviceCustomString1

Current value of the role.

DeviceCustomString1Label

new role

DeviceCustomString2

Value of the role before it was changed.

DeviceCustomString2Label

old role

Page top

[Topic 217947]

Other data of the user was successfully changed

Event field name

Field value

DeviceAction

user other info changed

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in using a proxy, there will be a proxy address.

SourcePort

Port from which the user logged in. If the user logged in using a proxy, there will be a port on the proxy side.

SourceUserName

User login that was used to change data.

SourceUserID

User ID that was used to change data.

DestinationUserName

User login whose data was changed.

DestinationUserID

User ID whose data was changed.

Page top

[Topic 218032]

User successfully logged out

This event appears only when the user clicks the logout button.

This event does not appear if the user is logged out because the session expired or if the user logs in again from another browser.

Event field name

Field value

DeviceAction

user logout

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in using a proxy, there will be a proxy address.

SourcePort

Port from which the user logged in. If the user logged in using a proxy, there will be a port on the proxy side.

SourceUserName

User login.

SourceUserID

User ID.

Page top

[Topic 218029]

User password was successfully changed

Event field name

Field value

DeviceAction

user password changed

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in using a proxy, there will be a proxy address.

SourcePort

Port from which the user logged in. If the user logged in using a proxy, there will be a port on the proxy side.

SourceUserName

User login that was used to change data.

SourceUserID

User ID that was used to change data.

DestinationUserName

User login whose data was changed.

DestinationUserID

User ID whose data was changed.

Page top

[Topic 218033]

User was successfully created

Event field name

Field value

DeviceAction

user created

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in using a proxy, there will be a proxy address.

SourcePort

Port from which the user logged in. If the user logged in using a proxy, there will be a port on the proxy side.

SourceUserName

User login that was used to create the user account.

SourceUserID

User ID that was used to create the user account.

DestinationUserName

User login for which the user account was created.

DestinationUserID

User ID for which the user account was created.

DeviceCustomString1

Role of the created user.

DeviceCustomString1Label

role

Page top

[Topic 218027]

User access token was successfully changed

Event field name

Field value

DeviceAction

user access token changed

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in using a proxy, there will be a proxy address.

SourcePort

Port from which the user logged in. If the user logged in using a proxy, there will be a port on the proxy side.

SourceUserName

User login that was used to change data.

SourceUserID

User ID that was used to change the data.

DestinationUserName

User login whose data was changed.

DestinationUserID

ID of the user whose data was changed.

Page top

[Topic 217997]

Service was successfully created

Event field name

Field value

DeviceAction

service created

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in using a proxy, there will be a proxy address.

SourcePort

Port from which the user logged in. If the user logged in using a proxy, there will be a port on the proxy side.

SourceUserName

User login that was used to create the service.

SourceUserID

User ID that was used to create the service.

DeviceExternalID

Service ID.

DeviceProcessName

Service name.

DeviceFacility

Service type.

DeviceCustomString5

Tenant ID.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name

Page top

[Topic 217998]

Service was successfully deleted

Event field name

Field value

DeviceAction

service deleted

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in using a proxy, there will be a proxy address.

SourcePort

Port from which the user logged in. If the user logged in using a proxy, there will be a port on the proxy side.

SourceUserName

User login that was used to delete the service.

SourceUserID

User ID that was used to delete the service.

DeviceExternalID

Service ID.

DeviceProcessName

Service name.

DeviceFacility

Service type.

DestinationAddress

The address of the machine that was used to start the service. If the service has never been started before, the field will be empty.

DestinationHostName

The FQDN of the machine that was used to start the service. If the service has never been started before, the field will be empty.

DeviceCustomString5

Tenant ID.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name

Page top

[Topic 218000]

Service was successfully reloaded

Event field name

Field value

DeviceAction

service reloaded

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in using a proxy, there will be a proxy address.

SourcePort

Port from which the user logged in. If the user logged in using a proxy, there will be a port on the proxy side.

SourceUserName

User login that was used to reload the service.

SourceUserID

User ID that was used to reload the service.

DeviceExternalID

Service ID.

DeviceProcessName

Service name.

DeviceFacility

Service type.

DeviceCustomString5

Tenant ID.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name

Page top

[Topic 218001]

Service was successfully restarted

Event field name

Field value

DeviceAction

service restarted

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in using a proxy, there will be a proxy address.

SourcePort

Port from which the user logged in. If the user logged in using a proxy, there will be a port on the proxy side.

SourceUserName

User login that was used to restart the service.

SourceUserID

User ID that was used to restart the service.

DeviceExternalID

Service ID.

DeviceProcessName

Service name.

DeviceFacility

Service type.

DeviceCustomString5

Tenant ID.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name

Page top

[Topic 218002]

Service was successfully started

Event field name

Field value

DeviceAction

service started

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

Address that reported information about service start. It may be a proxy address if the information passed through a proxy.

SourcePort

Port that reported information about service start. It may be a proxy port if the information passed through a proxy.

DeviceExternalID

Service ID.

DeviceProcessName

Service name.

DeviceFacility

Service type.

DestinationAddress

Address of the machine where the service was started.

DestinationHostName

FQDN of the machine where the service was started.

DeviceCustomString5

Tenant ID.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name

Page top

[Topic 217999]

Service was successfully paired

Event field name

Field value

DeviceAction

service paired

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the service pairing request was sent. It may be a proxy address if the request passed through a proxy.

SourcePort

The port from which the service pairing request was sent. It may be a proxy port if the request passed through a proxy.

DeviceExternalID

Service ID.

DeviceProcessName

Service name.

DeviceFacility

Service type.

DeviceCustomString5

Tenant ID.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name

Page top

[Topic 217996]

Service status was changed

Event field name

Field value

DeviceAction

service status changed

DeviceExternalID

Service ID.

DeviceProcessName

Service name.

DeviceFacility

Service type.

DestinationAddress

Address of the machine where the service was started.

DestinationHostName

FQDN of the machine where the service was started.

DeviceCustomString1

green, yellow, or red

DeviceCustomString1Label

new status

DeviceCustomString2

green, yellow, or red

DeviceCustomString2Label

old status

DeviceCustomString5

Tenant ID.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name
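As a usage illustration (not part of the product), a consumer of these audit events could compare the old status in DeviceCustomString2 with the new status in DeviceCustomString1 to flag service degradations. The severity ordering green < yellow < red is an assumption made for this sketch:

```python
# Assumed severity ordering for the three status values in the table.
SEVERITY = {"green": 0, "yellow": 1, "red": 2}

def is_degradation(event: dict) -> bool:
    """Return True if a 'service status changed' audit event
    reports a worse status than before (e.g. green -> red)."""
    if event.get("DeviceAction") != "service status changed":
        return False
    new = event.get("DeviceCustomString1")   # new status
    old = event.get("DeviceCustomString2")   # old status
    return SEVERITY.get(new, 0) > SEVERITY.get(old, 0)

print(is_degradation({
    "DeviceAction": "service status changed",
    "DeviceCustomString1": "red",
    "DeviceCustomString2": "green",
}))  # True
```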

Page top

[Topic 218012]

Storage partition was deleted by user

Event field name

Field value

DeviceAction

partition deleted

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in through a proxy, this is the address of the proxy.

SourcePort

The port from which the user logged in. If the user logged in through a proxy, this is the port on the proxy side.

SourceUserName

User login that was used to delete the partition.

SourceUserID

User ID that was used to delete the partition.

Name

Index name.

Message

deleted by user

Page top

[Topic 218014]

Storage partition was deleted automatically due to expiration

Event field name

Field value

DeviceAction

partition deleted

EventOutcome

succeeded

Name

Index name.

SourceServiceName

scheduler

Message

deleted by retention period settings

Page top

[Topic 217705]

Active list was successfully cleared or operation failed

The event can be assigned the succeeded or failed status.

Since the request to clear an active list is made over a remote connection, a data transfer error may occur at any moment, either before or after the deletion.

This means the active list may in fact be cleared successfully while the event is still assigned the failed status: EventOutcome reflects the TCP/IP connection status of the request, not whether the clearing itself succeeded or failed.
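Because of this, a consumer of these audit events should treat the failed status as "outcome unknown" rather than as a confirmed failure. A minimal sketch of that interpretation, assuming the events arrive as dictionaries keyed by the field names listed below (the function name is illustrative):

```python
def interpret_clear_event(event: dict) -> str:
    """Classify an 'active list cleared' audit event.

    EventOutcome carries the TCP/IP status of the request, so
    "failed" only means the confirmation was not received; the
    list itself may or may not have been cleared.
    """
    if event.get("DeviceAction") != "active list cleared":
        return "not applicable"
    if event.get("EventOutcome") == "succeeded":
        return "cleared"
    # "failed": the transfer error may have occurred before or after
    # the deletion; inspect Message and verify the list contents
    # before retrying.
    return "outcome unknown: " + event.get("Message", "")

print(interpret_clear_event({
    "DeviceAction": "active list cleared",
    "EventOutcome": "failed",
    "Message": "connection reset",
}))
```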

Event field name

Field value

DeviceAction

active list cleared

EventOutcome

succeeded or failed

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in through a proxy, this is the address of the proxy.

SourcePort

The port from which the user logged in. If the user logged in through a proxy, this is the port on the proxy side.

SourceUserName

User login that was used to clear the active list.

SourceUserID

User ID that was used to clear the active list.

DeviceExternalID

Service ID whose active list was cleared.

ExternalID

Active list ID.

Name

Active list name.

Message

If EventOutcome = failed, an error message can be found here.

DeviceCustomString5

Service tenant ID. Some errors prevent adding tenant information to the event.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name

Page top

[Topic 217703]

Active list item was successfully deleted or operation was unsuccessful

The event can be assigned the succeeded or failed status.

Since the request to delete an active list item is made over a remote connection, a data transfer error may occur at any moment, either before or after the deletion.

This means the item may in fact be deleted successfully while the event is still assigned the failed status: EventOutcome reflects the TCP/IP connection status of the request, not whether the deletion itself succeeded or failed.

Event field name

Field value

DeviceAction

active list item deleted

EventOutcome

succeeded or failed

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in through a proxy, this is the address of the proxy.

SourcePort

The port from which the user logged in. If the user logged in through a proxy, this is the port on the proxy side.

SourceUserName

User login that was used to delete the item from the active list.

SourceUserID

User ID that was used to delete the item from the active list.

DeviceExternalID

ID of the service whose active list the item was deleted from.

ExternalID

Active list ID.

Name

Active list name.

DeviceCustomString1

Key name.

DeviceCustomString1Label

key

Message

If EventOutcome = failed, an error message can be found here.

DeviceCustomString5

Service tenant ID. Some errors prevent adding tenant information to the event.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name

Page top

[Topic 217706]

Active list was successfully imported or operation failed

Active list items are imported in parts over a remote connection.

Because the import is performed over a remote connection, a data transfer error can occur at any time, leaving the data partially or fully imported. EventOutcome reflects the connection status of the request, not the import status.

Event field name

Field value

DeviceAction

active list imported

EventOutcome

succeeded or failed

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in through a proxy, this is the address of the proxy.

SourcePort

The port from which the user logged in. If the user logged in through a proxy, this is the port on the proxy side.

SourceUserName

User login that was used to perform the import.

SourceUserID

User ID that was used to perform the import.

DeviceExternalID

Service ID for which an import was performed.

ExternalID

Active list ID.

Name

Active list name.

Message

If EventOutcome = failed, an error message can be found here.

DeviceCustomString5

Service tenant ID. Some errors prevent adding tenant information to the event.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name

Page top

[Topic 217704]

Active list was exported successfully

Event field name

Field value

DeviceAction

active list exported

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in through a proxy, this is the address of the proxy.

SourcePort

The port from which the user logged in. If the user logged in through a proxy, this is the port on the proxy side.

SourceUserName

User login that was used to perform the export.

SourceUserID

User ID that was used to perform the export.

DeviceExternalID

Service ID for which an export was performed.

ExternalID

Active list ID.

Name

Active list name.

DeviceCustomString5

Service tenant ID. Some errors prevent adding tenant information to the event.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name

Page top

[Topic 217968]

Resource was successfully added

Event field name

Field value

DeviceAction

resource added

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in through a proxy, this is the address of the proxy.

SourcePort

The port from which the user logged in. If the user logged in through a proxy, this is the port on the proxy side.

SourceUserName

User login that was used to add the resource.

SourceUserID

User ID that was used to add the resource.

DeviceExternalID

Resource ID.

DeviceProcessName

Resource name.

DeviceFacility

Resource type:

  • activeList
  • agent
  • aggregationRule
  • collector
  • connection
  • connector
  • correlationRule
  • correlator
  • destination
  • dictionary
  • enrichmentRule
  • filter
  • normalizer
  • proxy
  • responseRule
  • storage

DeviceCustomString5

Tenant ID.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name

Page top

[Topic 217969]

Resource was successfully deleted

Event field name

Field value

DeviceAction

resource deleted

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in through a proxy, this is the address of the proxy.

SourcePort

The port from which the user logged in. If the user logged in through a proxy, this is the port on the proxy side.

SourceUserName

User login that was used to delete the resource.

SourceUserID

User ID that was used to delete the resource.

DeviceExternalID

Resource ID.

DeviceProcessName

Resource name.

DeviceFacility

Resource type:

  • activeList
  • agent
  • aggregationRule
  • collector
  • connection
  • connector
  • correlationRule
  • correlator
  • destination
  • dictionary
  • enrichmentRule
  • filter
  • normalizer
  • proxy
  • responseRule
  • storage

DeviceCustomString5

Tenant ID.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name

Page top

[Topic 217970]

Resource was successfully updated

Event field name

Field value

DeviceAction

resource updated

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in through a proxy, this is the address of the proxy.

SourcePort

The port from which the user logged in. If the user logged in through a proxy, this is the port on the proxy side.

SourceUserName

User login that was used to update the resource.

SourceUserID

User ID that was used to update the resource.

DeviceExternalID

Resource ID.

DeviceProcessName

Resource name.

DeviceFacility

Resource type:

  • activeList
  • agent
  • aggregationRule
  • collector
  • connection
  • connector
  • correlationRule
  • correlator
  • destination
  • dictionary
  • enrichmentRule
  • filter
  • normalizer
  • proxy
  • responseRule
  • storage

DeviceCustomString5

Tenant ID.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name

Page top

[Topic 217742]

Asset was successfully created

Event field name

Field value

DeviceAction

asset created

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in through a proxy, this is the address of the proxy.

SourcePort

The port from which the user logged in. If the user logged in through a proxy, this is the port on the proxy side.

SourceUserName

User login that was used to add the asset.

SourceUserID

User ID that was used to add the asset.

DeviceExternalID

Asset ID.

SourceHostName

Asset ID.

Name

Asset name.

DeviceCustomString1

Comma-separated IP addresses of the asset.

DeviceCustomString1Label

addresses

DeviceCustomString5

Tenant ID.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name

Page top

[Topic 217741]

Asset was successfully deleted

Event field name

Field value

DeviceAction

asset deleted

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in through a proxy, this is the address of the proxy.

SourcePort

The port from which the user logged in. If the user logged in through a proxy, this is the port on the proxy side.

SourceUserName

User login that was used to delete the asset.

SourceUserID

User ID that was used to delete the asset.

DeviceExternalID

Asset ID.

SourceHostName

Asset ID.

Name

Asset name.

DeviceCustomString1

Comma-separated IP addresses of the asset.

DeviceCustomString1Label

addresses

DeviceCustomString5

Tenant ID.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name

Page top

[Topic 217740]

Asset category was successfully added

Event field name

Field value

DeviceAction

category created

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in through a proxy, this is the address of the proxy.

SourcePort

The port from which the user logged in. If the user logged in through a proxy, this is the port on the proxy side.

SourceUserName

User login that was used to add the category.

SourceUserID

User ID that was used to add the category.

DeviceExternalID

Category ID.

Name

Category name.

DeviceCustomString5

Tenant ID.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name

Page top

[Topic 217739]

Asset category was deleted successfully

Event field name

Field value

DeviceAction

category deleted

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in through a proxy, this is the address of the proxy.

SourcePort

The port from which the user logged in. If the user logged in through a proxy, this is the port on the proxy side.

SourceUserName

User login that was used to delete the category.

SourceUserID

User ID that was used to delete the category.

DeviceExternalID

Category ID.

Name

Category name.

DeviceCustomString5

Tenant ID.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name

Page top

[Topic 218005]

Settings were updated successfully

Event field name

Field value

DeviceAction

settings updated

EventOutcome

succeeded

SourceTranslatedAddress

This field contains the value of the HTTP header x-real-ip or x-forwarded-for. If these headers are absent, the field will be empty.

SourceAddress

The address from which the user logged in. If the user logged in through a proxy, this is the address of the proxy.

SourcePort

The port from which the user logged in. If the user logged in through a proxy, this is the port on the proxy side.

SourceUserName

User login that was used to update the settings.

SourceUserID

User ID that was used to update the settings.

DeviceFacility

Type of settings.

DeviceCustomString5

Tenant ID.

DeviceCustomString5Label

tenant ID

DeviceCustomString6

Tenant name.

DeviceCustomString6Label

tenant name

Page top

[Topic 217894]

Information about third-party code

Information about third-party code is in the LEGAL_NOTICES file located in the /opt/kaspersky/kuma/LEGAL_NOTICES folder.

Page top

[Topic 221480]

Trademark notices

Registered trademarks and service marks are the property of their respective owners.

AMD is a trademark or registered trademark of Advanced Micro Devices, Inc.

Apache and the Apache feather logo are trademarks of The Apache Software Foundation.

Cisco is a registered trademark or trademark of Cisco Systems, Inc. and/or its affiliates in the United States and in certain other countries.

The Grafana word mark and the Grafana logo are registered trademarks/service marks or trademarks/service marks of Coding Instinct AB in the United States and other countries and are used with Coding Instinct's permission. We are not affiliated with, endorsed by, or sponsored by Coding Instinct or the Grafana community.

Firebird is a registered trademark of the Firebird Foundation.

Google and Chrome are trademarks of Google LLC.

Huawei is a trademark of Huawei Technologies Co., Ltd.

Intel is a trademark of Intel Corporation registered in the United States and/or other countries.

Linux is a trademark of Linus Torvalds in the U.S. and other countries.

Microsoft, Active Directory, Windows, and Windows Server are trademarks of Microsoft Corporation.

CVE is a registered trademark of The MITRE Corporation.

Mozilla and Firefox are trademarks of the Mozilla Foundation in the USA and other countries.

Oracle is a registered trademark of Oracle and/or its affiliates.

Python is a trademark or registered trademark of the Python Software Foundation.

Ansible is a trademark or registered trademark of Red Hat, Inc. or its subsidiaries in the United States and other countries.

ClickHouse is a trademark of YANDEX LLC.

Page top

[Topic 90]

Glossary

Aggregation

Combining several messages of the same type from the event source into a single event.

Cluster

A group of servers on which the KUMA program has been installed and that have been clustered together for centralized management using the program's web interface.

Collector

A KUMA service that receives events from event sources, converts them, and transmits them to other program components for further processing.

Connector

A KUMA resource that ensures transport for receiving data from external systems.

Enrichment

The conversion of the textual representation of an event using dictionaries, constants, calls to the DNS service, and other tools.

Event

An instance of security-related activity of network devices and services that can be seen and recorded. For example, events include violations of the information security policy, the disabling of security measures, the occurrence of an unprecedented situation, etc.

Filter

The set of conditions the program uses to select events for further processing.

KUMA web interface

A KUMA service that provides a user interface to configure and track KUMA operations.

Network port

A TCP or UDP protocol setting that defines the destination of IP data packets transmitted to a host over a network and allows different programs running on the same host to receive data independently of each other. Each program processes the data sent to a specific port (the program is said to listen on that port).

By convention, standard port numbers are assigned to common network protocols (for example, web servers usually receive data over HTTP on TCP port 80), although in general a program can use any protocol on any port. Possible values: 1 to 65535.
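To make the port concept concrete, here is a small Python sketch that asks the operating system for a free ephemeral TCP port by binding to port 0 and confirms the assigned number falls within the 1 to 65535 range:

```python
import socket

# Binding to port 0 asks the OS to assign any free ephemeral port.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    host, port = server.getsockname()
    print(f"listening on {host}:{port}")
    assert 1 <= port <= 65535  # valid port range
```

Another program on the same host can now connect to that specific port without interfering with programs listening on other ports.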

Normalization

A process that formats data received from an event in accordance with the fields of the KUMA event data model. During normalization, certain rules for changing the data may be executed (for example, changing upper case characters to lower case, replacing characters, etc.).
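A toy sketch of the kind of transformation described above. The raw field names, the mapping, and the lowercasing rule are invented for illustration; they are not the KUMA event data model:

```python
# Hypothetical mapping from raw source fields to model field names.
FIELD_MAP = {"src_ip": "SourceAddress", "user": "SourceUserName"}

def normalize(raw: dict) -> dict:
    """Map raw fields onto model field names and apply simple
    value rules (here: lowercase the user name)."""
    event = {}
    for raw_name, model_name in FIELD_MAP.items():
        if raw_name in raw:
            event[model_name] = raw[raw_name]
    if "SourceUserName" in event:
        event["SourceUserName"] = event["SourceUserName"].lower()
    return event

print(normalize({"src_ip": "10.0.0.5", "user": "ADMIN"}))
# {'SourceAddress': '10.0.0.5', 'SourceUserName': 'admin'}
```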

Role

A set of access privileges established to grant the KUMA web interface user the authority to perform tasks.

SELinux (Security-Enhanced Linux)

A system for controlling process access to operating system resources based on the use of security policies.

SIEM

Security Information and Event Management system. A solution for managing information and events in a company's security system.

STARTTLS

Text exchange protocol enhancement that lets you create an encrypted connection (TLS or SSL) directly over an ordinary TCP connection instead of opening a separate port for the encrypted connection.

UserPrincipalName

UserPrincipalName (UPN) is a user name in email address format, such as username@domain.com.

The UPN must match the user's actual email address. In this example, username is the user logon name in the Active Directory domain, and domain.com is the UPN suffix; they are separated by the @ character. The DNS name of the Active Directory domain is used as the default UPN suffix in Active Directory.
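The two parts of a UPN can be split on the @ separator; a minimal sketch (the helper name is illustrative):

```python
def split_upn(upn: str) -> tuple[str, str]:
    """Split a UserPrincipalName into (user logon name, UPN suffix).

    The parts are separated by the last '@' character; raises
    ValueError if the separator or either part is missing.
    """
    name, sep, suffix = upn.rpartition("@")
    if not sep or not name or not suffix:
        raise ValueError(f"not a valid UPN: {upn!r}")
    return name, suffix

print(split_upn("username@domain.com"))  # ('username', 'domain.com')
```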

Page top