Kaspersky SD-WAN

Redundancy of solution components

About redundancy schemes for solution components

Kaspersky SD-WAN supports two deployment scenarios for solution components:

  • In the N+1 deployment scenario, you deploy two nodes of the solution component. If one node fails, the remaining node provides the functionality of the solution component.
  • In the 2N+1 deployment scenario, you deploy multiple nodes of the solution component. One node is the primary node and the rest are secondary nodes. If the primary node fails, one of the secondary nodes takes its place. This redundancy scheme keeps the solution component operational even when multiple failures occur in a row (see the sketch after this list).

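The difference between the two schemes comes down to how many failures a component can survive. The sketch below is a minimal illustration, assuming that a 2N+1 component needs a majority of its nodes (a quorum) to keep or elect a primary node, consistent with the arbiter behavior described later in this section; all names are illustrative.

```python
# Illustrative sketch only; not part of Kaspersky SD-WAN.

def n_plus_1_operational(alive_nodes: int) -> bool:
    """N+1: two nodes are deployed; one surviving node is enough."""
    return alive_nodes >= 1

def two_n_plus_1_operational(alive_nodes: int, total_nodes: int) -> bool:
    """2N+1: a primary can be kept or re-elected while a majority is up."""
    return alive_nodes > total_nodes // 2

# N+1 with 2 nodes survives one failure:
print(n_plus_1_operational(alive_nodes=1))         # True
# 2N+1 with 5 nodes survives two failures in a row, but not three:
print(two_n_plus_1_operational(3, total_nodes=5))  # True
print(two_n_plus_1_operational(2, total_nodes=5))  # False
```
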
The table below lists the solution components and the redundancy schemes that apply to them.

Solution component                              Redundancy scheme
-----------------------------------------------------------------
Database of the Zabbix monitoring system        2N+1
Zabbix server                                   N+1
Frontend part of the Zabbix monitoring system   N+1
Zabbix proxy server                             N+1
MongoDB database                                2N+1
Redis database (Redis replica server and
the Redis Sentinel system)                      2N+1
Controller                                      2N+1
Frontend part of the solution                   N+1
Orchestrator                                    N+1
Virtual Network Function Manager                N+1
Virtual Network Function Manager proxy          N+1

You can specify the number of nodes you want to deploy for each solution component in the configuration file.
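
For illustration, the node counts corresponding to the table above might be expressed as follows. This is a hypothetical sketch only: the key names and the structure are assumptions, not the actual format of the configuration file.

```python
# Hypothetical node counts per component, mirroring the table above.
# The key names and the structure are assumptions, not the real file format.
node_counts = {
    "zabbix_db": 3,       # 2N+1: primary, secondary, arbiter
    "zabbix_server": 2,   # N+1
    "zabbix_www": 2,      # N+1
    "zabbix_proxy": 2,    # N+1
    "mongo": 3,           # 2N+1: the last node becomes the arbiter
    "redis": 3,           # 2N+1: replica servers plus Redis Sentinel
    "controller": 3,      # 2N+1: the last node becomes the arbiter
    "www": 2,             # N+1
    "orchestrator": 2,    # N+1
    "vnfm": 2,            # N+1
    "vnfm_proxy": 2,      # N+1
}
```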

When you configure the deployment settings for the MongoDB database or the controller node in accordance with the 2N+1 deployment scenario, the last node you specify becomes the arbiter node. The arbiter node is linked to other nodes and is used to choose the primary node. A node that has lost contact with the arbiter node enters standby mode. One of the nodes that have retained contact with the arbiter node stays or becomes the primary node. An arbiter node cannot become a primary node and does not store data.
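
A short sketch of the behavior just described: a node that loses contact with the arbiter node enters standby mode, and a node that still reaches the arbiter node stays or becomes the primary node. This models only what this section states; it is not the actual MongoDB or controller election protocol, and all names are made up.

```python
# Illustrative model of arbiter-based primary selection; not real product code.

def elect_primary(data_nodes, reaches_arbiter, current_primary=None):
    """Return (primary, standby_nodes) given arbiter connectivity."""
    candidates = [n for n in data_nodes if reaches_arbiter[n]]
    standby = [n for n in data_nodes if not reaches_arbiter[n]]
    if not candidates:
        return None, standby  # no node reaches the arbiter: no primary
    # The current primary keeps its role if it still reaches the arbiter;
    # otherwise one of the remaining nodes becomes the primary node.
    primary = current_primary if current_primary in candidates else candidates[0]
    return primary, standby

# node1 loses contact with the arbiter, so node2 becomes the primary node:
print(elect_primary(["node1", "node2"],
                    {"node1": False, "node2": True},
                    current_primary="node1"))  # -> ('node2', ['node1'])
```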

Failure scenarios of solution component nodes

The figure below shows a diagram of Kaspersky SD-WAN deployed on three virtual machines in a data center. The diagram uses the following symbols:

  • 'www' is the frontend part of the solution
  • 'orc' is the orchestrator
  • 'mongo' is the MongoDB database
  • 'redis-m' is a Redis replica server
  • 'redis-s' is a Redis Sentinel system
  • 'vnfm-proxy' is the Virtual Network Function Manager proxy
  • 'vnfm' is the Virtual Network Function Manager
  • 'ctl' is the controller and its database
  • 'zabbix-www' is the frontend part of the Zabbix monitoring system
  • 'zabbix-proxy' is the Zabbix proxy server
  • 'zabbix-srv' is the Zabbix server
  • 'zabbix-db' is the database of the Zabbix monitoring system
  • 'syslog' is the Syslog server

Users and CPE devices gain access to the web interface of the orchestrator and the web interface of the Zabbix monitoring system using a virtual IP address. The virtual IP address is assigned to virtual machine 1.
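
This section does not state how the virtual IP address moves between virtual machines. A common approach is a VRRP-style scheme in which the address follows the highest-priority healthy machine; the sketch below assumes that approach, with made-up names.

```python
# Assumed VRRP-style failover of the virtual IP address; illustrative only.

def holder_of_virtual_ip(vms_in_priority_order, is_healthy):
    """Return the virtual machine that should hold the virtual IP address."""
    for vm in vms_in_priority_order:
        if is_healthy[vm]:
            return vm
    return None  # no healthy virtual machine remains

# Virtual machine 1 holds the address until it fails:
health = {"vm1": False, "vm2": True, "vm3": True}
print(holder_of_virtual_ip(["vm1", "vm2", "vm3"], health))  # -> 'vm2'
```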

Figure: Solution deployed on three virtual machines

In this deployment scenario, the following failure modes are possible (see the sketch after this list):

  • Failure of virtual machine 1 or its link.
  • Failure of virtual machine 2 or 3, or its link.
  • Simultaneous failure of virtual machines 1 and 3 or 2 and 3, or their links.
  • Simultaneous failure of virtual machines 1 and 2.
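
These scenarios can be checked against the redundancy schemes described earlier. In the sketch below, the placement of nodes across the three virtual machines is an assumption made for illustration (the actual placement is shown in the figure); the quorum rules follow the N+1 and 2N+1 schemes.

```python
# Illustrative only: assumed node placement and the quorum rule per scheme.
PLACEMENT = {
    "ctl (2N+1)":   {"vm1", "vm2", "vm3"},  # assumption: one node per VM
    "mongo (2N+1)": {"vm1", "vm2", "vm3"},  # assumption: one node per VM
    "orc (N+1)":    {"vm1", "vm2"},         # assumption: nodes on vm1 and vm2
}

def operational(name, nodes, failed_vms):
    alive = len(nodes - failed_vms)
    if "2N+1" in name:
        return alive > len(nodes) // 2  # majority needed to elect a primary
    return alive >= 1                   # N+1: one surviving node is enough

for failed in [{"vm1"}, {"vm2"}, {"vm1", "vm3"}, {"vm1", "vm2"}]:
    status = {n: operational(n, p, failed) for n, p in PLACEMENT.items()}
    print(sorted(failed), status)
```

Under this assumed placement, the failure of a single virtual machine leaves every component operational, while the simultaneous failure of two virtual machines breaks the quorum of the 2N+1 components.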