Contents
- Development for KasperskyOS
- Starting processes
- File systems and network
- Contents of the VFS component
- Creating an IPC channel to VFS
- Building a VFS executable file
- Merging a client and VFS into one executable file
- Overview: arguments and environment variables of VFS
- Mounting a file system at startup
- Using VFS backends to separate file calls and network calls
- Writing a custom VFS backend
- IPC and transport
Overview: Einit and init.yaml
Einit initializing program
At startup, the KasperskyOS kernel finds the executable file named Einit
(initializing program) in the solution image and runs this executable file. The running process has the Einit
class and is normally used to start all other processes that are required when the solution is started.
Generating the C-code of the initializing program
The KasperskyOS Community Edition toolkit includes the einit
tool, which lets you generate the C-code of the initializing program based on the init description (the description file is normally named init.yaml
). The obtained program uses the KasperskyOS API to do the following:
- Statically create and run processes.
- Statically create IPC channels.
The standard way of using the einit
tool is to integrate an einit call into one of the steps of the build script. As a result, the einit
tool uses the init.yaml
file to generate the einit.c
file containing the code of the initializing program. In one of the following steps of the build script, you must compile the einit.c
file into the executable file of Einit
and include it into the solution image.
You are not required to create static description files for the initializing program. These files are included in the KasperskyOS Community Edition toolkit and are automatically connected during a solution build. However, the Einit
process class must be described in the security.psl
file.
Syntax of init.yaml
An init description contains data in YAML format. This data identifies the following:
- Processes that are started when KasperskyOS is loaded.
- IPC channels that are used by processes to interact with each other.
This data consists of a dictionary with the entities
key, which contains a list of process dictionaries. Process dictionary keys are presented in the table below.
Process dictionary keys in an init description

Key | Required | Value
---|---|---
name | Yes | Process security class.
task | No | Process name. If this name is not specified, the security class name will be used. Each process must have a unique name. You can start multiple processes of the same security class if they have different names.
path | No | Name of the executable file in ROMFS (in the solution image) from which the process will be started. If this name is not specified, the security class name (without prefixes and dots) will be used. You can start multiple processes from the same executable file.
connections | No | List of process IPC channel dictionaries. This list defines the statically created IPC channels whose client handles will be owned by the process. The list is empty by default. (In addition to statically created IPC channels, processes can also use dynamically created IPC channels.)
args | No | List of arguments passed to the process (the main() function).
env | No | Dictionary of environment variables passed to the process. The keys in this dictionary are the names of variables mapped to the passed values. The maximum size of a value is 1024 bytes.
Process IPC channel dictionary keys are presented in the table below.
IPC channel dictionary keys in an init description

Key | Required | Value
---|---|---
id | Yes | IPC channel name, which can be defined as a specific value or as a link such as {var: <constant name>, include: <header file>}.
target | Yes | Name of the process that will own the server handle of the IPC channel.
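Taken together, the keys from both tables can appear in a single init description. The sketch below is illustrative; the class, file, channel and variable names are invented for this example:

```yaml
entities:
  # Process of the Client security class, started under the name MyClient
  # from the executable file client in ROMFS.
  - name: Client
    task: MyClient
    path: client
    connections:
      # The client handle of this statically created channel is owned by MyClient;
      # the server handle is owned by the Server process.
      - target: Server
        id: server_connection
    args:
      - -v
    env:
      LOG_LEVEL: 1
  - name: Server
```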
Example init descriptions
This section contains init descriptions that demonstrate various aspects of starting processes.
Examples in KasperskyOS Community Edition may utilize a macro-containing init description format (init.yaml.in
).
The file containing an init description is usually named init.yaml
, but it can have any name.
Connecting and starting a client process and server process
In the next example, two processes will be started: one process of the Client
class and one process of the Server
class. The names of the processes are not specified, so they will match the names of their respective process classes. The names of the executable files are not specified either, so they will also match the names of their respective classes. The processes will be connected by an IPC channel named server_connection
.
init.yaml
entities:
- name: Client
connections:
- target: Server
id: server_connection
- name: Server
Specifying the executable file to run
The next example will run a Client
-class process from the cl
executable file, a ClientServer
-class process from the csr
executable file, and a MainServer
-class process from the msr
executable file. The names of the processes are not specified, so they will match the names of their respective process classes.
init.yaml
entities:
- name: Client
path: cl
- name: ClientServer
path: csr
- name: MainServer
path: msr
Starting two processes from the same executable file
The next example will run three processes: a Client
-class process from the default executable file (Client
), and processes of the MainServer
and BkServer
classes from the srv
executable file. The names of the processes are not specified, so they will match the names of their respective process classes.
init.yaml
entities:
- name: Client
- name: MainServer
path: srv
- name: BkServer
path: srv
Starting two processes of the same class
The next example will run one Client
-class process (named Client
by default) and two Server
-class processes named UserServer
and PrivilegedServer
. The client process is linked to the server processes through IPC channels named server_connection_us
and server_connection_ps
, respectively. The names of the executable files are not specified, so they will match the names of their respective process classes.
init.yaml
entities:
- name: Client
connections:
- id: server_connection_us
target: UserServer
- id: server_connection_ps
target: PrivilegedServer
- task: UserServer
name: Server
- task: PrivilegedServer
name: Server
Passing environment variables and arguments using the main() function
The next example will run two processes: one VfsFirst
-class process (named VfsFirst
by default) and one VfsSecond
-class process (named VfsSecond
by default). At startup, the first process receives the -f /etc/fstab
argument and the following environment variables: ROOTFS
with the value ramdisk0,0 / ext2 0 and UNMAP_ROMFS
with the value 1. At startup, the second process receives the -l devfs /dev devfs 0
argument.
The names of the executable files are not specified, so they will match the names of their respective process classes.
If the Env program is used in a solution, the arguments and environment variables passed through this program redefine the values that were defined through init.yaml
.
init.yaml
entities:
- name: VfsFirst
args:
- -f
- /etc/fstab
env:
ROOTFS: ramdisk0,0 / ext2 0
UNMAP_ROMFS: 1
- name: VfsSecond
args:
- -l
- devfs /dev devfs 0
Starting a process using the KasperskyOS API
This example uses the EntityInitEx()
and EntityRun()
functions to run an executable file from the solution image.
Below is the code of the GpMgrOpenSession()
function, which starts the server process, connects it to the client process and initializes IPC transport. The executable file of the new process must be contained in the ROMFS storage of the solution.
/**
* The "classname" parameter defines the class name of the started process,
* the "server" parameter defines a unique name for the process, and the "service" parameter contains the service name
* that is used when dynamically creating a channel.
* Output parameter "transport" contains the initialized transport
* if an IPC channel to the client was successfully created.
*/
Retcode GpMgrOpenSession(const char *classname, const char *server,
const char *service, NkKosTransport *transport)
{
Retcode rc;
Entity *e;
EntityInfo tae_info;
Handle endpoint;
rtl_uint32_t riid;
int count = CONNECT_RETRY;
/* Initializes the process description structure. */
rtl_memset(&tae_info, 0, sizeof(tae_info));
tae_info.eiid = classname;
tae_info.args[0] = server;
tae_info.args[1] = service;
/* Creates a process named "server" with the tae_info description.
* The third parameter is equal to RTL_NULL, therefore the name of the started
* binary file matches the class name from the tae_info description.
* The created process is in the stopped state. */
if ((e = EntityInitEx(&tae_info, server, RTL_NULL)) == NK_NULL)
{
rtl_printf("Cannot init entity '%s'\n", tae_info.eiid);
return rcFail;
}
/* Starts the process. */
if ((rc = EntityRun(e)) != rcOk)
{
rtl_printf("Cannot launch entity %" RTL_PRId32 "\n", rc);
EntityFree(e);
return rc;
}
/* Dynamically creates an IPC channel. */
    while ((rc = KnCmConnect(server, service, INFINITE_TIMEOUT, &endpoint, &riid)) ==
               rcResourceNotFound && count--)
{
KnSleep(CONNECT_DELAY);
}
if (rc != rcOk)
{
rtl_printf("Cannot connect to server %" RTL_PRId32 "\n", rc);
return rc;
}
/* Initializes IPC transport. */
NkKosTransport_Init(transport, endpoint, NK_NULL, 0);
...
return rcOk;
}
To enable a process to start other processes, the solution security policy must allow this process to use the following core endpoints: Handle
, Task
and VMM
(their descriptions are in the directory kl\core\
).
Overview: Env program
The Env
program is intended for passing arguments and environment variables to started processes. When started, each process automatically sends a request to the Env
process and receives the necessary data.
A process query to Env
redefines the arguments and environment variables received through Einit
.
To use the Env
program in your solution, you need to do the following:
1. Develop the code of the Env
program by using macros from env/env.h
.
2. Build the binary file of the Env
program by linking it to the env_server
library.
3. In the init description, indicate that the Env
process must be started and connected to the selected processes (Env
acts as a server in this case). The channel name is defined by the ENV_SERVICE_NAME
macro declared in the env/env.h
file.
4. Include the Env
binary file in the solution image.
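Step 2 above could be sketched in CMake as follows. This is an assumption-based sketch: the env_SERVER_LIB variable name is hypothetical, chosen by analogy with the VFS library variables (vfs_SERVER_LIB and others) used later in this document:

```cmake
project (env)
include (platform/nk)

# Set compile flags
project_header_default ("STANDARD_GNU_11:YES" "STRICT_WARNINGS:NO")

add_executable (Env "src/env.c")

# Link the Env program against the env_server library
# (the ${env_SERVER_LIB} variable name is assumed, not taken from the SDK).
target_link_libraries (Env ${env_SERVER_LIB})
```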
Env program code
The code of the Env
program utilizes the following macros and functions declared in the env/env.h
file:
- ENV_REGISTER_ARGS(name, argarr) – arguments from the argarr array are passed to the process named name (the maximum size of one element is 256 bytes).
- ENV_REGISTER_VARS(name, envarr) – environment variables from the envarr array are passed to the process named name (the maximum size of one element is 256 bytes).
- ENV_REGISTER_PROGRAM_ENVIRONMENT(name, argarr, envarr) – arguments and environment variables are passed to the process named name.
- envServerRun() – initializes the server part of the Env program so that it can respond to requests.
Passing environment variables and arguments using Env
Example of passing arguments at process startup
Below is the code of the Env
program. When the process named NetVfs
starts, the program passes three arguments to this process: NetVfs
, -l devfs /dev devfs 0
and -l romfs /etc romfs 0
:
env.c
int main(int argc, char** argv)
{
const char* NetVfsArgs[] = {
"-l", "devfs /dev devfs 0",
"-l", "romfs /etc romfs 0"
};
ENV_REGISTER_ARGS("NetVfs", NetVfsArgs);
envServerRun();
return EXIT_SUCCESS;
}
Example of passing environment variables at process startup
Below is the code of the Env
program. When the process named Vfs3
starts, the program passes two environment variables to this process: ROOTFS=ramdisk0,0 / ext2 0
and UNMAP_ROMFS=1
:
env.c
int main(int argc, char** argv)
{
const char* Vfs3Envs[] = {
"ROOTFS=ramdisk0,0 / ext2 0",
"UNMAP_ROMFS=1"
};
ENV_REGISTER_VARS("Vfs3", Vfs3Envs);
envServerRun();
return EXIT_SUCCESS;
}
Contents of the VFS component
The VFS component contains a set of executable files, libraries and description files that let you use file systems and/or a network stack combined into a separate Virtual File System (VFS) process. If necessary, you can build your own VFS implementations.
VFS libraries
The vfs
CMake package contains the following libraries:
- vfs_fs – contains the devfs, ramfs and romfs implementations, and lets you add implementations of other file systems to VFS.
- vfs_net – contains the devfs implementation and the network stack.
- vfs_imp – combines the vfs_fs and vfs_net functionality.
- vfs_remote – client transport library that converts local calls into IPC requests to VFS and receives IPC responses.
- vfs_server – server transport library of VFS that receives IPC requests, converts them into local calls, and sends IPC responses.
- vfs_local – used for statically linking the client to VFS libraries.
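Since these libraries ship as the vfs CMake package, a solution build script would typically locate them with a find_package call before linking. A minimal sketch, assuming the package follows standard CMake conventions:

```cmake
# Locate the vfs CMake package from KasperskyOS Community Edition.
# It defines the library variables (vfs_SERVER_LIB, vfs_FS_LIB, vfs_LOCAL_LIB, ...)
# used in the linking examples in this document.
find_package (vfs REQUIRED)
```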
VFS executable files
The precompiled_vfs
CMake package contains the following executable files:
VfsRamFs
VfsSdCardFs
VfsNet
The VfsRamFs
and VfsSdCardFs
executable files include the vfs_server
, vfs_fs
, vfat
and lwext4
libraries. The VfsNet
executable file includes the vfs_server
, vfs_imp
and dnet_imp
libraries.
Each of these executable files has its own default values for arguments and environment variables.
If necessary, you can independently build a VFS executable file with the necessary functionality.
VFS description files
The directory /opt/KasperskyOS-Community-Edition-<version>/sysroot-aarch64-kos/include/kl/
contains the following VFS files:
- VfsRamFs.edl, VfsSdCardFs.edl, VfsNet.edl and VfsEntity.edl, and the header files generated from them, including the transport code.
- Vfs.cdl and the generated Vfs.cdl.h.
- Vfs*.idl and the header files generated from them, including the transport code.
Creating an IPC channel to VFS
Let's examine a Client
program using file systems and Berkeley sockets. To handle its calls, we start one VFS process (named VfsFsnet
). Network calls and file calls will be sent to this process. This approach is utilized when there is no need to separate file data streams from network data streams.
To ensure correct interaction between the Client
and VfsFsnet
processes, the name of the IPC channel between them must be defined by the _VFS_CONNECTION_ID
macro declared in the vfs/defs.h
file.
Below is a fragment of an init description for connecting the Client
and VfsFsnet
processes.
init.yaml
- name: Client
connections:
- target: VfsFsnet
id: {var: _VFS_CONNECTION_ID, include: vfs/defs.h}
- name: VfsFsnet
Building a VFS executable file
When building a VFS executable file, you can include whatever specific functionality is required in this file, such as:
- Implementation of a specific file system
- Network stack
- Network driver
For example, you will need to build a "file version" and a "network version" of VFS to separate file calls from network calls. In some cases, you will need to include a network stack and file systems in the VFS ("full version" of VFS).
Building a "file version" of VFS
Let's examine a VFS program containing only an implementation of the lwext4 file system without a network stack. To build this executable file, the file containing the main()
function must be linked to the vfs_server
, vfs_fs
and lwext4
libraries:
CMakeLists.txt
project (vfsfs)
include (platform/nk)
# Set compile flags
project_header_default ("STANDARD_GNU_11:YES" "STRICT_WARNINGS:NO")
add_executable (VfsFs "src/vfs.c")
# Linking with VFS libraries
target_link_libraries (VfsFs
${vfs_SERVER_LIB}
${LWEXT4_LIB}
${vfs_FS_LIB})
# Prepare VFS to connect to the ramdisk driver process
set_target_properties (VfsFs PROPERTIES ${blkdev_ENTITY}_REPLACEMENT ${ramdisk_ENTITY})
A block device driver cannot be linked to VFS and therefore must also be run as a separate process.
Interaction between three processes: client, "file version" of VFS, and block device driver.
Building a "network version" of VFS together with a network driver
Let's examine a VFS program containing a network stack with a driver but without implementations of files systems. To build this executable file, the file containing the main()
function must be linked to the vfs_server
, vfs_implementation
and dnet_implementation
libraries.
CMakeLists.txt
project (vfsnet)
include (platform/nk)
# Set compile flags
project_header_default ("STANDARD_GNU_11:YES" "STRICT_WARNINGS:NO")
add_executable (VfsNet "src/vfs.c")
# Linking with VFS libraries
target_link_libraries (VfsNet
${vfs_SERVER_LIB}
${vfs_IMPLEMENTATION_LIB}
${dnet_IMPLEMENTATION_LIB})
# Disconnect the block device driver
set_target_properties (VfsNet PROPERTIES ${blkdev_ENTITY}_REPLACEMENT "")
The dnet_implementation
library already includes a network driver, therefore it is not necessary to start a separate driver process.
Interaction between the Client process and the process of the "network version" of VFS
Building a "network version" of VFS with a separate network driver
Another option is to build the "network version" of VFS without a network driver. The network driver will need to be started as a separate process. Interaction with the driver occurs via IPC using the dnet_client
library.
In this case, the file containing the main()
function must be linked to the vfs_server
, vfs_implementation
and dnet_client
libraries.
CMakeLists.txt
project (vfsnet)
include (platform/nk)
# Set compile flags
project_header_default ("STANDARD_GNU_11:YES" "STRICT_WARNINGS:NO")
add_executable (VfsNet "src/vfs.c")
# Linking with VFS libraries
target_link_libraries (VfsNet
${vfs_SERVER_LIB}
${vfs_IMPLEMENTATION_LIB}
${dnet_CLIENT_LIB})
# Disconnect the block device driver
set_target_properties (VfsNet PROPERTIES ${blkdev_ENTITY}_REPLACEMENT "")
Interaction between three processes: client, "network version" of VFS, and network driver.
Building a "full version" of VFS
If the VFS needs to include a network stack and implementations of file systems, the build should use the vfs_server
library, vfs_implementation
library, dnet_implementation
library (or dnet_client
library for a separate network driver), and the libraries for implementing file systems.
Merging a client and VFS into one executable file
Let's examine a Client
program using Berkeley sockets. Calls made by the Client
must be sent to VFS. The normal path consists of starting a separate VFS process and creating an IPC channel. Alternatively, you can integrate VFS functionality directly into the Client
executable file. To do so, when building the Client
executable file, you need to link it to the vfs_local
library that will receive calls, and link it to the implementation libraries vfs_implementation
and dnet_implementation
.
Local linking with VFS is convenient during debugging. In addition, network calls can be handled much faster because IPC overhead is eliminated. Nevertheless, isolating the VFS in a separate process and interacting with it via IPC is always recommended as the more secure approach.
Below is a build script for the Client
executable file.
CMakeLists.txt
project (client)
include (platform/nk)
# Set compile flags
project_header_default ("STANDARD_GNU_11:YES" "STRICT_WARNINGS:NO")
# Generates the Client.edl.h file
nk_build_edl_files (client_edl_files NK_MODULE "client" EDL "${CMAKE_SOURCE_DIR}/resources/edl/Client.edl")
add_executable (Client "src/client.c")
add_dependencies (Client client_edl_files)
# Linking with VFS libraries
target_link_libraries (Client ${vfs_LOCAL_LIB} ${vfs_IMPLEMENTATION_LIB} ${dnet_IMPLEMENTATION_LIB})
If the Client
uses file systems, it must also be linked to the vfs_fs
library and to the implementation of the utilized file system in addition to its linking to vfs_local
. You also need to add a block device driver to the solution.
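For example, if the Client uses the lwext4 file system, the linking step might look like the sketch below. It reuses the library variables and the ramdisk replacement property from the "file version" example earlier; treat it as a sketch rather than a definitive build script:

```cmake
# Local linking: the client handles its file calls itself, without a VFS process.
target_link_libraries (Client
  ${vfs_LOCAL_LIB}
  ${vfs_FS_LIB}
  ${LWEXT4_LIB})

# A block device driver still runs as a separate process;
# prepare the client to connect to the ramdisk driver.
set_target_properties (Client PROPERTIES ${blkdev_ENTITY}_REPLACEMENT ${ramdisk_ENTITY})
```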
Overview: arguments and environment variables of VFS
VFS arguments
-l <entry in fstab format>

The -l argument lets you mount a file system.

-f <path to fstab file>

The -f argument lets you pass a file containing entries in fstab format for mounting file systems. The file will be searched for in the ROMFS storage. If the UNMAP_ROMFS variable is defined, the file will instead be searched for in the file system mounted using the ROOTFS variable.
VFS environment variables

UNMAP_ROMFS

If the UNMAP_ROMFS variable is defined, the ROMFS storage will be deleted. This helps conserve memory and changes the behavior of the -f argument.

ROOTFS=<entry in fstab format>

The ROOTFS variable lets you mount a file system to the root directory. In combination with the UNMAP_ROMFS variable and the -f argument, it lets you search for the fstab file in the mounted file system instead of in the ROMFS storage.

VFS_CLIENT_MAX_THREADS

The VFS_CLIENT_MAX_THREADS environment variable lets you redefine the SDK configuration parameter VFS_CLIENT_MAX_THREADS during VFS startup.

_VFS_NETWORK_BACKEND=<backend name>:<name of the IPC channel to VFS>

The _VFS_NETWORK_BACKEND variable defines the backend used for network calls. You can specify the name of a standard backend (client, server or local) or the name of a custom backend. If the local backend is used, the name of the IPC channel is not specified (_VFS_NETWORK_BACKEND=local:). You can specify two or more IPC channels by separating them with commas.

_VFS_FILESYSTEM_BACKEND=<backend name>:<name of the IPC channel to VFS>

The _VFS_FILESYSTEM_BACKEND variable defines the backend used for file calls. The backend name and the name of the IPC channel to VFS are defined in the same way as for the _VFS_NETWORK_BACKEND variable.
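For instance, a client that reaches two VFS processes over IPC channels named VFS1 and VFS2 (the channel names are illustrative) could define:

```
_VFS_NETWORK_BACKEND=client:VFS1,VFS2
```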
Default values
For the VfsRamFs
executable file:
ROOTFS = ramdisk0,0 / ext4 0
_VFS_FILESYSTEM_BACKEND = server:kl.VfsRamFs
For the VfsSdCardFs
executable file:
ROOTFS = mmc0,0 / fat32 0
_VFS_FILESYSTEM_BACKEND = server:kl.VfsSdCardFs
-l nodev /tmp ramfs 0
-l nodev /var ramfs 0
For the VfsNet
executable file:
_VFS_NETWORK_BACKEND = server:kl.VfsNet
_VFS_FILESYSTEM_BACKEND = server:kl.VfsNet
-l devfs /dev devfs 0
Mounting a file system at startup
When the VFS process starts, only the RAMFS file system is mounted to the root directory by default. If you need to mount other file systems, this can be done not only by using the mount()
call after the VFS starts but can also be done immediately when the VFS process starts by passing the necessary arguments and environment variables to it.
Let's examine three examples of mounting file systems at VFS startup. The Env
program is used to pass arguments and environment variables to the VFS process.
Mounting with the -l argument
A simple way to mount a file system is to pass the -l <entry in fstab format>
argument to the VFS process.
In this example, the devfs and romfs file systems will be mounted when the process named Vfs1
is started.
env.c
int main(int argc, char** argv)
{
const char* Vfs1Args[] = {
"-l", "devfs /dev devfs 0",
"-l", "romfs /etc romfs 0"
};
ENV_REGISTER_ARGS("Vfs1", Vfs1Args);
envServerRun();
return EXIT_SUCCESS;
}
Mounting with fstab from ROMFS
If an fstab file is added when building a solution, the file will be available through the ROMFS storage after startup. It can be used for mounting by passing the -f <path to fstab file>
argument to the VFS process.
In this example, the file systems defined via the fstab
file that was added during the solution build will be mounted when the process named Vfs2
is started.
env.c
int main(int argc, char** argv)
{
const char* Vfs2Args[] = { "-f", "fstab" };
ENV_REGISTER_ARGS("Vfs2", Vfs2Args);
envServerRun();
return EXIT_SUCCESS;
}
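The fstab file itself contains one entry per line, in the same <device> <mount point> <file system> <flags> form accepted by the -l argument. The entries below are purely illustrative:

```
ramdisk0,0 /data ext2 0
nodev /tmp ramfs 0
```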
Mounting with an external fstab
Let's assume that the fstab file is located on a drive and not in the ROMFS image of the solution. To use it for mounting, you need to pass the following arguments and environment variables to VFS:
- ROOTFS. This variable lets you mount the file system containing the fstab file into the root directory.
- UNMAP_ROMFS. If this variable is defined, the ROMFS storage is deleted. As a result, the fstab file will be sought in the file system mounted using the ROOTFS variable.
- -f. This argument is used to define the path to the fstab file.
In the next example, the ext2 file system containing the /etc/fstab
file used for mounting additional file systems will be mounted to the root directory when the process named Vfs3
starts. The ROMFS storage will be deleted.
env.c
int main(int argc, char** argv)
{
const char* Vfs3Args[] = { "-f", "/etc/fstab" };
const char* Vfs3Envs[] = {
"ROOTFS=ramdisk0,0 / ext2 0",
"UNMAP_ROMFS=1"
};
ENV_REGISTER_PROGRAM_ENVIRONMENT("Vfs3", Vfs3Args, Vfs3Envs);
envServerRun();
return EXIT_SUCCESS;
}
Using VFS backends to separate file calls and network calls
This example shows a secure development pattern that separates network data streams from file data streams.
Let's examine a Client
program using file systems and Berkeley sockets. To handle its calls, we will start not one but two separate VFS processes from the VfsFirst
and VfsSecond
executable files. We will use environment variables to assign the file backends to work via the channel to VfsFirst
and assign the network backends to work via the channel to VfsSecond
. We will use the standard backends client and server. This way, we will redirect the file calls of the Client
to VfsFirst
and redirect the network calls to VfsSecond
. To pass the environment variables to processes, we will add the Env
program to the solution.
The init description of the solution is provided below. The Client
process will be connected to the VfsFirst
and VfsSecond
processes, and each of the three processes will be connected to the Env
process. Please note that the name of the IPC channel to the Env
process is defined by using the ENV_SERVICE_NAME
variable.
init.yaml
entities:
- name: Env
- name: Client
connections:
- target: Env
id: {var: ENV_SERVICE_NAME, include: env/env.h}
- target: VfsFirst
id: VFS1
- target: VfsSecond
id: VFS2
- name: VfsFirst
connections:
- target: Env
id: {var: ENV_SERVICE_NAME, include: env/env.h}
- name: VfsSecond
connections:
- target: Env
id: {var: ENV_SERVICE_NAME, include: env/env.h}
To send all file calls to VfsFirst
, we define the value of the _VFS_FILESYSTEM_BACKEND
environment variable as follows:
- For VfsFirst: _VFS_FILESYSTEM_BACKEND=server:<name of the IPC channel to VfsFirst>
- For Client: _VFS_FILESYSTEM_BACKEND=client:<name of the IPC channel to VfsFirst>
To send network calls to VfsSecond
, we use the equivalent _VFS_NETWORK_BACKEND
environment variable:
- We define the following for VfsSecond: _VFS_NETWORK_BACKEND=server:<name of the IPC channel to VfsSecond>
- We define the following for Client: _VFS_NETWORK_BACKEND=client:<name of the IPC channel to VfsSecond>
We define the value of environment variables through the Env
program, which is presented below.
env.c
int main(void)
{
const char* vfs_first_envs[] = { "_VFS_FILESYSTEM_BACKEND=server:VFS1" };
ENV_REGISTER_VARS("VfsFirst", vfs_first_envs);
const char* vfs_second_envs[] = { "_VFS_NETWORK_BACKEND=server:VFS2" };
ENV_REGISTER_VARS("VfsSecond", vfs_second_envs);
const char* client_envs[] = { "_VFS_FILESYSTEM_BACKEND=client:VFS1", "_VFS_NETWORK_BACKEND=client:VFS2" };
ENV_REGISTER_VARS("Client", client_envs);
envServerRun();
return EXIT_SUCCESS;
}
Writing a custom VFS backend
This example shows how to change the logic for handling file calls using a special VFS backend.
Let's examine a solution that includes the Client
, VfsFirst
and VfsSecond
processes. Let's assume that the Client
process is connected to VfsFirst
and VfsSecond
using IPC channels.
Our task is to ensure that queries from the Client
process to the fat32 file system are handled by the VfsFirst
process, and queries to the ext4 file system are handled by the VfsSecond
process. To accomplish this task, we can use the VFS backend mechanism and will not even need to change the code of the Client
program.
We will write a custom backend named custom_client
, which will send calls via the VFS1
or VFS2
channel depending on whether or not the file path begins with /mnt1. To send calls, custom_client
will use the standard client backends. In other words, it will act as a proxy backend.
We use the -l argument to mount fat32 to the /mnt1 directory for the VfsFirst
process and mount ext4 to /mnt2 for the VfsSecond
process. (It is assumed that VfsFirst
contains a fat32 implementation and VfsSecond
contains an ext4 implementation.) We use the _VFS_FILESYSTEM_BACKEND
environment variable to define the backends (custom_client and server) and IPC channels (VFS1 and VFS2) to be used by the processes.
Then we use the init description to define the names of the IPC channels: VFS1 and VFS2.
This is examined in more detail below:
- Code of the custom_client backend.
- Linking of the Client program and the custom_client backend.
- Env program code.
- Init description.
Writing a custom_client backend
This file contains an implementation of the proxy custom backend that relays calls to one of the two standard client backends. The backend selection logic depends on the utilized path or on the file handle and is managed by additional data structures.
backend.c
/* Code for managing file handles. */
struct entry
{
Handle handle;
bool is_vfat;
};
struct fd_array
{
struct entry entries[MAX_FDS];
int pos;
pthread_rwlock_t lock;
};
struct fd_array fds = { .pos = 0, .lock = PTHREAD_RWLOCK_INITIALIZER };
int insert_entry(Handle fd, bool is_vfat)
{
pthread_rwlock_wrlock(&fds.lock);
if (fds.pos == MAX_FDS)
{
pthread_rwlock_unlock(&fds.lock);
return -1;
}
fds.entries[fds.pos].handle = fd;
fds.entries[fds.pos].is_vfat = is_vfat;
fds.pos++;
pthread_rwlock_unlock(&fds.lock);
return 0;
}
struct entry *find_entry(Handle fd)
{
pthread_rwlock_rdlock(&fds.lock);
for (int i = 0; i < fds.pos; i++)
{
if (fds.entries[i].handle == fd)
{
pthread_rwlock_unlock(&fds.lock);
return &fds.entries[i];
}
}
pthread_rwlock_unlock(&fds.lock);
return NULL;
}
/* Custom backend structure. */
struct context
{
struct vfs wrapper;
pthread_rwlock_t lock;
struct vfs *vfs_vfat;
struct vfs *vfs_ext4;
};
struct context ctx =
{
.wrapper =
{
.dtor = _vfs_backend_dtor,
.disconnect_all_clients = _disconnect_all_clients,
.getstdin = _getstdin,
.getstdout = _getstdout,
.getstderr = _getstderr,
.open = _open,
.read = _read,
.write = _write,
.close = _close,
}
};
/* Implementation of custom backend methods. */
static bool is_vfs_vfat_path(const char *path)
{
    static const char vfat_path[] = "/mnt1";
    /* Compare only the path characters, excluding the terminating null. */
    return memcmp(vfat_path, path, sizeof(vfat_path) - 1) == 0;
}
static void _vfs_backend_dtor(struct vfs *vfs)
{
ctx.vfs_vfat->dtor(ctx.vfs_vfat);
ctx.vfs_ext4->dtor(ctx.vfs_ext4);
}
static void _disconnect_all_clients(struct vfs *self, int *error)
{
(void)self;
(void)error;
ctx.vfs_vfat->disconnect_all_clients(ctx.vfs_vfat, error);
ctx.vfs_ext4->disconnect_all_clients(ctx.vfs_ext4, error);
}
static Handle _getstdin(struct vfs *self, int *error)
{
(void)self;
Handle handle = ctx.vfs_vfat->getstdin(ctx.vfs_vfat, error);
if (handle != INVALID_HANDLE)
{
if (insert_entry(handle, true))
{
*error = ENOMEM;
return INVALID_HANDLE;
}
}
return handle;
}
static Handle _getstdout(struct vfs *self, int *error)
{
(void)self;
Handle handle = ctx.vfs_vfat->getstdout(ctx.vfs_vfat, error);
if (handle != INVALID_HANDLE)
{
if (insert_entry(handle, true))
{
*error = ENOMEM;
return INVALID_HANDLE;
}
}
return handle;
}
static Handle _getstderr(struct vfs *self, int *error)
{
(void)self;
Handle handle = ctx.vfs_vfat->getstderr(ctx.vfs_vfat, error);
if (handle != INVALID_HANDLE)
{
if (insert_entry(handle, true))
{
*error = ENOMEM;
return INVALID_HANDLE;
}
}
return handle;
}
static Handle _open(struct vfs *self, const char *path, int oflag, mode_t mode, int *error)
{
(void)self;
Handle handle;
bool is_vfat = false;
if (is_vfs_vfat_path(path))
{
handle = ctx.vfs_vfat->open(ctx.vfs_vfat, path, oflag, mode, error);
is_vfat = true;
}
else
handle = ctx.vfs_ext4->open(ctx.vfs_ext4, path, oflag, mode, error);
if (handle == INVALID_HANDLE)
return INVALID_HANDLE;
if (insert_entry(handle, is_vfat))
{
if (is_vfat)
ctx.vfs_vfat->close(ctx.vfs_vfat, handle, error);
*error = ENOMEM;
return INVALID_HANDLE;
}
return handle;
}
static ssize_t _read(struct vfs *self, Handle fd, void *buf, size_t count, bool *nodata, int *error)
{
(void)self;
struct entry *found_entry = find_entry(fd);
if (found_entry != NULL && found_entry->is_vfat)
return ctx.vfs_vfat->read(ctx.vfs_vfat, fd, buf, count, nodata, error);
return ctx.vfs_ext4->read(ctx.vfs_ext4, fd, buf, count, nodata, error);
}
static ssize_t _write(struct vfs *self, Handle fd, const void *buf, size_t count, int *error)
{
(void)self;
struct entry *found_entry = find_entry(fd);
if (found_entry != NULL && found_entry->is_vfat)
return ctx.vfs_vfat->write(ctx.vfs_vfat, fd, buf, count, error);
return ctx.vfs_ext4->write(ctx.vfs_ext4, fd, buf, count, error);
}
static int _close(struct vfs *self, Handle fd, int *error)
{
(void)self;
struct entry *found_entry = find_entry(fd);
if (found_entry != NULL && found_entry->is_vfat)
return ctx.vfs_vfat->close(ctx.vfs_vfat, fd, error);
return ctx.vfs_ext4->close(ctx.vfs_ext4, fd, error);
}
/* Custom backend builder. ctx.vfs_vfat and ctx.vfs_ext4 are initialized
* as standard backends named "client". */
static struct vfs *_vfs_backend_create(Handle client_id, const char *config, int *error)
{
(void)config;
ctx.vfs_vfat = _vfs_init("client", client_id, "VFS1", error);
assert(ctx.vfs_vfat != NULL && "Can't initialize client backend!");
assert(ctx.vfs_vfat->dtor != NULL && "VFS FS backend has not set the destructor!");
ctx.vfs_ext4 = _vfs_init("client", client_id, "VFS2", error);
assert(ctx.vfs_ext4 != NULL && "Can't initialize client backend!");
assert(ctx.vfs_ext4->dtor != NULL && "VFS FS backend has not set the destructor!");
return &ctx.wrapper;
}
/* Registration of the custom backend under the name custom_client. */
static void _vfs_backend(create_vfs_backend_t *ctor, const char **name)
{
*ctor = &_vfs_backend_create;
*name = "custom_client";
}
REGISTER_VFS_BACKEND(_vfs_backend)
Linking of the Client program and the custom_client backend
Compile the written backend into a library:
CMakeLists.txt
add_library (backend_client STATIC "src/backend.c")
Link the prepared backend_client library to the Client program. The --whole-archive linker options ensure that the registration code added by REGISTER_VFS_BACKEND is kept in the executable even though nothing references it directly:
CMakeLists.txt (fragment)
add_dependencies (Client vfs_backend_client backend_client)
target_link_libraries (Client
pthread
${vfs_CLIENT_LIB}
"-Wl,--whole-archive" backend_client "-Wl,--no-whole-archive" backend_client
)
Writing the Env program
We use the Env
program to pass arguments and environment variables to processes.
env.c
#include <env/env.h>
#include <stdlib.h>
int main(int argc, char** argv)
{
/* Mount fat32 to /mnt1 for the VfsFirst process and mount ext4 to /mnt2 for the VfsSecond process. */
const char* VfsFirstArgs[] = {
"-l", "ahci0 /mnt1 fat32 0"
};
ENV_REGISTER_ARGS("VfsFirst", VfsFirstArgs);
const char* VfsSecondArgs[] = {
"-l", "ahci1 /mnt2 ext4 0"
};
ENV_REGISTER_ARGS("VfsSecond", VfsSecondArgs);
/* Define the file backends. */
const char* vfs_first_args[] = { "_VFS_FILESYSTEM_BACKEND=server:VFS1" };
ENV_REGISTER_VARS("VfsFirst", vfs_first_args);
const char* vfs_second_args[] = { "_VFS_FILESYSTEM_BACKEND=server:VFS2" };
ENV_REGISTER_VARS("VfsSecond", vfs_second_args);
const char* client_fs_envs[] = { "_VFS_FILESYSTEM_BACKEND=custom_client:VFS1,VFS2" };
ENV_REGISTER_VARS("Client", client_fs_envs);
envServerRun();
return EXIT_SUCCESS;
}
Editing init.yaml
For the IPC channels that connect the Client
process to the VfsFirst
and VfsSecond
processes, you must define the same names that you specified in the _VFS_FILESYSTEM_BACKEND
environment variable: VFS1 and VFS2.
init.yaml
entities:
- name: vfs_backend.Env
- name: vfs_backend.Client
connections:
- target: vfs_backend.Env
id: {var: ENV_SERVICE_NAME, include: env/env.h}
- target: vfs_backend.VfsFirst
id: VFS1
- target: vfs_backend.VfsSecond
id: VFS2
- name: vfs_backend.VfsFirst
connections:
- target: vfs_backend.Env
id: {var: ENV_SERVICE_NAME, include: env/env.h}
- name: vfs_backend.VfsSecond
connections:
- target: vfs_backend.Env
id: {var: ENV_SERVICE_NAME, include: env/env.h}
Overview: creating IPC channels
There are two methods for creating IPC channels: static and dynamic.
Static creation of IPC channels is simpler to implement because you can use the init description for this purpose.
Dynamic creation of IPC channels lets you change the topology of interaction between processes on the fly. This is necessary if it is unknown which specific server contains the endpoint required by the client. For example, you may not know which specific drive you will need to write data to.
Statically creating an IPC channel
The static method has the following distinguishing characteristics:
- The client and server are in the stopped state when the IPC channel is created.
- Creation of this channel is initiated by the parent process that starts the client and server (this is normally Einit).
- The created IPC channel cannot be deleted.
- To get the IPC handle and endpoint ID (riid) after the IPC channel is created, the client and server must use the endpoint locator interface (
coresrv/sl/sl_api.h
).
Dynamically creating an IPC channel
The dynamic method has the following distinguishing characteristics:
- The client and server are already running at the time of creating the IPC channel.
- Creation of the channel is initiated jointly by the client and server.
- The created IPC channel can be deleted.
- The client and server get the IPC handle and endpoint ID (riid) immediately after the IPC channel is successfully created.
Creating IPC channels using init.yaml
This section contains init descriptions that demonstrate the specific features of creating IPC channels. Examples of defining properties and arguments of processes via init descriptions are examined in a separate article.
Examples in KasperskyOS Community Edition may utilize a macro-containing init description format (init.yaml.in
).
The file containing an init description is usually named init.yaml
, but it can have any name.
Connecting and starting a client process and server process
In the next example, two processes will be started: one process of the Client
class and one process of the Server
class. The names of the processes are not specified, so they will match the names of their respective process classes. The names of the executable files are not specified either, so they will also match the names of their respective classes. The processes will be connected by an IPC channel named server_connection
.
init.yaml
entities:
- name: Client
connections:
- target: Server
id: server_connection
- name: Server
Dynamically created IPC channels
A dynamically created IPC channel uses the following functions:
- Name Server interface
- Connection Manager interface
An IPC channel is dynamically created according to the following scenario:
- The following processes are started: client, server, and name server.
- The server connects to the name server by using the
NsCreate()
call and publishes the server name, interface name, and endpoint name by using theNsPublishService()
call. - The client uses the
NsCreate()
call to connect to the name server and then uses theNsEnumServices()
call to search for the server name and endpoint name based on the interface name. - The client uses the
KnCmConnect()
call to request access to the endpoint and passes the found server name and endpoint name as arguments. - The server calls the
KnCmListen()
function to check for requests to access the endpoint. - The server accepts the client request to access the endpoint by using the
KnCmAccept()
call and passes the client name and endpoint name received from theKnCmListen()
call as arguments.
Steps 2 and 3 can be skipped if the client already knows the server name and endpoint name in advance.
The server can use the NsUnPublishService()
call to unpublish endpoints that were previously published on the name server.
The server can use the KnCmDrop()
call to reject requests to access endpoints.
To use a name server, the solution security policy must allow interaction between a process of the kl.core.NameServer
class and processes between which IPC channels must be dynamically created.
Adding an endpoint to a solution
To ensure that a Client
program can use some specific functionality via the IPC mechanism, the following is required:
- In KasperskyOS Community Edition, find the executable file (we'll call it
Server
) that implements the necessary functionality. (The term "functionality" used here refers to one or more endpoints that have their own IPC interfaces.) - Include the CMake package containing the
Server
file and its client library. - Add the
Server
executable file to the solution image. - Edit the init description so that when the solution starts, the
Einit
program starts a new server process from theServer
executable file and connects it, using an IPC channel, to the process started from theClient
file.You must indicate the correct name of the IPC channel so that the transport libraries can identify this channel and find its IPC handles. The correct name of the IPC channel normally matches the name of the server process class. VFS is an exception in this case.
- Edit the PSL description to allow startup of the server process and IPC interaction between the client and the server.
- In the source code of the
Client
program, include the server methods header file. - Link the
Client
program with the client library.
Example of adding a GPIO driver to a solution
KasperskyOS Community Edition includes a gpio_hw
file that implements GPIO driver functionality.
The following commands connect the gpio CMake package:
./CMakeLists.txt
...
find_package (gpio REQUIRED COMPONENTS CLIENT_LIB ENTITY)
include_directories (${gpio_INCLUDE})
...
The gpio_hw
executable file is added to a solution image by using the gpio_HW_ENTITY
variable, whose name can be found in the configuration file of the package at /opt/KasperskyOS-Community-Edition-<version>/sysroot-aarch64-kos/lib/cmake/gpio/gpio-config.cmake:
einit/CMakeLists.txt
...
set (ENTITIES Client ${gpio_HW_ENTITY})
...
The following strings need to be added to the init description:
init.yaml.in
...
- name: client.Client
connections:
- target: kl.drivers.GPIO
id: kl.drivers.GPIO
- name: kl.drivers.GPIO
path: gpio_hw
The following strings need to be added to the PSL description:
security.psl.in
...
execute src=Einit, dst=kl.drivers.GPIO
{
grant()
}
request src=client.Client, dst=kl.drivers.GPIO
{
grant()
}
response src=kl.drivers.GPIO, dst=client.Client
{
grant()
}
...
In the code of the Client
program, you need to include the header file in which the GPIO driver methods are declared:
client.c
...
...
Finally, you need to link the Client
program with the GPIO client library:
client/CMakeLists.txt
...
target_link_libraries (Client ${gpio_CLIENT_LIB})
...
To ensure correct operation of the GPIO driver, you may need to add the BSP component to the solution. To avoid overcomplicating this example, BSP is not examined here. For more details, see the gpio_output example: /opt/KasperskyOS-Community-Edition-<version>/examples/gpio_output
Overview: IPC message structure
In KasperskyOS, all interactions between processes have statically defined types. The permissible structures of an IPC message are defined by the description of the interfaces of the process that receives the message (server).
A correct IPC message (request and response) contains a constant part and an arena.
Constant part of a message
The constant part of a message contains arguments of a fixed size, and the RIID and MID.
Fixed-size arguments can be arguments of any IDL types except the sequence
type.
The RIID and MID identify the interface and method being called:
- The RIID (Runtime Implementation ID) is the number of the process endpoint being called, starting at zero.
- The MID (Method ID) is the number of the method within the interface that contains it, starting at zero.
The type of the constant part of the message is generated by the NK compiler based on the IDL description of the interface. A separate structure is generated for each interface method. Union
types are also generated for storing any request to a process, component or interface. For more details, refer to Example generation of transport methods and types.
Arena
The arena is a buffer for storing variable-size arguments (sequence
IDL type).
Message structure verification by the security module
Prior to calling message-related rules, the Kaspersky Security Module verifies that the sent message is correct. Requests and responses are both validated. If the message has an incorrect structure, it will be rejected without calling the security model methods associated with it.
Forming a message structure
KasperskyOS Community Edition includes the following tools that make it easier for the developer to create and package an IPC message:
- The
transport-kos
library for working with NkKosTransport. - The NK compiler that lets you generate special methods and types.
Simple IPC message generation is demonstrated in the echo and ping examples (/opt/KasperskyOS-Community-Edition-<version>/examples/
).
Finding an IPC handle
The client and server IPC handles must be found if there are no ready-to-use transport libraries for the utilized endpoint (for example, if you wrote your own endpoint). To independently work with IPC transport, you need to first initialize it by using the NkKosTransport_Init()
method and pass the IPC handle of the utilized channel as the second argument.
For more details, see the echo and ping examples (/opt/KasperskyOS-Community-Edition-<version>/examples/
)
You do not need to find an IPC handle to utilize services that are implemented in executable files provided in KasperskyOS Community Edition. The provided transport libraries are used to perform all transport operations, including finding IPC handles.
See the gpio_*, net_*, net2_* and multi_vfs_* examples (/opt/KasperskyOS-Community-Edition-<version>/examples/
).
Finding an IPC handle when statically creating a channel
When statically creating an IPC channel, both the client and server can find out their IPC handles immediately after startup by using the ServiceLocatorRegister()
and ServiceLocatorConnect()
methods and specifying the name of the created IPC channel.
For example, if the IPC channel is named server_connection
, the following must be called on the client side:
…
Handle handle = ServiceLocatorConnect("server_connection");
The following must be called on the server side:
…
nk_iid_t iid;
Handle handle = ServiceLocatorRegister("server_connection", NULL, 0, &iid);
For more details, see the echo and ping examples (/opt/KasperskyOS-Community-Edition-<version>/examples/
), and the header file /opt/KasperskyOS-Community-Edition-<version>/sysroot-aarch64-kos/include/coresrv/sl/sl_api.h
.
Finding an IPC handle when dynamically creating a channel
Both the client and server receive their own IPC handles immediately after dynamic creation of an IPC channel is successful.
The client IPC handle is one of the output (out
) arguments of the KnCmConnect()
method. The server IPC handle is an output argument of the KnCmAccept()
method. For more details, see the header file /opt/KasperskyOS-Community-Edition-<version>/sysroot-aarch64-kos/include/coresrv/cm/cm_api.h
.
Finding an endpoint ID (riid)
The endpoint ID (riid) must be found on the client side if there are no ready-to-use transport libraries for the utilized endpoint (for example, if you wrote your own endpoint). To call methods of the server, you must first call the proxy object initialization method on the client side and pass the endpoint ID as the third argument. For example, for the Filesystem
interface:
Filesystem_proxy_init(&proxy, &transport.base, riid);
For more details, see the echo and ping examples (/opt/KasperskyOS-Community-Edition-<version>/examples/
)
You do not need to find the endpoint ID to utilize services that are implemented in executable files provided in KasperskyOS Community Edition. The provided transport libraries are used to perform all transport operations.
See the gpio_*, net_*, net2_* and multi_vfs_* examples (/opt/KasperskyOS-Community-Edition-<version>/examples/
).
Finding a service ID when statically creating a channel
When statically creating an IPC channel, the client can find out the ID of the necessary endpoint by using the ServiceLocatorGetRiid()
method and specifying the IPC channel handle and the fully qualified name of the endpoint. For example, if the OpsComp
component instance contains the FS
endpoint, the following must be called on the client side:
…
nk_iid_t riid = ServiceLocatorGetRiid(handle, "OpsComp.FS");
For more details, see the echo and ping examples (/opt/KasperskyOS-Community-Edition-<version>/examples/
), and the header file /opt/KasperskyOS-Community-Edition-<version>/sysroot-aarch64-kos/include/coresrv/sl/sl_api.h
.
Finding a service ID when dynamically creating a channel
The client receives the endpoint ID immediately after dynamic creation of an IPC channel is successful. The client IPC handle is one of the output (out
) arguments of the KnCmConnect()
method. For more details, see the header file /opt/KasperskyOS-Community-Edition-<version>/sysroot-aarch64-kos/include/coresrv/cm/cm_api.h
.
Example generation of transport methods and types
When building a solution, the NK compiler uses the EDL, CDL and IDL descriptions to generate a set of special methods and types that simplify the creation, forwarding, receipt and processing of IPC messages.
As an example, we will examine the Server
process class that provides the FS
endpoint, which contains a single Open()
method:
Server.edl
entity Server
/* OpsComp is the named instance of the Operations component */
components {
OpsComp: Operations
}
Operations.cdl
component Operations
/* FS is the local name of the endpoint implementing the Filesystem interface */
endpoints {
FS: Filesystem
}
Filesystem.idl
package Filesystem
interface {
Open(in string<256> name, out UInt32 h);
}
These descriptions will be used to generate the files named Server.edl.h
, Operations.cdl.h
, and Filesystem.idl.h
, which contain the following methods and types:
Methods and types that are common to the client and server
- Abstract interfaces containing the pointers to the implementations of the methods included in them.
In our example, one abstract interface (
Filesystem
) will be generated:typedef struct Filesystem {
const struct Filesystem_ops *ops;
} Filesystem;
typedef nk_err_t
Filesystem_Open_fn(struct Filesystem *, const
struct Filesystem_Open_req *,
const struct nk_arena *,
struct Filesystem_Open_res *,
struct nk_arena *);
typedef struct Filesystem_ops {
Filesystem_Open_fn *Open;
} Filesystem_ops;
- Set of interface methods.
When calling an interface method, the corresponding values of the RIID and MID are automatically inserted into the request.
In our example, a single
Filesystem_Open
interface method will be generated:nk_err_t Filesystem_Open(struct Filesystem *self,
struct Filesystem_Open_req *req,
const
struct nk_arena *req_arena,
struct Filesystem_Open_res *res,
struct nk_arena *res_arena)
Methods and types used only on the client
- Types of proxy objects.
A proxy object is used as an argument in an interface method. In our example, a single
Filesystem_proxy
proxy object type will be generated:typedef struct Filesystem_proxy {
struct Filesystem base;
struct nk_transport *transport;
nk_iid_t iid;
} Filesystem_proxy;
- Functions for initializing proxy objects.
In our example, the single initializing function
Filesystem_proxy_init
will be generated:void Filesystem_proxy_init(struct Filesystem_proxy *self,
struct nk_transport *transport,
nk_iid_t iid)
- Types that define the structure of the constant part of a message for each specific method.
In our example, two such types will be generated:
Filesystem_Open_req
(for a request) andFilesystem_Open_res
(for a response).typedef struct __nk_packed Filesystem_Open_req {
__nk_alignas(8)
struct nk_message base_;
__nk_alignas(4) nk_ptr_t name;
} Filesystem_Open_req;
typedef struct Filesystem_Open_res {
union {
struct {
__nk_alignas(8)
struct nk_message base_;
__nk_alignas(4) nk_uint32_t h;
};
struct {
__nk_alignas(8)
struct nk_message base_;
__nk_alignas(4) nk_uint32_t h;
} res_;
struct Filesystem_Open_err err_;
};
} Filesystem_Open_res;
Methods and types used only on the server
- Type containing all endpoints of a component, and the initializing function. (For each server component.)
If there are embedded components, this type also contains their instances, and the initializing function takes their corresponding initialized structures. Therefore, if embedded components are present, their initialization must begin with the most deeply embedded component.
In our example, the
Operations_component
structure andOperations_component_init
function will be generated:typedef struct Operations_component {
struct Filesystem *FS;
} Operations_component;
void Operations_component_init(struct Operations_component *self,
struct Filesystem *FS)
- Type containing all endpoints provided directly by the server; all instances of components included in the server; and the initializing function.
In our example, the
Server_entity
structure andServer_entity_init
function will be generated:typedef struct Server_entity {
struct Operations_component *OpsComp;
} Server_entity;
void Server_entity_init(struct Server_entity *self,
struct Operations_component *OpsComp)
- Types that define the structure of the constant part of a message for any method of a specific interface.
In our example, two such types will be generated:
Filesystem_req
(for a request) andFilesystem_res
(for a response).typedef union Filesystem_req {
struct nk_message base_;
struct Filesystem_Open_req Open;
} Filesystem_req;
typedef union Filesystem_res {
struct nk_message base_;
struct Filesystem_Open_res Open;
} Filesystem_res;
- Types that define the structure of the constant part of a message for any method of any endpoint of a specific component.
If embedded components are present, these types also contain structures of the constant part of a message for any method of any endpoint included in all embedded components.
In our example, two such types will be generated:
Operations_component_req
(for a request) andOperations_component_res
(for a response).typedef union Operations_component_req {
struct nk_message base_;
Filesystem_req FS;
} Operations_component_req;
typedef union Operations_component_res {
struct nk_message base_;
Filesystem_res FS;
} Operations_component_res;
- Types that define the structure of the constant part of a message for any method of any endpoint of a specific component whose instance is included in the server.
If embedded components are present, these types also contain structures of the constant part of a message for any method of any endpoint included in all embedded components.
In our example, two such types will be generated:
Server_entity_req
(for a request) andServer_entity_res
(for a response).typedef union Server_entity_req {
struct nk_message base_;
Filesystem_req OpsComp_FS;
} Server_entity_req;
typedef union Server_entity_res {
struct nk_message base_;
Filesystem_res OpsComp_FS;
} Server_entity_res;
- Dispatch methods (dispatchers) for a separate interface, component, or process class.
Dispatchers analyze the received query (the RIID and MID values), call the implementation of the corresponding method, and then save the response in the buffer. In our example, three dispatchers will be generated:
Filesystem_interface_dispatch
,Operations_component_dispatch
, andServer_entity_dispatch
.The process class dispatcher handles the request and calls the methods implemented by this class. If the request contains an incorrect RIID (for example, an RIID for a different endpoint that this process class does not have) or an incorrect MID, the dispatcher returns
NK_EOK
orNK_ENOENT
.nk_err_t Server_entity_dispatch(struct Server_entity *self,
const
struct nk_message *req,
const
struct nk_arena *req_arena,
struct nk_message *res,
struct nk_arena *res_arena)
In special cases, you can use dispatchers of the interface and the component. They take an additional argument: interface implementation ID (
nk_iid_t
). The request will be handled only if the passed argument and RIID from the request match, and if the MID is correct. Otherwise, the dispatchers returnNK_EOK
orNK_ENOENT
.nk_err_t Operations_component_dispatch(struct Operations_component *self,
nk_iid_t iidOffset,
const
struct nk_message *req,
const
struct nk_arena *req_arena,
struct nk_message *res,
struct nk_arena *res_arena)
nk_err_t Filesystem_interface_dispatch(struct Filesystem *impl,
nk_iid_t iid,
const
struct nk_message *req,
const
struct nk_arena *req_arena,
struct nk_message *res,
struct nk_arena *res_arena)