KasperskyOS Community Edition 1.2

File systems and network

In KasperskyOS, operations with file systems and the network are executed via a separate system program that implements a virtual file system (VFS).

In the SDK, the VFS component consists of a set of executable files, libraries, formal specification files, and header files. For more details, see the Contents of the VFS component section.

The main scenario of interaction with the VFS system program includes the following:

  1. An application is linked to the client library of the VFS component during the build and connects to the VFS system program via an IPC channel.
  2. In the application code, POSIX calls for working with file systems and the network are converted into client library function calls.

    Input and output on the standard I/O streams (stdin, stdout and stderr) are also converted into queries to VFS. If the application is not linked to the client library of the VFS component, printing to stdout is not possible; only printing to the standard error stream (stderr) is available, and it is performed via special methods of the KasperskyOS kernel without using VFS.

  3. The client library makes IPC requests to the VFS system program.
  4. The VFS system program receives the IPC request and calls the corresponding file system implementation (which, in turn, may make IPC requests to device drivers) or the network stack (which may make IPC requests to network drivers).
  5. After the request is handled, the VFS system program sends an IPC response to the application.

Using multiple VFS programs

Multiple copies of the VFS system program can be added to a solution for the purpose of separating the data streams of different system programs and applications. You can also separate the data streams within one application. For more details, refer to Using VFS backends to separate data streams.

Adding VFS functionality to an application

The complete functionality of the VFS component can be included in an application, thereby avoiding the need to pass each request via IPC. For more details, refer to Including VFS functionality in a program.

However, use of VFS functionality via IPC enables the solution developer to do the following:

  • Use a solution security policy to control method calls for working with the network and file systems.
  • Connect multiple client programs to one VFS program.
  • Connect one client program to two VFS programs to separately work with the network and file systems.

In this section

Contents of the VFS component

Creating an IPC channel to VFS

Including VFS functionality in a program

Overview: startup parameters and environment variables of VFS

Mounting file systems when VFS starts

Using VFS backends to separate data streams

Creating a VFS backend

Dynamically configuring the network stack

[Topic sc_filesystems_and_net]

Contents of the VFS component

The VFS component implements the virtual file system. In the KasperskyOS SDK, the VFS component consists of a set of executable files, libraries, formal specification files and header files that enable the use of file systems and/or a network stack.

VFS libraries

The vfs CMake package contains the following libraries:

  • vfs_fs contains implementations of the devfs, ramfs and ROMFS file systems, and adds implementations of other file systems to VFS.
  • vfs_net contains the implementation of the devfs file system and the network stack.
  • vfs_imp contains the vfs_fs and vfs_net libraries.
  • vfs_remote is the client transport library that converts local calls into IPC requests to VFS and receives IPC responses.
  • vfs_server is the VFS server transport library that receives IPC requests, converts them into local calls, and sends IPC responses.
  • vfs_local is used to include VFS functionality in a program.

VFS executable files

The precompiled_vfs CMake package contains the following executable files:

  • VfsRamFs
  • VfsSdCardFs
  • VfsNet

The VfsRamFs and VfsSdCardFs executable files include the vfs_server, vfs_fs, vfat and lwext4 libraries. The VfsNet executable file includes the vfs_server and vfs_imp libraries.

Each of these executable files has its own default values for startup parameters and environment variables.

Formal specification files and header files of VFS

The sysroot-*-kos/include/kl directory from the KasperskyOS SDK contains the following VFS files:

  • Formal specification files VfsRamFs.edl, VfsSdCardFs.edl, VfsNet.edl and VfsEntity.edl, and the header files generated from them.
  • Formal specification file Vfs.cdl and the header file Vfs.cdl.h generated from it.
  • Formal specification files Vfs*.idl and the header files generated from them.

Libc library API supported by VFS

VFS functionality is available to programs through the API provided by the libc library.

The functions implemented by the vfs_fs and vfs_net libraries are listed below. The * character denotes functions that are optionally included in the vfs_fs library (depending on the library build parameters).

Functions implemented by the vfs_fs library:

mount(), umount(), open(), openat(), read(), readv(), write(), writev(), stat(), lstat(), fstat(), fstatat(), lseek(), close(), rename(), renameat(), unlink(), rmdir(), mkdir(), mkdirat(), fcntl(), statvfs(), fstatvfs(), getvfsstat(), pipe(), futimens(), utimensat(), link(), linkat(), symlink(), symlinkat(), unlinkat(), ftruncate(), chdir(), fchdir(), chmod(), fchmod(), fchmodat(), chroot(), fsync(), fdatasync(), pread(), pwrite(), sendfile(), getdents(), sync(), ioctl(), setxattr()*, lsetxattr()*, fsetxattr()*, getxattr()*, lgetxattr()*, fgetxattr()*, listxattr()*, llistxattr()*, flistxattr()*, removexattr()*, lremovexattr()*, fremovexattr()*, acl_set_file()*, acl_get_file()*, acl_delete_def_file()*

Functions implemented by the vfs_net library:

read(), readv(), write(), writev(), fstat(), close(), fcntl(), fstatvfs(), pipe(), futimens(), socket(), socketpair(), bind(), listen(), connect(), accept(), poll(), shutdown(), getnameinfo(), getaddrinfo(), freeaddrinfo(), getifaddrs(), freeifaddrs(), getpeername(), getsockname(), gethostbyname(), getnetbyaddr(), getnetbyname(), getnetent(), setnetent(), endnetent(), getprotobyname(), getprotobynumber(), getsockopt(), setsockopt(), recv(), recvfrom(), recvmsg(), send(), sendto(), sendmsg(), ioctl(), sysctl()

If there is no implementation of a called function in VFS, the EIO error code is returned.
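A program cannot always tell in advance which libc functions are implemented by the particular VFS build it talks to, so robust code checks errno after each call. The sketch below (plain POSIX C, not KasperskyOS-specific; the path names are illustrative) shows the checking pattern, treating EIO as "not implemented by VFS" per the note above:

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Returns 0 on success; on failure prints a diagnostic and returns -errno.
 * On KasperskyOS, a call to a function that has no implementation in the
 * VFS build being used fails with EIO. */
int try_open(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd == -1)
    {
        int err = errno;
        if (err == EIO)
            fprintf(stderr, "%s: operation not implemented in VFS\n", path);
        else
            fprintf(stderr, "%s: %s\n", path, strerror(err));
        return -err;
    }
    close(fd);
    return 0;
}
```

Distinguishing EIO from ordinary failures such as ENOENT makes it clear whether the error comes from the file system contents or from the set of libraries included in the VFS executable.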

[Topic vfs_overview]

Creating an IPC channel to VFS

In this example, the Client process uses the file systems and network stack, and the VfsFsnet process handles the IPC requests of the Client process related to the use of file systems and the network stack. This approach is utilized when there is no need to separate data streams related to file systems and the network stack.

The IPC channel name must be assigned by the _VFS_CONNECTION_ID macro defined in the header file sysroot-*-kos/include/vfs/defs.h from the KasperskyOS SDK.

Init description of the example:

init.yaml

entities:
- name: Client
  connections:
  - target: VfsFsnet
    id: {var: _VFS_CONNECTION_ID, include: vfs/defs.h}
- name: VfsFsnet
[Topic client_and_vfs_ipc_channel]

Including VFS functionality in a program

In this example, the Client program includes the VFS program functionality for working with the network stack (see the figure below).

VFS component libraries in a program

The client.c implementation file is compiled and the vfs_local, vfs_implementation and dnet_implementation libraries are linked:

CMakeLists.txt

project (client)
include (platform/nk)

# Set compile flags
project_header_default ("STANDARD_GNU_11:YES" "STRICT_WARNINGS:NO")

# Generates the Client.edl.h file
nk_build_edl_files (client_edl_files
                    NK_MODULE "client"
                    EDL "${CMAKE_SOURCE_DIR}/resources/edl/Client.edl")

add_executable (Client "src/client.c")
add_dependencies (Client client_edl_files)

# Linking with VFS libraries
target_link_libraries (Client
                       ${vfs_LOCAL_LIB}
                       ${vfs_IMPLEMENTATION_LIB}
                       ${dnet_IMPLEMENTATION_LIB})

If the Client program uses file systems, you must link the vfs_local and vfs_fs libraries, as well as the libraries implementing those file systems. In that case, you must also add a block device driver to the solution.

[Topic client_and_vfs_linked]

Overview: startup parameters and environment variables of VFS

VFS program startup parameters

  • -l <entry in fstab format>

    The startup parameter -l mounts the defined file system.

  • -f <path to fstab file>

    The parameter -f mounts the file systems specified in the fstab file. If the UNMAP_ROMFS environment variable is not defined, the fstab file will be sought in the ROMFS image. If the UNMAP_ROMFS environment variable is defined, the fstab file will be sought in the file system defined through the ROOTFS environment variable.

Examples of using VFS program startup parameters

Environment variables of the VFS program

  • UNMAP_ROMFS

    If the UNMAP_ROMFS environment variable is defined, the ROMFS image will be deleted from memory. This helps conserve memory. When using the startup parameter -f, it also provides the capability to search for the fstab file in the file system defined through the ROOTFS environment variable instead of searching the ROMFS image.

    Example of using the UNMAP_ROMFS environment variable

  • ROOTFS = <entry in fstab format>

    The ROOTFS environment variable mounts the defined file system to the root directory. When using the startup parameter -f, a combination of the ROOTFS and UNMAP_ROMFS environment variables provides the capability to search for the fstab file in the file system defined through the ROOTFS environment variable instead of searching the ROMFS image.

    Example of using the ROOTFS environment variable

  • VFS_CLIENT_MAX_THREADS

    The VFS_CLIENT_MAX_THREADS environment variable redefines the SDK configuration parameter VFS_CLIENT_MAX_THREADS.

  • _VFS_NETWORK_BACKEND=<VFS backend name>:<name of the IPC channel to the VFS process>

    The _VFS_NETWORK_BACKEND environment variable defines the VFS backend for working with the network stack. You can specify the name of the standard VFS backend: client (for a program that runs in the context of a client process), server (for a VFS program that runs in the context of a server process) or local, and the name of a custom VFS backend. If the local VFS backend is used, the name of the IPC channel is not specified (_VFS_NETWORK_BACKEND=local:). You can specify more than one IPC channel by separating them with a comma.

  • _VFS_FILESYSTEM_BACKEND=<VFS backend name>:<name of the IPC channel to the VFS process>

    The _VFS_FILESYSTEM_BACKEND environment variable defines the VFS backend for working with file systems. The name of the VFS backend and the name of the IPC channel to the VFS process are defined the same way as they are defined in the _VFS_NETWORK_BACKEND environment variable.

Default values for startup parameters and environment variables of VFS

For the VfsRamFs executable file:

ROOTFS = ramdisk0,0 / ext4 0
_VFS_FILESYSTEM_BACKEND = server:kl.VfsRamFs

For the VfsSdCardFs executable file:

ROOTFS = mmc0,0 / fat32 0
_VFS_FILESYSTEM_BACKEND = server:kl.VfsSdCardFs
-l nodev /tmp ramfs 0
-l nodev /var ramfs 0

For the VfsNet executable file:

_VFS_NETWORK_BACKEND = server:kl.VfsNet
_VFS_FILESYSTEM_BACKEND = server:kl.VfsNet
-l devfs /dev devfs 0
[Topic vfs_args_and_envs_overview]

Mounting file systems when VFS starts

When the VFS program starts, only the RAMFS file system is mounted to the root directory by default. If you need to mount other file systems, this can be done not only by calling the mount() function but also by setting the startup parameters and environment variables of the VFS program.

The ROMFS and squashfs file systems are intended for read-only operations. For this reason, you must specify the ro parameter to mount these file systems.

Using the startup parameter -l

One way to mount a file system is to set the startup parameter -l <entry in fstab format> for the VFS program.

In these examples, the devfs and ROMFS file systems will be mounted when the VFS program is started:

init.yaml.(in)

...
- name: VfsFirst
  args:
  - -l
  - devfs /dev devfs 0
  - -l
  - romfs /etc romfs ro
...

CMakeLists.txt

...
set_target_properties (${vfs_ENTITY} PROPERTIES
  EXTRA_ARGS "
    - -l
    - devfs /dev devfs 0
    - -l
    - romfs /etc romfs ro")
...

Using the fstab file from the ROMFS image

When building a solution, you can add the fstab file to the ROMFS image. This file can be used to mount file systems by setting the startup parameter -f <path to the fstab file> for the VFS program.

In these examples, the file systems defined via the fstab file that was added to the ROMFS image during the solution build will be mounted when the VFS program is started:

init.yaml.(in)

...
- name: VfsSecond
  args:
  - -f
  - fstab
...

CMakeLists.txt

...
set_target_properties (${vfs_ENTITY} PROPERTIES
  EXTRA_ARGS "
    - -f
    - fstab")
...

Using an "external" fstab file

If the fstab file resides in another file system instead of in the ROMFS image, you must set the following startup parameters and environment variables for the VFS program to enable use of this file:

  1. ROOTFS. This environment variable mounts the file system containing the fstab file to the root directory.
  2. UNMAP_ROMFS. If this environment variable is defined, the fstab file will be sought in the file system defined through the ROOTFS environment variable.
  3. -f. This startup parameter is used to mount the file systems specified in the fstab file.

In these examples, the ext2 file system that should contain the fstab file at the path /etc/fstab will be mounted to the root directory when the VFS program starts:

init.yaml.(in)

...
- name: VfsThird
  args:
  - -f
  - /etc/fstab
  env:
    ROOTFS: ramdisk0,0 / ext2 0
    UNMAP_ROMFS: 1
...

CMakeLists.txt

...
set_target_properties (${vfs_ENTITY} PROPERTIES
  EXTRA_ARGS "
    - -f
    - /etc/fstab"
  EXTRA_ENV "
    ROOTFS: ramdisk0,0 / ext2 0
    UNMAP_ROMFS: 1")
...
[Topic mount_on_start]

Using VFS backends to separate data streams

This example employs a secure development pattern that separates data streams related to file system use from data streams related to the use of a network stack.

The Client process uses file systems and the network stack. The VfsFirst process works with file systems, and the VfsSecond process provides the capability to work with the network stack. The environment variables of programs that run in the contexts of the Client, VfsFirst and VfsSecond processes are used to define the VFS backends that ensure the segregated use of file systems and the network stack. As a result, IPC requests of the Client process that are related to the use of file systems are handled by the VfsFirst process, and IPC requests of the Client process that are related to network stack use are handled by the VfsSecond process (see the figure below).

Process interaction scenario

Init description of the example:

init.yaml

entities:
- name: Client
  connections:
  - target: VfsFirst
    id: VFS1
  - target: VfsSecond
    id: VFS2
  env:
    _VFS_FILESYSTEM_BACKEND: client:VFS1
    _VFS_NETWORK_BACKEND: client:VFS2
- name: VfsFirst
  env:
    _VFS_FILESYSTEM_BACKEND: server:VFS1
- name: VfsSecond
  env:
    _VFS_NETWORK_BACKEND: server:VFS2
[Topic client_and_two_vfs]

Creating a VFS backend

This example demonstrates how to create and use a custom VFS backend.

The Client process uses the fat32 and ext4 file systems. The VfsFirst process works with the fat32 file system, and the VfsSecond process provides the capability to work with the ext4 file system. The environment variables of programs that run in the contexts of the Client, VfsFirst and VfsSecond processes are used to define the VFS backends ensuring that IPC requests of the Client process are handled by the VfsFirst or VfsSecond process depending on the specific file system being used by the Client process. As a result, IPC requests of the Client process related to use of the fat32 file system are handled by the VfsFirst process, and IPC requests of the Client process related to use of the ext4 file system are handled by the VfsSecond process (see the figure below).

On the VfsFirst process side, the fat32 file system is mounted to the directory /mnt1. On the VfsSecond process side, the ext4 file system is mounted to the directory /mnt2. The custom VFS backend custom_client used on the Client process side sends IPC requests over the IPC channel VFS1 or VFS2 depending on whether or not the file path begins with /mnt1. The custom VFS backend uses the standard VFS backend client as an intermediary.

Process interaction scenario

Source code of the VFS backend

This implementation file contains the source code of the VFS backend custom_client, which uses two instances of the standard client VFS backend:

backend.c

#include <vfs/vfs.h>
#include <stdio.h>
#include <stdlib.h>
#include <platform/compiler.h>
#include <pthread.h>
#include <errno.h>
#include <string.h>
#include <getopt.h>
#include <assert.h>

/* Code for managing file handles */
#define MAX_FDS 50

struct entry
{
    Handle handle;
    bool is_vfat;
};

struct fd_array
{
    struct entry entries[MAX_FDS];
    int pos;
    pthread_rwlock_t lock;
};

struct fd_array fds = { .pos = 0, .lock = PTHREAD_RWLOCK_INITIALIZER };

int insert_entry(Handle fd, bool is_vfat)
{
    pthread_rwlock_wrlock(&fds.lock);
    if (fds.pos == MAX_FDS)
    {
        pthread_rwlock_unlock(&fds.lock);
        return -1;
    }
    fds.entries[fds.pos].handle = fd;
    fds.entries[fds.pos].is_vfat = is_vfat;
    fds.pos++;
    pthread_rwlock_unlock(&fds.lock);
    return 0;
}

struct entry *find_entry(Handle fd)
{
    pthread_rwlock_rdlock(&fds.lock);
    for (int i = 0; i < fds.pos; i++)
    {
        if (fds.entries[i].handle == fd)
        {
            pthread_rwlock_unlock(&fds.lock);
            return &fds.entries[i];
        }
    }
    pthread_rwlock_unlock(&fds.lock);
    return NULL;
}

/* Custom VFS backend structure */
struct context
{
    struct vfs wrapper;
    pthread_rwlock_t lock;
    struct vfs *vfs_vfat;
    struct vfs *vfs_ext4;
};

/* Forward declarations of the custom VFS backend methods */
static void _vfs_backend_dtor(struct vfs *vfs);
static void _disconnect_all_clients(struct vfs *self, int *error);
static Handle _getstdin(struct vfs *self, int *error);
static Handle _getstdout(struct vfs *self, int *error);
static Handle _getstderr(struct vfs *self, int *error);
static Handle _open(struct vfs *self, const char *path, int oflag, mode_t mode, int *error);
static ssize_t _read(struct vfs *self, Handle fd, void *buf, size_t count, bool *nodata, int *error);
static ssize_t _write(struct vfs *self, Handle fd, const void *buf, size_t count, int *error);
static int _close(struct vfs *self, Handle fd, int *error);

struct context ctx =
{
    .wrapper =
    {
        .dtor = _vfs_backend_dtor,
        .disconnect_all_clients = _disconnect_all_clients,
        .getstdin = _getstdin,
        .getstdout = _getstdout,
        .getstderr = _getstderr,
        .open = _open,
        .read = _read,
        .write = _write,
        .close = _close,
    }
};

/* Implementation of custom VFS backend methods */
static bool is_vfs_vfat_path(const char *path)
{
    char vfat_path[5] = "/mnt1";
    if (memcmp(vfat_path, path, sizeof(vfat_path)) != 0)
        return false;
    return true;
}

static void _vfs_backend_dtor(struct vfs *vfs)
{
    ctx.vfs_vfat->dtor(ctx.vfs_vfat);
    ctx.vfs_ext4->dtor(ctx.vfs_ext4);
}

static void _disconnect_all_clients(struct vfs *self, int *error)
{
    (void)self;
    (void)error;
    ctx.vfs_vfat->disconnect_all_clients(ctx.vfs_vfat, error);
    ctx.vfs_ext4->disconnect_all_clients(ctx.vfs_ext4, error);
}

static Handle _getstdin(struct vfs *self, int *error)
{
    (void)self;
    Handle handle = ctx.vfs_vfat->getstdin(ctx.vfs_vfat, error);
    if (handle != INVALID_HANDLE)
    {
        if (insert_entry(handle, true))
        {
            *error = ENOMEM;
            return INVALID_HANDLE;
        }
    }
    return handle;
}

static Handle _getstdout(struct vfs *self, int *error)
{
    (void)self;
    Handle handle = ctx.vfs_vfat->getstdout(ctx.vfs_vfat, error);
    if (handle != INVALID_HANDLE)
    {
        if (insert_entry(handle, true))
        {
            *error = ENOMEM;
            return INVALID_HANDLE;
        }
    }
    return handle;
}

static Handle _getstderr(struct vfs *self, int *error)
{
    (void)self;
    Handle handle = ctx.vfs_vfat->getstderr(ctx.vfs_vfat, error);
    if (handle != INVALID_HANDLE)
    {
        if (insert_entry(handle, true))
        {
            *error = ENOMEM;
            return INVALID_HANDLE;
        }
    }
    return handle;
}

static Handle _open(struct vfs *self, const char *path, int oflag, mode_t mode, int *error)
{
    (void)self;
    Handle handle;
    bool is_vfat = false;
    if (is_vfs_vfat_path(path))
    {
        handle = ctx.vfs_vfat->open(ctx.vfs_vfat, path, oflag, mode, error);
        is_vfat = true;
    }
    else
        handle = ctx.vfs_ext4->open(ctx.vfs_ext4, path, oflag, mode, error);
    if (handle == INVALID_HANDLE)
        return INVALID_HANDLE;
    if (insert_entry(handle, is_vfat))
    {
        if (is_vfat)
            ctx.vfs_vfat->close(ctx.vfs_vfat, handle, error);
        *error = ENOMEM;
        return INVALID_HANDLE;
    }
    return handle;
}

static ssize_t _read(struct vfs *self, Handle fd, void *buf, size_t count, bool *nodata, int *error)
{
    (void)self;
    struct entry *found_entry = find_entry(fd);
    if (found_entry != NULL && found_entry->is_vfat)
        return ctx.vfs_vfat->read(ctx.vfs_vfat, fd, buf, count, nodata, error);
    return ctx.vfs_ext4->read(ctx.vfs_ext4, fd, buf, count, nodata, error);
}

static ssize_t _write(struct vfs *self, Handle fd, const void *buf, size_t count, int *error)
{
    (void)self;
    struct entry *found_entry = find_entry(fd);
    if (found_entry != NULL && found_entry->is_vfat)
        return ctx.vfs_vfat->write(ctx.vfs_vfat, fd, buf, count, error);
    return ctx.vfs_ext4->write(ctx.vfs_ext4, fd, buf, count, error);
}

static int _close(struct vfs *self, Handle fd, int *error)
{
    (void)self;
    struct entry *found_entry = find_entry(fd);
    if (found_entry != NULL && found_entry->is_vfat)
        return ctx.vfs_vfat->close(ctx.vfs_vfat, fd, error);
    return ctx.vfs_ext4->close(ctx.vfs_ext4, fd, error);
}

/* Custom VFS backend builder. ctx.vfs_vfat and ctx.vfs_ext4 are initialized
 * as standard backends named "client". */
static struct vfs *_vfs_backend_create(Handle client_id, const char *config, int *error)
{
    (void)config;
    ctx.vfs_vfat = _vfs_init("client", client_id, "VFS1", error);
    assert(ctx.vfs_vfat != NULL && "Can't initialize client backend!");
    assert(ctx.vfs_vfat->dtor != NULL && "VFS FS backend has not set the destructor!");
    ctx.vfs_ext4 = _vfs_init("client", client_id, "VFS2", error);
    assert(ctx.vfs_ext4 != NULL && "Can't initialize client backend!");
    assert(ctx.vfs_ext4->dtor != NULL && "VFS FS backend has not set the destructor!");
    return &ctx.wrapper;
}

/* Registration of the custom VFS backend under the name custom_client */
static void _vfs_backend(create_vfs_backend_t *ctor, const char **name)
{
    *ctor = &_vfs_backend_create;
    *name = "custom_client";
}

REGISTER_VFS_BACKEND(_vfs_backend)

Linking the Client program

Creating a static VFS backend library:

CMakeLists.txt

...
add_library (backend_client STATIC "src/backend.c")
...

Linking the Client program to the static VFS backend library:

CMakeLists.txt

...
add_dependencies (Client vfs_backend_client backend_client)
target_link_libraries (Client
                       pthread
                       ${vfs_CLIENT_LIB}
                       "-Wl,--whole-archive" backend_client "-Wl,--no-whole-archive"
                       backend_client)
...

Setting the startup parameters and environment variables of programs

Init description of the example:

init.yaml

entities:
- name: vfs_backend.Client
  connections:
  - target: vfs_backend.VfsFirst
    id: VFS1
  - target: vfs_backend.VfsSecond
    id: VFS2
  env:
    _VFS_FILESYSTEM_BACKEND: custom_client:VFS1,VFS2
- name: vfs_backend.VfsFirst
  args:
  - -l
  - ahci0 /mnt1 fat32 0
  env:
    _VFS_FILESYSTEM_BACKEND: server:VFS1
- name: vfs_backend.VfsSecond
  args:
  - -l
  - ahci1 /mnt2 ext4 0
  env:
    _VFS_FILESYSTEM_BACKEND: server:VFS2
[Topic vfs_backends]

Dynamically configuring the network stack

To change the default network stack parameters, use the sysctl() or sysctlbyname() functions declared in the header file sysroot-*-kos/include/sys/sysctl.h from the KasperskyOS SDK. The parameters that can be changed are presented in the table below.

Configurable network stack parameters

Parameter name

Parameter description

net.inet.ip.ttl

Maximum time to live (TTL) of sent IP packets. It does not affect the ICMP protocol.

net.inet.ip.mtudisc

If its value is set to 1, "Path MTU Discovery" (RFC 1191) mode is enabled. This mode affects the maximum size of a TCP segment (Maximum Segment Size, or MSS). In this mode, the MSS value is determined by the limitations of network nodes. If "Path MTU Discovery" mode is not enabled, the MSS value does not exceed the value defined by the net.inet.tcp.mssdflt parameter.

net.inet.tcp.mssdflt

MSS value (in bytes) that is applied if the communicating side did not provide this value when opening the TCP connection, or if "Path MTU Discovery" mode (RFC 1191) is not enabled. This MSS value is also forwarded to the communicating side when opening a TCP connection.

net.inet.tcp.minmss

Minimum MSS value, in bytes.

net.inet.tcp.mss_ifmtu

If its value is set to 1, the MSS value is calculated for an opened TCP connection based on the maximum size of a transmitted data block (Maximum Transmission Unit, or MTU) of the employed network interface. If its value is set to 0, the MSS value for an opened TCP connection is calculated based on the MTU of the network interface that has the highest value for this parameter among all available network interfaces (except the loopback interface).

net.inet.tcp.keepcnt

Number of keep-alive probes (KA) to send without receiving a response before the TCP connection is considered closed. If its value is set to 0, the number of sent keep-alive probes is unlimited.

net.inet.tcp.keepidle

TCP connection idle period, after which keep-alive probes begin. This is defined in conditional units, which can be converted into seconds via division by the net.inet.tcp.slowhz parameter value.

net.inet.tcp.keepintvl

Time interval between recurring keep-alive probes when no response is received. This is defined in conditional units, which can be converted into seconds via division by the net.inet.tcp.slowhz parameter value.

net.inet.tcp.recvspace

Size of the buffer (in bytes) for data received over the TCP protocol.

net.inet.tcp.sendspace

Size of the buffer (in bytes) for data sent over the TCP protocol.

net.inet.udp.recvspace

Size of the buffer (in bytes) for data received over the UDP protocol.

net.inet.udp.sendspace

Size of the buffer (in bytes) for data sent over the UDP protocol.

MSS configuration example:

static const int mss_max = 1460;
static const int mss_min = 100;
static const char* mss_max_opt_name = "net.inet.tcp.mssdflt";
static const char* mss_min_opt_name = "net.inet.tcp.minmss";

int main(void)
{
    ...
    if ((sysctlbyname(mss_max_opt_name, NULL, NULL, &mss_max, sizeof(mss_max)) != 0) ||
        (sysctlbyname(mss_min_opt_name, NULL, NULL, &mss_min, sizeof(mss_min)) != 0))
    {
        ERROR(START, "Can't set tcp default maximum/minimum MSS value.");
        return EXIT_FAILURE;
    }
}
[Topic vfs_net_stack_dyn_conf]