
Description

Consider a server pool of two nodes:

  • vm04 - the main node (with the deployed pool)
  • vm03 - an additional node (nothing is deployed on it yet)

Creating a web cluster solves the following tasks:

  • Maintaining a backup copy of the site files.
  • Distributing web load across two nodes.

The existing cluster implementation does not solve the following tasks:

  • A hot copy of the web server (the second web node) is not a full-fledged replacement for the first one without additional intervention by the server administrator.
  • Restoring files after deletion. Synchronization, including deletions, happens quite quickly, so deleting a file on any node removes it from all instances.

The current setup distinguishes two types of web servers:

  • Balancer: proxies requests to all other nodes.
  • Web: processes PHP requests.

The first server in the group always performs both the balancer and the web role; all additional servers perform only the web role. In a Virtual Appliance the balancer role cannot be reassigned. If you need to move it to a separate server, see the next lesson.

Web node settings

Web node setup includes:

  • File synchronization settings between the nodes.
  • Web service configuration for request processing.

BitrixVM provides two scenarios for configuring web nodes:

  1. Configure the web node step by step:
    • the first step launches the file synchronization scenario,
    • the second step configures the web services.
    Separate the steps in time if there are many files, and make sure synchronization has completed without errors before launching the web services.
  2. Set everything up in a single step, with one scenario launch.

File synchronization / lsyncd

File synchronization is performed using lsyncd.

Basic principle:

  • inotify notifies the application about changes in the file system;
  • lsyncd aggregates the events and launches rsync/ssh for synchronization;
  • synchronization runs under the user bitrix.
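In effect, each aggregated batch of changes ends up as an rsync-over-ssh call. A rough illustration of the equivalent command (lsyncd assembles the exact flags itself, so this is only a sketch):

  # roughly what lsyncd runs for a batch of changes on /home/bitrix/www (illustrative)
  # deletions are propagated too, hence --delete
  sudo -u bitrix rsync -a --delete -e ssh /home/bitrix/www/ bitrix@vm03:/home/bitrix/www/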

Directories synchronized from the server with the balancer role to the other servers:

- /etc/nginx/bx/conf,
- /etc/nginx/bx/maps,
- /etc/nginx/bx/site_avaliable,
- /etc/nginx/bx/site_enabled,
- /etc/nginx/bx/site_cluster,
- /etc/nginx/bx/settings,
- /etc/httpd/bx/conf.

Between servers with the web role, the site directories are synchronized, except for the following subdirectories:

- bitrix/cache
- bitrix/managed_cache
- bitrix/stack_cache
- upload/resize_cache.

lsyncd settings

Setup includes the following steps:

  1. Create SSH keys for the user bitrix. They are required for rsync over ssh. For example:
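    A minimal sketch of the equivalent manual commands (the key path and options here are assumptions; the scenario performs its own steps):

    # generate a passphrase-less key pair for the user bitrix, for unattended rsync/ssh
    sudo -u bitrix ssh-keygen -t rsa -N "" -f /home/bitrix/.ssh/id_rsa
    # allow passwordless login to the other node in the pool
    sudo -u bitrix ssh-copy-id bitrix@vm03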
  2. Create lsyncd-<HOSTNAME> synchronization services, where HOSTNAME is the server name (a unique server identifier in the pool), to receive files when they are updated, created or deleted.

    For example, if we create a web node on the server vm03, then the server vm04 will get the service lsyncd-vm03, and the server vm03 the service lsyncd-vm04.
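    Assuming the services are registered as ordinary systemd units, their state can be checked the usual way:

    # on vm04: the unit that pushes changes to vm03
    systemctl status lsyncd-vm03
    # recent synchronization activity
    journalctl -u lsyncd-vm03 --since today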

  3. Create directories for temporary lsyncd files. This is done via the systemd-tmpfiles service. The config file is /etc/tmpfiles.d/lsyncd.conf.
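    The actual contents of /etc/tmpfiles.d/lsyncd.conf are generated by the scenario; a typical tmpfiles.d entry of this kind might look like the following (the path is a hypothetical example):

    # create a runtime directory for lsyncd pid/status files at boot
    d /run/lsyncd 0755 root root -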
  4. lsyncd config files are located at /etc/lsyncd-<HOSTNAME>.conf.

    The main difference between the servers: for the server with the balancer role (vm04 in the example) the config files contain both the subdirectories from /etc and the site directories. For the rest of the additional nodes, the config files contain only the site directories.

    For example, for the pool described above, the file /etc/lsyncd-vm03.conf on the server vm04 will contain the following directories:

    sync {
      default.rsyncssh,
      host        = "vm03",
      source      = "/etc/nginx/bx/",
      targetdir   = "/etc/nginx/bx/",
      exclude   = {
        "site_enabled/http_balancer*.conf",
        "site_enabled/https_balancer*.conf",
        "site_enabled/upstream.conf",
        "site_enabled/pool_manager.conf",
        "site_ext_enabled/",
        "server_monitor.conf",
        "pool_passwords",
      },
      ...
    }

    -- httpd
    sync {
      default.rsyncssh,
      host        = "vm03",
      source      = "/etc/httpd/bx/conf/",
      targetdir   = "/etc/httpd/bx/conf/",
    }

    sync {
      default.rsyncssh,
      host        = "vm03",
      source      = "/home/bitrix/www/",
      targetdir   = "/home/bitrix/www/",
      exclude   = {
        "bitrix/cache/",
        "bitrix/managed_cache/",
        "bitrix/stack_cache/",
        "upload/resize_cache/",
        "*.log", -- make sure that "*.log" also excludes component and module logs, e.g. bizproc.log
      },
      ...
    }
    The config file /etc/lsyncd-vm04.conf on the server vm03 will contain the directories:

    sync {
      default.rsyncssh,
      host        = "vm04",
      source      = "/home/bitrix/www",
      targetdir   = "/home/bitrix/www",
      exclude   = {
        "bitrix/cache/",
        "bitrix/managed_cache/",
        "bitrix/stack_cache/",
        "upload/resize_cache/",
        "*.log", -- make sure that "*.log" also excludes component and module logs, e.g. bizproc.log
        "bitrix/.settings.php",
        "php_interface/*.php",
    ...
    }

    Note: if the server contains more than one site, all such sites will be synchronized.

  5. Launch the services and track the synchronization status. The scenario includes a check that synchronization of the selected directories has completed. It is done via a status file for each launched service: /var/log/lsyncd/daemon-<HOSTNAME>.status. For example:
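    To verify manually that synchronization has finished on a node, the status file can be inspected directly (hostname as in the example above):

    # lsyncd rewrites this report periodically; an empty delay list means the node is in sync
    cat /var/log/lsyncd/daemon-vm03.status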

Web service setup and config

Perform the following steps:

  1. Create a login for working with the database. The database is accessed from a remote server, which is why a new database user is required. For example:
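    A sketch of what such a login amounts to in MySQL (the user, password and database names here are placeholders; the scenario generates its own):

    # allow connections from the new web node, not only from localhost
    mysql -u root -e "CREATE USER 'bx_clusteruser'@'vm03' IDENTIFIED BY 'strong_password';"
    mysql -u root -e "GRANT ALL PRIVILEGES ON sitemanager.* TO 'bx_clusteruser'@'vm03';"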
  2. Write the new parameters and the site config. This step updates the settings files and creates new records in the database for the cluster module.
  3. Create the balancer on the first pool node (nginx service config). Load balancing is always performed on the first node in the group (vm04 in our example).

    Configuration files: /etc/nginx/bx/site_enabled/http_balancer.conf and https_balancer_<SITENAME>.conf. The web server pool settings are contained in the config /etc/nginx/bx/site_enabled/upstream.conf.

    Load is distributed by the client's IP address: in the standard situation, a client coming from the same IPv4 address lands on the same web server. The additional https_balancer* configs account for the possibility of several sites with different certificates existing within a single configuration. See the sketch below.
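    Distribution by client IP maps naturally onto the nginx ip_hash directive; an illustrative sketch of what upstream.conf expresses (the upstream name, backends and ports are assumptions based on the example pool):

    # /etc/nginx/bx/site_enabled/upstream.conf (illustrative sketch)
    upstream bx_cluster {
      ip_hash;               # pin each client IP to one backend
      server 127.0.0.1:8090; # vm04: the balancer node also carries the web role
      server vm03:8090;      # additional web node
    }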
  4. Add a status page for Apache, which allows the cluster module to monitor the status of the connected service. For example:
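    The status page is the standard Apache mod_status handler; an illustrative snippet (the exact location and access rules in BitrixVM may differ):

    # expose mod_status so the cluster module can poll the worker
    <Location "/server-status">
      SetHandler server-status
      Require ip 127.0.0.1
    </Location>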
  5. Add the new web server in the Web Cluster module.

    Additionally, at this step the server in the pool is assigned a new role: the web server role.




