Installing and Configuring a Cloudify Manager Distributed Cluster

Cloudify Cluster Architecture

Cloudify Manager 5.0.5 introduces a new cluster architecture to Cloudify. The cluster is composed of three separate services that together make up the complete Cloudify solution:

  1. Cloudify Management service – The Management service embeds the Cloudify workers framework, the REST API, the User Interface infrastructure and other backend services. The Cloudify Management service is a cluster of at least two Manager nodes running in an active/active mode.
  2. PostgreSQL database cluster – This service provides a high-availability PostgreSQL cluster based on Patroni. The cluster must consist of at least 3 nodes.
  3. RabbitMQ cluster – This service provides a high-availability RabbitMQ cluster based on the RabbitMQ best practices. The cluster must consist of 3 nodes.

This guide describes the process of configuring and installing such a cluster:

  1. [Certificates Setup](/install_maintain/installation/installing-cluster/)
  2. [Installing Services](/install_maintain/installation/installing-cluster/)
  3. [Post Installation](/install_maintain/installation/installing-cluster/)

Certificates Setup

The Cloudify Manager cluster uses the SSL protocol for:

  1. Communication between the PostgreSQL cluster nodes.
  2. Communication between the RabbitMQ cluster nodes.
  3. Communication between the Cloudify Management service cluster nodes and the other services.

Note: Wherever the term “CA” appears, it refers to the CA certificate of the CA that signed/issued the host’s public certificate.

Remark: All of the files mentioned below must exist on the relevant instance.

For each PostgreSQL and RabbitMQ cluster node we will configure the following:

  1. CA certificate path - The CA certificate should be the same for all cluster nodes. Meaning, the nodes’ public certificates are signed by the same CA.
  2. certificate (cert) path - A public certificate signed by the given CA that specifies the node’s IP.
  3. key path - The key associated with the certificate.

For each Cloudify Management service cluster node we will configure the following:

  1. PostgreSQL nodes’ CA path (CA is the same for all the cluster nodes).
  2. RabbitMQ nodes’ CA path (CA is the same for all the cluster nodes).

Example of creating a certificate and key for host myhost with IP address 1.1.1.2, using a configuration file:

  1. Writing a configuration file:

    [req]  
    distinguished_name = req_distinguished_name  
    x509_extensions = v3_ext  
    [ req_distinguished_name ]  
    commonName = _common_name # ignored, _default is used instead  
    commonName_default = myhost  
    [ v3_ext ]  
    basicConstraints=CA:false  
    authorityKeyIdentifier=keyid:true  
    subjectKeyIdentifier=hash  
    subjectAltName=DNS:myhost,DNS:localhost,IP:127.0.0.1,IP:1.1.1.2  
    
  2. Generating a certificate and an associated key using a CA certificate, a CA key, and the configuration file. The first command generates a certificate signing request (CSR) and a key from the configuration file; the second signs the CSR with the given CA to produce the certificate:

       
    sudo openssl req -newkey rsa:2048 -nodes -batch -sha256 -config conffile -out myhost.crt.csr -keyout myhost.key 
    sudo openssl x509 -days 3650 -sha256 -req -in myhost.crt.csr -out myhost.crt -extensions v3_ext -extfile conffile -CA ca.crt -CAkey ca.key -CAcreateserial  
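The two commands above can be exercised end-to-end with the sketch below. It is a hedged, self-contained variant for testing: a throwaway CA is generated first (in production you would use your organization’s CA), the authorityKeyIdentifier line is left out of the configuration file for brevity, and all file names are illustrative.

```shell
set -e

# Configuration file from step 1 (AKID line omitted in this sketch)
cat > conffile <<'EOF'
[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_ext
[ req_distinguished_name ]
commonName = _common_name
commonName_default = myhost
[ v3_ext ]
basicConstraints = CA:FALSE
subjectKeyIdentifier = hash
subjectAltName = DNS:myhost,DNS:localhost,IP:127.0.0.1,IP:1.1.1.2
EOF

# Throwaway self-signed CA, for testing only
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 -sha256 \
    -subj "/CN=test-ca" -keyout ca.key -out ca.crt

# CSR + key, then sign with the CA (the same two commands as in the guide)
openssl req -newkey rsa:2048 -nodes -batch -sha256 -config conffile \
    -out myhost.crt.csr -keyout myhost.key
openssl x509 -days 3650 -sha256 -req -in myhost.crt.csr -out myhost.crt \
    -extensions v3_ext -extfile conffile -CA ca.crt -CAkey ca.key -CAcreateserial

# Confirm the certificate chains to the CA and carries the expected SANs
openssl verify -CAfile ca.crt myhost.crt
openssl x509 -in myhost.crt -noout -text | grep 'IP Address:1.1.1.2'
```

The final two commands are a quick sanity check worth running on every generated certificate before installation.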
    

Installing Services

The Cloudify Manager cluster best practice consists of three main services: a PostgreSQL database cluster, a RabbitMQ cluster, and the Cloudify Management service. Each of these services is a cluster of three nodes, and each node must be installed separately, in order. An optional additional service is the Management Service Load Balancer, which should be installed after all the other components.
The following sections describe how to install and configure the Cloudify Manager cluster services. The order of installation should be as follows:

  1. [PostgreSQL Database Cluster](/install_maintain/installation/installing-cluster/)
  2. [RabbitMQ Cluster](/install_maintain/installation/installing-cluster/)
  3. [Cloudify Management Service](/install_maintain/installation/installing-cluster/)
  4. [Management Service Load Balancer](/install_maintain/installation/installing-cluster/)

PostgreSQL Database Cluster

The PostgreSQL database high-availability cluster is comprised of three nodes (Cloudify best practice) or more.

Note Make sure the following ports are open for each node:

| Port | Description |
|------|-------------|
| tcp/2379 | etcd port. |
| tcp/2380 | etcd port. |
| tcp/5432 | PostgreSQL connection port. |
| tcp/8008 | Patroni control port. |

Externally Hosted PostgreSQL Database Installation

Azure DBaaS for Postgres

Cloudify supports Microsoft’s Azure Database for Postgres as an external database option replacing Cloudify’s PostgreSQL deployment.

Azure Database for Postgres is a fully managed database-as-a-service offering that can handle mission-critical workloads with predictable performance, security, high availability, and dynamic scalability. It is available in two deployment options, as a single server and as a Hyperscale (Citus) cluster (preview).

Setting up Azure database for PostgreSQL as the Cloudify database

Azure DBaaS supports both a clustered instance and a single instance that can be resized on demand.
Unlike other DBaaS vendors, Azure does not grant access to the postgres user with SuperUser privileges, so while working with Azure DBaaS is fully supported, the configuration differs slightly from a regular Postgres installation.

Using Azure DBaaS (either the single instance or the clustered instance) requires specific setup changes to the Cloudify Manager configuration.
The Azure connection string for users must be in the form <username>@<dbhostname>; for a DB user named cloudify and a DB hostname named azurepg, the configured user must be cloudify@azurepg.
For example, for an Azure DBaaS for Postgres instance with the hostname azurepg.postgres.database.azure.com, the settings in /etc/cloudify/config.yaml would be configured as follows:

postgresql_client:
  host: 'azurepg.postgres.database.azure.com'
  ca_path: '/path/to/azure/dbaas/ca/certificate'
  server_db_name: 'postgres'
  server_username: 'testuser@azurepg'
  server_password: 'testuserpassword'
  cloudify_db_name: 'cloudify_db'
  cloudify_username: 'cloudify@azurepg'
  cloudify_password: 'cloudify'
  ssl_enabled: true
  ssl_client_verification: false

server_username is used by Cloudify to make the initial connection to the database and to create all the resources Cloudify needs to operate, including, among others, the cloudify_username user.
cloudify_username is used by Cloudify after the installation for day-to-day operations.

Note that both server_username and cloudify_username carry the @azurepg suffix, as required by Azure DBaaS for Postgres.

Locally Hosted Cloudify PostgreSQL Database Cluster Installation

Configure the following settings in /etc/cloudify/config.yaml for each PostgreSQL node:

postgresql_server:

    postgres_password: '<strong password for postgres superuser>'

    cert_path: '<path to certificate for this server>'
    key_path: '<path to key for this server>'
    ca_path: '<path to ca certificate>'

    cluster:
        nodes:
            <first postgresql instance-name>:
                ip: <private ip of postgres server 1>
            <second postgresql instance-name>:
                ip: <private ip of postgres server 2>
            <third postgresql instance-name>:
                ip: <private ip of postgres server 3>
        
        # Should be the same on all nodes
        etcd:
            cluster_token: '<a strong secret string (password-like)>'
            root_password: '<strong password for etcd root user>'
            patroni_password: '<strong password for patroni to interface with etcd>'

        # Should be the same on all nodes
        patroni:
            rest_password: '<strong password for the Patroni REST API>'

        # Should be the same on all nodes
        postgres:
            replicator_password: '<strong password for replication user>'

    enable_remote_connections: true
    ssl_enabled: true
    ssl_only_connections: true

    # If true, client certificate verification will be required for PostgreSQL clients,
    # e.g. Cloudify Management service cluster nodes.
    ssl_client_verification: false

services_to_install:
    - database_service
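As a concrete, purely illustrative sketch, a three-node cluster whose members have private IPs 10.0.0.11–10.0.0.13 might fill in the cluster nodes section as follows (instance names and IPs are assumptions for the example):

```yaml
postgresql_server:
    cluster:
        nodes:
            pg-node-1:
                ip: 10.0.0.11
            pg-node-2:
                ip: 10.0.0.12
            pg-node-3:
                ip: 10.0.0.13
```

The same nodes mapping, with identical names and IPs, must appear on all three PostgreSQL nodes.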

Execute on each node, in order:

cfy_manager install [--private-ip <PRIVATE_IP>] [--public-ip <PUBLIC_IP>] [-v]

RabbitMQ Cluster

The RabbitMQ service is a cluster comprised of any number of nodes; Cloudify best practice is three nodes.

Note Please refer to the RabbitMQ networking guide - Ports to verify the open ports needed for a RabbitMQ cluster installation.

Externally Hosted RabbitMQ Installation

Locally Hosted RabbitMQ Cluster Installation

Configure and install the first RabbitMQ node and then the rest of the nodes.

For the first RabbitMQ, configure the following settings in /etc/cloudify/config.yaml:

rabbitmq:

    username: '<secure username for queue management>'
    password: '<secure password for queue management>'

    cluster_members:
        <short host name of rabbit server 1- e.g. using `hostname -s`>:
            networks:
                default: <private ip of rabbit server 1>
                <other network name>: <address for this node on `other network`>
        <short host name of rabbit server 2>:
            networks:
                default: <private ip of rabbit server 2>
                <other network name>: <address for this node on `other network`>
        <short host name of rabbit server 3>:
            networks:
                default: <private ip of rabbit server 3>
                <other network name>: <address for this node on `other network`>

    cert_path: '<path to certificate for this server>'
    key_path: '<path to key for this server>'
    ca_path: '<path to ca certificate>'

    nodename: '<short host name of this rabbit server>'
    
    # Should be the same on all nodes
    erlang_cookie: '<a strong secret string (password-like)>'

services_to_install:
    - queue_service

For the rest of the nodes, configure the following settings in /etc/cloudify/config.yaml:

rabbitmq:

    username: '<username for queue management>'
    password: '<secure password for queue management>'

    cluster_members:
        <short host name of rabbit server 1- e.g. using `hostname -s`>:
            networks:
                default: <private ip of rabbit server 1>
                <other network name>: <address for this node on `other network`>
        <short host name of rabbit server 2>:
            networks:
                default: <private ip of rabbit server 2>
                <other network name>: <address for this node on `other network`>
        <short host name of rabbit server 3>:
            networks:
                default: <private ip of rabbit server 3>
                <other network name>: <address for this node on `other network`>

    cert_path: '<path to certificate for this server>'
    key_path: '<path to key for this server>'
    ca_path: '<path to ca certificate>'

    nodename: '<short host name of this rabbit server>'

    join_cluster: '<short host name of the first rabbit server>'

    # Should be the same on all nodes
    erlang_cookie: '<a strong secret string (password-like)>'

services_to_install:
    - queue_service

Execute on each node, in order:

cfy_manager install [--private-ip <PRIVATE_IP>] [--public-ip <PUBLIC_IP>] [-v]

Cloudify Management Service

The Cloudify Management service is a cluster comprised of two to ten nodes; Cloudify best practice is three nodes.

Note Make sure the following ports are open for each node:

| Port | Description |
|------|-------------|
| tcp/80 | REST API and UI. For improved security we recommend using secure communication (SSL); if your system is configured for SSL, this port should be closed. |
| tcp/443 | REST API and UI. |
| tcp/22 | Remote access to the Manager from the Cloudify CLI. |
| tcp/5671 | RabbitMQ. This port must be accessible from agent VMs. |
| tcp/53333 | Internal REST communications. This port must be accessible from agent VMs. |
| tcp/5432 | PostgreSQL connection port. |
| tcp/8008 | Patroni control port. |
| tcp/22000 | Filesystem replication port. |

Configure the following settings in /etc/cloudify/config.yaml for each Manager service cluster node:

Note: If you want to use an externally hosted PostgreSQL database with an internally hosted RabbitMQ, or vice versa, use only the relevant sections from the following examples in your configuration.
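As a hedged sketch of such a configuration for a Management service node connecting to the locally hosted PostgreSQL and RabbitMQ clusters configured above (key names follow the service sections shown earlier; verify them against your config.yaml reference before use):

```yaml
rabbitmq:
    username: '<secure username for queue management>'
    password: '<secure password for queue management>'
    ca_path: '<path to the CA certificate of the RabbitMQ cluster>'
    cluster_members:
        <short host name of rabbit server 1>:
            networks:
                default: <private ip of rabbit server 1>
        <short host name of rabbit server 2>:
            networks:
                default: <private ip of rabbit server 2>
        <short host name of rabbit server 3>:
            networks:
                default: <private ip of rabbit server 3>

postgresql_server:
    ca_path: '<path to the CA certificate of the PostgreSQL cluster>'
    cluster:
        nodes:
            <first postgresql instance-name>:
                ip: <private ip of postgres server 1>
            <second postgresql instance-name>:
                ip: <private ip of postgres server 2>
            <third postgresql instance-name>:
                ip: <private ip of postgres server 3>

postgresql_client:
    ssl_enabled: true

services_to_install:
    - manager_service
```

The two ca_path values correspond to the PostgreSQL and RabbitMQ CA paths listed in the Certificates Setup section.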

Execute on each node:

cfy_manager install [--private-ip <PRIVATE_IP>] [--public-ip <PUBLIC_IP>] [-v]

Management Service Load Balancer

The Cloudify setup requires a load-balancer to direct the traffic across the Cloudify Management service cluster nodes. Any load-balancer can be used provided that the following are supported:

  1. The load-balancer directs the traffic over the following ports to the Manager nodes based on round robin or any other load sharing policy:
    • Port 443 - REST API & UI.
    • Port 53333 - Agents to Manager communication.
    • Note Port 80 is not mentioned and should not be load balanced because the recommended approach is to use SSL.
  2. Session stickiness must be maintained.

Accessing the Load Balancer Using Cloudify Agents

If you use a load-balancer and want Cloudify agents to communicate with it instead of with a specific Cloudify Management service cluster node, follow the Multi-Network Management guide and specify the load-balancer private IP as the value of the ‘external’ key under ‘networks’. If you want all Cloudify agent communication to go through the load-balancer, specify its private IP as the value of the ‘default’ key under ‘networks’.
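As a hedged illustration (the exact placement of the networks mapping in config.yaml should be taken from the Multi-Network Management guide; values are placeholders), routing agents on the ‘external’ network through the load-balancer might look like:

```yaml
# Illustrative sketch only: key placement and network names per the
# Multi-Network Management guide.
networks:
    default: <private ip of this manager node>
    external: <load-balancer private-ip>
```

To route all agent traffic through the load-balancer, its private IP would instead be set as the ‘default’ value.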

Installing a Load Balancer

Note Although the load-balancer is not provided by Cloudify, here is a simple example of using HAProxy as a load-balancer.
To use HAProxy as a load-balancer, first install HAProxy on your machine and set up the relevant certificates.
Then configure HAProxy as the load-balancer for the Cloudify Managers, for example with the following configuration:

global
    maxconn 100
    tune.ssl.default-dh-param 2048
defaults
    log global
    retries 2
    timeout client 30m
    timeout connect 4s
    timeout server 30m
    timeout check 5s
listen manager
    bind *:80
    bind *:443 ssl crt /etc/haproxy/cert.pem
    redirect scheme https if !{ ssl_fc }
    mode http
    option forwardfor
    stick-table type ip size 1m expire 1h
    stick on src
    option httpchk GET /api/v3.1/status
    http-check expect status 401
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server manager_<first manager private-ip> <first manager public-ip> maxconn 100 ssl check check-ssl port 443 ca-file /etc/haproxy/ca.crt
    server manager_<second manager private-ip> <second manager public-ip> maxconn 100 ssl check check-ssl port 443 ca-file /etc/haproxy/ca.crt
    server manager_<third manager private-ip> <third manager public-ip> maxconn 100 ssl check check-ssl port 443 ca-file /etc/haproxy/ca.crt

Post Installation

Set up Cloudify HA cluster status reporters

  1. Collect the following data: the status reporter tokens, the Cloudify REST CA certificate local path, and the IPs of the current Manager nodes.
  2. To enable Cloudify’s monitoring of the RabbitMQ cluster status, configure the node’s status reporter by executing (on every RabbitMQ node’s machine):
cfy_manager status-reporter configure --token <broker status reporter token> --ca-path <Cloudify-rest CA certificate local path> --managers-ip <list of current managers ip>
  3. To enable Cloudify’s monitoring of the PostgreSQL cluster status, configure the node’s status reporter by executing (on every PostgreSQL node’s machine):
cfy_manager status-reporter configure --token <db status reporter token> --ca-path <Cloudify-rest CA certificate local path> --managers-ip <list of current managers ip>
  4. Verify the configuration was applied successfully by running the following command (all statuses should be OK):
cfy cluster status

Update the CLI

Update all remote CLI instances (not hosted on the manager) to the newly deployed Cloudify version. Please refer to the CLI installation guide for further instructions.

Run the following command from the client in order to connect to the load-balancer:

cfy profiles use <load-balancer host ip> -u <username> -p <password> -t <tenant-name>

If you did not specify the license path in the config.yaml file during Manager installation, you can upload a valid Cloudify license from the client using the following command:

cfy license upload <path to the license file>

Day 2 cluster operations

Please refer to the Day 2 cluster operations guide for further operations regarding the Cloudify active-active cluster.