Using Clusters to Provide High Availability

If you have a Premium version of Cloudify Manager, an admin user can create a cluster of Cloudify Managers to enable high availability.

It is recommended that you have three Cloudify Managers in a cluster so that, if one node fails or becomes unreachable, the remaining nodes still form a majority and the cluster can continue to operate normally (see Managing Network Failure below).

A Cloudify Manager cluster is dynamic, meaning that you do not need to specify the size of the cluster in advance.

For more information about working with clusters, refer to the CLI cluster command.

How High Availability Works

One Cloudify Manager is designated as the active Cloudify Manager, and the others are designated as hot standbys that constantly mirror the data of the active Manager. If the active Cloudify Manager's health check fails, an automatic failover switch activates one of the hot standbys as the new active Manager. Both the CLI and the Cloudify Agents then start contacting the new active Manager. When the previous active Manager is restored to a healthy state, it becomes a hot standby node and mirrors the data of the new active Manager.

Synchronized Data

All Cloudify database and filesystem data is mirrored on the cluster hot standby nodes. This includes all objects that are managed using the REST service, such as blueprints and deployments, and management data, such as users and tenants.

Health Checks

To determine the health of a Cloudify Manager node, the following are verified:

A Cloudify Manager that is down remains in the cluster unless you remove it. To remove a Cloudify Manager, run cfy cluster nodes remove.
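For example, a minimal sequence to list the cluster nodes and remove one that is down (the node name manager-2 is hypothetical; use the name reported by the list command):

cfy cluster nodes list
cfy cluster nodes remove manager-2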

Failure of the Master Cloudify Manager

If the active Cloudify Manager fails, investigate and fix the issues that caused the original master to fail, or add another Cloudify Manager to the cluster. This keeps high availability intact and avoids leaving the cluster with a single point of failure.

Finding the Active Cloudify Manager

To find the active Manager in a Cloudify Manager cluster, you can either run cfy cluster nodes list, or query the status REST endpoint of each Manager and check the response code (the active Manager returns 200):

curl -u admin:admin https://<manager_ip>/api/v3.1/status
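For example, a minimal sketch that runs this check against every node and prints the HTTP response code; the addresses below are hypothetical, and -k skips certificate verification for brevity:

for ip in 192.168.0.1 192.168.0.2 192.168.0.3; do
   code=$(curl -sk -o /dev/null -w "%{http_code}" -u admin:admin "https://$ip/api/v3.1/status")
   echo "$ip -> $code"
done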

Selecting a New Active Manager

To handle the situation in which the active Cloudify Manager fails one or more health checks, all Managers in the cluster constantly monitor the Consul "next master" value. When one of the standby Manager instances in the cluster detects that "next master" points to it, it starts any services that are not already running (RabbitMQ and mgmtworker) and promotes its PostgreSQL instance to master. Once the active Manager changes, the hot standby nodes begin to follow it, mirroring its filesystem and database.

If the original active Cloudify Manager was processing a workflow at the time it failed, the newly active Manager attempts to resume that workflow. If the workflow is not declared as resumable, it fails immediately.

Managing Network Failure

If there is a loss of connection between the Cloudify Managers in the cluster, the cluster might become partitioned into several disconnected parts. The partition that contains the majority of the nodes continues to operate as normal. The other partition, containing the minority of the nodes (usually only one), enters active minority mode: the node becomes active and responds to requests, but its writes are not replicated to the majority of the cluster and are at risk of being lost. Therefore, it is not recommended to continue using the cluster when the majority of the nodes are unreachable, as reported by cfy cluster nodes list. When the connection is restored, the Cloudify Manager with the most recently updated database becomes the active Manager. Data that was accumulated on the other cluster nodes during the disconnection is not synchronized, and is lost.

Creating a Cluster

Create a cluster after you finish installing your Cloudify Managers. When you run the cfy cluster start command on the first Cloudify Manager, high availability is configured automatically. After installation, use the cfy cluster join command to add more Cloudify Managers to the cluster. The Cloudify Managers that you join to the cluster must be in an empty state; otherwise, the operation fails.

The data on each Cloudify Manager mirrors that of the active Cloudify Manager. Operations can only be performed on the active Manager in the cluster, but are also reflected on the standby Managers. Similarly, upload requests can only be sent to the active Cloudify Manager.

Within the cluster, Cloudify uses the Consul utility and internal health checks to detect when the active Cloudify Manager is down, and which standby will become active.

Create Cluster Process

  1. Finish installing a Cloudify Manager.
  2. Run cfy cluster start on the installed Manager to designate this Cloudify Manager instance as the active Manager.
  3. Run cfy cluster join on two other clean Cloudify Manager instances.
  4. (Optional) To remove a Cloudify Manager from the cluster, run cfy cluster nodes remove <node-id>.

For example:

cfy profiles use <master IP>
cfy cluster start (on the Manager that you want to set as active)
cfy profiles use <secondary IP>
cfy cluster join [--cluster-host-ip <new cfy manager IP>] --cluster-node-name <some name> <master ip> (on a Manager that you want to add to the cluster)
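For a three-node cluster with hypothetical addresses 10.0.0.1, 10.0.0.2 and 10.0.0.3 (10.0.0.1 being the Manager to set as active) and hypothetical node names manager-2 and manager-3, the sequence might look like this:

cfy profiles use 10.0.0.1
cfy cluster start

cfy profiles use 10.0.0.2
cfy cluster join --cluster-node-name manager-2 10.0.0.1

cfy profiles use 10.0.0.3
cfy cluster join --cluster-node-name manager-3 10.0.0.1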

Cluster node options

When starting the cluster or joining a node to it, you can pass --options to specify the following configuration options:

Upgrading Clusters

Cloudify Manager snapshots do not include clusters. If you restore the snapshot of a Cloudify Manager that was the active Manager in a cluster to a new version, you must join the other Cloudify Managers to recreate the cluster. Managers in a cluster must all be the same Cloudify version.

Upgrade Cluster Process

Upgrading via Snapshot Restore on a New VM
In this process you create new VMs for all Cloudify Managers that will be part of the cluster.

  1. Create a snapshot of the active Cloudify Manager.
  2. Bootstrap three Cloudify Managers with the upgraded version.
  3. Restore the snapshot to one of the Cloudify Manager instances.
  4. Run cluster start on the Manager with the restored snapshot, to designate this Cloudify Manager instance as the active Manager.
  5. Run cluster join on the two other installed Cloudify Manager instances to designate them as hot standbys.
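As a sketch of steps 1, 3 and 4, assuming a hypothetical snapshot ID pre-upgrade and that your CLI profile points at the relevant Manager in each step:

# On the old active Manager
cfy snapshots create pre-upgrade
cfy snapshots download pre-upgrade -o pre-upgrade.zip

# On one of the newly bootstrapped Managers
cfy snapshots upload pre-upgrade.zip -s pre-upgrade
cfy snapshots restore pre-upgrade
cfy cluster start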

Upgrading via Snapshot Restore on an Existing VM
In this process you tear down the active Cloudify Manager and install a new one on the same VM. You create new VMs for the Cloudify Managers that will become the hot standbys in the cluster.

  1. Create a snapshot of the active Cloudify Manager.
  2. Uninstall Cloudify Manager from the active machine.
  3. Install an updated Manager on the existing machine.
  4. Restore the snapshot to the Cloudify Manager instance.
  5. Run cluster start to designate this Cloudify Manager instance as the active Manager.
  6. Bootstrap two new Cloudify Manager VMs with the upgraded version.
  7. Run cluster join on the two new installed Cloudify Manager instances to designate them as hot standbys.

Using a load balancer

While the Cloudify CLI automatically finds the active node when it uses a cluster profile, that mechanism is not available for the Cloudify Management Console. To let users reach the Cloudify Management Console through a known, static address, you can use a load balancer such as HAProxy. Configure the load balancer with a health check that contacts all the nodes in the cluster to find the current active node, and forwards all traffic to that node. The load balancer address can then be used both for accessing the Cloudify Management Console and for creating a CLI profile.

Diagrams: clients connecting without a load balancer, and clients connecting via a load balancer.

Implementing a load balancer health check

To configure the load balancer to pass traffic to the active node, implement a health check that queries all nodes in the cluster and examines the response code, as described in the Finding the Active Cloudify Manager section.

Example load balancer configuration

With HAProxy, the health check can be implemented using the http-check directive. To use it, first obtain the value for the Authorization HTTP header by encoding the Cloudify Manager credentials:

echo -n "admin:admin" | base64

Use the resulting value in the HAProxy configuration, for example:

backend http_back
   balance roundrobin
   option httpchk GET /api/v3.1/status HTTP/1.0\r\nAuthorization:\ Basic\ YWRtaW46YWRtaW4=
   http-check expect status 200
   server server_name_1 192.168.0.1:80 check
   server server_name_2 192.168.0.2:80 check

In the example above, 192.168.0.1 and 192.168.0.2 are the public IP addresses of the two cluster nodes, and YWRtaW46YWRtaW4= is the Base64-encoded admin:admin credentials string.
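The backend above only receives traffic once a frontend (or listen) section points to it. A minimal sketch, assuming plain HTTP on port 80:

frontend http_front
   bind *:80
   default_backend http_back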

Tearing down clusters

If the active node is reachable and responding, we recommend that you remove all nodes from the cluster before you uninstall them. This avoids unnecessary failovers that put stress on the network and on the nodes.

Cluster teardown process

  1. Run cluster nodes list and note the current active node and the non-active nodes.
  2. For each non-active node, run: cluster nodes remove <node name>
  3. To tear down each non-active node, run the following from its command line: cfy_manager remove -f
  4. To tear down the cluster, run the following from the command line of the active node: cfy_manager remove -f
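For a cluster with hypothetical node names manager-1 (active), manager-2 and manager-3, the teardown might look like this:

cfy cluster nodes list
cfy cluster nodes remove manager-2
cfy cluster nodes remove manager-3

# On manager-2 and manager-3, and finally on manager-1:
cfy_manager remove -f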

Additional Information

Cluster Tools

The following tools are used to facilitate clustering in Cloudify.

Services Run with Cluster

The cluster function runs the following services:

Security

The following security mechanisms are implemented.

Internal CA certificate

The internal CA certificate, which is used by the agents to verify manager connections, is replicated between all cluster nodes. When joining the cluster, a new replica copies the internal CA certificate (and the key) from the active node, and uses that to sign a new internal certificate, which will be used by servers on that replica. This means that the agents can continue using the same internal CA certificate to access that replica, if it becomes the active node.
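As an illustration only, you can verify that a replica's internal certificate chains back to the shared internal CA with openssl; the file names below are hypothetical placeholders, not the paths Cloudify actually uses:

# Hypothetical file names; substitute the real CA and server certificate paths
openssl verify -CAfile internal_ca_cert.pem replica_internal_cert.pem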

Troubleshooting

The primary log file for troubleshooting is /var/log/cloudify/cloudify-cluster.log. All services log to journald. To view their logs, use journalctl:
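For example (the unit name below is an assumption based on the mgmtworker service mentioned earlier in this document):

# Follow the logs of a single service
sudo journalctl -u cloudify-mgmtworker -f

# Show all log entries since the last boot
sudo journalctl -b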

If required, direct access to the Consul REST API is also possible from the Manager machine: it listens locally on port 8500, and authentication requires passing the SSL client certificate located at /etc/cloudify/cluster-ssl/consul_client.crt (with the key at /etc/cloudify/cluster-ssl/consul_client.key).
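For example, a minimal query using curl with the client certificate and key mentioned above; /v1/status/leader is a standard Consul API endpoint, and -k skips verification of the server certificate:

curl -k --cert /etc/cloudify/cluster-ssl/consul_client.crt \
     --key /etc/cloudify/cluster-ssl/consul_client.key \
     https://localhost:8500/v1/status/leader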