Managing Deployment Workflows
Workflow and Execution Parameters
Workflows can have parameters. Workflow parameters are declared in the blueprint, and each parameter can be declared as either mandatory or optional with a default value. To learn more about parameter declaration, refer to Creating Custom Workflows. To view a workflow's parameters from the CLI, use the following command:
cfy workflows get my_workflow -d my_deployment
This command shows information about the my_workflow workflow of the my_deployment deployment, including the workflow's mandatory parameters as well as its optional parameters and their default values.
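Parameter values can also be supplied when an execution is started. For example, assuming my_workflow declares a node_id parameter (a hypothetical name used only for illustration), the -p/--parameters option of cfy executions start accepts key=value pairs:
cfy executions start my_workflow -d my_deployment -p node_id=vm_1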
Cancelling Workflow Executions
It is possible to cancel an execution whose status is pending, started or queued. There are three types of execution cancellations: Standard cancellation - A cancel request is posted for the execution, and the execution's status becomes cancelling. However, the actions taken upon such a request are up to the workflow being executed: it might try to stop, perform a full rollback, or even ignore the request completely and continue executing.
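A cancel request can be posted from the CLI with cfy executions cancel (the execution ID below is a placeholder); passing the --force flag requests a force-cancellation instead:
cfy executions cancel <execution-id>
cfy executions cancel <execution-id> --force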
Workflow Error Handling
Task Retries
When an error is raised from the workflow itself, the workflow execution will fail - it will end with a failed status, and should have an error message under its error field. There is no built-in retry mechanism for the entire workflow. However, there is a retry mechanism for task execution within a workflow. Two types of errors can occur during task execution: Recoverable and NonRecoverable. By default, all errors originating from tasks are Recoverable.
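As an illustrative sketch (not taken from this page), a plugin operation can influence retry behavior through the exception types in cloudify.exceptions: a RecoverableError is retried according to the task retry settings, while a NonRecoverableError fails the task immediately. The host and port properties below are hypothetical and used only for this example:

from cloudify import ctx
from cloudify.decorators import operation
from cloudify.exceptions import NonRecoverableError, RecoverableError

import socket

@operation
def check_endpoint(**kwargs):
    # 'host' and 'port' are hypothetical node properties used only in this sketch
    host = ctx.node.properties.get('host')
    port = ctx.node.properties.get('port', 80)
    if not host:
        # A NonRecoverableError fails the task immediately - no retries
        raise NonRecoverableError('The host property is required')
    try:
        socket.create_connection((host, port), timeout=5).close()
    except OSError as error:
        # A RecoverableError makes the engine retry the task according to
        # the configured task retry settings
        raise RecoverableError(
            'Cannot reach {0}:{1} yet: {2}'.format(host, port, error))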
Workflow Execution Statuses
The workflow execution status is stored in the status field of the Execution object. These are the execution statuses that currently exist:
- pending - The execution is waiting for a worker to start it.
- started - The execution is currently running.
- cancelling - The execution is currently being cancelled.
- force_cancelling - The execution is currently being force-cancelled (see Cancelling Workflow Executions for more information).
- cancelled - The execution has been cancelled.
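To check an execution's current status from the CLI, you can fetch a single execution by its ID (a placeholder below) or list all executions of a deployment:
cfy executions get <execution-id>
cfy executions list -d my_deployment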
Dry Run Workflow Execution
Overview
In a dry-run execution, you can execute a workflow so that the entire flow of the execution (all the operations that would be executed in an actual run) is shown, but no actual code is executed and there are no side effects. A dry-run is useful when you write complex blueprints with potentially long executions. It helps you configure relationships between node templates, and operations that depend on those relationships.
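As a sketch, assuming the --dry-run flag of cfy executions start, a dry run of the built-in install workflow could be requested like this:
cfy executions start install -d my_deployment --dry-run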
Resuming workflow execution
Overview
Resuming workflows allows you to continue execution after a failure, after a cancellation, or after a Manager failure (e.g. power loss). When a workflow is resumed, the workflow function is executed again. Workflows that are resumable will then restore the state of the previous execution and continue from there. If the workflow was not explicitly declared as resumable, it will fail immediately instead.
Workflows using tasks graphs
Most workflows (including all built-in ones) are implemented in terms of a tasks graph.
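A minimal sketch of a workflow declared as resumable, assuming the resumable keyword argument of the @workflow decorator and the graph-mode APIs described under Creating Custom Workflows (the workflow name and lifecycle operation are only illustrative):

from cloudify.decorators import workflow
from cloudify.workflows import ctx

@workflow(resumable=True)
def restart_all(**kwargs):
    # Build a tasks graph; when the execution is resumed, tasks that already
    # finished in the previous run are restored rather than executed again
    graph = ctx.graph_mode()
    for node in ctx.nodes:
        for instance in node.instances:
            graph.add_task(
                instance.execute_operation('cloudify.interfaces.lifecycle.start'))
    graph.execute()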
Built-in Workflows
Overview
Cloudify comes with a number of built-in workflows, covering:
- Application installation / uninstallation (install / uninstall)
- Application start / stop / restart (start / stop / restart)
- Scaling (scale)
- Healing (heal)
- Running arbitrary operations (execute_operation)
Built-in workflows are declared and mapped in types.yaml, which is usually imported either directly or indirectly via other imports.
# Snippet from types.yaml
workflows:
  install: default_workflows.cloudify.plugins.workflows.install
  uninstall: default_workflows.cloudify.plugins.workflows.uninstall
  execute_operation:
    mapping: default_workflows.
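For instance, the built-in execute_operation workflow takes an operation parameter naming the operation to run on the deployment's node instances; assuming a deployment named my_deployment, it could be invoked along these lines (the operation name is just an example):
cfy executions start execute_operation -d my_deployment -p operation=cloudify.interfaces.lifecycle.start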
Creating Custom Workflows
This section is aimed at advanced users. Before reading it, make sure you have a good understanding of Workflows, Blueprints, and Plugins.
Introduction to Implementing Workflows
Workflow implementation shares several similarities with plugin implementation:
- Workflows are also implemented as Python functions.
- A workflow method is, optionally, decorated with @workflow, a decorator from the cloudify.decorators module of the cloudify-plugins-common package.
- Workflow methods should import ctx from cloudify.workflows, which offers access to context data and various system services.
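As a minimal sketch of such a function (the workflow name and parameter below are only illustrative), a custom workflow might iterate over the deployment's node instances and run a given operation on each:

from cloudify.decorators import workflow
from cloudify.workflows import ctx

@workflow
def run_operation_everywhere(operation, **kwargs):
    # 'operation' is a workflow parameter declared in the blueprint,
    # e.g. 'cloudify.interfaces.lifecycle.start'
    for node in ctx.nodes:
        for instance in node.instances:
            ctx.logger.info('Running {0} on {1}'.format(operation, instance.id))
            instance.execute_operation(operation)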
Workflows are automation process algorithms. They describe the flow of the automation by determining which tasks will be executed and when. A task may be an operation (implemented by a plugin), or other actions including running arbitrary code. Workflows are written in Python, using a dedicated framework and APIs.
Workflows are deployment-specific. Each deployment has its own set of workflows, which are declared in the Blueprint. Executions of a workflow are in the context of that deployment.
Controlling workflows (i.e. executing, cancelling, etc.) is achieved using REST calls to the management server. In this guide, the examples use Cloudify CLI commands, which in turn make the corresponding REST API calls.
Executing Workflows
Workflows can be executed directly. You can execute workflows from the CLI as follows:
cfy executions start my_workflow -d my_deployment
This executes the my_workflow workflow on the my_deployment deployment.
Workflows run on deployment-dedicated workers on the management server, on top of the Cloudify workflow engine.
When a workflow is executed, an execution object is created for the deployment, containing both static and dynamic information about the workflow's execution run. The status field in the Execution object is an important dynamic field that conveys the current state of the execution.
An execution is considered to be a running execution until it reaches one of the three final statuses: terminated, failed or cancelled. For more information, see the Workflow Execution Statuses section on this page.
It is recommended that you have only one running execution per deployment at any time. By default, an attempt to execute a workflow while another execution is running for the same deployment triggers an error. To override this behavior and enable multiple executions to run in parallel, use the force flag for each execute command. To view the syntax reference, see the CLI Commands Reference.
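For example, assuming my_workflow and my_deployment from above, passing the --force flag lets the execution start even though another execution is already running for the deployment:
cfy executions start my_workflow -d my_deployment --force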
Queuing Executions
In general, executions run in parallel. There are a few exceptions:
- When a system-wide execution is running (e.g. snapshots create), no other execution will be allowed to start.
- Two executions under the same deployment cannot run in parallel.
- System-wide executions (e.g. snapshots create) cannot start while another execution (e.g. an install workflow) is running.
If you start an execution and receive one of the following errors: “You cannot start an execution if there is a running system-wide execution” / “The following executions are currently running for this deployment…” / “You cannot start a system-wide execution if there are other executions running.”, you can add the execution to the executions queue:
cfy executions start -d deployment1 install --queue
cfy snapshots create --queue
Queued executions will begin automatically when possible.
- If an execution can start immediately, it will, even when the queue flag is passed.
- If the queue contains a system-wide execution waiting to start (e.g. snapshots create), Cloudify will not accept any other execution request unless the queue flag is passed. This behavior ensures there is no starvation of blocking system operations. If the queue flag isn't provided, an error will be returned.
Writing a Custom Workflow
If you are an advanced user, you might want to create custom workflows. For more information, see Creating Custom Workflows.