
How to Set Up a Pipeline

Projects created with the Mia-Platform Console can be deployed with simple scripts and tooling that apply the standard Kubernetes YAML files describing the various resources. To aid developers we have created a new open source tool called mlp.

In this chapter we will show how to use mlp to deploy the project, and give you the information you need if you want to use another tool (e.g. kubectl).

First Phase: Environment Preparation

We have to prepare the environment for deploying the resources. By preparing the environment we mean correctly setting up the various environment variables needed in the next phases.
These variables are the public ones generated by the Console and some others needed for the metadata set inside the various YAML files.

Here are the environment variables that we set up inside the pipelines used in our PaaS:

  • AUTHOR_EMAIL: the email of the user that is running the deploy
  • COMMIT_SHA: set to the SHA of the commit to deploy
  • STAGE_TO_DEPLOY: set to the current environment of the project to deploy
  • RELEASE_DATE: the current date generated with the Unix command date -I'seconds' -u

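These values are typically set at the top of the pipeline job. As a minimal sketch, assuming a GitLab CI job (the mapping to GitLab's predefined variables is an assumption; adapt it to your CI provider):

export AUTHOR_EMAIL="${GITLAB_USER_EMAIL}"       # predefined GitLab CI variable, assumed mapping
export COMMIT_SHA="${CI_COMMIT_SHA}"
export STAGE_TO_DEPLOY="${CI_ENVIRONMENT_NAME}"
export RELEASE_DATE="$(date -I'seconds' -u)"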
In addition to these variables, the Console generates a .env file for every environment of the project, which can usually be found at the path configuration/variables/<environment-name>.env, so we simply source it in the current context with the command:

export VARIABLES_FILE="configuration/variables/<environment-name>.env"
test -f "${VARIABLES_FILE}" && set -a && source "${VARIABLES_FILE}"

For compatibility reasons we always check whether the file actually exists before sourcing it.
Additionally, the shell environment contains the variables set in the Environment Variables table of the Project Settings page of the Console.
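Since the .env file is sourced by the shell, it contains plain KEY=value assignments; as a purely illustrative example (the variable names below are made up):

SERVICE_URL=https://my-service.example.com
LOG_LEVEL=info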

After setting up the shell environment with all the variables, we can start preparing the files so that they are ready to be deployed to the target environment.

Second Phase: Generation

This phase mainly concerns the mlp tool: it dynamically creates ConfigMaps and Secrets using variables and files set inside the pipeline and referenced by name inside the Design section of the Console. mlp reads its configuration from a file, typically created in the root of the project and called mlp.yaml, that describes the Kubernetes Secrets and ConfigMaps to generate. For the syntax of the file you can read the relevant documentation page.
To generate the files we need to launch the following command:

export GENERATE_FILE="./mlp.yaml"
test -f "${GENERATE_FILE}" && mlp generate -c "${GENERATE_FILE}" -e "<environment-prefix>" -o "<target-folder>"

As you can see, we check if the file is present before launching the command, because this step can be skipped when it is not necessary. The <environment-prefix> is a prefix for environment variable names that indicates a more specific environment variable can be used instead of the generic one. You can read more on this subject in the relevant page on this site.
For example, we can have a variable named SERVICE_URL referenced inside the YAML files, and a variable named DEV_SERVICE_URL that contains a variation of the value needed only for a specific environment. In this case we can pass -e "DEV_" as an argument to the CLI, and the values of the variables prefixed with DEV_ will take priority. You can use the flag multiple times to add multiple prefixes with decreasing priority: the first flag has a higher priority than the last one.
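Continuing this example, a concrete invocation for such an environment could look like the following (the output folder name is just an illustrative choice):

mlp generate -c "./mlp.yaml" -e "DEV_" -o "./generated-files"

With this invocation, wherever SERVICE_URL is used, the value of DEV_SERVICE_URL takes precedence if it is defined.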

The generated files are written inside the <target-folder>, which will be used in the last phase and can be any empty folder you want.

Third Phase: Interpolation

The third phase substitutes all the environment variable placeholders inside the YAML files with the values of the variables present in the context.

mlp interpolate -e "<environment-prefix>" -f "configuration/<environment-name>" -f "configuration" -o "<target-folder>"

With this command too we can pass one or multiple environment prefixes, which follow the same logic explained in the previous phase. We use two different paths so that we gather and interpolate all the files present inside the base folder, usually called configuration, and inside the folder specific to the target environment, usually placed as a subfolder of configuration.

The interpolated files will then be written inside a <target-folder> of your choice, which is the same one used in the previous command.
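For example, for a hypothetical environment named development and reusing the same output folder as in the previous phase:

mlp interpolate -e "DEV_" -f "configuration/development" -f "configuration" -o "./generated-files"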

Fourth Phase: Deploy

The last phase of the pipeline is to apply the files to the target cluster with the following command:

mlp deploy --server "<k8s-api-server-url>" --certificate-authority "<path-to-cluster-ca>" \
--token "<token-for-deploying>" --deploy-type "${DEPLOY_TYPE}" \
--force-deploy-when-no-semver=${FORCE_DEPLOY_WHEN_NO_SEMVER} -f <target-folder> -n "${KUBE_NAMESPACE}"

As you can see there are three variables that are not defined yet, because they are set by the Console when the pipeline is triggered. These variables contain:

  • DEPLOY_TYPE: the type of deploy used by mlp; the supported values are deploy_all and smart_deploy
  • FORCE_DEPLOY_WHEN_NO_SEMVER: a boolean that forces the redeploy of deployments whose tags don't follow the SemVer convention
  • KUBE_NAMESPACE: the target namespace where all the files will be deployed

Additionally, mlp supports the flag --ensure-namespace=false to skip creating the namespace if it is not already present in the cluster. With this flag, if the namespace is not already present mlp will throw an error and the deploy step will fail. During this command we also change the pod annotations with references to the ConfigMaps or Secrets mounted by the deployment, to force a redeploy if their content has changed between deployments.

mlp also enforces a strict ordering when deploying certain resources, to ensure the correct update has taken place before deploying the main workloads. The order can be seen in this file.
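If you prefer to deploy with another tool such as kubectl, a roughly equivalent apply step could look like the sketch below. Note that this is only an approximation: plain kubectl will not reproduce the annotation handling or the resource ordering described above.

kubectl apply --server "<k8s-api-server-url>" --certificate-authority "<path-to-cluster-ca>" \
--token "<token-for-deploying>" -f <target-folder> -n "${KUBE_NAMESPACE}"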