Azure DevOps pipelines for business intelligence
Over the last few months I have been involved in a business intelligence automation project. My client wanted to automate several extractor processes on Azure using the existing Azure Kubernetes clusters of their own client (a transport agency). Although I can't share all the details, in this post you will see how I used Azure DevOps to automate several processes on Azure Kubernetes Service.
Azure DevOps Overview
Azure DevOps is a comprehensive platform that offers a suite of tools and services to support DevOps practices. From application planning to development, delivery, and operations, Azure DevOps provides end-to-end solutions in a straightforward way:
- Azure Boards: This service allows teams to plan, track, and discuss work across teams using Kanban boards, backlogs, team dashboards, and custom reporting.
- Azure Pipelines: Azure Pipelines enables teams to implement continuous integration and continuous delivery (CI/CD) to continuously build, test, and deploy to any platform and any cloud. It works with any language, platform, and cloud, and provides traceability and reporting.
- Azure Repos: Azure Repos provides unlimited cloud-hosted private Git repositories and supports collaboration to build better code with pull requests and GitHub Advanced Security for Azure DevOps.
- Azure Test Plans: This service allows teams to test and ship with confidence using manual and exploratory testing tools.
- Azure Artifacts: With Azure Artifacts, teams can create, host, and share packages with their team, and add artifacts to their CI/CD pipelines with a single click.
Key project requirements
Some of the project requirements were:
- The same language and structure for all processes, developed in Node.js.
- Processes receive arguments.
- A dedicated node pool for the cluster applications.
- The possibility of dry-run and debug executions for testing.
- Continuous integration only for the build process and specific branches.
- A way to interact with the Azure DevOps API.
Azure DevOps Pipelines
The CI/CD pipelines are defined in YAML. Some of the key sections were:
- Pool: This specifies the agent pool that will be used to run the pipeline. The value of the pool was defined using the variable $(AGENT_POOL), depending on the branch used to deploy to the pre-production or production environment/Kubernetes cluster.
- Trigger: This defines the conditions under which the pipeline runs automatically. In this case, the pipeline was triggered when changes were pushed to the release or develop branches.
- Variables: These are key-value pairs that can be used throughout the pipeline. In this pipeline, several variables were defined for branches, timeouts, or whether to force the build beforehand.
- Parameters: Azure DevOps parameters are a way to pass values into your pipeline at runtime. They allow you to customize the behavior of your pipeline without modifying the pipeline definition. Parameters can be used in many places within a pipeline, including scripts, template references, and task inputs. In this case they were used for interactive arguments provided by developers, such as dates, debug, or dry-run mode. A minimal skeleton combining these sections is sketched after this list.
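To illustrate how these sections fit together, here is a minimal sketch of such a pipeline header. The branch names, variable values, and parameter names are my assumptions for the example, not the client's actual values:

parameters:
  - name: dateparam
    displayName: 'Date to extract (YYYY-MM-DD)'
    type: string
    default: ''
  - name: dryrun
    displayName: 'Dry-run mode'
    type: boolean
    default: false

trigger:
  branches:
    include:
      - develop
      - release

pool: $(AGENT_POOL)

variables:
  prodbranch: 'release'          # assumed production branch name
  timeoutInMinutes: 30
  cancelTimeoutInMinutes: 5
  forcebuild: false              # whether to force the build beforehand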
Templates are another useful feature of pipelines; with this approach I could reuse code across my pipelines through a "satellite" repository called "devops".
Templates
Azure DevOps templates are predefined blocks of YAML code that can be reused across multiple pipelines or across stages within a pipeline. They reduce duplication and promote reuse.
There are two types of templates in Azure DevOps:
- Step templates: These are reusable sequences of steps that can be referenced in a job.
- Job templates: These are reusable sequences of jobs that can be referenced in a stage.
A template is defined in a separate YAML file, and it can take parameters to customize its behavior. Once defined, a template can be referenced in another pipeline using the template keyword.
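As an illustration, a step template and its consumption might look like the following. The file name, the repository path, and the NodeTool parameters are assumptions for the example; only the "devops" repository and the @templates alias come from the project:

# build.yml — a hypothetical step template stored in the "devops" repository
parameters:
  - name: nodeVersion
    type: string
    default: '16.x'

steps:
  - task: NodeTool@0
    displayName: 'Use Node.js ${{ parameters.nodeVersion }}'
    inputs:
      versionSpec: ${{ parameters.nodeVersion }}
  - script: npm ci && npm run build
    displayName: 'Build extractor'

The consuming pipeline declares the template repository as a resource and references the file with the template keyword:

resources:
  repositories:
    - repository: templates          # alias used as @templates below
      type: git
      name: MyProject/devops         # hypothetical project/repository path

steps:
  - template: build.yml@templates
    parameters:
      nodeVersion: '16.x'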
In the end, I followed this scheme for each extractor repository:
- build-extractor
- debug-dry-extractor
- deploy-extractor
A small example from the deploy-extractor pipeline was:
- job: 'Deploy'
  dependsOn: 'Secret_Gen'
  timeoutInMinutes: ${{ variables.timeoutInMinutes }}
  cancelTimeoutInMinutes: ${{ variables.cancelTimeoutInMinutes }}
  variables:
    - ${{ if eq(variables['Build.SourceBranchName'], variables.prodbranch) }}:
        - group: prod-vars
    - ${{ if ne(variables['Build.SourceBranchName'], variables.prodbranch) }}:
        - group: preprod-vars
  steps:
    - template: job-deploy.yml@templates
      parameters:
        extractor: $(extractor)
        tag: $(TAG)
        namespace: $(namespace)
        date: ${{ parameters.dateparam }}
- Depends On: The dependsOn keyword indicates that this job depends on the completion of the 'Secret_Gen' job. The 'Deploy' job will not start until the 'Secret_Gen' job has successfully completed (a hypothetical sketch of such a job follows this list).
- Timeouts: The timeoutInMinutes and cancelTimeoutInMinutes are set using variables. These settings control how long the job will run before it's automatically cancelled, and how long it will wait for cancellation before forcing it.
- Variables: This section defines variable groups based on the source branch name. If the source branch name is the same as the production branch name (as defined by the prodbranch variable), it uses the prod-vars variable group. If not, it uses the preprod-vars variable group.
- Steps: This section defines the steps that will be run as part of the 'Deploy' job. It uses a template (job-deploy.yml@templates) for the steps and passes parameters to it. The parameters include extractor, tag, namespace, and date. The values for these parameters are taken from variables and parameters defined elsewhere in the pipeline.
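The 'Secret_Gen' job itself is not shown here. Purely as an illustration of what a job that 'Deploy' could depend on might look like, here is a sketch that uses the Kubernetes@1 task's secret inputs; the secret name, its keys, and the exact inputs are my assumptions, not the project's actual code:

- job: 'Secret_Gen'
  steps:
    - task: Kubernetes@1
      displayName: 'Generate storage secret'
      inputs:
        connectionType: Kubernetes Service Connection
        kubernetesServiceEndpoint: 'SP_AKSBI-Pre'    # would mirror the deploy task's connection
        namespace: $(namespace)
        secretType: generic
        secretName: storage-credentials              # hypothetical secret name
        secretArguments: --from-literal=accountname=$(ACCOUNTNAME) --from-literal=accountkey=$(ACCOUNTKEY)
        forceUpdate: true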
And another piece of code, from the job-deploy.yml template, was:
- task: Kubernetes@1
  condition: and(succeeded(), eq(variables['ENV'], 'pre'))
  displayName: 'Job Deploy'
  inputs:
    connectionType: Kubernetes Service Connection
    kubernetesServiceEndpoint: 'SP_AKSBI-Pre'
    namespace: ${{ parameters.namespace }}
    command: apply
    arguments: -f $(System.DefaultWorkingDirectory)/k8s/${{ parameters.extractor }}.job.yml
  env:
    ACCOUNTKEY: $(ACCOUNTKEY)
    ACCOUNTNAME: $(ACCOUNTNAME)
    CONTAINER: $(CONTAINER)
    DEFAULT_DATE: '${{ parameters.date }}'
- Condition: The task will only run if the previous tasks have succeeded (succeeded()) and if the ENV variable is equal to 'pre' (eq(variables['ENV'], 'pre')). This means this task is specifically designed to run in the pre-production environment.
- Display Name: The name that will appear in the Azure DevOps UI for this task is 'Job Deploy'.
- Inputs: These are the parameters for the Kubernetes task:
  - connectionType: This is set to 'Kubernetes Service Connection', which means the task will use a service connection to connect to a Kubernetes cluster.
  - kubernetesServiceEndpoint: This is the name of the service connection that the task will use to connect to the Kubernetes cluster.
  - namespace: This is the Kubernetes namespace where the job will be deployed. The value is set using the namespace parameter.
  - command: This is set to 'apply', which means the task will apply the specified configuration to the Kubernetes cluster with kubectl.
  - arguments: This specifies the Kubernetes job configuration file to apply. The file is expected to be in the k8s directory of the pipeline's default working directory, and its name is the value of the extractor parameter followed by .job.yml (a sketch of such a manifest follows this list).
- Environment Variables: These environment variables will be set for the task: ACCOUNTKEY, ACCOUNTNAME, CONTAINER, and DEFAULT_DATE. Their values are taken from pipeline variables and parameters.
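The applied manifest itself is not shown in the post. A minimal sketch of what k8s/<extractor>.job.yml could contain follows; the image, names, node pool label, and the secret reference are assumptions for the example:

apiVersion: batch/v1
kind: Job
metadata:
  name: extractor-run
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      nodeSelector:
        agentpool: apps                              # assumed dedicated node pool for cluster apps
      containers:
        - name: extractor
          image: myregistry.azurecr.io/extractor:1.0 # hypothetical image built by build-extractor
          args: ['--date', '2024-01-01']             # arguments received by the Node.js process
          envFrom:
            - secretRef:
                name: storage-credentials            # hypothetical secret holding the storage account settings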
Although I had parametrized most of the pipeline, Azure DevOps does not allow you to parametrize the kubernetesServiceEndpoint input of a Kubernetes@1 task, so the task was simply duplicated with a production condition and the SP_AKSBI-Pro service connection.
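The duplicated production task presumably looked something like the following; the exact condition value is my assumption, while the SP_AKSBI-Pro connection name comes from the project:

- task: Kubernetes@1
  condition: and(succeeded(), ne(variables['ENV'], 'pre'))   # assumed inverse of the pre-production condition
  displayName: 'Job Deploy Pro'
  inputs:
    connectionType: Kubernetes Service Connection
    kubernetesServiceEndpoint: 'SP_AKSBI-Pro'
    namespace: ${{ parameters.namespace }}
    command: apply
    arguments: -f $(System.DefaultWorkingDirectory)/k8s/${{ parameters.extractor }}.job.yml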
Wrapping up
This was a short introduction to how I was able to help this BI consulting company on a specific project; of course, there were many other details, such as interactions with the client about permissions and so on.
I hope to keep improving this kind of post, and videos, to show a more realistic portfolio.