Test-Drive Feature Readiness Before Release

Open-source development has enabled greater collaboration than ever before. Whole communities of people who have never met in person can work together to create just about anything. However, this does not come without its challenges. One such challenge is quality control. Even with strict pull request requirements, it can be difficult to ensure every pull request receives proper testing. Kubernetes preview environments and build validations in Azure Pipelines can make this easier.

Preview Environments in Kubernetes

Kubernetes is an open-source container orchestration tool that automates the deployment, scaling, and management of containerized applications. It enables developers to deploy quickly into test and production environments. A preview environment is an ephemeral environment created from the code in your pull request. By using a preview environment, you can see your changes live and test-drive features before merging to master. Furthermore, by using namespaces within your Kubernetes cluster, you can test your changes in a fully isolated environment that is destroyed when you’re finished, simply by closing the PR.

To deploy pull requests for review with Azure DevOps, you need to add a build validation branch policy to your Azure Pipeline.
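For illustration, here is a minimal sketch of the deployment step such a validation pipeline might run. It assumes kubectl is already configured against your cluster, and the manifests folder and pr-<id> namespace naming are placeholders for this example:

    trigger: none   # runs via the build validation branch policy, not on push

    pool:
      vmImage: 'ubuntu-latest'

    steps:
      # Create (or reuse) a namespace keyed to the pull request ID,
      # then deploy the PR's manifests into that isolated namespace.
      - script: |
          NS=pr-$(System.PullRequest.PullRequestId)
          kubectl create namespace "$NS" --dry-run=client -o yaml | kubectl apply -f -
          kubectl apply -f manifests/ --namespace "$NS"
        displayName: 'Deploy PR preview environment'

Tearing the whole environment down when the PR closes is then as simple as deleting the namespace.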

Build Validations

Build validations are tests that run on a build to check changes made before a release. In the Repos submenu in Azure DevOps, you’ll define the requirements for pull requests made against your selected branch. With these policies in place, every time someone creates a new pull request targeting that branch, a reviewer can manually decide to deploy the changes to a dedicated Kubernetes namespace for detailed review within your preview environment. Alternatively, you can have this deployment to the namespace happen automatically.

In modern DevOps development, deploying quickly and often is critical. Equally important is making sure the code you’re putting into production is valuable to your end users. For more information, or to get started today, contact our team of experts here at PRAKTIK.


DevOps Deployments and Ansible

Doing things by hand is an accident-prone method of development. DevOps best practices state that we should automate as much as possible. Automation helps developers be more productive elsewhere, in addition to eliminating human error. We can follow this best practice by using tools that enable things like Continuous Integration and Continuous Deployment. One such tool is Ansible.

Ansible Basics

Ansible is an open-source tool that enables automation in your environment. In the past, you may have needed to coordinate application delivery to your end users by hand. Now, Ansible can do that work for you, automating everything from cloud provisioning and application deployment to configuration management and more.

Ansible can also help orchestrate zero-downtime deployments to deliver the best experience to your end user. This is especially important in our fast-paced world; consumers expect to be able to reach their services at all times, while simultaneously desiring the services to be continuously improved.

How Does Ansible Work?

Getting started with Ansible is as simple as describing your automation job in YAML. This description takes the form of an Ansible Playbook. When going through your DevOps CI/CD pipeline, Ansible uses this playbook to provision everything you described for your deployment in the exact same way, every time. This means that your deployments are simple and repeatable, without any human error.
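For illustration, a minimal playbook might look like the sketch below; the webservers host group and the nginx package are assumptions for this example:

    ---
    # Install and start a web server on every host in the "webservers" group.
    - name: Deploy the web tier
      hosts: webservers
      become: true
      tasks:
        - name: Ensure nginx is installed
          ansible.builtin.apt:
            name: nginx
            state: present

        - name: Ensure nginx is running and enabled
          ansible.builtin.service:
            name: nginx
            state: started
            enabled: true

Because each task describes a desired state rather than a command to run, re-running the playbook is safe: Ansible only changes what doesn’t already match.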

To create and provision resources in Azure, you’ll need a Linux VM with Ansible configured. Additionally, as Ansible is agentless, it will need SSH authentication using a key pair and an SSH service connection in Azure DevOps.

To learn more, or to get started with Ansible today, contact our team of experts here at PRAKTIK.


Jenkins or Azure Pipelines?

Jenkins is an open-source automation server that helps facilitate continuous integration. It is installed on-premises and managed there. Azure Pipelines is a continuous integration tool that is available in the cloud or on-premises, and can manage build and release orchestration. Both are reasonable and popular options, but which is truly the best for your situation?

Simplicity

From an organization perspective, it is important to determine which tool is going to get you to your desired end result faster. While it is certainly true that Jenkins and Azure DevOps can integrate with one another, remember that this integration carries an additional time-based cost. Using more than one tool requires an additional investment in training and maintenance. In many cases, reducing the number of tools is optimal, especially if the additional tools are redundant.

In addition, Azure Pipelines natively integrates with tools like Azure Repos and Azure Boards. This kind of integration is hard to pass up, especially because it provides seamless end-to-end traceability for code and work items across releases.

YAML

YAML allows a developer to define the pipeline as code. While using YAML to define pipelines isn’t the right solution for every team, it can be a powerful tool with real benefits. Because the pipeline itself is managed as a source file, it goes through the standard code review process, which improves quality, and it is easy to compare versions of the pipeline if something breaks. Azure Pipelines has a YAML interface in addition to the standard GUI; Jenkins instead defines pipelines in a Groovy-based Jenkinsfile.
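As a simple sketch, an azure-pipelines.yml for a .NET project (the project type here is just an assumption for the example) could look like this:

    trigger:
      - main          # build every push to main

    pool:
      vmImage: 'ubuntu-latest'

    steps:
      - script: dotnet build --configuration Release
        displayName: 'Build'

      - script: dotnet test --configuration Release
        displayName: 'Run tests'

Because this file lives in the repository, a change to the pipeline is itself a reviewable pull request.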

Cost

Something we always need to take into consideration is the total cost of a solution. An Azure DevOps instance already has all the infrastructure for running pipelines built in. On top of that, Azure Pipelines comes with 30 hours of free Microsoft-hosted build time a month, or unlimited build minutes for a self-hosted job. With Jenkins, this is not the case: while Jenkins is open-source and therefore free to use, you are responsible for deploying, maintaining, and paying for your own build infrastructure.

This is not a debate about which technology is better. Both are mature enough to cover all build requirements for most companies. This is about deciding which option works best for you and your teams with the least amount of friction. For more information, or to speak to one of our experts, contact us today.


Integrating Power BI and Azure DevOps Analytics

Collecting and analyzing data is an imperative part of developing a successful application. You need to be able to determine if the needs of your end user are being met. But with the sheer volume of data points available, it can be difficult to understand what needs to be improved. Power BI is a tool that helps convert your data into easily readable insights. Power BI and Azure DevOps can work together to provide reports and analytics to fit your needs.

Azure DevOps Analytics

Analytics is the reporting platform for Azure DevOps. It provides data from Azure DevOps you can use to improve your application. For instance, you can access Azure Pipelines analytics with metrics like run failures to improve your code, pipeline, or tests. You can also create Widgets for Azure Boards to track things like Burndown, Cycle Time, and Velocity.

Power BI Integration

Analytics is great for collecting data from within Azure DevOps, but what if you need to pull in data from other sources? This is where Power BI comes in. Power BI allows you to pull in data from any source with a connector. You can even pull Azure DevOps Analytics data into it, allowing you to get a fuller picture of all your data points in one place.

You can pull data from Analytics into Power BI in three ways, but the recommended way is to connect using OData queries. The advantage of these queries is that they are powerful and precise, so only the data you want is returned. You can also pre-aggregate data server-side, meaning the data is collected and analyzed before being presented to you as summarized findings in Power BI. There is no need to pull all the detailed data down, saving you valuable time.
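As a hedged illustration, an Analytics OData query might look like the following; the organization and project segments are placeholders, and the API version in the path may differ for your account:

    https://analytics.dev.azure.com/{organization}/{project}/_odata/v3.0-preview/WorkItems?
        $apply=filter(WorkItemType eq 'Bug')/groupby((State), aggregate($count as Count))

A query like this returns only a per-state count of bugs, aggregated server-side, instead of every underlying work item.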

With Power BI, you can create organization-level metrics for high-level information, or you can drill down into specifics that allow you to address problem areas. You can also create project health reports to look at bug trends, build success rates, and other specific metrics at the organization level. These types of data points are critical when it comes to the success of your application.

As of this writing, Power BI integration with Analytics is in Preview. For more information on Power BI, or to get started today, contact our team of experts at PRAKTIK.


SonarQube—Azure DevOps Integration

The most important parts of any project are quality and security. While it’s important to have a solid user experience, it’s equally important to maintain security standards. This can be accomplished with SonarQube. This tool can be integrated with Azure DevOps to give you data where you need it, such as in your Pipeline and Pull Requests.

What is it?

SonarQube is a self-hosted code analysis service that detects issues to ensure the reliability, security, and quality of your project. It finds issues in your code and provides guidance on how best to address them. You can also use this tool to add Quality Gates to your CI/CD workflow: if the quality criteria are not met, the job fails so you can correct the problem before it reaches production. Additionally, SonarQube decorates your Azure DevOps pull requests with the issues it finds, which helps you deal with them sooner. In this article, we will focus on the cloud-hosted version of this product, SonarCloud, which is free for open-source projects; you only pay when you start analyzing private repositories.

How to Integrate with Azure DevOps

Integrating SonarCloud with Azure DevOps is as simple as installing the extension from the Visual Studio Marketplace and following the setup flow on SonarCloud’s website. The flow will ask for things like your Azure DevOps organization name and a Personal Access Token. Then you’ll set up a SonarCloud organization and project, and choose a plan for your SonarCloud subscription. If all the repositories you want to analyze are public, you can choose the free plan; you only pay to analyze private repos.

Now you’re ready to set up your analysis. To do this, follow the SonarCloud walk-through to set up scanning in Azure Pipelines. The analysis runs during your build, and you can include Quality Gates that cause the build to fail if it does not pass the quality check. After the build runs, you can view the detailed SonarCloud report in the build summary. Additionally, you can set up pull request integration so that the Azure DevOps UI shows when an analysis build is running; this is done by configuring your build policy in Azure DevOps and giving SonarCloud access to your pull requests. The results are visible directly in Azure DevOps or on the SonarCloud dashboard.
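In a YAML pipeline, the analysis typically wraps your build with the extension’s prepare, analyze, and publish tasks. Here is a minimal sketch; the service connection name, organization, and project key are assumptions:

    steps:
      - task: SonarCloudPrepare@1
        inputs:
          SonarCloud: 'SonarCloudConnection'   # assumed service connection name
          organization: 'your-org'
          scannerMode: 'MSBuild'
          projectKey: 'your-project-key'

      - task: DotNetCoreCLI@2                  # the build the analysis observes
        inputs:
          command: 'build'

      - task: SonarCloudAnalyze@1              # run the analysis

      - task: SonarCloudPublish@1              # publish the Quality Gate result
        inputs:
          pollingTimeoutSec: '300'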

The information provided by SonarCloud and its integration with Azure DevOps is invaluable. Now, you’ll be able to identify and repair issues faster and more efficiently. For more information about SonarCloud, or to get started today, contact our team of experts at PRAKTIK.


Using Terraform with Azure DevOps

Managing infrastructure can be tricky business. Development teams must maintain the settings for each individual deployment environment, and over time, environments can become difficult or impossible to reproduce, leaving teams dependent on hard-to-track manual processes to create and maintain them. Instead of going through this headache, we can use an Infrastructure as Code (IaC) tool called Terraform.

What is Terraform?

Terraform is an open-source IaC tool for provisioning and managing cloud infrastructure. It allows users to define a desired end-state infrastructure configuration, then provisions the infrastructure exactly as described. It can also safely and efficiently re-provision infrastructure in response to configuration changes. This means that your infrastructure will be exactly the same every time.

Using Terraform with Azure DevOps

As with many technologies, there is an extension in the Visual Studio Marketplace to make your life easier. To get started, simply install the Terraform extension. This extension provides service connections for AWS and GCP for deployments to Amazon or Google clouds; if you’re deploying to Azure, you’ll need to create an Azure Service Principal instead. It also includes a task for installing the required version of Terraform on your agent, as well as a task for executing the core commands. These tasks require some configuration, such as defining your provider and the command you want to execute.

After your configuration is complete, you are able to include the Terraform task in your Build or Release Pipeline to manage your infrastructure automatically.
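As a sketch, a pipeline using the Microsoft DevLabs flavor of the extension might look like the following; task names and versions vary between extensions and releases, and the service connection and storage account names here are assumptions:

    steps:
      - task: TerraformInstaller@0
        inputs:
          terraformVersion: '1.5.7'   # pin the Terraform version on the agent

      - task: TerraformTaskV2@2
        displayName: 'terraform init'
        inputs:
          provider: 'azurerm'
          command: 'init'
          backendServiceArm: 'MyAzureConnection'           # assumed service connection
          backendAzureRmResourceGroupName: 'tfstate-rg'
          backendAzureRmStorageAccountName: 'tfstatestore'
          backendAzureRmContainerName: 'tfstate'
          backendAzureRmKey: 'infra.tfstate'

      - task: TerraformTaskV2@2
        displayName: 'terraform apply'
        inputs:
          provider: 'azurerm'
          command: 'apply'
          environmentServiceNameAzureRM: 'MyAzureConnection'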

For more information about Terraform, or to get started, contact our team of experts at PRAKTIK.


Using Dependabot with Azure DevOps

Technology is ever-changing, which means your repository’s dependencies are, too. Maintaining and updating dependencies is crucial for the security and functionality of your app. Unfortunately, it is a laborious chore that takes away from time that could be spent working on your next project. Dependabot is a tool that can take care of all of this for you.

What is Dependabot?

Dependabot is a GitHub-native tool that monitors your repository’s dependencies and can even update them. This is done automatically at an interval of your choosing: daily, weekly, or monthly. When the tool identifies a dependency that needs to be updated, it raises a pull request. These pull requests can be for version updates or security updates. The tool can update dependency versions automatically, but security updates require human intervention.

Integrate with Azure DevOps

It is really easy to integrate Dependabot and Azure DevOps using the free Dependabot extension in the Visual Studio Marketplace. This extension is full of features and easy to configure: all you have to do is run it from a YAML pipeline. In this file, you specify task parameters such as the target branch and package manager, as well as the schedule on which you’d like it to run. Then, give your repository’s Project Collection Build Service permission to do things like contribute to pull requests and create branches. Now you’re ready to have Dependabot do the work for you!
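For illustration, a scheduled pipeline running the extension’s task might look like this sketch; the task name and inputs follow the marketplace extension and may differ by version:

    schedules:
      - cron: '0 6 * * 1'        # weekly, Mondays at 06:00 UTC
        displayName: 'Weekly dependency updates'
        branches:
          include: [main]
        always: true             # run even if nothing has changed

    trigger: none

    pool:
      vmImage: 'ubuntu-latest'

    steps:
      - task: dependabot@1
        inputs:
          packageManager: 'npm'  # the ecosystem to check
          targetBranch: 'main'   # branch the PRs are raised against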

To learn more or to get started today, contact our team of experts here at PRAKTIK.


Azure Pipelines for Jira

It is important for a development team to be able to choose which tools work best for them. Jira is a popular bug and issue tracking system. It is also used for requirements and test case management. Azure Pipelines is a powerful CI/CD tool to automatically build, test, and ship your code. What if you want to use both? Azure Pipelines for Jira integrates the two tools to provide a seamless development experience.

Getting Started

Getting started is very simple. To use Azure Pipelines for Jira, install the extension from the Atlassian Marketplace and link it to your Azure DevOps organization. Then, enable “report deployment status to Jira” in your release pipeline and provide a mapping of your deployment stages.

Using Azure Pipelines for Jira

One of the best parts of Azure DevOps is that you are free to integrate with whatever tool is going to make you successful. Azure Pipelines for Jira makes the integration that much easier with bidirectional, end-to-end functionality. For example, the integration links Jira issues to the work deployed with Azure Pipelines, and it lets you view details about your Azure Pipelines deployments directly in Jira issues. Furthermore, you’ll be able to include Jira issues in release notes, as well as track issue delivery in Jira.

To learn more about this integration, or to get started today, contact our team of experts at PRAKTIK.


Run a Self-Hosted Agent Using Docker and Kubernetes

There are many reasons to host your own build agents in Azure DevOps, including cost savings, better control over your agents, and, of course, all the benefits of the cloud. However, private build agents do require their own maintenance. Wouldn’t it be nice to have a short-lived, customized agent ready whenever you needed it? This is possible using Kubernetes and Docker.

Getting Started

Before you get started, you will need Docker, an Azure DevOps account, and a Kubernetes cluster. You could run your self-hosted agent inside a Windows Server Core or Ubuntu container on its own; however, we will run our agent inside Kubernetes in order to take advantage of the power of this technology.

To get started, we are going to create a self-hosted Agent Pool in Azure DevOps. This can be done simply in the UI.

We will also need to create a Personal Access Token, which the agent will use to register with the pool.

Next, we need to create a Dockerfile and build the image. A Dockerfile is a text file that contains the instructions for building a Docker image; it is like the recipe for the image, specifying everything it needs, such as the working directory and the executables to install. This is also where you would alter your agent’s specifications should your needs change over time.
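For illustration, a Dockerfile for a Linux agent, loosely modeled on Microsoft’s documented Docker agent image, might look like the sketch below; the start.sh script (not shown) is assumed to download and configure the agent using environment variables:

    FROM ubuntu:20.04

    # Avoid interactive prompts during package installation.
    ENV DEBIAN_FRONTEND=noninteractive

    # Tools the agent and your builds need; adjust to your workloads.
    RUN apt-get update && apt-get install -y curl git jq libicu66

    WORKDIR /azp

    # start.sh registers the agent using the AZP_URL, AZP_TOKEN,
    # and AZP_POOL environment variables.
    COPY ./start.sh .
    RUN chmod +x start.sh

    ENTRYPOINT ["./start.sh"]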

Once the Docker image is built, push it to a registry, such as Azure Container Registry.

Using Docker, you’re able to avoid the maintenance associated with a self-hosted build agent. Every agent you spin up is created from the same image, giving you the agent you need, built perfectly every single time.

Running Your Agent in Kubernetes

Finally, we need to deploy to Kubernetes using a Deployment manifest. Applying the Deployment manifest will create Azure Pipelines agents in your Agent Pool. You can check that this was successful by simply viewing your Agent Pools in Azure DevOps.
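A hedged sketch of such a manifest follows; the image name, pool name, and secret are assumptions, and AZP_URL, AZP_TOKEN, and AZP_POOL are the environment variables the start script above is assumed to read:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: azdevops-agent
    spec:
      replicas: 2                    # two agents in the pool
      selector:
        matchLabels:
          app: azdevops-agent
      template:
        metadata:
          labels:
            app: azdevops-agent
        spec:
          containers:
            - name: agent
              image: myregistry.azurecr.io/azdevops-agent:latest   # assumed image
              env:
                - name: AZP_URL
                  value: https://dev.azure.com/your-organization
                - name: AZP_POOL
                  value: SelfHosted                                # assumed pool name
                - name: AZP_TOKEN                                  # PAT stored as a secret
                  valueFrom:
                    secretKeyRef:
                      name: azdevops
                      key: AZP_TOKEN

Scaling the pool up or down is then just a matter of changing the replica count.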

By provisioning agents this way, you’re able to scale to your needs on demand. Additionally, you can spin up customized agents easily and only have them running as long as you need them.

For more information, or to see how to take advantage of this technology in your organization, contact our team of experts at PRAKTIK.


Deploy to Kubernetes using Azure DevOps

Modern applications are increasingly built with containers to take advantage of their portability and efficiency. As these applications become more complex, they grow to span multiple containers across multiple servers. Kubernetes, an open-source container management tool, helps deploy containers at scale and manage this complexity. In this article, we will discuss how to deploy to Kubernetes using Azure DevOps Services.

Setting up a Kubernetes Cluster

Azure Pipelines can be used to deploy container images to Kubernetes. First, you must set up a Kubernetes cluster. You can then use the Kubectl task to deploy to, configure, and update your cluster. This task works with either an Azure Resource Manager or a Kubernetes service connection. Kubectl is a command-line tool that lets you run commands against Kubernetes clusters. If you are comfortable with it, you can also feed the task YAML manifests instead of individual commands, which gives the added benefit of being able to create more complex structures.
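As a minimal sketch, the Kubectl task in a YAML pipeline might look like this; the service connection name and manifest path are assumptions:

    steps:
      - task: Kubernetes@1
        displayName: 'kubectl apply'
        inputs:
          connectionType: 'Kubernetes Service Connection'
          kubernetesServiceEndpoint: 'MyClusterConnection'   # assumed
          namespace: 'default'
          command: 'apply'
          arguments: '-f manifests/deployment.yaml'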

Azure Kubernetes Service (AKS) is a simpler way of deploying a Kubernetes cluster in Azure. AKS is a fully managed Kubernetes service that offers an integrated continuous integration and continuous delivery (CI/CD) experience. When you deploy an AKS cluster, the control plane and all nodes are configured for you, and things like Azure Active Directory integration and monitoring can be configured during the deployment process.

Deploy to Kubernetes

Now that your cluster is set up, you’re ready to start deploying to Kubernetes. Because Azure Pipelines is so flexible, there are several different methods to complete this task. For example, you can simply use the Kubernetes resource view within your environments. This view allows for traceability from the Kubernetes object back to the pipeline, and back to the original commit. That traceability is provided by the KubernetesManifest task, which will also check for object stability and will even deploy according to your deployment strategy.
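A minimal sketch of the task follows; the service connection name and manifest paths are assumptions:

    steps:
      - task: KubernetesManifest@0
        displayName: 'Deploy manifests'
        inputs:
          action: 'deploy'
          kubernetesServiceConnection: 'MyClusterConnection'   # assumed
          namespace: 'default'
          manifests: |
            manifests/deployment.yaml
            manifests/service.yaml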

You can also use Helm to simplify your deployment. Helm is a package manager for Kubernetes that is used to deploy and manage Kubernetes apps. Azure Pipelines has built-in support for Helm charts, so you are ready to go without any additional extensions. For our purposes, the Helm build and deploy task in Azure Pipelines is perfect for packaging your app and deploying it to your Kubernetes cluster.
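For example, a hedged sketch of the Helm deploy task; the connection, chart path, and release name are assumptions:

    steps:
      - task: HelmDeploy@0
        displayName: 'helm upgrade --install'
        inputs:
          connectionType: 'Kubernetes Service Connection'
          kubernetesServiceConnection: 'MyClusterConnection'   # assumed
          command: 'upgrade'
          chartType: 'FilePath'
          chartPath: 'charts/myapp'                            # assumed chart location
          releaseName: 'myapp'
          install: true   # install on first run, upgrade thereafter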

Kubernetes can be an intimidating tool to use, especially if you’re new to it. There is no shortage of methods to deploy to Kubernetes, but it’s most important to find the method that works best for your team. For more information about deploying to Kubernetes, or to get started today, contact our team of experts.