Large-Scale Deployment Strategy with Azure Pipelines and Omaha

Deploying a client application at scale requires ensuring the compatibility, security, performance, and reliability of the software across different platforms and environments. The software also needs to be updated frequently to fix bugs, add features, and address security vulnerabilities. By using Azure Pipelines in conjunction with an application update framework such as Omaha, you can build a streamlined strategy for seamless updates, efficient delivery, and overall user satisfaction.

Azure Pipelines and Omaha

Azure Pipelines is a cloud-based continuous integration and continuous delivery (CI/CD) platform. It enables development teams to automate build, testing, and deployment, ensuring fast and reliable software delivery.

Omaha is the open-source version of Google Update, which offers a mechanism to deliver client application updates. If you're using Chrome or Edge for your browser right now, you already have Omaha working for you in the background. It makes sure users have access to the latest features, bug fixes, and security patches without manual intervention. Omaha consists of two main components: a server-side module called Omaha Server and a client-side module called Omaha Client. The Omaha Server hosts and distributes the updates. The Omaha Client runs on the end-users’ devices and periodically communicates with the Omaha Server to check for and install updates.

By leveraging Azure Pipelines and Omaha, you can automate build and deployment processes while guaranteeing efficient and reliable updates.

Deployment Strategy

The deployment strategy using these tools is quite simple. For your build, testing, and deployment, you'll use Azure Pipelines in the same way you know and love. To handle large-scale deployments, you can use tools such as Azure Kubernetes Service or Azure Pipelines Scale Set Agents. Both of these allow automatic scaling based on demand.

For updates, you'll use Omaha. Setting up Omaha requires an update server, which is where the update metadata and installer files are stored. Your installer will also need to write registry values that give Omaha important information, such as the application version. With all files and parameters provided to the server, you can distribute the Omaha client alongside your client application installer to your users, where the client runs in the background. By default, Omaha checks for an update every 24 hours and installs it automatically if one is found.
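As a rough sketch, the build side of this strategy might look like the following Azure Pipelines YAML. The solution name, artifact layout, and upload script are all hypothetical placeholders for your own installer and update server:

```yaml
# Illustrative pipeline: build the Windows installer, publish it as a
# build artifact, then hand it to the Omaha update server.
# All names and the upload script below are placeholders.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: windows-latest

steps:
  - task: VSBuild@1
    inputs:
      solution: 'MyClientApp.sln'        # hypothetical solution
      configuration: 'Release'

  - task: PublishBuildArtifacts@1
    inputs:
      pathToPublish: '$(Build.ArtifactStagingDirectory)'
      artifactName: 'installer'

  # Push the installer to your Omaha update server
  # (the script and endpoint are your own infrastructure).
  - script: python upload_to_update_server.py --file MyClientAppSetup.exe
    displayName: 'Publish installer to update server'
```

From here, Omaha takes over on the client side; the pipeline only has to keep the update server supplied with new versions.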

By using Omaha for updates instead of manual intervention, we are reducing human error, increasing efficiency, and allowing teams to focus on new features as opposed to maintaining old ones. This is especially true for large-scale client applications.

For more information, or to get started today, contact our team of experts at PRAKTIK.


Migrating Bitbucket Repositories: GitHub vs. Azure DevOps

When software development teams grow and evolve, they may need to move their code repositories from one platform to another. However, moving repositories can be a complex process, especially if the repositories contain large file histories. In this article, we will discuss the advantages and disadvantages of migrating from Bitbucket to either GitHub or Azure DevOps, particularly with large file histories.

Migrating from Bitbucket

Choosing to migrate your Bitbucket repositories is a big decision. Maybe you want to consolidate your tools into fewer ecosystems. Perhaps your repository is getting too large and you want to have more space to grow. Or maybe you want more flexibility in customization. Whatever the reason, it's important to consider the pros and cons of each tool.


Migrating to GitHub

GitHub is the largest code hosting platform in the world, which means it comes with a huge community. This community can provide support and collaboration, especially with tools like pull requests and code reviews. Furthermore, GitHub is owned by Microsoft, which means it integrates well with their tools.

However, GitHub does have some disadvantages, particularly for migrating a repository with a large history. For instance, large repositories can negatively affect performance. GitHub recommends keeping repositories under 1 GB for optimal performance, and no greater than 5 GB. Furthermore, GitHub blocks files larger than 100 MB. For these larger files, you can use Git Large File Storage (LFS), which replaces large files in your repository with lightweight pointers and stores the file contents outside the repository.

Migrating to Azure DevOps

Azure Repos is a part of the Azure DevOps suite of tools. This means that it has the major advantage of native integration with the rest of the Azure DevOps suite, Azure, and Visual Studio, making it a more natural choice for migration if you're already using these tools. It also has integration and collaboration tools similar to GitHub's. However, Azure DevOps does not currently have the same community that GitHub has. Furthermore, some new users find the learning curve and the sheer volume of tools available to be intimidating.

That said, Azure DevOps is far less restrictive about size than GitHub. Repositories are limited to 250 GB, with a recommended size of 10 GB or less for optimal performance, and individual pushes can be up to 5 GB. Azure DevOps also supports Git LFS for managing larger files, alongside Azure Artifacts for packages.

For more information, or to get started today, contact our team of experts here at PRAKTIK.


Azure Virtual Machine Scale Set Agents

To build your code or deploy your solution, you need an agent. Agents can be self-hosted or hosted by some other entity in the cloud, such as Microsoft. Microsoft-hosted agents have their benefits, such as requiring no maintenance. Self-hosted agents, on the other hand, give you greater control over the agent and let you install specific software. One issue both types of agents share is scalability. This problem can be solved by using Azure Virtual Machine Scale Set agents.

Use Cases

Azure Virtual Machine Scale Set agents are a form of self-hosted agents that can be scaled automatically to meet your needs. This means that the management of your self-hosted agents will be simplified. For example, you'll be able to de-provision agent machines instead of running them around the clock, therefore cutting down on costs.

Because they are self-hosted, you have control over the size and the image of the machines on which the agents run. Scale set agents can also be helpful if you have specific requirements, such as the need to restrict network connectivity of agent machines or running configuration warmups before the agent begins accepting jobs.

As a note, you can only run Windows or Linux agents using scale sets.

Creating the scale set

First, you must create a Virtual Machine Scale Set in the Azure portal. This is what will allow you to create and manage a group of load-balanced virtual machines that will be your agents. Azure Pipelines will manage your VMSS agents by making sure there is always the appropriate number of agents available for jobs. The number of VM instances will increase or decrease based on demand or a defined schedule. Because Azure Pipelines handles this scaling itself, you must configure settings such as the scale set's own autoscaling and overprovisioning so they don't conflict with how Pipelines manages the instances.

After your Virtual Machine Scale Set is created, you then need to create and configure the scale set agent pool in Azure Pipelines. This is where you decide whether to automatically tear down virtual machines after each use or let them run interactive tests. You'll also set the number of agents you want on standby for jobs, as well as the maximum number of agents allowed in the scale set.
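Once the scale set agent pool exists, targeting it from a pipeline is a one-line change. A minimal sketch, assuming a pool named vmss-agent-pool (the name is illustrative):

```yaml
# Target the scale set agent pool by name instead of a Microsoft-hosted image.
pool:
  name: vmss-agent-pool   # hypothetical scale set agent pool name

steps:
  - script: echo "Running on a scale set agent"
    displayName: Sanity check
```

Azure Pipelines then provisions or de-provisions VM instances behind that pool according to the standby and maximum counts you configured.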

Using a scale set agent pool is just like using any other agent pool. So, with your scale set agent pool successfully created, you're ready to use it in your build, release, or YAML pipelines. For more information, or to get started today, contact our team of experts here at PRAKTIK.


Deploy Azure Logic Apps with Azure DevOps

Technology is constantly changing. This means that sometimes we end up with several different services for everything we do. The problem then becomes integrating these services in a way that caters to the scalability and performance we need. Azure Logic Apps is a serverless cloud platform that facilitates this integration.

Azure Logic Apps Overview

Azure Logic Apps helps you integrate and orchestrate all your different services so they work together seamlessly. It has hundreds of built-in connectors, but you can also create custom connectors for whatever service you require. Since Azure Logic Apps is serverless, the underlying platform handles scale, availability, and performance. All you have to do is define the workflow with a trigger and the desired actions for it to perform.

To help you automatically create and deploy a Consumption logic app, you can create an Azure Resource Manager template. Alternatively, you can use the prebuilt logic app ARM template that is provided. An ARM template defines the infrastructure and configuration of your project, and is where you will specify the resources to deploy and their properties. To create this template, you can use Visual Studio, or the LogicAppTemplate module in Azure PowerShell.

Deploying Azure Logic Apps

Once you have the ARM template for your Azure Logic Apps, it is time to deploy it. First, create an empty pipeline in Azure Pipelines. Next, choose the resources you need for the pipeline. In this case, you’ll need your logic app template and any template parameter files. Then, for your agent job, add the Azure Resource Manager deployment task and configure it with an Azure Active Directory service principal. The service principal grants the authorization needed to deploy and generate the release pipeline. This is also when you will add references to your logic app template and template parameter files. Finally, continue to build out the steps in your release process, such as adding additional environments, automated tests, or approvers. Now you’re ready to deploy your logic app and begin taking advantage of the serverless integration and orchestration benefits.
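As an illustration of the agent job described above, the ARM deployment task can be configured in YAML roughly as follows. The service connection, resource group, location, and file paths are placeholders for your own values:

```yaml
# Sketch of the ARM deployment step for a Consumption logic app.
# The service connection is backed by the Azure AD service principal.
steps:
  - task: AzureResourceManagerTemplateDeployment@3
    inputs:
      deploymentScope: 'Resource Group'
      azureResourceManagerConnection: 'my-service-connection'   # placeholder
      subscriptionId: '$(subscriptionId)'
      resourceGroupName: 'rg-logicapps'                         # placeholder
      location: 'East US'
      csmFile: '$(Pipeline.Workspace)/drop/logicapp-template.json'
      csmParametersFile: '$(Pipeline.Workspace)/drop/logicapp-parameters.json'
      deploymentMode: 'Incremental'
```

Incremental mode leaves other resources in the resource group untouched, which is usually what you want in a shared environment.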

For more information, or to get started today, contact our team of experts here at PRAKTIK.


How to Automate Azure Data Factory Deployment with Azure DevOps

Virtually every part of modern life generates data, from using a credit card to shop for groceries to driving a car to work. Because of this huge volume of data, there needs to be a way to not only store the information but orchestrate and operationalize it into actionable business insights. Azure Data Factory is a managed cloud service that does just this. Using Azure DevOps, we can implement continuous deployment practices to automate Azure Data Factory deployment.

Getting Started

Before integrating Azure Data Factory with Azure DevOps for your automatic deployment, you must first ensure all prerequisites are met. First, you will need an Azure subscription linked to Azure DevOps Server or Azure Repos that uses the Azure Resource Manager (ARM) service endpoint. Next, you will need a data factory configured with Azure Repos Git integration. You will also need an Azure key vault that contains the secrets for each environment.

Azure Data Factory Integration with Azure Pipelines

With all prerequisites met, you’re now ready to set up your Azure Pipelines release. In Azure DevOps, open the project that holds your data factory. Then, open the tab for releases and select the option to create a new release pipeline. For this pipeline, you’ll choose the Empty job template. With the pipeline created, you’re going to modify it by adding an artifact. Here, that artifact is the Git repository configured with your data factory. Next, you’re going to add an ARM deployment task and configure it for this job.

If you have secrets to pass in an ARM template, it is recommended to use Azure Key Vault in your release. To do this, simply add an Azure Key Vault task before the ARM task in your pipeline. It is also recommended to keep separate key vaults for each environment.
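A sketch of that ordering in YAML, with the service connection, vault, and resource names as placeholders (ARMTemplateForFactory.json matches the template name Data Factory generates when publishing):

```yaml
steps:
  # Fetch the environment's secrets first so later tasks can reference
  # them as pipeline variables. Vault and connection names are illustrative.
  - task: AzureKeyVault@2
    inputs:
      azureSubscription: 'my-service-connection'   # placeholder
      KeyVaultName: 'kv-datafactory-dev'           # one vault per environment
      SecretsFilter: '*'

  - task: AzureResourceManagerTemplateDeployment@3
    inputs:
      deploymentScope: 'Resource Group'
      azureResourceManagerConnection: 'my-service-connection'
      subscriptionId: '$(subscriptionId)'
      resourceGroupName: 'rg-datafactory-dev'      # placeholder
      location: 'East US'
      csmFile: '$(System.DefaultWorkingDirectory)/_adf/ARMTemplateForFactory.json'
      csmParametersFile: '$(System.DefaultWorkingDirectory)/_adf/ARMTemplateParametersForFactory.json'
```

Because the Key Vault task runs first, secret values can be passed into the ARM template via parameter overrides without ever appearing in the pipeline definition.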

Automating Deployment

Your release pipeline is complete, but now we want to make sure your deployment is automated. This can be done using release triggers. When the trigger conditions are met, the pipeline will automatically deploy your artifacts to the specified environment. To allow your release to move from environment to environment in your release pipeline, you’ll want to set up stage triggers. To do this, click the lightning icon on an environment and configure its pre-deployment conditions, including the specific condition(s) you want to act as the trigger.

You now have an automated Azure Data Factory deployment! For more information, or to get started, contact our team of experts here at PRAKTIK.


Azure DevOps Server 2022 Final Release

This month, the final release of Azure DevOps Server 2022 was made public. It rolls up the bug fixes and features from the previous release candidates. Let’s discuss some of the feature highlights you can expect with this latest version of Azure DevOps Server.

Delivery Plans

Delivery Plans provide an interactive calendar view of multiple team backlogs. They give you a timeline view of the work, the progress of the work, and dependency tracking. There are two main views of delivery plans: Condensed and Expanded. Condensed is more beneficial for at-a-glance information, while Expanded gives a fuller view. Both can be helpful visualizations of a project to enable effective planning and delivery.

This feature is available without an extension for Azure DevOps Services and Azure DevOps Server 2022.

Widget Improvements

The Group By Tags chart widget is now available by default. When using the widget, there is an option for tags. This lets you visualize your work items by selecting all tags or a chosen set of tags in the widget. Additionally, you can now display custom work items in your burndown widget. To try it out, browse the widget catalog.

Generate Unrestricted Token for Fork Builds

When Azure Pipelines builds contributions from a fork of a GitHub Enterprise repo, it restricts permissions and doesn’t allow pipeline secrets to be accessed. This can be more restrictive than necessary in closed environments. While there are pipeline settings to make secrets available to forks, there was previously no setting to control the job access token scope. This new feature lets you generate a regular job access token, even for fork builds.

To get a full overview of the many features available in Azure DevOps Server 2022, visit the release notes. For more information, contact our team of experts here at PRAKTIK.


Build and Deploy Apps in a Private Kubernetes Cluster with Azure DevOps

Security is a top concern for many developers and consumers alike. This is especially true if you’re in the financial or government sectors with a lot of sensitive or classified information. Many of us also want to use container orchestration tools like Kubernetes for deployment to allow for faster time-to-market and simplified scalability. One way to ensure the security of your application, as well as to take advantage of Kubernetes, is by deploying to a private Kubernetes cluster using Azure DevOps.

Why a Private Cluster?

A private cluster is just that: private. But how exactly? The API server is how you control and access your Kubernetes control plane. By using a private API server, you ensure that all network traffic between the API server and your node pools remains on the private network only. The two communicate through the Azure Private Link service in the API server virtual network and a private endpoint exposed in the subnet of your AKS cluster.

Build and Deploy in the Private Kubernetes Cluster

Since your AKS cluster is only accessible within its virtual network, you’ll need a self-hosted agent within that same network. Therefore, you’ll start by creating a virtual network. Next, you’ll create a private Azure Container Registry (ACR), as well as the registry’s private endpoint that you’ll use to integrate AKS with ACR. With your private AKS cluster created and integrated, you’ll need a virtual machine to host your agent. This virtual machine will live in the same virtual network as your AKS cluster.

After you’ve deployed your agent on the virtual machine, you’ll create the pipeline you want to build and deploy your app with. You can do this from Azure DevOps Services or from your own instance of Azure DevOps Server. It may seem a little counter-intuitive to use Azure DevOps Services for your build and deployment because it is on the public internet. However, by using a service endpoint, your virtual network resources use private IP addresses to connect to Azure DevOps Services’ public endpoint. This effectively extends the identity of the virtual network to the target resource. Additionally, traffic flows over the Azure backbone instead of over the internet. Therefore, you can take advantage of the ease and power of Azure DevOps Services while still maintaining the level of security and privacy required by your organization.
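Putting the pieces together, a build-and-deploy pipeline for this setup might look roughly like the following. The pool, service connections, registry, and manifest paths are all illustrative:

```yaml
# Sketch: run on the self-hosted agent inside the virtual network,
# build and push to the private ACR, then deploy to the private AKS cluster.
pool:
  name: private-vnet-pool          # hypothetical self-hosted agent pool

steps:
  - task: Docker@2
    inputs:
      containerRegistry: 'private-acr-connection'   # placeholder connection
      repository: 'myapp'
      command: 'buildAndPush'
      dockerfile: 'Dockerfile'
      tags: '$(Build.BuildId)'

  - task: KubernetesManifest@0
    inputs:
      action: 'deploy'
      kubernetesServiceConnection: 'private-aks-connection'  # placeholder
      namespace: 'production'
      manifests: 'manifests/deployment.yml'
      containers: 'myregistry.azurecr.io/myapp:$(Build.BuildId)'
```

Because the agent sits inside the virtual network, both the ACR push and the AKS deployment stay on private addresses end to end.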

For more information, or to get started today, contact our team of experts at PRAKTIK.


Aha! Roadmaps Integration with Azure DevOps

In order to develop applications successfully, we need to know where we’re going, how to get there, and how we’re doing so far. We also need to be able to coordinate across teams, even if those teams use different tools in their day-to-day. Aha! Roadmaps is a tool to help you effectively strategize across teams. In this article, we will discuss Aha! Roadmaps integration with Azure DevOps Services and Azure DevOps Server.

How It Works

Aha! Roadmaps is one of a suite of Aha! tools that enable collaboration between all teams, from developers to product managers. Historically, teams may have used different tools to track their progress and inform their strategy. That meant extra work to collaborate, so collaboration sometimes wasn’t prioritized as it should have been. Aha! Roadmaps is a singular hub for all things strategy, meant to be used for collaboration between cross-functional teams. It is also built to integrate with multiple tools, so that your teams can work where they’re comfortable. This integration provides real-time, two-way updates that are fully customizable to your team’s workflow and terminology.

Integration

To get started with integrating Aha! Roadmaps with Azure DevOps, you’re going to start in Aha! Roadmaps and build or import your records. To do this, as a workspace owner, you will simply add a new cloud or on-premises integration from the workspace settings in Aha!. This will launch the integration wizard, which will ask you to create a template and authenticate your Azure DevOps credentials. After authentication, you’ll get to choose your project and start configuring your integration mappings.

If you’d like to take advantage of two-way sync for updates, you’ll set up webhooks in Azure DevOps using the webhook URL in the Aha! integration configuration. You can also use webhooks to send security-related events to a SIEM system, or to stream activity to a third-party tool. If you want additional integration security for your Azure DevOps Server instance, you also have the option to include a client certificate in your integration settings.

Aha! Roadmaps integrates with Azure DevOps to allow you to be as productive as possible, all while enabling that all-important cross-team collaboration. For more information, or to get started today, contact our team of experts at PRAKTIK.


Test-Drive Feature Readiness Before Release

Open-source development has enabled greater collaboration than ever before. Whole communities of people who have never met in person can work together to create just about anything. However, this does not come without its challenges. One such challenge is quality control. Even with strict pull request requirements, it can be difficult to ensure every pull request receives proper testing. Kubernetes preview environments and build validations in Azure Pipelines can make this easier.

Preview Environments in Kubernetes

Kubernetes is an open-source container orchestration tool that automates the deployment, scaling, and management of containerized applications. It enables developers to deploy quickly into test and production environments. A preview environment is an ephemeral environment created with the code of your pull request. By using a preview environment, you can see your changes live and test-drive features before merging to master. Furthermore, by using namespaces within your Kubernetes cluster, you can test your changes in a fully isolated environment that can be destroyed when you’re finished by simply closing the PR.

To deploy pull requests for review with Azure DevOps, you need to add a build validation branch policy to your Azure Pipeline.

Build Validations

Build validations are tests that run on a build to check changes made before a release. In the Repos submenu in Azure DevOps, you’ll define the requirements for pull requests that are being made against your selected branch. With these policies in place, every time someone creates a new pull request targeting the branch you defined the policy for, a reviewer can manually decide to deploy the changes to a dedicated Kubernetes namespace for detailed review within your preview environment. Alternatively, you can choose to have this deployment to the namespace happen automatically.
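As a sketch, a pipeline used as a build validation could create a per-PR namespace and deploy into it, giving each pull request an isolated preview environment. The service connection and manifest paths are hypothetical:

```yaml
# Sketch of a build-validation pipeline: one namespace per pull request.
# System.PullRequest.PullRequestId is populated when a branch policy
# queues the run; connection and file names below are placeholders.
steps:
  - script: |
      kubectl create namespace pr-$(System.PullRequest.PullRequestId) \
        --dry-run=client -o yaml | kubectl apply -f -
    displayName: 'Create (or reuse) the PR namespace'

  - task: KubernetesManifest@0
    inputs:
      action: 'deploy'
      kubernetesServiceConnection: 'preview-cluster-connection'  # placeholder
      namespace: 'pr-$(System.PullRequest.PullRequestId)'
      manifests: 'manifests/app.yml'
```

Deleting the namespace when the PR closes tears down everything deployed into it, which is what makes the environment truly ephemeral.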

In modern DevOps development, deploying quickly and often is critical. Equally important is making sure the code you’re putting into production is valuable to your end users. For more information, or to get started today, contact our team of experts here at PRAKTIK.


DevOps Deployments and Ansible

Doing things by hand is an accident-prone method of development. DevOps best practices state that we should automate as much as possible. Automation helps developers be more productive elsewhere, in addition to eliminating human error. We can follow this best practice by using tools that enable things like Continuous Integration and Continuous Deployment. One such tool is Ansible.

Ansible Basics

Ansible is an open-source tool that enables automation in your environment. In the past, you may have needed to coordinate manually to deliver an application to your end users. Now, Ansible can do that work for you, automating everything from cloud provisioning and application deployment to configuration management and more.

Ansible can also help orchestrate zero-downtime deployments to deliver the best experience to your end user. This is especially important in our fast-paced world; consumers expect to be able to reach their services at all times, while simultaneously desiring the services to be continuously improved.

How Does Ansible Work?

Getting started with Ansible is as simple as describing your automation job in YAML. This description takes the form of an Ansible Playbook. When going through your DevOps CI/CD pipeline, Ansible uses this playbook to provision everything you described for your deployment in the exact same way, every time. This means that your deployments are simple and repeatable, without any human error.
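For illustration, a minimal playbook might look like this; the inventory group and the choice of nginx are purely illustrative:

```yaml
# A minimal Ansible playbook sketch: install and start nginx on every
# host in the 'webservers' inventory group (group name is a placeholder).
- name: Deploy web tier
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Running the same playbook against the same inventory always converges the hosts to the same state, which is what makes the deployment repeatable.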

To create and provision resources in Azure, you’ll need a Linux VM with Ansible configured. Additionally, as Ansible is agentless, it will need SSH authentication using a key pair and an SSH service connection in Azure DevOps.

To learn more, or to get started with Ansible today, contact our team of experts here at PRAKTIK.