DevOps

Add a Service Connection to Azure Stack Hub in Azure DevOps

[Image: Pipelines]

One of the key aspects of the system formerly known as Azure Stack, now called Azure Stack Hub (ASH), is that it is a target for automation. Chances are, if you are considering running this at scale and you are deploying content through the portal UI, you’re probably doing it wrong.

ASH makes a great target for, among other things, a DevOps toolchain. I’m not sure if you have tried connecting from the Azure DevOps portal recently, but Azure DevOps has undergone a lot of changes and much of the documentation no longer matches what you see.

I wanted to share the steps to create a ‘Service Connection’ from Azure DevOps. This assumes you are using an Azure AD-connected ASH installation.

First, you’ll need to create a Service Principal in Azure AD.

The steps are shown briefly below. Here are detailed instructions if needed:
https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal


Navigate to your Azure AD and register a new application

[Screenshot: Register an application - Microsoft Azure]

Make sure to copy the Application (Client) ID and the Directory (Tenant) ID somewhere temporary, like Notepad.

[Screenshot: PPE1DatacenterA]

Next, click ‘Certificates & Secrets’
Select ‘+ New client secret’
Enter a description and key expiry length
Make sure to copy the secret value; it is only displayed once

[Screenshot: Certificates & Secrets]
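If you prefer to script this part, here is a minimal sketch using the Az PowerShell module. It assumes you are already signed in with Connect-AzAccount against your Azure AD tenant; the display name is a placeholder, and older Az versions may not return the secret text directly the way shown here.

[powershell]

# Hypothetical display name; assumes an existing Connect-AzAccount session.
$app = New-AzADApplication -DisplayName "AzureStackHub-DevOps"
$sp = New-AzADServicePrincipal -ApplicationId $app.AppId

# Create a client secret (pick your own expiry length).
$secret = New-AzADAppCredential -ApplicationId $app.AppId -EndDate (Get-Date).AddYears(1)

# Capture these for the service connection form later.
"Application (Client) ID: $($app.AppId)"
"Directory (Tenant) ID:   $((Get-AzContext).Tenant.Id)"
"Client secret:           $($secret.SecretText)"

[/powershell]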

You need to add the SPN you have just created to your Azure Stack subscription through Access control (IAM). At this stage you might want to capture the Azure Stack subscription ID and name for later.

[Screenshot: Users - Microsoft Azure Stack]
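If you would rather script the role assignment too, something like the following should work once you are connected to your Azure Stack Hub user environment. The environment name, role, and placeholder values here are assumptions; adjust them to your installation.

[powershell]

# Assumes the Azure Stack Hub user environment has been registered
# (for example with Add-AzEnvironment) under the name used below.
Connect-AzAccount -Environment "AzureStackUser"
Set-AzContext -Subscription "{Azure Stack Subscription ID}"

# Grant the SPN Contributor rights on the subscription.
New-AzRoleAssignment -ApplicationId "{Application (Client) ID}" `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/{Azure Stack Subscription ID}"

[/powershell]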

For Azure DevOps, I am going to assume you already have an organization and a project configured.

Go to the Project Settings page and, under the Pipelines section, select ‘Service Connections’, then click ‘New service connection’.

[Screenshot: Service Connections]

Finally, we can create the ‘Service Connection’ using ‘Service Principal Authentication’.

You need to click the hyperlink ‘Use the full version of the service connection dialog’.

Select ‘Azure Stack’ from the drop-down list (obviously Azure DevOps hasn’t been told about the name change just yet).

Complete the form and select ‘Verify Connection’
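For reference, here is roughly what the full dialog asks for, most of which you captured earlier. The management endpoint format shown is the typical one for an Azure AD-connected installation; confirm yours with your operator.

  • Environment URL: the ASH tenant management endpoint, e.g. https://management.<region>.<fqdn>
  • Subscription ID and subscription name: from your Azure Stack subscription
  • Service principal ID: the Application (Client) ID
  • Service principal key: the client secret you copied
  • Tenant ID: the Directory (Tenant) ID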

And you’re done. Good luck with your pipeline!

Migrate your project from Jira to Azure DevOps

[Image: Jira to VSTS]

My team recently had the need to migrate two of our project boards from Jira to Azure DevOps (formerly VSTS). There were a whole lot of suggestions when I googled with Bing, but not a whole lot of sample code that I could start with. This is completely understandable. With both systems being highly customizable and the needs of your team being unique, it would be near impossible to come up with a complete solution that works flawlessly for everyone. So, I decided to provide one.

Just kidding.

I hacked together pieces from all over to come up with a solution that worked for my project. It is by no means robust and bulletproof, but it does get the job done and is open to improvement and tailoring. In short, it is a good starting point for anyone needing to do this type of migration.

It is done as a console app without any of the trappings of a UI. This is a process that is usually executed once, so a UI is not necessary. It is designed to be run under the debugger, which has the added benefit that it can be monitored and paused whenever you want.

I had a few things that I was interested in. These may or may not line up with your requirements.

  • Migrate two Jira projects/boards into a single Azure DevOps project

  • Each Jira project’s work items would be configured as children of an Azure DevOps epic

  • Jira epics are mapped to features

  • Jira PBIs are mapped to PBIs

  • Jira tasks and sub-tasks are mapped to tasks

You can absolutely go nuts migrating all the history of your PBIs. If that is your case, it might be better to find someone who specializes in this type of migration. In my case, I wanted some limited history. Here is what I was hoping to migrate:

  • Created By and Created Date

  • Assigned To

  • Work item hierarchy

  • Title and description

  • Status (To Do, Done, etc.)

  • Priority

  • Attachments

  • Comments

  • Tags

You'll notice that I did not migrate anything to do with sprints. In my case, both Jira projects had a different number of completed sprints and it wasn't important enough to keep the sprint history to deal with this inconsistency. If you have that need, good luck!

I am using the Azure DevOps Scrum template for my project. It should work for other templates as well, but I have not tested it, so your mileage may vary.

Code

Enough already. Show me the code! Ok, ok.

Nuget Packages

You'll need three NuGet packages:


Install-Package Atlassian.SDK
Install-Package Microsoft.VisualStudio.Services.Client
Install-Package Microsoft.TeamFoundationServer.Client

Credentials

You'll need to configure the connection to Jira and Azure DevOps. The todo block at the top contains some constants for this.

You'll need an Azure DevOps personal access token. See the Azure DevOps documentation for more information about personal access tokens.

You'll also need a local user account for Jira. Presumably, you could connect using an OpenId account. However, the SDK did not seem to provide an easy way to do this and, in the end, it was easier to create a temporary local admin account.

Field Migrations

Some fields, like title and attachments, migrate just fine. Others need a little massaging. For example, rich text in Jira uses Markdown, while rich text in Azure DevOps (at this point) uses HTML. In my case, I decided to punt on converting between Markdown and HTML. It wasn't worth spending the time, and Azure DevOps is likely to support Markdown rich text in the future.

Another place that needs massaging is work item statuses. The two systems are close enough that, if you haven't customized your Azure DevOps states, the provided mapping should work pretty well.

Lastly, username conversion is completely unimplemented. You'll have to provide your own mapping. In my case, we only had a dozen developers and stakeholders, so I just created a static mapping. If your Jira usernames naturally map to your Azure DevOps usernames (ours didn't), you could probably just tack on your @contoso.com and call it a day. Unfortunately, our Jira instance used a completely different AAD tenant than our Azure DevOps organization. There were also some inconsistencies in usernames between the two systems.

Idempotency

You'll notice that the migration keeps a log of everything that has been migrated so far. This accomplishes two things:

  1. An easy way to look up the completed mapping of Jira items to Azure DevOps items. This is essential to keep the Jira hierarchy.

  2. It allows you to resume after an inevitable exception without re-importing everything again. If you do need to start over, simply delete the migrated.json file in the project's root directory.
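For the curious, the log is nothing fancy. The code just serializes its dictionary of Jira keys to Azure DevOps work item IDs to migrated.json, so (with hypothetical keys and IDs) it looks something like this:

{
  "MYPROJ-1": 1042,
  "MYPROJ-2": 1043,
  "MYPROJ-3": 1044
}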

That's It

Good luck in your migration! I hope this helps.


using Atlassian.Jira;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi.Models;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
using Microsoft.VisualStudio.Services.WebApi.Patch.Json;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace JiraMigration
{
    class Program
    {
        // TODO: Provide these
        const string VstsUrl = "https://{AzureDevOps Organization}.visualstudio.com";
        const string VstsPAT = "{AzureDevOps Personal Access Token}";
        const string VstsProject = "{AzureDevOps Project Name}";

        const string JiraUserID = "{Jira local username}";
        const string JiraPassword = "{Jira local password}";
        const string JiraUrl = "{Jira instance url}";
        const string JiraProject = "{Jira Project abbreviation}";
        // END TODO

        // These provide the ability to resume a migration if an error occurs.
        //
        static string MigratedPath = Path.Combine(Environment.CurrentDirectory, "..", "..", "migrated.json");
        static Dictionary<string, int> Migrated = File.Exists(MigratedPath)
            ? JsonConvert.DeserializeObject<Dictionary<string, int>>(File.ReadAllText(MigratedPath))
            : new Dictionary<string, int>();

        static void Main(string[] args) => Execute().GetAwaiter().GetResult();

        static async Task Execute()
        {
            var vstsConnection = new VssConnection(new Uri(VstsUrl), new VssBasicCredential(string.Empty, VstsPAT));
            var witClient = vstsConnection.GetClient<WorkItemTrackingHttpClient>();

            var jiraConn = Jira.CreateRestClient(JiraUrl, JiraUserID, JiraPassword);

            var issues = jiraConn.Issues.Queryable
                .Where(p => p.Project == JiraProject)
                .Take(Int32.MaxValue)
                .ToList();

            // By default this will root the migrated items at the root of the Vsts project.
            // Uncomment this line and provide an epic id if you want everything to be
            // a child of a Vsts epic.
            //
            //AddMigrated(JiraProject, {VstsEpic Id});

            foreach (var feature in issues.Where(p => p.Type.Name == "Epic"))
                await CreateFeature(witClient, feature);
            foreach (var bug in issues.Where(p => p.Type.Name == "Bug"))
                await CreateBug(witClient, bug, JiraProject);
            foreach (var backlogItem in issues.Where(p => p.Type.Name == "Story"))
                await CreateBacklogItem(witClient, backlogItem, JiraProject);
            foreach (var task in issues.Where(p => p.Type.Name == "Task" || p.Type.Name == "Sub-task"))
                await CreateTask(witClient, task, JiraProject);
        }

        static Task CreateFeature(WorkItemTrackingHttpClient client, Issue jira) =>
            CreateWorkItem(client, "Feature", jira,
                jira.Project,
                jira.CustomFields["Epic Name"].Values[0],
                jira.Description ?? jira.Summary,
                ResolveFeatureState(jira.Status));

        static Task CreateBug(WorkItemTrackingHttpClient client, Issue jira, string defaultParentKey) =>
            CreateWorkItem(client, "Bug", jira,
                jira.CustomFields["Epic Link"]?.Values[0] ?? defaultParentKey,
                jira.Summary,
                jira.Description,
                ResolveBacklogItemState(jira.Status));

        static Task CreateBacklogItem(WorkItemTrackingHttpClient client, Issue jira, string defaultParentKey) =>
            CreateWorkItem(client, "Product Backlog Item", jira,
                jira.CustomFields["Epic Link"]?.Values[0] ?? defaultParentKey,
                jira.Summary,
                jira.Description,
                ResolveBacklogItemState(jira.Status),
                new JsonPatchOperation { Path = "/fields/Microsoft.VSTS.Scheduling.Effort", Value = jira.CustomFields["Story Points"]?.Values[0] });

        static Task CreateTask(WorkItemTrackingHttpClient client, Issue jira, string defaultParentKey) =>
            CreateWorkItem(client, "Task", jira,
                jira.ParentIssueKey ?? defaultParentKey,
                jira.Summary,
                jira.Description,
                ResolveTaskState(jira.Status));

        static async Task CreateWorkItem(WorkItemTrackingHttpClient client, string type, Issue jira, string parentKey, string title, string description, string state, params JsonPatchOperation[] fields)
        {
            // Short-circuit if we've already processed this item.
            //
            if (Migrated.ContainsKey(jira.Key.Value)) return;

            var vsts = new JsonPatchDocument
            {
                new JsonPatchOperation { Path = "/fields/System.State", Value = state },
                new JsonPatchOperation { Path = "/fields/System.CreatedBy", Value = ResolveUser(jira.Reporter) },
                new JsonPatchOperation { Path = "/fields/System.CreatedDate", Value = jira.Created.Value.ToUniversalTime() },
                new JsonPatchOperation { Path = "/fields/System.ChangedBy", Value = ResolveUser(jira.Reporter) },
                new JsonPatchOperation { Path = "/fields/System.ChangedDate", Value = jira.Created.Value.ToUniversalTime() },
                new JsonPatchOperation { Path = "/fields/System.Title", Value = title },
                new JsonPatchOperation { Path = "/fields/System.Description", Value = description },
                new JsonPatchOperation { Path = "/fields/Microsoft.VSTS.Common.Priority", Value = ResolvePriority(jira.Priority) }
            };
            // NOTE: replace the hardcoded organization URL below with your own (it should match VstsUrl).
            if (parentKey != null)
                vsts.Add(new JsonPatchOperation { Path = "/relations/-", Value = new WorkItemRelation { Rel = "System.LinkTypes.Hierarchy-Reverse", Url = $"https://ciappdev.visualstudio.com/_apis/wit/workItems/{Migrated[parentKey]}" } });
            if (jira.Assignee != null)
                vsts.Add(new JsonPatchOperation { Path = "/fields/System.AssignedTo", Value = ResolveUser(jira.Assignee) });
            if (jira.Labels.Any())
                vsts.Add(new JsonPatchOperation { Path = "/fields/System.Tags", Value = jira.Labels.Aggregate("", (l, r) => $"{l}; {r}").Trim(';', ' ') });
            foreach (var attachment in await jira.GetAttachmentsAsync())
            {
                var bytes = await attachment.DownloadDataAsync();
                using (var stream = new MemoryStream(bytes))
                {
                    var uploaded = await client.CreateAttachmentAsync(stream, VstsProject, fileName: attachment.FileName);
                    vsts.Add(new JsonPatchOperation { Path = "/relations/-", Value = new WorkItemRelation { Rel = "AttachedFile", Url = uploaded.Url } });
                }
            }

            var all = vsts.Concat(fields)
                .Where(p => p.Value != null)
                .ToList();
            vsts = new JsonPatchDocument();
            vsts.AddRange(all);

            var workItem = await client.CreateWorkItemAsync(vsts, VstsProject, type, bypassRules: true);
            AddMigrated(jira.Key.Value, workItem.Id.Value);

            await CreateComments(client, workItem.Id.Value, jira);

            Console.WriteLine($"Added {type}: {jira.Key} {title}");
        }

        static async Task CreateComments(WorkItemTrackingHttpClient client, int id, Issue jira)
        {
            var comments = (await jira.GetCommentsAsync())
                .Select(p => CreateComment(p.Body, p.Author, p.CreatedDate?.ToUniversalTime()))
                .Concat(new[] { CreateComment($"Migrated from {jira.Key}") })
                .ToList();
            foreach (var comment in comments)
                await client.UpdateWorkItemAsync(comment, id, bypassRules: true);
        }

        static JsonPatchDocument CreateComment(string comment, string username = null, DateTime? date = null)
        {
            var patch = new JsonPatchDocument
            {
                new JsonPatchOperation { Path = "/fields/System.History", Value = comment }
            };
            if (username != null)
                patch.Add(new JsonPatchOperation { Path = "/fields/System.ChangedBy", Value = ResolveUser(username) });
            if (date != null)
                patch.Add(new JsonPatchOperation { Path = "/fields/System.ChangedDate", Value = date?.ToUniversalTime() });

            return patch;
        }

        static void AddMigrated(string jira, int vsts)
        {
            if (Migrated.ContainsKey(jira)) return;

            Migrated.Add(jira, vsts);
            File.WriteAllText(MigratedPath, JsonConvert.SerializeObject(Migrated));
        }

        static string ResolveUser(string user)
        {
            // Provide your own user mapping
            //
            switch (user)
            {
                case "anna.banana": return "anna.banana@contoso.com";
                default: throw new ArgumentException("Could not find user", nameof(user));
            }
        }

        static string ResolveFeatureState(IssueStatus state)
        {
            // Customize if your Vsts project uses custom feature states.
            //
            switch (state.Name)
            {
                case "Needs Approval": return "New";
                case "Ready for Review": return "In Progress";
                case "Closed": return "Done";
                case "Resolved": return "Done";
                case "Reopened": return "New";
                case "In Progress": return "In Progress";
                case "Backlog": return "New";
                case "Selected for Development": return "New";
                case "Open": return "New";
                case "To Do": return "New";
                case "DONE": return "Done";
                default: throw new ArgumentException("Could not find state", nameof(state));
            }
        }

        static string ResolveBacklogItemState(IssueStatus state)
        {
            // Customize if your Vsts project uses custom backlog item states.
            //
            switch (state.Name)
            {
                case "Needs Approval": return "New";
                case "Ready for Review": return "Committed";
                case "Closed": return "Done";
                case "Resolved": return "Done";
                case "Reopened": return "New";
                case "In Progress": return "Committed";
                case "Backlog": return "New";
                case "Selected for Development": return "Approved";
                case "Open": return "Approved";
                case "To Do": return "New";
                case "DONE": return "Done";
                default: throw new ArgumentException("Could not find state", nameof(state));
            }
        }

        static string ResolveTaskState(IssueStatus state)
        {
            // Customize if your Vsts project uses custom task states.
            //
            switch (state.Name)
            {
                case "Needs Approval": return "To Do";
                case "Ready for Review": return "In Progress";
                case "Closed": return "Done";
                case "Resolved": return "Done";
                case "Reopened": return "To Do";
                case "In Progress": return "In Progress";
                case "Backlog": return "To Do";
                case "Selected for Development": return "To Do";
                case "Open": return "To Do";
                case "To Do": return "To Do";
                case "DONE": return "Done";
                default: throw new ArgumentException("Could not find state", nameof(state));
            }
        }

        static int ResolvePriority(IssuePriority priority)
        {
            switch (priority.Name)
            {
                case "Low-Minimal business impact": return 4;
                case "Medium-Limited business impact": return 3;
                case "High-Significant business impact": return 2;
                case "Urgent- Critical business impact": return 1;
                default: throw new ArgumentException("Could not find priority", nameof(priority));
            }
        }
    }
}

Real-World DevOps with Octopus, Part 1

[Image: Octopus]

So, like me, you’re thinking of dipping your toes in the new DevOps revolution. You’ve picked an app to start with and spun up an Octopus server. Now what? There are plenty of tutorials about Octopus Deploy that show how to use all of Octopus’s features and how to integrate with TFS Build, but I have yet to find a good tutorial that shows best practices for a real-world setup of an Octopus project. If you have an application that consists of anything more complicated than an Azure WebApp, you’ll need to think a little harder about a consistent strategy for managing and configuring your deployment pipeline. My hope is that this can be one of those guides. As a disclaimer, I am not a DevOps or Octopus expert. I have, however, slogged through the bowels of Octopus trying to get two medium-complexity applications continuously deployed using a Visual Studio Online build and Octopus Deploy. My first foray, though functional, was a disaster to configure and maintain. But I learned a lot in the process. While configuring the second application, I applied what I had previously learned and I am much happier with the result.

This first part of the series will lay some foundational guidance around configuring a deployment project. It may not be groundbreaking, but it is an important step for the future installments. So, without further ado, on with the show…

The Application

My application is hosted completely in Azure and my deployment, obviously, is very Azure-centric. Having said that, it should be trivial to adapt some of this guidance for on-premises or other cloud providers.

My application consists of:

  • SQL Server with multiple databases
  • Key Vault
  • Service Bus
  • Azure Service Fabric containing multiple stateless micro-services and WebAPIs
  • Azure WebApp front end

The Service Fabric micro-services are the heart of the system and they communicate with each other via Service Bus queues.

The WebApp is the front-end portal to the system. It talks to some of the micro-services using their WebAPI endpoints. In hindsight, it would have been easier to host the website as an ASP.NET Core site in the fabric cluster, but unfortunately Core wasn't fully baked yet when we started this project. So, alas, we live with the extra complexity.

Variables

The variable system in Octopus is extremely powerful. The capabilities of variable expansion continue to surprise me. Just when I think I’m going to break it with a harebrained scheme, it effortlessly carries on bending to my will. Good job, Octopus team! But, as my Uncle Ben always says, “with great power comes great responsibility” (sorry).

I’m going to assume you already have a cursory understanding of the variable system in Octopus. If not, please read their great documentation and then come back. All set? Good.

Variable Sets

The first hard lesson I learned was to use variable sets right from the beginning. It is tempting to shove all of your variables into the project itself, and that’s exactly how I started. This is probably fine at first, though hard to manage when your variable count grows large. But you will soon come to a point where one of two things will happen:

  1. Your variable count grows so large that it’s hard to maintain and conceptualize.
  2. You want to split your project in half or add a new project, and you want to share the variables between the related projects.

Personally, I hit the latter. "Well," I thought, "I’ll just move all my variables into a variable set that I can share between my projects." Not so fast, mon frère! You see, there is no UI feature that allows you to move a variable from a project to a variable set, nor from a variable set to another variable set. So, you’re stuck with recreating all of your variables by hand, or using the Octopus REST APIs to copy from one to the other. The latter works fine until you hit sensitive variables. You cannot retrieve sensitive variable values using the UI or the REST API, so you’re stuck with entering them again from the sticky note on your monitor (shame on you!). This is why deciding on a variable set scheme is crucial right up front.

Ok, so we’re all agreed that you should create variable sets right away. But, you ask, should I create just one big one? Well, if you just create one variable set, you’ve solved issue #2, but not #1. Your variable set can still get pretty long and while Octopus does sort the variables by name, it can still be difficult to find the variable you want when the page seems to scroll indefinitely. So, I recommend creating a set of variable sets. While it is a bit more work to set everything up just right, trust me when I say, you will thank me later.

You can use any segregation scheme you wish, but I used these criteria for my variable sets:

  1. Resource Level: These variable sets contain infrastructure-level variables that have no concept of the applications that run on top of them. For instance, a SQL Server variable set may contain the name of the SQL Server instance, the admin login and password, but not any of the application-level database information (especially in my case, where each micro-service uses an isolated database). Another example would be an Active Directory set that contains common things like your TenantId, Tenant Name, etc., but not any AAD application variables that you may want to create. The idea here is that, like all infrastructure, you should be able to configure it once and never change it again.
  2. Application Level: These variable sets contain variables that pertain to a logical application, service or component. You may have only one of these, or multiple, depending on your solution. This is where all the magic happens and where you will spend most of your time tweaking as your application changes. Things like app.config settings, AAD applications, database names and connection strings, etc. live in these sets. You may have variables in these sets that pertain to several different resource types, but that's ok. The point is to group all of your variables pertaining to Component A into a single variable set so you know exactly where to go to change them.
  3. Project Level: Granted, variables in the project itself are not technically a variable set, but it is useful to think of them as such. These variables should be kept to an absolute minimum since they cannot be shared by other projects. They should contain any overrides that you may need, or wrappers around step output variables (more on this in a future post).

Now that you have a handful of variable sets, it’s important to name them appropriately. I used the scheme <ProjectGroup>.<Resource|Component>. Being a C# guy, I like periods instead of spaces, but that may just be me. At the end of the day it doesn't really matter, since to Octopus, set and variable names are just strings. The <ProjectGroup> part is optional if you only have one solution running on your Octopus server, but is crucial as soon as you want to onboard a completely unrelated solution and want to keep any semblance of sanity.
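To make that concrete, a hypothetical set of names under this scheme might look like the following (the real sets from my project appear in the screenshots further down):

  • MyProjectGroup.Database (resource level)
  • MyProjectGroup.ActiveDirectory (resource level)
  • MyProjectGroup.MyApplication (application level)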

In the end, the naming and segregation scheme is completely up to you. The most important thing is that you decide on a scheme and stick to it. It takes much more effort to adapt to a scheme later than to do it up front.

One last convention that I tried to follow is to keep the environment scoping of variables to a minimum within variable sets. This seems like it wouldn’t be a problem, and may not be for your situation, but if you wind up with multiple Octopus projects with different lifecycles sharing a variable set, it can become problematic. For example, if you are naming your websites differently in each environment (say with a –DEV suffix or something), the answer is NOT to create scoped versions of the website name in the set. The answer IS to use expansion (see further down for this). Anything that must be scoped to environments should either utilize clever expansions or be put in the project-level variables. The only exception I make to this rule is for sensitive data that needs to be shared with multiple Octopus projects and must be different for each environment. The SQL admin password is a good example of this. In this case, it is beneficial to store it as a scoped variable in the variable set, but you must remember this if you ever change the lifecycle of a project or add a new project with a different lifecycle.
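To illustrate with hypothetical names: put a single Env.Suffix variable in your project-level variables, scoped to –DEV for your Development environment and left empty for Production, then define the website name once, unscoped, in the shared variable set:

Name Value
WebApp.MyApplication.Name MyPortal#{Env.Suffix}

Each project that includes the set controls its own suffix, and the set itself never needs environment scoping.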

Variable Names

Like variable sets, variables should follow a strict naming scheme. To optimize for the UI sorting, I picked <Resource>.<OptionalSubResource>.<Name>. This helps keep related variables together when viewing the UI. As an example, this is roughly what my variable sets look like for my SQL-related variables:

  • MyProjectGroup.Database variable set: [Screenshot: Variable set for MyProjectGroup.Database]
  • MyProjectGroup.MyApplication variable set: [Screenshot: Variable set for MyProjectGroup.MyApplication]

Variable Expansion

Variable expansion is one of the features that sets Octopus apart from, say, Visual Studio Online Release Management. In VSO, you can do most anything else, but the VSO variable system is absolutely dwarfed by Octopus’s. I’ll assume you understand the basics of variable expansion and dive right into my usage of it. My goal was to have a good balance between adhering to the DRY principle and having enough extension points in the variable system to change things without having to do large overhauls. To that end, I wound up having a decent amount of variables that simply reference another variable. But defining them upfront means that I just need to change a variable value rather than creating a new variable and updating all the places in my code/deployment scripts that use the old variable. Make enough changes in your variables and you’ll begin to see how useful this is.

The typical way to use variable expansions is to build things like connection strings with them. For example, if you have a database connection string, you could build the connection string by hand, stick it in a single variable and mark the whole thing as sensitive (since it has a password). But now you’re stuck if the server or database name changes. Instead, do something like this:

Name Value
SQL.Name MyDatabaseServer
SQL.Database.MyApplication.Name MyApplication
SQL.Database.MyApplication.Password ********
SQL.Database.MyApplication.Username MyApplicationUser
SQL.Database.MyApplication.ConnectionString Server=tcp:#{SQL.Name}.database.windows.net,1433; Initial Catalog=#{SQL.Database.MyApplication.Name}; Persist Security Info=False; User ID=#{SQL.Database.MyApplication.Username}; Password=#{SQL.Database.MyApplication.Password}; MultipleActiveResultSets=False; Encrypt=True; TrustServerCertificate=False; Connection Timeout=30;

The cool thing is that Octopus is smart enough to know that the password fragment is sensitive and will replace it with stars whenever the connection string's expanded value is put in the logs or deployment output. Score 1 for Octopus!

Another use for variable expansion is putting optional environment suffixes (like –DEV) on the names of resources. I’ll get into this in Part 2, but the keen eyed among you may have already spotted it in the screenshots.

Project Setup

Once you get your variable system up and running (I know it took a while), it’s time to create your project. Again, I’ll assume you know or have read about the basics, so I’ll only point out a few nuggets.

Don’t forget to reference all your many variable sets in your project. Also, if you add a new variable set in the future, don’t forget to go into your project and add it there. I know it sounds silly to mention this, but trust me, you’ll forget. Ask me how I know...

One of the questions that I had, and to some extent still have, is whether you should break apart your system into multiple projects or a single large project. I have yet to find a compelling argument either way, except to say that Octopus’s guidance is to have a single project and that the approach of multiple projects is only a holdover from previous versions that couldn’t handle multiple steps in a project. While I somewhat agree with this, it is important to understand the tradeoffs of each approach. For the record, I have tried them both and I would tend towards a single project purely for simplicity’s sake. I will make one caveat: if you’re planning on deploying your infrastructure as part of your pipeline, consider separating that into its own project. I’ll talk more about infrastructure deployment in Part 2.

Single Project

Single projects are generally much easier to manage and maintain. You have to think a little less about making sure your variables are in order and that all the components are on the same version. However, it does mean that you cannot easily rev your components individually. That’s not strictly true, because you can set your project to skip steps where the package hasn’t changed, but it does mean that you can’t easily see at a glance which components have changed from version to version.

Having said that, this is still my preferred configuration since I find it much easier to maintain, especially when you factor in certain common steps, like gathering secret keys that can only be retrieved via PowerShell script (and thus are not part of your variable sets), such as storage account connection strings or AAD application IDs.

Multiple Projects

Having multiple projects does give you a clearer view of which versions of your components are where. It also allows you to move up or down with each component. While upgrading specific pieces of your application can be accomplished by careful management of your packages, it is still difficult to roll back a specific component while leaving the rest alone using a single project. You can accomplish this by creating a new release and customizing which packages to use, but man is that annoying!

The other downside of multiple projects is that it is difficult, if not impossible, to manage the timing of deployments. If Component B needs to be deployed only after Component A, there is no way to do it using multiple projects in an automated fashion. You would have to manually publish them in the right order and wait for any dependencies to finish before moving on to the next component. Since I’m looking for a Continuous Deployment-style pipeline, this is a deal-breaker for me.

In the end, I understand why there is no clear-cut guidance about which approach to use. It really depends on your application. If you have a simple application where all the components are meant to rev together, you should probably pick a single project. If any of your components are designed and expected to rev independently, or you need very fine-grained control over the releases you create, multiple projects might be the right fit.

Build

In my case, I used the VSO build system. All in all, it is pretty straightforward to build and package your solution. There are really only a few places where you have to make changes.

I’m using GitVersion to automatically increment the build number. I’m also having it apply the version number to my AssemblyInfo files so all of my assemblies match versions.

The next step is to extract the version number from GitVersion and put it in a build variable so it can be handed off to Octopus to use as the release version. This is convenient because GitVersion uses SemVer, which Octopus understands. So, all the releases created from my CI build are automatically understood by Octopus to be pre-release. Here is the PowerShell for that task:

[powershell]

$UtcDateTime = (Get-Date).ToUniversalTime()
$FormattedDateTime = (Get-Date -Date $UtcDateTime -Format "yyyyMMdd-HHmmss")
$CI_Version = "$env:GITVERSION_MAJORMINORPATCH-ci-$FormattedDateTime"
Write-Host "CI Version: $CI_Version"
Write-Host ("##vso[task.setvariable variable=CI_Version;]$CI_Version")

[/powershell]

I used OctoPack to package my projects into NuGet packages. To get OctoPack to package your solution, simply add “/p:RunOctoPack=true /p:OctoPackPackageVersion=$(CI_Version) /p:OctoPackPublishPackageToFileShare=$(build.artifactstagingdirectory)\deployment” to the MSBuild arguments of the Visual Studio Build task. Alternatively, you can run a standard Package NuGet task.

Last, but not least, you need to push your packages to your Octopus server. There is an Octopus extension to VSO that does this very nicely. I recommend using that to communicate with the Octopus server. If you don’t have your project set to automatically create a release when a new package is detected, you’ll also need to add a create release task. In that task, I use the same CI_Version variable for the Release Number parameter.
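As an aside, if you ever need to push a package to the Octopus server outside of the build (from a dev box, say), the Octopus CLI can do it. A rough sketch, with placeholder package, server, and API key values:

[powershell]

octo push --package MyApplication.1.2.3-ci-20170101-010101.nupkg --server https://{your octopus server} --apiKey {your API key}

[/powershell]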

The only question to ask when it comes to your build is the same question asked when creating your Octopus project: should you create one build or multiple? I would argue that the answer is probably the same as for the project setup. If you need or want granular control over the packages you create, you’ll have to create multiple builds targeted at each component of your application. Unfortunately, VSO does not have any way to customize your build based on which files actually changed in your changesets, so a new package version of each component will be created for each build, even if nothing in it changed. For most projects this is acceptable. If it is not, the nearest I have come is to create multiple VSO build definitions, one for each component. In the build triggers tab, I added path filters for all of the projects that affect that component. Make sure that you include any dependencies that your component has. The downside of this is that it can be awfully brittle. You have to be careful to add new path filters for any new dependencies that are added to your projects. In the end, I found it not worth the hassle.

Wrap Up

Hopefully this gave you a good foundation for your Octopus deployment. There wasn’t much that was juicy here, and much of it seems tedious and unnecessary, but I guarantee this up-front work will pay off greatly in the future as your application evolves, requiring your build and deploy to evolve with it.

In the rest of the series, I will dig a little deeper into the especially tricky components of my sample application and some of the strange and sometimes hacky things I had to do to get Octopus to play nice with them. This will include: deploying my Azure infrastructure, creating/updating an Azure Active Directory application dynamically and deploying a Service Fabric cluster.