
In my previous post, “GitKraken Git GUI How-To: Add & Remove Files”, we went over how to add and remove (stage and unstage) changes using the GitKraken Git GUI application.

In this post, I’m going to show you how to commit those changes to your repository.

The very first thing you need to do before you can commit is to stage your changes!

Changes are Staged

Once you have your changes staged the way you like, you must supply a commit message. Git, and by extension GitKraken, allows you to provide a “summary” and a “description” for a commit. You can use either one, or both, based on how you like to format your messages. Personally, I generally only use the “description” for my commit messages. Once you have entered a message (which can be as short as a single character, or much more polished), the green “Commit changes to <#> files” button will become enabled -

Message Added, Commit Button Enabled

Clicking the “Commit changes to <#> files” button will take the staged files, commit them to your local repository, and add a line to the commit history graph in the middle of the window -

Changes Committed!

That’s it! You’ve now committed your changes, and the process can start over with the next set of changes you need to make. If you look closely in the last screenshot above, you’ll see that I already have changes that are “unstaged” that I’ll be “staging” and “committing” to make this blog post go live.

One thing to keep in mind (we’ll go over this in the next post in the series) is that these changes are still ONLY AVAILABLE TO YOU. You must “push” them to the remote repository to make them available to others.
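If you're curious what GitKraken is doing behind the scenes, the same stage/commit/push flow looks like this on the command line. This is just a sketch in a throwaway repository; the file name and commit message are examples:

```shell
# Stage, commit, and (eventually) push - the CLI version of the GUI flow above
cd "$(mktemp -d)" && git init -q
git config user.email demo@example.com && git config user.name demo
echo "hello" > post.md
git add post.md                        # stage the change
git commit -m "Publish new blog post"  # commit it to the LOCAL repository only
git log --oneline                      # the new entry on the commit graph
# git push origin master               # this is what shares it with others (needs a remote)
```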

I’ll be posting more “how to” articles for using the GitKraken Git GUI in the near future, as well as accompanying videos for each post (the video for this will be coming next). If you need any help or have any questions, please feel free to reach out directly.

If you’re interested in downloading the GitKraken Git GUI client and taking it for a spin, please do me a favor and use my referral link to get started. No obligations, of course, if you decide to. And, if you don’t want to, we’ll still be friends. :)

This post, “GitKraken Git GUI How-To: Committing Changes”, first appeared on


In my previous post, “GitKraken Git GUI How-To: Cloning a Repository”, we went over how to do just that. Now that we have a repository to work with, we need to make some changes! Maybe that involves changing existing files, or adding new ones. However, just editing or creating files in the repository doesn’t necessarily mean they’ll be committed, pushed (future topics, I promise), and available for other folks to work with.

In this post, I’m going to show you how to add and remove files - or, in git lingo, stage and unstage files.

Let’s get an idea of what it means to “stage” (or “unstage”) your changes in a git repository. There are three primary reasons you might need to “stage” a file:

  1. When you make a change to a “tracked” file (a file that has previously been committed to the repository, for example, a file that you received during the cloning process), it simply exists in a changed state on the file system and git knows it changed. If you were to perform a commit on the repository right now, nothing would actually happen. We have to tell git that the changed file should be committed by “staging” it.
  2. If you add a new file to the repository (that isn’t being ignored by git - we’ll dive into git ignore files soon, too!), it will simply exist on disk, and git will know it’s new, but again if you commit now, nothing would actually happen.
  3. If you delete a file that was previously being tracked, git sees that it’s gone, but the deletion won’t be recorded until it’s staged.

With all three of these types of changes, nothing is ready to commit until we “stage” them. You can see in the following screenshot, all three of these types of changes waiting to be staged -

Waiting to Stage

The yellow pencil icon indicates a change was made to a file. The red dash icon indicates a file was deleted. The green plus icon indicates a new file was created.
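For reference, the same three states can be reproduced on the command line, where git marks them `M` (modified), `D` (deleted), and `??` (new/untracked) - the CLI cousins of the pencil, dash, and plus icons. A sketch in a throwaway repository with made-up file names:

```shell
# Reproduce the three unstaged states from the screenshot
cd "$(mktemp -d)" && git init -q
git config user.email demo@example.com && git config user.name demo
echo a > modified.txt && echo b > deleted.txt
git add . && git commit -qm "baseline"
echo change >> modified.txt   # edited tracked file  -> " M" (pencil icon)
rm deleted.txt                # deleted tracked file -> " D" (dash icon)
echo new > added.txt          # brand-new file       -> "??" (plus icon)
git status --short
```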

I can “stage” all of these changes at once by clicking the green “Stage all changes” button in the area above the “Unstaged Files” list -

Stage All Changes

Clicking this button will move all of the lines shown from “Unstaged Files” to “Staged Files” -

Staged Files

Note that all the icons remain the same in this list, so you can easily tell which type of change was staged for a given entry.

At this point, you would be ready to “commit” these changes to your local repository, if desired. But, what if you decided you weren’t ready, and wanted to unstage the changes? Well, the GitKraken Git GUI gives you a simple button to “Unstage all changes” as shown in the following screenshot -

Unstaging Changes

You’ll see that the list moved back to the top, indicating all the changes are currently unstaged.

With that complete, you now decide you want to stage only a few of the changes. The GitKraken Git GUI makes this easy as well. Simply hover over the entry in the list you want to stage, and a green “Stage File” button will appear at the far right of that line. While still hovering over the line, move your mouse over the button and click it.

Staging a Single File/Change

Alternatively, right click the line and choose, “Stage” from the popup menu -

Staging a Single File/Change

Be careful not to click “Discard changes”, as that will revert your change - i.e., you’ll lose your work!

Doing that for a couple of the items results in the following screenshot -

Some Changes Staged

As you can see, I still have the ability to “Stage all changes” for what remains in the “Unstaged Files” section, and the ability to “Unstage all changes” in the “Staged Files” section. Hovering over an item in the “Staged Files” section gives me a red, “Unstage File”, button, similar to its green counterpart mentioned previously -

Unstage a Single File/Change

Alternatively, right click the line and choose, “Unstage” from the popup menu -

Unstage a Single File/Change

Be careful not to click “Discard changes”, as that will revert your change - i.e., you’ll lose your work!

With the GitKraken Git GUI, you can dive even deeper into staging and unstaging, by staging individual LINES of a file or multiple lines known as “hunks”. Clicking the file in the “Unstaged Files” area will open a view allowing you to see the changes to the file -

Diff View

Once this view opens, you get those options I previously mentioned. The most visible ones are “Discard Hunk” and “Stage Hunk” in the upper right area of the diff view -

Discard and Stage Hunk Buttons

These are pretty straightforward - “Discard Hunk” reverts the changes in the “chunk” of code shown directly below it, while “Stage Hunk” stages JUST that chunk of code. If you’re looking at this view from a file in the “Staged Files” area, you will be presented with an “Unstage Hunk” button instead, which unstages that chunk of code.

Unstage Hunk Button

Depending on how big the file is, or how many changes you made, you may very well see multiple “sections” in this view, allowing you to Discard, Stage, or Unstage multiple “hunks” from a single file. These actions are presented (and useful) only for edited files, since adding or deleting a file is an atomic operation, whereas an edited file can have changes scattered throughout.
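On the command line, the closest equivalent to GitKraken’s hunk buttons is git’s interactive patch mode, `git add -p`, which walks through each hunk and asks whether to stage it. A sketch in a throwaway repository (the file name and edits are made up; answering “y” is the CLI version of clicking “Stage Hunk”):

```shell
# Stage one hunk of a file and skip another, like the "Stage Hunk" button
cd "$(mktemp -d)" && git init -q
git config user.email demo@example.com && git config user.name demo
seq 1 20 > file.txt
git add file.txt && git commit -qm baseline
# Edit the first and last lines - far enough apart to produce two separate hunks
sed -i 's/^1$/one/; s/^20$/twenty/' file.txt
# Answer "y" to the first hunk (stage it) and "n" to the second (leave it):
printf 'y\nn\n' | git add -p file.txt > /dev/null
git diff --cached   # staged:   the "one" hunk
git diff            # unstaged: the "twenty" hunk
```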

The last type of “staging”/”unstaging” is at the LINE level of a changed file. I mentioned this earlier, and although it’s present in some of the last few screenshots, I didn’t want to confuse anyone while covering “hunks”.

Added Lines

As you can see in the previous screenshot, while viewing the diff of a file in “Unstaged Files”, you’ll see the lines added to the file in green. Hovering over one of these lines will reveal a green “+” (plus) indicator in the left margin. Clicking this button will stage just that single line. Where “Stage Hunk” would stage both lines, this allows you to stage individual lines. Of course, if you were hovering over one of these lines for a file in the “Staged Files” area, that green “+” plus icon would be a red “-“ minus icon to unstage that specific line -

Unstage a Single Line


There you have it - all the various ways you can add, or “stage”, files/changes as well as the various ways you can remove, or “unstage”, files/changes.

I’ll be posting more “how to” articles for using the GitKraken Git GUI in the near future, as well as accompanying videos for each post (the video for this will be coming next). If you need any help or have any questions, please feel free to reach out directly.

If you’re interested in downloading the GitKraken Git GUI client and taking it for a spin, please do me a favor and use my referral link to get started. No obligations, of course, if you decide to. And, if you don’t want to, we’ll still be friends :).

This post, “GitKraken Git GUI How-To: Adding Files”, first appeared on


If you’re new to the GitKraken Git GUI or interested in it, one of the first things you’ll want to do after installing it is clone a repository so you can get to work.

There are three ways in the GitKraken Git GUI to “initiate” the cloning of a repository. Each one of these items will lead to the same “Repository Management” popup dialog, with the “Clone” section selected, which I will show you at the end.

Launching the Repository Management Dialog

1. File | Clone Repo

From the File menu, click on Clone Repo. Alternatively, this menu item also comes with a keyboard shortcut of CTRL + N, if you prefer keyboard shortcuts.

File Clone Repo

2. “New Tab” tab

From the “New Tab” page, which can be added (if you don’t already have one) by clicking the + button in the tab bar -

Add New Tab

Once the “New Tab” page is opened, click on “Clone a Repo” from the menu down the left-hand side -

Clone a Repo from the New Tab page

3. Repository Management Icon

This one is a little more subtle, but always available in view if you need it. On the far left of any open tabs (even the “New Tab”), there is a folder icon. Clicking on this icon will launch the “Repository Management” popup.

Launch the Repo Management Popup

Cloning from the Repository Management Dialog

Once you’ve successfully launched the “Repository Management” dialog, make sure you’re on the “Clone” item on the left-hand side -

The Repository Management Dialog

When “Clone” is selected, we are presented with a multitude of providers to clone our repo from.

If all you have is a URL that doesn’t correspond with any of the listed providers, you can still clone it using the “Clone with URL” item at the very top. Simply provide the local folder you want to clone the repository into and the URL to the remote repository.
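Under the hood, this is a plain `git clone` - the URL plus the target folder. A quick sketch (a local path stands in here for the https:// URL, and the folder name is just an example):

```shell
# CLI equivalent of "Clone with URL"
cd "$(mktemp -d)"
git init -q upstream
git -C upstream -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "initial commit"
# "Where to clone to" + repository name = the last argument
git clone -q upstream my-repo
ls -d my-repo/.git    # the clone now exists locally
```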

For example, acquire the URL of your repository from GitHub -

GitHub Repository URL

And paste that value into the GitKraken Git GUI “URL” field -

Cloning a GitHub repository with a URL

With those fields provided, you will be presented with the “Full Path” field. This pre-populates with the “Where to clone to” plus the repository name. You can change the repository name by typing over the value in that field.

Once you’re satisfied, click on, “Clone the repo!” to initiate the clone process. The GitKraken Git GUI will ask you for credentials (if necessary), and then a progress dialog will be shown -

Cloning Progress

Once this process completes, you’ll be asked if you want to open the newly cloned repository -

Open the Clone?

Clicking on “Open Now” will open a new tab in the GitKraken Git GUI to your newly cloned repository -

Newly Opened Repository

You’re ready to work with your repository!

Now, I do want to back up just a little bit to the Repository Management dialog to take a look at another provider -

The Repository Management Dialog

If you’ve authorized the GitKraken Git GUI to interact with one of these other providers (perhaps another post is warranted for that?), you can select which repository you want to clone from a list. For example, I’ve authorized the GitKraken Git GUI to work with one such provider, so I am able to directly select which repository I want to clone -

Clone from

Click the drop down, and select any repository from your account (or organizations, if you’ve allowed the GitKraken Git GUI access to them) -

Remote Listing

Upon selecting a remote repository from the list, you’ll be presented with the “full path” item, so you can change the local folder name the repository is being cloned into, and the “Clone the repo!” button will become active -

Where to clone?

Once you click the “Clone the repo!” button, the same progress dialog will launch, as well as asking whether you would like to open the newly cloned repository, just like the URL version shown previously.

I won’t show you any more providers from the dialog, as they all work basically the same way; the only difference is that you will need to authorize the GitKraken Git GUI access to your accounts in those services. Just note that, although you see multiple providers in my screenshots, some of them require a paid version of the GitKraken Git GUI, so please check out the plan comparison page on the GitKraken website for more details!


I hope this post has shed some light on the various ways you can clone a remote repository using the GitKraken Git GUI. I’ll be posting more “how to” articles for using the GitKraken Git GUI in the near future, as well as accompanying videos for each post. If you need any help or have any questions, please feel free to reach out directly.

If you’re interested in downloading the GitKraken Git GUI client and taking it for a spin, please do me a favor and use my referral link to get started. No obligations, of course, if you decide to. And, if you don’t want to, we’ll still be friends :).

Thanks, dear reader, hope you enjoy unleashing your inner Kraken!

This post, “GitKraken Git GUI How-To: Cloning a Repository”, first appeared on


In a previous post, I discussed how I was able to get a .NET Framework application built using GitHub actions. Go check out that post for the full YAML’y goodness.

In this post, however, I want to explain how I modified that original GitHub Action to take advantage of git tags to automate the release (of that application).

To accomplish this, we’re going to add TWO items to our yaml file:

  1. Run the action when a git tag is pushed (some extra coolness here)
  2. Apply Conditionals to Deployment Steps

Part 1 - Run the Action when a git tag is pushed

Here’s our original YAML for triggering our action:

    on:
      push:
        branches: master

Right beneath push:, but before branches: master, we’re going to add our tag line:

    on:
      push:
        tags: releases/[1-9]+.[0-9]+.[0-9]+
        branches: master

Woah, is that…is that a regex in there?! Why yes it is! Let me explain….

I don’t necessarily want any random tag pushed to the repo to trigger this event, so you have to be pretty specific. First, you need to prefix your tag with releases/, and then it must also conform to the remaining pattern - which enforces a “version number”. (Strictly speaking, GitHub Actions filters use glob-style patterns rather than full regex, but character classes like [0-9]+ behave as you’d expect here.)

Here are a couple example tags -

  • releases/1.2.0 = action RUNS
  • bob/tag123 = action does NOT run
  • v1.2.0 = action does NOT run
  • releases/v1.2.0 = action does NOT run
  • releases/12.5.12 = action RUNS
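Creating and pushing a matching tag from the command line might look like this. This is sketched against a throwaway local bare repo standing in for the hosted remote, and the version number is just an example:

```shell
# Create and push a tag that matches the releases/<major>.<minor>.<patch> filter
cd "$(mktemp -d)"
git init -q --bare upstream.git            # stand-in for the hosted remote
git clone -q upstream.git work && cd work
git -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "release prep"
git tag releases/1.2.0                     # matches the pattern above
git push -q origin releases/1.2.0          # THIS push is what triggers the action
git ls-remote --tags origin                # the tag now exists on the remote
```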

Alright. Given that we push the “correct” tag, we’ll trigger the action. How do we take that and actually deploy the application? ONWARD! (that’s a good movie, btw)…

Part 2 - Apply Conditionals to Deployment Steps

In our original action, we were already logging into Azure and deploying our application. For reference, that looks like this:

- name: Login to Azure
  uses: azure/login@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }} # your secret name may differ

- name: Publish Artifacts to Azure
  uses: Azure/webapps-deploy@v2
  with:
    app-name: ezrep
    package: "./_build"
    slot-name: production

The problem is, as listed, these steps will ALWAYS run, and I only want them to when I’ve pushed a tag that (successfully) triggers the action. How do we do that?

We use a conditional on the two steps, and a built-in function from GitHub -

- name: Login to Azure
  if: startsWith( github.ref, 'refs/tags/releases/')
  uses: azure/login@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }} # your secret name may differ

- name: Publish Artifacts to Azure
  if: startsWith( github.ref, 'refs/tags/releases/')
  uses: Azure/webapps-deploy@v2
  with:
    app-name: ezrep
    package: "./_build"
    slot-name: production

Breaking this down a bit, you’ll notice we added the if line to both steps. Within that, we utilize the startsWith function to see if the github.ref that triggered the build starts with refs/tags/releases/. If that’s true, run the step. Now, github.ref is part of the data that we have access to during an action, and refs/tags/releases/ is a hard-coded string.

Why does this work? Well, our build will only get triggered if we push a new git tag that follows our standard at the top of the action, so by the time we get to this step, we’ve either:

  • pushed to master, but that “ref” would be refs/heads/master
  • created a pull request against master (ref doesn’t match)
  • OR, pushed a tag (releases/1.2.5), which would have a “ref” of refs/tags/releases/1.2.5 and THAT matches our “starts with” conditional

To recap, if we push to master, we’ll get a build, but no deployment. If we create a pull request to master, we’ll get a build of the PR, but no deployment. If we push a non-standard tag, we get nothing. Finally, if we push the “correct” tag, we’ll get a build AND a deployment to Azure.
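The same “starts with” decision can be sketched in shell for the scenarios above (the ref values are illustrative):

```shell
# Mimic the startsWith(github.ref, 'refs/tags/releases/') conditional
should_deploy() {
  case "$1" in
    refs/tags/releases/*) echo deploy ;;  # matching tag push -> deploy
    *)                    echo skip   ;;  # anything else     -> build only
  esac
}
should_deploy refs/heads/master           # prints: skip
should_deploy refs/tags/releases/1.2.5    # prints: deploy
```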

I’ll be honest, it took me a lot longer to piece this together than I care to admit (but I’m admitting it anyway). The documentation, quite honestly, left a bit to be desired around how to utilize these things together, so I have about 40 failed builds from various attempts before getting this right.

I think there will be one more post, at some point, about parsing that version number from the tag name, and automatically applying that to all the assemblies as the actual version of the software. Right now, this application isn’t “versioned”, and it should be. I’m still trying to piece together the right steps, since it’s a .NET Framework application.

Thanks again, dear reader. I hope this is useful!

*If you need a full yaml reference, please check out this gist

This post, “Git Tag Based Released Process Using GitHub Actions”, first appeared on


One thing I’ve learned over the years is that being clever with your code is a waste of time and energy. The simpler, the better. Part of being “simpler”, to me, falls into the paradigm of “clean code”. But - what does “clean code” actually mean? In this post, we’ll look at what I consider to be a “clean(er)” conditional statement that reduces cognitive complexity/overhead.

For example, consider a “simple” authorization check (contrived, of course):

if(_authorizationService.HasClaim(Claims.Admin) || (_authorizationService.HasClaim(Claims.User) && _authorizationService.HasClaim(Claims.ModifyTimesheet))){
    // do something
}

That if statement is getting kinda hairy, huh? Take into consideration new folks joining your team trying to make heads or tails of that, too.

Yes, within a few seconds we glean that if you’re an Admin, or a User that also has the ModifyTimesheet permission, you should be allowed to //do something - but what if we just gave those “things” actual names?

Consider this refactor:

bool isAdmin = _authorizationService.HasClaim(Claims.Admin);
bool userHasPermission = _authorizationService.HasClaim(Claims.User) && _authorizationService.HasClaim(Claims.ModifyTimesheet);

if(isAdmin || userHasPermission){
    // do something
}

You can see we’ve introduced a couple of variables with very explicit names and swapped them into the if statement. Now when you scan that code and come across the if statement, you don’t have to read into the logic to understand the condition that needs to be met. If you do care about what those two things are, you can easily scan up to the variable declarations and “dig in” a little more.

Happy clean coding, dear reader!

This post, “Clean Coding in C# - Part I”, first appeared on


In this post, I’m going to show you how I finally managed to configure a Github action to build my .NET Framework web application and then deploy it to Azure. It took way too long, so I hope this helps somebody else out there save some time.

To be honest, I didn’t know how to get started. I couldn’t find an action template to do this, like you can for .NET Core. Luckily, I put out a tweet and got a response:

As soon as he said use “windows-latest”…“no need to install .NET Framework, it’s already there” (paraphrasing), it clicked.

Okay, fantastic, but what steps will we ultimately need to get this thing built and subsequently deployed? That part took a little longer, unfortunately.

Let’s start with the “basics” of the action -

name: EZRep Build

on:
  push:
    branches: master

jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v2

We’re calling this the “EZRep Build”, run when we push to master, use the latest Windows image/runner, and checkout the repository. Great, so we have our code checked out, now what do we do?

Since this is a .NET Framework application (that still uses packages.config, I might add), I needed two more steps to get going -

- name: Setup MSBuild
  uses: microsoft/setup-msbuild@v1

- name: Setup NuGet
  uses: NuGet/setup-nuget@v1.0.2

These steps get MSBuild and NuGet set up and added to the PATH variable (since we’re on Windows).

This next part is where I struggled a bit, trying to get the various steps to use environment variables, so there may very well be a better way, but I’ll show ya anyway -

- name: Navigate to Workspace
  run: cd $GITHUB_WORKSPACE

- name: Create Build Directory
  run: mkdir _build

Every time I tried to call MSBuild (which I’ll show in a second), I was never in the right working directory. I tried calling it with $GITHUB_WORKSPACE/EZRep.sln (my solution file), but it never worked. Finally, after quite a few attempts, I just added a step to change the directory, and this solved all my problems.

My MSBuild step actually creates a package for deploying to Azure, and I later learned that it wouldn’t automatically create the directory I wanted to use, so that’s why there is the mkdir step in there. You may or may not need that at all, depending on how you package and deploy your code.

It’s finally time to actually build the solution -

- name: Restore Packages
  run: nuget restore EzRep.sln

- name: Build Solution
  run: |
    msbuild.exe EzRep.sln /nologo /nr:false /p:DeployOnBuild=true /p:DeployDefaultTarget=WebPublish /p:WebPublishMethod=FileSystem /p:DeleteExistingFiles=True /p:platform="Any CPU" /p:configuration="Release" /p:PublishUrl="../_build"

A little long-winded there, but the first thing is to restore the packages. I’ve got a weird setup right now that I didn’t even realize until all of this, that I need to go back and research / fix. It seems my solution is partway migrated from the old packages.config format to the newer MSBuild PackageReference format, but not entirely. You may not need this step, specifically, but you might need the -t:restore flag for your MSBuild step. You’ll notice we’re using that _build directory we created earlier for our PublishUrl, except it’s back one directory from the default location, hence the double-dot relative directory path - ../_build

Here we are, the actual build step. There are a lot of parameters/flags going on in there, but most of those are because of the packaging routine. You could easily get by with a simpler version, such as this -

- name: Build Solution
  run: |
    msbuild.exe EzRep.sln /p:platform="Any CPU" /p:configuration="Release"

Substitute your actual solution filename in there, of course :)

I also chose to upload the artifacts before deploying them, so they would exist alongside the build in Github -

- name: Upload artifact
  uses: actions/upload-artifact@v1.0.0
  with:
    name: EZRepBundle
    path: "./_build"

Again, using that _build directory (we’re back to our default working directory from our cd $GITHUB_WORKSPACE step earlier in the file). Give it a name. I chose EZRepBundle, but you can call this whatever you like / makes the most sense for your application. Now this step just stores those artifacts for us. It doesn’t do anything else, so we still need to deploy our application to Azure.

That looks like this -

- name: Login to Azure
  uses: azure/login@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }} # your secret name may differ

- name: Publish Artifacts to Azure
  uses: Azure/webapps-deploy@v2
  with:
    app-name: ezrep
    package: "./_build"
    slot-name: production

First you’ll notice the “Login to Azure” step. There is a little bit of setup you have to do before this will work, that requires using the Azure CLI to create the necessary credentials, which you then store in the secrets area of the project so Github can access them when logging in. Check out this post to learn more about HOW to do that. If you’re using Azure, and comfortable at the command line, you should have no problem here. If you do run into issues, ping me on Twitter, I’d be glad to help.

Now that we’re “logged in” to Azure, we can publish our _build package we created earlier. Give it the Azure WebApp name you want to deploy to, the local directory to find the package (_build for us), and the slot to deploy to. The slot is optional and defaults to ‘production’ anyway, but I like having it there as a reminder.

Hopefully, with any luck, you’ll have this thing working on the first try - unlike my 40-50 failed attempts :).

I am going to call this post, “version 1”, because I am also working on a versioning and release process using a few more steps, git tags, and step conditionals (You can have an if statement on a step in Github Actions!)

Since my complete file is in a private repo, you can get the full v1 file in this public gist

Thanks for reading!

This post, “Building .NET Framework Applications with Github Actions”, first appeared on


I’ve let them linger for too long. It’s time to figure out a way to delete the three service connections in my Azure DevOps project that don’t work, and can’t be deleted through the UI. There has to be a way!

There is. It involves the AzureDevOps API, a Personal Access Token (PAT), Postman (or curl), and some patience.

Before we dive in too far, go generate an AzureDevOps PAT under your account. I generated mine with full permissions with a 1 day expiration. Theoretically, you could probably get by with just giving it “Read, Query, & Manage” for the Service Connections scope.

Whatever tool you choose to make the API requests is up to you, but I prefer Postman these days. You’ll need to authenticate to the APIs with Basic Authentication, base64-encoding a string composed of a blank username, a colon, and the PAT you generated earlier. It will look something like this in C# -

var apiToken = Convert.ToBase64String(System.Text.Encoding.UTF8.GetBytes($":{PAT}"));
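The same encoding can be done with standard shell tools (the PAT value here is a placeholder):

```shell
# Build the Basic auth value: base64 of ":<PAT>" (blank username, colon, token)
PAT="your-pat-here"
AUTH=$(printf ':%s' "$PAT" | base64)
echo "Authorization: Basic $AUTH"
```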

Alternatively, using Postman, you would use “Basic Auth”, and put your PAT in the password field:

Postman Basic Authorization

The first API call you want to make will be to get the list of Service Endpoints:

GET https://dev.azure.com/{organization}/{project}/_apis/serviceendpoint/endpoints?includeFailed=true
Replace your organization and project names as appropriate. Note that we are explicitly saying that we also want endpoints that are in a FAILED state. This was the only way I could get the three I wanted to delete. Otherwise, all I received was the single endpoint that was working fine.
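Sketched with curl, the list call might look like this - the organization, project, and api-version values are placeholders you’d substitute with your own:

```shell
# Build and show the "list service endpoints" URL (values are placeholders)
ORG="myorg"; PROJECT="myproject"; PAT="your-pat-here"
URL="https://dev.azure.com/$ORG/$PROJECT/_apis/serviceendpoint/endpoints?includeFailed=true&api-version=6.0-preview.4"
echo "$URL"
# curl -s -u ":$PAT" "$URL"   # -u ":$PAT" builds the Basic auth header for you
```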

You’ll end up with a giant response body that includes all your service endpoints. Find the “id”s for the ones you need to delete, and copy them out somewhere. You’ll need them for the next COUPLE of API calls….

Next you’ll want to retrieve the specific details for the endpoints (one at a time, of course):

GET https://dev.azure.com/{organization}/{project}/_apis/serviceendpoint/endpoints/{endpointId}
Again, replace the organization and project names as appropriate, but now you’ll also need to replace the endpointId within the URL.

This will give you another giant response body that includes all the details for that specific endpoint.

Using that same URL, we need to switch the verb to a PUT -

PUT https://dev.azure.com/{organization}/{project}/_apis/serviceendpoint/endpoints/{endpointId}
(replace tokens as needed)

The response body you received from the previous GET will now be the template for the body you need to send back (with some changes).

Inside of the body, find the creationMode field and change the value from Automatic to Manual.

At this point, if you send the PUT, you will likely receive errors that some fields should be omitted from the Body. Go ahead and remove whichever ones the error gives you, until it processes successfully. I had to remove azureSpnRoleAssignmentId, spnObjectId, and appObjectId from all of mine, but you may receive others.

Assuming you get a 200 OK response from this call, navigate to the UI that lists your service connections in Azure DevOps ({organization}/{project}/_settings/adminservices). Alternatively, navigate to the “Project Settings” in AzureDevOps for the project in question, and click “Service Connections” in the left-hand navigation menu.

In the list of service connections, click the one that corresponds to the service connection you just modified through the API. Then, click the kebab menu in the upper right, next to the ‘Edit’ button:

Location of Kebab Menu

From the menu that drops down, click delete:

Delete Item

And then confirm the deletion:

Confirm Delete

With any luck, the bad connection should disappear from the UI! Now, you just need to go back and perform the necessary API calls and deletions again for every bad connection you may have.

Luckily, I only had three, which took about 10 minutes (it took darn near an hour to figure out all the necessary steps).

I hope this helps somebody out there until we’re able to delete these bad service connections from the UI without hassle. Until then, good luck, dear reader!

This post, “Deleting Failed AzureDevOps Service Connections”, first appeared on


Part II of II

See Part I here

In my previous post, I went through how we set up versioning for our assemblies during local development, and also during our Azure DevOps pipelines. In this post, I want to share how we did “tagging” - both to the repository, and to the docker containers.

I’ll start out by telling you that adding tags to our repository proved WAY TOO noisy, and was abandoned. I’ll still share what I tried, though.

As mentioned in the previous post, we have three different pipelines for Azure DevOps - one for each potential Docker container. And, remember, based on our configuration, a given commit could cause no containers (maybe just a README change), or up to all three (shared library change).

Going in, I thought it would be useful to know, from the repository, where a given container was built. I started by having each pipeline create a git tag at the commit/SHA that triggered the pipeline, formatted with the container name and the corresponding $(Build.BuildId).

After a handful of commits to master over the course of a day, we were up to around 10 tags. It was apparent before the end of that first day that this was going to be WAY too noisy to keep going, and we shut it off.

That’s all I’m gonna say about that. It was a bad idea - in our scenario. It may work better for you if you don’t have as many containers, or you do it less often.

Okay, now what? Twitter comes through again!

Kelly Andrews comes through with his preferred method:

That makes sense. Instead of tagging the git SHA with the build id, tag the container with the commit SHA. Sounds great, let’s give it a go.

First, how do I get the commit SHA that triggered the pipeline? I figured there could be enough of a delay that if I queried for the SHA from HEAD, I could end up with a newer SHA than the one that actually triggered the pipeline. Off to the AzureDevOps docs!

After some searching through the Predefined Build Variables section of the documentation, I found what I was looking for:


The latest version control change of the triggering repo that is included in this build.

  • Git: The commit ID.
  • TFVC: the changeset.

This variable is agent-scoped, and can be used as an environment variable in a script and as a parameter in a build task, but not as part of the build number or as a version control tag.

Perfect! Since we’re using git (who isn’t these days?), all I needed to do was alter our pipeline to tag our container with Build.SourceVersion, and then, since we’re also using ECR, push that tag to our (private) registry.

First, our Docker task was already tagging our container with latest and Build.BuildId, as shown below -

- task: Docker@2
  displayName: Build an image
  inputs:
    # more stuff here, omitted for brevity
    tags: |
      latest
      $(Build.BuildId)

So, a simple modification here, just add in Build.SourceVersion:

- task: Docker@2
  displayName: Build an image
  inputs:
    # more stuff here, omitted for brevity
    tags: |
      latest
      $(Build.BuildId)
      $(Build.SourceVersion)   # <-- THIS PART

With that part completed, we just need to push the $(Build.SourceVersion) as a tag to the image in ECR using the ECR Push Image task (AFTER the image has been pushed, of course). We were already pushing the $(Build.BuildId) separately, and latest goes by default on the initial push. With that said, we ended up with this -

- task: ECRPushImage@1
  inputs:
    # more stuff here, omitted for brevity
    pushTag: $(Build.SourceVersion)

Every time this pipeline pushes a new container to ECR, it will be tagged with the commit SHA that triggered it! Now, it’s the “long form” SHA, not the shortened version, so it’s a little noisy when looking at the image list in ECR, but it’s much better than what we started with. And, how often do you look at the commit history in your repo compared to how often you review the list of images in your container registry?
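If the long form bothers you, one workaround (an assumption on my part - we didn’t actually do this) is to shorten the SHA in a script step and tag with that instead. The core of it is just a cut:

```shell
# Derive a 7-character "short" SHA from the full commit SHA.
# In a pipeline, $(Build.SourceVersion) would replace this
# hard-coded example value.
FULL_SHA="3f2a1b4c5d6e7f8091a2b3c4d5e6f70123456789"
SHORT_SHA=$(echo "$FULL_SHA" | cut -c1-7)
echo "$SHORT_SHA"
```

In Azure DevOps you’d then surface it with a logging command, e.g. `echo "##vso[task.setvariable variable=ShortSha]$SHORT_SHA"`, and use `$(ShortSha)` as the push tag.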

The result ends up looking like this:

ECR Image List

The (currently) three-digit number is the $(Build.BuildId) and the gnarly string is the $(Build.SourceVersion) from Azure DevOps. You’ll see a few others in there, but those indicate which environment the container is running in - quasi-unrelated to this post :).

Hope you enjoyed tagging along in my adventure…get it…tagging?

Until next time, dear reader!

This post, “Docker Containers and my Adventures in Versioning and Tagging - Part II”, first appeared on

comments csharp, dotnet, devops, docker edit

Part I of II

For a while, I’ve been running a little blind on answering the question, “is that fix in production?”. I could roughly gauge that it was or wasn’t by digging into the build and deployment logs, and backtracking into commit SHAs. Gotta be honest, it was painful and sucked. I got tired of doing that, so I set out on an adventure to answer that question as quickly as possible. My problem was, I don’t have extensive experience with Docker containers, and that added a layer of complexity to my situation, which I’ll explain as we dig in.

I’m going to go through two different things I did to help myself, starting with versioning.

When I first started this new job, our website code (.NET Core MVC, 2x .NET Core APIs) didn’t even have a versioning scheme, and I don’t know about you, but I like having versions that I can reason about.

Now, all three of those projects were built and deployed using docker containers, and eventually we had all of that going through Azure DevOps (so that’s what I’ll use to explain, though the concepts should apply anywhere).

First - how to version the assemblies into something that makes sense? I had never used one, but had heard that I could likely accomplish this with a Directory.Build.props file that would reside next to my solution file. I ultimately ended up with something that looks like this:

    <SLCBuild Condition="$(BuildId.Contains('#'))">1</SLCBuild>
    <SLCBuild Condition="!$(BuildId.Contains('#'))">$(BuildId)</SLCBuild>

Let’s talk about each piece, because it took me multiple tries to get this figured out and working - both locally, and in Azure DevOps.

In the first PropertyGroup -


I’m declaring my BuildId variable (MSBuild is involved, which allows us to do this). The first thing you’ll notice is that weird string in there - #{Build.BuildId}#. That is a token that will get replaced during my Azure DevOps pipeline with the environment variable of the same name.

In the second PropertyGroup -

  <SLCBuild Condition="$(BuildId.Contains('#'))">1</SLCBuild>
  <SLCBuild Condition="!$(BuildId.Contains('#'))">$(BuildId)</SLCBuild>

I’m declaring two more variables - SLCVersion and SLCBuild (which is duplicated because of the conditions). When I actually want to increment the version, I manually change SLCVersion. SLCBuild is set to 1 if the BuildId variable still has the tokens in it, which indicates the application is being built locally. If the tokens are gone, we’re in the midst of a pipeline build, so we use that number instead.

Finally, the last PropertyGroup -


Sets the AssemblyVersion and FileVersion for all the assemblies in the solution (about 10 or so). And, again, that part works because we are doing this in a Directory.Build.props which resides next to our solution file. Check out the docs for more info on this part.
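Putting the three PropertyGroups together, the whole file looked roughly like this - the SLCVersion value shown is illustrative, and the layout is reconstructed from the snippets and descriptions above, so treat it as a sketch rather than our exact file:

```xml
<Project>
  <PropertyGroup>
    <!-- Token replaced with the real build id by the pipeline -->
    <BuildId>#{Build.BuildId}#</BuildId>
  </PropertyGroup>
  <PropertyGroup>
    <!-- Bumped manually when we want a new version -->
    <SLCVersion>1.0.0</SLCVersion>
    <!-- Token still present: local build, use 1 -->
    <SLCBuild Condition="$(BuildId.Contains('#'))">1</SLCBuild>
    <!-- Token replaced: pipeline build, use the build id -->
    <SLCBuild Condition="!$(BuildId.Contains('#'))">$(BuildId)</SLCBuild>
  </PropertyGroup>
  <PropertyGroup>
    <AssemblyVersion>$(SLCVersion).$(SLCBuild)</AssemblyVersion>
    <FileVersion>$(SLCVersion).$(SLCBuild)</FileVersion>
  </PropertyGroup>
</Project>
```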

That file, along with my Azure DevOps pipeline files (we have three - one for each container), is checked into the repository. Inside each container pipeline, I used the “Replace Tokens” extension/task from the marketplace to push the $(Build.BuildId) into the token we saw previously.

- task: replacetokens@3
  displayName: 'Replacing Tokens in Directory.Build.props...'
  inputs:
    targetFiles: '**/Directory.Build.props'
    encoding: 'auto'
    writeBOM: true
    actionOnMissing: 'warn'
    keepToken: false
    tokenPrefix: '#{'
    tokenSuffix: '}#'

Finally, in my application code, I can pull the version information and display it, using code like:

var version = typeof(BaseController).Assembly.GetName().Version;

For the UI, it’s available on the login screen, and the APIs return it via JSON from our HealthCheck endpoints.
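A HealthCheck endpoint along these lines (the endpoint shape and names here are assumptions for illustration, not our exact code) is all it takes:

```csharp
// Hypothetical minimal health endpoint - a sketch, not the app's real controller
[HttpGet("health")]
public IActionResult Health()
{
    var version = typeof(BaseController).Assembly.GetName().Version?.ToString();
    return Ok(new { status = "ok", version });
}
```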

One downside to all of this - each container COULD have the same exact code, same exact major.minor.revision, but a completely different BuildId. I am (currently) okay with that trade-off, since it means I have something to reference instead of nothing.

This got pretty long winded, but that’s how I get all the assemblies versioned, while the containers are being built (via docker files) in our AzureDevOps pipelines.

In the next post, we’ll talk about Git Tags and Docker Container Tags, which make up the other half of this endeavor.

Until next time, dear reader!

This post, “Docker Containers and my Adventures in Tagging”, first appeared on

comments devops, macos, testing edit

This post, “Running Your Test Suite in Azure DevOps On A Mac Build Agent And Publishing The Results”, first appeared on

I’ve been working on a Xamarin.Forms application for the last few months, and finally got to the point where I needed our CI system (in this case, Azure DevOps), to run our test suite and publish the results. Sounds pretty easy, huh? Well, maybe for some, but I struggled with it for a few hours because I needed to use a Mac Build Agent for the pipeline.

Hopefully, dear reader, this post will help you out, in the case you run into a similar issue.

Let’s start at the “beginning” -

In my Xamarin.Forms solution (using Visual Studio for Mac), I created a .NET Standard 2.1 class library project to house my xUnit tests. However, one of the first requirements for xUnit is that we change that project from a netstandard2.1 project to a netcoreapp3.1 project by editing the project file manually. You can read more about that requirement here if you’re interested.
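The manual edit amounts to swapping the TargetFramework in the test project’s .csproj, roughly:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- was: <TargetFramework>netstandard2.1</TargetFramework> -->
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>
</Project>
```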

After the project was created (and edited) successfully, we can add the xUnit packages from NuGet - xunit and xunit.runner.visualstudio.

Let’s write some tests! Refer to this xUnit documentation page for ‘getting started’ writing xUnit tests. While the page mostly refers to the command line, I was doing most of this from Visual Studio for Mac.
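As a sanity check, a trivial [Fact] is enough to prove the plumbing works (this example is illustrative, not from the app’s real suite):

```csharp
using Xunit;

namespace MyApp.Tests
{
    public class SanityTests
    {
        [Fact]
        public void Addition_Works()
        {
            Assert.Equal(4, 2 + 2);
        }
    }
}
```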

Now that we have some tests in our project, let’s get our pipeline / macOS build agent to build, execute, and publish the test results. If we had been building on a Windows build agent, we would be able to follow “most” instructions found online to use the “.NET Core” task (with the ‘test’ command and ticking the ‘Publish test results and code coverage’ checkbox). But we aren’t. So what do we do? Well, it requires a few tasks to make this happen.

First, since I do want to collect test coverage metrics alongside our test runs, we need to go back to our project and install one more NuGet package, coverlet.collector.

Okay, with that out of the way, let’s dig right into the pipeline tasks you’ll need to make this work -

Use the .NET Core task to build your test projects. Note that you may already be building your test projects. I was not, because the Xamarin.Forms iOS project was being built with the “Xamarin.iOS” task.

Yours will probably end up looking something like this:

- task: DotNetCoreCLI@2
  displayName: 'Build Test Projects...'
  inputs:
    command: 'build'
    projects: '**/*.Tests.csproj'

This next task, which installs a global tool, could technically come first. As I mentioned, I struggled getting this to work at all, so once I had a sequence of tasks that worked, I left it alone :). This task just uses the “Command Line” task, and installs a global tool for generating our code coverage reports (we’ll install it here, but use it later):

- task: CmdLine@2
  displayName: 'Install Global Tools...'
  inputs:
    script: |
      dotnet tool install -g dotnet-reportgenerator-globaltool
  continueOnError: true

Okay, now let’s run our tests. This step again uses the “.NET Core” task, with the ‘test’ command. Note that we are telling it to publish test results, but we’re also adding a custom argument, which is to collect the code coverage metrics (which is important for our NEXT step):

- task: DotNetCoreCLI@2
  displayName: 'Execute Tests...'
  inputs:
    command: 'test'
    projects: '**/*.Tests.csproj'
    publishTestResults: true
    arguments: '--collect:"XPlat Code Coverage"'

Tests are run and the results published. Now, we need to take those code coverage results and do something with them. This took some time to figure out, because it’s not apparent WHERE those metrics get published. I finally found them in the temp directory of the agent, which you’ll see referenced in the next step’s YAML. Again, we’re using the “Command Line” task to execute the global report generator we installed previously, passing along the location of the results file created in the previous step, the location where we want the corresponding output stored, and the report types:

- task: CmdLine@2
  displayName: 'Execute Code Coverage ReportGenerator...'
  inputs:
    script: |
      reportgenerator "-reports:$(Agent.TempDirectory)/**/coverage.cobertura.xml" "-targetdir:$(System.DefaultWorkingDirectory)/coveragereport" "-reporttypes:HtmlInline_AzurePipelines_Dark;Cobertura;Badges"
  continueOnError: true

Shew, okay, so we have the results created into that targetdir in the previous step’s YAML, now how do we “publish” it to Azure DevOps?

We can use the “Publish Code Coverage Results” task for that, with some basic settings - the code coverage tool, “cobertura”, and where that xml file is.

- task: PublishCodeCoverageResults@1
  displayName: 'Publish Code Coverage Results...'
  inputs:
    codeCoverageTool: 'cobertura'
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/coveragereport/Cobertura.xml'

I’m not really digging into “Cobertura” here, as I don’t know much about it. I piecemealed these tasks together from various sources, and that was a recurring tool mentioned, so I used it. It works out great, so I have no reason to worry about it for now.

With any luck and a single build (unlike my 30-ish failed builds to get this working), you will “get” two new tabs on your Azure DevOps pipeline build results page, alongside the default “Summary” tab: “Tests” and “Code Coverage”. You’ll note, in the following screenshot, that I have “Releases” as well, which is because this build actually went through the Release pipeline. You may, or may not, also have that tab on any given build result page.

Build Results Page and Tabs

Flipping over to the “Tests” tab, you should see the results of the specific run of tests.

Test Results

And, finally, the “Code Coverage” tab, which is an embedded HTML report (it doesn’t quite follow your Azure DevOps theme), followed by coverage results for each file in your projects, with rollups for the namespaces:

Code Coverage Results

And, sure, we could sit here and argue that I don’t have enough tests, or enough test coverage - but that’s really not the point of this article is it? Now that I know my numbers, I can monitor them, and increase them.

Thanks for reading, hope this helps somebody out!