| wsl, docker |

Docker Desktop just announced/released their new subscription model, and it hasn’t sat well with many folks. The good news is there are ways around it, even on Windows.

To get started, I’m running Windows 10 and have WSL2 installed running Ubuntu. Even more specifically:

Windows Version Information

WSL2 Version Information

Ubuntu Version Information

This likely works across multiple versions of each of these items, but I just wanted you to know my exact setup up front :).

Okay, so how do we get this working? Here we go.

First, open an instance of WSL2, because we need to type a number of commands.

If you’ve ever had Docker installed inside of WSL2 before - and it’s now potentially an “old” version - remove it:

sudo apt-get remove docker docker-engine docker.io containerd runc

Now, let’s update apt so we can get the current goodies:

sudo apt-get update

sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release

Once that’s finished, let’s add the official GPG key for Docker:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Now, let’s add the stable repository to apt:

echo \
     "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
     $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Now we can actually install Docker! Run the following commands:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

Docker is now installed! Yay! And, I’m dumb, so I thought that was all - I navigated to my source code directory and ran

docker-compose up

The error messages following that made me realize that I still need to install docker-compose, so here we go!

Since we’ve got everything updated and looking good, this part is just a single command:

sudo apt-get install docker-compose

Now, some caveats:

Docker isn’t always running by default when you launch WSL2. You can likely add it as a service to autostart, but I haven’t done that (yet). For now I just run the following command with every new session:

sudo service docker start
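If you do want it starting automatically, one lightweight approach - a sketch I haven’t adopted myself, so adapt as needed - is to guard the start command in your ~/.bashrc so each new session only starts the daemon if it isn’t already running:

```shell
# ~/.bashrc (WSL2): start the Docker daemon if it isn't running yet
if ! service docker status > /dev/null 2>&1; then
    sudo service docker start
fi
```

Note that sudo will prompt for your password on each new session unless you also add a passwordless sudoers rule for `service docker start`.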

The version in my docker-compose.yml file was higher (because of Docker Desktop) than what is allowed with the current version of docker-compose within WSL2. I wasn’t using anything special, so I was able to simply “downgrade” the version in the compose file without issue.
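For example, that “downgrade” is just the top-level version key in docker-compose.yml - the exact numbers below are illustrative, so check `docker-compose version` to see what your installed version actually supports:

```yaml
# docker-compose.yml - was "3.8" under Docker Desktop (illustrative numbers)
version: "3.3"
```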

My containers are coming from our private registry in AWS ECR. Because of that, I also had to install the AWS CLI tools and get that authentication working before docker-compose would actually start up, given my compose file. If you’re interested, those commands are:

sudo apt-get install awscli
aws configure #answer the prompts to setup your profile
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin <aws_account_number>.dkr.ecr.us-east-2.amazonaws.com

Once I did that, everything was good to go, and I was able to uninstall Docker Desktop.

Good luck, readers!

This post, “Installing Docker, and Docker-Compose, in WSL2/Ubuntu on Windows”, first appeared on https://www.codingwithcalvin.net/installing-docker-and-docker-compose-in-wsl2ubuntu-on-windows

| postgres, signalr, dotnet |

In one of my web applications at work, we provide a (Google) map and then set markers at various GPS coordinates.

Those GPS coordinates are obtained through third-party vendor APIs on a schedule, and the results are stored in our database. Since the webpage that shows this map and markers can be open for an extended period of time, it’s possible that we will receive new GPS coordinates that never get presented on the page unless the user refreshes.

Naturally, it was only a matter of time before the question came in - “can we automatically update those map pins when we get new data”?

By combining some features of Postgres, background workers, and SignalR, we were able to accomplish the request. I won’t go into excruciating detail, instead let’s consider this the “thirty-thousand foot view”.

First, I created a new .NET 5 web project to host the SignalR bits. I needed to do this because our web project was still running .NET Core 2.1, and SignalR wasn’t compatible with that version. This new web project is, more or less, a bare bones MVC application. In our Startup.cs class, we map our SignalR Hubs as usual / per documentation.

app.UseEndpoints(endpoints =>
{
    // map each hub class to the endpoint the front-end connects to
    endpoints.MapHub<YourHub>("/endpoint");
});

Each hub handles registration from the client and adds the connection to groups based on the data that person is allowed to access. That’s all the hub does.

Now that we have our project and our hub(s), we need to be able to send new data to the clients that have been added to those groups. We did this by taking advantage of ASP.NET Core Hosted Services, and listening to specific channels from the database for updates.

We can open a connection to the database and listen to a channel like so -

protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    _connection = new NpgsqlConnection(_configuration.ConnectionString);
    _connection.Notification += ConnectionOnNotification;

    await _connection.OpenAsync(stoppingToken);

    using var command = new NpgsqlCommand("LISTEN <channel_name>", _connection);
    await command.ExecuteNonQueryAsync(stoppingToken);

    while (!stoppingToken.IsCancellationRequested)
    {
        await _connection.WaitAsync(stoppingToken);
    }
}

Postgres has functionality built in for notify (to generate a notification on a channel) and listen (to receive a notification from a channel). We wrapped the notify functionality behind a trigger and procedure, so that when a new GPS entry is recorded, the trigger fires and executes the procedure, which takes the full payload of the GPS entry and sends it to our channel.

The trigger is pretty basic -

DROP TRIGGER IF EXISTS trigger_name ON table_name;

CREATE TRIGGER trigger_name
    AFTER INSERT ON table_name
    FOR EACH ROW
    EXECUTE PROCEDURE procedure_to_call();

The procedure does a little more work to create a JSON payload, but ultimately sends the notify command -

PERFORM pg_notify('<channel_name>', payload);

The <channel_name> here must match the channel you’re listening to in your background worker.
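For completeness, a procedure along these lines might look like the following sketch - the names are the same placeholders as above, and using row_to_json to build the payload is my assumption, not necessarily how our production procedure does it:

```sql
CREATE OR REPLACE FUNCTION procedure_to_call() RETURNS trigger AS $$
BEGIN
    -- NEW is the row that was just inserted; serialize it as the payload
    PERFORM pg_notify('<channel_name>', row_to_json(NEW)::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```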

The background services are always listening for updates on the same channel, and can act on the notification by deserializing the event data (the full payload of the GPS entry). Once we’ve deserialized the data, we make a couple small modifications to it and then serialize it again. Then, we can use SignalR’s functionality to send the data through the hub context to any clients awaiting updates. This maps to the event we added in the background service, ConnectionOnNotification, where you can respond to the new notification -

private void ConnectionOnNotification(object sender, NpgsqlNotificationEventArgs e)
{
    try
    {
        var data = JsonConvert.DeserializeObject<SomeObjectYouHave>(e.Payload);

        data.UpdatedOn = DateTime.Now.FormatPrettyForUsers();

        _hubContext.Clients.Group(group_id).SendAsync("<the SignalR event the front-end is waiting for>", JsonConvert.SerializeObject(data));
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, $"Error in ConnectionOnNotification - Information [{e.Payload}]");
    }
}

On the front-end, which is a Vue 3 app, we just wait for the notification from the hub context and add the payload data to our existing data object.

That looks a little bit like this -

var connection = new signalR.HubConnectionBuilder()
    .withUrl(`${baseSignalRUrl}/endpoint`) // < Matches the hub endpoint from Startup.cs
    .build();

connection.start();

connection.on("<the SignalR event the front-end is waiting for>", function(payload) {
    // do something with the payload
});

Again, this is the “thirty-thousand foot view”, and it’s difficult to tease apart production code for a blog post, so there may be bits missing here. Please let me know if you have any questions, more than happy to help!

This post, “Real-Time UI Updates with Postgres and SignalR”, first appeared on https://www.codingwithcalvin.net/real-time-ui-updates-with-postgres-and-signalr

| gitkraken, git |

In my previous post, “GitKraken Git GUI How-To: Add & Remove Files”, we went over how to add and remove (stage and unstage) changes using the GitKraken Git GUI application.

In this post, I’m going to show you how to commit those changes to your repository.

The very first thing you need to do before you can commit, is to stage your changes!

Changes are Staged

Once you have your changes staged the way you like, you must supply a commit message. Git, and by extension GitKraken, allows you to have a “summary” and a “description” for a commit. You can use either one, or both, based on how you like to format your messages. Personally, I generally only use the “description” for my commit messages. Once you have entered a message (and it can be as short as a single character, or much more polished), the green “Commit changes to <#> files” button will become enabled -

Message Added, Commit Button Enabled

Clicking the, “Commit changes to <#> files”, button will take the staged files, commit them to your local repository, and add a line to the commit history graph in the middle of the window -

Changes Committed!

That’s it! You’ve now committed your changes, and the process can start over with the next set of changes you need to make. If you look closely in the last screenshot above, you’ll see that I already have changes that are “unstaged” that I’ll be “staging” and “committing” to make this blog post go live.

One thing to keep in mind (we’ll go over this in the next post in the series) is that these changes are still ONLY AVAILABLE TO YOU. You must “push” them to the remote repository to make them available to others.

I’ll be posting more “how to” articles for using the GitKraken Git GUI in the near future, as well as accompanying videos for each post (the video for this will be coming next). If you need any help or have any questions, please feel free to reach out directly.

If you’re interested in downloading the GitKraken Git GUI client and taking it for a spin, please do me a favor and use my referral link to get started. No obligations, of course, if you decide to. And, if you don’t want to, we’ll still be friends. :)

This post, “GitKraken Git GUI How-To: Committing Changes”, first appeared on https://www.codingwithcalvin.net/gitkraken-git-gui-how-to-committing-changes

| gitkraken, git |

In my previous post, “GitKraken Git GUI How-To: Cloning a Repository”, we went over how to do just that. Now that we have a repository to work with, we need to make some changes! Maybe that involves changing existing files, or adding new ones. However, just editing or creating files in the repository doesn’t necessarily mean they’ll be committed, pushed (future topics, I promise), and available for other folks to work with.

In this post, I’m going to show you how to add and remove files - or, in git lingo, stage and unstage files.

Let’s get an idea of what it means to “stage” (or “unstage”) your changes in a git repository. There are three primary reasons you might need to “stage” a file:

  1. When you make a change to a “tracked” file (a file that has previously been committed to the repository, for example, a file that you received during the cloning process), it simply exists in a changed state on the file system and git knows it changed. If you were to perform a commit on the repository right now, nothing would actually happen. We have to tell git that the changed file should be committed by “staging” it.
  2. If you add a new file to the repository (that isn’t being ignored by git - we’ll dive into git ignore files soon, too!), it will simply exist on disk, and git will know it’s new, but again if you commit now, nothing would actually happen.
  3. If you delete a file that was previously being tracked, it will be gone from disk, and git will know it was deleted, but once again, committing right now wouldn’t actually record the deletion.

With all three of these types of changes, nothing is ready to commit until we “stage” them. You can see in the following screenshot, all three of these types of changes waiting to be staged -

Waiting to Stage

The yellow pencil icon indicates a change was made to a file. The red dash icon indicates a file was deleted. The green plus icon indicates a new file was created.

I can “stage” all of these changes at once by clicking the green, “Stage all changes” button in the area above the “Unstaged Files” list -

Stage All Changes

Clicking this button will move all of the lines shown from “Unstaged Files” to “Staged Files” -

Staged Files

Note that all the icons remain the same in this list, so you can easily tell which type of change was staged for a given entry.

At this point, you would be ready to “commit” these changes to your local repository, if desired. But, what if you decided you weren’t ready, and wanted to unstage the changes? Well, the GitKraken Git GUI gives you a simple button to “Unstage all changes” as shown in the following screenshot -

Unstaging Changes

You’ll see that the list moved back to the top, indicating all the changes are currently unstaged.

With that complete, you now decide you want to stage only a few of the changes. The GitKraken Git GUI makes this easy as well. Simply hover over the entry in the list you want to stage, and a green, “Stage File”, button will appear at the far right of that line; click it to stage just that file.

Staging a Single File/Change

Alternatively, right click the line and choose, “Stage” from the popup menu -

Staging a Single File/Change

Be careful not to click “Discard changes”, as that will revert your change - i.e., you’ll lose your work!

Doing that for a couple of the items results in the following screenshot -

Some Changes Staged

As you can see, I still have the ability to “Stage all changes” for what remains in the “Unstaged Files” section, and the ability to “Unstage all changes” in the “Staged Files” section. Hovering over an item in the “Staged Files” section gives me a red, “Unstage File”, button, similar to its green counterpart mentioned previously -

Unstage a Single File/Change

Alternatively, right click the line and choose, “Unstage” from the popup menu -

Unstage a Single File/Change

Be careful not to click “Discard changes”, as that will revert your change - i.e., you’ll lose your work!

With the GitKraken Git GUI, you can dive even deeper into staging and unstaging, by staging individual LINES of a file or multiple lines known as “hunks”. Clicking the file in the “Unstaged Files” area will open a view allowing you to see the changes to the file -

Diff View

Once this view opens, you get those options I previously mentioned. The most visible ones are “Discard Hunk” and “Stage Hunk” in the upper right area of the diff view -

Discard and Stage Hunk Buttons

These are pretty straightforward - “discard hunk” reverts the change to the “chunk” of code that has been changed right below it. “Stage Hunk” will stage JUST that chunk of code. If you’re looking at this view from a file in the “Staged Files” area, you will be presented with an “Unstage Hunk” button instead, which unstages that chunk of code.

Unstage Hunk Button

Depending on how big the file is, or how many changes you made, you may well see multiple “sections” in this view, allowing you to Discard, Stage, or Unstage multiple “hunks” from a single file. These actions are presented (and useful) only for edited files, since adding or deleting a file is an atomic operation on the whole file, whereas an edited file can have changes all over.

The last type of “staging”/”unstaging” is at the LINE level of a changed file. I mentioned this earlier, and although it’s present in some of the last few screenshots, I didn’t want to confuse anyone while covering “hunks”.

Added Lines

As you can see in the previous screenshot, while viewing the diff of a file in “Unstaged Files”, you’ll see the lines added to the file in green. Hovering over one of these lines will reveal a green “+” (plus) indicator in the left margin. Clicking this button will stage just that single line. Where “Stage Hunk” would stage every line in the hunk, this allows you to stage individual lines. Of course, if you were hovering over one of these lines for a file in the “Staged Files” area, that green “+” plus icon would be a red “-“ minus icon to unstage that specific line -

Unstage a Single Line


There you have it - all the various ways you can add, or “stage”, files/changes as well as the various ways you can remove, or “unstage”, files/changes.

I’ll be posting more “how to” articles for using the GitKraken Git GUI in the near future, as well as accompanying videos for each post (the video for this will be coming next). If you need any help or have any questions, please feel free to reach out directly.

If you’re interested in downloading the GitKraken Git GUI client and taking it for a spin, please do me a favor and use my referral link to get started. No obligations, of course, if you decide to. And, if you don’t want to, we’ll still be friends :).

This post, “GitKraken Git GUI How-To: Adding Files”, first appeared on https://www.codingwithcalvin.net/gitkraken-git-gui-how-to-add-remove-files

| gitkraken, git |

If you’re new to the GitKraken Git GUI or interested in it, one of the first things you’ll want to do after installing it is clone a repository so you can get to work.

There are three ways in the GitKraken Git GUI to “initiate” the cloning of a repository. Each one of these items will lead to the same “Repository Management” popup dialog, with the “Clone” section selected, which I will show you at the end.

Launching the Repository Management Dialog

1. File | Clone Repo

From the File menu, click on Clone Repo. Alternatively, this menu item comes with the keyboard shortcut CTRL + N, if you prefer keyboard shortcuts.

File Clone Repo

2. “New Tab” tab

From the “New Tab” page, which can be added (if you don’t already have one) by clicking the + button in the tab bar -

Add New Tab

Once the “New Tab” page is opened, click on “Clone a Repo” from the menu down the left-hand side -

Clone a Repo from the New Tab page

3. Repository Management Icon

This one is a little more subtle, but always available in view if you need it. On the far left of any open tabs (even the “New Tab”), there is a folder icon. Clicking on this icon will launch the “Repository Management” popup.

Launch the Repo Management Popup

Cloning from the Repository Management Dialog

Once you’ve successfully launched the “Repository Management” dialog, make sure you’re on the “Clone” item on the left-hand side -

The Repository Management Dialog

When “Clone” is selected, we are presented with a multitude of providers to clone our repo from.

If all you have is a URL that doesn’t correspond to any of the listed providers, you can still clone it using the “Clone with URL” item at the very top. Simply provide the local folder you want to clone the repository into and the URL of the remote repository.

For example, acquire the URL of your repository from GitHub -

GitHub Repository URL

And paste that value into the GitKraken Git GUI “URL” field -

Cloning a GitHub repository with a URL

With those fields provided, you will be presented with the “Full Path” field. This pre-populates with the “Where to clone to” plus the repository name. You can change the repository name by typing over the value in that field.

Once you’re satisfied, click on, “Clone the repo!” to initiate the clone process. The GitKraken Git GUI will ask you for credentials (if necessary), and then a progress dialog will be shown -

Cloning Progress

Once this process completes, you’ll be asked if you want to open the newly cloned repository -

Open the Clone?

Clicking on “Open Now” will open a new tab in the GitKraken Git GUI to your newly cloned repository -

Newly Opened Repository

You’re ready to work with your repository!

Now, I do want to back up just a little bit to the Repository Management dialog to take a look at another provider -

The Repository Management Dialog

If you’ve authorized the GitKraken Git GUI to interact with one of these other providers (perhaps another post is warranted for that?), you can select which repository you want to clone from a list. For example, I’ve authorized the GitKraken Git GUI to work with GitHub.com, so I am able to directly select which repository I want to clone -

Clone from GitHub.com

Click the drop down, and select any repository from your account (or organizations, if you’ve allowed the GitKraken Git GUI access to them) -

Remote Listing

Upon selecting a remote repository from the list, you’ll be presented with the “Full Path” field, so you can change the local folder name the repository is being cloned into, and the “Clone the repo!” button will become active -

Where to clone?

Once you click the “Clone the repo!” button, the same progress dialog will launch, as well as asking whether you would like to open the newly cloned repository, just like the URL version shown previously.

I won’t show you any more providers from the dialog, as they all work in basically the same way; the only difference is that you will need to authorize the GitKraken Git GUI to access your accounts in those services. Just note that, although you see multiple providers in my screenshots, some of them require a paid version of the GitKraken Git GUI, so please check out the plan comparison page on the GitKraken website for more details!


I hope this post has shed some light on the various ways you can clone a remote repository using the GitKraken Git GUI. I’ll be posting more “how to” articles for using the GitKraken Git GUI in the near future, as well as accompanying videos for each post. If you need any help or have any questions, please feel free to reach out directly.

If you’re interested in downloading the GitKraken Git GUI client and taking it for a spin, please do me a favor and use my referral link to get started. No obligations, of course, if you decide to. And, if you don’t want to, we’ll still be friends :).

Thanks, dear reader, hope you enjoy unleashing your inner Kraken!

This post, “GitKraken Git GUI How-To: Cloning a Repository”, first appeared on https://www.codingwithcalvin.net/gitkraken-git-gui-how-to-cloning-a-repository

| csharp, git, github |

In a previous post, I discussed how I was able to get a .NET Framework application built using GitHub actions. Go check out that post for the full YAML’y goodness.

In this post, however, I want to explain how I modified that original GitHub Action to take advantage of git tags to automate the release (of that application).

To accomplish this, we’re going to add TWO items to our yaml file:

  1. Run the action when a git tag is pushed (some extra coolness here)
  2. Apply Conditionals to Deployment Steps

Part 1 - Run the Action when a git tag is pushed

Here’s our original YAML for triggering our action:

on:
  push:
    branches: master

Right beneath push:, but before branches: master, we’re going to add our tag line:

on:
  push:
    tags: releases/[1-9]+.[0-9]+.[0-9]+
    branches: master

Woah, is that…is that a regex in there?! Why yes it is! Let me explain….

I don’t necessarily want any random tag pushed to the repo to trigger this event, so you have to be pretty specific. First, you need to prefix your tag with releases/, and then it must also conform to the remaining regex - which enforces a “version number”.

Here are a couple example tags -

  • releases/1.2.0 = action RUNS
  • bob/tag123 = action does NOT run
  • v1.2.0 = action does NOT run
  • releases/v1.2.0 = action does NOT run
  • releases/12.5.12 = action RUNS
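If you want to sanity-check a tag name before pushing it, the filter above behaves roughly like a regular expression (a loose approximation on my part - GitHub’s filter syntax is its own glob-like pattern language, not true regex). Here’s a quick Python sketch, with a hypothetical triggers_action helper:

```python
import re

# Rough regex approximation of the workflow filter: releases/[1-9]+.[0-9]+.[0-9]+
TAG_PATTERN = re.compile(r"^releases/[1-9]+\.[0-9]+\.[0-9]+$")

def triggers_action(tag: str) -> bool:
    """Return True if a pushed tag would trigger the action."""
    return TAG_PATTERN.match(tag) is not None

print(triggers_action("releases/1.2.0"))    # True  -> action RUNS
print(triggers_action("bob/tag123"))        # False -> does NOT run
print(triggers_action("releases/v1.2.0"))   # False -> does NOT run
print(triggers_action("releases/12.5.12"))  # True  -> action RUNS
```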

Alright. Given that we push the “correct” tag, we’ll trigger the action. How do we take that and actually deploy the application? ONWARD! (that’s a good movie, btw)…

Part 2 - Apply Conditionals to Deployment Steps

In our original action, we were already logging into Azure and deploying our application. For reference, that looks like this:

- name: Login to Azure
  uses: azure/login@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }} # whatever you named your secret

- name: Publish Artifacts to Azure
  uses: Azure/webapps-deploy@v2
  with:
    app-name: ezrep
    package: "./_build"
    slot-name: production

The problem is, as listed, these steps will ALWAYS run, and I only want them to when I’ve pushed a tag that (successfully) triggers the action. How do we do that?

We use a conditional on the two steps, and a built-in function from GitHub -

- name: Login to Azure
  if: startsWith( github.ref, 'refs/tags/releases/')
  uses: azure/login@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }} # whatever you named your secret

- name: Publish Artifacts to Azure
  if: startsWith( github.ref, 'refs/tags/releases/')
  uses: Azure/webapps-deploy@v2
  with:
    app-name: ezrep
    package: "./_build"
    slot-name: production

Breaking this down a bit, you’ll notice we added the if line to both actions. Within that, we utilize the startsWith function to see if the github.ref that triggered the build “starts with”, refs/tags/releases/. If that’s true, run the step. Now, github.ref is part of the data that we have access to during an action, and refs/tags/releases/ is a hard-coded string.

Why does this work? Well, our build will only get triggered if we push a new git tag that follows our standard at the top of the action, so by the time we get to this step, we’ve either:

  • pushed to master, but that “ref” would be refs/heads/master
  • created a pull request against master (ref doesn’t match)
  • OR, pushed a tag (releases/1.2.5), which would have a “ref” of refs/tags/releases/1.2.5 and THAT matches our “starts with” conditional

To recap, if we push to master, we’ll get a build, but no deployment. If we create a pull request to master, we’ll get a build of the PR, but no deployment. If we push a non-standard tag, we get nothing. Finally, if we push the “correct” tag, we’ll get a build AND a deployment to Azure.
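That decision logic is simple enough to sketch outside of YAML. Here’s a small Python stand-in for the startsWith conditional (the should_deploy helper and the sample pull-request ref are mine, for illustration only):

```python
def should_deploy(github_ref: str) -> bool:
    """Mirrors: if: startsWith( github.ref, 'refs/tags/releases/')"""
    return github_ref.startswith("refs/tags/releases/")

print(should_deploy("refs/heads/master"))         # False -> build only
print(should_deploy("refs/pull/42/merge"))        # False -> PR build only (illustrative ref)
print(should_deploy("refs/tags/releases/1.2.5"))  # True  -> build AND deploy
```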

I’ll be honest, it took me a lot longer to piece this together than I care to admit (but I’m admitting it anyway). The documentation, quite honestly, left a bit to be desired around how to utilize these things together, so I have about 40 failed builds from various attempts before getting this right.

I think there will be one more post, at some point, about parsing that version number from the tag name, and automatically applying that to all the assemblies as the actual version of the software. Right now, this application isn’t “versioned”, and it should be. I’m still trying to piece together the right steps, since it’s a .NET Framework application.

Thanks again, dear reader. I hope this is useful!

If you need a full yaml reference, please check out this gist.

This post, “Git Tag Based Released Process Using GitHub Actions”, first appeared on https://www.codingwithcalvin.net/git-tag-based-released-process-using-github-actions

| csharp, dotnet |

One thing I’ve learned over the years is that being clever with your code is a waste of time and energy. The simpler, the better. Part of being “simpler”, to me, falls into the paradigm of “clean code”. But - what does “clean code” actually mean? In this post, we’ll look at what I consider to be a “clean(er)” conditional statement that reduces cognitive complexity/overhead.

For example, consider a “simple” authorization check (contrived, of course):

if(_authorizationService.HasClaim(Claims.Admin) || (_authorizationService.HasClaim(Claims.User) && _authorizationService.HasClaim(Claims.ModifyTimesheet))){
    // do something
}

That if statement is getting kinda hairy, huh? Take into consideration new folks joining your team trying to make heads or tails of that, too.

Yes, within a few seconds we glean that if you’re an Admin, or a User that also has the ModifyTimesheet permission, you should be allowed to //do something - but what if we just gave those “things” actual names?

Consider this refactor:

bool isAdmin = _authorizationService.HasClaim(Claims.Admin);
bool userHasPermission = _authorizationService.HasClaim(Claims.User) && _authorizationService.HasClaim(Claims.ModifyTimesheet);

if(isAdmin || userHasPermission){
    // do something
}

You can see we’ve introduced a couple of variables with very explicit names and swapped them into the if statement. Now when you scan that code and come across the if statement, you don’t have to read into the logic to understand the condition that needs to be met. If you do care about what those two things are, you can easily scan up to the variable declarations and “dig in” a little more.

Happy clean coding, dear reader!

This post, “Clean Coding in C# - Part I”, first appeared on https://www.codingwithcalvin.net/clean-coding-in-c-part-i

| github, azure, dotnet |

In this post, I’m going to show you how I finally managed to configure a Github action to build my .NET Framework web application and then deploy it to Azure. It took way too long, so I hope this helps somebody else out there save some time.

To get started - well, I didn’t know how to get started. I couldn’t find an action template to do this, like you can for .NET Core. Luckily, I put out a tweet and got a response:

As soon as he said use “windows-latest”…“no need to install .NET Framework, its already there” (paraphrasing), it clicked.

Okay, fantastic, but what steps will we ultimately need to get this thing built and subsequently deployed? That part took a little longer, unfortunately.

Let’s start with the “basics” of the action -

name: EZRep Build

on:
  push:
    branches: master

jobs:
  build:
    runs-on: windows-latest

    steps:
      - uses: actions/checkout@v2

We’re calling this the “EZRep Build”, run when we push to master, use the latest Windows image/runner, and checkout the repository. Great, so we have our code checked out, now what do we do?

Since this is a .NET Framework application (that still uses packages.config, I might add), I needed two more steps to get going -

- name: Setup MSBuild
  uses: microsoft/setup-msbuild@v1

- name: Setup NuGet
  uses: NuGet/setup-nuget@v1.0.2

These steps get MSBuild and NuGet setup and added to the PATH variable (since we’re on Windows).

This next part is where I struggled a bit, trying to get the various steps to use environment variables, so there may very well be a better way, but I’ll show ya anyway -

- name: Navigate to Workspace
  run: cd $GITHUB_WORKSPACE

- name: Create Build Directory
  run: mkdir _build

Every time I tried to call MSBuild (which I’ll show in a second), I was never in the right working directory. I tried calling it with $GITHUB_WORKSPACE/EZRep.sln (my solution file), but it never worked. Finally, after quite a few attempts, I just added a step to change the directory, and this solved all my problems.

My MSBuild step actually creates a package for deploying to Azure, and I later learned that it wouldn’t automatically create the directory I wanted to use, so that’s why there is the mkdir step in there. You may or may not need that at all, depending on how you package and deploy your code.

It’s finally time to actually build the solution -

- name: Restore Packages
  run: nuget restore EzRep.sln

- name: Build Solution
  run: |
    msbuild.exe EzRep.sln /nologo /nr:false /p:DeployOnBuild=true /p:DeployDefaultTarget=WebPublish /p:WebPublishMethod=FileSystem /p:DeleteExistingFiles=True /p:platform="Any CPU" /p:configuration="Release" /p:PublishUrl="../_build"

A little long-winded there, but the first thing is to restore the packages. I’ve got a weird setup right now that I didn’t even realize until all of this, and that I need to go back and research / fix. It seems like my solution is only partway migrated from the old packages.config construct to the newer MSBuild-based construct. You may not need this step, specifically, but you might need the -t:restore flag for your MSBuild step. You’ll notice we’re using that _build directory we created earlier for our PackageLocation, except it’s back one directory from the default location, hence the double-dot relative directory path - ../_build

Here we are, the actual build step. There are a lot of parameters/flags going on in there, but most of those are because of the packaging routine. You could easily get by with a simpler version, such as this -

- name: Build Solution
  run: |
    msbuild.exe EzRep.sln /p:platform="Any CPU" /p:configuration="Release"

Substitute your actual solution filename in there, of course :)

I also chose to upload the artifacts before deploying them, so they would exist alongside the build in Github -

- name: Upload artifact
  uses: actions/upload-artifact@v1.0.0
  with:
    name: EZRepBundle
    path: "./_build"

Again, using that _build directory (we’re back to our default working directory from our cd $GITHUB_WORKSPACE step earlier in the file). Give it a name. I chose EZRepBundle, but you can call it whatever you like / whatever makes the most sense for your application. Now this step just stores those artifacts for us. It doesn’t do anything else, so we still need to deploy our application to Azure.

That looks like this -

- name: Login to Azure
  uses: azure/login@v1
  with:
    creds: $
- name: Publish Artifacts to Azure
  uses: Azure/webapps-deploy@v2
  with:
    app-name: ezrep
    package: "./_build"
    slot-name: production

First you’ll notice the “Login to Azure” step. There is a little bit of setup you have to do before this will work: you use the Azure CLI to create the necessary credentials, then store them in the secrets area of the project so Github can access them when logging in. Check out this post to learn more about HOW to do that. If you’re using Azure and are comfortable at the command line, you should have no problem here. If you do run into issues, ping me on Twitter - I’d be glad to help.

Now that we’re “logged in” to Azure, we can publish our _build package we created earlier. Give it the Azure WebApp name you want to deploy to, the local directory to find the package (_build for us), and the slot to deploy to. The slot is optional and defaults to ‘production’ anyway, but I like having it there as a reminder.

Hopefully, with any luck, you’ll have this thing working on the first try - unlike my 40-50 failed attempts :).

I am going to call this post “version 1”, because I am also working on a versioning and release process using a few more steps, git tags, and step conditionals (you can have an if statement on a step in Github Actions!).

Since my complete file is in a private repo, you can get the full v1 file in this public gist.

Thanks for reading!

This post, “Building .NET Framework Applications with Github Actions”, first appeared on https://www.codingwithcalvin.net/building-net-framework-applications-with-github-actions

| azure, devops, postman |

I’ve let them linger for too long. It’s time to figure out a way to delete the three service connections in my Azure DevOps project that don’t work, and can’t be deleted through the UI. There has to be a way!

There is. It involves the AzureDevOps API, a Personal Access Token (PAT), Postman (or curl), and some patience.

Before we dive in too far, go generate an AzureDevOps PAT under your account. I generated mine with full permissions with a 1 day expiration. Theoretically, you could probably get by with just giving it “Read, Query, & Manage” for the Service Connections scope.

Whatever tool you choose to make the API requests is up to you, but I prefer Postman these days. You will need to authenticate to the APIs with Basic Authentication: base64 encode a string composed of a blank username, a colon, and the PAT you generated earlier. It will look something like this (C# pseudo code) -

var apiToken = Convert.ToBase64String(Encoding.ASCII.GetBytes($":{PAT}"));
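If C# isn’t handy, here’s the same encoding sketched in Python (the helper name is mine, not from any SDK):

```python
import base64

def basic_auth_header(pat: str) -> str:
    # Basic auth wants base64("username:password"); Azure DevOps PATs
    # use a blank username, so we encode ":<PAT>".
    token = base64.b64encode(f":{pat}".encode("ascii")).decode("ascii")
    return f"Basic {token}"
```

Send the returned value as the Authorization header on each request.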

Alternatively, using Postman, you would use “Basic Auth”, and put your PAT in the password field:

Postman Basic Authorization

The first API call you want to make will be to get the list of Service Endpoints:

GET https://dev.azure.com/{organization}/{project}/_apis/serviceendpoint/endpoints?includeFailed=true&api-version=5.1-preview.2

Replace your organization and project names as appropriate. Note that we are explicitly saying that we also want endpoints that are in a FAILED state. This was the only way I could get the three I wanted to delete. Otherwise, all I received was the single endpoint that was working fine.

You’ll end up with a giant response body that includes all your service endpoints. Find the “id”s for the ones you need to delete, and copy them out somewhere. You’ll need them for the next COUPLE of API calls…
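If you’d rather not eyeball the giant JSON, here’s a hedged Python sketch for pulling the ids out. Filtering on isReady is my assumption for spotting the bad ones - check it against your own response:

```python
import json

def failed_endpoint_ids(response_body: str) -> list:
    # The list response has the usual Azure DevOps shape:
    # {"count": N, "value": [{"id": "...", "isReady": true/false, ...}, ...]}
    data = json.loads(response_body)
    return [ep["id"] for ep in data.get("value", []) if not ep.get("isReady", True)]
```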

Next you’ll want to retrieve the specific details for the endpoints (one at a time, of course):

GET https://dev.azure.com/{organization}/{project}/_apis/serviceendpoint/endpoints/{endpointId}?api-version=5.1-preview.2

Again, replace the organization and project names as appropriate, but now you’ll also need to replace the endpointId in the URL.

This will give you another giant response body that includes all the details for that specific endpoint.

Using that same URL, we need to switch the verb to a PUT -

PUT https://dev.azure.com/{organization}/{project}/_apis/serviceendpoint/endpoints/{endpointId}?api-version=5.1-preview.2

(replace tokens as needed)

The response body you received from the previous GET will now be the template for the body you need to send back (with some changes).

Inside of the body, find the creationMode field and change the value from Automatic to Manual.

At this point, if you send the PUT, you will likely receive errors that some fields should be omitted from the Body. Go ahead and remove whichever ones the error gives you, until it processes successfully. I had to remove azureSpnRoleAssignmentId, spnObjectId, and appObjectId from all of mine, but you may receive others.
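Those body edits are easy to fumble by hand, so here’s a Python sketch of the reshaping. The field list is just what I had to remove - yours may differ, and depending on your payload the fields may sit at the top level or under "data":

```python
import json

def prepare_put_body(get_response_body: str) -> str:
    # Start from the GET response, flip creationMode to Manual, and drop
    # the fields the API complained about (adjust the list to your errors).
    body = json.loads(get_response_body)
    body["creationMode"] = "Manual"
    for field in ("azureSpnRoleAssignmentId", "spnObjectId", "appObjectId"):
        body.pop(field, None)                 # top-level variant
        body.get("data", {}).pop(field, None) # nested-under-"data" variant
    return json.dumps(body)
```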

Assuming you get a 200 OK response from this call, navigate to the UI that lists your service connections in Azure DevOps (https://dev.azure.com/{organization}/{project}/_settings/adminservices). Alternatively, navigate to the “Project Settings” in AzureDevOps for the project in question, and click “Service Connections” in the left-hand navigation menu.

In the list of service connections, click the one that corresponds to the service connection you just modified through the API. Then, click the kebab menu in the upper right, next to the ‘Edit’ button:

Location of Kebab Menu

From the menu that drops down, click delete:

Delete Item

And then confirm the deletion:

Confirm Delete

With any luck, the bad connection should disappear from the UI! Now, you just need to go back and perform the necessary API calls and deletions again for every bad connection you may have.

Luckily, I only had three, which took about 10 minutes (it took darn near an hour to figure out all the necessary steps).

I hope this helps somebody out there until we’re able to delete these bad service connections from the UI without hassle. Until then, good luck, dear reader!

This post, “Deleting Failed AzureDevOps Service Connections”, first appeared on https://www.codingwithcalvin.net/deleting-failed-azuredevops-service-connections

| csharp, dotnet, devops, docker |

Part II of II

See Part I here

In my previous post, I went through how we set up versioning for our assemblies during local development, and also during our Azure DevOps pipelines. In this post, I want to share how we did “tagging” - both to the repository, and to the docker containers.

I’ll start out by telling you that adding tags to our repository proved WAY TOO noisy, and was abandoned. I’ll still share what I tried, though.

As mentioned in the previous post, we have three different pipelines for Azure DevOps - one for each potential Docker container. And, remember, based on our configuration, a given commit could cause no containers (maybe just a README change), or up to all three (shared library change).

Going in, I thought it would be useful to know, from the repository, where a given container was built. I started by having each pipeline create a git tag at the commit/SHA that triggered the pipeline, formatted with the container name and the corresponding $(Build.BuildId).

After a handful of commits to master over the course of a day, we were up to around 10 tags. It was apparent before the end of that first day that this was going to be WAY too noisy to keep going, and we shut it off.

That’s all I’m gonna say about that. It was a bad idea - in our scenario. It may work better for you if you don’t have as many containers, or you do it less often.

Okay, now what? Twitter comes through again!

Kelly Andrews comes through with his preferred method:

That makes sense. Instead of tagging the git SHA with the build id, tag the container with the commit SHA. Sounds great, let’s give it a go.

First, how do I get the commit SHA that triggered the pipeline? I figured there could be enough of a delay that if I queried for the SHA from HEAD, I could end up with a newer SHA than the one that actually triggered the pipeline. Off to the AzureDevOps docs!

After some searching through the Predefined Build Variables section of the documentation, I found what I was looking for:


The latest version control change of the triggering repo that is included in this build.

  • Git: The commit ID.
  • TFVC: the changeset.

This variable is agent-scoped, and can be used as an environment variable in a script and as a parameter in a build task, but not as part of the build number or as a version control tag.

Perfect! Since we’re using git (who isn’t these days?), all I needed to do was alter our pipeline to tag our container with Build.SourceVersion, and then, since we’re also using ECR, push that tag to our (private) registry.

First, our Docker task was already tagging our container with latest and Build.BuildId, as shown below -

- task: Docker@2
  displayName: Build an image
  inputs:
    # more stuff here, omitted for brevity
    tags: |
      latest
      $(Build.BuildId)

So, a simple modification here, just add in Build.SourceVersion:

- task: Docker@2
  displayName: Build an image
  inputs:
    # more stuff here, omitted for brevity
    tags: |
      latest
      $(Build.BuildId)
      $(Build.SourceVersion)   # <-- THIS PART

With that part completed, we just need to push the $(Build.SourceVersion) as a tag to the image in ECR using the ECR Push Image task (AFTER the image has been pushed, of course). We were already pushing the $(Build.BuildId) separately, and latest goes by default on the initial push. With that said, we ended up with this -

- task: ECRPushImage@1
  inputs:
    # more stuff here, omitted for brevity
    pushTag: $(Build.SourceVersion)

Every time this pipeline pushes a new container to ECR, it will be tagged with the commit SHA that triggered it! Now, it’s the “long form” SHA, not the shortened version, so it’s a little noisy when looking at the image list in ECR, but it’s much better than what we started with. And, how often do you look at the commit history in your repo compared to how often you review the list of images in your container registry?

The result ends up looking like this:

ECR Image List

The (currently) three digit number is the $(Build.BuildId) and the gnarly string is the $(Build.SourceVersion) from AzureDevOps. You’ll see a few others in there, but those are to know which environment the container is running in - quasi unrelated to this post :).
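Incidentally, the difference between the gnarly long form and git’s short form is just truncation - git’s “short” SHA is a prefix of the full 40-character id, usually the first 7 characters. A quick illustration (the SHA here is made up):

```python
# Hypothetical commit id: ECR shows the full 40-character SHA,
# while git's "short" form is just a prefix (typically 7 chars).
full_sha = "1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b"
short_sha = full_sha[:7]
print(short_sha)  # 1a2b3c4
```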

Hope you enjoyed tagging along in my adventure…get it…tagging?

Until next time, dear reader!

This post, “Docker Containers and my Adventures in Versioning and Tagging - Part II”, first appeared on https://www.codingwithcalvin.net/docker-containers-and-my-adventures-in-versioning-and-tagging-part-ii