
Andrew Lock

~6 min read

Running .NET Core global tools in non-sdk Docker images

.NET Core global tools are great for providing small pieces of functionality. Unfortunately, they have a few limitations which can occasionally cause issues when you run them. In this post I describe how you can avoid these issues by containerising your global tools with Docker.

All the commands in this post use Linux containers. The same principle can be applied to Windows containers if you update the commands, but I'm not sure the pay-off is worth it in that case, given the large size of Windows containers.

.NET Core global tools and their limitations

.NET Core global tools are handy command-line "tools" that you can install in your system and run from anywhere. They have evolved as the .NET CLI has evolved (and have changed again in .NET Core 3.0), but the current incarnation appeared in .NET Core 2.1.

There are a number of first-party global tools from Microsoft, like the dotnet-user-secrets tool, the dotnet-watch tool, and the EF Core tool, but you can also write your own. In the past I've described creating a tool that uses the TinyPNG API to squash images, and a tool for converting web.config files to appsettings.json format. I also use the Cake global tool, Nate McMaster's dotnet-serve tool, and the Nerdbank.GitVersioning tool nbgv.

Generally speaking, installing these tools is painless - you provide the ID of the associated NuGet package:

dotnet tool install -g nbgv

You can then run the tool using <toolname>:

> nbgv get-version

Version:                      0.0.236.24525
AssemblyVersion:              0.0.0.0
AssemblyInformationalVersion: 0.0.236+cd5f8f6636
NuGet package Version:        0.0.236-cd5f8f6636
NPM package Version:          0.0.236-cd5f8f6636

There are some downsides to the tools though.

  • ~~There's no way to specify global tools that are required to build a project. This was possible before .NET Core 2.1, and will be coming again in 3.0 though.~~ This is possible again in .NET Core 3.0.
  • Global tools are really framework-dependent .NET Core console apps, so they need the right runtime to be installed on your machine. You can't run a global tool compiled for .NET Core 2.2 on a machine that only has the 2.1 runtime installed.
  • When you install a new major or preview version of the .NET Core SDK, you might not be able to run your existing tools, based on the roll-forward rules.
  • They require the .NET Core SDK to install them, even though they only need the .NET Core Runtime to run.
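As an aside, the .NET Core 3.0 fix for the first limitation works by adding a local tool manifest (created with dotnet new tool-manifest) to your repository, which dotnet tool restore can then install from. As a sketch, a manifest pinning the nbgv tool might look something like this (the version shown is illustrative):

```json
{
  "version": 1,
  "isRoot": true,
  "tools": {
    "nbgv": {
      "version": "2.3.38",
      "commands": [
        "nbgv"
      ]
    }
  }
}
```

The file lives at .config/dotnet-tools.json, so the required tools are versioned alongside the project itself.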

If you are building the tool yourself, you can support multiple runtimes by multi-targeting the global tool, e.g.

<TargetFrameworks>netcoreapp2.1;netcoreapp2.2;netcoreapp3.0</TargetFrameworks>
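In context, the full project file for such a multi-targeted tool might look something like the following sketch (the tool command name here is illustrative; PackAsTool is what marks the project as a global tool):

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <!-- Target multiple runtimes so the tool runs on any of them -->
    <TargetFrameworks>netcoreapp2.1;netcoreapp2.2;netcoreapp3.0</TargetFrameworks>
    <!-- Pack the console app as a .NET Core global tool -->
    <PackAsTool>true</PackAsTool>
    <ToolCommandName>my-tool</ToolCommandName>
  </PropertyGroup>

</Project>
```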

However, that's only possible if you're the one in control of the code. If not, an alternative option is to package the global tool into a Docker container. Doing so encapsulates the tool's dependencies away from the host system, so you can install whatever SDKs you want on the host without having to worry about your global tools. This is the same philosophy as packaging any other CLI tool into a Docker container, as I described in my previous post.

Creating a Docker image for a .NET Core Global tool

On the face of it, creating a Docker image of a .NET Core global tool is easy. Let's take the nbgv tool for example. You could create a Docker image for the tool using the following Dockerfile:

FROM mcr.microsoft.com/dotnet/core/sdk:2.1

ENV NBGV_VERSION 2.3.38

RUN dotnet tool install --global nbgv --version $NBGV_VERSION

ENV PATH="/root/.dotnet/tools:${PATH}"

ENTRYPOINT ["nbgv"]

This file starts from the .NET Core 2.1 SDK image, and uses dotnet tool install to install the global tool. Finally, it sets the nbgv executable as the entry point. You can build and tag the image using:

docker build -t example/nbgv .

Once the image has been built, you can run your global tool using the following command. I mounted the current working directory as a volume in the container, and set the working directory to that volume, passing the command get-version to calculate the version of the git repo in that directory:

> docker run --rm -v $PWD:$PWD -w $PWD example/nbgv get-version

Version:                      0.0.236.24525
AssemblyVersion:              0.0.0.0
AssemblyInformationalVersion: 0.0.236+cd5f8f6636
NuGet package Version:        0.0.236-cd5f8f6636
NPM package Version:          0.0.236-cd5f8f6636

This works perfectly, and you can create helper scripts for running your new containerised tool as I described in my last post.
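As a sketch, such a helper could be a small shell function that forwards its arguments to the container. The DOCKER variable is my own addition here, so that the docker binary can be swapped out:

```shell
#!/bin/sh
# Sketch of a helper for the containerised nbgv tool: it mounts the current
# directory into the container, sets it as the working directory, and
# forwards all arguments to the tool.
# DOCKER is overridable so the docker binary can be substituted if needed.
DOCKER="${DOCKER:-docker}"

nbgv() {
  "$DOCKER" run --rm -v "$PWD:$PWD" -w "$PWD" example/nbgv "$@"
}
```

With this sourced into your shell, nbgv get-version behaves much as if the tool were installed locally.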

Unfortunately, there's one big downside to this approach. We're using the SDK image to install the global tool (as you have to), which means the final images are big - 1.8GB! Compare that to the 115MB required for the containerised AWS CLI tool from my last post, and this clearly isn't ideal.

The problem is that we're including the whole .NET Core SDK and all associated packages in our container, when all it really needs is the .NET Core runtime. Luckily we can solve this one by using multi-stage builds.

Optimising the containerised global tool with multi-stage builds

Multi-stage builds allow you to use one Docker base image to build your project, and then copy the output into another Docker image. This is really important for production workloads, as it allows you to have a large builder image, with all the dependencies necessary to build your project, but then to copy your project to a small, lightweight image that only has the dependencies necessary to run your project.

We can apply the same approach to containerising .NET Core global tools. Even though we need to use the SDK to install them, we only need the .NET Core runtime to execute them, as they are simply .NET Core console apps.

The only difficulty with this approach is that it's not well documented. My workmate Mauricio suggested (and implemented) the approach shown below, where we simply copy the global tool's binary files from /root/.dotnet/tools/ to the runtime image:

# Install the .NET Core tool as before
FROM mcr.microsoft.com/dotnet/core/sdk:2.1 as builder

ENV NBGV_VERSION 2.3.38

RUN dotnet tool install --global nbgv --version $NBGV_VERSION

ENV PATH="/root/.dotnet/tools:${PATH}"

# Use the smaller runtime image
FROM mcr.microsoft.com/dotnet/core/runtime:2.1

# Copy the binaries across, and set the path
COPY --from=builder /root/.dotnet/tools/ /opt/bin
ENV PATH="/opt/bin:${PATH}"

ENTRYPOINT ["nbgv"]

This Docker image has exactly the same behaviour as the previous example, but it's now only 226MB, down from 1.8 GB! That's much more palatable.

Using the Alpine 3.9 runtime image gets the image size down to 132MB, but unfortunately we ran into libgit2 issues that we didn't look into further.
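For reference, the Alpine-based attempt looked roughly like the following; the exact runtime tag is an assumption on my part and depends on the tags available at the time:

```dockerfile
# Install the .NET Core tool using the full SDK image, as before
FROM mcr.microsoft.com/dotnet/core/sdk:2.1 as builder

ENV NBGV_VERSION 2.3.38

RUN dotnet tool install --global nbgv --version $NBGV_VERSION

# Use the Alpine-based runtime image (tag is an assumption)
FROM mcr.microsoft.com/dotnet/core/runtime:2.1-alpine3.9

# Copy the binaries across, and set the path
COPY --from=builder /root/.dotnet/tools/ /opt/bin
ENV PATH="/opt/bin:${PATH}"

ENTRYPOINT ["nbgv"]
```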

The big advantage of containerising your global tools like this is not having to worry about upgrades to .NET Core breaking anything. Theoretically that shouldn't be a big issue anyway, but using containers guarantees it. That's especially useful in build scripts on CI servers that may have to build a variety of projects using a variety of .NET Core SDKs.

In some cases, the effort required to containerise global tools may not be worth it. If the tool needs to access your file system to perform its work, or needs access to the network (like the dotnet-serve tool for example), you'll need to consider how those things are affected by running the tool in a container. For many tools however, I expect there won't be any issues.

Summary

In this post I discussed some of the limitations of .NET Core global tools in relation to .NET Core SDK versions and updates. I described how you can avoid these issues by packaging tools in Docker containers. Finally, I showed an optimised container that significantly reduces the Docker image size by using multi-stage builds.
