Taking a look at Dagger

Alan Raison
1st February 2024

Solomon Hykes is probably most famous for being the founder and former CTO of Docker. Docker revolutionised the way we package, run and distribute server applications, so when Hykes starts a new venture, it's worth checking out.

I heard an interview with Hykes on Bret Fisher's DevOps and Docker Talk podcast, where he went through the ideas behind Dagger, his take on solving developers' CI and CD problems, and I knew I needed to take a look.

The basis of Dagger is to define your pipelines as an object model using the same language as your project, much like CDK does for creating infrastructure in AWS. Dagger currently supports Go, NodeJS and Python, with support for Elixir and Java on the way. All actions are performed in containers, so they are repeatable and cacheable.
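To give a feel for what that object model looks like, here is a minimal sketch of a pipeline using the NodeJS SDK. This is illustrative rather than a working project: the base image, directory paths and commands are assumptions, and it needs a Dagger engine available locally to run.

```javascript
// Minimal Dagger pipeline sketch (NodeJS SDK) — paths and tags are illustrative.
import { connect } from "@dagger.io/dagger";

connect(
  async (client) => {
    // Mount the project source from the host, excluding dependencies
    const src = client.host().directory(".", { exclude: ["node_modules/"] });

    // Run the test suite inside a container, so the step is
    // repeatable and its result can be cached
    const out = await client
      .container()
      .from("node:18")
      .withDirectory("/src", src)
      .withWorkdir("/src")
      .withExec(["npm", "install"])
      .withExec(["npm", "test"])
      .stdout();

    console.log(out);
  },
  { LogOutput: process.stderr }
);
```

Because each step is an object in your own language, you can compose, parameterise and unit-structure pipelines the same way you would any other application code.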

I started with the project I had open: a simple NodeJS Express application. It was configured with a GitHub Action which used a Dockerfile to build the application and then push it to a Docker registry. Setting up the project was quite straightforward; the development environment (a devcontainer) required the docker CLI and control socket to be available, so these were added to the configuration, along with the Dagger CLI. I followed the Quick Start and NodeJS Getting Started guides and was quickly able to get a simple build running, which mimicked the GitHub Action, reusing the existing Dockerfile.
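The build that reuses the existing Dockerfile can be sketched in a few lines; the registry address below is a placeholder, and again this needs a local Dagger engine to actually run.

```javascript
// Sketch: build from the project's existing Dockerfile and push the image.
// The registry address is a placeholder, not a real endpoint.
import { connect } from "@dagger.io/dagger";

connect(async (client) => {
  // Build the image using the Dockerfile in the project root
  const image = client.host().directory(".").dockerBuild();

  // Push the built image to a registry and print the published reference
  const ref = await image.publish("registry.example.com/my-app:latest");
  console.log(`Published: ${ref}`);
});
```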

So that was great; I had a pipeline that any developer could run locally and it would build and push an image to the registry. With that, I could remove the docker steps from my GitHub action and replace them with Dagger ones. But is that any better?

OK, perhaps my application wasn't complex enough to benefit from the power of Dagger; certainly I had deferred deploying the image with GitHub Actions to another day. The inability to try out GitHub Actions code (particularly when there is no manual trigger configured) before pushing to main makes developing such pipelines very error-prone, so being able to run the same steps locally is definitely a benefit. However, when using Dagger you still need to set up your CI runner (checking out the code, installing Dagger, configuring identities), and you're not taking full advantage of platform features such as reporting, caching and parallelism.

In steps Dagger Cloud: the paid-for cloud service which gives an overview of all of your Dagger activity. Setting up an account requires payment details, so I didn't complete this step myself, but it gives you an API token which you add to your build environments. Once this is added, your logs are visible in the dashboard and you can use a distributed cache to share build-step results between any Dagger engine. It is not clear whether there are any limits or costs associated with this cache at this time.

The service is still under heavy development, so the prices have not been finalised, but at the time of writing, Early Access costs $0.05 per minute of Dagger-engine time. This means that any Dagger pipeline which has been provided the API key costs $0.05 for each minute it is running. This could get quite expensive if you either have builds running very often or have long-running builds. Also, this is a cost on top of what you might be paying for the service your CI runs on, be that GitHub Actions, Azure DevOps, AWS CodeBuild or similar. Where this might make more sense is where you already have fixed build infrastructure, for example a build farm in Kubernetes or EC2, and you want to add visibility over your builds. Another benefit is that you can mix and match your CI compute and still have a single location to monitor all builds.
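To put some rough numbers on that, here is a back-of-envelope estimate; the build frequency and duration below are assumptions for illustration, not measured figures.

```javascript
// Back-of-envelope Dagger Cloud cost estimate at $0.05 per engine-minute.
// Build frequency and duration are illustrative assumptions.
const ratePerMinute = 0.05;
const buildsPerDay = 20;
const minutesPerBuild = 5;
const daysPerMonth = 22; // working days

const monthlyCost = ratePerMinute * buildsPerDay * minutesPerBuild * daysPerMonth;
console.log(`~$${monthlyCost.toFixed(2)} per month`); // ~$110.00 per month
```

Even a modestly busy team could therefore see a three-figure monthly bill before the cost of the underlying CI compute.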

The distributed nature of Dagger did present one further challenge: build versioning. All CI systems I have ever used have the concept of a build number: an auto-incrementing version number that can reference an individual build. This is often useful to identify which version is deployed and whether another version is newer or older than that one. Looking through the API, it didn't appear that Dagger could support such human-parsable, ordered IDs, so version numbers would have to be linked to Git commit hashes or time-ordered UUIDs.
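One possible workaround is to derive a sortable version string from the commit itself, combining the commit's UTC timestamp with its short hash. The helper below is a sketch of that idea; the function name, date and hash are made up for illustration.

```javascript
// Sketch: derive a sortable version string from a Git commit's
// timestamp and short hash, so string order matches time order.
function buildVersion(commitDate, shortSha) {
  // Format the UTC timestamp as YYYYMMDDHHMMSS
  const stamp = commitDate.toISOString().replace(/[-:TZ]/g, "").slice(0, 14);
  return `${stamp}-${shortSha}`;
}

// Illustrative commit date and hash
const version = buildVersion(new Date("2024-02-01T09:30:00Z"), "a1b2c3d");
console.log(version); // 20240201093000-a1b2c3d
```

This gives you version strings that sort chronologically and still link back to an exact commit, though unlike a build number they are not dense or sequential.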

In general, Dagger is an interesting project, looking for a space in a market which is rapidly evolving. Pricing for the Cloud service will change as the project matures and it is cheap enough to have a play around with, as long as your pipelines are not too active or time-consuming. It is also quite possible to use Dagger without the Cloud service. It would be interesting to see a CI service offer native Dagger support, but this might not be in the interests of the major players.

Dagger excels at providing a way to declare your pipelines in a familiar coding language, and allowing developers to run pipelines locally. It may also be a good fit if you have a fixed build farm and want to standardise builds and deployments with containers.

Have you tried Dagger? Is it something that could help with your CI problems? Get in touch and let us know!
