How Atomist Continually Improves its Delivery Experience
We use Atomist to develop Atomist (of course!) and we want to share some of how it works so that other developers can learn from our hard-won lessons, and hopefully avoid some of our mistakes. We’re also keen to hear from you about nifty techniques you’ve discovered that make your own developer experience better, so please let us know!
When you use an Atomist Software Delivery Machine (SDM) you begin by defining a set of delivery goals. Goals represent the best practices that teams use for all of their current projects. They also represent the best practices that teams will use on their next set of projects, so that you always start with the state-of-the-art. It’s a great way to ensure you don’t forget the lessons of the past.
Our best practices are constantly evolving. Today we use a different set of tools, deployment targets, and processes than we used a year ago. However, we still maintain just one set of delivery goals that we use for all our projects. This way, any changes we make affect all our projects, so all of them benefit.
In essence, we deliver all of our services using the set of goals that we consider to represent our best practices right now. As a company, we have attempted to remove the question of whether a service project was “keeping up” by removing the possibility of falling behind.
What A Goal Set Looks Like
Goals are defined in code, and specify a set of actions to take when certain events occur, like this:
whenPushSatisfies(IsLein,
    not(HasTravisFile),
    HasAtomistFile,
    HasAtomistDockerfile,
    MaterialChangeToClojureRepo)
    .itMeans("Build a Clojure Service with Leiningen")
    .setGoals(LeinDockerGoals)
This tells Atomist: when a code push happens, check whether it satisfies a set of conditions. If the repo is managed by Leiningen, has no Travis file, does have an Atomist file and a Dockerfile, and the push is a material change to a Clojure repo, then we should build a Clojure service with Leiningen.
How do we do that? Using a set of goals.
In this case, our plan involves a set of goals, referenced above as LeinDockerGoals.
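Our real LeinDockerGoals definition carries more goals than fit here, but a minimal sketch of how such a goal set can be composed looks roughly like this (the individual goal instances, such as autofix and dockerBuild, are assumed to be defined elsewhere in the SDM):

import { goals, Goals } from "@atomist/sdm";

// A sketch only: the goal instances named here are illustrative, and the
// real goal set contains more steps than shown.
const LeinDockerGoals: Goals = goals("lein-docker")
    .plan(autofix, version)
    .plan(build).after(autofix, version)
    .plan(dockerBuild).after(build)
    .plan(tag).after(dockerBuild);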
Goals themselves are also code. Here is an example goal for autofixing code that doesn’t adhere to our code style conventions:
autofix.with({
    name: "cljformat",
    transform: async p => {
        await clj.cljfmt((p as GitProject).baseDir);
        return p;
    },
    pushTest: allSatisfied(IsLein, not(HasTravisFile), ToDefaultBranch),
});
Let’s look at goals in more detail.
Our Goals In Detail
We’ve implemented the Atomist service as a set of microservices, written in Clojure, and we have one set of delivery goals covering all of them. The actual set of services changes over time, but they’re all delivered the same way.
- Autofix goal: Every push runs the cljfmt Clojure code-formatting tool. To help code reviews and make for clean diffs, we want all of our code formatted using standard, team-wide conventions. If the goal finds style violations, it fixes them and pushes a commit to the branch. I still care a little about how my IDE formats code, but less than I used to; it’s going to get cleaned up anyway.
We also have an autofix that ensures all our logging config files declare a minimal set of structured-logging fields, keeping distributed logging consistent across our microservices (this one is sketched after this list). Sometimes we change the minimal set. It’s not a problem.
- Versioning goal: All of our libraries and Docker images use SemVer-compliant versions that are stored in the Leiningen project.clj file (Leiningen is our build tool). For non-master builds we add the suffix ${branch}-${timestamp}, and for master builds we simply add the suffix ${timestamp} (also sketched after this list). We don’t increment the patch, minor, or major revisions during a build. We might change our minds about this. It will be contentious.
- Build goal: We use Leiningen as our standard build tool. Our builds all rely on a standard set of secrets encrypted into this goal. Each project can also encrypt and store its own secrets directly in the repo, although none of our projects currently needs this capability.
- Tag goal: All successful builds are tagged in our source code management system (we use GitHub). Failed builds are not tagged; our bot complains in the project’s Slack channel, and that has been sufficient. We choose not to tag our shame.
- Docker build goal: We deploy code in Docker images. Our Docker build goal has a standard image-naming policy. We have one “target” base image, and the Atomist Software Delivery Machine nudges projects to adopt it, but each project is allowed to choose when to upgrade.
All of our Docker images are built using a layering strategy that keeps the rapidly changing parts (our code) in small top-level layers, while the slower-changing libraries (all of our dependencies) live in a much lower layer. This was a huge win, and we’ve used the technique in every service project. It was hard-won, but really pays off once you get it right. We feel really bad for developers who don’t know about this technique.

- Publish goal: Our Publish goal always calls the deploy task from each project’s local Leiningen project file. This goal uses our bot (the conversational side of an SDM) to notice newly published versions and then ask developers on consuming projects whether Atomist should raise a PR with the new version. This makes the bot look quite smart, but it’s really just asking the right question of the right people at the right time.
- UpdateStagingK8Specs goal: All new Docker images are pushed to our Artifactory Docker registry. If an image was built from a push to the master branch, we update the corresponding Kubernetes deployment spec in our staging cluster (sketched after this list). We treat the lifecycles of the different parts of a Kubernetes deployment spec very differently. You have to get this right, but only once.
- DeployToStaging goal: This goal is triggered by a Kubernetes Pod becoming “Ready”, which means that our new image has been scheduled and is ready to start serving traffic, handling events, being tested, and so on. It also means that this goal can finally run tests against live URLs!
- DeployToProd goal: This is very similar to our DeployToStaging goal. However, part of this goal’s plan involves asking the bot to place an “actionable” message in the service’s channel to confirm that we really want production traffic routed to this version of the service. Sometimes goals become more interactive as you get closer to production traffic!

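A few of the goals above are easy to sketch in SDM code. The logging-config autofix, for instance, follows the same transform pattern as the cljformat example; the file path, field names, and JSON shape below are illustrative assumptions, not our actual configuration format:

autofix.with({
    name: "logging-fields",
    transform: async p => {
        // Hypothetical location and shape of a service's logging config.
        const configFile = await p.getFile("resources/logging.json");
        if (configFile) {
            const config = JSON.parse(await configFile.getContent());
            // The minimal field set we want in every service's structured logs
            // (illustrative names).
            const required = ["service", "version", "host", "level"];
            config.fields = Array.from(new Set([...(config.fields || []), ...required]));
            await configFile.setContent(JSON.stringify(config, undefined, 2));
        }
        return p;
    },
    pushTest: allSatisfied(IsLein, not(HasTravisFile), ToDefaultBranch),
});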
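The versioning suffix scheme is simple enough to sketch directly. The helper name and timestamp format here are assumptions; the real goal reads the base version from project.clj:

// Illustrative helper for the suffix scheme described above.
function buildVersion(baseVersion: string, branch: string): string {
    // Assumed timestamp format, e.g. "20180921143000".
    const timestamp = new Date().toISOString().replace(/\D/g, "").slice(0, 14);
    return branch === "master"
        ? `${baseVersion}-${timestamp}`
        : `${baseVersion}-${branch}-${timestamp}`;
}

// buildVersion("0.3.1", "master")     -> "0.3.1-20180921143000"
// buildVersion("0.3.1", "my-branch")  -> "0.3.1-my-branch-20180921143000"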
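And here is a sketch of the kind of update the UpdateStagingK8Specs goal performs. The repo layout, spec path, and helper name are assumptions for illustration; the point is that only the image reference changes on every push, while the rest of the spec has its own, slower lifecycle:

import { Project } from "@atomist/automation-client";

// Illustrative: bump the image of a service's staging deployment spec.
async function updateStagingSpec(specRepo: Project, service: string, image: string): Promise<void> {
    // Hypothetical path to the service's deployment spec in the spec repo.
    const specFile = await specRepo.getFile(`staging/${service}/deployment.json`);
    if (specFile) {
        const spec = JSON.parse(await specFile.getContent());
        // Only the container image changes per push to master.
        spec.spec.template.spec.containers[0].image = image;
        await specFile.setContent(JSON.stringify(spec, undefined, 2));
    }
}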
We don’t really consider these goals to form a “pipeline”, though they are a set of actions planned as a consequence of a push. When you watch them occur from a Slack channel, they certainly look like a pipeline, but there are some important differences:
- New projects do not require any configuration. They start with these practices.
- When we plan a new goal, we do it in one place and update all of our pipelines.
- What does it mean for a pipeline to end? We still have goals for running services, because running services continue to emit actionable data. The closest thing to an “end” occurs when there are no longer any running pods containing that version of the software. But we actually have a plan for that too.
This was surprising
We kind of knew this was going to be a great way to develop; it’s a big part of why we felt software delivery needed to break free from project-specific pipelines. But we expected to require more project-specific “hooks” than we’ve ended up needing in practice.
I think this is because we’ve ended up tackling problems with a different mindset.
We build goals for all of our projects. We roll out tools to the entire team. When a tool provides great new data, we focus on learning how to use the new data, instead of learning how to use the new tool. Only one person has to learn how to use it, but everyone else in the team benefits from what they learn. We bring to each problem our best engineering selves.
This is what I mean by my earlier statement:
We have attempted to remove the question of whether a project was “keeping up” by removing the possibility of it falling behind.
Projects should not be barriers for the spread of good delivery practices.
Try It For Yourself!
If you’re interested in trying out any of our methodologies, we have open-source SDMs for you to try. It should be easy to share best practices across your teams too, and we’ve got examples ready to go (Spring Boot, Node, etc.).
And if you’re delivering Clojure services, you should definitely DM me!