Data-Driven Delivery Orchestration with Containers
Automating development and delivery just got a lot easier!
With the recent release of SDM 1.6, we made getting started with your own SDM easier than ever. Two features in particular make declaring delivery goals and the relationships between them simpler and clearer. The first feature is the ability to return your goal sets as a JavaScript object when configuring your SDM. The second feature is using Docker containers to execute your SDM goals. Combining these two features makes setting up your own software-defined delivery as easy as finding the right container. Let's take a look at some examples.
Your first container goal
Here is an example of a goal set named node with a single goal. That goal, named node-test, is executed using the Docker image node:12.4.0, available from Docker Hub, and runs the command sh -c "npm install && npm test", setting the NODE_ENV environment variable to development so that both regular and development dependencies get installed.
import { configure, container } from "@atomist/sdm-core";

export const configuration = configure(async sdm => {
    return {
        node: { // <== this is the start of the "node" goal set
            goals: [ // <== the start of the goals in this goal set
                container("node-test", { // <== the container goal
                    containers: [{
                        args: ["sh", "-c", "npm install && npm test"],
                        env: [{ name: "NODE_ENV", value: "development" }],
                        image: "node:12.4.0", // <== the Docker image
                        name: "npm",
                    }],
                }),
            ],
        },
    };
});
That is all the code you need to start an SDM that will test Node.js projects when new commits are pushed!
If you already have an SDM, you can start adding container goals to it: you are free to mix and match container and non-container goals within the same goal set. For example, if you want to always run npm audit fix before testing your project, you can add the npmAuditAutofix from the Node.js SDM pack as an autofix goal to the above goal set.
import { Autofix, hasFile } from "@atomist/sdm";
import { configure, container } from "@atomist/sdm-core";
import { npmAuditAutofix } from "@atomist/sdm-pack-node";

export const configuration = configure(async sdm => {
    return {
        node: {
            test: hasFile("package.json"),
            goals: [
                new Autofix().with(npmAuditAutofix()), // <== non-container goal
                container("node-test", { // <== container goal
                    containers: [{
                        args: ["sh", "-c", "npm install && npm test"],
                        env: [{ name: "NODE_ENV", value: "development" }],
                        image: "node:12.4.0",
                        name: "npm",
                    }],
                }),
            ],
        },
    };
});
In the above goal set, the autofix goal is executed first, followed by the node-test container goal. If the autofix goal makes a change, it terminates its goal set and pushes a new commit, which triggers execution of the goal set on the new commit, running the autofix and node-test again.
Parallel goals and push tests
That was a nice start, but typically your delivery process is a little more complicated than one or two goals. Perhaps you have to support multiple technology stacks, or multiple versions of a runtime, or both. In the example below, we have two goal sets, one for Node.js and one for the JVM. We added a "push test" to each goal set, testing for a technology-specific file so that the right goals get scheduled for the right repositories. Then, for each technology, we map an array of version strings into an array of container goals. We can do this because Atomist uses real code to define your delivery, not a static data file that forces you to copy and paste or resort to some hacky templating or pseudo-variable solution.
import { hasFile } from "@atomist/sdm";
import { configure, container } from "@atomist/sdm-core";

export const configuration = configure(async sdm => {
    return {
        node: { // <== Node.js goal set
            test: hasFile("package.json"), // <== NPM push test
            goals: [ // <== elements of this array are executed serially
                // map an array of versions into an array of goals;
                // elements of this inner array are executed in parallel
                ["8.16.0", "10.16.0", "11.15.0", "12.4.0"].map(v => container(
                    `node${v.replace(/\..*/g, "")}`, // make sure goal names are unique and valid
                    {
                        containers: [{
                            args: ["sh", "-c", "npm install && npm test"],
                            env: [{ name: "NODE_ENV", value: "development" }],
                            image: `node:${v}`,
                            name: "npm",
                        }],
                    },
                )),
            ],
        },
        jvm: { // <== JVM goal set
            test: hasFile("pom.xml"), // <== Maven push test
            goals: [
                ["8", "11", "12"].map(v => container(`mvn${v}`, {
                    containers: [{
                        args: ["mvn", "test"],
                        image: `maven:3.6.1-jdk-${v}`,
                        name: "maven",
                    }],
                })),
            ],
        },
    };
});
When a repository with a package.json file receives new commits, four containers spin up, each with a different version of Node.js, installing dependencies and running tests. Those four containers run in parallel. Why? Because the goals property is an array of goals and/or arrays of goals. In the latter case, all goals in the same inner array are executed in parallel. In either case, goals that come later in the outer array depend on all goals that come before them in the outer array.
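To make that ordering rule concrete, here is a minimal sketch in plain TypeScript. This is not the SDM scheduler itself, just an illustration of how a goals array normalizes into serial stages of parallel goals:

```typescript
// Minimal sketch of the ordering rule, not the SDM scheduler itself:
// the outer array is serial; an inner array is one parallel stage.
type Goal = string;

function stages(goals: Array<Goal | Goal[]>): Goal[][] {
    // Normalize each entry to a batch; batches run one after another,
    // and goals within a batch run at the same time.
    return goals.map(g => (Array.isArray(g) ? g : [g]));
}

const order = stages([
    "autofix",                      // stage 1: runs first
    ["node8", "node10", "node12"],  // stage 2: these three run in parallel
    "docker-build",                 // stage 3: waits for all of stage 2
]);
console.log(JSON.stringify(order));
// → [["autofix"],["node8","node10","node12"],["docker-build"]]
```

Each inner array becomes one stage, and a later stage starts only after every goal in the previous stage succeeds.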
Similarly, when a repository with a pom.xml file receives new commits, three containers are spun up in parallel, each with a different version of the JVM, and run mvn test.
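As a quick check on the goal-naming scheme in the Node.js goal set above, the regular expression strips everything from the first dot onward, so each version string maps to a short, unique goal name:

```typescript
// Derive unique, valid goal names from version strings, as in the
// Node.js goal set above: strip everything from the first dot onward.
const versions = ["8.16.0", "10.16.0", "11.15.0", "12.4.0"];
const names = versions.map(v => `node${v.replace(/\..*/g, "")}`);
console.log(names.join(","));
// → node8,node10,node11,node12
```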
Caching
Parallel steps are cool, but sometimes you need steps to run serially, with the output of an earlier step available to a later one. If so, this is your lucky day! Container goals allow you to specify "output" and "input": the "output" is cached for later goals, and the "input" declares which previous goals' "output" a goal needs. In this example, we build a Node.js project just like in our first example, but we add an output property to the node-test goal, caching the node_modules directory under the classifier "node-modules". Then, in the docker goal set, which depends on the node goal set and therefore only runs if all the goals in the node goal set complete successfully, the kaniko container specifies an input property, an array of cache classifiers to restore before running the goal; in this case, just "node-modules".
import { hasFile } from "@atomist/sdm";
import { CompressingGoalCache, configure, container } from "@atomist/sdm-core";
import * as os from "os";
import * as path from "path";

export const configuration = configure(async sdm => {
    sdm.configuration.sdm.cache = { // set the SDM cache configuration
        enabled: true,
        path: path.join(os.homedir(), ".atomist", "cache", "container"),
        store: new CompressingGoalCache(),
    };
    return {
        node: {
            test: hasFile("package.json"),
            goals: [
                container("node-test", {
                    containers: [{
                        args: ["sh", "-c", "npm install && npm test"],
                        env: [{ name: "NODE_ENV", value: "development" }],
                        image: "node:12.4.0",
                        name: "npm",
                    }],
                    output: [{ // <== the goal output we want to cache
                        classifier: "node-modules",
                        pattern: { directory: "node_modules" },
                    }],
                }),
            ],
        },
        docker: {
            test: hasFile("Dockerfile"),
            dependsOn: ["node"], // <== this goal set depends on the "node" goal set
            goals: [
                container("kaniko", {
                    containers: [{
                        args: [
                            "--context=dir://atm/home",
                            "--destination=atomist/samples:0.1.0",
                            "--dockerfile=Dockerfile",
                            "--no-push",
                            "--single-snapshot",
                        ],
                        image: "gcr.io/kaniko-project/executor:v0.10.0",
                        name: "kaniko",
                    }],
                    input: ["node-modules"], // <== the classifiers to restore before running this goal
                }),
            ],
        },
    };
});
At the start of the configure callback, we provide a simple cache configuration. We make sure caching is enabled and use the CompressingGoalCache, which creates a compressed tar file and, by default, stores it on the local file system. We also provide a path, which it uses as the root directory for its cache files.
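To picture what that configuration implies, here is a hypothetical sketch of how a classifier-keyed cache entry could map to an archive under the configured root. The layout shown is an assumption for illustration; the real CompressingGoalCache may organize its files differently:

```typescript
import * as os from "os";
import * as path from "path";

// Hypothetical layout sketch: one compressed archive per (classifier, commit),
// stored under the configured cache root. Not the actual implementation.
function cacheArchivePath(root: string, classifier: string, sha: string): string {
    return path.join(root, classifier, `${sha}.tar.gz`);
}

const root = path.join(os.homedir(), ".atomist", "cache", "container");
console.log(cacheArchivePath(root, "node-modules", "0a1b2c3"));
```

The point is simply that the classifier is the lookup key: a goal's output is stored under it, and a later goal's input restores by it.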
Give Me Code (Give Me Peace on Earth)*
Static goals are great, but sometimes you need the flexibility of code. Lucky for you, you're dealing with Atomist, and we always make sure you have the power of code at your fingertips. You may have noticed that in the previous example the kaniko goal always builds the same image: atomist/samples:0.1.0. That is less than ideal. Focusing on the kaniko goal, we can provide a function to modify the container information via the callback property. This function is called right before the container is scheduled, and it has access to details about the goal registration, the project being built, the container goal, the event that triggered the build, and a context that provides information like the Atomist workspace ID, message client, and GraphQL client. In the example below, we only need the container goal registration and project. We use the project information to determine the Docker image name and then set the kaniko destination in the container's args array. Once the container information is built, we return it.
export const configuration = configure(async sdm => {
    // …
    return {
        // …
        docker: {
            test: hasFile("Dockerfile"),
            dependsOn: ["node"],
            goals: [
                container("kaniko", {
                    callback: async (reg, proj) => { // <== goal registration and project information
                        // calculate a valid, lowercase Docker image name from project information
                        const safeOwner = proj.id.owner.toLowerCase().replace(/[^a-z0-9]+/g, "");
                        const dest = `${safeOwner}/${proj.id.repo}:${proj.id.sha}`;
                        return { // <== return the container information
                            containers: [{
                                args: [
                                    "--context=dir://atm/home",
                                    `--destination=${dest}`,
                                    "--dockerfile=Dockerfile",
                                    "--no-push",
                                    "--single-snapshot",
                                ],
                                image: "gcr.io/kaniko-project/executor:v0.10.0",
                                name: "kaniko",
                            }],
                        };
                    },
                    containers: [], // <== the callback will provide the containers
                    input: ["node-modules"],
                }),
            ],
        },
    };
});
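The image-name calculation in the callback is ordinary code, so it is easy to pull out and exercise on its own. This sketch assumes the same owner/repo/sha fields shown above; note that Docker repository names must be lowercase, which is why the owner is lowercased before stripping any remaining invalid characters:

```typescript
// The image-name calculation from the callback, extracted as a plain
// function. Docker repository names must be lowercase, so lowercase the
// owner first, then strip anything that is still not [a-z0-9].
function imageDestination(owner: string, repo: string, sha: string): string {
    const safeOwner = owner.toLowerCase().replace(/[^a-z0-9]+/g, "");
    return `${safeOwner}/${repo}:${sha}`;
}

console.log(imageDestination("My-Org", "samples", "abc1234"));
// → myorg/samples:abc1234
```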
Where are the containers running?
At this point you may be saying, "Looks great, but how does it work?" The answer is, "It depends." Out of the box, the SDM framework can run the containers using a Docker daemon or Kubernetes. When scheduling a container goal, the SDM checks to see if it is running in Kubernetes. If so, it creates a Kubernetes job to execute the goal. If it is not running in Kubernetes, it checks to see if it has access to a Docker daemon, either through normal Docker environment variables or Docker socket. If Docker is available, it uses the Docker CLI to schedule the job. In both cases, it executes the job just like a normal goal, capturing the output from the job in the goal logs and responding appropriately upon success or failure.
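The decision described above boils down to a simple preference order. Here is a minimal sketch of that logic; the inputs are assumptions standing in for the framework's own environment checks, not its actual API:

```typescript
// Sketch of the scheduler selection described above. The boolean inputs
// stand in for the framework's own environment detection.
type Scheduler = "kubernetes-job" | "docker-cli" | "unavailable";

function chooseScheduler(opts: { inKubernetes: boolean; dockerAvailable: boolean }): Scheduler {
    if (opts.inKubernetes) return "kubernetes-job"; // in-cluster: create a Kubernetes job
    if (opts.dockerAvailable) return "docker-cli";  // local daemon: shell out to docker
    return "unavailable";                           // no way to run the container
}

console.log(chooseScheduler({ inKubernetes: false, dockerAvailable: true }));
// → docker-cli
```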
Container goals provide a completely portable interface for executing your delivery process as containers
In other words, the container goal provides a technology-independent way to define what images you want to run and the SDM framework takes care of scheduling and monitoring it. You can move from local Docker daemon to production Kubernetes cluster without changing a single line of code! What's more, you can implement your own container scheduler if you would prefer to have your containers execute on, for example, AWS ECS or GCP Cloud Build.
Don't just sit there!
You can test out all these examples on your laptop. Sign up with Atomist and make sure you have access to a Docker daemon, then follow the instructions in the Atomist samples repository and try out one of the lib/sdm/container samples.