The Future of Software Delivery is Code. And It’s Here
At Atomist, we think it’s time to put the development in DevOps — both software delivery and automation. Today, we’re making our API for software available to developers everywhere, in open source.
Within 5 minutes, you’ll be able to use Atomist open source on your laptop to perform tasks like creating new projects, reacting to code changes across all projects, and automatically fixing problems on any commit. All you need is Node and git.
We believe that a new approach to delivery and automation is desperately needed, and deserves a community, not just a product.
Why Do We Need an API for Software?
The application code we write has improved enormously over the last 10–15 years, due to the progressive achievements of frameworks like Spring, Rails, Node and Spring Boot. Our deployment experience in the era of Kubernetes and Cloud Foundry is a world away from the horrors of WebSphere. However, the all-important gap between the two — how our software gets delivered to production — hasn’t kept pace. Nor do we automate enough of our daily activities.
The rise of cloud native deployment, along with the breakup of the traditional monolith, has made the problems with the status quo increasingly obvious. More projects mean more delivery pipelines, bringing massive duplication and difficulty evolving delivery policies. Project proliferation exposes lack of automation: for example, in creating projects in a consistent manner, keeping projects up to date and staying informed about what’s happening across them.
It’s time for a rethink: for a strategic, rather than tactical, approach to software delivery and automation. We believe that three big ideas can change our delivery and automation experience for the better:
- Rethink the traditional model of one delivery pipeline per repo, replacing it with an event hub. One pipeline per repo made sense when we had a small number of large applications, each with a complex, special build. Today, it’s common to have hundreds or even thousands of applications whose delivery flows have much in common. One pipeline per repo is the wrong model of our intent. With an event hub, activity on repositories results in events that we can handle centrally, allowing teams to express and evolve delivery policies in a single place.
- Provide a rich, correlated model for our projects and activity around them. Such a model enables us to treat CD not as something that’s hacked together by stretching CI technologies beyond their comfort zone, but as an important special case of a general solution for automating the key things we do. It enables a joined-up approach that builds out consistent concepts, rather than an agglomeration of local hacks like Bash scripts. There are many things beyond CD that we should automate, like creating projects in a consistent manner, keeping dependencies and configuration up to date, and notifying team members and external systems of changes they’re likely to care about. An API for software, spanning code and events on it, enables teams to use their core skills to develop their development experience. We don’t develop our business applications via one-off hacks; we shouldn’t deliver them to production that way, either.
- Embrace the power of modern programming languages and define delivery and automations in code. Real code, not an unnatural hybrid. Ever since the days of makefiles, we’ve tended to express build and delivery steps outside of code. As programmers, we’ve acted as though delivery wasn’t our job, not something worthy of our core skills. But it is. Every company is in the software delivery business. Non-code delivery definitions made sense when they expressed simple builds as an alternative to a low-level language like C; YAML and Bash make little sense when they express complex delivery flows as an alternative to modern programming languages with rich libraries and modules. By failing to question this approach, we’ve sleepwalked into bizarre things like defining variables in YAML. Even if we’re content to ignore OOP and FP, there are good imperative programming languages. YAML is not among them, and Bash was last near the cutting edge before most of us were born. Traditional CI has its place, but it has failed to grow into something truly strategic and important. Without a compelling API or model, we don’t even try to automate many things that we should, to increase velocity and avoid errors.
We believe that these ideas amount to a vastly superior way forward for software delivery. We hope that the Atomist service will provide the best embodiment of them, but these ideas are bigger than any one service or product. Fortunately there’s a way of expressing ideas bigger than any one company or product: open source.
In the bad old days of WebSphere, big changes started at the golf course. Today, they often start on developer laptops, expressed in code. You can start working in this new way, on your laptop, with the open source Local Software Delivery Machine.
Today we’re announcing that Atomist’s next generation software delivery approach is available, free and open source.
What This Enables
Atomist local enables you to automate many important things, including:
- Creating projects consistently across a range of stacks
- Reacting to code changes, including performing code reviews, across many projects
- Automating code changes across many projects
- Enforcing policies across many projects
- Achieving policy-based CD, including deployment to Cloud Foundry, on your local machine
For a taste of what this looks like in practice, let’s see how a software delivery machine can help ensure that code is in a desired state across many repos. Following any commit, an autofix makes an additional commit on the same branch, if one is needed, to bring the code back into the desired state.
This example uses Atomist’s API to ensure that a file named filecount.md will always contain an up-to-date count of Java files in the source tree:
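A minimal sketch of the core logic such an autofix could run, written as plain, unit-testable TypeScript. The function names here are illustrative, not the actual Atomist SDM API: the real autofix would receive a project abstraction rather than a list of paths.

```typescript
// Render the desired content of filecount.md from the project's file paths.
// (Hypothetical helper; a real SDM autofix would walk the project instead.)
function fileCountMarkdown(paths: string[]): string {
    const javaFiles = paths.filter(p => p.endsWith(".java")).length;
    return `# File count\n\nJava files: ${javaFiles}\n`;
}

// Decide whether a fixing commit is needed: the file is missing or stale.
function needsFix(current: string | undefined, paths: string[]): boolean {
    return current !== fileCountMarkdown(paths);
}
```

In a real software delivery machine, logic like this is registered as an autofix so that it runs after every commit, writing filecount.md and committing only when the content has drifted.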
The code is written in TypeScript and runs on Node, using Atomist’s API. It benefits from excellent tooling and a rich module system. It’s unit testable. In short, it’s everything YAML and Bash are not.
After each commit, Atomist will make a further commit to the same branch if necessary to maintain the invariant. Here’s a git log resulting from the autofix running after a project has been created without the filecount file:
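Assuming a freshly generated project whose first commit lacks the filecount file, the log would have this shape (hashes and commit messages here are illustrative):

```
$ git log --oneline
b2f1e3d (HEAD -> master) Autofix: add filecount.md
a81c9f0 Initial commit from project generator
```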
This autofix mechanism can help keep errors out of our codebases and save a ton of manual work.
At Atomist, we use many autofixes in our own software delivery, for automatic linting, adding and maintaining license files, and compiling legal information. The latter solves an important business problem for us. Atomist has a contractual requirement to our enterprise customers to keep dependency information up to date. With our own SDM instance, one small piece of code enforces this across all existing projects and any new ones we may create. We can evolve all these policies in one place, without having to touch many delivery pipelines.
All you need is Node and git. You’ll be able to experiment with the API in under 5 minutes.
A software delivery machine can help with your personal workflow, but it’s even more useful when its consistency and power applies to your whole team. For this, run your code unchanged against the Atomist service and have your generators and policies available across your team.
Seeing the Spring community grow and achieve amazing things over the last 15 years has been the most satisfying experience of my career. The power of developer communities and a code-centric approach has reshaped how we write applications and how we deploy them. Let’s apply that power to a new approach to software delivery, and see what happens when we unleash developer creativity in real programming languages on our shared daily problems.
I’m excited about what we can build together.