TL;DR Index your deployed images by the OSS packages they contain.

Zero-day exploits represent new information. When these exploits are revealed, we're forced to reassess our current systems. In an ideal world, what data would we have available?

For example, when a zero day like Log4Shell occurs, we have to re-evaluate what we already have. Are we using log4j? Where are we using it? Is it a vulnerable version? Perhaps you are packaging applications into container images, and then running those images on cloud infrastructure. If so, a breakdown by image, by deployment environment, and by impacted log4j version could provide useful context. Ideally, this is not data that we have to spend time tracking down.

We can't predict where the next vulnerability is going to be (or perhaps more accurately, where it already is). Instead, we have to prepare for what we don't yet know.

Reacting to new information

Here, it's useful to think in hypotheticals. What would become true if I were to learn X? Or, to frame it in our current domain, how would I be vulnerable if I were to learn of this exploit?

At Atomist, we ingest facts. As a simple illustration, consider this set (sketched as code after the list).

  • ContainerImage X contains-package Y
  • Y is-named log4j
  • Y has-version 2.12.1
  • X was-built-from-commit-sha 3185891debe6ca92e27254bad61c7b81b2bfadd3
  • X is-used-by-deployment A
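
A minimal sketch of this representation in Python, assuming simple subject-predicate-object triples (the entity and relation names here are illustrative, not Atomist's actual schema):

```python
# Facts as (subject, predicate, object) triples. Names are illustrative,
# not Atomist's actual schema.
facts = {
    ("image:X", "contains-package", "pkg:Y"),
    ("pkg:Y", "is-named", "log4j"),
    ("pkg:Y", "has-version", "2.12.1"),
    ("image:X", "was-built-from-commit-sha",
     "3185891debe6ca92e27254bad61c7b81b2bfadd3"),
    ("image:X", "is-used-by-deployment", "deployment:A"),
}
```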

This is knowledge that we are always collecting by watching disparate systems such as Git, container registries, and Kubernetes clusters. This is the knowledge base that grounds our ability to put new information in context. We future-proof these data by making sure that we're always ready to insert new data. For example, one day we learn two new facts.

  • Z is-security-advisory-for log4j
  • Z has-version-range < 2.15.0

So we insert these facts, and then shake the tree to see if anything new falls out. Ultimately, you want to keep track of the following list:

All current deployments that contain matching packages from a security advisory
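
To make the tree-shaking concrete, here is a minimal sketch in plain Python (not Atomist's actual query language) that joins the facts above with the new advisory to produce exactly that list, using the `packaging` library for the version-range check:

```python
from packaging.version import Version  # pip install packaging

def affected_deployments(facts, advisory_package, fixed_version):
    """Join the fact set with an advisory to find deployments running
    images that contain a vulnerable package. Hypothetical helper, not
    an Atomist API."""
    by_pred = {}
    for subj, pred, obj in facts:
        by_pred.setdefault(pred, []).append((subj, obj))

    # Packages whose name matches the advisory...
    named = {s for s, o in by_pred.get("is-named", []) if o == advisory_package}
    # ...and whose version falls inside the vulnerable range (< fixed_version).
    vulnerable = {
        s for s, o in by_pred.get("has-version", [])
        if s in named and Version(o) < Version(fixed_version)
    }
    # Images containing a vulnerable package...
    images = {s for s, o in by_pred.get("contains-package", []) if o in vulnerable}
    # ...and the deployments that use those images.
    return {o for s, o in by_pred.get("is-used-by-deployment", []) if s in images}

# `facts` is the triple set from the earlier sketch.
print(affected_deployments(facts, "log4j", "2.15.0"))  # {'deployment:A'}
```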

This list is always changing, and the change is itself a trigger for automation. The platform brings together:

  1. Integrations - maintain the facts from your devops-related systems, including new advisory data
  2. Subscriptions - programmable triggers for discovering and reacting to new information (see the sketch after this list)
  3. Developer Automation - containerized jobs that run whenever we detect novelty
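
To illustrate how novelty detection and triggers fit together, here is a toy fact store with programmable triggers. This is a sketch only; the class and method names are hypothetical, not Atomist's actual API:

```python
from typing import Callable

Fact = tuple[str, str, str]  # (subject, predicate, object)

class FactStore:
    """Toy fact store with programmable triggers; illustrative only."""

    def __init__(self) -> None:
        self.facts: set[Fact] = set()
        self.handlers: list[Callable[[Fact], None]] = []

    def subscribe(self, handler: Callable[[Fact], None]) -> None:
        """Register a trigger to run whenever a novel fact arrives."""
        self.handlers.append(handler)

    def insert(self, fact: Fact) -> None:
        """Insert a fact and, if it is new, shake the tree."""
        if fact not in self.facts:
            self.facts.add(fact)
            for handler in self.handlers:
                handler(fact)

store = FactStore()
# In practice, a handler like this would launch a containerized job.
store.subscribe(lambda fact: print("react to:", fact))
store.insert(("advisory:Z", "is-security-advisory-for", "log4j"))
```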

Our current focus is on devops-related automation because we believe that cloud native computing presents unique challenges. We'll describe a few of these challenges below.

Watching your existing process

Getting started is hard if you have to change your processes. Changing CI pipelines, for example, can be a real barrier. Instead, we can choose to focus on watching existing systems. Container registries are a good place to start: they offer an opportunity to bootstrap quickly, without requiring any other changes.

The data improve as we add more integrations. The Atomist GitHub application, for example, tracks vulnerabilities all the way back to the developer and creates opportunities for feedback loops that incorporate developer-facing workflows like GitHub Pull Requests and Check Runs.
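
As a rough sketch of what such a feedback loop can look like at the API level, the snippet below posts a finding as a GitHub Check Run against the offending commit. The owner, repository, and token are placeholders; in practice, an integration like the Atomist GitHub application handles this wiring for you:

```python
import requests  # pip install requests

OWNER, REPO, TOKEN = "my-org", "my-service", "ghs_..."  # placeholders

resp = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/check-runs",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "name": "vulnerability-scan",
        "head_sha": "3185891debe6ca92e27254bad61c7b81b2bfadd3",
        "status": "completed",
        "conclusion": "action_required",
        "output": {
            "title": "log4j 2.12.1 matches CVE-2021-44228",
            "summary": "Vulnerable version range: < 2.15.0. Upgrade log4j.",
        },
    },
)
resp.raise_for_status()
```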

In a very real sense, getting started is about enabling integrations with the systems you already have.

Data pertaining to your latest container images are already very useful, and can be generated with only one integration. Both of the following statements are useful, but the second is an improvement over the first.

All tagged Images that contain matching packages vulnerable to CVE-2021-44228

All Kubernetes Deployment specs containing Images with matching packages vulnerable to CVE-2021-44228

What separates these two is a Kubernetes integration that tracks which images are referenced by your currently active Deployment specs (sketched below). Our processes can improve incrementally.
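
As a sketch of the extra signal that integration provides, the snippet below uses the official Kubernetes Python client to collect every image referenced by currently active Deployment specs:

```python
from kubernetes import client, config  # pip install kubernetes

# Cluster credentials come from your local kubeconfig.
config.load_kube_config()
apps = client.AppsV1Api()

# Every image referenced by a currently active Deployment spec.
in_use = {
    container.image
    for deployment in apps.list_deployment_for_all_namespaces().items
    for container in deployment.spec.template.spec.containers
}
print(in_use)  # cross-reference against the vulnerable-image list
```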

Compliance, Trust, and Automation

We believe that data in the DevSecOps world should always be migrating towards the developer. Finding ways to pull data into developer workflows is crucial. However, we must also consider some of the unique characteristics of security incident workflows.

  1. Compliance - what do individual teams need to do to get started?
  2. Trust - how do we ensure that the data are accurate?
  3. Automation - remediation workflows are still ultimately delivery workflows

Consider your view of vulnerability data when using Atomist.

These views need to be bootstrapped and maintained. That's an ideal job for a service that lives outside of existing delivery pipelines, operating as a trusted agent. Atomist brings this pattern of uniform data collection to an open set of integrations (ingestion) and containerized workflows (actions).

As an illustration of this consistency, we recently began maintaining a public data set for the Official Image repositories (and other public registries) using exactly the same integrations that Atomist users can enable for their private registries. Check it out here, and explore your own data by signing up at Atomist.

Continuously track what you're already running. For the vulnerabilities you don't yet know about, the goal is to be prepared.