4 steps to DevSecOps in your software supply chain


Developers often want to do the “right” thing when it comes to security, but they don’t always know what that is. To help developers keep moving quickly while achieving better security outcomes, organizations are turning to DevSecOps.

DevSecOps is a mindset shift: making everyone involved in the application development lifecycle accountable for the security of the application, by continuously integrating security across your development process. In practice, this means shifting security reviews and testing left—i.e., moving from auditing or enforcing at deployment time to checking security controls earlier, at build or development time.

For code your developers write, that means providing feedback on issues during the development process, so the developer doesn’t lose their flow. But for the dependencies your code pulls in as part of your software supply chain, what should you do?

Let’s first define a dependency. A dependency is another binary that your software needs in order to run, specified as part of your application. Using a dependency allows you to leverage the power of open source, and to pull in code for functions that aren’t a core part of your application, or where you might not be an expert.

Dependencies often define your software supply chain. GitHub’s 2019 State of the Octoverse report showed just how many dependencies the average repository has. (Disclosure: I work for GitHub.) An upstream vulnerability in any one of those dependencies means you’re likely affected too.

The reality of the software supply chain is that you are dependent on code you didn’t write, yet the dependencies still require work from you for ongoing upkeep. So where should you get started in implementing security controls?

Declare your dependencies in code

When dependencies are declared explicitly in code, they are fully specified by build time, meaning build-time detection will contain more complete information.

To accurately detect dependencies in code—and to more easily control what dependencies you use—you’ll want to explicitly specify them as part of your application’s manifest file or lockfile, rather than vendoring them into a repository (forking a copy of a dependency as part of your project, aka copy-pasting it). Vendoring makes sense if you have a good reason to fork the code—for example, to modify or limit functionality for your organization—or to use this as a step to review dependencies (you know, actually tracking inputs from vendors). Some ecosystems also favor using vendoring.
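For instance, detecting declared dependencies can be a simple parse of the manifest. Here’s a minimal sketch in Python, assuming a pip-style requirements file with exact name==version pins (a simplifying assumption; real manifest formats support many more forms):

```python
import re

def parse_requirements(text: str) -> dict:
    """Parse a pip-style manifest into {package: pinned_version}.

    Illustrative only: handles exact `name==version` lines, skipping
    comments and blank lines, and ignores everything else.
    """
    deps = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        match = re.match(r"([A-Za-z0-9_.-]+)==([A-Za-z0-9_.-]+)$", line)
        if match:
            deps[match.group(1)] = match.group(2)
    return deps

manifest = """\
# production dependencies
requests==2.31.0
cryptography==42.0.5
"""
print(parse_requirements(manifest))
# -> {'requests': '2.31.0', 'cryptography': '42.0.5'}
```

Because the manifest is plain text in the repository, this kind of detection can run at build time (or in a pre-merge check) with no access to a running system.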

However, if you’re planning on using the upstream version, vendoring makes updating your dependencies harder. By specifying your dependencies explicitly, you make it easier for your development team to update them: an update requires only a single line of code in a manifest, rather than re-forking and copying a whole repository. In certain ecosystems, you can also use a lockfile to ensure consistency, so the version in your development environment is the same one in your production build, and review changes like any other code change.
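A lockfile-consistency check can be sketched in a few lines of Python. Here both the manifest and the lockfile are modeled as plain dicts with exact pins, a simplifying assumption (real tools resolve version ranges):

```python
def check_lock_consistency(manifest: dict, lockfile: dict) -> list:
    """Report where the lockfile disagrees with the declared manifest."""
    problems = []
    for name, wanted in manifest.items():
        locked = lockfile.get(name)
        if locked is None:
            problems.append(f"{name}: declared but not locked")
        elif locked != wanted:
            problems.append(f"{name}: manifest wants {wanted}, lock has {locked}")
    return problems

declared = {"requests": "2.31.0", "urllib3": "2.2.1"}
locked = {"requests": "2.31.0", "urllib3": "2.0.7"}
print(check_lock_consistency(declared, locked))
# -> ['urllib3: manifest wants 2.2.1, lock has 2.0.7']
```

Running a check like this in CI is one way to guarantee that development and production builds resolve to the same versions.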

Standardize on ‘golden’ packages 

You might already be familiar with the concept of “golden” images, which are maintained and sanctioned by your organization, including the latest security patches. This is a common concept for containers, to provide developers with a base image on which they can build their containers, without having to worry about the underlying OS. The idea here is to only have to maintain one set of OSes, managed by a central team, that you know have been reviewed for security issues and validated in your environment. Well, why not do that for other artifacts too? 

To supplement a unified CI/CD pipeline, you can provide a reference set of maintained artifacts and libraries. This is a pre-emptive security control: rather than verifying that a package is up to date once it’s been built, give your developers what they need as an input to their build.

For example, if multiple teams are using OpenSSL, you shouldn’t need every team to update it. If one team updates it (and there are sufficient tests in place!), then you should be able to change the default for all teams. This could be implemented by having a central internal package registry of your known good artifacts, which have already passed any security requirements, and have a clear owner responsible for updating if new versions are released.
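As a sketch of that enforcement (the package names, versions, and the golden set here are hypothetical), a build step could audit its inputs against the central registry’s approved versions:

```python
# Hypothetical sanctioned set: package -> the approved "golden" version,
# maintained by a central team with a clear owner per package.
GOLDEN = {"openssl": "3.0.13", "zlib": "1.3.1"}

def audit_against_golden(build_inputs: dict) -> list:
    """Flag build inputs that aren't the sanctioned golden version."""
    findings = []
    for name, version in build_inputs.items():
        approved = GOLDEN.get(name)
        if approved is None:
            findings.append(f"{name}: not in the golden set (needs review)")
        elif version != approved:
            findings.append(f"{name}: {version} differs from golden {approved}")
    return findings

print(audit_against_golden({"openssl": "3.0.13", "libfoo": "1.0.0"}))
# -> ['libfoo: not in the golden set (needs review)']
```

When the central team bumps a version in the golden set, the next build of every team picks up the new default, and this audit catches any stragglers.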

By providing a single set of packages, you’re ensuring all teams reference these. Keep in mind that the build system is the last point where you can enforce this; it can also be done earlier, in code, especially if you’re using a monorepo. An added benefit of sharing common artifacts and libraries is that it’s easier to tell whether you’re affected by a newly discovered vulnerability. If the corresponding artifact hasn’t been updated, you are! And then it’s just one change to address the issue, and for the update to flow downstream to all teams. Phew.
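That “are we affected?” question reduces to a single lookup against the shared artifact set. A minimal sketch, with a hypothetical advisory feed:

```python
# Hypothetical advisory data: package -> versions known to be vulnerable.
ADVISORIES = {"openssl": {"3.0.12"}, "log4j-core": {"2.14.1"}}

def affected_artifacts(shared_artifacts: dict) -> list:
    """List shared artifacts still pinned to a vulnerable version.

    Because every team references the same golden artifact, one check
    answers the question for the whole organization.
    """
    return [name for name, version in shared_artifacts.items()
            if version in ADVISORIES.get(name, set())]

print(affected_artifacts({"openssl": "3.0.12", "zlib": "1.3.1"}))
# -> ['openssl']
```

In practice you’d feed this from a real vulnerability database rather than a hard-coded dict, but the shape of the check stays the same.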

Automate downstream builds and deployments

To make sure that developers’ hard work pays off, their changes actually need to make it to production! In creating a unified CI/CD pipeline, you cleared a path for changes that are made to code in a development environment to propagate downstream to testing and production environments. The next step is to simplify this with automation. In an ideal world, your development team only makes changes to a development environment, with any changes to that environment automatically pushed to testing, validated, and rolled out (and back, if needed).
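The promotion flow described above can be sketched as a small driver. Here `deploy_to` and `run_tests` are stand-ins for your real CD and validation hooks, and the environment names and rollback scheme are illustrative assumptions:

```python
def promote(change_id, deploy_to, run_tests):
    """Automatically push a change through test -> staging -> production.

    On a failed validation, the environment is rolled back to the last
    known good release and promotion stops.
    """
    for env in ("test", "staging", "production"):
        deploy_to(env, change_id)
        if not run_tests(env):
            deploy_to(env, "last-known-good")  # roll back this environment
            return f"rolled back in {env}"
    return "released to production"

# Simulated run: validation fails in staging, so staging is rolled back.
log = []
result = promote("abc123",
                 deploy_to=lambda env, ref: log.append((env, ref)),
                 run_tests=lambda env: env != "staging")
print(result)  # -> rolled back in staging
```

The point is that the developer’s only input is the change itself; the pipeline owns deployment, validation, and rollback.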

Rather than applying DevOps and DevSecOps by requiring your development team to learn operations tools, you simplify those tools and their feedback down to what these teams need to know in order to make changes where they’re most familiar: in code. This should sound familiar—it’s what’s happening with trends like infrastructure as code or GitOps—define things in code, and let your workflow tools handle making the actual change.

If you can automate downstream builds, testing, and deployment of your code, then your developers only need to focus on fixing code. Following DevSecOps principles, they don’t need to learn tooling to do validation testing, phased deployments, or whatever you might need in your environment. Crucially, for security, your development team doesn’t need to learn how to roll out a fix in order to apply a fix. Fixing a security issue in code and committing it is sufficient to ensure that it (eventually) gets fixed in production. Instead, they can focus on quickly finding and fixing bugs in code.

Creating a unified CI/CD pipeline allows you to shift security controls left, including for supply chain security. Then, to best apply DevSecOps principles to improve the security of your dependencies, ask your developers to declare dependencies in code, and in turn provide them with maintained, “golden” artifacts and automated downstream actions so they can focus on code. Because this requires changes not only to security controls, but also to your developers’ experience, security tooling alone isn’t sufficient to implement DevSecOps. In addition to enabling the right security tooling, you’ll also want to take a closer look at your CI/CD pipeline and artifact management.

All together, applying DevSecOps allows you to gain a better understanding of what’s in your supply chain. By using DevSecOps, it should be simpler to manage your dependencies, with a change to a manifest or lockfile easily updating a single artifact in use across multiple teams, and with the automation of your CI/CD pipeline ensuring that changes developers make quickly end up in production.

Maya Kaczorowski is a product manager at GitHub overseeing software supply chain security. She was previously in security and privacy at Google, focused on container security, encryption at rest, and encryption key management. Prior to Google, she was an engagement manager at McKinsey & Company, working in IT security for large enterprises. Outside of work, Maya is passionate about ice cream, puzzling, running, and reading nonfiction.

Copyright © 2020 IDG Communications, Inc.