Continuous Security with OWASP ZAP and Azure ARM (part 1)
Automating your delivery pipeline allows you to deliver software quickly, reliably, and with minimal overhead. But as your delivery cadence increases, your security practices have to keep pace.
This post builds on Microsoft's Premier Dev Blog article in which Francis Lacroix demonstrates how to leverage the OWASP ZAP Docker image as part of a delivery pipeline. Building baseline vulnerability scans into standard automated test processes won't guarantee software is "secure", but it does provide a repeatable baseline security verification and gives some confidence that no terrible security oversights have been built into a release.
In an earlier post on this blog I explained why I still rather like ARM templates, and I decided to port Francis' solution to Azure's ARM deployment model so that I could reuse it more easily. This was not particularly straightforward, and this post documents a few of the headaches I encountered. It will also serve as a reference for anyone looking to build on/extend the ARM solution.
It's published on GitHub here: https://github.com/nathankitchen/owasp-azure-arm
Problem 1: Mounting Filesystems
Francis' original solution made use of an Azure Storage account with a File Share mounted in the container. This allowed the ZAP baseline report to be saved to a local folder that was actually an external file share, from where it could be later retrieved.
Unfortunately it's not possible to provision a file share as part of an ARM template; the canonical quickstart template uses a container to run the Azure CLI and create the file share programmatically.
It is, however, possible to create an Azure Storage blob container via ARM, alongside a SAS token to access it. With a target blob container and SAS token, it's then possible to use wget to download AzCopy, then use that to upload the ZAP report as a blob.
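As a rough sketch of what that looks like in the template (resource names, API versions, and property names here are illustrative and should be checked against the current Azure docs; variables('accountSasProperties') is assumed to define signedServices, signedResourceTypes, signedPermission, and signedExpiry):

```json
{
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts/blobServices/containers",
      "apiVersion": "2019-06-01",
      "name": "[concat(parameters('storageAccountName'), '/default/zap-reports')]",
      "dependsOn": [
        "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
      ]
    }
  ],
  "outputs": {
    "sasToken": {
      "type": "string",
      "value": "[listAccountSas(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2019-06-01', variables('accountSasProperties')).accountSasToken]"
    }
  }
}
```

The listAccountSas function generates the token at deployment time, so it can be surfaced as an output or passed straight into the container's environment.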
Problem 2: Hogging the main thread
Once we have the container provisioned and running, the next issue is working out how to initiate ZAP and kick off the baseline run. Unfortunately, initiating ZAP as part of the exec command hogs the terminal session, so no subsequent commands run until ZAP exits!
To address this I used command chaining in bash. There are several ways to chain commands:
A ; B – run A and then B sequentially, whether or not A succeeds
A && B – run B only if A succeeded
A || B – run B only if A failed
A & B – run B in parallel with A (A continues in the background)
The last of these allows A to proceed in the background. I used a reasonably blunt sleep 30 in lieu of B to ensure that ZAP has enough time to initialise before the baseline scan starts.
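As a minimal runnable sketch of that pattern (zap_daemon and baseline_scan are hypothetical stand-ins for the real ZAP daemon and baseline-scan commands):

```shell
#!/usr/bin/env bash
# Stand-ins for the real ZAP invocations, so the chaining itself can be seen.
zap_daemon()    { sleep 2; }              # simulates a long-running ZAP process
baseline_scan() { echo "scan started"; }  # simulates the baseline scan command

zap_daemon >/dev/null 2>&1 &  # A & ...: ZAP keeps running in the background
sleep 1                       # blunt delay so the daemon has time to initialise
baseline_scan                 # proceeds without waiting for the daemon to exit
```

The key point is the trailing & on the daemon: without it, the scan command would never be reached until ZAP exited.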
Problem 3: SAS token formats
After solving Problem 1 using Blob storage, I expected to use AzCopy to upload the output report to storage. I spent an inordinate amount of time trying to get this working before I eventually realised there was a bug in AzCopy 10's handling of SAS tokens.
While debugging the issue I had tested some SAS token formats using Postman, and realised that pushing to Blob Storage with a SAS token is actually just a single web request! I didn't particularly want to wait for an AzCopy patch, so I removed the AzCopy download and install from the solution and simply reverted to using wget:
wget --method=PUT --header="x-ms-blob-type: BlockBlob" --body-file=*filename* *url*
I encountered a "file not found" failure with this once, so I added another short delay between the ZAP scan and the file upload to ensure the report was fully written before being uploaded.
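Wrapped up as a small helper, the upload step looks roughly like this (the file name and SAS URL are placeholders, not values from the real template):

```shell
#!/usr/bin/env bash
# Sketch of the single-request upload. The SAS URL must already carry a token
# with write permission on the target container.
upload_report() {
  local file="$1" sas_url="$2"
  sleep 5   # crude guard: give ZAP time to finish writing the report
  wget --quiet --method=PUT \
       --header="x-ms-blob-type: BlockBlob" \
       --body-file="$file" \
       "$sas_url"
}
# Example usage (hypothetical account and token):
# upload_report report.html "https://acct.blob.core.windows.net/zap-reports/report.html?sv=...&sig=..."
```

The x-ms-blob-type header is required by the Blob service; without it the PUT is rejected.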
At this point I had an end-to-end solution running the scan and writing the report to an Azure Storage Blob container!
Minor issues
There were a series of issues with this that are worth noting:
- Encoding and escaping is complex. There are some oddities in the way the exec command is parsed in ARM, which means that each part of the command has to be in its own string. Likewise, some commands aren't recognised when pushed into the terminal, presumably because of quoting or encoding issues. Using ARM variables seemed to mitigate this, I presume because the fabric of Azure handles encoding/escaping properly.
- Spider time is passed as a variable, but I don't think it's being respected by ZAP. I need to do more investigation here and get to know ZAP better, but it doesn't look like it's working correctly.
- Signalling completion would be useful, as I could then nuke the container instance and stop paying for it. I presume I'll just have to write a delay into the Azure DevOps pipeline to wait for the output report, rather than do something clever with triggers.
- Container instances aren't available in every region, so you need to pick a resource group location where they are available when you use the template.
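To illustrate the one-string-per-argument oddity, a container's startup command in the container group resource ends up looking something like this (the image name, variable name, and shape are illustrative rather than lifted from the real template):

```json
{
  "name": "owasp-zap",
  "properties": {
    "image": "owasp/zap2docker-stable",
    "command": [
      "/bin/sh",
      "-c",
      "[variables('zapCommand')]"
    ]
  }
}
```

Building the full shell command in a variable and passing it as the single argument to sh -c sidesteps most of the quoting problems.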
Summary
This solution allows on-demand provisioning of the latest ZAP image and execution of a baseline scan. My attempts at cost modelling in the Azure Calculator indicate that each run costs about $0.01, so it ticks the box from a financial perspective.
As it doesn't involve exposing any public endpoints it's also more secure than other solutions, and the fact that environments are created on demand (and then deleted) means that you don't have any long-lived infrastructure to manage, patch, or update.
Next steps
In a later post I'll document how to set up the template as part of an Azure DevOps deployment pipeline. This is going to follow Francis' overall approach quite closely, so please refer to his instructions if you want to set this up as part of your build and release process in the meantime.
Source image credits: Markus Spiske and Ciaran O'Brien.