This is a follow-on from my previous blog, where I compared the results of 5 container vulnerability scanners.
The results of that testing exposed vast differences between the scanners in the vulnerabilities they found. The gap was so large that it prompted me to revisit each scanner and find out why.
Every vulnerability scanner that I’ve tested works by collecting operating system package information and comparing it against corresponding package vulnerability databases.
There is a central vulnerability database operated by NIST called the NVD. There are also databases specific to a particular operating system or application package system.
Last time I tested a single image, docker.io/library/kong:1.0.0rc1 (the latest version at the time), which does have some real vulnerabilities. The image was picked totally at random.
In the initial test none of the scanners got all of these.
Aqua Microscanner: [✓] [x] [x]
Two mediums and an unknown, which is probably a low. To set the record straight, the latest Kong image isn't actually that bad, but it is using a fairly old base image (Alpine 3.6, when the latest at the time of writing is 3.8).
Twistlock initially reported two false positives:
CVE-2016-8610 (openssl)
CVE-2016-7055 (openssl)
I spoke to John Morello, CTO at Twistlock who contacted me via this website to explain the false positives.
“Vulnerability accuracy is something we take very seriously and we’ve invested many hundreds of man hours building a great ingestion stream with a wide variety of upstream sources such that we have the closest alignment between CVE source and component available. That’s important because it results in the most precise results with the lowest false findings (both positive and negative). So, if there’s anything incorrect in what we’re displaying, we treat it as an important bug and fix it quickly.
Your blog post was a good example of that, it helped us notice that the CVE-2016-8610 and CVE-2016-7055 findings were incorrect, due to a mistake in the NVD data ingestion, which we’ve corrected. We also work with upstream providers regularly to correct problems in their data, including with Alpine, NVD, and Red Hat most recently. You can read more about how we work with the community to improve CVE data sources here: https://www.twistlock.com/”
Firstly, it’s awesome that John was quick to respond and cares about this stuff. It’s doubly great that Twistlock submits fixes upstream so that every scanner can improve.
Dagda reported two false positives of its own:
CVE-2017-11164 (pcre, high)
CVE-2017-16231 (unknown)
These both came via the BID database, so I’d imagine some detection rule is broken there.
Why didn’t Twistlock find the 3 vulnerabilities on the first test? The reason is that my Intelligence Stream at work wasn’t up to date. Twistlock works by running a binary locally that scans the image for packages and then submits the package list up to a remote API (the Twistlock service). Customers can manage their own vulnerability database via a web console. Ours wasn’t up to date and we now have tickets to stop this happening again.
As of today Twistlock is 100% correct in its vulnerability scanning of the Kong image.
There isn’t a lot of information about how Aqua Microscanner works, and it’s not open source, so I can’t look at any code. For practical reasons I think we can forgive a scanner not returning the unknown or low severity curl vulnerability CVE-2018-14618.
There is a paid upgrade available which may change the results. Hopefully somebody at Aqua can let me know why it didn’t find CVE-2018-14618 and I can update this blog.
Anchore Engine found CVE-2018-14618, which is the unknown/low severity vulnerability, but missed both CVE-2015-9261 and CVE-2018-12434. I checked the system feeds list and everything seemed up to date, but I was missing the NVD feeds.
By default the packages and NVD databases are disabled due to memory concerns. Docker needs more than 4GB of memory allocated if running on OSX, and then we can enable these in the config.
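The toggles live in anchore-engine’s config.yaml. A sketch of the relevant section, based on my reading of the docs at the time (the exact key names may differ between versions, so check your own config.yaml before copying):

```yaml
# Fragment of anchore-engine config.yaml -- enables the NVD and packages
# feeds that are disabled by default (key layout may vary by version).
feeds:
  selective_sync:
    enabled: True
    feeds:
      vulnerabilities: True
      packages: True   # RubyGems / npm package data
      nvd: True        # needed for non-os package matching
```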
Enabling these unlocks the scanning of RubyGems and npm packages and also enables the NVD database. I was hopeful that Anchore Engine would now find the other vulnerabilities. Unfortunately the results were unchanged.
Update: Daniel, the CTO at Anchore, got in touch and provided this information.
“First, we wanted to verify that your understanding of how anchore-engine is doing its matching is correct, in particular for OS package (apkg, rpm, dpkg) artifacts that are found during image analysis. You are correct in that, currently, anchore-engine is using the upstream OS vendor security data sources exclusively when matching discovered OS packages to potential vulnerability records.
Specifically, in the case of the image in your blog analysis (based on Alpine 3.6), anchore-engine first discovers any alpine packages installed, has a constantly updated vulnerability feed (from Alpine upstream, in the form of the alpine sec-db (from https://github.com/
For the three CVEs in the blog post, only one has any mention in the alpine secdb, which explains why anchore-engine is not finding a match for the other two. Since anchore-engine is periodically pulling the latest version of the upstream vulnerability data, if the vulnerabilities ever do appear in the secdb, then anchore-engine scans will start showing the matches.
As we’re always looking to improve the matching fidelity of anchore-engine, while at the same time keeping a balance between lots of false positives (which some other technical solutions can produce at the base case!), we have discussed including other analysis/mapping features to anchore including the use of binary scanning, generating CPEs for OS packages to match against raw NVD data directly, and the inclusion of additional 3rd party vulnerability sources to find more vulnerabilities in containers. Stay tuned as anchore-engine open-source moves forward!”
So this explains how it found the vulnerability and confirms use of alpine-secdb. Daniel goes on to explain the use of the NVD feeds.
“Second, we also wanted to clarify that while anchore-engine does pull down NVD data, the current usage of this data is exclusively for what we consider to be ‘non-os’ package-to-vulnerability matches. By ‘non-os’, we currently support NPM, GEM, Java (jar/ear/war), and Python (pip) language oriented software packages that are not installed using the ‘os’ package management tools. Since anchore analyzers discover both OS and non-os software, the current vulnerability scanner will match the os packages with the upstream os vendor vulnerability feeds, and will match the non-os packages with the NVD vulnerability feed. Users of anchore can then retrieve a vulnerability report for os only, non-os only, or a combination.”
Now the NVD stuff makes sense too. My containers didn’t have any NPM, Gem, Java or Pip packages inside. Daniel also provided some information about the policy features of Anchore.
“Finally, when it comes to image scanning, while we’re intensely focused on making sure the vulnerability matching capability of anchore is accurate and complete, there are additional related capabilities of our approach that may be of interest.
While anchore supplies a ‘list’ of vulnerabilities as you’ve seen, many of our users prefer to rely on the ‘policy evaluation’ aspect of anchore-engine, whereby a user can create a tuned policy that describes what types of checks (vulnerability, searching for secrets/keys in the container, ensuring dockerfile is in-line with site best practices, whitelisting/blacklisting software artifacts/licenses/etc, and many other checks) should be applied and reporting back a result that contains a report on which checks passed/failed and a final ‘action’ of pass/fail that can be used to gate container images as they move from build to deploy.
The policy evaluation feature of anchore also contains the ability to ‘whitelist’ policy check results (such as CVE/vulnerability false positives), giving users a path for refining the result of an anchore analysis as images move along through their lifecycles.”
This certainly helped me understand Anchore better and confirms my recommendation to use anchore-engine over any other open source vulnerability scanning tool.
The documentation for Clair is actually quite nice and straightforward. The databases used when scanning an Alpine image are alpine-secdb and the NIST NVD. This means we should expect it to find our two medium severity vulnerabilities.
It turns out that CVE-2018-14618 came from the alpine-secdb. Reviewing the docker logs showed many errors related to the NVD database so this scan needed to be rerun after that problem was fixed.
After reading through the logs it looks like I initially had a problem with the NVD database sync. A search through GitHub issues revealed that the v2.0.6 container has a fix for this, as opposed to the latest container tag I was using from the docs. Again, I was hopeful that Clair would now find the missing vulnerabilities, but I was left disappointed and the results remained the same.
Dagda uses quite a few vulnerability databases. It pulls CVEs (Common Vulnerabilities and Exposures) from the NIST NVD database like all of the other scanners, plus BIDs (Bugtraq IDs), RHSAs (Red Hat Security Advisories) and RHBAs (Red Hat Bug Advisories), and known exploits from the Offensive Security database.
It also uses the OWASP dependency checker to look for application vulnerabilities in Java, Python, Node.js, JavaScript, Ruby and PHP. On top of all this it also scans the image with ClamAV looking for viruses and malware, plus it integrates with Falco to detect anomalies.
On paper Dagda is the clear winner in terms of the sheer number of vulnerability databases it checks. But in reality it only found CVE-2015-9261, CVE-2017-11164 and CVE-2017-16231, so it was 50% correct on the real vulnerabilities and turned up two odd results from the BID database that are false positives.
Three out of the five systems tested were arguably set up badly, or it was my fault. I got a bit excited by seeing some output and didn’t really dig into the configuration of each scanner and enable extra settings. In my defense, some of the docs for this stuff are quite bad.
Twistlock: out-of-date vulnerability database; fixed and now showing the correct results.
Clair: broken NVD sync due to bad docs; still not detecting the medium vulnerabilities.
Anchore Engine: misconfigured, needed the NVD database enabled; still not detecting the medium vulnerabilities.
Aqua Microscanner: found 1 out of 2 medium severity vulnerabilities.
Dagda: found 1 out of 2 medium severity vulnerabilities, plus some false positives.
However, even after fixing my scanners I still didn’t get good results. Why? I dug a little deeper into how the mapping works between Alpine packages and CVEs and found a few reasons.
Alpine is really popular due to its small size. A minimal CentOS or Ubuntu container is often 250MB+, whereas Alpine is typically around 5MB. This has a dramatic effect on the speed of image pulling.
To understand the problem with Alpine we need to go back to the basics of how image scanning works. A scanner collects all installed packages along with their versions and compares them to a database of CVEs. The problem is that distributions back-port patches, or in some cases simply modify the way software is installed. This can either leave the package vulnerable or remove the vulnerability entirely, even though the version number stays the same. The devil is in the detail, and it’s quite common for disabling a single flag in a package to completely negate a vulnerability.
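The matching step most scanners perform can be sketched like this (package names, versions and database entries are illustrative; real scanners parse the apk/dpkg/rpm database and use distro-aware version comparison):

```python
# Minimal sketch of naive scanner matching: compare installed package
# versions against a vulnerability database keyed by "first fixed" version.
# All data here is illustrative, not a real feed.

def parse_version(v):
    """Crude numeric version parse -- real scanners use apk/rpm/deb rules."""
    return tuple(int(p) for p in v.split("-")[0].split(".") if p.isdigit())

# Installed packages as a scanner would collect them from the image.
installed = {"musl": "1.1.16-r14", "libcurl": "7.61.0-r0"}

# Vulnerability DB: package -> list of (CVE id, first fixed version).
vuln_db = {
    "musl": [("CVE-2018-EXAMPLE", "1.1.17")],
    "libcurl": [("CVE-2018-14618", "7.61.1")],
}

def scan(installed, vuln_db):
    findings = []
    for pkg, version in installed.items():
        for cve, fixed_in in vuln_db.get(pkg, []):
            # Flag the package if it is older than the first fixed version.
            # This is exactly where back-ported patches cause false
            # positives: the version *looks* vulnerable, but the distro may
            # have patched it without bumping the upstream version number.
            if parse_version(version) < parse_version(fixed_in):
                findings.append((pkg, cve))
    return findings

print(scan(installed, vuln_db))
```

Without a distro-maintained mapping database there is no way for the comparison above to know that a patch was back-ported, which is the heart of the Alpine problem described below.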
This blog does a great job of explaining the mapping problem. With my new understanding I don’t find it surprising that the example CVE-2014-6277 that Anchore investigated in their blog uses a Red Hat base image. Red Hat provides a public security mapping database, the RHSA feed, which makes it quite easy to see if a vulnerability is relevant. In their case they found:
Red Hat no longer considers this bug to be a security issue. The change introduced in bash errata RHSA-2014:1306, RHSA-2014:1311 and RHSA-2014:1312 removed the exposure of the bash parser to untrusted input, mitigating this problem to a bug without security impact.
A follow-up blog from Anchore investigates a vulnerability in a Debian-based image. Again, they can clearly determine if a CVE is relevant by consulting the Debian security tracker. In this case the vulnerability was listed on the security feed, but a review of the CVE info showed it wasn’t a valid vulnerability. This is less helpful than the way Red Hat reports things.
Alpine has no such public mapping database. There is no easy way to determine whether a vulnerability in a piece of software applies to the version packaged in Alpine. And that explains why we get such different results. A lot of the scanners simply compare package versions directly against the NVD database, because that’s all the information they have.
What about alpine-secdb? As you can see, this is simply a directory full of YAML files maintained by a single person. A stark contrast to the RHSA data provided by Red Hat. I suspect Clair and Anchore Engine use alpine-secdb to filter out the other results, which would explain only finding a single CVE. The rest don’t seem to use it, so their results are a total guess.
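Those secdb YAML files map each package to the CVEs fixed at each version. A sketch of the lookup a scanner could do against that structure (the dict mirrors the secdb layout, but the version-compare is deliberately crude and the data is illustrative):

```python
# Sketch of how a scanner like Clair or Anchore Engine might consult an
# alpine-secdb entry. The dict mirrors the secdb YAML layout
# (distroversion / packages / pkg / secfixes); data is illustrative.

secdb_entry = {
    "distroversion": "v3.6",
    "packages": [
        {"pkg": {
            "name": "curl",
            "secfixes": {
                # version at which the listed CVEs were fixed
                "7.61.1-r0": ["CVE-2018-14618"],
            },
        }},
    ],
}

def cves_fixed_after(entry, pkg_name, installed_version):
    """Return CVEs whose fix version is newer than the installed one."""
    hits = []
    for p in entry["packages"]:
        pkg = p["pkg"]
        if pkg["name"] != pkg_name:
            continue
        for fixed_version, cves in pkg["secfixes"].items():
            # Crude string comparison -- real scanners use apk version rules.
            if installed_version < fixed_version:
                hits.extend(cves)
    return hits

print(cves_fixed_after(secdb_entry, "curl", "7.61.0-r0"))
```

The catch is coverage: if a CVE never appears in a secfixes list, a scanner that trusts the secdb exclusively will simply never report it, which matches what we saw with Clair and Anchore Engine above.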
Why does Twistlock find them all? The basic explanation is that they maintain their own mapping database as part of their intelligence stream. They pay people to examine whether a CVE is relevant to Alpine.
This whole exercise has made me question whether running Alpine in production is a good idea. There are images available now that are reasonably small and based on Red Hat, Ubuntu or Debian. These images would produce much better scan results, as the scanners have access to a public vulnerability database.
For example, Minideb is 23MB compressed and 53MB uncompressed. Similarly, Ubuntu have minimised their containers down to around 80MB uncompressed. This is still a lot larger than the 4MB Alpine latest container, but given Docker image layer caching, the speed of modern networks and the size of modern disks, I’d argue this isn’t really a problem for most people.
There is an argument to be made for Alpine’s smaller attack surface. I’m not sure this is a good trade-off versus the transparency of package vulnerabilities.
If you are mostly running Alpine images then I don’t see much benefit in running open source vulnerability scanners against them at this time as the results are a guess. The only option right now for Alpine is to buy a paid service and hope their mapping database is correct.
When scanning any image you have to be extremely careful with the configuration and ensure that the databases being referenced locally are up to date with the latest upstream vulnerability databases. This needs monitoring so you get alerted if you fall out of sync.
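A minimal sketch of such a freshness check (how you fetch the last-sync timestamp depends on your scanner, e.g. a feeds-status API call; only the alerting logic is shown, and the 24-hour threshold is an assumption):

```python
from datetime import datetime, timedelta, timezone

# Sketch of a vulnerability-feed freshness check. Obtaining last_sync is
# scanner-specific; this only demonstrates the staleness/alert logic.

def is_stale(last_sync, max_age=timedelta(hours=24)):
    """True if the vulnerability feed has not synced within max_age."""
    return datetime.now(timezone.utc) - last_sync > max_age

# Example: a feed that last synced two days ago should trigger an alert.
two_days_ago = datetime.now(timezone.utc) - timedelta(days=2)
print(is_stale(two_days_ago))
```

Wiring this into whatever monitoring system you already run is enough to catch the out-of-date-database problem that tripped up my Twistlock results.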
My current recommendation for a paid product, if you have budget, is Twistlock. My recommendation for a free product is Anchore Engine, simply because it does more once you enable all of the feeds. Review the features of both as they do a lot more than what I’ve covered here.
So the question now is Red Hat, Ubuntu or Debian? I’ll do a blog comparing scan results between those base images and hopefully this time we’ll get sane results.
Update: the base image comparison blog is now complete.