We also needed to develop and codify AWS ECR security patterns to ensure the solution was compliant from the outset, and that the standards and processes defined for the centralised solution were implemented consistently across the global developer community.
How does it work?
The solution operates from a central AWS ‘solution’ account that application teams interact with. Within this account, workflow services ensure that pushing an image automatically triggers the required actions. Images are segregated so that consumers may only push to “unscanned” repositories; the system automatically promotes approved images to “scanned” repositories, which can only ever be read from.
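The push/promotion segregation can be sketched as follows. The `unscanned/` and `scanned/` repository prefixes are illustrative assumptions, not the actual naming convention used in the solution:

```python
# Illustrative sketch of the repository segregation model. The prefixes
# below are assumptions for illustration only.
UNSCANNED_PREFIX = "unscanned/"
SCANNED_PREFIX = "scanned/"

def can_push(repository: str) -> bool:
    """Consumers may only push to 'unscanned' repositories."""
    return repository.startswith(UNSCANNED_PREFIX)

def promoted_repository(repository: str) -> str:
    """Target repository an approved image is promoted to."""
    if not can_push(repository):
        raise ValueError(f"not an unscanned repository: {repository}")
    return SCANNED_PREFIX + repository[len(UNSCANNED_PREFIX):]
```

In practice the push restriction itself would be enforced with ECR repository policies rather than application code; the functions above only illustrate the naming and promotion convention.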
Workflow orchestration is provided through AWS Lambda functions, in combination with CloudWatch Event Rules, with SNS for notifications and SQS for message queueing.
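As a sketch of the kind of trigger the orchestration Lambdas hang off, the following is a CloudWatch Events rule pattern matching successful image pushes to ECR (this is the standard “ECR Image Action” event shape; how the solution filters it beyond this is an assumption):

```python
import json

# Event pattern matching successful ECR image pushes. A rule with this
# pattern would target the orchestration Lambda function.
ECR_PUSH_PATTERN = {
    "source": ["aws.ecr"],
    "detail-type": ["ECR Image Action"],
    "detail": {
        "action-type": ["PUSH"],
        "result": ["SUCCESS"],
    },
}

# Serialised form, as it would be supplied to the rule, e.g.
# events_client.put_rule(Name=..., EventPattern=ECR_PUSH_PATTERN_JSON)
ECR_PUSH_PATTERN_JSON = json.dumps(ECR_PUSH_PATTERN)
```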
Scanning services are provided through the deployment of Aqua Cloud Native Security Platform (CSP). This tool is deployed within the solution account inside a private EKS cluster. The dashboard interface provided by Aqua CSP allows development teams to search, report on and export the state of recorded image scans, and security teams to manage vulnerability policies.
When an image is pushed, the scanning workflows orchestrate ad-hoc Kubernetes jobs within the EKS cluster, scanning the image inside an ephemeral scan container.
The workflow processes monitor these scan jobs and then orchestrate ephemeral image promotion jobs: on a successful scan the image is moved from “unscanned” to “scanned”; on a failed scan it is rejected.
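The promote/reject decision above can be sketched minimally as follows; the real workflow also handles timeouts, retries and in-flight states, which are omitted here:

```python
from enum import Enum

class ScanOutcome(Enum):
    """Terminal states of an ephemeral scan job (illustrative)."""
    SUCCESS = "success"
    FAILURE = "failure"

def next_action(outcome: ScanOutcome) -> str:
    """Decide the follow-up job the workflow schedules after a scan."""
    if outcome is ScanOutcome.SUCCESS:
        # An ephemeral promotion job copies the image unscanned -> scanned.
        return "promote"
    # The image is rejected and never reaches the scanned repositories.
    return "reject"
```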
Throughout the process, SNS notifications are generated at key points, informing the consuming team and pipeline processes of scan acceptance, completion, promotion or rejection.
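A sketch of such a notification payload is below. The event names and message fields are assumptions for illustration, not the solution’s actual message schema:

```python
import json

# Illustrative event names; the real workflow's vocabulary may differ.
NOTIFIABLE_EVENTS = {"ACCEPTED", "COMPLETED", "PROMOTED", "REJECTED"}

def build_notification(event: str, image_uri: str, detail: str = "") -> str:
    """Build the JSON message body published to the SNS topic."""
    if event not in NOTIFIABLE_EVENTS:
        raise ValueError(f"unknown event: {event}")
    return json.dumps({"event": event, "image": image_uri, "detail": detail})

# In the real workflow this body would be published with
# sns_client.publish(TopicArn=..., Message=build_notification(...)).
```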
Results from the scans are written to HTML reports and persisted as S3 objects shared with the originating consumers, and are also made available in the Aqua CSP dashboard for security and business users to investigate. All events, actions, scan results and responses are also logged to DynamoDB audit tables, which are used for integration with pre-existing customer security tools.
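An audit record might look like the following when written to the DynamoDB audit tables. The attribute names are illustrative assumptions, not the real table schema:

```python
from datetime import datetime, timezone

def audit_item(image_uri: str, action: str, result: str,
               report_s3_key: str) -> dict:
    """Build a DynamoDB attribute-value map for one audit record
    (illustrative schema)."""
    return {
        "ImageUri": {"S": image_uri},
        "Action": {"S": action},
        "Result": {"S": result},
        "ReportKey": {"S": report_s3_key},   # S3 key of the HTML report
        "Timestamp": {"S": datetime.now(timezone.utc).isoformat()},
    }

# Written with dynamodb_client.put_item(TableName=..., Item=audit_item(...)).
```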
- A conduit from pipeline to cluster that gives us a level of confidence that what we’re putting into the cluster is what we wanted it to be in the first place
- Great developer feedback, and the pattern is currently being further developed for use across multiple cloud providers.