Red Hat Quay 3.9 is generally available as of today! This version increases the vulnerability reporting coverage of container image content, broadens audit logging coverage, adds integration with external log management systems, and provides a more scalable way of tracking storage consumption across a large number of registry tenants. We also added resiliency improvements to Quay's unique geo-replication feature and an automated update of the embedded PostgreSQL database. Going forward, Red Hat Quay will align with the lifecycle of Red Hat OpenShift Container Platform.
Increased and more precise vulnerability reporting
With supply chain security moving to the center of attention for many IT leaders, the breadth and depth of vulnerability reporting matter more than ever. While increasing reporting coverage, it is equally important not to overwhelm security auditors and developers with noise from false positives in vulnerability reports.
Red Hat Quay 3.9 includes a new release of Quay’s internal vulnerability reporting engine, Clair. Clair is now capable of showing vulnerabilities for Golang modules found in Golang binaries, as well as CVEs found in Ruby-based applications using RubyGems.
Via Clair, Quay already reports vulnerabilities for many operating system package managers: rpm/yum/dnf in Red Hat Enterprise Linux, Oracle Linux, Amazon Linux, and SUSE Linux Enterprise; apt in Debian and Ubuntu; and apk in Alpine. It also covers vulnerabilities in language package ecosystems such as Java (Maven) and Python, and enhancements have been made to remediate false positives that occurred when language packages were installed via an operating system's package manager.
Clair now also uses the OSV.dev vulnerability database for programming language package managers, which enables reporting of Java CVEs even in offline deployments; Red Hat Quay 3.8 and earlier depended on an online service for this.
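To illustrate what matching against OSV data involves, here is a minimal sketch of checking an installed package version against an OSV-style advisory range. This is a simplification under assumed inputs; real Clair/OSV matching handles many more cases (ecosystem-specific version semantics, pre-releases, multiple ranges), and the advisory below is made up.

```python
# Simplified OSV-style range matching. The events list follows the OSV
# schema's ordered "introduced"/"fixed" events; versions are assumed to
# be plain dotted integers for this sketch.

def parse(version: str) -> tuple:
    return tuple(int(p) for p in version.split("."))

def is_affected(version: str, events: list) -> bool:
    """Return True if `version` falls inside the advisory's range."""
    v = parse(version)
    affected = False
    for event in events:
        if "introduced" in event:
            intro = event["introduced"]
            if intro == "0" or parse(intro) <= v:
                affected = True
        elif "fixed" in event and parse(event["fixed"]) <= v:
            affected = False
    return affected

# Hypothetical RubyGems advisory, fixed in 2.3.1:
advisory_events = [{"introduced": "0"}, {"fixed": "2.3.1"}]
print(is_affected("2.1.0", advisory_events))  # True  (vulnerable)
print(is_affected("2.3.1", advisory_events))  # False (fixed)
```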
And by the way: since Quay supports OCI artifacts, it can also store and serve image signatures, SBOMs, and attestations generated by the Sigstore toolchain.
Audit Logging integration with Splunk
Quay audits many events that occur in a central registry service. In the 3.9 release, this scope has been broadened to include the creation and modification of organizations, organization settings, and robot accounts; previously, auditing focused mostly on repository-level events.
Additionally, based on popular demand, we have integrated Quay's auditing system with Splunk's REST API so that all audit events in Quay can be forwarded to a central Splunk instance. This lets users centrally query and efficiently store vast amounts of audit data from a Quay instance over longer periods than would normally be possible with Quay's own internal database. Note that once Splunk integration is enabled, log review is expected to happen directly in Splunk rather than in the Quay UI.
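Conceptually, forwarding an audit event means wrapping it in the envelope Splunk's HTTP Event Collector (HEC) expects and POSTing it to the collector endpoint. The sketch below shows that shape; the endpoint path and Authorization header follow Splunk's HEC API, but the Quay-side field names (`kind`, `performer`, and so on) are illustrative assumptions, not Quay's actual audit schema.

```python
# Shaping an audit event for Splunk HEC. Only hec_payload() is exercised
# here; forward() shows the HTTP call but is not invoked in this sketch.
import json
import time
import urllib.request

def hec_payload(kind: str, performer: str, metadata: dict) -> dict:
    """Wrap an audit event in the envelope HEC expects."""
    return {
        "time": time.time(),
        "sourcetype": "_json",
        "event": {"kind": kind, "performer": performer, **metadata},
    }

def forward(splunk_url: str, token: str, payload: dict) -> None:
    """POST one event to the HEC endpoint (not called in this sketch)."""
    req = urllib.request.Request(
        f"{splunk_url}/services/collector/event",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Splunk {token}"},
    )
    urllib.request.urlopen(req)

payload = hec_payload("org_create", "alice", {"namespace": "payments-team"})
print(payload["event"]["kind"])  # org_create
```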
Scalable storage consumption tracking
Storage quotas were introduced in earlier Red Hat Quay releases, but the implementation hit limits for some of our customers with very large Quay deployments. We have also incorporated feedback on how the benefits of layer sharing are attributed and on the overall coverage of the storage consumption calculation.
We have completely reimplemented the tracking in Red Hat Quay 3.9. As a result, users now enjoy a much faster storage consumption calculation that scales to deployments with tens of thousands of users and organizations without impacting the performance of image pushes or the rendering of the UI. Unlike typical implementations, Quay also shows storage consumption at the individual repository level rather than just the organization level, which helps organization admins correctly attribute storage usage to specific teams or applications.
The calculation at the organization level has also changed: users are now incentivized to use a common base image across repositories in an organization, because it is only counted once per organization, whereas previously the use of a common base image in different repositories resulted in double counting.
A common base image from Red Hat, for instance, is UBI, which is available in various footprints depending on the specific needs of the developer or containerized application. In Red Hat Quay 3.9, the storage required by such a base image is attributed to an organization only once, regardless of how many of its repositories and image tags are based on it. Multiple organizations using the same base image in their repositories will each still see the storage consumption attributed to them. Regardless, every unique container image layer is only ever stored once in Quay.
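The counting rule above can be sketched in a few lines: within an organization, each unique layer digest contributes its size once, however many repositories reference it. The digests and sizes below are made up for illustration; this is not Quay's actual implementation.

```python
# Deduplicated storage accounting: a layer digest shared by several
# repositories in one organization is counted a single time.

def org_storage_bytes(repos: dict) -> int:
    """repos maps repo name -> {layer_digest: size_in_bytes}."""
    seen: dict = {}
    for layers in repos.values():
        seen.update(layers)  # same digest -> same size, recorded once
    return sum(seen.values())

org = {
    "frontend": {"sha256:ubi": 70_000_000, "sha256:app1": 5_000_000},
    "backend":  {"sha256:ubi": 70_000_000, "sha256:app2": 9_000_000},
}
# Naive double counting would report 154 MB; deduplicated: 84 MB.
print(org_storage_bytes(org))  # 84000000
```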
See this demo for a video walkthrough of the renewed storage consumption tracking experience: Red Hat Quay 3.9 demo: New Storage Consumption Tracking
Geo-replication resiliency
In geo-replication, multiple geographically dispersed Quay instances transparently replicate images among each other and act as one single large registry service with a common entry point. In Red Hat Quay 3.9, it is now simpler to deal with scenarios in which one of those instances and its storage are irrecoverably lost. Previously, manual database modifications were necessary to remove the site from the geo-replication setup completely.
This is now automated with utilities inside the Quay container image.
A new version of Postgres
With Postgres v10 going end of life in May 2024, Quay moves to Postgres v13. For customers running Red Hat Quay with the operator on OpenShift, this is a fully automated migration, leveraging the power of the Kubernetes operator pattern.
Under the hood there is a lot going on: the operator creates new storage volumes, spins down the Quay and Clair pods, and leverages Postgres tooling to dump and copy the database content of Quay and Clair into the new storage volumes. It then starts new database instances running Postgres v13. Moving to this version entails an on-disk format change of the Postgres database files, which the operator manages by re-importing the database dumps into the new database instances. It holds the update until that process is complete and the databases are back online. If something breaks, the operator allows the user to roll back these changes and restore the registry service.
However, the update entails downtime (minutes, depending on database size) as well as temporarily increased storage consumption for the second database volume. Users of standalone Quay deployments on RHEL servers migrate the database manually by bringing down the old database instance and running a migration script.
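For standalone deployments, the migration boils down to the classic dump-and-restore pattern. The sketch below builds the two commands involved; the hostnames, database name, and dump path are illustrative assumptions, not the values Quay's tooling actually uses.

```python
# Dump-and-restore pattern behind the Postgres 10 -> 13 migration.
# All hostnames, names, and paths here are made-up examples.

def migration_commands(old_host: str, new_host: str,
                       db: str = "quay",
                       dump_path: str = "/tmp/quay.dump") -> list:
    """Return the two commands a manual migration boils down to."""
    # 1. Dump the old instance in pg_dump's custom format.
    dump = ["pg_dump", "-h", old_host, "-Fc", "-f", dump_path, db]
    # 2. Restore into the new instance; re-importing produces
    #    Postgres 13's on-disk format.
    restore = ["pg_restore", "-h", new_host, "-d", db, dump_path]
    return [dump, restore]

for cmd in migration_commands("quay-db-old", "quay-db-new"):
    print(" ".join(cmd))
```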
Other improvements
With every Quay release, numerous bugs are fixed and vulnerabilities in Quay's dependencies are addressed. Two additional improvements stand out in this release. First, Quay now officially supports Nutanix Objects Storage as a backend for image layers; this configuration has been added to our support matrix.
Second, the new PatternFly-based UI has progressed with additional screens for repository settings, permission management, robot accounts, and notifications. The new UI is still opt-in but can easily be taken for a test drive by adding FEATURE_UI_V2: true to the Quay configuration file.
Let us know what you think about it in this quick survey.
Future changes to the support lifecycle
So far, Red Hat Quay has followed an n-2 support model: for any given Red Hat Quay version, the two previous versions are in the Maintenance support phase and continue to receive patches at Red Hat's discretion.
Starting with Red Hat Quay 3.10, the lifecycle will begin to align with Red Hat OpenShift. Consequently, the 3.10 release will be available within four weeks of the OpenShift Container Platform 4.14 release at the beginning of Q4 this year and will have the same support phase dates. This allows customers to version-lock Quay deployments on top of OpenShift to the lifecycle of the underlying cluster. In turn, each Red Hat Quay release will be supported for 18 or 24 months, which is up to 12 months longer than the previous support cycle. In general, this benefits customers who have long lead times when planning maintenance windows for upgrades to their central registry instance.
Red Hat Quay continues to be tested and supported on older OCP releases as it is today and customers can expect to have a supported update path between the various versions at all times, but specifically between the Quay versions that are aligned with even OCP releases (Extended Update Support versions). What changes is the release cadence and support length. Red Hat Quay also continues to be supported when deployed outside of OpenShift Container Platform.
To accommodate customers on earlier releases, the lifecycles of Red Hat Quay 3.8 and 3.9 will be extended at the time of Red Hat Quay 3.10 general availability, providing enough time for customers to update.
The lifecycle information and support phase dates can always be viewed on the Red Hat Quay Lifecycle Policy page.