
Ever wondered how new code lands in a newly-released RPM package? Or why some issues that already have published fixes take longer to be released by Red Hat? This blog post will give you a glimpse of a typical code lifecycle, using the kernel package as an example.

The Scenario

First of all, let's set some context. We will be discussing a code update for something that has already been released; that is, a package erratum. We are not talking about new products (for example, an imaginary future Red Hat Enterprise Linux major version), but about a maintenance release. We will use the kernel package as our walkthrough example.

That said, why was my kernel package updated?

Red Hat Errata fall into three main classes:

  • Product Enhancement;

  • Bug Fixes; or

  • Security.
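
Each of these classes corresponds to an advisory ID prefix: RHEA for enhancements, RHBA for bug fixes and RHSA for security fixes. The minimal sketch below (an illustration only, not a Red Hat tool, and the advisory ID in it is made up) shows that mapping:

    # Minimal sketch, not a Red Hat tool: map an advisory ID prefix to its class.
    ERRATA_CLASSES = {
        "RHEA": "Product Enhancement Advisory",
        "RHBA": "Bug Fix Advisory",
        "RHSA": "Security Advisory",
    }

    def classify(advisory_id):
        """Return the errata class for an ID such as the made-up 'RHSA-2017:1234'."""
        prefix = advisory_id.split("-", 1)[0]
        return ERRATA_CLASSES.get(prefix, "Unknown advisory type")

    print(classify("RHSA-2017:1234"))  # Security Advisory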

The class names are self-explanatory, and the priority usually increases from Product Enhancement (lowest) to Security (highest). So, for the purposes of this article, suppose that, due to a vulnerability, a support case, or a really good enhancement, a package has to be updated. What's next?

Upstream first!

Every product that we ship has an upstream project. We work really hard to get every snippet of code in our products accepted into its upstream project, unless it makes no sense to do so.

Why?

First of all, because it is the right thing to do! If you want to harness the power of open source, you go and contribute to the upstream. Not only is this community-minded, but it also helps us avoid supporting code that is not maintained or tracked anywhere else. Let's use a quick example here: suppose we write an Awesome-Code-Change, but the patch is never submitted, vetted and accepted at Kernel.org.

Fast forward.

Another developer finds a different solution to the very same issue, and it makes its way into the upstream kernel at Kernel.org. However, this patch differs significantly in structure, variable names and ABI names.

Now what? We are in hot water! Further kernel evolution will take the upstream patch as its base, but we have chosen our own solution, with different code changes, variable naming, and so on.

The result? Porting further updates from upstream becomes unnecessarily painful and cumbersome, and no one gains from that. It takes longer to solve future issues and leads to confusion and anxiety all around.

We grow when we share!

This is why upstream acceptance is very important and taken very seriously here. Without code being accepted upstream, it gets really hard to move the code forward internally. Upstream acceptance usually requires sending the code fix to the project mailing list, where it is vetted by respected community members. They read the code and evaluate it, and if someone spots potential problems, whether new bugs, regressions or performance issues, the patch is returned for rework. Once everything is settled and accepted (the famous ACKs), the code is merged into the upstream project.
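
Those ACKs typically arrive as "Acked-by:" tags in replies on the mailing list. As a toy illustration (the helper function and the sample mail below are invented for this post, not part of any real workflow), counting them might look like this:

    # Toy sketch: count distinct "Acked-by:" tags in a patch mail.
    # The helper and the sample mail are invented for illustration.
    def count_acks(mail_body):
        acks = {
            line.split(":", 1)[1].strip()
            for line in mail_body.splitlines()
            if line.strip().lower().startswith("acked-by:")
        }
        return len(acks)

    example_mail = """\
    Subject: [PATCH] fix the Awesome-Code-Change corner case

    Acked-by: Alice Developer <alice@example.com>
    Acked-by: Bob Reviewer <bob@example.com>
    """
    print(count_acks(example_mail))  # 2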

The code sketch

Our Awesome-Code-Change final patch is accepted upstream, which is awesome!

However, this isn't the end of our story. Internally here at Red Hat, the patch still has major work ahead of it. Remember that our Enterprise products are, by design, frozen versions of the upstream projects with backported changes. Take the Red Hat Enterprise Linux 7 kernel: it is a 2014, Fedora 18 upstream kernel snapshot plus thousands and thousands of patches that have built up over the years, fixing or adding things.

What's the big deal about that? It means that in several areas the upstream project code has drifted away from the Enterprise product! And that means we need to craft a special version of that upstream patch to fit our supported products. This lets us provide the stable functionality our subscribers demand while still addressing bugs and vulnerabilities!

Think about the heavy work of backporting and fitting upstream patches all the way down to Red Hat Enterprise Linux 7, Red Hat Enterprise Linux 6, Red Hat Enterprise Linux 5, sometimes even Red Hat Enterprise Linux 4, while at the same time having to keep the kABI (kernel ABI) stable within the same product major version… Yikes!
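
To give a feel for what "keeping the kABI stable" means, here is a minimal sketch that compares the exported-symbol CRCs of two builds. It assumes the classic Module.symvers layout (CRC, symbol, module, export type per line), ignores the fact that only a whitelisted subset of symbols actually matters, and uses made-up file names; it is an illustration, not the tooling Red Hat really uses:

    # Minimal illustrative sketch, not Red Hat's real kABI tooling.
    # Assumes the classic Module.symvers layout: CRC, symbol, module, export type.
    def load_symvers(path):
        """Return a dict mapping each exported symbol name to its CRC."""
        crcs = {}
        with open(path) as handle:
            for line in handle:
                fields = line.split()
                if len(fields) >= 2:
                    crcs[fields[1]] = fields[0]
        return crcs

    def kabi_breaks(old_path, new_path):
        """List symbols that were removed or whose CRC changed between builds."""
        old, new = load_symvers(old_path), load_symvers(new_path)
        broken = []
        for symbol, crc in old.items():
            if symbol not in new:
                broken.append(symbol + ": removed")
            elif new[symbol] != crc:
                broken.append(symbol + ": CRC changed")
        return broken

    # Hypothetical file names: GA build vs. a candidate update build.
    for problem in kabi_breaks("Module.symvers.ga", "Module.symvers.candidate"):
        print(problem)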

Sometimes a code change is just too big and intrusive to release into a product that is already in Production Phase 2 or 3 of its life cycle, and because of the risk of introducing new bugs or regressions (or <gasp> possibly breaking API/ABI compatibility in our known-stable releases), we choose not to implement it.

Defending your code

Now that we have the backported code that supposedly fixes the bug and fits the affected versions, the developer posts the code change to our internal mailing lists, stating the problem, presenting the code and the upstream reference, and asking for feedback.

Other developers bring fresh eyes to the code and look for potential problems, regressions and the overall correctness of the patch, much as upstream does. You need at least three different developer ACKs to get the new code accepted into the product. Then, that leads us to...

The validation

Have you ever looked at our standard Bugzilla bug filing template? For everyone who files a bug, there are two very important fields there: "How reproducible" and "Steps to Reproduce". These are crucial for building a reproducer case for QA/QE validation. Of course, there are corner cases that “happen randomly”. These should never happen… But hey, who said that life is easy?

QA/QE will then write a test case and add it to their test library. With the test case in hand, the patched package build is tested against every existing test case for that component and version. But wait, there's more! Depending on the type of update and the product, we also test it against certified hardware. That can mean hundreds of different hardware models, because the patch might tickle some particular piece of hardware in a way it doesn't like. Yes, that takes time.
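
To make those "Steps to Reproduce" a little more concrete, here is what a tiny automated reproducer could look like once QA/QE turns a bug report into a test. The scenario, command and expectations below are invented placeholders, not a real Red Hat test case:

    # Invented illustration: a Bugzilla-style reproducer turned into a test.
    # The command and expectations are placeholders for what a real bug lists.
    import subprocess
    import unittest

    class AwesomeCodeChangeReproducer(unittest.TestCase):
        """Steps to Reproduce: run the command. Expected: it exits cleanly."""

        def test_steps_to_reproduce(self):
            # Step 1: run the command from the bug report (placeholder).
            result = subprocess.run(["uname", "-r"],
                                    capture_output=True, text=True)
            # Expected result: success, and a kernel release string is printed.
            self.assertEqual(result.returncode, 0)
            self.assertTrue(result.stdout.strip())

    if __name__ == "__main__":
        unittest.main()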

If the patch survives the QA/QE phase, then we go to...

The Release

We have two possible routes here.

If the issue is really severe and could potentially affect lots of customers, we might release a Z-stream package. What is a Z-stream? Take the current Red Hat Enterprise Linux 7 version as of this post: 7.3. “7” is the X (major) version, “3” is the Y (minor) version. Z-stream then indicates a sub-release within the 7.3 version, prior to the next minor release (which is going to be 7.4).

The Zs have a Red Hat-internal cadence, but sometimes something is a real show-stopper and we have to release an out-of-band Z-stream. The other possible scenario, usually for medium and lower priorities, is to queue the package update for the next large batch of updates: the next Y-stream (the next minor release).
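
As a rough sketch of that X.Y.Z naming (nothing more than an illustration; the helper below is made up for this post):

    # Made-up helper that spells out the X.Y.Z naming described above.
    def describe(version):
        x, y = (int(part) for part in version.split("."))
        return ("X (major) = {0}, Y (minor) = {1}; "
                "Z-stream updates land within {0}.{1}, and the next minor "
                "release would be {0}.{2}".format(x, y, y + 1))

    print(describe("7.3"))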

Security errata, usually referred to as RHSAs, are sometimes so Critical that they are released the moment they are ready, regardless of any previous release plans for the same component or product. In fact, we have a track record of releasing fixes for most Critical security flaws in Red Hat Enterprise Linux on the same or the next day after they become public knowledge.

Once it has been decided how the patch will be released, it is just a matter of time until it enters the next publishing cycle. The update is then pushed to our CDN, and you can happily and confidently get the new package in your next yum update.
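
If you are curious whether such an update has already reached the repositories your system sees, a small illustration-only wrapper around yum check-update could look like this (yum check-update exits with status 100 when updates are available):

    # Illustration only: ask yum whether a newer kernel package is available.
    # yum check-update exits with 100 when updates are available, 0 when not.
    import subprocess

    def kernel_update_available():
        result = subprocess.run(["yum", "-q", "check-update", "kernel"],
                                capture_output=True, text=True)
        if result.returncode == 100:
            print(result.stdout.strip())
            return True
        return False

    if kernel_update_available():
        print("A newer kernel erratum is waiting in your next yum update.")
    else:
        print("No kernel updates pending.")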

What a ride!


Rodrigo Freire is a TAM in the LATAM region. He has expertise in Performance and is currently finding his way in the magic OpenStack land. Find more posts by Rodrigo Freire at https://www.redhat.com/en/about/blog/authors/rodrigo-freire

Innovation is only possible because of the people behind it. Join us at Red Hat Summit, May 2-4, to hear from TAMs and other Red Hat experts in person! Register now for only US$1,000 using code CEE17.

A Red Hat Technical Account Manager (TAM) is a specialized product expert who works collaboratively with IT organizations to strategically plan for successful deployments and help realize optimal performance and growth. The TAM is part of Red Hat’s world-class Customer Experience and Engagement organization and provides proactive advice and guidance to help you identify and address potential problems before they occur. Should a problem arise, your TAM will own the issue and engage the best resources to resolve it as quickly as possible with minimal disruption to your business.


