This article was originally published on the Red Hat Customer Portal. The information may no longer be current.

We live in an electronic age. More and more manifestations of human identity are available via electronic media. Besides its advantages, this brings challenges as well. As computer systems become more capable and complex, it is ever more important to configure and keep the underlying system secure against security threats.

Securing a computer system is a complex and continuous process. Besides the requirement that the system be designed with security in mind from the start, the subsequent related actions often involve (but are not limited to) the following:

  • proper system configuration,
  • presence of means for users' privilege separation,
  • periodically updating the underlying system software with available security patches,
  • presence of system tools regularly performing security scans, integrity checks etc.

In this article we will present how the Security Content Automation Protocol (SCAP) can be used for automated system monitoring and predefined security policy compliance checks.

Towards automated monitoring of a system's security level

Keeping in mind the objective of complying with a predefined level of security, let's first look at what we already have available:

  • we have a set (often many thousands) of computer systems we want to monitor / administer, and
  • a central security policy we want to apply to each of these systems.

From observation, security policies often come in the form of a set of rules (a checklist), where the system has to satisfy all the rules in order to comply with that security policy. Also, since there might be differences between the controlled systems, we want to abstract from concrete system specifics in order to fulfill the main goal.

Considering the above, let's define the features of the protocol we are looking for as follows:

  • the mechanism should be capable of performing system state scans regularly and in an automated way (locally and / or remotely),
  • it should be possible to specify what should be audited in the form of a checklist,
  • that checklist would ideally not depend on the type of the systems we are going to apply the security policy against,
  • the obtained results should be reported, preferably in some interoperable form, for further (possibly again automated) reuse / processing,
  • also, once the results have been analyzed, the mechanism should be capable of correcting local system inconsistencies so the system complies with the predefined centralized policy.

Taking into account the aforementioned expectations, we will consider the Security Content Automation Protocol (SCAP) in the following sections. SCAP has been chosen as a representative of a community-evolved protocol that meets the above criteria. Thanks to the way it is developed, its functionality also covers a wide range of scenarios that arise when designing an automated system scanner. To abstract from underlying computer system characteristics we will use the Open Vulnerability and Assessment Language (OVAL) standard of SCAP. For the representation of a security policy we will use the Extensible Configuration Checklist Description Format (XCCDF) concept of SCAP [1].

Without diving too deep into the description of all XCCDF possibilities, for our purpose it is necessary to mention the following: in XCCDF terminology the security policy is constituted as a checklist. A checklist is represented by a benchmark. A benchmark consists of items (groups, rules, values, profiles, etc.). A rule is a named entity that acts as the holder of a system check and can also contain information about the steps needed for correction in case the system check failed. A group merges particular rules into logically related sections. A value is a named entity which can be used in other items to hold particular state information and to pass this information further. Finally, a profile is the subset of the rules available in the benchmark that will be executed when performing the system scan [2].
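
To make this terminology more tangible, the following is a minimal, hand-written sketch of how such items nest inside a benchmark. The ids, titles, and content are purely illustrative (they do not come from any shipped SCAP content), and some details a validating tool would insist on are omitted for brevity:

<Benchmark id="sample-benchmark" xml:lang="en-US"
 xmlns="http://checklists.nist.gov/xccdf/1.1">
  <status>draft</status>
  <title>Sample Security Benchmark</title>
  <version>0.1</version>

  <!-- a profile selects the subset of rules to be evaluated -->
  <Profile id="sample-profile">
    <title>Sample Profile</title>
    <select idref="sample_rule" selected="true"/>
  </Profile>

  <!-- a value holds state information other items can reference -->
  <Value id="sample_value" type="number">
    <title>Sample tunable value</title>
    <value>12</value>
  </Value>

  <!-- a group merges logically related rules -->
  <Group id="sample_group">
    <title>Sample Group</title>
    <Rule id="sample_rule" severity="medium">
      <title>Sample Rule</title>
      <!-- the reference to the actual (e.g. OVAL) check would go here -->
    </Rule>
  </Group>
</Benchmark>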

There are two types of products utilizing XCCDF benchmarks:

  • benchmark producer – a product that creates XCCDF benchmark documents, and
  • benchmark consumer – a product that accepts an existing XCCDF benchmark document, processes it during the system scan, and produces a final XCCDF results document.

We now have a protocol to represent the automated checks (SCAP), a way to represent computer system details (OVAL), and a manner to compose a security policy (XCCDF). What we are still missing is a tool which would glue all these items together (understand SCAP, be able to interpret OVAL statements and XCCDF benchmarks) and would perform the actual audit of the system and the correction of failures after the system scan. In the following examples we will use the OpenSCAP library and related toolkit as a representative of such an aid.

Workflow of a system scan (consuming the benchmark)

Let's briefly document how a computer system is scanned during the automated check.

Before it is possible to perform the check, some independent trusted authority defines a security policy in the form of a binding document. This document is converted into an XCCDF benchmark (OpenSCAP terminology often refers to this benchmark simply as “SCAP content”), where XCCDF rules correspond to the requirements listed in the policy document (XCCDF rules are possibly joined together into logically related groups so that the final benchmark is easy to understand).

During the scan evaluation the OpenSCAP toolkit interprets each rule of the XCCDF benchmark (one at a time), using OVAL definitions to compare actual system property values with the expected ones (defined in the particular OVAL check for each of the tests). After evaluating all rules, the partial results of the scan are turned into a final XCCDF results document to present the results of the system audit in a universal form.

In case the comparison of a selected XCCDF rule with the corresponding system property did not meet the policy's expectation (the OVAL check failed), it is possible in a subsequent independent run to correct that particular system property (XCCDF terminology refers to this act as remediation).
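
As a hedged sketch of how such a remediation can be requested from the command line (the profile, benchmark, and results file names below are placeholders, and remediation support requires a sufficiently recent OpenSCAP release), it can either be asked for directly during evaluation:

# oscap xccdf eval --remediate --profile <profile-id> <xccdf-benchmark.xml>

or run later against a previously saved XCCDF results document:

# oscap xccdf remediate --results remediation-results.xml <xccdf-results.xml>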

To lighten the theory, let's provide examples for selected Red Hat products. On Red Hat Enterprise Linux 6 the OpenSCAP toolkit can be installed via the openscap-utils [3] package, and the XCCDF benchmark / SCAP content via the scap-security-guide [4] package:

# yum install openscap-utils scap-security-guide -y

To perform an actual scan:

# oscap xccdf eval --profile stig-rhel6-server \
--report /var/www/html/report.html \
--results /var/www/html/results.xml \
--cpe /usr/share/xml/scap/ssg/content/ssg-rhel6-cpe-dictionary.xml \
/usr/share/xml/scap/ssg/content/ssg-rhel6-xccdf.xml

The above command instructs the oscap tool to perform an evaluation. We request it to work in XCCDF evaluation mode (xccdf eval), use stig-rhel6-server as the XCCDF profile, save the generated HTML XCCDF report document and the XML form of the XCCDF results document into the /var/www/html directory [5], check only rules applicable to Red Hat Enterprise Linux 6 as a product [6], and evaluate the ssg-rhel6-xccdf.xml XCCDF benchmark file.
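
Note that the HTML report does not have to be produced during the scan itself. Assuming an oscap version that offers the generate subcommand, an equivalent report can also be rendered later from the saved XCCDF results document:

# oscap xccdf generate report /var/www/html/results.xml > /var/www/html/report.html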

On the Fedora operating system the OpenSCAP toolkit is available via the openscap-utils package, and the XCCDF benchmark (SCAP content) via the scap-security-guide package:

# yum install openscap-utils scap-security-guide -y

To perform the Fedora system scan:

# oscap xccdf eval --profile common \
--report /var/www/html/report.html \
--results /var/www/html/results.xml \
--cpe /usr/share/xml/scap/ssg/content/ssg-fedora-cpe-dictionary.xml \
/usr/share/xml/scap/ssg/content/ssg-fedora-xccdf.xml

Here we again evaluate an XCCDF benchmark via the oscap tool, use common as the profile, use ssg-fedora-cpe-dictionary.xml to specify that only rules applicable to the Fedora product should be checked, and evaluate the ssg-fedora-xccdf.xml benchmark file.
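
If you are not sure which profiles a particular benchmark offers, recent oscap releases provide an info subcommand that prints, among other details, the available profile identifiers (a quick sketch, output omitted):

# oscap info /usr/share/xml/scap/ssg/content/ssg-fedora-xccdf.xml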

An alternative way to run the aforementioned system scan on the Fedora operating system is to use the scap-workbench [7][8] GUI tool:

# scap-workbench

Ensure the 'Common Profile for General-Purpose Fedora Systems' profile is selected (the default) in the Profile field, and click the Scan button.

Understanding the results of the scan

[Screenshot: scap-workbench]

Since there are various possible XCCDF rule evaluation results (besides plain pass or fail), we will briefly document the differences between them; a sketch of how one such verdict is recorded in the XCCDF results document follows the list:

  • pass – the target system (its relevant component) satisfied all the conditions of the XCCDF rule,
  • fail – the target system (its particular component) did not meet a certain condition of the XCCDF rule. For simple rules (containing a reference to just one OVAL check) this means the relevant system property did not meet its expected value; for compound rules at least one OVAL check of the set did not succeed. The particular system property should be corrected and the scan rerun,
  • error – the checking engine was not able to complete the rule evaluation for some reason (the scanner was run with insufficient privileges etc.). Therefore it is not possible to decide whether the particular system is compliant with the requested policy or not. The reason for the error should be further investigated and corrected, and the scan rerun to obtain a trustworthy report,
  • unknown – a problem different from an error was encountered during rule evaluation (for example the checking engine might have presented a result that was not understood by the testing tool),
  • notapplicable – the particular rule is not applicable to this system (the system component / property scanned by this rule is not present on this system),
  • notchecked – the relevant XCCDF rule does not have its OVAL counterpart defined (therefore it was not possible to obtain the actual state of the system property), or the OVAL check is written in a language not recognized / supported by the checking engine, or the rule was not checked because it depends on the fulfillment of some previous “parent” rule and that parent rule did not evaluate to success earlier,
  • notselected – the particular rule is not selected for evaluation in the XCCDF benchmark,
  • informational – the rule was checked, but the obtained data is meant to be information to share rather than a comparison of an actual system property with an expected policy value,
  • fixed [9] – the rule previously evaluated to failure, but has already been corrected (either by a tool capable of automated remediation or by human intervention).
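
For orientation, this is roughly how one such verdict ends up being recorded inside the TestResult section of the XCCDF results document. The element names follow the XCCDF specification, the rule and check identifiers are taken from the example developed later in this article, and the timestamp is purely illustrative:

<rule-result idref="accounts_password_minlen_login_defs"
 time="2013-09-01T12:00:00" severity="medium">
  <result>fail</result>
  <!-- reference back to the OVAL check that produced this verdict -->
  <check system="http://oval.mitre.org/XMLSchema/oval-definitions-5">
    <check-content-ref name="oval:ssg:def:127" href="ssg-fedora-oval.xml"/>
  </check>
</rule-result>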

Structure of an OVAL system check

Before we focus on describing the way SCAP content (an XCCDF benchmark) is produced, it is necessary to briefly mention the expected internal structure of an OVAL system check.

We have previously mentioned that the OVAL language is the mechanism allowing us to abstract from concrete computer system properties and express them in a unified way, so that they are understandable to the OVAL interpreter on one hand, and usable for producing the final XCCDF results on the other.

The basic concepts from the OVAL language required for implementing system checks are the Unix, Linux, and Independent OVAL test structure descriptions [10]. Each OVAL definition can contain one or more tests that check whether the system is compliant with the desired policy. Looking further at the structure of tests within the Independent set, for example (let's consider the concrete textfilecontent54_test case), it can be seen that each test element consists of an object and a state [11]. Suppose we implement an OVAL check validating whether the minimum length of a user-provided password (defined in /etc/login.defs for passwords managed via the shadow-utils package) meets the required length: the object here would be the /etc/login.defs file, while the state would be the current value of the PASS_MIN_LEN row within that file.

Since we are now familiar with the difference between an OVAL object and state, it is possible to provide and describe an example of a complete OVAL test definition:

<definition id="accounts_password_minlen_login_defs" version="1" class="compliance">
  <metadata>
    <title>Set Password Expiration Parameters</title>
    <affected family="unix">
      <platform>Fedora 19</platform>
    </affected>
    <description>
      The password minimum length should be set appropriately.
    </description>
  </metadata>
  <criteria operator="AND">
    <criterion test_ref="test_etc_login_defs" />
  </criteria>
</definition>

<ind:textfilecontent54_test check="all"
 comment="check PASS_MIN_LEN in /etc/login.defs"
 id="test_etc_login_defs" version="1">
  <ind:object object_ref="object_etc_login_defs" />
  <ind:state state_ref="state_accounts_password_minlen_login_defs" />
</ind:textfilecontent54_test>

<ind:textfilecontent54_object
 id="object_etc_login_defs" version="1">
  <ind:filepath>/etc/login.defs</ind:filepath>
  <ind:pattern operation="pattern match">^PASS_MIN_LEN\s+(\d+)\s*$</ind:pattern>
  <ind:instance datatype="int">1</ind:instance>
</ind:textfilecontent54_object>

<ind:textfilecontent54_state
 id="state_accounts_password_minlen_login_defs" version="1">
  <ind:subexpression operation="greater than or equal"
   var_ref="var_accounts_password_minlen_login_defs"
   datatype="int" />
</ind:textfilecontent54_state>

<external_variable comment="password minimum length" datatype="int"
 id="var_accounts_password_minlen_login_defs" version="1" />

 

The definition itself is encapsulated in a <definition> element. There are five possible values of its class attribute: compliance, inventory, patch, vulnerability, and miscellaneous. We have used compliance as it best suits our purpose. The id should be a unique definition identifier. Once a new definition proposal is reviewed and submitted into the official OVAL Repository, an official id would be assigned. Since this is just a test, we have assigned a temporary id.

The <definition> element contains a <metadata> element to further clarify the title and description (purpose) of the OVAL definition, and the system environment the test is intended for (family and platform).

Once the metadata has been added to the definition, it is time to include the criteria element. The purpose of the criteria element is to join the individual tests together and clearly specify the logical operation (possible AND, OR, XOR, or ONE values of the operator attribute) which should be performed to obtain the final result value. The criteria element contains one or more criterion elements, with an actual reference to the test (test_ref attribute), a possible comment (comment attribute), and an indication whether the result of the test should be negated (negate attribute set to "true") prior to applying the logical operation mentioned before.
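
Purely for illustration, a compound criteria joining two tests could look like the following sketch; the second test id is hypothetical and does not appear in our example definition:

<criteria operator="AND">
  <criterion test_ref="test_etc_login_defs"
   comment="PASS_MIN_LEN in /etc/login.defs is large enough" />
  <criterion test_ref="test_some_other_check" negate="true"
   comment="hypothetical test whose result is negated before the AND is applied" />
</criteria>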

As can be seen, in our example the criterion element references “test_etc_login_defs”, which is the actual id of our textfilecontent54_test. The test contains references to both the textfilecontent54_object and the textfilecontent54_state. The required check="all" attribute determines how many of the existing objects must satisfy the specified state requirements ('all' in our case).

In the textfilecontent54_object definition we specify a subset of the allowed child elements [12], namely that the file path we are interested in is /etc/login.defs, that the operation to be applied against the chunk of that text file is matching a pattern (we also specify the form of that pattern), and that in case a pattern match is found we are interested in the first occurrence (the value of the instance element) [13].

The textfilecontent54_state element is the actual expression of the rule's expectations. In our example we require the value found to be compared via the “greater than or equal” operation against the value of the var_accounts_password_minlen_login_defs variable (whose value is defined outside of our OVAL definition), and that the data type of that variable's value should be an integer number.
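
Before wiring such a definition into an XCCDF rule it can be handy to evaluate the OVAL file on its own. Assuming the snippets above are stored (with the proper OVAL document envelope) in ssg-fedora-oval.xml, something along the following lines should work; note that a definition relying on an external_variable needs the variable value supplied separately (recent oscap releases accept an OVAL variables file for this, see oscap oval eval --help), otherwise its result may be reported as unknown:

# oscap oval eval --results oval-results.xml \
/usr/share/xml/scap/ssg/content/ssg-fedora-oval.xml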

Producing a sample XCCDF rule

Earlier we mentioned that to be able to scan a computer system in an automated way, the following parts are necessary: a tool implementing SCAP and understanding the OVAL and XCCDF concepts, and a previously provided XCCDF benchmark (ssg-rhel6-xccdf.xml and ssg-fedora-xccdf.xml mentioned in the “Workflow of a system scan” section above) against which we could actually scan our system. This section will explain in more detail how an example rule for an XCCDF benchmark is produced.

As we said already, in XCCDF terminology a security policy can be represented in terms of a checklist. A checklist has the form of a benchmark. A benchmark contains particular rules [14], and rules are possibly joined into logically related sections via groups. In the previous section we also provided an example of an OVAL system check that verifies whether the minimum password length the system requires via its /etc/login.defs file is greater than or equal to the value we specify (the value of the var_accounts_password_minlen_login_defs variable). Now let's see how a corresponding XCCDF rule for this OVAL system check would look:

<Rule id="accounts_password_minlen_login_defs" selected="false"
 severity="medium"> 
  <title xml:lang="en-US">
    Set Password Minimum Length in login.defs
  </title>
  <description xmlns:xhtml="http://www.w3.org/1999/xhtml"
   xml:lang="en-US">
    To specify password length requirements for new accounts,
    edit the file <xhtml:code>/etc/login.defs</xhtml:code>
    and add or correct the following lines:
    <pre xmlns="http://www.w3.org/1999/xhtml">PASS_MIN_LEN 12</pre>
    <br xmlns="http://www.w3.org/1999/xhtml"/>
    <br xmlns="http://www.w3.org/1999/xhtml"/>
    Nowadays recommended values, considered as secure by various
    organizations focused on topic of computer security,
    range from <xhtml:code>12 (FISMA)</xhtml:code> up to
    <xhtml:code>14 (DoD)</xhtml:code> characters for password
    length requirements. If a program consults <xhtml:code>
    /etc/login.defs</xhtml:code> and also another PAM module
    (such as <xhtml:code>pam_cracklib</xhtml:code>) during a
    password change operation, then the most restrictive
    must be satisfied. See PAM section for more information about
    enforcing password quality requirements.
  </description>
  <reference
   href="http://csrc.nist.gov/publications/nistpubs/800-53-Rev3/sp800-53-rev3-final.pdf">
    IA-5(f)
  </reference>
  <reference href="http://iase.disa.mil/cci/index.html">
    205
  </reference>
  <rationale xmlns:xhtml="http://www.w3.org/1999/xhtml" xml:lang="en-US">
    Requiring a minimum password length makes password cracking
    attacks more difficult by ensuring a larger search space.
    However, any security benefit from an onerous requirement
    must be carefully weighed against usability problems, support
    costs, or counterproductive behavior that may result.
  </rationale>
  <check system="http://oval.mitre.org/XMLSchema/oval-definitions-5">
    <check-export export-name="oval:ssg:var:153"
     value-id="var_accounts_password_minlen_login_defs"/>
    <check-content-ref name="oval:ssg:def:127" href="ssg-fedora-oval.xml"/>
  </check>
</Rule>

 

The rule definition starts with a unique id. The value of the selected attribute says that this rule would not be selected to run by default, and the severity level is used for metrics and tracking (it can be one of unknown, info, low, medium, or high).

The rule's definition then continues with a title (short rule summary), a description (longer description), possible references, and a rationale (clarifying the purpose of the rule).

The aforementioned OVAL system check is linked with this rule via the <check> element. The system attribute of the check element specifies the lower-level system language the OVAL check is expected to be written in.

The presence of the check-export element indicates that this OVAL check uses a certain OVAL variable (var_accounts_password_minlen_login_defs in our example, exported under the OVAL-internal name oval:ssg:var:153). The check-content-ref element then says that the check definition itself can be found in the ssg-fedora-oval.xml file present on the local system, and that oval:ssg:def:127 is the OVAL-internal name of the accounts_password_minlen_login_defs OVAL system check we defined above.

For this definition to be complete, we still need to provide the definition of the var_accounts_password_minlen_login_defs variable:

<Value id="var_accounts_password_minlen_login_defs" type="number">
  <title xml:lang="en-US">minimum password length</title>
  <description
   xmlns:xhtml="http://www.w3.org/1999/xhtml" xml:lang="en-US">
    Minimum number of characters in password
  </description>
  <warning xmlns:xhtml="http://www.w3.org/1999/xhtml"
   xml:lang="en-US" override="false" category="general">
    This will only check new passwords
  </warning>
  <value>12</value>
  <value selector="6">6</value>
  <value selector="8">8</value>
  <value selector="10">10</value>
  <value selector="12">12</value>
  <value selector="14">14</value>
</Value>  

In the definition we specify the variable to be numeric and to have an actual value of '12'. We use the selector attribute to restrict the allowed values for this variable to one of '6', '8', '10', '12', or '14'. XCCDF profiles can then be used to assign actual values to OVAL variables (various profiles can define different values for a particular variable), as sketched below.
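
As a final hedged sketch, a hypothetical profile could both select our rule and pick one of the allowed values through its selector (the profile id and title are made up for this example):

<Profile id="sample-strict-profile">
  <title>Sample strict profile</title>
  <!-- enable the rule that is not selected by default -->
  <select idref="accounts_password_minlen_login_defs" selected="true"/>
  <!-- pick the value tagged with selector="14" for the variable -->
  <refine-value idref="var_accounts_password_minlen_login_defs" selector="14"/>
</Profile>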

That's it for today's lesson. Next time we will talk more about how to combine particular XCCDF rules into groups, and groups into the final benchmark file, and also describe more of the functionality available to simplify the process of writing SCAP content (functionality and process that is currently used for the creation of SCAP content for the Red Hat Enterprise Linux 6 and Fedora operating systems). Hope you enjoyed the reading.


[1] The possibilities of the SCAP protocol are not limited to the OVAL and XCCDF concepts, but for the purpose of our article these two are sufficient. Refer to the SCAP protocol web page for further information.
[2] For brevity we have described selected XCCDF benchmark parts in a simplified way. For further details refer to the XCCDF specification.
[3] openscap-utils package is available via Red Hat Network.
[4] scap-security-guide package is available via EPEL 6 repository.
[5] HTML form of XCCDF results document is suitable for subsequent human review of results, while XML form of the XCCDF results for later machine evaluation and processing.
[6] CPE stands for Common Platform Enumeration and is another concept roofed by the SCAP protocol.
[7] scap-workbench tool is available after installation of the scap-workbench RPM package.
[8] Refer to scap-workbench upstream page for further information regarding scap-workbench, and videos demonstrating scap-workbench's scanner and editor in action.
[9] The scap-workbench tool reports previously failing rules that have already been corrected as passed. The fixed state is listed here to pinpoint the slight difference between originally passing rules and subsequently corrected rules.
[10] There are many more data models (like the OVAL directives, OVAL system characteristics, OVAL variables, or OVAL results constructs) available in the OVAL language specification, but for the purpose of our article focusing on OVAL tests is sufficient.
[11] It is actually possible to define an OVAL test containing just an OVAL object, but for simplicity we will consider that each OVAL test contains both (object and state) fields, since the majority of OVAL tests contain both of them.
[12] See child elements of <textfilecontent54_object> section in https://oval.mitre.org/language/version5.10.1/ovaldefinition/documentat…
[13] The instance entity calls out a specific match of the pattern. The first match is given an instance value of 1, the second match the value of 2, and so on.
[14] Besides rules, a benchmark can contain various other elements, but for simplicity we will consider only rules and groups.