Security professionals and organizations such as the US National Institute of Standards and Technology have developed more complex models that include additional aspects of security such as risk assessment, timeliness, physical possession, legality, and utility. A complete treatment of security best practices is beyond the scope of this paper. Rather, we’ll focus on important or novel technologies and practices relating to hybrid architectures, cloud-native infrastructures, application development using DevOps approaches, and commercial open source security vulnerability response.
It’s a measure of the seriousness and sophistication of attacks that strategic chief information security officer (CISO) positions are becoming more common and that incident response plans are starting to look more like those associated with actual firefighting. There are several reasons for this.
The first is that security needs to be approached in the context of the business as opposed to just a technology problem. This means, for example, defining the business’ risk appetite in terms of loss tolerance. A credit card issuer knows that it’s going to have losses due to fraud. Preventing fraud entirely would make using credit cards so onerous that no one would use them. Instead, the card issuers put sufficient controls in place to keep losses at an acceptable level, while minimizing the overall impact on the user experience.
Another reason security is taken more seriously is that, as with a fire or a car accident, minutes count. Roles, responsibilities, and processes must be established ahead of time. Technical expertise matters, but so does having clear communication plans to share information with those potentially affected by the incident and with broader constituencies such as the press.
GETTING STARTED WITH SECURITY
Security may aspire to stability and safety, but in practice it is driven by the need to keep assets from being compromised throughout their life cycle. Whatever complexities today’s IT architectures and external threat environment may add, it’s still good to start with time-tested technologies and practices that you can extend into today’s world.
Open source offers a case in point. The open development model allows entire industries to agree on standards and encourages their brightest developers to continually test and improve technology. Developing software in collaboration with users from a range of industries, including government and financial services, provides valuable feedback that guides security-related discussions and product feature implementations. No one can solve IT security issues alone. Collaborating with communities to solve problems is the future of technology.
Linux has been the beneficiary of a wide range of security-related technologies built using the open source model. These include:
- A dynamically managed firewall.
- SELinux for mandatory access controls.
- A wide range of userspace and kernel hardening features.
- Identity management and access control.
- SHA-512 based password hashes.
- File system encryption.
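The password-hashing entry above refers to the SHA-512-based crypt scheme used for entries in /etc/shadow. As a rough illustration of the underlying principle (a unique per-user salt combined with many hash iterations), here is a hedged Python sketch using the standard library’s PBKDF2; note that this is not the actual glibc sha512-crypt algorithm, only the same idea in simplified form:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=100_000):
    """Salted, iterated SHA-512 derivation of a password.

    Illustrative only: the $6$ scheme in /etc/shadow uses glibc's
    sha512-crypt, which differs in detail from the PBKDF2 used here.
    """
    salt = salt or os.urandom(16)  # unique per-user salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, rounds)
    return salt, digest

def verify_password(password, salt, digest, rounds=100_000):
    candidate = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, rounds)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

The salt means two users with the same password get different stored hashes, and the iteration count makes brute-force guessing proportionally more expensive.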
Furthermore, the open source development process means that when vulnerabilities are found, the entire community of developers and vendors can work together to update code, security advisories, and documentation in a coordinated manner.
Red Hat Enterprise Linux, the IT foundation in some of the most regulated and sensitive industries, has incorporated open source security advances in predictable, consumable ways. These same processes and practices apply across hybrid cloud infrastructures as the role of the operating system evolves and expands to include new capabilities like Linux containers. Furthermore, components are increasingly reused in the form of microservices and other loosely coupled architectures that interact through application programming interfaces (APIs). So maintaining trust in the provenance of the components and dependencies that make up applications becomes more important, not less.
OPERATIONALIZING SECURITY
Historically, security was often approached as a centralized function. An organization might have established a single source of truth for user, machine, and service identities across an entire environment, describing the information those identities were authorized to access and the actions they were allowed to perform.
Today, the situation is often more complicated. It’s still important to have access control policies that govern user identities, delegating authority as appropriate and establishing trusted relationships with other identity stores as needed. However, application components running on top of Linux or other operating environments may be subject to multiple authorization systems and access control lists.
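As a purely hypothetical sketch of what being “subject to multiple authorization systems” can mean in practice, the following Python fragment grants access only when a centrally managed role store and a component-local access control list both agree; all names and data here are invented for illustration:

```python
# Hypothetical data: a central identity/role store plus a per-resource ACL,
# standing in for the multiple authorization systems a component may face.
CENTRAL_ROLES = {"alice": {"developer"}, "bob": {"auditor"}}
RESOURCE_ACL = {
    "billing-db": {"read": {"developer", "auditor"}, "write": {"developer"}},
}

def is_authorized(user, resource, action):
    """Allow an action only if a centrally granted role is also permitted
    by the resource's local access control list."""
    roles = CENTRAL_ROLES.get(user, set())
    allowed_roles = RESOURCE_ACL.get(resource, {}).get(action, set())
    return bool(roles & allowed_roles)
```

The intersection check captures the key point: neither system alone is sufficient, so policies in both must be kept consistent and auditable.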
It’s important to have insight into and control over such complex hybrid and heterogeneous environments. For example, real-time monitoring and enforcement of policies can address performance and reliability issues before they become serious, and can also detect and mitigate potential compliance issues. Automating in this way reduces the sysadmin work required; it also documents processes and replaces error-prone manual procedures. Human error is consistently cited as a major cause of security breaches and outages.
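A minimal sketch of such policy monitoring, with invented policy keys and host state, might compare a host’s observed configuration against desired values and report violations for remediation:

```python
# Hypothetical policy: desired values for a few security-relevant settings.
POLICY = {
    "sshd.permit_root_login": "no",
    "selinux.mode": "enforcing",
}

def compliance_violations(observed_state):
    """Return the policy keys whose observed value differs from the
    desired value, as a monitoring loop might before remediating."""
    return sorted(key for key, want in POLICY.items()
                  if observed_state.get(key) != want)

drifted_host = {"sshd.permit_root_login": "yes", "selinux.mode": "enforcing"}
```

Real tooling layers scheduling, reporting, and automated remediation on top of this basic compare-and-flag loop.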
Operational monitoring and remediation needs to continue throughout the life cycle of a system. It starts with provisioning. As with other aspects of ongoing system management, it’s important to maintain complete reporting, auditing, and change history.
The need for security policies and plans doesn’t end when an application is retired. The ownership and policies pertaining to the data associated with an application need to be well understood so that the proper steps can be taken to comply with retention requirements and the sanitization of personally identifiable information (PII).
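To make the retirement step concrete, here is a hedged Python sketch of sanitizing PII from records before they enter a retention archive; the field classification and record shape are assumptions for illustration:

```python
# Assumed classification: which fields in an application's records count as PII.
PII_FIELDS = {"name", "email", "ssn"}

def sanitize_record(record):
    """Return a copy of the record with PII fields masked, so the remaining
    data can satisfy retention requirements without retaining PII."""
    return {key: ("<redacted>" if key in PII_FIELDS else value)
            for key, value in record.items()}

order = {"order_id": 1001, "email": "jane@example.com", "total": 49.95}
```

In practice the classification itself is the hard part, which is why data ownership and policy must be settled well before an application is retired.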
With traditional long-lived application instances, maintaining a secure infrastructure also meant analyzing and automatically correcting configuration drift to enforce the desired host end-state. This is often still an important requirement. However, with the increased role that large numbers of short-lived “immutable” instances play in cloud-native environments, it’s equally important to build security in from the start. For example, you may establish and enforce rule-based policies around the services enabled in the layers of a containerized software stack.
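Such a build-time rule might be sketched as follows, in a hypothetical Python check that rejects an image whose declared services fall outside an allowlist; the manifest format and service names are invented for illustration:

```python
# Hypothetical per-stack policy: the only services an image may enable.
ALLOWED_SERVICES = {"httpd", "php-fpm"}

def policy_check(image_manifest):
    """Return (passed, disallowed_services) for a container image,
    as a build pipeline might before admitting it to a registry."""
    enabled = set(image_manifest.get("enabled_services", []))
    disallowed = enabled - ALLOWED_SERVICES
    return (len(disallowed) == 0, sorted(disallowed))

manifest = {"enabled_services": ["httpd", "sshd"]}
```

Because the instance is immutable, a failed check blocks the build rather than triggering runtime remediation.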
Taking a risk management approach to security goes beyond putting an effective set of technologies in place. It also requires considering the software supply chain and having a process in place to address issues quickly.
For example, it’s important to validate that software components come from a trusted source. Containers, an agile and streamlined model for application delivery, provide a case in point. Containers are a simple and efficient way to assemble, distribute, and deploy software. This very simplicity can turn into a headache if IT doesn’t ensure that all software comes from trusted sources and meets the highest standards of security and supportability.
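One basic building block of provenance checking is verifying that a fetched artifact matches the digest published by its trusted source. A minimal Python sketch (production container tooling also verifies cryptographic signatures, not just digests):

```python
import hashlib

def verify_digest(data, expected_sha256):
    """Check that downloaded bytes match the digest published by a
    trusted source before the artifact is used."""
    return hashlib.sha256(data).hexdigest() == expected_sha256
```

A mismatch means the artifact was corrupted or tampered with in transit and must be rejected.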
As described earlier, incident response goes well beyond patching code. However, a nimble software deployment platform and process with integrated testing is still an important part of quickly fixing problems (as well as reducing the amount of buggy code that gets pushed into production). A continuous integration/continuous delivery (CI/CD) pipeline that is part of an iterative, automated DevOps software delivery process means that modular code elements can be systematically tested and released in a timely fashion. Furthermore, explicitly folding security processes into the software deployment workflow makes security an ongoing part of software development—rather than just a gatekeeper blocking the path to production.
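The idea of folding security into the delivery workflow can be sketched as a pipeline in which a security scan is an ordinary stage that fails the build, rather than a late gatekeeping step; everything below (stage names, change format) is a hypothetical illustration:

```python
# Hypothetical CI/CD sketch: run stages in order, stop at the first failure.
def run_pipeline(change, stages):
    for name, stage in stages:
        if not stage(change):
            return f"failed: {name}"
    return "released"

def unit_tests(change):
    return change.get("tests_pass", False)

def security_scan(change):
    # A nonempty findings list fails the build just like a failing test.
    return not change.get("findings", [])

PIPELINE = [("unit-tests", unit_tests), ("security-scan", security_scan)]
```

Treating scan findings like test failures gives developers fast feedback and keeps vulnerable code from reaching production in the first place.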
GOVERNANCE AND COMPLIANCE ACROSS HYBRID CLOUDS
While reflexive fears about a lack of security in public clouds may be naive, public and hybrid clouds do introduce risk and compliance considerations and challenges that are different from concerns you have with traditional on-premises datacenters. It’s important to understand which areas you still maintain responsibility for when using public clouds. For example, in the case of Infrastructure-as-a-Service (IaaS), you need to exercise the same care in sourcing and maintaining your operating system and applications as you do when running them on-premises.
A variety of frameworks can help IT executives and architects evaluate and mitigate the risk associated with using public cloud providers. A good example is the Cloud Controls Matrix (CCM) from the Cloud Security Alliance (CSA).
The CSA CCM provides a controls framework across 16 domains, including:
- Business continuity management and operational resilience.
- Encryption and key management.
- Identity and access management.
- Mobile security.
- Threat and vulnerability management.
CCM v3.0.1 defines 133 controls and maps the relationship between each control and other industry-accepted security standards, regulations, and controls frameworks such as ISO 27001/27002, ISACA COBIT, PCI, NIST, Jericho Forum, and NERC CIP.
Using the CCM as a reference framework, Red Hat products and partnerships are most relevant in these domains:
- Change control and configuration management.
- Data security and information life-cycle management.
- Encryption and key management.
- Identity and access management.
- Infrastructure and virtualization security.
- Interoperability and portability.
Red Hat also works with partners in all these areas and provides support for other domains, such as threat and incident management, by providing effective and timely responses to exploits as they are discovered.
Service design for delivery through hybrid architectures can also be informed by more traditional IT methodologies. For example, the IT Infrastructure Library (ITIL) Service Strategy is one of five ITIL life-cycle modules. It can guide you through designing, developing, and implementing a service provider strategy that aligns with an organizational strategy. Thus, ITIL practices can be used to help design appropriate, complete services for hybrid IT.
From a technology perspective, a key component of governance and compliance is a policy-based hybrid cloud management platform (CMP) like Red Hat CloudForms®. An effective CMP provides access to service catalogs with role-delegated automated provisioning, quota enforcement, and chargeback across virtualization and cloud platforms. It supports complex policy-based task and resource orchestration and automation to help ensure service availability and performance. All this helps IT maintain control of applications and infrastructure capacity.