Technologies come and go, but one concept has remained at the forefront of IT conversations for decades: Security. While security remains a perennial top priority for IT departments, what it means to be secure, and the processes required, continue to change. Let's talk about how security, open source, and cloud computing can co-exist.
Open doesn't mean insecure
It may seem counterintuitive, but open source code isn't inherently less secure than proprietary code. Security by obscurity can be a good practice in some cases (the less attackers know about your organization's IT architecture, the better), but hiding source code isn't going to prevent attackers from finding vulnerabilities.
Access to source code does give interested parties more opportunity to find, and fix, vulnerabilities.
Most organizations have moved beyond fears of open source, and have enthusiastically embraced letting developers consume and ship open source code. We're the first to applaud that, but with a note of caution.
Trust, but verify
Developers, by and large, aren't security experts. Organizations need safeguards around consuming open source: they need to understand the provenance of the code they use and work to improve its security. They also need to be able to track its deployment and respond when vulnerabilities are found upstream or in a product based on an open source project.
Organizations need a method of inventorying the usage of open source in their environment, and the ability to assess the security impact of vulnerabilities. They also need infrastructure in place that notifies them when vulnerabilities are discovered and when mitigations or fixes become available, so they can act quickly.
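As an illustration of what that inventory check could look like, the sketch below compares a list of deployed components against a local advisory feed. The component names, versions, and advisory IDs are all hypothetical; a real setup would pull from a software bill of materials and a live vulnerability feed.

```python
# Hypothetical sketch: check an inventory of open source components
# against an advisory feed. All names and versions are illustrative.

# Inventory of components deployed in the environment: name -> version
inventory = {
    "libexample": "1.2.3",
    "webframework": "4.0.1",
    "parserlib": "2.9.0",
}

# Advisory feed: name -> (affected version, advisory id)
advisories = {
    "webframework": ("4.0.1", "EXAMPLE-2024-0001"),
    "otherlib": ("0.9.9", "EXAMPLE-2024-0002"),
}

def affected_components(inventory, advisories):
    """Return (name, version, advisory id) for each deployed
    component that matches an advisory in the feed."""
    hits = []
    for name, version in inventory.items():
        if name in advisories and advisories[name][0] == version:
            hits.append((name, version, advisories[name][1]))
    return hits

for name, version, advisory in affected_components(inventory, advisories):
    print(f"{name} {version} is affected by {advisory}")
```

The point isn't the code itself but the capability: knowing, at any moment, which advisories actually touch something you run.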
It's also important to know which vulnerabilities actually matter, and which ones just have flashy names and logos that -- while entertaining, catchy, and beneficial to the security researcher who presented the vulnerability at Black Hat -- may not actually pose a significant threat. Just because it's a named vulnerability doesn't mean it's something to worry about.
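To make that triage concrete, here is a simple, hypothetical prioritization pass. It weighs severity against whether the vulnerable component is actually deployed and exposed; a catchy name carries no weight at all. The scoring rules and vulnerability records are assumptions for illustration only.

```python
# Hypothetical triage sketch: rank vulnerabilities by what matters
# (severity, deployment, exposure), not by whether they have a name.

vulnerabilities = [
    # (id, cvss_score, deployed_here, network_exposed, has_catchy_name)
    ("EXAMPLE-2024-1111", 9.8, True, True, False),
    ("BRANDEDBUG",        5.3, False, False, True),
    ("EXAMPLE-2024-2222", 7.5, True, False, False),
]

def priority(vuln):
    _, cvss, deployed, exposed, _ = vuln
    if not deployed:
        return 0.0        # not in our environment: no action needed
    score = cvss
    if exposed:
        score += 2.0      # reachable from the network: raise priority
    return score

ranked = sorted(vulnerabilities, key=priority, reverse=True)
for vuln in ranked:
    print(vuln[0], priority(vuln))
```

Note that the branded bug with the logo lands at the bottom of the list, because it isn't deployed anywhere in this (fictional) environment.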
Bring security in early
As organizations adopt new technologies, such as moving into the public cloud or adopting containers and Kubernetes, it's important to bring security into the conversation early.
The security team should be involved in deciding which cloud technologies to adopt, how to secure a hybrid cloud environment, and how to set up automation to help avoid misconfigurations. Part of mitigating risks is to "shift left": identify risks early and try to prevent them before they creep into your environment and applications.
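As a sketch of what "shifting left" can look like in practice, a pipeline step might reject a container spec before deployment if it requests privileged mode or runs as root. The manifest structure below is deliberately simplified and illustrative, not a real Kubernetes API object.

```python
# Hypothetical shift-left check: fail the pipeline if a (simplified)
# container spec asks for privileged mode or runs as root.

def manifest_violations(manifest):
    """Return a list of policy violations found in the manifest."""
    violations = []
    for container in manifest.get("containers", []):
        ctx = container.get("securityContext", {})
        if ctx.get("privileged", False):
            violations.append(f"{container['name']}: privileged mode requested")
        if ctx.get("runAsUser") == 0:
            violations.append(f"{container['name']}: runs as root")
    return violations

manifest = {
    "containers": [
        {"name": "web",
         "securityContext": {"privileged": False, "runAsUser": 1001}},
        {"name": "sidecar",
         "securityContext": {"privileged": True, "runAsUser": 0}},
    ]
}

for v in manifest_violations(manifest):
    print("BLOCKED:", v)
```

Running a check like this in the application pipeline means the misconfiguration never reaches the cluster, which is the whole point of shifting left.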
Before you stand up applications in the cloud or on-premises, you need a strategy for data security. Where is your data? How are you managing it? What compliance and governance policies apply? You need to be thoughtful about where sensitive data resides.
And make use of the public cloud service providers' security tooling. Public cloud providers have data tagging, data notification and key management services that you can use to help better protect your data.
Don't let the fact that you're using cloud services distract you from the features of your operating system, either. Make use of things like Red Hat Enterprise Linux's access control lists, encryption, and other data security features, whether your RHEL instances live in the cloud or on-prem.
The "defend the castle" perimeter security strategies you've used successfully before may not apply now. On-prem solutions that, for instance, track IP addresses and other traditional tooling don't make sense in a cloud-native environment with containers that might have a shelf life of minutes or seconds.
Similarly, some of the compliance frameworks created 20+ years ago have security policies that assume you do security after the server is already deployed. In a cloud-native environment, however, security happens before deployment, in the application pipeline. As a result, auditors working from an outdated policy may incorrectly insist that you install anti-virus or other third-party, agent-based security software. That doesn't make sense in a containerized environment where the container host itself is immutable.
Automation and management
Security and automation go hand in hand. Cloud infrastructure is valuable, in part, because it can be automated.
Automation isn't just valuable for doing things quickly and efficiently, it's valuable because you can use automation to apply processes and policies at a wide scale for improved security and compliance. It's valuable because you can reduce human errors, such as misconfiguration issues that give attackers a potential toehold into your environment. In fact, misconfiguration issues are one of the top causes of data breaches in the cloud.
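As an illustration, the sketch below applies the same misconfiguration checks to every resource in a (fictional) inventory: publicly readable storage and firewall rules that open unexpected ports to the world. The resource records and field names are assumptions, not any real cloud provider's API, but the pattern of codified policy applied at scale is the point.

```python
# Hypothetical audit sketch: apply the same misconfiguration checks
# to every resource at scale. Resource records are illustrative.

resources = [
    {"name": "logs-bucket", "type": "storage", "public_read": True},
    {"name": "app-bucket",  "type": "storage", "public_read": False},
    {"name": "bastion-sg",  "type": "firewall", "open_ports": [22]},
    {"name": "web-sg",      "type": "firewall", "open_ports": [443]},
]

def audit(resources, allowed_open_ports=frozenset({443})):
    """Flag publicly readable storage, and firewall rules that open
    ports outside the allowed set."""
    findings = []
    for r in resources:
        if r["type"] == "storage" and r.get("public_read"):
            findings.append(f"{r['name']}: storage is publicly readable")
        if r["type"] == "firewall":
            for port in set(r.get("open_ports", [])) - allowed_open_ports:
                findings.append(f"{r['name']}: port {port} open to the world")
    return findings

for finding in audit(resources):
    print(finding)
```

Because the policy lives in code, it runs identically against ten resources or ten thousand, and every finding is reproducible for auditors.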
Make sure to have a consistent automation strategy across application development, infrastructure operations, and security for repeatability, consistency, and auditability.
Culture is key
Finally, all of the tools in the world won't help if you don't have a culture of cross-collaboration and a practice of viewing security as a process. Work together to develop and hone your organizational and individual skills, to automate "all the things" and solve security challenges together as a team.
We have the tools to address security across the hybrid cloud, but we have to choose to pick them up and use them in a way that makes sense in the hybrid cloud world. Security should be treated as a team sport, not something thrown over the wall and handled at the 11th hour with panicked pen-testing and security scans. A last-minute approach invites mistakes; it's human nature to make them under that kind of pressure. Addressing security early in the application and infrastructure life cycle is key to successfully tackling security in the hybrid cloud world.