We often compare the security of containers to that of virtual machines and ask ourselves, "Which is more secure?" I have argued for a while now that comparing containers to virtual machines is really a false premise - we should instead be comparing containers to regular Linux processes, and treating containers and virtual machines as different points on the same isolation scale.
We aren’t forced to get rid of virtual machines when we run containers. Containers can be run in conjunction with virtual machines in three ways, which is why the either/or comparison is a straw man:
- Containers inside of virtual machines.
- Containers in some places, virtual machines in others (the side-by-side scenario the comparison assumes).
- Virtual machines in containers (yes, you can do this).
The Premise
We can run workloads using any of the three techniques as listed above... so forcing a security comparison isn’t exactly "natural". I would argue that it's more "natural" to think about the tenancy requirements of the workloads and the "amount" of isolation required.
The Tenancy Scale
What is the Tenancy Scale? It's the result of a brainstorming session with the leader of Red Hat's product security team, Josh Bressers.
I'm not sure everyone remembers this now, but when I started college (back in 1997), multi-user Unix systems were still "all the rage". Individual users would telnet, yes telnet, into a Unix server and each user would run their own processes. Some users would run research batch jobs, while others would run their own web servers, or use the system’s shared web server. When you logged into the system, you could do a process list and view everybody’s processes. In fact, if a given user had the permissions on their home directory set wrong, you could even get into their personal files. Crazy times.
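To make that concrete, here is a minimal sketch (mine, not anything from that old environment) that walks /proc on a Linux host and prints every process along with the user who owns it. Assuming a default /proc mount (no hidepid option), any unprivileged user on a shared box can run it and see everyone else's work:

```python
# Minimal sketch: enumerate every process on a Linux host by walking /proc,
# printing the owning user and command line. With a default /proc mount
# (no hidepid option), an unprivileged user sees every other user's processes.
import os
import pwd


def owner(pid: str) -> str:
    """Look up the username for a PID via the real UID in /proc/<pid>/status."""
    try:
        with open(f"/proc/{pid}/status") as status:
            for line in status:
                if line.startswith("Uid:"):
                    real_uid = int(line.split()[1])  # fields: real, effective, saved, fs
                    return pwd.getpwuid(real_uid).pw_name
    except (OSError, KeyError):
        pass  # process exited, or the UID has no passwd entry
    return "?"


def cmdline(pid: str) -> str:
    """Return the command line for a PID, with a fallback for kernel threads."""
    try:
        with open(f"/proc/{pid}/cmdline", "rb") as raw:
            text = raw.read().replace(b"\x00", b" ").decode(errors="replace").strip()
            return text or f"[kernel thread or empty cmdline, pid {pid}]"
    except OSError:
        return f"[pid {pid} exited]"


if __name__ == "__main__":
    pids = sorted((entry for entry in os.listdir("/proc") if entry.isdigit()), key=int)
    for pid in pids:
        print(f"{owner(pid):<12} {pid:>7}  {cmdline(pid)}")
```

PID namespaces are what take that visibility away - give each tenant their own container and they stop seeing each other's processes - but containers still share the host kernel, which is why the scale keeps sliding upward.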
In 2016, not many systems administrators would consider regular Linux process isolation enough to allow multiple users to log into a system - especially if those tenants worked for different organizations or were private individuals.
But, hypothetically, let’s say that I am a systems administrator for a university and I have different research teams that want to run jobs. Let’s say I have one group running biology computations and another group running geology computations - would containers be enough isolation? I would argue, yes.
In another hypothetical scenario, I am a systems administrator working for a public cloud provider and I have users from different companies, government organizations, and research facilities all wanting to share physical resources. Containers probably wouldn’t provide enough isolation by themselves. I would argue that we should slide up the scale to virtual machines for isolation.
With those two hypothetical situations out of the way, what are some of the next things a security-conscious end user will ask for?
- Can you add anti-affinity rules to make sure that my workloads run in different virtual machines on different physical machines (there's a sketch of what this can look like after this list)?
- Can you make sure that those different physical machines are in different racks, so that they use different power distribution units (PDUs) and different rack switches?
- Can you make sure that two copies of my workload run in two different data centers that are affected by different weather and earthquake patterns (note: I worked at a data center and customers really did ask this question)?
- Can you have one workload run on the moon in case the earth gets blown up?
OK, I made the last one up, but I think you get the point! ;-)
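Taking the first question as an example: on a Kubernetes-based platform (my choice of example, not something the scenarios above require), that anti-affinity ask is just scheduler configuration. Below is a hypothetical sketch using the Kubernetes Python client - the labels, image, namespace defaults, and topology key are illustrative assumptions on my part - that keeps replicas of the same workload from landing on the same node:

```python
# Hypothetical sketch: a pod spec with a hard anti-affinity rule so that two
# replicas of the same workload never land on the same node. The labels,
# names, and image below are made up for illustration.
import json

from kubernetes import client  # pip install kubernetes


def anti_affinity_pod(name: str, topology_key: str = "kubernetes.io/hostname") -> client.V1Pod:
    """Build a pod that refuses to schedule into a topology domain (node, zone,
    rack - whatever the topology_key label describes) that already runs a pod
    carrying the same app label."""
    labels = {"app": "sensitive-workload"}  # illustrative label
    anti_affinity = client.V1PodAntiAffinity(
        required_during_scheduling_ignored_during_execution=[
            client.V1PodAffinityTerm(
                label_selector=client.V1LabelSelector(match_labels=labels),
                topology_key=topology_key,  # e.g. topology.kubernetes.io/zone for zone-level spread
            )
        ]
    )
    return client.V1Pod(
        api_version="v1",
        kind="Pod",
        metadata=client.V1ObjectMeta(name=name, labels=labels),
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(name="worker", image="registry.example.com/worker:latest")
            ],
            affinity=client.V1Affinity(pod_anti_affinity=anti_affinity),
        ),
    )


if __name__ == "__main__":
    # Render the manifest locally; actually applying it would need cluster credentials.
    pod = anti_affinity_pod("sensitive-workload-0")
    print(json.dumps(client.ApiClient().sanitize_for_serialization(pod), indent=2))
```

Swap the topology key for a zone or rack label and the same mechanism answers the "different racks" and "different data centers" questions - provided, of course, that whoever runs the platform actually labels their nodes that way.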
Conclusion
Isolation and tenancy are granular needs. Typically, a workload needs “enough” isolation. What is enough? Well, due diligence is different for every application.
I would argue, let’s stop comparing virtual machines and containers and start thinking about how we can use them together to achieve enough isolation to meet a given workload's integrity requirements.
Questions? Feedback? Reach out using the comments section (below).