We often compare the security of containers to that of virtual machines and ask ourselves "...which is more secure?" I have argued for a while now that comparing containers to virtual machines is really a false premise - we should instead be asking how much isolation a given workload actually needs.
We aren't forced to get rid of virtual machines when we run containers. Containers and virtual machines can be combined in three ways - so the either/or comparison is a straw man:
- Containers inside of virtual machines.
- Containers in some places, virtual machines in others (the side-by-side scenario the "versus" comparison assumes).
- Virtual machines in containers (yes, you can do this).
The Premise
We can run workloads using any of the three techniques listed above... so forcing a security comparison isn't exactly "natural". I would argue that it's more "natural" to think about the tenancy requirements of the workloads and the "amount" of isolation they require.
The Tenancy Scale
What is the Tenancy Scale? It's the result of a brainstorming session with Josh Bressers, the leader of Red Hat's product security team.
I'm not sure everyone remembers this now, but when I started college (back in 1997), multi-user Unix systems were still "all the rage". Individual users would telnet - yes, telnet - into a Unix server, and each user would run their own processes. Some users would run research batch jobs, while others would run their own web servers or use the system's shared web server. When you logged into the system, you could do a process list and view everybody's processes. In fact, if a given user had the permissions on their home directory set wrong, you could even get into their personal files. Crazy times.
In 2016, not many systems administrators would consider regular Linux process isolation enough to allow multiple users to log into a system - especially if those tenants worked for different organizations or were private individuals.
But, hypothetically, let's say that I am a systems administrator for a university and I have different research teams that want to run jobs. Let's say I have one group running biology computations and another group running geology computations - would containers be enough isolation? I would argue, yes.
In another hypothetical scenario, I am a systems administrator working for a public cloud provider and I have users from different companies, government organizations, and research facilities all wanting to share physical resources. Containers probably wouldn’t provide enough isolation by themselves. I would argue that we should slide up the scale to virtual machines for isolation.
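For a sense of what separates those rungs on the scale: a container is, at its heart, a regular Linux process wrapped in extra kernel isolation primitives (namespaces, cgroups, seccomp, SELinux). Here's a minimal sketch, assuming Linux, Python 3.12+ (for os.unshare), and a kernel that permits unprivileged user namespaces, showing the smallest of those primitives in action:

```python
# A minimal sketch of one namespace layer that containers add on top
# of plain process isolation. Assumes Linux and Python 3.12+.
import os

print("uid as a plain process:", os.getuid())

# Move this process into a new user namespace. Until a UID mapping is
# written for the namespace, the kernel reports the overflow UID
# (usually 65534, "nobody"), so the process no longer shares the
# host's view of user identity.
os.unshare(os.CLONE_NEWUSER)

print("uid inside a new user namespace:", os.getuid())
```

A real container runtime stacks several more namespaces (PID, mount, network) on top of this, which is exactly why the process-list and home-directory problems from my 1997 story don't apply between containers.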
With those two hypothetical situations out of the way, what are some of the next questions a security-conscious end user will ask?
- Can you add anti-affinity rules to make sure that my workloads run in different virtual machines on different physical machines (there's a sketch of this right after the list)?
- Can you make sure that those different physical machines are in different racks, so that they use different power distribution units (PDUs) and different rack switches?
- Can you make sure that two copies of my workload run in two different data centers that are affected by different weather and earthquake patterns (note: I worked at a data center and customers really did ask this question)?
- Can you have one workload run on the moon in case the earth gets blown up?
OK, I made the last one up, but I think you get the point! ;-)
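Today, the first two questions map onto scheduler placement rules rather than manual wrangling. As a rough sketch (one way among several), here is what a required pod anti-affinity rule looks like using the Kubernetes Python client; the workload name, labels, and image are hypothetical placeholders, and you would swap topology_key for a zone- or rack-level node label to answer the rack question:

```python
# A sketch of Kubernetes pod anti-affinity with the official Python
# client (pip install kubernetes). Workload name, labels, and image
# are hypothetical placeholders.
from kubernetes import client

# "Never schedule two pods labeled app=my-workload onto the same
# node." For rack- or zone-level spreading, change topology_key to a
# broader node label (e.g. topology.kubernetes.io/zone, or a custom
# rack label your nodes carry).
spread_rule = client.V1Affinity(
    pod_anti_affinity=client.V1PodAntiAffinity(
        required_during_scheduling_ignored_during_execution=[
            client.V1PodAffinityTerm(
                label_selector=client.V1LabelSelector(
                    match_labels={"app": "my-workload"}
                ),
                topology_key="kubernetes.io/hostname",
            )
        ]
    )
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="my-workload-a", labels={"app": "my-workload"}
    ),
    spec=client.V1PodSpec(
        affinity=spread_rule,
        containers=[
            client.V1Container(
                name="app",
                image="registry.example.com/my-workload:latest",
            )
        ],
    ),
)

# After config.load_kube_config(), submit with:
#   client.CoreV1Api().create_namespaced_pod("default", pod)
```

As far as I know, the moon still isn't a valid topology_key. ;-)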
Conclusion
Isolation and tenancy are granular needs. Typically, a workload needs “enough” isolation. What is enough? Well, due diligence is different for every application.
I would argue, let's stop comparing virtual machines and containers and start thinking about how we can use them together to achieve enough isolation to meet a given workload's integrity requirements.
Questions? Feedback? Reach out using the comments section (below).