
We often compare the security of containers to that of virtual machines and ask ourselves "...which is more secure?"  I have argued for a while now that comparing containers to virtual machines is really a false premise - we should instead be comparing containers to processes.

We aren't forced to get rid of virtual machines when we run containers.  Containers can be deployed alongside virtual machines in three ways - so the either/or framing is a straw man:

  1. Containers inside of virtual machines.
  2. Containers in some places, virtual machines in others (the comparison).
  3. Virtual machines in containers (yes, you can do this).

[Figure: OpenStack and Containers - Container Patterns]

The Premise

We can run workloads using any of the three techniques listed above... so forcing a security comparison isn't exactly "natural". I would argue that it's more "natural" to think about the tenancy requirements of the workloads and the "amount" of isolation they require.

The Tenancy Scale

What is the Tenancy Scale?  It's the result of a brainstorming session with Josh Bressers, the leader of Red Hat's product security team.

[Figure: Container Defense in Depth - The Tenancy Scale]

I'm not sure everyone remembers this now, but when I started college (back in 1997), multi-user Unix systems were still "all the rage".  Individual users would telnet, yes telnet, into a Unix server and each user would run their own processes. Some users would run research batch jobs, while others would run their own web servers, or use the system's shared web server.  When you logged into the system, you could do a process list and view everybody's processes. In fact, if a given user had the permissions on their home directory set wrong, you could even get into their personal files. Crazy times.
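To make that concrete, here is a hypothetical sketch in Python (reading /proc on a modern Linux host - the 1997 equivalent was simply running ps after telnetting in) showing how open that shared-host model is by default:

```python
import os
import pwd

# On a shared Linux host, /proc is world-readable by default, so any
# logged-in user can enumerate every other user's processes.
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        owner = pwd.getpwuid(os.stat(f"/proc/{pid}").st_uid).pw_name
        with open(f"/proc/{pid}/comm") as f:
            command = f.read().strip()
        print(f"{pid:>7}  {owner:<12}  {command}")
    except (FileNotFoundError, PermissionError, KeyError):
        continue  # process exited mid-scan, locked-down /proc, or unknown uid
```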

In 2016, not many systems administrators would consider regular Linux process isolation enough to allow multiple users to log into a system - especially if those tenants worked for different organizations or were private individuals.

But, hypothetically, let's say that I am a systems administrator for a university and I have different research teams that want to run jobs. Let's say I have one group running biology computations and another group running geology computations - would containers be enough isolation? I would argue, yes.
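As a rough illustration of what that could look like (a minimal sketch using the Docker SDK for Python - the image names, users, and resource limits below are made up, not from the original post), each team's batch jobs get their own containers, non-root users, and resource caps on the same host:

```python
import docker  # Docker SDK for Python

client = docker.from_env()

# Hypothetical per-team batch jobs; images, users, and limits are illustrative.
jobs = [
    {"name": "bio-blast-run",   "image": "biology/blast:latest",   "user": "bio", "cpus": "0-3"},
    {"name": "geo-seismic-run", "image": "geology/seismic:latest", "user": "geo", "cpus": "4-7"},
]

for job in jobs:
    client.containers.run(
        job["image"],
        name=job["name"],
        user=job["user"],         # run as a non-root user inside the container
        cpuset_cpus=job["cpus"],  # pin each team's job to its own CPUs
        mem_limit="8g",           # cap memory per job
        pids_limit=512,           # keep a runaway job from forking the host to death
        detach=True,
    )
```

Each team sees only its own filesystem, process tree, and resource slice - the kind of separation the old shared Unix host never gave you.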

In another hypothetical scenario, I am a systems administrator working for a public cloud provider and I have users from different companies, government organizations, and research facilities all wanting to share physical resources. Containers probably wouldn’t provide enough isolation by themselves. I would argue that we should slide up the scale to virtual machines for isolation.

With those two hypothetical situations out of the way, what are some of the next questions a security-conscious end user will ask?

  1. Can you add anti-affinity rules to make sure that my workloads run in different virtual machines on different physical machines? (See the sketch after this list.)
  2. Can you make sure that those different physical machines are in different racks, so that they use different power distribution units (PDUs) and different rack switches?
  3. Can you make sure that two copies of my workload run in two different data centers that are affected by different weather and earthquake patterns (note: I worked at a data center and customers really did ask this question)?
  4. Can you have one workload run on the moon in case the earth gets blown up?

OK, I made the last one up, but I think you get the point!  ;-)
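For the anti-affinity question above (item 1), here is a minimal sketch of one way to express it, using the Kubernetes Python client and pod anti-affinity; the post doesn't prescribe a particular scheduler (in OpenStack, a Nova server group with an anti-affinity policy plays the same role), and the labels, names, and image below are hypothetical:

```python
from kubernetes import client

# Hypothetical label shared by all copies of the workload; the rule below keeps
# two pods carrying it from landing on the same node.
labels = {"app": "my-workload"}

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="my-workload-1", labels=labels),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="app", image="registry.example.com/my-workload:1")],
        affinity=client.V1Affinity(
            pod_anti_affinity=client.V1PodAntiAffinity(
                required_during_scheduling_ignored_during_execution=[
                    client.V1PodAffinityTerm(
                        label_selector=client.V1LabelSelector(match_labels=labels),
                        # "kubernetes.io/hostname" spreads copies across nodes;
                        # a zone or rack topology label widens the separation
                        # toward the PDU/rack/data-center questions above.
                        topology_key="kubernetes.io/hostname",
                    )
                ]
            )
        ),
    ),
)

# client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```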

Conclusion

Isolation and tenancy are granular needs. Typically, a workload needs “enough” isolation. What is enough? Well, due diligence is different for every application.

I would argue, let's stop comparing virtual machines and containers and start thinking about how we can use them together to achieve enough isolation to meet a given workload's integrity requirements.

Questions?  Feedback?  Reach out using the comments section (below).

