We often compare the security of containers to that of virtual machines and ask ourselves, "...which is more secure?" I have argued for a while now that comparing containers to virtual machines rests on a false premise - we should instead be comparing containers to regular Linux processes.
We aren't forced to get rid of virtual machines when we run containers. Containers can be run in conjunction with virtual machines in three ways - so it's a straw man comparison:
- Containers inside of virtual machines.
- Containers in some places, virtual machines in others (the comparison).
- Virtual machines in containers (yes, you can do this).
The Premise
We can run workloads using any of the three techniques listed above... so forcing a security comparison isn't exactly "natural". I would argue that it's more "natural" to think about the tenancy requirements of the workloads and the "amount" of isolation required.
The Tenancy Scale
What is the Tenancy Scale? It's the result of a brainstorming session with Josh Bressers, the leader of Red Hat's product security team.
I'm not sure everyone remembers it now, but when I started college (back in 1997), multi-user Unix systems were still "all the rage". Individual users would telnet, yes telnet, into a Unix server, and each user would run their own processes. Some users would run research batch jobs, while others would run their own web servers or use the system's shared web server. When you logged into the system, you could run a process list and view everybody's processes. In fact, if a given user had the permissions on their home directory set wrong, you could even get into their personal files. Crazy times.
In 2016, not many systems administrators would consider regular Linux process isolation enough to allow multiple users to log into a system - especially if those tenants worked for different organizations or were private individuals.
But, hypothetically, let's say that I am a systems administrator for a university and I have different research teams that want to run jobs. Let's say I have one group running biology computations and another group running geology computations - would containers be enough isolation? I would argue, yes.
In another hypothetical scenario, I am a systems administrator working for a public cloud provider and I have users from different companies, government organizations, and research facilities all wanting to share physical resources. Containers probably wouldn’t provide enough isolation by themselves. I would argue that we should slide up the scale to virtual machines for isolation.
With those two hypothetical situations out of the way, what are some of the next questions a security-conscious end user will ask?
- Can you add anti-affinity rules (sketched below) to make sure that my workloads run in different virtual machines on different physical machines?
- Can you make sure that those different physical machines are in different racks, so that they use different power distribution units (PDUs) and different rack switches?
- Can you make sure that two copies of my workload run in two different data centers that are affected by different weather and earthquake patterns (note: I worked at a data center and customers really did ask this question)?
- Can you have one workload run on the moon in case the earth gets blown up?
OK, I made the last one up, but I think you get the point! ;-)
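To make the first question above concrete: on a container platform such as Kubernetes, that kind of request is commonly expressed as a pod anti-affinity rule. The sketch below is illustrative only and not from the original discussion - it assumes a cluster whose nodes carry the well-known hostname and zone labels, and the app: my-workload label is a hypothetical placeholder.

```python
import json

# A minimal sketch of a Kubernetes-style pod anti-affinity rule, written as a
# plain Python dict so it can be dumped to JSON/YAML. Assumptions: the label
# "app: my-workload" is a hypothetical placeholder, and the nodes carry the
# well-known "kubernetes.io/hostname" and "topology.kubernetes.io/zone" labels.
anti_affinity = {
    "podAntiAffinity": {
        # "required..." is a hard rule: two pods carrying the same label are
        # never scheduled into the same topology domain.
        "requiredDuringSchedulingIgnoredDuringExecution": [
            {   # Keep replicas on different machines (physical or virtual).
                "labelSelector": {"matchLabels": {"app": "my-workload"}},
                "topologyKey": "kubernetes.io/hostname",
            },
            {   # Keep replicas in different zones (racks or data centers,
                # depending on how the cluster labels its nodes).
                "labelSelector": {"matchLabels": {"app": "my-workload"}},
                "topologyKey": "topology.kubernetes.io/zone",
            },
        ]
    }
}

# This fragment would sit under a pod template's spec.affinity field.
print(json.dumps({"affinity": anti_affinity}, indent=2))
```

Asking for "required" rather than "preferred" anti-affinity makes the spread a hard constraint - the scheduler will leave a replica pending rather than co-locate it.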
Conclusion
Isolation and tenancy are granular needs. Typically, a workload needs “enough” isolation. What is enough? Well, due diligence is different for every application.
I would argue, let's stop comparing virtual machines and containers and start thinking about how we can use them together to achieve enough isolation to meet a given workload's integrity requirements.
Questions? Feedback? Reach out using the comments section (below).