It’s no surprise that Linux containers continue to be of interest to developers, especially with the docker project providing a great user experience around building containerized applications. This experience and ease of use make Linux containers a valuable tool for application development, and have made containers an essential part of a developer’s toolbox, enabling developers to package application services and dependencies for consistent deployment, from the desktop across the hybrid cloud.
Increasingly, many enterprises want to bring this same Linux container innovation to their production environments. Containers can provide greater efficiency for infrastructure utilization and operational management, allow for application portability across physical, virtual, and cloud environments, and support both traditional and modern, cloud-native applications. That’s why Red Hat works with many enterprise organizations that are using solutions like Red Hat Enterprise Linux and OpenShift Container Platform, all to provide a trusted container platform from development to production today.
In order to continue expanding the usage of containers in production, these organizations need a simple, stable environment for running production applications in containers. Providing this environment is the focus of a new, Red Hat-backed project in the Kubernetes incubator called OCID (Open Container Initiative Daemon). The OCID project aims to help organizations optimize containers running in production environments and the IT operations teams managing these deployments.
Taking containers beyond development
The OCID project is aimed at exploring new innovations in container runtime, image distribution, storage, signing and more, with an emphasis on driving container standards through the Open Container Initiative (OCI). OCID is not competitive with the docker project - in fact it shares the same OCI runC container runtime used by docker engine, the same image format, and allows for the use of docker build and related tooling. We expect to bring developers more flexibility by adding other image builders and tooling in the future. As we provide new innovations through OCID, our goal will be to drive these technologies as OCI standards, working together with Docker Inc., and other OCI members.
Running containers at scale, in production environments, also requires robust container orchestration and cluster management. Red Hat is a leader in driving these types of capabilities in the Kubernetes project and, together with Google and a growing number of contributors, we are working to provide governance and standardization around Linux container orchestration through the Cloud Native Computing Foundation (CNCF). We also deliver an enterprise Kubernetes solution to customers in the form of OpenShift, which is built on a foundation of Red Hat Enterprise Linux.
Container orchestration engines like Kubernetes must have native, optimized integration with the container runtime daemon, and OCID will work to provide this by implementing the Kubernetes standard container runtime interface. Kubernetes is a great proving ground for OCID and, as OCID is a Kubernetes incubator project, Red Hat plans to continue working with the Kubernetes community to drive innovations for running containers at scale, across development, test and production environments.
What is OCID?
OCID is an implementation of the Kubernetes standard container runtime interface. In order to run containers, the daemon needs to be able to pull, store and execute the container images.
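The pull/store/execute split above can be sketched as a small interface. This is a minimal illustration only; the names below are made up for the example and are not the actual Kubernetes container runtime interface API:

```go
package main

import "fmt"

// ImageRef identifies a container image, e.g. "docker.io/library/fedora:latest".
type ImageRef string

// Runtime sketches the three responsibilities a container runtime daemon
// must cover. Illustrative names only, not the real CRI gRPC services.
type Runtime interface {
	PullImage(ref ImageRef) error                   // fetch image layers from a registry
	StoreImage(ref ImageRef) error                  // persist layers in local storage
	RunContainer(ref ImageRef, args []string) error // execute via an OCI runtime such as runC
}

// noopRuntime is a stand-in implementation used only to show the shape.
type noopRuntime struct{}

func (noopRuntime) PullImage(ref ImageRef) error  { fmt.Println("pulling", ref); return nil }
func (noopRuntime) StoreImage(ref ImageRef) error { fmt.Println("storing", ref); return nil }
func (noopRuntime) RunContainer(ref ImageRef, args []string) error {
	fmt.Println("running", ref, args)
	return nil
}

func main() {
	var r Runtime = noopRuntime{}
	ref := ImageRef("docker.io/library/fedora:latest")
	r.PullImage(ref)
	r.StoreImage(ref)
	r.RunContainer(ref, []string{"/bin/bash"})
}
```

Each sub-project below maps onto one of these responsibilities: containers/image pulls, containers/storage stores, and runC executes.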
OCID follows the time-tested Unix philosophy of modular programming by breaking out its functionality across the following sub-projects:
OCI Container Runtime Environment (runC)
OCI Runtime Tools
containers/image
containers/storage
CNI (Container Network Interface)
What is runC?
The Open Container Initiative introduced the Container Runtime specification and a default implementation of it, called runC. At a basic level, a "runtime" is software that supports the execution of a computer program; in this case, runC is the runtime supporting the execution of Linux containers according to the OCI specification. As of docker-1.11, upstream docker uses runC by default for running its containers. OCID plans to also use runC as the default runtime for its containers, and contributors will work on various ways to improve runC functionality for upstream docker as well as the standalone tool.
What are oci-runtime-tools?
OCI runtime tools are a set of utilities for working with an OCI runtime. oci-runtime-tools can be used to generate an OCI runtime specification (a config.json) that runC can then use to run a container. In OCID, we use the config generation library from this project to generate the configuration for the containers running in a pod.
# oci-runtime-tool generate --bind /var/lib/data:/data1 --tmpfs /run --rootfs /mnt/fedora /bin/bash > /mnt/fedora/config.json
# runc start -b /mnt/fedora
What is containers/image?
This library allows OCID to pull and push container images from a registry. With the creation of Project Atomic, we began building tools like `atomic verify` that would look at the version of a container image and compare it to the container image on a docker registry. We did not want to pull down the image, but rather just look at the image data - sort of like "looking without touching." The docker community showed little interest in such a capability, so we decided to build a small tool to meet our need, leading to the birth of skopeo, which means "remote viewing" in Greek. Eventually skopeo expanded in functionality to be able to pull and push images ("touching") as well as just look at image data - for the future, we’re even considering making simple image signing a capability of the tool. As skopeo grew beyond a simple command line tool, we split it into the skopeo command line tool plus a library called containers/image for pulling and pushing container images from a registry.
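"Looking without touching" boils down to asking the registry for an image's manifest rather than its layers. The Docker registry v2 API exposes manifests at a well-known path; the helper below only builds that URL as an illustration - the actual fetch, authentication, and manifest parsing are what containers/image takes care of:

```go
package main

import "fmt"

// manifestURL builds the Docker registry v2 endpoint for an image manifest.
// A GET on this URL returns image metadata without downloading any layers.
func manifestURL(registry, repository, reference string) string {
	return fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, repository, reference)
}

func main() {
	// Inspecting fedora:latest remotely, skopeo-style.
	fmt.Println(manifestURL("registry.fedoraproject.org", "fedora", "latest"))
	// → https://registry.fedoraproject.org/v2/fedora/manifests/latest
}
```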
What is containers/storage?
This library aims to provide methods for storing filesystem layers, container images, and containers. Originally, we set out to build a tool called `atomic mount`, which would allow users to mount a container image stored in docker on the host file system. For example, `atomic mount fedora /mnt` will mount the image onto /mnt and allow you to explore it. The tricky part is that the docker daemon has no way of knowing the image is mounted, so someone could remove it out from under you with `docker rmi fedora`.
The docker daemon stores its images in something called a graph. We wanted to separate out the graphdriver into a separate library and, instead of using in-memory locks which are visible only to the docker daemon, move to file system locks which could be shared by multiple cooperating processes. As we worked on this, upstream docker was changing at a rate that made it difficult to build off of. We decided to split out storage altogether, allowing us to experiment with COW (Copy On Write) backends like the overlayfs and devicemapper graphdrivers for docker, with read-only file systems on non-COW systems, and with tools like NFS for remote storage of container images. This effort has culminated in http://github.com/containers/storage, where development continues. Once we are happy with how this is working, we would like to open a pull request with docker to potentially use this storage library.
For both containers/image and containers/storage, we want to drive broad collaboration. To do so, we will look to work with rkt and other tools by encouraging them to incorporate these technologies into their respective projects.
What is CNI?
CNI is a standard for providing networking support to containers. OCID plans to integrate and use CNI for container networking so that it can reuse existing CNI plugins in the Kubernetes ecosystem.
With OCID, the goal is to build a new, more stable container runtime option for Kubernetes clusters, emphasizing running containers in production as opposed to just developing containerized applications. As always, Red Hat achieves this goal by engaging with upstream open source communities and contributing our ideas and code. OCID will be designed to work well with docker and the existing Linux container ecosystem, so developers can continue to use docker and the other tools that they are comfortable with to build OCI images. As the OCI Image format formalizes into a full-fledged standard, we look forward to keeping this work going by engaging with other emerging tools to build images that can be deployed into production, and that incorporate enterprise requirements of stability, reliability, security and performance.
About the authors
Daniel Walsh has worked in the computer security field for over 30 years. Dan is a Senior Distinguished Engineer at Red Hat. He joined Red Hat in August 2001. Dan has led the Red Hat Container Engineering team since August 2013, and has been working on container technology for several years.