How to fix permission errors in pods using service accounts
Learn how to change a default security context constraint (SCC) in OpenShift to manage permissions within a cluster.
There's a lot to learn about running cloud services. Kubernetes helps by letting you manage a cluster, and one of the most important tasks in managing a cluster is tending to your containers and pods. OpenShift takes care of much of the complexity you'd otherwise have to configure directly in raw Kubernetes, which helps keep you from getting overwhelmed by those details.
But as with anything, there's the potential for something to go wrong even within the (ideally) predictable realm of containers. By default, every pod uses the default service account, which provides access-only permissions to get information out of the API. Sometimes a pod can't run with the default service account restrictions. When this happens, it's time to learn about security context constraints (SCCs).
When you want a pod to run with a different SCC, you must create a service account with the permissions you want the pod to inherit. A service account is like a user account, except it's meant for services and processes rather than for human users.
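As a sketch of how a service account ties into a workload, here's a hypothetical pod spec that runs under a service account referenced by name. Both `demo-sa` and `demo-pod` are assumed names used only for illustration:

```shell
# Hypothetical sketch: run a pod under a dedicated service account.
# "demo-sa" and "demo-pod" are assumed names for illustration only.
oc create sa demo-sa

oc apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  serviceAccountName: demo-sa    # the pod inherits this account's permissions
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]
EOF
```

The `serviceAccountName` field in the pod spec is what links the workload to the account, so any SCC granted to the account applies to the pod.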
To see which SCC you need to apply, you can parse the pod's configuration with the oc adm policy scc-subject-review command:
$ oc get pod podname -o yaml | oc adm policy scc-subject-review -f -
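If you already know which service account a pod will run under, a related check is oc adm policy scc-review, which reports the SCC a given service account would be allowed to use for a pod definition. This is a sketch; the pod.yaml file and the nginx-sa account name are assumptions:

```shell
# Assumed example: check which SCC the service account "nginx-sa"
# would be permitted to use for the pod defined in pod.yaml.
oc adm policy scc-review -z nginx-sa -f pod.yaml
```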
Assume your cloud has the user janedoe and a cluster admin user vcirrus-consulting. Both accounts are configured to log in using the HTPasswd identity provider:
$ oc login -u janedoe
Logged into "https://api.crc.testing:6443" as "janedoe" using existing credentials.
The user janedoe runs with standard user privileges:
$ oc get users
Error from server (Forbidden): users.user.openshift.io is forbidden: User "janedoe" cannot list resource "users" in API group "user.openshift.io" at the cluster scope
$ oc get nodes
Error from server (Forbidden): nodes is forbidden: User "janedoe" cannot list resource "nodes" in API group "" at the cluster scope
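You can also probe individual permissions directly with oc auth can-i, which answers yes or no for a specific verb and resource:

```shell
# Ask the API whether the current user may perform specific actions.
oc auth can-i list nodes    # cluster-scoped; denied for a standard user
oc auth can-i create pods   # project-scoped; typically allowed in your own project
```

This is a quick way to confirm what a user can do before digging into roles and bindings.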
The janedoe user creates a new project called scc-demo:
$ oc new-project scc-demo
Now using project "scc-demo" on server "https://api.crc.testing:6443".
Add an application to this project with the oc new-app command:
$ oc new-app --name sccnginx --docker-image nginx
Flag --docker-image has been deprecated, use --image
--> Found container image 670dcc8 (2 days old) from Docker Hub for "nginx"

    * An image stream tag will be created as "sccnginx:latest" that will track this image

--> Creating resources ...
    imagestream.image.openshift.io "sccnginx" created
    deployment.apps "sccnginx" created
    service "sccnginx" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose service/sccnginx'
    Run 'oc status' to view your app.
Next, verify the status of the pod:
$ oc get pods
NAME               READY   STATUS              [...]
sccnginx-77...kw   0/1     ContainerCreating...
Everything looks good at first, but then the pod fails:
$ oc get pods
NAME                        READY   STATUS             RESTARTS      AGE
sccnginx-77cd9bf654-4wgkw   0/1     CrashLoopBackOff   2 (21s ago)   82s
Take a look at the logs for a hint as to why the pod failed to run. I've added a comment (## IMPORTANT) in the output below to draw your attention to the most significant lines. In real life, unfortunately, log files don't have comments directing your attention to the most important output.
$ oc logs sccnginx-77cd9bf654-4wgkw
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/07/22 14:23:21 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
## IMPORTANT
2022/07/22 14:23:21 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
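One way to confirm which SCC the failing pod was actually admitted under is to read the openshift.io/scc annotation from its metadata (the pod name comes from the output above):

```shell
# The openshift.io/scc annotation records which SCC admitted the pod;
# by default this is "restricted", which prevents running as root.
oc get pod sccnginx-77cd9bf654-4wgkw -o yaml | grep openshift.io/scc
```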
The pod encountered permission errors because it's running without sufficient privileges. Log in as a user with the cluster admin role-based access control (RBAC) role (vcirrus-consulting in this example):
$ oc login -u vcirrus-consulting
Logged into "https://api.crc.testing:6443" as "vcirrus-consulting" using existing credentials.
Using project "scc-demo".
Parse the pod's YAML configuration to determine which SCC permissions are required for the pod to run:
$ oc get pod sccnginx-77cd9bf654-4wgkw -o yaml | \
  oc adm policy scc-subject-review -f -
RESOURCE                        ALLOWED BY
Pod/sccnginx-77cd9bf654-4wgkw   anyuid
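To see what anyuid means alongside the other SCCs the cluster defines, you can list them (exact output varies by cluster and version; restricted is the default applied to most pods):

```shell
# List the SCCs available on the cluster. "anyuid" allows containers
# to run with any user ID, including root.
oc get scc
```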
This pod requires access beyond the scope its container is granted by default, so it must run with a service account rather than the default restrictions. Create a new service account:
$ oc create sa nginx-sa
serviceaccount/nginx-sa created
Connect the service account nginx-sa to the SCC anyuid using a role binding:
$ oc adm policy add-scc-to-user anyuid -z nginx-sa
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:anyuid added: "nginx-sa"
Now log back in as user janedoe and confirm that you have access to the scc-demo project:
$ oc login -u janedoe
Logged into "https://api.crc.testing:6443" as "janedoe" using existing credentials.
You have one project on this server: "scc-demo"
Using project "scc-demo".
Bind the service account nginx-sa to the sccnginx deployment so its pods run with the new permissions:
$ oc set sa deploy sccnginx nginx-sa
deployment.apps/sccnginx serviceaccount updated
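If you want to double-check which service account the deployment now uses, you can read it back from the pod template with a JSONPath query:

```shell
# Print the service account assigned to the deployment's pod template.
oc get deployment sccnginx \
  -o jsonpath='{.spec.template.spec.serviceAccountName}{"\n"}'
```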
Verify that the changes are in effect and that your pod, which previously failed due to insufficient permissions, now runs with the anyuid SCC:
$ oc get pods
NAME                       READY   STATUS    RESTARTS   AGE
sccnginx-594899cc8-cz9tb   1/1     Running   0          12s
You can verify that the pod is indeed using the anyuid SCC with the oc describe command:
$ oc describe pod sccnginx-594899cc8-cz9tb | grep scc
Name:         sccnginx-594899cc8-cz9tb
Namespace:    scc-demo
Labels:       deployment=sccnginx
              openshift.io/scc: anyuid
[…]
Service accounts and SCCs
Service accounts and SCCs are important ways to manage permissions within a cluster. OpenShift has plenty of ways to query your projects and cluster to learn about resources, so get familiar with the commands and constructs available to you. My follow-up article goes into more detail on using and managing service accounts and SCCs.
You can also read the OpenShift SCC documentation to see the different SCC options available to control the SELinux context of a container, request additional capabilities to a container, change the user ID, and use host directories as volumes.