If you have used relatively recent versions of OpenShift, you have likely come across the oc debug command (or you can check its man page). One of the interesting things about newer OpenShift versions is that they discourage using SSH directly (you can see this in sshd_config on the nodes, which have PermitRootLogin no set). Instead, if you run oc debug node/<node_name>, OpenShift creates a pod for you and drops you into a shell (TTY) on that pod.
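A typical session looks something like this (the node name is a made-up example; list your own nodes first and substitute one of them):

```shell
# List the cluster's nodes, then start a debug pod against one of them.
oc get nodes
oc debug node/ip-10-0-111-111.us-east-2.compute.internal
```

oc prints the name of the debug pod it starts and leaves you at a root shell inside it; exiting the shell deletes the pod.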
Although it is a pod, it is a special kind of pod. Once the pod is launched, you can open a second terminal window, run oc get pods to find the corresponding pod (named <node-name>-debug), and then run oc get pod <pod-name> -o yaml to display its YAML definition.
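For example (the pod name is illustrative; use the one oc get pods actually shows you):

```shell
# In a second terminal, while the debug session is still running:
oc get pods
# Then dump the debug pod's full definition:
oc get pod <node-name>-debug -o yaml
```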
Observe the output:
apiVersion: v1
kind: Pod
metadata:
  name: ip-xx-x-xxx-xxx-us-east-2-compute-internal-debug #1
  namespace: default #2
  ...
spec:
  containers:
  - command:
    - /bin/sh
    image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57e87210a3f3a3ba4fc85dde180c76988a5f68445f705fd07855003986c75ab0
    name: container-00
    ...
    securityContext: #3
      privileged: true
      runAsUser: 0
    ...
    tty: true #4
    volumeMounts:
    - mountPath: /host #5
      name: host
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-dnkrx
      readOnly: true
  ...
  hostNetwork: true #6
  ...
  volumes:
  - hostPath:
      path: / #7
      type: Directory
    name: host
  - name: default-token-dnkrx
    secret:
      defaultMode: 420
      secretName: default-token-dnkrx
status: #8
  ...
  hostIP: 10.0.111.111
  phase: Running
  podIP: 10.0.111.111
  podIPs:
  - ip: 10.0.111.111
This is what the YAML looks like (I have snipped parts of it for brevity). I have added #x markers to the YAML; each number highlights something that makes this debug pod special compared to a regular application pod.
YAML References
#1
name: ip-xx-x-xxx-xxx-us-east-2-compute-internal-debug #1
The pod's name is derived from the node name. In my case the node name was ip-xx-x-xxx-xxx.us-east-2.compute.internal, so oc debug simply replaces the dots with dashes and appends -debug.
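That transformation can be sketched in shell (the node name is a made-up example, and oc may sanitize names slightly differently in edge cases):

```shell
# Dots in the node name become dashes, and "-debug" is appended.
node="ip-10-0-111-111.us-east-2.compute.internal"   # example node name
pod="${node//./-}-debug"
echo "$pod"   # ip-10-0-111-111-us-east-2-compute-internal-debug
```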
#2
namespace: default #2
The pod is created in whatever namespace you are currently in. In this case, the namespace is default.
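If you would rather not clutter the default namespace, you can switch projects before starting the session (the project name here is hypothetical):

```shell
oc new-project node-debugging   # hypothetical project just for debug pods
oc debug node/<node-name>       # the debug pod is created in node-debugging
```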
#3
securityContext: #3
privileged: true
runAsUser: 0
This is where it gets interesting. As you know, you usually don't run pods as privileged or as user 0 (root). However, since this pod is meant to give you the equivalent of SSH access to the node as root, it has this securityContext. Being privileged grants the pod all Linux capabilities, giving it highly unrestricted access; you can list them by running setpriv -d within the debug shell (output below). The result is almost unlimited access to the node through this pod. Needless to say, the pod is scheduled on the very node you are debugging.
sh-4.4# setpriv -d
uid: 0
euid: 0
gid: 0
egid: 0
Supplementary groups: [none]
no_new_privs: 0
Inheritable capabilities: chown,dac_override,dac_read_search,fowner,fsetid,kill,setgid,setuid,setpcap,linux_immutable,net_bind_service,net_broadcast,net_admin,net_raw,ipc_lock,ipc_owner,sys_module,sys_rawio,sys_chroot,sys_ptrace,sys_pacct,sys_admin,sys_boot,sys_nice,sys_resource,sys_time,sys_tty_config,mknod,lease,audit_write,audit_control,setfcap,mac_override,mac_admin,syslog,wake_alarm,block_suspend,audit_read
Ambient capabilities: chown,dac_override,dac_read_search,fowner,fsetid,kill,setgid,setuid,setpcap,linux_immutable,net_bind_service,net_broadcast,net_admin,net_raw,ipc_lock,ipc_owner,sys_module,sys_rawio,sys_chroot,sys_ptrace,sys_pacct,sys_admin,sys_boot,sys_nice,sys_resource,sys_time,sys_tty_config,mknod,lease,audit_write,audit_control,setfcap,mac_override,mac_admin,syslog,wake_alarm,block_suspend,audit_read
Capability bounding set: chown,dac_override,dac_read_search,fowner,fsetid,kill,setgid,setuid,setpcap,linux_immutable,net_bind_service,net_broadcast,net_admin,net_raw,ipc_lock,ipc_owner,sys_module,sys_rawio,sys_chroot,sys_ptrace,sys_pacct,sys_admin,sys_boot,sys_nice,sys_resource,sys_time,sys_tty_config,mknod,lease,audit_write,audit_control,setfcap,mac_override,mac_admin,syslog,wake_alarm,block_suspend,audit_read
Securebits: [none]
SELinux label: system_u:system_r:spc_t:s0
#4
tty: true #4
TTY is set to true, which means you get an interactive shell for the pod immediately after the pod is created.
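If you don't need an interactive shell at all, you can instead pass a one-off command; oc debug then creates the pod, runs the command, and tears the pod down when it exits (the journalctl invocation is just an example of a host-side command):

```shell
oc debug node/<node-name> -- chroot /host journalctl -u kubelet --no-pager -n 20
```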
#5, #7
- mountPath: /host #5
path: / #7
Here it gets even more interesting. As you can see, the volume named host is mounted at the path /host. If you look at #7, you will see that the host volume points to / on the host, that is, the node's root directory. This gives you full access to the host filesystem through the pod. However, when you first land in the pod's TTY, you are not in /host; you are in the container's filesystem, with its own root (/) directory. To switch to the host filesystem as root, run chroot /host. This gives you access to all the programs installed on the host, and it feels much the same as if you had SSHed into the node.
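Inside the debug shell, that looks roughly like this (the commands after chroot are examples of host binaries that are not available in the debug container image itself):

```shell
sh-4.4# chroot /host
sh-4.4# systemctl status kubelet   # query the host's systemd
sh-4.4# crictl ps                  # list containers via the host's CRI-O
```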
#6, #8
hostNetwork: true #6
status: #8
…
hostIP: 10.0.111.111
phase: Running
podIP: 10.0.111.111
podIPs:
- ip: 10.0.111.111
From the networking standpoint, this pod uses the host network, which is equivalent to running a container with Docker or Podman using --net=host. This makes the programs inside the container look, from the network's perspective, as if they are running on the host itself, and it gives the container more network access than it would normally get. You usually have to forward ports from the host machine into a container; when a container shares the host's network, any network activity happens directly on the host machine, just as it would if the program were running locally on the host instead of inside a container. With this, you gain access to the host's networks, which are probably unrestricted as well. It is important to note that the host network also gives the container full access to local system services such as D-Bus and is therefore considered insecure. If you look at the status, you can see that the hostIP, podIP, and podIPs fields share a common value that matches the node's IP address, which confirms that you are indeed using a host network pod.
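You can confirm this from outside the pod as well; with -o wide, the pod's IP and the node's IP should be identical (names are illustrative):

```shell
oc get pod <node-name>-debug -o wide   # shows the debug pod's IP
oc get node <node-name> -o wide        # shows the same IP for the node
```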
Wrap up
This article is a brief overview of how the oc debug command works for node debugging in an OpenShift cluster.