In OpenShift 3.5 we've introduced Kubernetes Network Policy as a Tech Preview feature to improve the way we configure allowable traffic between pods. Put simply, Network Policy is an easy way for Project Administrators to define exactly what ingress traffic is allowed to any pod, from any other pod, including traffic from pods located in other projects.
Wait...isn't there already a mechanism for that?
One way of isolating pods is to replace OpenShift's default SDN plug-in, ovs-subnet, with the ovs-multitenant plug-in which isolates projects (namespaces) from each other. The limitation of the ovs-multitenant plug-in is that it does not provide a mechanism for exceptions.
To provide exceptions for the ovs-multitenant plug-in, the "oadm pod-network" functionality was introduced in OpenShift 3.1 to allow two projects to access each other's services, or to allow all projects to access all pods and services in the cluster. The limitation is that it operates at the level of the entire project, and it is always bidirectional. That is, if you can access a service in a project, you can access every service in that project, and you have also necessarily granted access to all of your own project's services. Because the permissions are bidirectional, this could only be configured by a cluster admin.
Network Policy, in contrast, allows isolation policies to be configured at the level of individual pods. Because Network Policies are not bidirectional and apply only to ingress traffic of pods under a Project Administrator's control, Cluster Admin privileges are not required.
How does it work?
The first step is to enable Network Policy, by replacing the current SDN plug-in with the ovs-networkpolicy plug-in.
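Exactly how the plug-in is switched depends on your installation method; as a minimal sketch, assuming a typical OpenShift 3.5 install, the plug-in name is set in the networkConfig stanza of the master (and node) configuration files and the relevant services are restarted. The file path below is an assumption based on a default install:

# /etc/origin/master/master-config.yaml (and similarly in node-config.yaml)
networkConfig:
  networkPluginName: redhat/openshift-ovs-networkpolicy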
Next, a policy must be created. The method of defining a Network Policy leverages the core Kubernetes concepts of labels and selectors.
Labels are arbitrarily-named key:value pairs that can be assigned to individual pods or entire projects and can define a wide variety of pod characteristics (e.g. "role: db").
Selectors are expressions that filter or combine labels to define the subset of pods, and the type of pod traffic, that is allowed for ingress (e.g. "matchLabels:").
Applying selectors to labels abstracts away the underlying network complexity needed to achieve the desired access control. Policies are applied to developers' pods automatically through easily automated labeling, and follow along as their applications scale up and down.
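As a minimal illustration (the pod name here is hypothetical), a label is attached in a pod's metadata and a selector in a policy matches it:

# A label assigned in a pod's metadata
metadata:
  name: db-1
  labels:
    role: db

# The corresponding selector, as used in a policy
podSelector:
  matchLabels:
    role: db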
In the Network Policy model, unlike with the ovs-multitenant plug-in, there is no automatic isolation between projects (something we may change in a future release for compatibility with ovs-multitenant); you must enable it on a per-project basis by setting an annotation on each project that you want to be isolated:
oc annotate namespace ${ns} 'net.beta.kubernetes.io/network-policy={"ingress":{"isolation":"DefaultDeny"}}'
With this annotation, all traffic into pods in that namespace is blocked, and Network Policies can be created to define what traffic to allow. Note that even traffic from other pods in the same namespace will be blocked.
To allow pods to communicate with each other within the namespace, create a file with your JSON or YAML-formatted policy template. The following policy only allows ingress TCP traffic on ports 80 and 443, from all database pods (those with the label: "role: db") to all web server pods (those with the label: "role: web").
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-http-and-https
spec:
  podSelector:
    matchLabels:
      role: web
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: db
    ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
Now, create the policy in the current project using the file you created that contains the policy:
oc create -f my-file.json
Additional policies can be added/combined using subsequent "oc create -f" commands.
Nice! More Examples, Please
The following examples assume the "deny all traffic" isolation annotation above was first put into place, and that the policy described in each example was then applied. In the original graphics, the black arrows signify allowable ingress traffic, as defined by the associated namespace policy.
Example 1
The allow-from-same-namespace policy specifies "all pods in namespace ‘project-a’ allow traffic from any other pods in the same namespace."
An empty podSelector means that this policy will apply to all pods in this project. The first usage in this policy refers to the targeted destination pods, and the second usage refers to the pods the traffic is coming from.
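The graphic's YAML is not reproduced in this text, but a minimal sketch consistent with that description (reusing the extensions/v1beta1 API version from the earlier example) might look like:

kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-from-same-namespace
spec:
  podSelector: {}        # empty: applies to all pods in this project
  ingress:
  - from:
    - podSelector: {}    # empty: any pod in the same namespace may connect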
Example 2
The allow-http-and-https-ns-a policy specifies “allow ingress traffic to sockets tcp/80 and tcp/443 from any pod in namespace ‘project-a’ to any other pod in namespace ‘project-a’.”
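A sketch of allow-http-and-https-ns-a matching that description (the exact YAML from the graphic is not shown here) could be:

kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-http-and-https-ns-a
spec:
  podSelector: {}        # applies to all pods in project-a
  ingress:
  - from:
    - podSelector: {}    # any pod in the same namespace
    ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443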
Example 3
The allow-to-red-ns-a policy specifies "all red pods in namespace ‘project-a’ allow traffic from all other pods in the same namespace."
The label name used (“type”, in this example) is arbitrary; it could also just as easily have been “color: red”.
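A possible rendering of allow-to-red-ns-a, assuming the "type: red" label used in the graphic:

kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-to-red-ns-a
spec:
  podSelector:
    matchLabels:
      type: red          # destination: the red pods in project-a
  ingress:
  - from:
    - podSelector: {}    # source: any pod in the same namespace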
Example 4
The allow-to-red policy specifies "all red pods in namespace ‘project-a’ allow traffic from any pods in any namespace."
Note that this does not apply to the red pod in namespace ‘project-b’, because a podSelector only selects pods in the namespace where the policy is created.
It is the change from an ingress rule of podSelector (empty or specific) to “{}” that makes inter-namespace communication possible in this case.
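A sketch of allow-to-red using that empty "{}" ingress rule:

kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-to-red
spec:
  podSelector:
    matchLabels:
      type: red          # destination: the red pods in project-a
  ingress:
  - {}                   # empty rule: traffic from any pod in any namespace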
Example 5
The allow-from-red-ns-a policy specifies "all pods in namespace ‘project-a’ allow traffic from red pods in the same namespace."
NOTE: There can’t be an “allow from red pod, any namespace” example, because podSelectors can only select pods in your own namespace, not those in other namespaces.
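A sketch of allow-from-red-ns-a consistent with the description above:

kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-from-red-ns-a
spec:
  podSelector: {}        # applies to all pods in project-a
  ingress:
  - from:
    - podSelector:
        matchLabels:
          type: red      # only red pods in the same namespace may connect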
Example 6
The allow-from-red-to-blue policy specifies “the only traffic allowed is from any red pod in namespace ‘project-a’ to any blue pod in namespace ‘project-a’.”
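A sketch of allow-from-red-to-blue, assuming a "type: blue" label on the destination pods:

kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-from-red-to-blue
spec:
  podSelector:
    matchLabels:
      type: blue         # destination: the blue pods in project-a
  ingress:
  - from:
    - podSelector:
        matchLabels:
          type: red      # source: red pods in the same namespace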
Example 7
The namespaceSelector in the allow-from-alice policy specifies "all pods in namespace ‘project-a’ allow traffic from any pod in the same namespace or from any pod whose namespace is labeled ‘user: alice’."
NOTE: this is impossible to do using the ovs-multitenant plug-in, where allowed traffic is always bidirectional.
Example of labeling the two namespaces:
oc create namespace project-a
oc create namespace project-b
oc label namespace project-a user=bob
oc label namespace project-b user=alice
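Putting it together, a sketch of the allow-from-alice policy consistent with the description and the namespace labels above:

kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-from-alice
spec:
  podSelector: {}          # applies to all pods in project-a
  ingress:
  - from:
    - podSelector: {}      # any pod in the same namespace
    - namespaceSelector:
        matchLabels:
          user: alice      # or any pod in a namespace labeled user=alice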
Conclusion
Network Policy in OpenShift 3.5 is a Tech Preview feature, so not every capability is available yet, but it provides an early look for testing and development. We recommend you take it for a “test drive” and see how fine-grained ingress firewall control is being put back into the hands of the project administrator.