Red Hat OpenShift Service on AWS (ROSA) has three streams of logs: “Audit,” “Infrastructure,” and “Application.” The different streams are of interest to different audiences. For instance, the “Audit” logs may be of interest to your Security and Compliance teams, while the “Application” logs may be of interest to an SRE team tasked with keeping applications running.

These logs are all viewable from the OpenShift Console or CLI; however, in many enterprise use cases you may want to ship them to other systems or tools for further aggregation and analysis.

For example, it’s very common to send the “Audit” logs to a Security Information and Event Management (SIEM) system.

OpenShift has several built-in ways to ship logs to another system, and many vendors also have plugins to read or extract logs from OpenShift. In this blog post we’ll explore the built-in Red Hat OpenShift Logging and Loki Operators, and configure them to store logs in S3 and visualize them with a LokiStack.

The instructions in this blog assume that you have a ROSA 4.11 (or higher) cluster already deployed. The components are also deployed with minimal resources for the sake of demonstration.

Installing and Configuring the Logging Operators

Browse to the AWS S3 console and create a bucket named after your cluster, in the same region as your cluster.
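
Alternatively, the bucket can be created with the AWS CLI. This is a minimal sketch; <bucket_name> and <region> are placeholders for your own values:

# Create the S3 bucket Loki will use for log storage.
# Note: omit --create-bucket-configuration when the region is us-east-1.
aws s3api create-bucket \
  --bucket <bucket_name> \
  --region <region> \
  --create-bucket-configuration LocationConstraint=<region>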

Browse to the AWS IAM console and create a policy named rosa-loki-stack that contains the following permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket_name>",
        "arn:aws:s3:::<bucket_name>/*"
      ]
    }
  ]
}

Create an IAM user named rosa-loki-stack and attach the new rosa-loki-stack policy to it.

Create IAM access keys for the new rosa-loki-stack user and save the credentials to use later.
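
If you’d rather script these IAM steps, the following AWS CLI sketch covers the same ground; it assumes the policy JSON above is saved locally as rosa-loki-stack-policy.json (a hypothetical filename) and that <account_id> is your AWS account ID:

# Create the policy from the JSON document above.
aws iam create-policy \
  --policy-name rosa-loki-stack \
  --policy-document file://rosa-loki-stack-policy.json

# Create the user and attach the policy to it.
aws iam create-user --user-name rosa-loki-stack
aws iam attach-user-policy \
  --user-name rosa-loki-stack \
  --policy-arn arn:aws:iam::<account_id>:policy/rosa-loki-stack

# Generate access keys; save the AccessKeyId and SecretAccessKey from the output.
aws iam create-access-key --user-name rosa-loki-stack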

Be sure you’re logged into the OpenShift Console (you can locate and log into your cluster via the OpenShift Cluster Manager Hybrid Cloud Console).

In the OpenShift Console (Administrator perspective), click Operators > OperatorHub.

Install the “Red Hat OpenShift Logging” Operator (accept all defaults)

Install the “Loki” Operator (if multiple Operators are available, choose the Red Hat one; accept all defaults).
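
Both installs take a minute or two. As an optional check, you can confirm from a terminal that both Operators report a Succeeded phase (the grep pattern is just illustrative):

# List installed Operator CSVs and check that both show the Succeeded phase.
oc get csv -A | grep -iE 'logging|loki'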

Click through the OpenShift menu Workloads > Secrets, set the Project to openshift-logging, and select Create > From YAML.

Paste in the following, ensuring you replace the values to match your environment, and hit Create:

apiVersion: v1
kind: Secret
metadata:
  name: logging-loki-s3
  namespace: openshift-logging
stringData:
  access_key_id: <access_key_id>
  access_key_secret: <access_key_secret>
  bucketnames: <bucket_name>
  endpoint: https://s3.<region>.amazonaws.com
  region: <region>
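
Equivalently, the same Secret can be created with the oc CLI, using the placeholder values from the YAML above:

# Create the Loki object storage secret from the command line.
oc create secret generic logging-loki-s3 \
  -n openshift-logging \
  --from-literal=access_key_id=<access_key_id> \
  --from-literal=access_key_secret=<access_key_secret> \
  --from-literal=bucketnames=<bucket_name> \
  --from-literal=endpoint=https://s3.<region>.amazonaws.com \
  --from-literal=region=<region>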

Next, click Operators > Installed Operators > Loki Operator.

Click LokiStack > Create instance, pick the YAML view, paste in the following, and hit Create:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.extra-small
  storage:
    schemas:
    - version: v12
      effectiveDate: "2022-06-01"
    secret:
      name: logging-loki-s3
      type: s3
  storageClassName: gp3
  tenants:
    mode: openshift-logging
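
The LokiStack components can take a few minutes to start; if you want to watch them come up from a terminal:

# Watch the LokiStack pods start in the openshift-logging namespace.
oc get pods -n openshift-logging -w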

Next, click Operators > Installed Operators > Red Hat OpenShift Logging.

Next, click Cluster Logging > Create Instance, paste in the following, and hit Create:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack
    lokistack:
      name: logging-loki
  collection:
    type: vector
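
Once the instance is created, the Operator rolls out a Vector collector pod to every node. A quick terminal check (filtering on the pod name, which is an assumption about the naming convention):

# Confirm the log collector pods are running on each node.
oc get pods -n openshift-logging | grep collector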

Next, you can click Developer > Topology to see the Logging stack as it deploys.

After a few minutes you can check whether logs are flowing and viewable by clicking Developer > Observe and selecting Aggregated Logs. (If this menu item isn’t available, you may need to refresh your browser.)

You can also view Infrastructure logs by clicking Administrator > Observe > Logs and choosing infrastructure.

When there’s enough log data, Loki will upload it to S3 for long-term storage. You can see this by viewing your S3 bucket in the AWS S3 Console.
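
The same check works from the AWS CLI, using the bucket you created earlier:

# List the objects Loki has written to the bucket.
aws s3 ls s3://<bucket_name>/ --recursive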

Congratulations, you have successfully configured your ROSA cluster to ship logs to S3 using the OpenShift Logging and Loki Operators. There is a lot more to explore here, such as capacity planning for the Loki infrastructure, installing Grafana and creating dashboards, and more.

To learn more about this topic, check out the lightboard video about ROSA logging options.

