Today we're announcing the 3.2 release of Red Hat OpenShift distributed tracing and the Red Hat build of OpenTelemetry.
What’s new in OpenTelemetry?
Interest in and adoption of OpenTelemetry have exceeded our expectations, and we're excited to co-create solutions that help avoid vendor lock-in while bringing the latest observability data collection capabilities to our open platform. That's why this release ships many components in Technology Preview: to open the path to new use cases and to gather feedback.
We previously published an article about how we are working towards making Red Hat OpenShift an OpenTelemetry (OTLP)-native platform. While there's still a lot to do, we are working every day with our users to build a truly open platform.
In this article, we outline the components that are enabled in this release; you can read our official documentation to learn more. We also provide a number of links to the upstream documentation.
Red Hat build of OpenTelemetry 3.2 is based on the open source OpenTelemetry operator release v0.100.1.
Host metrics receiver
Gaining access to important metrics of the host system, while being able to process and export them in various formats, is crucial. That's what the host metrics receiver is built for. Metrics related to CPU, disk, load, filesystem, memory, network and paging are now available for both the host system and per process.
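For illustration, here is a minimal sketch of a host metrics receiver configuration as it could appear in the collector configuration of an OpenTelemetryCollector custom resource. The scraper selection and collection interval are example values, not defaults, and the otlp exporter is assumed to be defined elsewhere.

```yaml
receivers:
  hostmetrics:
    collection_interval: 30s   # example interval
    scrapers:                  # enable only the scrapers you need
      cpu: {}
      memory: {}
      disk: {}
      filesystem: {}
      network: {}
      paging: {}

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [otlp]        # assumes an otlp exporter is configured elsewhere
```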
Kubernetes cluster receiver
The Kubernetes cluster receiver (or k8sclusterreceiver) can help you to gather cluster-level metrics and entity events from the Kubernetes API server. It utilizes the Kubernetes API to stay informed about updates. Authentication for this receiver is exclusively supported through service accounts.
Everything from the collection interval to the Kubernetes distribution is configurable. On OpenShift, you can collect OpenShift-specific metrics in addition to the standard Kubernetes ones by setting the distribution value to openshift.
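A minimal sketch of such a configuration might look like the following; the collection interval is an example value and the otlp exporter is assumed to be defined elsewhere.

```yaml
receivers:
  k8s_cluster:
    auth_type: serviceAccount   # only service accounts are supported
    distribution: openshift     # adds OpenShift-specific metrics
    collection_interval: 10s    # example value

service:
  pipelines:
    metrics:
      receivers: [k8s_cluster]
      exporters: [otlp]         # assumes an otlp exporter is configured elsewhere
```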
Kubernetes events receiver
The Kubernetes events receiver, also known as k8seventsreceiver, enables users to collect all events from the Kubernetes API server. These events can be forwarded via the OpenTelemetry Protocol (OTLP) or other formats, allowing users to build a customized observability platform for their Kubernetes environment. Furthermore, the authentication type and namespaces within the Kubernetes events receiver are configurable.
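As a sketch, a configuration that forwards events from a couple of namespaces might look like the following; the namespace names are placeholders and the otlp exporter is assumed to be defined elsewhere.

```yaml
receivers:
  k8s_events:
    auth_type: serviceAccount
    namespaces: [default, my-app]   # example namespaces; omit to watch all namespaces

service:
  pipelines:
    logs:
      receivers: [k8s_events]
      exporters: [otlp]             # assumes an otlp exporter is configured elsewhere
```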
Kubernetes objects receiver
The Kubernetes objects receiver, also known as k8sobjectsreceiver, has the ability to either pull or watch objects from the Kubernetes API server and export them as logging signals.
This receiver collects the same information you would get by running 'kubectl get po <my-po> -oyaml' against the Kubernetes API server, but in an automated way. The receiver can gather any type of Kubernetes object, selecting them based on labels, fields or versions.
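As an example, here is a sketch that periodically pulls pods and watches events; the resource names, label selector and interval are illustrative, and the otlp exporter is assumed to be defined elsewhere.

```yaml
receivers:
  k8sobjects:
    objects:
      - name: pods                  # plural resource name
        mode: pull
        label_selector: app=my-app  # example selector
        interval: 60s
      - name: events
        mode: watch

service:
  pipelines:
    logs:
      receivers: [k8sobjects]
      exporters: [otlp]             # assumes an otlp exporter is configured elsewhere
```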
Kubelet stats receiver
The Kubelet stats receiver, also known as kubeletstatsreceiver, is able to pull node, pod, container and volume metrics from the Kubernetes API server on a kubelet and further process them in the corresponding metrics pipeline.
Available statistics range from container CPU time and utilization, through filesystem capacity, usage and availability, all the way to typical Kubernetes node status metrics, including those for pods and volumes. All crucial metrics are now more easily accessible.
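A minimal sketch of a kubelet stats receiver configuration follows; it assumes the K8S_NODE_NAME environment variable is injected into the collector pod (for example via the Downward API), and the interval and TLS settings are example values only.

```yaml
receivers:
  kubeletstats:
    collection_interval: 20s                        # example value
    auth_type: serviceAccount
    endpoint: "https://${env:K8S_NODE_NAME}:10250"  # assumes the env var is set via the Downward API
    insecure_skip_verify: true                      # example only; prefer proper TLS verification

service:
  pipelines:
    metrics:
      receivers: [kubeletstats]
      exporters: [otlp]                             # assumes an otlp exporter is configured elsewhere
```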
Load-balancing exporter
The load-balancing exporter, also known as loadbalancingexporter, helps to consistently forward spans, metrics and logs to the desired destinations based on user-defined routing keys such as service, traceID, resource or even individual metric names. In this way, users can define their telemetry pipelines all the way to the backend where they want particular data to end up.
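For illustration, a sketch that routes traces by traceID to a set of collector backends discovered through DNS; the headless service hostname is a placeholder and the otlp receiver is assumed to be defined elsewhere.

```yaml
exporters:
  loadbalancing:
    routing_key: traceID               # or service, resource, metric
    protocol:
      otlp:
        tls:
          insecure: true               # example only
    resolver:
      dns:
        hostname: otelcol-headless.observability.svc.cluster.local  # placeholder headless service

service:
  pipelines:
    traces:
      receivers: [otlp]                # assumes an otlp receiver is configured elsewhere
      exporters: [loadbalancing]
```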
Cumulative to delta processor
The cumulative to delta processor converts monotonic, cumulative sum metrics and histograms into their delta equivalents. This processor is useful for users who want to export metrics to backends that expect delta temporality rather than the cumulative temporality that is common in Prometheus-style metrics.
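A minimal sketch follows; the metric name and match type are examples, and the otlp receiver and exporter are assumed to be defined elsewhere.

```yaml
processors:
  cumulativetodelta:
    include:
      metrics:
        - system.network.io   # example metric name
      match_type: strict

service:
  pipelines:
    metrics:
      receivers: [otlp]                 # assumed to be configured elsewhere
      processors: [cumulativetodelta]
      exporters: [otlp]                 # assumed to be configured elsewhere
```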
Forward connector
The forward connector helps users put together pipelines of the same type. This is especially useful when several signals of the same type are collected via different mechanisms, but are expected to be merged in the same processing pipeline and/or exporter.
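As a sketch, here are two trace pipelines that feed a common pipeline through the forward connector; the receiver and exporter names are placeholders assumed to be configured elsewhere.

```yaml
connectors:
  forward: {}

service:
  pipelines:
    traces/app1:
      receivers: [otlp]      # placeholder receiver
      exporters: [forward]
    traces/app2:
      receivers: [jaeger]    # placeholder receiver
      exporters: [forward]
    traces:
      receivers: [forward]
      exporters: [otlp]      # assumes an otlp exporter is configured elsewhere
```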
Journald receiver
The journald receiver is able to parse journald events from the systemd journal as logging events. It lets users configure many capabilities, such as filtering by priority, defining the list of units to read entries from, or even defining retry policies.
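A minimal sketch of a journald receiver configuration; the unit names are examples, the collector needs access to the node's journal, and the otlp exporter is assumed to be defined elsewhere.

```yaml
receivers:
  journald:
    units:          # example units to read entries from
      - kubelet
      - crio
    priority: info  # filter by priority
    retry_on_failure:
      enabled: true

service:
  pipelines:
    logs:
      receivers: [journald]
      exporters: [otlp]   # assumes an otlp exporter is configured elsewhere
```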
Filelog receiver
As easy (and complex) as it sounds: the filelog receiver helps to tail and parse logs from files. Many rules, filters and options are available to make sure the right amount of data and the desired information are collected. And thanks to this awesome community, parsing container logs will become easier in upcoming releases.
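A minimal sketch that tails container log files on a node; the paths are illustrative and the otlp exporter is assumed to be defined elsewhere.

```yaml
receivers:
  filelog:
    include:
      - /var/log/pods/*/*/*.log                # example path for container logs
    exclude:
      - /var/log/pods/*/otel-collector/*.log   # avoid collecting the collector's own logs
    start_at: end
    include_file_path: true

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [otlp]                        # assumes an otlp exporter is configured elsewhere
```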
OIDC Auth extension
The OpenTelemetry collector is not only about receivers and exporters. Extensions add capabilities on top of the existing components of the collector. This helps to avoid creating a separate distribution or changing the code of the collector.
In this case, the Authenticator - OIDC extension helps to configure authentication capabilities for receivers via the implementation of a configauth.ServerAuthenticator. It does so by authenticating incoming requests to receivers using the OpenID Connect (OIDC) protocol: it validates the ID token in the authorization header against the issuer and updates the authentication context of the incoming request.
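Here is a sketch of wiring the OIDC authenticator into an OTLP gRPC receiver; the issuer URL and audience are placeholders, and the otlp exporter is assumed to be defined elsewhere.

```yaml
extensions:
  oidc:
    issuer_url: https://keycloak.example.com/realms/otel   # placeholder issuer
    audience: otel-collector                               # placeholder audience

receivers:
  otlp:
    protocols:
      grpc:
        auth:
          authenticator: oidc

service:
  extensions: [oidc]
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]   # assumes an otlp exporter is configured elsewhere
```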
File storage extension
The File storage extension helps to persist state to the local file system, which makes it a great tool for caching observability data on the filesystem.
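A minimal sketch that uses the extension to back an exporter's sending queue; the directory and endpoint are placeholders, and the directory must exist and be writable by the collector.

```yaml
extensions:
  file_storage:
    directory: /var/lib/otelcol/file_storage   # placeholder path, must be writable

exporters:
  otlp:
    endpoint: backend.example.com:4317         # placeholder endpoint
    sending_queue:
      storage: file_storage                    # persist the queue on disk

service:
  extensions: [file_storage]
  pipelines:
    traces:
      receivers: [otlp]                        # assumes an otlp receiver is configured elsewhere
      exporters: [otlp]
```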
Loki exporter
In Developer Preview, we are also enabling the Loki exporter to allow users to ship their logs to Loki instances.
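A minimal sketch of a Loki exporter configuration; the push endpoint is a placeholder and the log receiver is assumed to be defined elsewhere.

```yaml
exporters:
  loki:
    endpoint: https://loki-gateway.example.com/loki/api/v1/push   # placeholder push endpoint

service:
  pipelines:
    logs:
      receivers: [filelog]   # assumes a log receiver is configured elsewhere
      exporters: [loki]
```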
A note about syntax
In this release, use of empty values and null keywords in the OpenTelemetry Collector custom resource is deprecated and is planned to become unsupported in a future release. Red Hat will provide bug fixes and support for this syntax during the current release lifecycle, but it will eventually become unsupported. As an alternative to empty values and null keywords, you can update the OpenTelemetry Collector custom resource to contain empty JSON objects (open-closed braces, {}) instead.
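For example, a receiver that takes no configuration would change as follows (zipkin is used here purely as an illustration):

```yaml
# Deprecated: empty or null value
receivers:
  zipkin:

# Preferred: empty JSON object
receivers:
  zipkin: {}
```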
What’s new in distributed tracing?
As previously announced, we made the decision to deprecate Jaeger, and our support will reach end of life by November 2025. We are gathering feedback from users to help simplify the migration process.
According to user reports, one of the most valuable approaches for getting started with distributed tracing or aiding in troubleshooting sessions was the Jaeger all-in-one approach with in-memory storage. It configured a Jaeger instance with minimal resource requirements, ensuring a successful installation on a default OpenShift deployment via the AllInOne deployment strategy. In this release, we are enabling the Technology Preview of the Tempo monolithic mode via the operator, which provides a similar experience to the Jaeger all-in-one deployment. We'll talk more about this in another article soon.
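As a rough sketch, a minimal TempoMonolithic custom resource with in-memory trace storage might look like the following; the name is a placeholder and the in-memory backend shown is just one of the supported storage options.

```yaml
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoMonolithic
metadata:
  name: sample          # placeholder name
spec:
  storage:
    traces:
      backend: memory   # in-memory storage, similar in spirit to Jaeger all-in-one
```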
The Red Hat OpenShift distributed tracing platform 3.2 is based on the open source Grafana Tempo 2.4.1 version via the Tempo operator release v0.10.0.
Links and further reading
- Check the Release Notes for both products in the official Observability documentation
- The Path to Distributed Tracing: an OpenShift Observability Adventure
- The Red Hat build of OpenTelemetry reaches general availability
- Red Hat OpenShift as OpenTelemetry (OTLP) native platform
- Developer Preview Support Scope - Red Hat Customer Portal
- Technology Preview Features - Scope of Support - Red Hat Customer Portal
- Developer and Technology Previews: How they compare - Red Hat Customer Portal
- Enhanced observability in OpenShift 4.15
- Grafana Tempo 2.4 release: TraceQL metrics, tiered caching, and TCO improvements
- Introducing the new container log parser for OpenTelemetry Collector
We value your feedback, which is crucial for enhancing our products. Share your questions and recommendations with us using the Red Hat OpenShift feedback form.
About the author
Jose is a Senior Product Manager at Red Hat OpenShift, with a focus on Observability and Sustainability. His work centers on managing the OpenTelemetry, distributed tracing and power monitoring products in Red Hat OpenShift.
His expertise was built in previous roles as a Software Architect, Tech Lead and Product Owner in the telecommunications industry, all the way from the software programming trenches, where agile ways of working, a sound CI platform and solid testing practices with observability at the center proved to be the main principles driving every modern, successful project.
With a strong scientific background in physics and a PhD in Computational Materials Engineering, curiosity, openness and a pragmatic view are always to be expected. Beyond the boardroom, he is a C++ enthusiast and a creative force, contributing symphonic and electronic touches as a keyboardist in metal bands, when he is not playing videogames or lowering lap times in his simracing cockpit.