Growing revenue and reducing costs are two key challenges that telecommunication service providers need to solve in order to deliver tangible business value.
These challenges are not new, but solving them increasingly depends on emerging technologies used to build networks and deliver compelling services at scale—quickly, cost-effectively and consistently.
Extending reach into new business opportunities requires more flexibility in managing and operating existing service provider networks and services. Of course, this flexibility has to be balanced with consistency, which can be difficult when networks span different cloud environments. Achieving some measure of consistency helps simplify operational complexity and accelerate time-to-value.
Deploying and managing networks at scale also needs to be done efficiently to optimize costs. Networks that are open, software-defined, automated and built on cloud-native principles help create new opportunities for revenue and simplify operational processes, lowering costs through increased efficiency and higher productivity.
Putting AI to work
To capture these opportunities, service providers need to operationalize AI at scale. Operationalizing AI has its own challenges, including the complexity of integrating with an array of data sources and legacy systems and linking to automation capabilities.
If service providers are successful with operationalizing AI, it can become a fundamental framework to enhance automation, operational intelligence and network agility. It can also lead to more stable and predictable service performance that improves customer experiences and increases revenue.
Bringing AI into the RAN
As part of the ongoing development of the radio access network (RAN) architecture, network functions have been separated or disaggregated, allowing them to be virtualized or cloudified to run on cloud platforms. Along with the introduction of open RAN, these changes are best served by an open source approach and the use of a common cloud-native telco platform.
Deploying and operating RAN and AI workloads on a cloud-native platform provides greater flexibility to launch innovative services to consumers, enterprises and within industries in ways that aren't possible with purpose-built, traditional RAN.
Using the RAN intelligent controller (RIC) for RAN optimization
AI-RAN will maximize the use of AI for enhanced RAN network orchestration and optimization. The near real-time RIC will enhance the performance, scalability and security posture of the RAN. This is achieved with real-time resource optimization that continuously adjusts network parameters for optimal performance.
As outlined in our collaboration with VVDN, RIC xApps can use received metrics to detect anomalous devices in the RAN based on signal strength and throughput, then use AI to identify the best neighbour cells and trigger automatic handovers for a smoother user experience.
The deployment of the VVDN RIC on Red Hat OpenShift offers streamlined scalability with the automatic scaling of RIC components to help meet demand and support high availability.
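As a simplified illustration of the xApp logic described above (not the VVDN implementation, and with hypothetical field names and thresholds), an anomaly-and-handover decision step might look like this:

```python
from dataclasses import dataclass

# Hypothetical per-UE measurement report. Field names and thresholds are
# illustrative only; they are not taken from the VVDN xApp or any O-RAN spec.
@dataclass
class UEReport:
    ue_id: str
    serving_rsrp_dbm: float       # serving-cell signal strength
    throughput_mbps: float        # measured downlink throughput
    neighbour_rsrp_dbm: dict      # neighbour cell id -> RSRP

def is_anomalous(report, rsrp_floor=-110.0, throughput_floor=1.0):
    """Flag a UE whose signal strength and throughput have both degraded."""
    return (report.serving_rsrp_dbm < rsrp_floor
            and report.throughput_mbps < throughput_floor)

def best_neighbour(report):
    """Pick the strongest neighbour cell as a handover candidate."""
    if not report.neighbour_rsrp_dbm:
        return None
    return max(report.neighbour_rsrp_dbm, key=report.neighbour_rsrp_dbm.get)

def handover_decisions(reports):
    """Return (ue_id, target_cell) pairs for anomalous UEs with a candidate."""
    return [(r.ue_id, best_neighbour(r))
            for r in reports if is_anomalous(r) and best_neighbour(r)]
```

In a real xApp this decision logic would consume E2 metrics and emit control actions through the near real-time RIC rather than returning a list, but the shape of the loop—measure, flag, select a target, trigger handover—is the same.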
Using a common cloud-native platform for both RAN and AI workloads
In addition to using AI-RAN to dynamically adjust network parameters, bringing together AI and RAN on a common cloud-native platform will help service providers operate diverse applications with greater consistency and flexibility.
As outlined in our collaboration with SoftBank, enhanced multicluster network orchestration and optimization is made possible for virtualized RAN and AI-enabled workloads. This requires the scalable deployment and operation of both AI and RAN applications on a common cloud-native telco cloud platform. With intelligent orchestration capabilities, service providers can optimize the placement of compute- and GPU-intensive workloads while maintaining the necessary performance.
Using AI to build autonomous intelligent networks
An autonomous intelligent network is a fully automated, zero-touch deployment and operations infrastructure for information and communication technology (ICT) services that is self-configuring, self-healing, self-optimizing and self-evolving.
To function properly, an autonomous intelligent network has to be highly automated. Data analytics and AI models provide deep learning for advanced decision making, while autonomy and governance frameworks enforce privacy and usage policies so that deployment and operational decisions and actions remain compliant.
There are various techniques and approaches involved in building an autonomous intelligent network:
- Event-driven automation (EDA), the process of responding automatically to changing conditions to help resolve issues faster and reduce routine, repetitive tasks
- GitOps, a set of practices for managing infrastructure and application configurations via a single source of truth
- Artificial intelligence operations (AIOps), an IT operations approach and an integrated software system that uses data science to augment manual problem solving and fault resolution
Service providers need to integrate these techniques to build autonomous intelligent networks that can evolve at the speed of innovation.
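To make the event-driven piece concrete, here is a minimal sketch of an event-to-remediation dispatcher. The event types and actions are illustrative; a production deployment would use a framework such as Event-Driven Ansible rather than a hand-rolled loop like this:

```python
# Minimal event-driven automation sketch: match incoming network events
# against a rule table and dispatch a remediation action. Event names
# and actions below are hypothetical examples, not a real product API.

def restart_function(event):
    """Remediation: restart the network function that raised the event."""
    return f"restarting {event['source']}"

def scale_out(event):
    """Remediation: add capacity for the overloaded component."""
    return f"scaling out {event['source']}"

# Rule table mapping an event type to its automated remediation.
RULES = {
    "cnf_crashloop": restart_function,
    "high_cpu": scale_out,
}

def handle(event):
    """Dispatch one event to its remediation, or ignore it if no rule matches."""
    action = RULES.get(event["type"])
    return action(event) if action else None
```

The same pattern underpins GitOps and AIOps integration: events flow in from observability tooling, and the response is a declarative, auditable action rather than a manual intervention.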
Collecting and curating data
In order for AI to be used effectively as part of an autonomous intelligent network it needs data. Data sources within a network are varied, and can include alarms, metrics and logs. This data is presented in a variety of formats, which require aggregation, structure and cleaning prior to being consumed by an AI model.
AI uses this structured, curated data to generate insights that are used to identify and remediate network issues and to accelerate root cause analysis.
Investing in good data engineering practices is crucial for maximizing the business value of AI. It leads to improved model accuracy, efficiency and speed, and simplifies security and compliance.
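As a small example of the curation step described above, the sketch below normalizes alarm and log records from two different sources into one common schema before they reach a model. The field names and formats are illustrative assumptions, not a real telemetry schema:

```python
# Hypothetical curation step: alarms arrive as dicts (e.g. from an SNMP
# collector) and logs arrive as text lines; both are normalized into one
# record shape so downstream AI models see consistent, structured input.

def normalise_alarm(raw):
    """Alarm dict with assumed keys ts/device/sev/text -> common record."""
    return {
        "timestamp": raw["ts"],
        "source": raw["device"],
        "severity": raw["sev"].lower(),
        "message": raw["text"],
    }

def normalise_log(line):
    """Log line '<timestamp> <source> <SEVERITY> <message>' -> common record."""
    timestamp, source, severity, message = line.split(" ", 3)
    return {
        "timestamp": timestamp,
        "source": source,
        "severity": severity.lower(),
        "message": message,
    }
```

Real pipelines add deduplication, enrichment (topology, inventory) and validation on top of this, but the principle holds: one consistent schema, whatever the source format.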
Using data to train foundation models
Foundation models (FMs), also known as large language models (LLMs), are large AI models pre-trained on extensive and diverse datasets, making them versatile for various tasks. Foundation models can bring a multitude of benefits to service providers when they are trying to infuse AI into new or existing applications and systems:
- Time to value: FMs reduce the initial training burden
- Accuracy: FM responses become more accurate as more data is used during training
- Accessibility: FMs make advanced AI capabilities available to non-experts
- Versatility: FMs are usually trained to be general purpose, so they can be used for a wide range of tasks and applications
Since FMs are primarily general purpose, however, using one huge model to support all service provider operations can be complex and expensive. As a result, small language models (SLMs) that are purpose-built and customized with smaller, specific data sets tend to be more efficient and cost effective.
For example, a simple algorithm can be used to train and deploy an SLM to detect anomalies within network data. The SLM can be used in combination with an LLM for root cause analysis. This can be further enhanced with dynamic retrieval augmented generation (RAG) that provides specific context to the analysis. This can help improve network efficiency and enhance service reliability while automating manual processes to reduce operational costs.
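As an illustration of the "simple algorithm" end of that pipeline, the sketch below flags anomalous metric samples with a z-score test. This is a stand-in for a trained SLM detector; the LLM-based root cause analysis and RAG stages it would feed are not shown:

```python
from statistics import mean, stdev

def anomalies(samples, threshold=3.0):
    """Return indices of samples more than `threshold` standard
    deviations from the mean of the series.

    An illustrative baseline detector for network metrics (latency,
    throughput, error rates); a deployed system would use a trained
    model and a rolling window rather than a whole-series statistic.
    """
    if len(samples) < 2:
        return []
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]
```

The flagged indices (and the raw records behind them) are exactly the kind of structured evidence an LLM can then reason over during root cause analysis.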
How Red Hat can help
Service providers are looking for ways to expedite the deployment of AI and autonomous intelligent networks and the development of models and AI-enabled applications. This involves:
- Maintaining a balance between encouraging innovation and keeping costs in check
- Modernizing applications with AI to accelerate innovation and development
- Having access to a scalable, flexible and consistent platform that can integrate with existing systems and reduce network complexity
- Considering data sovereignty and privacy, and how much of this data can be effectively and responsibly used for AI projects
Red Hat believes that autonomous intelligent networks built on a common cloud-native platform provide a robust evolution path for innovative services, achieve lower operational expenditure and provide higher flexibility and security. In addition, Red Hat can help service providers more consistently use AI to automate operations across their entire infrastructure to improve performance and overall service experience.
Learn more
Success with AI starts with having a common cloud-native platform to build flexible, scalable and automated networks.
About the author
Rob McManus is a Principal Product Marketing Manager at Red Hat. McManus works across complex matrix-style teams tasked with defining and positioning telecommunication service provider and partner solutions, with a focus on network transformation that includes 5G, vRAN and the evolution to cloud-native network functions (CNFs).