AIML Lifecycle Management in ORAN RIC

The RIC (RAN Intelligent Controller) is a suite of software applications that enables SDN-style control in Open RAN networks. It is responsible for RAN operation and optimization procedures such as radio connection management, mobility management, QoS management, edge services, interference management, radio resource management, higher-layer procedure optimization, and policy optimization, and it provides guidance, parameters, and policies to the RAN.

AI/ML models can be used to carry out these RAN operation and optimization procedures. These models process historical and current data events and learn patterns from them, and applying AI/ML within Open RAN helps reduce human error.

RIC AI/ML Workflow

The AI/ML lifecycle procedure and interface framework is specified in the O-RAN document “O-RAN.WG2.AIML-v01.03 Workflow Description and Requirement”. It describes the phases that are expected to apply to any ML-assisted solution planned in the O-RAN architecture:

  • ML Model and Inference Host Capability Query/Discovery: This step involves the discovery of various capabilities and properties of the ML inference host and assisted solution.
  • ML Model Training and Generation: This phase covers the design-time selection and training of an ML model for a specific ML-assisted solution. The ML model and relevant metadata are selected and onboarded into the ML training host, where model training is initiated. Once the model is trained and validated, it is published back into the SMO/Non-RT RIC catalogue.
  • ML Model Selection: The ML designer checks whether a trained ML model from the catalogue can be deployed in the ML inference host for the given ML-assisted solution.
  • ML Model Deployment and Inference: The selected AI/ML model is deployed via a containerized image to the ML inference host.
  • ML Model Performance Monitoring: The performance of the ML model (e.g., accuracy, running time, network KPIs) is monitored by the ML inference host and actors, and this feedback is reported to the SMO/Non-RT RIC.
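
To show how these phases fit together end to end, here is a minimal Python sketch. It is purely illustrative: the ModelCatalogue, InferenceHost, and ThresholdModel classes, their methods, and the “traffic-steering” model name are invented for this example and do not correspond to any actual O-RAN, SMO, or RIC API.

```python
# Minimal, hypothetical sketch of the five workflow phases above.
# All class and method names are illustrative only, not O-RAN-specified APIs.

class ModelCatalogue:
    """Stands in for the SMO/Non-RT RIC model catalogue."""
    def __init__(self):
        self._store = {}
    def publish(self, name, model, metadata):
        self._store[name] = (model, metadata)
    def lookup(self, name):
        return self._store[name]

class InferenceHost:
    """Stands in for the ML inference host (Non-RT or Near-RT RIC)."""
    def capabilities(self):
        return {"gpu": False, "max_model_mb": 64}     # discovered in phase 1
    def deploy(self, model):
        self._model = model                           # e.g. delivered as a containerized image
    def monitor(self):
        return {"accuracy": 0.93, "latency_ms": 4.2}  # fed back to the SMO/Non-RT RIC

class ThresholdModel:
    """Toy stand-in for an ML model selected for an ML-assisted solution."""
    def fit(self, samples):
        self.threshold = sum(samples) / len(samples)
    def score(self, labelled):
        return sum((x > self.threshold) == y for x, y in labelled) / len(labelled)

catalogue, host = ModelCatalogue(), InferenceHost()

# Phase 1: capability query/discovery of the inference host
caps = host.capabilities()

# Phase 2: training and validation in the training host, then publication to the catalogue
model = ThresholdModel()
model.fit([0.2, 0.4, 0.6, 0.8])
if model.score([(0.7, True), (0.9, True), (0.1, False)]) < 0.9:   # never publish an unvalidated model
    raise RuntimeError("model failed validation")
catalogue.publish("traffic-steering", model, {"requires": caps})

# Phases 3 and 4: selection from the catalogue and deployment to the inference host
selected, _meta = catalogue.lookup("traffic-steering")
host.deploy(selected)

# Phase 5: performance monitoring feedback
print(host.monitor())
```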

RIC AI/ML Lifecycle Management Best Practices

Figure: AI/ML lifecycle in Open RAN RIC

Some best practices for RIC AI/ML lifecycle management are listed below.

  • Offline Learning as Best Practice: O-RAN recommends having some form of offline learning as a best practice, even in scenarios typically associated with online learning such as reinforcement learning.
  • Training and Testing Before Deployment: Models must be trained and tested before being deployed in the network. The document specifies that a completely untrained model should not be deployed (a sketch of such an offline-training and pre-deployment gate follows this list).
  • Modular Design of ML Applications: ML applications should be designed in a modular manner, allowing them to be decoupled from one another. This design principle includes the ability of applications to share data without knowing each other’s specific data requirements and without understanding the source or nature of the data (see the publish/subscribe sketch after this list).
  • Service Provider Flexibility in Deployment: The criteria for determining the deployment scenario for a given ML application may vary between service providers. Therefore, it is recommended that service providers have the flexibility to decide whether an ML application should be deployed to a Non-RT RIC or a Near-RT RIC as its inference host.
  • Optimization and Compilation of ML Models for Inference: To improve execution efficiency and inference performance in the inference host, ML models should be optimized and compiled with consideration of the inference host’s hardware capabilities (see the export and quantization sketch after this list).
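
The offline-learning and train-and-test practices above can be illustrated with a short sketch: a toy classifier is trained on logged (offline) measurements, evaluated on a held-out set, and cleared for deployment only if it passes an accuracy bar. The data, the threshold classifier, and the 0.9 accuracy gate are all invented for illustration; the O-RAN document does not prescribe a specific gate.

```python
# Offline training on logged data, plus a test gate before any deployment decision.
import random

random.seed(0)

# Logged (offline) network measurements: (cell load, was the cell congested?).
logged = [(x, x > 0.6) for x in (random.random() for _ in range(200))]
train, test = logged[:150], logged[150:]

# "Train" a trivial threshold classifier offline on the logged data.
pos = [x for x, y in train if y]
neg = [x for x, y in train if not y]
threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(load):
    return load > threshold

# Evaluate on held-out data before deciding anything about deployment.
accuracy = sum(predict(x) == y for x, y in test) / len(test)

# Gate: an untrained or poorly performing model must not go into the network.
if accuracy >= 0.9:
    print(f"accuracy {accuracy:.2f}: publish to catalogue and deploy")
else:
    print(f"accuracy {accuracy:.2f}: keep training offline, do not deploy")
```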

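The modular-design principle can be sketched with a simple publish/subscribe pattern: two applications exchange data through a topic-based broker, so neither needs to know the other's identity, data requirements, or data source. The DataBroker class and the topic name are invented for this example and are not an O-RAN interface.

```python
# Hypothetical sketch of decoupled ML applications sharing data through a simple
# in-process, topic-based broker; neither app knows the other's identity or data needs.
from collections import defaultdict

class DataBroker:
    """Topic-based publish/subscribe between otherwise decoupled applications."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self._subscribers[topic]:
            callback(payload)

broker = DataBroker()

# An ML application consumes cell KPIs without knowing who produces them.
broker.subscribe("cell.kpi", lambda kpi: print("inference input:", kpi))

# A measurement application publishes KPIs without knowing who consumes them.
broker.publish("cell.kpi", {"cell_id": 17, "prb_utilisation": 0.82})
```
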
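As one illustration of optimizing and compiling a model for a particular inference host, the sketch below exports a small PyTorch model to ONNX and applies dynamic INT8 quantization for a CPU-only host. This is just one possible toolchain, assuming torch and onnxruntime are available; O-RAN does not mandate these tools, and the model architecture and file names are illustrative.

```python
# Sketch (one possible toolchain, not mandated by O-RAN): export a trained PyTorch
# model to ONNX and apply dynamic INT8 quantization so it runs efficiently on a
# CPU-only inference host.
import torch
import torch.nn as nn
from onnxruntime.quantization import quantize_dynamic, QuantType

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()

# Compile/serialize the model into a portable format for the inference host.
dummy_input = torch.randn(1, 16)
torch.onnx.export(model, dummy_input, "ric_model.onnx")

# Optimize for the target hardware: 8-bit weights for a CPU-only host.
quantize_dynamic("ric_model.onnx", "ric_model.int8.onnx", weight_type=QuantType.QInt8)
```
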
References

  • O-RAN.WG2.AIML-v01.03 Workflow Description and Requirement
