During the last KubeCon North America in San Diego, a cross-vendor team of engineers from Red Hat and several other companies rolled a half-rack of servers and a custom-built Faraday cage onto the keynote stage and live-demoed a full 5G/4G network connected to two additional deployments in Canada and France, all containerized and running on Red Hat OpenShift Container Platform clusters.
This live demo was the culmination of an intense, multi-month community effort supported by Linux Foundation Networking, and we had the honor of working on the site located in France at Eurecom, a telecommunications research institute that initiated and remains the main contributor to the OpenAirInterface 5G/4G project. In this post we explore how that 5G network was constructed and deployed on the Kubernetes-based open source OpenShift platform.
Open source has changed the way we understand software development, and that change of mindset arrived in the telecom industry about five years ago with Network Function Virtualization (NFV): the concept of running traditional telco hardware appliances (routers, firewalls, load balancers) on commodity servers in the form of virtual machines. Today, the industry is participating in recently created consortiums such as OpenAirInterface and O-RAN, which aim to evolve radio access networks in an open way, with much of that work now focused on containers.
This is where Red Hat’s engineers bring their expertise in open source software development, particularly around Kubernetes. Kubernetes is described as a “portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation.” OpenShift is Red Hat’s distribution of Kubernetes that allows enterprises to run their applications on the hybrid cloud. With enterprises around the globe adopting Kubernetes and OpenShift as a de facto standard platform for deploying applications, can we do the same for the new generation of mobile networks?
How we kubernetized a 5G network
Let’s get into some of the details of how we deployed OpenAirInterface on OpenShift. The OpenAirInterface project fosters a community of industrial as well as research contributors for software and hardware development for the core network (EPC) and access network and user equipment (EUTRAN) of 3GPP cellular networks. The main repository with the container image build recipes, Kubernetes manifests, and helper scripts can be found at the project’s GitHub repo.
Our first step was to containerize all OpenAirInterface components and produce consistent and reproducible container image builds. For this particular demo, we used OpenAirInterface’s Evolved Packet Core (EPC) for the core network. The EPC consists of three main components:
- Mobility Management Entity (MME): authenticates and authorizes users and manages both their current session state as well as mobility state, i.e. which base station the user is attached to and how to hand over to a different base station.
- Home Subscriber Server (HSS): the master database that stores all users’ subscription profiles, authentication keys, etc.
- Serving & Packet Data Network Gateway (S/P-GW): serves as entry and exit point for traffic, enforces the operator’s traffic policies, acts as mobility anchor and routes traffic to a user’s current base station, and so forth.
The S/P-GW was deployed as one component processing the user traffic and another component handling the control signalling. This so-called Control / User Plane Split (CUPS) architecture allows scaling control traffic and user traffic capacity independently.
On the Radio Access Network side, the Evolved Node B (eNodeB or eNB) is the element of a 4G network that communicates directly and wirelessly with mobile handsets. It uses different protocols to connect to the MME and S/P-GW and handles processing of the radio signals. In a 5G network, this same component is called the Next Generation Node B (gNB). It features advanced Software Defined Radio (SDR) technology to achieve better performance and flexibility.
The project’s code repository contains the scripts necessary to build all these components from source into small, ready-to-run container images. For the container base layer, we used Red Hat’s Universal Base Image (UBI), a lightweight enterprise-grade base image with curated, hardened, and stabilized package content that allows developers to focus on their applications while having the option to run images in a fully vendor-supported manner.
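As a rough illustration of this pattern, a multi-stage build on UBI looks like the following. This is a simplified sketch, not one of the project's actual recipes — the component, paths, and build script are assumptions:

```dockerfile
# Hypothetical, simplified Containerfile for an OpenAirInterface component.
# Stage 1: build from source on the full UBI image.
FROM registry.access.redhat.com/ubi8/ubi AS builder
RUN yum install -y gcc-c++ cmake make git && yum clean all
COPY . /oai
WORKDIR /oai
RUN ./build/build_oai --eNB        # placeholder for the real build step

# Stage 2: copy only the built binaries onto the minimal base image,
# yielding a small, ready-to-run container.
FROM registry.access.redhat.com/ubi8/ubi-minimal
COPY --from=builder /oai/targets/bin /opt/oai/bin
ENTRYPOINT ["/opt/oai/bin/lte-softmodem"]
```

The multi-stage split is what keeps the final images small: compilers and build dependencies never reach the runtime layer.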
Next we worked on deploying our own 5G/4G network on the OpenShift Kubernetes distribution. The main challenges we had to overcome were typical for migrating software designed to run on physical hosts to Kubernetes: ensuring a service makes no assumptions about the specific host it runs on or where other services are deployed relative to it. This includes looking up services from the Kubernetes cluster’s Domain Name System (DNS), ensuring services gracefully restart, retry, etc.
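For example, instead of a hard-coded peer address, the MME can reach the HSS through a stable Service name resolved by cluster DNS. This is an illustrative sketch, not the project's actual manifest — the names and namespace are assumptions:

```yaml
# Hypothetical Service fronting the HSS pods; the MME is then configured
# with the stable DNS name "hss.oai.svc.cluster.local" instead of a host IP.
apiVersion: v1
kind: Service
metadata:
  name: hss
  namespace: oai
spec:
  selector:
    app: oai-hss
  ports:
    - name: s6a
      port: 3868        # Diameter (S6a) port; often carried over SCTP,
      protocol: TCP     # TCP shown here for simplicity
```

Because the DNS name stays stable across pod restarts and rescheduling, the MME no longer cares which node the HSS lands on.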
Further, OpenShift and Red Hat Enterprise Linux (RHEL), as enterprise-grade Kubernetes and Linux distributions respectively, default to a more locked-down security model. Like many workloads that make extensive use of kernel or hardware features, however, OpenAirInterface services assumed they had full root-level access. We instead ran them as regular users, granting the least amount of privilege through specific system capabilities. The necessary Kubernetes manifests are in the project’s code repository.
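In Kubernetes terms, this translates into a pod `securityContext` along these lines. The user ID and the capability set here are illustrative assumptions, not the exact values from the manifests:

```yaml
# Hypothetical container securityContext: run as an unprivileged user and
# grant only the specific kernel capabilities the service actually needs,
# instead of full root.
securityContext:
  runAsNonRoot: true
  runAsUser: 1001
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
    add: ["NET_ADMIN", "NET_RAW"]   # example capabilities, not the exact set
```

Dropping all capabilities first and adding back only what is needed keeps the attack surface close to that of an ordinary application pod.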
How we configured OpenShift to run the 5G network
Telco / 5G network functions are among the more demanding Kubernetes workloads, but they are not unique: customers in high performance computing, high frequency trading, industrial control, and other fields are asking for very similar sets of capabilities. This is why we at Red Hat develop these capabilities upstream alongside the rest of the Kubernetes community, so that they become native capabilities of Kubernetes and OpenShift rather than telco-specific extensions.
To support the 5G network in a production-like deployment, we configured OpenShift to segregate real-time and non-real-time compute workloads as well as management, control, and data plane traffic according to the following logical deployment architecture:
The “Distributed Unit” (DU) parts of the eNBs / gNBs are highly sensitive to latency and jitter, so they are deployed onto real-time-capable Kubernetes workers. These require a number of special configurations:
- BIOS configuration: When the hardware, firmware, or firmware settings of the host machine running the real-time workload introduce non-deterministic latency spikes, there is nothing the host OS or Kubernetes can do to mitigate them. Therefore, the first step is to eliminate hardware- and firmware-level sources of non-determinism, for example by disabling C-states (CPU power saving), P-states (CPU frequency scaling), EDAC (ECC memory scans), etc.
- Host OS configuration: Next, the host OS needs to run a low latency kernel with the real-time preempt patches and certain OS level tuning, such as enabling huge pages, isolating CPU cores, disabling timer ticks, disabling IRQ load balancing, etc. In OpenShift, which is running on immutable RHEL CoreOS hosts, this can be configured declaratively using Kubernetes MachineConfig resources to enable the real-time RHEL kernel and auto-tune the host using the tuned real-time profile.
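A MachineConfig for such a real-time worker pool might look roughly like this; the role label, isolated CPU range, and huge page counts are illustrative assumptions, not the demo's exact values:

```yaml
# Sketch of a MachineConfig for real-time worker nodes: switch to the
# real-time RHEL kernel and pass low-latency kernel arguments.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-rt-kargs
  labels:
    machineconfiguration.openshift.io/role: worker-rt   # assumed pool name
spec:
  kernelType: realtime          # enable the real-time RHEL kernel
  kernelArguments:
    - isolcpus=2-15             # isolate cores for real-time workloads
    - nohz_full=2-15            # disable timer ticks on isolated cores
    - default_hugepagesz=1G     # pre-allocate huge pages
    - hugepagesz=1G
    - hugepages=8
```

Because the configuration is declarative, the Machine Config Operator rolls it out to every node carrying the matching role label and keeps the immutable RHEL CoreOS hosts converged on it.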
- CPU Resource Management: Finally, to ensure Kubernetes places a real-time workload onto isolated cores on the real-time capable Kubernetes worker, we need to configure the static cpuManagerPolicy on the Kubelet, and set resource requests and limits for both CPU and memory.
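A minimal sketch of this, assuming a worker pool labeled for a custom kubelet config: the KubeletConfig enables the static policy, and a DU pod requests whole CPUs with requests equal to limits so it lands in the Guaranteed QoS class and receives exclusive, pinned cores. Names, images, and sizes are illustrative:

```yaml
# Hypothetical KubeletConfig enabling the static CPU manager policy.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: cpumanager-enabled
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: cpumanager-enabled   # assumed pool label
  kubeletConfig:
    cpuManagerPolicy: static
---
# A DU pod then asks for whole CPUs with requests == limits (Guaranteed QoS),
# which is what makes the static policy pin it to exclusive cores.
apiVersion: v1
kind: Pod
metadata:
  name: oai-du                       # illustrative name
spec:
  containers:
    - name: du
      image: example.com/oai-du:latest   # placeholder image
      resources:
        requests:
          cpu: "4"
          memory: 8Gi
        limits:
          cpu: "4"
          memory: 8Gi
```

Note that fractional CPU requests would disqualify the pod from exclusive core allocation even under the static policy.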
On the networking side, the following changes are required:
- Multiple Interfaces: Most telco deployments require a clean segregation of networks for control, user data, and management traffic. OpenShift Container Platform 4 supports this out of the box using Multus CNI. In our deployment, we use the Kubernetes cluster network for management traffic between the OpenAirInterface services, and create secondary networks to segregate the 3GPP control and data plane networks. The eNodeB (4G) and the gNodeB (5G) pods are connected to the USRP software-defined radios via dedicated, bonded interfaces.
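A secondary network of this kind can be sketched as a Multus NetworkAttachmentDefinition. The CNI plugin choice, master interface, and addressing below are assumptions for illustration, not the demo's actual configuration:

```yaml
# Hypothetical secondary network for 3GPP user-plane (S1-U) traffic,
# attached to a bonded host interface.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: s1u-net
  namespace: oai
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "bond0",
      "ipam": {
        "type": "static",
        "addresses": [{ "address": "192.168.10.10/24" }]
      }
    }
```

A pod attaches to the network through the `k8s.v1.cni.cncf.io/networks: s1u-net` annotation, receiving an extra interface alongside its cluster-network one.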
- SCTP: Some 3GPP protocols rely on SCTP for the network transport layer. Related OpenAirInterface services therefore open SCTP sockets to be able to communicate. That meant we had to enable the Kubernetes SCTP feature gate and whitelist and load the Linux SCTP kernel module on worker nodes. Thanks to OpenShift and RHEL CoreOS, this is again a matter of creating a MachineConfig and using labels and selectors to apply this configuration to all worker nodes.
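Such a MachineConfig writes two small files onto each worker: one that clears the SCTP blacklist entry and one that loads the module at boot. A sketch follows; in practice file contents are data-URL encoded, shown minimally here:

```yaml
# Sketch of a MachineConfig that un-blacklists and loads the SCTP kernel
# module on all nodes with the "worker" role.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: load-sctp-module
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
        - path: /etc/modprobe.d/sctp-blacklist.conf
          mode: 0644
          contents:
            source: data:,        # empty file overrides the blacklist entry
        - path: /etc/modules-load.d/sctp-load.conf
          mode: 0644
          contents:
            source: data:,sctp    # load the sctp module at boot
```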
What can radio hackers do with this?
Now that we have reviewed most of the software-side requirements, what else would you need to have a fully functional 5G/4G network?
In the end, a smartphone has to connect to the network via a radio unit and antenna built for a certain frequency band. Professional hardware for production networks comes with a steep price tag, though. Fortunately, the open hardware movement has led to a democratization of software-defined radio hardware and there are more and more radio hackers doing research on these technologies.
As a result, low-end hardware suitable for prototyping is available for less than a thousand US dollars.
Possible shopping list:
- USRP B200-mini ($500)
  - Up to 50 MHz bandwidth
- Custom 20 dBm PA/LNA/switch ($300)
  - Bands 38, 42/43, n38/n77-78
- Low-end PC (~$90)
  - GbE fronthaul with PoE+
The telco industry is truly embracing community development models and open source technologies to make the new generation of mobile networks a reality. At Red Hat, we work every day to make our platforms, such as OpenShift, suitable for new telco use cases, and 5G is clearly a very powerful one. 5G is designed to bring high throughput and low latency to enterprises and consumers alike, enabling future use cases such as IoT, autonomous cars, and many other applications deployed at the edge of the network. Red Hat plans to keep working with service providers to make sure 5G stays open.