Superfluidity Project: One Network to Rule Them All!

Jun 13, 2018 | Hybrid Cloud

The Superfluidity Project was a 33-month European H2020 research project (July 2015–April 2018) aimed at achieving superfluidity on the Internet: the capability to instantiate services on the fly, run them anywhere in the network (core, aggregation, edge), and shift them transparently to different locations. The project focused especially on 5G networks and tried to go one step further in the virtualization and orchestration of different network elements, including radio and network processing components such as BBUs, EPCs, P-GW, S-GW, PCRF, MME, load balancers, SDN controllers, and others.

For more information, you can visit the official project website.

Integration Work

One of the main achievements of the project was the design of an orchestration framework composed of many different open source projects, enabling the management of both VMs and containers, regardless of whether the containers run on bare metal or nested inside VMs. Among others, the framework included the integration of the following components (see figure below):

  • Virtualized Infrastructure Manager (VIM) level: OpenStack, Kubernetes, and OpenShift, all of them using Kuryr to provide L2/L3 network connectivity between VMs and containers
  • VNF Manager (VNFM) level: Heat templates, Mistral, Kubernetes/OpenShift templates, and Ansible playbooks (a minimal sketch of this layer follows the list)
  • NFV Orchestrator (NFVO) level: OSM and ManageIQ
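
To make the VNFM layer more concrete, here is a minimal, illustrative sketch of deploying a containerized network function through the Kubernetes Python client, the kind of Kubernetes/OpenShift template that the NFVO level (OSM or ManageIQ) would trigger. The function name and container image are assumptions for the example, not the project's actual artifacts.

```python
# Illustrative only: deploy a hypothetical containerized BBU ("vbbu") as a
# Kubernetes Deployment, the container-side counterpart of a Heat template.
from kubernetes import client, config

config.load_kube_config()          # assumes a reachable kubeconfig
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="vbbu"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "vbbu"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "vbbu"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="vbbu", image="example/vbbu:latest"),
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```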

One of the main problems addressed by the above framework was enabling container orchestration at any point of the network together with virtual machines (VMs). This orchestration was needed by the different partners, as some of the applications and components ran on VMs and others on containers, depending on their performance, isolation, or networking needs, among other factors. The problem with having both VMs and containers together is not just how to create the computational resources, but also how to connect them to one another and to the users; in other words, networking. That was one of the main problems we addressed by extending the OpenStack Kuryr project: enabling nested containers (running on top of OpenStack VMs) without double encapsulation.
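
As a rough illustration of how nested containers can avoid double encapsulation, the sketch below uses Neutron trunk ports through openstacksdk: the VM owns a trunk parent port, and each nested container gets a VLAN-tagged subport, so container traffic rides the VM's port instead of a second overlay. The cloud entry, network name, and VLAN ID are assumptions for the example, not the project's actual configuration.

```python
# A minimal sketch (not the project's code) of the trunk-port mechanism that
# lets containers nested in a VM reuse the VM's port via VLAN tags.
import openstack

conn = openstack.connect(cloud="mycloud")        # assumed clouds.yaml entry
network = conn.network.find_network("vm-net")    # assumed tenant network

# Parent port: the port the VM itself boots with, turned into a trunk.
parent = conn.network.create_port(network_id=network.id, name="vm-parent-port")
trunk = conn.network.create_trunk(name="vm-trunk", port_id=parent.id)

# Subport: handed to a container running inside that VM; its traffic is
# distinguished by a VLAN tag on the parent port, not by a nested tunnel.
child = conn.network.create_port(network_id=network.id, name="container-port")
conn.network.add_trunk_subports(
    trunk,
    [{"port_id": child.id, "segmentation_type": "vlan", "segmentation_id": 101}],
)
```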

Another important problem we addressed with Kuryr was the boot time of containers connected to OpenStack Neutron networks. Note that spending one or two seconds creating and configuring Neutron ports is negligible for a VM's boot time, but it has a great impact on a container's boot time. Thus, a new feature named ports pool was added to pre-create Neutron ports and have them ready before the containers boot, enabling order-of-magnitude faster boot times, especially at scale, as well as reducing the load on the Neutron server during container creation spikes.
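
The idea behind the ports pool can be sketched as a toy example (not Kuryr's actual implementation): ports are created in batches ahead of time, and a newly created container simply pops a ready port instead of waiting for the Neutron round-trip. The pool size, cloud entry, and network name are assumptions.

```python
# Toy version of the ports pool idea: pre-create Neutron ports so containers
# do not pay the port-creation latency at boot time.
import openstack

conn = openstack.connect(cloud="mycloud")        # assumed clouds.yaml entry
network = conn.network.find_network("pod-net")   # assumed pod network

BATCH = 10     # how many ports to pre-create when the pool runs low (assumed)
pool = []

def refill_pool():
    """Slow path, done ahead of time: the Neutron round-trips happen here."""
    for _ in range(BATCH):
        pool.append(conn.network.create_port(network_id=network.id,
                                             name="pool-port"))

def port_for_new_container():
    """Fast path at container creation: reuse a pre-created port."""
    if not pool:
        refill_pool()
    return pool.pop()

refill_pool()
print("container gets port", port_for_new_container().id)
```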

Demonstrators

We prepared a final demonstrator showing different parts of the integration work. Specifically, we created a set of scenes presenting different functionalities:

We had a distributed cloud (Edge + Core) managed from a central ManageIQ appliance. Notice that we moved the Edge servers to the review meeting room!

During the demo, from ManageIQ we deployed a new mobile network by instantiating the CRAN (containerized) components at the Edge (RRH, BBU, and EPC), then the MEC components (MEO at the Core, TOF at the Edge), and finally a video streaming application at the Core. Once everything was deployed, we verified that a mobile device could connect to the newly created network and watch the videos being streamed from the Core cloud, and that it did not lose connectivity when components of the video streaming application were later moved from the Core to the Edge.
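
For illustration only, a deployment flow like this could be driven with plain REST calls against the ManageIQ appliance. The endpoint paths, catalog and template IDs, credentials, and payload shape below are assumptions made for the sketch, not the exact API calls used in the project.

```python
# Hypothetical sketch of ordering the demo's service bundles (CRAN at the Edge,
# MEC components, video streaming at the Core) through a ManageIQ-style REST API.
import requests

MIQ = "https://manageiq.example.com/api"   # assumed appliance URL
AUTH = ("admin", "password")               # assumed credentials

def order(catalog_id, template_id):
    """Order one catalog item, e.g. the containerized BBU/EPC bundle."""
    resp = requests.post(
        f"{MIQ}/service_catalogs/{catalog_id}/service_templates",
        json={"action": "order",
              "resources": [{"href": f"{MIQ}/service_templates/{template_id}"}]},
        auth=AUTH,
        verify=False,   # demo appliance with a self-signed certificate
    )
    resp.raise_for_status()
    return resp.json()

# Same order as in the demo: RAN/EPC at the Edge, then MEC, then the video app.
for catalog_id, template_id in [("1", "10"), ("1", "11"), ("2", "12")]:
    print(order(catalog_id, template_id))
```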

The live demo could not be recorded, but here are some pre-recorded videos of parts of it:

Review and Lessons Learned

The project was rated with the highest score: ‘excellent’. Besides the technical work and the upstream contributions, the project also had other impacts, such as 30 talks and keynote presentations, more than 50 scientific research publications, nine organized events, and contributions to standardization bodies such as ETSI NFV ISG, ETSI MEC ISG, OASIS TOSCA, and ISO MPEG, to name a few.

In addition, part of the work was demonstrated in a keynote at DevConf.CZ 2018 in Brno, as well as presented at the OpenStack Summit (Vancouver 2018).

To sum up, we accomplished great things and it was a positive experience. It was a bit chaotic to synchronize among all the partners (this was a really big project with 18 partners!), and the reporting overhead was a bit too high; sometimes it felt like you needed to spend more time filling in reports than doing the actual work (the fun part!).

Overall, it was a nice experience to learn about new problems and to figure out how to fix them with already existing technologies, while adding the needed modifications and extensions that in turn make those technologies better.