The TripleO project is transitioning from bare-metal to container-based OpenStack deployments. This transition started almost a year ago and was split into two phases. The first phase targets Docker as the container runtime, whereas the second phase moves these container images to Kubernetes. In this post, we will focus on the second phase of the transition, specifically on how to deploy these services.
The TripleO team has been following a set of requirements for the migration to containers, which include backwards compatibility, service isolation, and upgradability, among other things. These criteria had to complement the existing, more conservative set of requirements, while still allowing room for innovation.
- Building community: The containerization of OpenStack is a great opportunity for the TripleO team to collaborate with other deployment projects, either on the high-level tools or on the tools consumed by these projects.
- Simplicity: Ideally, the chosen tool will enable TripleO to move away from some of the tools that exist in the stack today. The chosen tool mustn’t add more complexity to the stack.
- Flexibility: It must be possible to integrate the tool with TripleO. It should also be possible to consume the tool independently. This will allow the TripleO team to build a better community and also enable users of TripleO to use parts of the TripleO stack independently. It’s important to have a proper separation of concerns in the implementation and consumption of TripleO.
- Ease of adoption: Ideally, this tool should be easy for our users to adopt and should have a relatively gentle learning curve. As a team, we don’t want a tool that is difficult to understand and adopt, or that makes migrations hard for our users.
The following list of technologies does not aim to be exhaustive. It focuses on tools that could meet our existing criteria and that exist in, or are consumed by existing projects in, the OpenStack community. Finally, it’s important for these tools to have some level of maturity that we can build on and help improve.
Helm is a package manager for Kubernetes templates. It allows for defining the Kubernetes templates required to run an application and then substituting the application’s options dynamically. It bundles all the templates in `tgz` packages called charts.
- Using Helm provides a unified way to deploy applications on Kubernetes.
- It provides a way to collaborate with existing upstream efforts in the OpenStack community.
- It enables consumers to create their own charts and share them with the rest of the community.
- Helm doesn’t have support for multi-tenancy.
- The Helm community, although growing, is currently sponsored by a single company.
- It adds one extra, stateful layer (Tiller) to maintain in the TripleO stack.
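To make this more concrete, a chart is just a directory of templated Kubernetes manifests plus a default values file. The sketch below is hypothetical, not an existing chart; the image name and values are illustrative:

```yaml
# values.yaml -- default options; users override these at install time
image:
  repository: tripleoupstream/centos-binary-keystone
  tag: latest
replicas: 1
---
# templates/deployment.yaml -- Helm substitutes the {{ .Values.* }}
# placeholders before sending the manifest to Kubernetes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-keystone
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: keystone
  template:
    metadata:
      labels:
        app: keystone
    spec:
      containers:
        - name: keystone
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Installing with an override would then look like `helm install ./keystone --set image.tag=ocata`; Helm packages the whole directory into a single versioned `tgz` chart.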
There are two projects that have adopted Helm in the OpenStack community.
The openstack-helm project consists of a set of Helm charts to deploy OpenStack. This set of charts supports setting the various configuration options at a per-service level, as well as customizing parts of the install process.
By adopting openstack-helm, the TripleO team would be able to collaborate with other communities in OpenStack and be more consistent with the tools used to deploy it.
Configuration management would be one area of collaboration on this project. The openstack-helm team is currently generating config files using templates, which is not sufficient for complex deployments of OpenStack. We could collaborate with the team on aligning their charts with OpenStack’s goal to use etcd as the system to store configurations or adding support for Ansible as the configuration management tool.
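For illustration, per-service options in openstack-helm are driven by chart values; an operator override could look roughly like the following (the keys are an approximation of the project’s schema, not an exact reference):

```yaml
# glance-overrides.yaml -- passed to the chart at install/upgrade time
conf:
  glance:
    DEFAULT:
      debug: true        # rendered into the service config file by the chart
pod:
  replicas:
    api: 2               # scale the glance-api deployment
```

An override file like this would be applied with something along the lines of `helm install glance -f glance-overrides.yaml`.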
Kolla-kubernetes is an OpenStack project developed by the Kolla team. It uses Helm to package the Kubernetes definitions for each service, just like openstack-helm does. The main difference between the two projects is that kolla-kubernetes aims to manage the entire lifecycle of an OpenStack deployment, which makes its charts more opinionated and harder to integrate with TripleO.
Ansible Roles and Ansible Playbook Bundle
The Ansible Playbook Bundle (APB) is an image format created as part of the Ansible Service Broker (ASB) initiative (ASB is an implementation of the Open Service Broker standard) for managing Ansible playbook execution.
The APBs are container images that bundle Ansible roles and playbooks. These images can be run independently without the use of an Ansible Service Broker and they can be customized like any other container image.
The current format supports four actions: provision, deprovision, bind, and unbind. The last two only work with OpenShift. These actions correspond to the names of the bundled playbooks, which are executed by the container’s entrypoint based on the input parameters.
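As a sketch, an APB pairs a metadata file, which declares the bundle’s plans and parameters, with one playbook per supported action. Everything below (the bundle name, parameters, and role) is hypothetical:

```yaml
# apb.yml -- bundle metadata read by the Ansible Service Broker
name: mariadb-apb
description: Deploys MariaDB on Kubernetes
bindable: false
plans:
  - name: default
    parameters:
      - name: mariadb_root_password
        type: string
        required: true
---
# playbooks/provision.yaml -- run by the image entrypoint when the
# "provision" action is requested; input parameters become variables
- hosts: localhost
  roles:
    - role: mariadb
      root_password: "{{ mariadb_root_password }}"
```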
By adopting APB as a format, we would be adopting a solution that is closer to our plan of converting most of our deployment workflow to Ansible. The cost, however, is huge: we would have to create an APB per service, which also requires writing roles that know how to configure and deploy each image. One additional downside is that, by using a pure Ansible module, we would not get a solution that works transparently on both OpenShift and Kubernetes.
Here’s a small proof of concept of an APB that runs MariaDB and other services.
- It’s pure Ansible.
- It allows for managing dependencies, start order, and more complex deployment scenarios.
- It allows consumers of TripleO to write their own bundles, playbooks, and roles that can also be shared with the rest of the community.
- It will require writing the playbooks for every OpenStack service we support. No progress has been made here yet.
- We will have to build a community around this solution. Proposing it upstream might convince other projects to join us, but we’ll be starting from scratch either way.
- There’s no single Ansible module that lets us target both Kubernetes and OpenShift transparently. That said, using the Kubernetes Ansible module should be enough to support both, since it’s possible to deploy OpenStack on Kubernetes without any OpenShift-specific features. It remains to be researched which OpenShift features, if any, would make the new architecture better (I’m working with the OpenShift team on this).
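For example, a task creating a Kubernetes object from Ansible could look like the sketch below. It is modeled on the `kubernetes` core module available at the time; treat the exact argument names as assumptions, and note the pod spec is elided for brevity:

```yaml
# Create a Deployment through the Kubernetes API server; OpenShift
# exposes the same API, so the task should work against both.
- name: Deploy the keystone service
  kubernetes:
    api_endpoint: 127.0.0.1:8080
    inline_data:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: keystone
      spec:
        replicas: 1   # remainder of the Deployment spec elided
    state: present
```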
Ansible Playbook Bundles and OpenStack Helm
Another option is a combination of two of the strongest options described above. It is possible to create an Ansible playbook bundle that knows how to talk to Helm by using an Ansible Helm module. I’ve created a proof of concept for this that can be found here.
- It allows for creating custom bundles, charts, etc.
- All the benefits mentioned above are available.
- It allows for consuming Ansible while preserving and contributing to the upstream community.
- It introduces two extra layers (APB and Helm) on top of the layers (such as Heat) that we already have. This could be avoided by picking just one of the two solutions.
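Concretely, the bundle’s provision playbook would delegate the Kubernetes templating to Helm. The sketch below uses the Ansible `helm` module that existed at the time (backed by pyhelm); the chart source, release name, and variables are illustrative:

```yaml
# playbooks/provision.yaml -- the APB's provision action installs a
# Helm chart instead of shipping raw Kubernetes templates itself
- hosts: localhost
  tasks:
    - name: Install the mariadb chart through Tiller
      helm:
        host: "{{ tiller_host }}"
        chart:
          name: mariadb
          source:
            type: repo
            location: https://kubernetes-charts.storage.googleapis.com
        name: mariadb-release
        state: installed
```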
The list below goes through some other container-related projects that exist in the OpenStack community. It is presented for the sake of completeness, as none of these projects would actually help with the goal.
The openstack-ansible project uses Ansible to deploy OpenStack on Ubuntu Linux, with the option to run services in LXC containers. This set of playbooks focuses on bare-metal deployments rather than on running pre-built container images on a container orchestration engine (COE).
Kolla-ansible uses Ansible to run Kolla images on Docker. We can’t use kolla-ansible for this task because our target is Kubernetes, but we may be able to take some pieces from it, as well as from kolla-kubernetes, specifically around the configuration steps.
It seems that the best option available is to build a set of Ansible roles (and Ansible playbook bundles, if needed) to deploy OpenStack services on Kubernetes. This is the most promising path for us right now because:
- Ansible provides a better way to control the execution flow of the deployment. If we decided to use Helm, we’d still need another tool to control the deployment order of each service.
- There’s no plan to support Helm as part of our downstream product. If we decide to use Helm, the OpenStack team might have to support it, and I don’t think that is worth doing.
- An Ansible + Helm solution would add an extra layer of complexity to the stack, and there has already been significant effort to remove technologies from TripleO’s stack.
- Ansible is already integrated in TripleO. We have Mistral actions to run playbooks, and we can also execute them from Heat.
- Ansible has a wider adoption compared to Helm.
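In practice, the proposal boils down to a top-level playbook that applies one role per service in an explicit order, which is exactly the execution-flow control Helm alone would not give us. The role names below are hypothetical:

```yaml
# site.yaml -- the play order encodes service dependencies:
# the database comes up first, then keystone, then its dependents
- hosts: localhost
  roles:
    - role: k8s-mariadb
    - role: k8s-keystone
    - role: k8s-glance
    - role: k8s-nova
```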
Unfortunately, this choice is not free. Choosing Ansible means we lose the opportunity to collaborate with other communities in upstream OpenStack (either openstack-helm or kolla-kubernetes), and we’ll lose some of the momentum behind those projects.
On the bright side, we can still build a community around these Ansible roles, and I believe they could see as much adoption as the Helm charts. Furthermore, Ansible has a big and active community of operators and developers, which will help generate interest around shared Ansible roles.