In Part 1 of Now + Next’s closer look at the future of container storage, we examined the beginnings of the storage solution with a look at how hardware trends will affect the way storage and containers will evolve together.
In this installment, Ceph Project Lead Sage Weil continues our conversation, moving “up” the stack to software platforms. Specifically, Sage discusses where container technology is now and where it is going.
Containers All The Way Down
It’s not news that Red Hat is heavily invested in the container space. “Kubernetes and OpenShift are the new distributed operating system of the future,” Sage emphasized, and you can’t get more emphatic than that. Because of that clear path, the Red Hat Storage team has invested a lot of time and resources in making Ceph and Gluster storage work well with these platforms.
This means implementing ReadWriteOnce (a volume mounted read-write by a single node) for block storage and ReadWriteMany (a volume mounted read-write by many nodes) for file system storage, as well as advancing Kubernetes local volumes.
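To sketch how these access modes surface to an application, a Kubernetes PersistentVolumeClaim declares the mode it needs; the claim below requests ReadWriteMany, the mode file-backed storage like GlusterFS or CephFS can satisfy (the claim name and storage class here are illustrative, not from the article):

```yaml
# Illustrative PersistentVolumeClaim requesting shared (ReadWriteMany) access,
# e.g. backed by a GlusterFS or CephFS file system.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data           # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany           # many nodes may mount read-write (file storage)
  resources:
    requests:
      storage: 10Gi
  storageClassName: glusterfs # assumed storage class; cluster-specific
```

A block-backed claim would request ReadWriteOnce instead, since a block device is mounted read-write by a single node at a time.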
The challenges for maintaining persistent container storage can be addressed in one of two ways.
“One is containers consuming our storage, such as Kubernetes on Ceph and Gluster,” Sage explained, “and then the other way is running our storage systems in Kubernetes, using Kubernetes to orchestrate Ceph and Gluster themselves.”
This latter approach essentially makes Kubernetes the new operating system, with software-defined storage running inside it. Today, that is expressed as Container Native Storage (CNS), which is GlusterFS containerized within OpenShift. Looking forward, Red Hat is working with others on the Rook project, a recent addition to the Cloud Native Computing Foundation.
“We’re going with Rook for Ceph, which is an operator pattern, the modern way to automate services in Kubernetes that allows them to be all hands free, so we’re very excited about that,” Sage explained.
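In the operator pattern Sage describes, you declare the cluster you want as a Kubernetes resource and the operator creates and manages the daemons hands-free. A minimal sketch of a Rook cluster declaration might look like the following (field values are assumptions and vary by Rook version and deployment):

```yaml
# Illustrative Rook custom resource: declaring a CephCluster like this lets
# the Rook operator deploy and manage the Ceph daemons automatically.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: my-ceph          # hypothetical cluster name
  namespace: rook-ceph
spec:
  mon:
    count: 3             # three monitors for quorum
  storage:
    useAllNodes: true    # let the operator use storage on every node
    useAllDevices: true  # and every available device on those nodes
```

The point of the pattern is that day-two operations (adding nodes, replacing failed daemons) become the operator’s job rather than an administrator’s.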
“And then the other thing going on is the ceph-csi drivers, which are a work in progress,” he added. Red Hat team members are working with associates at Cisco, and Sage predicted that Red Hat would ship one of the earliest CSI implementations.
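Once a CSI driver such as ceph-csi is installed, Kubernetes typically consumes it through a StorageClass that names the driver as its provisioner. The sketch below assumes the conventional name of the ceph-csi RBD driver and uses illustrative parameter values; exact names and parameters vary by deployment and driver version:

```yaml
# Illustrative StorageClass wiring Kubernetes to a CSI driver for Ceph RBD.
# Provisioner name and parameters are assumptions, not from the article.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-csi
provisioner: rbd.csi.ceph.com  # conventional ceph-csi RBD driver name
parameters:
  clusterID: my-ceph-cluster   # hypothetical Ceph cluster identifier
  pool: kubernetes             # hypothetical RBD pool for volumes
reclaimPolicy: Delete
```

PersistentVolumeClaims that reference this class would then be provisioned dynamically by the CSI driver rather than by an in-tree volume plugin.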
Clearing the Way for Kata Containers
Kata Containers, previously known as Intel’s Clear Containers, add a virtual machine layer to a container-like runtime. The advantage of this approach is that applications get the security of a virtualization layer with the speed of containers.
But, for storage, there are challenges. For instance, Sage explained, working with block storage is easy, because virtual machines are great at presenting any kind of block storage as a virtualized disk. “But passing files through that virtualization barrier is harder. So this is an area we are actively investigating.”
One approach could be to purpose-build some sort of file pass-through for QEMU. “If we do do that,” Sage said, “then the nice thing is that it will solve problems with Kata Containers and have a solution that would also be applicable in OpenStack, KubeVirt, and everywhere else that we use KVM-based virtualization.”
Since Sage first shared this news at Red Hat Summit, he added, a small team of developers has already begun kicking around potential solutions for such a pass-through feature.
It is exciting that work on the specific problem of storage in containers could pay off more broadly for virtualization storage. And the work isn’t stopping there. In Part 3, learn more about the challenges and solutions Red Hat Storage is facing in multi-cloud and hybrid cloud environments.