The Mass Open Cloud (MOC) is an open cloud exchange that provides compute resources to university researchers. The virtualization infrastructure is built on Red Hat OpenStack Platform, using Foreman for provisioning and Ceph for distributed storage. But the MOC has also developed its own tools to make bare metal computing available. We talked to Naved Ansari, one of the MOC developers, about some of these developments.
In a typical cloud computing environment, users are provided with a virtual machine running on the same physical machine as other virtual machines. This maximizes utilization of compute resources by keeping fewer machines idle. Virtual machines work well for a lot of workloads, but occasionally people need access to bare metal without a virtualization layer.
A researcher might want to use bare metal instead of a virtual machine for a number of reasons. One reason is simply to get more computational power. Virtual machines are fast enough for many workloads, but they share CPU time and other resources with the host operating system and hypervisor, and potentially with other virtual machines. For extremely computationally intensive workloads, even a 5% performance improvement can be worthwhile.
Another use case is when researchers are doing virtualization work themselves. Running virtualization inside a virtual machine (nested virtualization) can be tricky and may not deliver the same performance. Finally, some researchers need access to GPUs (graphics processing units) or field-programmable gate arrays (FPGAs). There have been impressive improvements to GPU virtualization, but some people may still prefer to access these resources on bare metal.
Hardware Isolation Layer
One of the technologies the MOC has developed is the Hardware Isolation Layer, or HIL. HIL does logical isolation of bare metal resources, assigning computers to individual users. It communicates with network switches to isolate allocated machines on the network, and it can control any machine that supports IPMI (Intelligent Platform Management Interface). HIL provides an API for users to perform out-of-band operations on their leased machines. This way, users can still fully control machines without holding credentials that would otherwise allow them to circumvent the isolation layer.
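To make the idea concrete, here is a toy sketch of that allocation model: nodes belong to at most one project, and out-of-band operations are only permitted on nodes the requesting project owns. The class and method names are illustrative only; HIL's real implementation is a service with switch drivers and IPMI proxying, not this in-memory model.

```python
class IsolationLayer:
    """Toy model of HIL-style logical isolation (illustrative, not HIL's API)."""

    def __init__(self, nodes):
        # All nodes start in the free pool, owned by no project.
        self.owner = {node: None for node in nodes}

    def allocate(self, node, project):
        # Hand a free node to a project; refuse if someone else holds it.
        if self.owner[node] is not None:
            raise ValueError(f"{node} is already allocated")
        self.owner[node] = project

    def release(self, node, project):
        # Return a node to the free pool; only the owner may do this.
        self._check(node, project)
        self.owner[node] = None

    def power_cycle(self, node, project):
        # Stand-in for an out-of-band IPMI power operation. HIL proxies
        # such operations so users never hold raw IPMI credentials.
        self._check(node, project)
        return f"power-cycled {node}"

    def _check(self, node, project):
        if self.owner[node] != project:
            raise PermissionError(f"{project} does not own {node}")
```

In this model, a project can power-cycle its own nodes through the layer, while an attempt by any other project raises a `PermissionError` — the same effect HIL achieves by mediating all out-of-band access.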
HIL doesn’t install an operating system or do any provisioning, instead allowing users to set up their machines exactly as they prefer. Of course, most users will want to provision their machines with whatever environment is necessary for their research, and the MOC has developed tools for this as well.
Another tool the MOC has developed is Bare Metal Imaging, or BMI. Using HIL and BMI, users can mount and boot from remote iSCSI drives. Not only does this allow users to provision a single machine quickly, it also allows for easy replication. Users can set up one machine, take a snapshot of the disk image, then boot other machines from that snapshot.
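The snapshot-and-replicate workflow can be sketched as follows. This is a toy in-memory model of the process described above — provision one image, freeze it as a snapshot, then boot further machines from clones of that snapshot — not BMI's actual API; a real backend would use copy-on-write snapshots over Ceph and iSCSI rather than deep copies.

```python
import copy

class ImageStore:
    """Toy model of a BMI-style snapshot/clone workflow (names are illustrative)."""

    def __init__(self):
        self.images = {}     # image name -> disk contents (path -> data)
        self.snapshots = {}  # snapshot name -> frozen copy of an image

    def create(self, name):
        self.images[name] = {}

    def write(self, name, path, data):
        self.images[name][path] = data

    def snapshot(self, image, snap_name):
        # Freeze the current state of the image. A real backend would take
        # a copy-on-write snapshot instead of a full deep copy.
        self.snapshots[snap_name] = copy.deepcopy(self.images[image])

    def clone(self, snap_name, new_image):
        # A new machine boots from a clone of the snapshot; its writes
        # stay private to the clone and never touch the snapshot.
        self.images[new_image] = copy.deepcopy(self.snapshots[snap_name])
```

For example, a user could configure a `golden` image once, snapshot it, and clone that snapshot for each additional machine; later writes on one clone do not appear on the others.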
This provides the flexibility and deployment speed of a typical virtual machine environment, but with all the advantages of bare metal computing. Another great advantage of remote-mounted disks is that even when the lease on a machine expires, the user's work is preserved on the remote disk. They can resume exactly where they left off, without having to set everything up again each time.
HIL and BMI are exciting new projects that explore different ways that compute resources can be made available to a large number of users. Future development may involve tighter integration with OpenStack, which the MOC uses for its virtualized resources. OpenStack does have its own bare metal service, but it works differently from HIL. Allowing these tools to work together could be beneficial. Whatever the future holds, the MOC is showing how innovations in cloud computing can advance academic research.