Developments in Kubernetes object storage support

Object storage is fast becoming a solution of choice for storing massive amounts of unstructured data.

The popularity of object storage is due in part to how efficiently it can scale. This in particular sets it apart from file and block storage, as users can quickly expand their storage footprint with far less overhead. Testing has shown that Ceph Object can ingest up to one billion objects, spread across ten thousand buckets, “with zero operational or data consistency challenges.” The stability, scalability, and sheer capacity of object storage have made it the ideal solution for technologies that generate massive amounts of data at a time.
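To make that model concrete: Ceph Object exposes an S3-compatible API, so a workload grows its footprint simply by writing more objects and buckets rather than resizing filesystems or volumes. A minimal sketch using the boto3 library, with a placeholder gateway endpoint and credentials:

```python
import boto3  # S3-compatible client; Ceph Object Gateway (RGW) speaks the S3 API

# Placeholder endpoint and credentials for a Ceph Object Gateway
s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="sensor-data")
# Objects live in a flat namespace addressed by key, so capacity grows by
# adding objects and buckets instead of expanding a filesystem or block device.
s3.put_object(
    Bucket="sensor-data",
    Key="2021/01/reading-0001.json",
    Body=b'{"temperature": 21.4}',
)
```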

Continue reading “Developments in Kubernetes object storage support”

Quantum on OpenShift – part one, an introduction to quantum computing

Many people have been talking about the use and purpose of quantum computing of late, so we want to take this opportunity to describe what Red Hat is doing in the space. This first post gives an overview of a few of Red Hat’s activities with quantum computing, beginning with some background.

The Emerging Technology team in Red Hat’s Office of the CTO has formulated a general goal: to define how the classical and quantum spaces can be connected. Broadly speaking, we aim to use the OpenShift Container Platform to run and manage both classical and quantum applications, in essence running hybrid workloads in an open hybrid cloud.
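As a taste of what the quantum half of such a hybrid workload might look like, here is a minimal sketch using Qiskit (assuming the qiskit and qiskit-aer packages are installed; the local simulator stands in for real quantum hardware) that prepares and measures a Bell state:

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator  # local simulator standing in for quantum hardware

# Two-qubit Bell state: put qubit 0 into superposition, then entangle it with qubit 1
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

result = AerSimulator().run(qc, shots=1024).result()
print(result.get_counts())  # expect counts concentrated on '00' and '11'
```

A classical service running alongside this on OpenShift could submit such circuits and post-process the results, which is the sort of hybrid pattern described above.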

Continue reading “Quantum on OpenShift – part one, an introduction to quantum computing”

Managing application consistency and state during Disaster Recovery for Ceph RBD mirroring

This is the third post in our series investigating how Rook-Ceph and RBD mirroring can best be used to handle Disaster Recovery scenarios. The first post in the series, “Managing application and data portability at scale with Rook-Ceph,” laid the groundwork for how Rook-Ceph and RBD mirroring can enable application portability. In our second post, “Managing Disaster Recovery with GitOps and Ceph RBD Mirroring,” we covered some key features of Rook-Ceph RBD mirroring and presented a solution to help manage and automate failover using a GitOps model.

In this post we explore additional tools and concepts that help synchronize application consistency and state across multiple clusters, reducing manual steps and providing an automated approach to recovering and maintaining the application on failover.
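As one illustration of the kind of automation involved, a failover workflow might first confirm that replication is healthy before promoting the secondary copy. The sketch below uses the Kubernetes Python client to read a VolumeReplication custom resource; the group, version, and resource names here are illustrative assumptions (modeled on csi-addons-style replication CRs), not a prescription from the post:

```python
from kubernetes import client, config

# Assumes a kubeconfig context for the secondary cluster; the CR group,
# version, and names below are illustrative, not authoritative.
config.load_kube_config(context="secondary")
api = client.CustomObjectsApi()

vr = api.get_namespaced_custom_object(
    group="replication.storage.openshift.io",  # hypothetical group/version
    version="v1alpha1",
    namespace="myapp",
    plural="volumereplications",
    name="myapp-data",
)
state = vr.get("status", {}).get("state")
print(f"replication state: {state}")
if state == "Replicating":
    print("image is in sync; safe to plan a promotion on failover")
```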

Continue reading “Managing application consistency and state during Disaster Recovery for Ceph RBD mirroring”

Cloud-native software development with Virtual Application Networks

Communication between the distributed components of a cloud-native application is an important and challenging aspect of cloud-native development. This post introduces a solution to that problem: Virtual Application Networks (VANs). A VAN can be set up by a developer and used to connect the components of an application deployed across different public, private, and edge cloud environments.

Cloud-native development is about writing software in such a way that it can be deployed easily, flexibly, and automatically into the hybrid-cloud ecosystem to take advantage of the scale of the cloud. A big part of exploiting that scale is the ability to deploy the components of a distributed system in different locations.

Continue reading “Cloud-native software development with Virtual Application Networks”

Enarx – project maturity update

It’s been a busy time since we announced Enarx to the world in August 2019, along with our vision for running workloads more securely. At the time, we had produced a proof-of-concept demo: creating and attesting a Trusted Execution Environment (TEE) instance using AMD’s Secure Encrypted Virtualization (SEV) capability, encrypting a tiny workload (literally a few instructions of handcrafted assembly language), and sending it to be executed. Beyond that, we had lots of ideas, some thoughts about design, and an ambition to extend the work to other platforms. Since then, a lot has happened: from kicking off the Confidential Computing Consortium to demos with AMD’s SEV and Intel’s Software Guard Extensions (SGX), and from contributor improvements to the recent efforts to provide a Wasm module for multiple silicon vendor architectures.

Continue reading “Enarx – project maturity update”

Data integration in the hybrid cloud with Apache Spark and Open Data Hub

In this post we introduce the basics of reading Apache Spark DataFrames from, and writing them to, an SQL database using Apache Spark’s JDBC API.
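As a preview, the JDBC API in PySpark looks roughly like this; the connection details below are placeholders, and the matching JDBC driver jar must be available to Spark (for example via spark.jars.packages):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc-demo").getOrCreate()

# Placeholder connection details for an example PostgreSQL database
url = "jdbc:postgresql://db.example.com:5432/sales"
props = {"user": "spark", "password": "********", "driver": "org.postgresql.Driver"}

# Read a table into a DataFrame, transform it, and write the result back
orders = spark.read.jdbc(url=url, table="public.orders", properties=props)
large = orders.filter(orders.total > 100)
large.write.jdbc(url=url, table="public.large_orders", mode="overwrite", properties=props)
```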

Apache Spark’s Structured Streaming is a framework for federating data from heterogeneous sources. Structured Streaming unifies columnar data from differing underlying formats, and even completely different modalities (for example, streaming data and data at rest), under Spark’s DataFrame API.
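To illustrate that unification, the same transformation can be applied unchanged to data at rest and to a stream. In this sketch the storage paths are hypothetical, both sources are assumed to expose a numeric value column, and the rate source is Spark’s built-in streaming test source:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("streaming-demo").getOrCreate()

# One transformation, defined once, applied to both modalities
def over_threshold(df):
    return df.filter(df.value > 50)

batch_df = spark.read.parquet("s3a://lake/readings/")  # data at rest (hypothetical path)
stream_df = spark.readStream.format("rate").load()     # built-in test stream with a 'value' column

over_threshold(batch_df).write.mode("overwrite").parquet("s3a://lake/filtered/")
query = over_threshold(stream_df).writeStream.format("console").start()
query.awaitTermination(30)  # run the streaming query briefly for the demo
```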

Continue reading “Data integration in the hybrid cloud with Apache Spark and Open Data Hub”

Managing disaster recovery with GitOps and Ceph RBD mirroring

In our previous post, “Managing application and data portability at scale with Rook-Ceph,” we talked about some key features of Rook-Ceph mirroring and laid the groundwork for future use case solutions and automation that this technology could enable. This post describes recovering from a complete physical site failure, using Ceph RBD mirroring for data consistency, coupled with a GitOps model for managing our cluster and application configurations, along with an external load balancer, all working together to greatly minimize application downtime.

This is done by enabling a Disaster Recovery (DR) scenario in which the primary site can fail over to the secondary site with minimal impact on Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO).
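For illustration only, the shape of such a failover controller can be sketched in a few lines of Python. The health endpoint and load-balancer API below are hypothetical, and in a strict GitOps model the failover action would be a Git commit that drives the change rather than a direct API call:

```python
import time
import requests  # assumes the requests library is available

PRIMARY_HEALTH = "https://primary.example.com/healthz"  # hypothetical health endpoint
FAILOVER_HOOK = "https://lb.example.com/api/failover"   # hypothetical load-balancer API

def primary_is_healthy(retries=3):
    """Declare the primary down only after several consecutive failed probes."""
    for _ in range(retries):
        try:
            if requests.get(PRIMARY_HEALTH, timeout=5).status_code == 200:
                return True
        except requests.RequestException:
            pass
        time.sleep(2)
    return False

while True:
    if not primary_is_healthy():
        # Point the external load balancer at the secondary site; with GitOps,
        # this step could instead be a commit that the cluster reconciles.
        requests.post(FAILOVER_HOOK, json={"active_site": "secondary"}, timeout=5)
        break
    time.sleep(30)
```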

Continue reading “Managing disaster recovery with GitOps and Ceph RBD mirroring”

Deploying a full-service 5G network on OpenShift

During the last KubeCon North America in San Diego, a cross-vendor team of engineers from Red Hat and several other companies rolled a half-rack of servers and a self-made Faraday cage onto the keynote stage and gave a live demo of a full 5G/4G network connected to two additional deployments in Canada and France, all containerized and running on Red Hat OpenShift Container Platform clusters.

This live demo was the culmination of an intense, multi-month community effort supported by Linux Foundation Networking. We had the honor of working on the site located at Eurecom in France, a telecommunications research institute that initiated, and remains the main contributor to, the OpenAirInterface 5G/4G project. In this post we explore how that 5G network was constructed and deployed on the Kubernetes-based open source OpenShift platform.

Continue reading “Deploying a full-service 5G network on OpenShift”

Using machine learning and analytics to help developers

It was the talk title that caught my eye: “Developer Insights: ML and Analytics on src/”. I was intrigued. I had a few ideas of how machine learning techniques could be applied to source code, but I was curious to see what the state of the art looks like now. To hear more, I attended the DevConf.cz 2020 session by Christoph Görn and Francesco Murdaca of the AI and ML Center of Excellence at Red Hat.

The first question I had was “where did they come up with the project name Thoth?” My initial guess was that “Thoth” was an ice moon from the Star Wars universe, or maybe a demon from Buffy the Vampire Slayer. It turns out that Thoth is the Ancient Egyptian god of writing, magic, wisdom, and the moon. The Egyptian deity theme runs through the project, with components called Thamos, Kebechet, Amun, and Nepthys, among others.

The set of problems that Thoth aims to solve is an important one. Can we help developers identify the best library to use, by looking at what everyone else is using for a similar job? Can we help identify the source of common performance issues, and suggest speed-ups? Can we create a framework that can enforce compliance, and help minimize risk, as applications grow?

Continue reading “Using machine learning and analytics to help developers”

Size matters: how Fedora approaches minimization

As part of a modern IT environment, Linux distributions can optimize their size to be better suited for container use. Reducing a distribution’s footprint in this way is a process known as minimization. A new tool is being developed that will enable developers and operators to create minimal images appropriately sized for their container use cases.
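While we can’t speak for the new tool’s interface, the question it has to answer can be sketched with DNF’s Python API on a Fedora system: given a candidate package set, how large is the transitive install footprint? This is a minimal sketch, assuming the dnf Python module and configured Fedora repositories are available:

```python
import dnf

# Resolve the full dependency closure of a candidate package set and
# report how much disk it would occupy inside an image.
base = dnf.Base()
base.read_all_repos()
base.fill_sack()

for spec in ["bash", "coreutils", "microdnf"]:  # candidate minimal package set
    base.install(spec)
base.resolve()

pkgs = base.transaction.install_set
total_mib = sum(p.installsize for p in pkgs) / 1024 / 1024
print(f"{len(pkgs)} packages, {total_mib:.1f} MiB installed size")
```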

[Figure: Graphical representation of Fedora repository relationships, many thousands of interconnected repository nodes forming a nebula-like cloud. Image by: Adam Šamalík]

Continue reading “Size matters: how Fedora approaches minimization”