Using the Crossplane Operator to manage and provision Cloud Native Services

This post by the Red Hat Office of the CTO seeks to expand on previous work and further explore Crossplane as a Kubernetes Operator for provisioning, managing, configuring, and consuming cloud services. These services can, in turn, be used to create and deploy cloud-native applications.

In this post, we will discuss what an enterprise implementation of Crossplane could look like for infrastructure teams and developers. We will then create multiple collections of cloud infrastructure that abstract away the provisioning and configuration of managed resources. Finally, we will create an instance of Quay that consumes this collection of abstracted AWS services and infrastructure.
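
For a concrete sense of how such an abstraction is consumed, here is a minimal sketch (using the Python Kubernetes client) of a developer requesting an abstracted service by creating a claim. The API group, claim kind, and spec fields are assumptions made for illustration, not the actual resources defined in the post.

```python
# Hypothetical sketch: requesting an abstracted service by creating a claim
# for a Crossplane composite resource. The group, version, kind, and spec
# fields are illustrative assumptions, not the definitions from the post.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in a pod
api = client.CustomObjectsApi()

claim = {
    "apiVersion": "example.org/v1alpha1",       # assumed API group/version
    "kind": "PostgreSQLInstance",               # assumed claim kind
    "metadata": {"name": "quay-db"},
    "spec": {
        "parameters": {"storageGB": 20},        # assumed abstraction parameters
        "compositionSelector": {
            "matchLabels": {"provider": "aws"}  # select the AWS collection
        },
    },
}

api.create_namespaced_custom_object(
    group="example.org",
    version="v1alpha1",
    namespace="default",
    plural="postgresqlinstances",
    body=claim,
)
```

In Crossplane, a Composition maintained by the infrastructure team maps a claim like this onto the underlying managed resources, which is what keeps the provisioning details abstracted away from the developer.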

Note that this implementation is not currently supported, but we know people want to read about ideas at earlier stages.

Continue reading “Using the Crossplane Operator to manage and provision Cloud Native Services”

Developments in Kubernetes object storage support

Object storage is fast becoming a solution of choice for storing massive amounts of unstructured data.

The popularity of object storage is due in part to how efficiently it can scale. This in particular sets it apart from file and block storage, as users can quickly expand their storage footprint with much less overhead. Testing has shown that Ceph Object can ingest up to one billion objects, spread across ten thousand buckets, “with zero operational or data consistency challenges.” The stability, scalability, and sheer capacity of object storage have made it the ideal solution for technologies that can generate massive amounts of data at a time.
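
To give a concrete sense of the access model, the sketch below writes and reads an object against an S3-compatible endpoint such as the one the Ceph Object Gateway exposes. The endpoint, credentials, and bucket name are placeholders, not values from any referenced test.

```python
# Minimal sketch of object storage access over an S3-compatible API
# (for example, a Ceph Object Gateway). Endpoint, credentials, and
# names below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",  # placeholder RGW endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="sensor/0001.json", Body=b'{"reading": 42}')
obj = s3.get_object(Bucket="demo-bucket", Key="sensor/0001.json")
print(obj["Body"].read())
```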

Continue reading “Developments in Kubernetes object storage support”

Quantum on OpenShift – part one, an introduction to quantum computing

Many people are talking about the use and purpose of quantum computing of late, so we wanted to take the opportunity to talk about what Red Hat is doing in this space. This first post gives an overview of a few of Red Hat’s quantum computing activities, beginning with some background.

The Emerging Technology team in Red Hat’s Office of the CTO has formulated a general goal: defining how the classical and quantum spaces can be connected. Broadly speaking, we aim to use the OpenShift Container Platform to run and manage both classical and quantum applications, in essence running hybrid workloads in an open hybrid cloud.

Continue reading “Quantum on OpenShift – part one, an introduction to quantum computing”

Cloud-native software development with Virtual Application Networks

Communication between the distributed software components of a cloud-native application is an important and challenging aspect of cloud-native development. This post introduces one approach to that challenge: Virtual Application Networks (VANs). A developer can set up a VAN and use it to connect the components of an application deployed across different public, private, and edge cloud environments.

Cloud-native development is about writing software in such a way that it can be deployed easily, flexibly, and automatically into the hybrid-cloud ecosystem to take advantage of the scale of the cloud. A big part of that is the ability to deploy the components of a distributed system in different locations.
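
As a rough illustration of what this means for application code (a sketch, not an excerpt from the post): with a VAN in place, a component addresses its peers as ordinary local services even when they run in another cluster or cloud. The service name, port, and path below are placeholders.

```python
# The "orders" service might be deployed in a different cluster, cloud, or
# edge site; a Virtual Application Network exposes it under an ordinary
# local service address, so the calling code does not change.
import requests

response = requests.get("http://orders:8080/api/recent", timeout=5)
print(response.json())
```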

Continue reading “Cloud-native software development with Virtual Application Networks”

Managing application and data portability at scale with Rook-Ceph

One of the key requirements for Kubernetes in multi-cluster environments is the ability to migrate an application, with all of its dependencies and resources, from one cluster to another. Application portability gives application owners and administrators the ability to better manage applications for common needs such as scaling out, high availability, or simply backing up applications for disaster recovery. This post presents one solution for enabling storage and data mobility in multi-cluster and hybrid cloud environments using Ceph and Rook.

Containerization and Container Native Storage have made it easier for developers to run applications and get the storage they need, but as this space evolves and matures, it is becoming increasingly important to be able to move your applications and data around, from cluster to cluster and cloud to cloud.
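
As a sketch of the developer-facing side, the snippet below requests Ceph-backed block storage through a PersistentVolumeClaim with the Python Kubernetes client. The namespace and the StorageClass name (“rook-ceph-block”) are common example values assumed here, not necessarily what the post uses.

```python
# Sketch: requesting Ceph-backed block storage via a PersistentVolumeClaim.
# The StorageClass name is an assumed example value.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "rook-ceph-block",  # assumed Rook-Ceph StorageClass
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```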

Continue reading “Managing application and data portability at scale with Rook-Ceph”

Kiali: An observability platform for Istio

Istio exists to make life easier for application developers working with Kubernetes. But what about making Istio easier? Well, that’s Kiali’s job. Read on to learn more about making Istio even more pleasant to use.

Deploying and managing microservice applications is hard. When you break down an application into components, you add complexity in how those components communicate with each other. Getting an alert when something goes wrong, and figuring out how to fix it, is a challenge involving networking, storage, and potentially dozens of different compute nodes.

Continue reading “Kiali: An observability platform for Istio”

Scaling workload storage requirements across clusters

A number of multi-cloud orchestrators have promised to simplify deploying hundreds or thousands of high-availability services. But that promise comes with massive infrastructure requirements: how could we possibly manage the storage needs of a thousand stateful processes? In this post, we’ll examine how we can leverage these orchestrators to address our dynamic storage requirements.

Currently in Kubernetes, there are two approaches to how a control plane can scale resources across multiple clusters. These are commonly called the Push and Pull models, after the way in which configurations are ingested by a managed cluster. Despite being antonyms in name, the two models are not mutually exclusive and may be deployed together to target separate problem spaces in a managed multi-cluster environment.
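
As a conceptual sketch of the difference (not code from the post): in the pull model, an agent inside each managed cluster polls the hub for desired state and reconciles it locally, while in the push model the hub connects out to every managed cluster’s API server. The hub URL and the manifest handling below are placeholders.

```python
# Conceptual sketch of a pull-model agent. Everything here is a placeholder:
# a real agent would authenticate, create-or-update resources through the
# local API server, and report status back to the hub.
import time
import requests

HUB_URL = "https://hub.example.com/clusters/cluster-a/desired-state"  # placeholder

def apply_locally(manifest: dict) -> None:
    # Stand-in for applying a manifest to the local cluster,
    # e.g. with the Kubernetes Python client.
    print(f"reconciling {manifest['kind']}/{manifest['metadata']['name']}")

while True:
    desired = requests.get(HUB_URL, timeout=10).json()  # list of desired manifests
    for manifest in desired:
        apply_locally(manifest)
    time.sleep(30)  # periodic re-sync; the managed cluster initiates every connection
```

Because the managed cluster initiates every connection in the pull model, it tends to suit clusters behind firewalls or NAT, while the push model keeps all of the logic on the hub; as noted above, the two can be combined.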

Continue reading “Scaling workload storage requirements across clusters”

Red Hat and NVIDIA bring scalable, efficient edge computing to smart cities

Teams from Red Hat and NVIDIA have collaborated on a scalable hybrid cloud application that could revolutionize smart city initiatives such as traffic-flow monitoring and transportation management around the world. Together, the two companies are creating solutions that make cities smarter and more efficient by processing sensor data in real time to provide insights into traffic congestion, pedestrian flow, and infrastructure maintenance.

Running on top of the NVIDIA EGX platform with the NVIDIA GPU Operator, the application is built with NVIDIA’s Metropolis application framework for IoT, which brings together capabilities for real-time image processing; the NVIDIA DeepStream SDK is used to extract metadata from live video streams at the edge. The application then forwards the relevant metadata to the cloud for deeper analytical processing and presentation in an information dashboard.
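
To give a feel for the division of labor (a generic sketch only, not the Metropolis or DeepStream API): the heavy video analytics stay at the edge, and only compact metadata travels to the cloud. The endpoint and field names below are invented placeholders.

```python
# Generic edge-to-cloud handoff: forward compact detection metadata, not raw
# video, to a cloud analytics endpoint. Names and fields are placeholders.
import requests

CLOUD_ENDPOINT = "https://analytics.example.com/ingest"  # placeholder

detection = {
    "camera_id": "intersection-12",
    "timestamp": "2020-09-01T14:03:22Z",
    "objects": [{"type": "vehicle", "confidence": 0.94}],
}

requests.post(CLOUD_ENDPOINT, json=detection, timeout=5)
```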

Continue reading “Red Hat and NVIDIA bring scalable, efficient edge computing to smart cities”

Passing Go: polyglot Kubernetes Operators

Operators within Kubernetes are useful tools, designed to extend the container orchestration platform with additional resources. More directly, an Operator, sometimes referred to as a custom controller, is a method of packaging, deploying, and managing a Kubernetes application.

As useful as Operators are, they have had one limitation: originally, they all had to be written in the Go programming language. Thanks to the Operator SDK, you no longer need to develop your Operators in Go. The Operator SDK has options for Ansible and Helm that may be better suited to the way you or your team work. But that can still be limiting for development teams trying to build an Operator if they don’t happen to be skilled in Helm or Ansible.
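
As one illustration of the polyglot idea (not necessarily the approach the post takes), the sketch below uses Kopf, an existing Python framework for writing Operators. The custom resource group, version, and kind are placeholders.

```python
# A minimal Operator handler written in Python with the Kopf framework.
# The resource group, version, plural, and spec fields are placeholders.
import kopf

@kopf.on.create("example.org", "v1alpha1", "webapps")
def create_fn(spec, name, namespace, logger, **kwargs):
    # React to a newly created WebApp custom resource; a real Operator would
    # create the Deployments, Services, etc. that the resource describes.
    replicas = spec.get("replicas", 1)
    logger.info(f"would deploy {name} in {namespace} with {replicas} replica(s)")
    return {"phase": "Provisioning"}  # recorded under the resource's status
```

Such a handler file is typically started with `kopf run`, and the framework takes care of watching the custom resource and invoking the handlers.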

Continue reading “Passing Go: polyglot Kubernetes Operators”

Managing chaos in a containerized environment

Quick, name some weird stuff that’s happened to your production machines.

Accidentally dropping a production database table? Rolling out a patch that enabled any user to log in with any password? Disabling a load balancer? Using a dictionary to physically keep keyboard keys depressed so “terminals [could] repeatedly [hit] the enter key in order for the logins and print jobs of about 40,000 people to work”?

It’s happened to Alex Corvin, a senior engineer at Red Hat. Well, not that last one. But Corvin has been around long enough in his career to have met Mr. Murphy and his Law: if it can go wrong, it will.

Continue reading “Managing chaos in a containerized environment”