This post by the Red Hat Office of the CTO seeks to expand on previous work and further explore Crossplane as a Kubernetes Operator for provisioning, managing, configuring, and consuming cloud services. These services can then, in turn, be used to create and deploy cloud-native applications.
In this post, we will discuss what an enterprise implementation of Crossplane could look like for infrastructure teams and developers. We will then create multiple collections of cloud infrastructure that abstract away the provisioning and configuration of managed resources. Finally, we will create an instance of Quay that consumes the collection of abstracted AWS services and infrastructure.
Note that this implementation is not currently supported; we are sharing it because we know readers want to learn about ideas at an early stage.
Continue reading “Using the Crossplane Operator to manage and provision Cloud Native Services”
Communication between distributed software components in a cloud-native application is an important and challenging aspect of cloud-native development. This post introduces a solution to that problem using Virtual Application Networks (VANs). A VAN can be set up by a developer and used to connect the components of an application that are deployed in different public, private, and edge cloud environments.
Cloud-native development is about writing software in such a way that it can be deployed easily, flexibly, and automatically into the hybrid-cloud ecosystem to take advantage of the scale of the cloud. A big part of taking advantage of cloud scale is the ability to deploy components of a distributed system in different locations.
Continue reading “Cloud-native software development with Virtual Application Networks”
In this post, we introduce the basics of reading Apache Spark DataFrames from, and writing them to, an SQL database using Apache Spark’s JDBC API.
Apache Spark’s Structured Streaming data model is a framework for federating data from heterogeneous sources. Structured Streaming unifies columnar data from differing underlying formats and even completely different modalities – for example streaming data and data at rest – under Spark’s DataFrame API.
Continue reading “Data integration in the hybrid cloud with Apache Spark and Open Data Hub”