Communication between distributed software components in a cloud-native application is an important and challenging aspect of cloud-native development. This post introduces a solution to that problem using Virtual Application Networks (VANs). A VAN can be set up by a developer and used to connect the components of an application that are deployed in different public, private, and edge cloud environments.
Cloud-native development is about writing software in such a way that it can be deployed easily, flexibly, and automatically into the hybrid-cloud ecosystem to take advantage of the scale of the cloud. A big part of taking advantage of cloud scale is the ability to deploy components of a distributed system in different locations.
Continue reading “Cloud-native software development with Virtual Application Networks”
It’s been a busy time since we announced Enarx to the world in August 2019, along with our vision for running workloads more securely. At the time, we had produced a proof-of-concept demo: creating and attesting a Trusted Execution Environment (TEE) instance using AMD’s Secure Encrypted Virtualization (SEV) capability, encrypting a tiny workload (literally a few instructions of handcrafted assembly language), and sending it to be executed. Beyond that, we had lots of ideas, some thoughts about design, and an ambition to extend the work to other platforms. Since then, a lot has happened: from kicking off the Confidential Computing Consortium to demos with AMD’s SEV and Intel’s Software Guard Extensions (SGX), and from contributor improvements to the recent efforts to provide a Wasm module for multiple silicon vendor architectures.
In this post, we introduce the basics of reading Apache Spark DataFrames from, and writing them to, an SQL database using Apache Spark’s JDBC API.
Apache Spark’s Structured Streaming data model provides a framework for federating data from heterogeneous sources. Structured Streaming unifies columnar data from differing underlying formats, and even completely different modalities (for example, streaming data and data at rest), under Spark’s DataFrame API.
Continue reading “Data integration in the hybrid cloud with Apache Spark and Open Data Hub”
In our previous blog, Managing application and data portability at scale with Rook-Ceph, we talked about some key features of Rook-Ceph mirroring and laid the groundwork for future use case solutions and automation that this technology could enable. This post describes recovering from a complete physical site failure using Ceph RBD mirroring for data consistency, coupled with a GitOps model for managing our cluster and application configurations, along with an external load balancer, all working together to greatly minimize application downtime.
This is done by enabling a Disaster Recovery (DR) scenario in which the primary site can fail over to the secondary site with minimal impact on Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO).
Continue reading “Managing disaster recovery with GitOps and Ceph RBD mirroring”
During the last KubeCon North America in San Diego, a cross-vendor team of engineers from Red Hat and several other companies rolled a half-rack of servers and a self-made Faraday cage onto the keynote stage and gave a live demo of a full 5G/4G network connected to two additional deployments in Canada and France, all containerized and running on Red Hat OpenShift Container Platform clusters.
This live demo was the culmination of an intense, multi-month community effort supported by Linux Foundation Networking, and we had the honor of working on the site located in France at Eurecom, a telecommunications research institute that initiated and remains the main contributor to the OpenAirInterface 5G/4G project. In this post, we explore how that 5G network was constructed and deployed on the Kubernetes-based open source OpenShift platform.
Continue reading “Deploying a full-service 5G network on OpenShift”
It was the talk title that caught my eye: “Developer Insights: ML and Analytics on src/”. I was intrigued. I had a few ideas of how machine learning techniques could be used on source code, but I was curious to see what the state of the art looked like now. I attended the session at DevConf.cz 2020 by Christoph Görn and Francesco Murdaca of the AI and ML Center of Excellence at Red Hat to hear more.
The first question I had was “where did they come up with the project name Thoth?” My initial guess was that “Thoth” was an ice moon from the Star Wars universe, or maybe a demon from Buffy the Vampire Slayer. It turns out that Thoth is the Ancient Egyptian god of writing, magic, wisdom, and the moon. The Egyptian deity theme runs through the project, with components called Thamos, Kebechet, Amun, and Nepthys, among others.
The set of problems that Thoth aims to solve is an important one. Can we help developers identify the best library to use, by looking at what everyone else is using for a similar job? Can we help identify the source of common performance issues, and suggest speed-ups? Can we create a framework that can enforce compliance, and help minimize risk, as applications grow?
Continue reading “Using machine learning and analytics to help developers”
As part of a modern IT environment, Linux distributions can optimize their size to be better suited for container use. Reducing a distribution’s footprint in this way is a process known as minimization. A new tool is in development that will enable developers and operators to create minimal images of the appropriate size for the container use cases they need.
Continue reading “Size matters: how Fedora approaches minimization”
One of the key requirements for Kubernetes in multi-cluster environments is the ability to migrate an application, with all of its dependencies and resources, from one cluster to another. Application portability gives application owners and administrators the ability to better manage applications for common needs such as scaling out, high availability, or simply backing up applications for disaster recovery. This post presents one solution for enabling storage and data mobility in multi-cluster/hybrid cloud environments using Ceph and Rook.
Containerization and Container Native Storage have made it easier for developers to run applications and get the storage they need, but as this space evolves and matures, it is becoming increasingly important to be able to move your application and data around, from cluster to cluster and cloud to cloud.
Continue reading “Managing application and data portability at scale with Rook-Ceph”
Istio exists to make life easier for application developers working with Kubernetes. But what about making Istio easier? Well, that’s Kiali’s job. Read on to learn more about making Istio even more pleasant to use.

Deploying and managing microservice applications is hard. When you break down an application into components, you add complexity in how those components communicate with each other. Getting an alert when something goes wrong, and figuring out how to fix it, is a challenge involving networking, storage, and potentially dozens of different compute nodes.
Continue reading “Kiali: An observability platform for Istio”
If you run software on someone else’s servers, you have a problem. You can’t be sure your data and code aren’t being observed, or worse, tampered with; trust is your only assurance. But there is hope, in the form of Trusted Execution Environments (TEEs) and a new open source project, Enarx, that will make use of TEEs to minimize the trust you need to confidently run on other people’s hardware. This article delves into this problem, explains how TEEs work and their limitations (a TEE primer of sorts), and describes how Enarx aims to work around those limitations. It is the next in a series that started with Trust No One, Run Everywhere–Introducing Enarx.