Scaling workload storage requirements across clusters

A number of multi-cloud orchestrators have promised to simplify deploying hundreds or thousands of high-availability services.  But this comes with massive infrastructure requirements. How could we possibly manage the storage needs of a thousand stateful processes?  In this blog, we’ll examine how we can leverage these orchestrators to address our dynamic storage requirements.

Currently in Kubernetes, there are two approaches to how a control plane can scale resources across multiple clusters. These are commonly referred to as the Push and Pull models, referring to the way in which configurations are ingested by a managed cluster. Despite being antonyms in name, these models are not mutually exclusive and may be deployed together to target separate problem spaces in a managed multi-cluster environment.
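To make the distinction concrete, here is a minimal, purely illustrative Python sketch of the two models; the cluster names, the in-memory hub state, and the apply_manifest helper are hypothetical stand-ins for a real control plane and real cluster API servers.

```python
# Illustrative sketch only: contrasts the Push and Pull models described above.
# Cluster names, the apply_manifest helper, and the in-memory "hub" are
# hypothetical stand-ins, not part of any real multi-cluster product.

import time
from typing import Dict, List

# Desired state held by the hub (control plane) for each managed cluster.
HUB_DESIRED_STATE: Dict[str, List[dict]] = {
    "cluster-east": [{"kind": "PersistentVolumeClaim", "metadata": {"name": "db-data"}}],
    "cluster-west": [{"kind": "PersistentVolumeClaim", "metadata": {"name": "db-data"}}],
}


def apply_manifest(cluster: str, manifest: dict) -> None:
    """Stand-in for calling a cluster's API server (think `kubectl apply`)."""
    print(f"[{cluster}] applying {manifest['kind']}/{manifest['metadata']['name']}")


def push_model() -> None:
    """Push model: the hub connects out to every managed cluster and applies config."""
    for cluster, manifests in HUB_DESIRED_STATE.items():
        for manifest in manifests:
            apply_manifest(cluster, manifest)


def pull_model(cluster: str, poll_seconds: float = 1.0, iterations: int = 3) -> None:
    """Pull model: an agent on the managed cluster polls the hub for its own config."""
    for _ in range(iterations):
        for manifest in HUB_DESIRED_STATE.get(cluster, []):
            apply_manifest(cluster, manifest)
        time.sleep(poll_seconds)


if __name__ == "__main__":
    push_model()                # hub-initiated
    pull_model("cluster-east")  # agent-initiated
```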

Continue reading “Scaling workload storage requirements across clusters”

Red Hat and NVIDIA bring scalable, efficient edge computing to smart cities

Teams from Red Hat and NVIDIA have collaborated on creating a scalable hybrid cloud application that could revolutionize smart city initiatives such as traffic-flow monitoring and transportation management around the world. By working together, the two companies are creating solutions that make cities smarter and more efficient by taking sensor data and processing it in real time to provide insights into traffic congestion, pedestrian flow, and infrastructure maintenance.

Running on top of the NVIDIA EGX platform with the NVIDIA GPU Operator, the application is built with NVIDIA’s Metropolis application framework for IoT, which brings together innovative capabilities for real-time image processing: the NVIDIA DeepStream SDK extracts metadata from live video streams at the edge. The relevant metadata is then forwarded to the cloud for deeper analytical processing and presentation in an information dashboard.
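As a rough illustration of that edge-to-cloud handoff (not taken from the actual Metropolis/DeepStream pipeline, which has its own message-broker integrations), the sketch below publishes per-frame metadata to a cloud-side Kafka topic; the broker address, topic name, and metadata schema are assumptions.

```python
# Hypothetical sketch of the edge-to-cloud handoff: per-frame metadata (not the
# video itself) is serialized and published to a cloud-side message broker.
# Broker address, topic name, and the metadata schema are illustrative only.

import json
import time

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="cloud-broker.example.com:9092",
    value_serializer=lambda payload: json.dumps(payload).encode("utf-8"),
)


def publish_frame_metadata(camera_id: str, objects: list) -> None:
    """Send only the extracted metadata upstream; raw frames stay at the edge."""
    event = {
        "camera_id": camera_id,
        "timestamp": time.time(),
        "objects": objects,  # detections produced at the edge
    }
    producer.send("smart-city-metadata", event)


publish_frame_metadata("intersection-42", [{"label": "car", "confidence": 0.91}])
producer.flush()
```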

Continue reading “Red Hat and NVIDIA bring scalable, efficient edge computing to smart cities”

Passing Go: polyglot Kubernetes Operators

Operators within Kubernetes are useful tools, designed to extend the container orchestration platform with additional resources. More directly, an Operator, sometimes referred to as a custom controller, is a method of packaging, deploying, and managing a Kubernetes application.

As useful as Operators are, they have had one limitation: originally, they all had to be written in the Go programming language. Thanks to the Operator SDK, you no longer need to develop your Operators in Go. The Operator SDK has options for Ansible and Helm that may be better suited for the way you or your team work. But it can still be limiting for development teams trying to build an Operator if they don’t happen to be skilled in Helm or Ansible.
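As a hedged illustration of that broader “polyglot” idea, here is roughly what a minimal reconcile handler looks like in Python using the community kopf framework; the custom resource group, version, and plural below are made up, and this is not the approach the post itself describes.

```python
# Illustrative only: a minimal Python operator using the community `kopf`
# framework (pip install kopf). The CRD group/version/plural below are made up;
# the post itself focuses on the Operator SDK's Ansible and Helm options.

import kopf


@kopf.on.create("example.com", "v1", "fedorafinders")
def on_create(spec, name, namespace, logger, **kwargs):
    """React to a new custom resource, the role a Go controller's Reconcile() plays."""
    replicas = spec.get("replicas", 1)
    logger.info(f"Would ensure {replicas} replica(s) for {namespace}/{name}")
    # The returned dict is recorded under .status on the custom resource.
    return {"observedReplicas": replicas}


# Run with: kopf run this_file.py --verbose
```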

Continue reading “Passing Go: polyglot Kubernetes Operators”

Managing chaos in a containerized environment

Quick, name some weird stuff that’s happened to your production machines.

Accidentally dropping a production database table? Rolling out a patch that enabled any user to log in with any password? Disabling a load balancer? Using a dictionary to physically keep keyboard keys depressed so “terminals [could] repeatedly [hit] the enter key in order for the logins and print jobs of about 40,000 people to work”?

It’s happened to Alex Corvin, a senior engineer at Red Hat. Well, not that last one. But Corvin has been around long enough in his career to have met Mr. Murphy and his Law: if it can go wrong, it will.

Continue reading “Managing chaos in a containerized environment”

Trust No One, Run Everywhere–Introducing Enarx

When you run a workload as a VM, in a container, or in a serverless environment, that workload is vulnerable to interference by any person or software with hypervisor, root, or kernel access. Enarx, a new open source project, aims to make it simple to deploy workloads to a variety of trusted execution environments (TEEs) in the public cloud, on your premises, or elsewhere, and to ensure that your application workload is as secure as possible.

When you run your workloads in the cloud, there are no technical barriers to prevent the cloud providers, or their employees, from looking into your workloads, peeking into the data, or even changing the running process. That’s because when you run a workload as a VM, a container, or a serverless function, the way these are implemented means that a person or software entity with sufficient access can interfere with any process running on that machine.

Continue reading “Trust No One, Run Everywhere–Introducing Enarx”

Understanding and Applying Storage Federation Patterns Using KubeFed

As a cloud user, how do you avoid being pulled into the data gravity of one provider or another? How can you get the flexibility and tooling to migrate your infrastructure and applications as your needs change? How do you get to a future in which storage federation delivers real data agility?

In this blog we cover the primary motivations and considerations that drive flexible, scalable, and agile data management. Our subsequent blogs cover practical use cases and concrete solutions for our six federated storage patterns.

All of this is grounded in Red Hat’s work to take a lead in multi-cluster enablement and hybrid cloud capabilities. That work focuses on leading and advancing the projects that lay the groundwork for this vision in OpenShift, driven by the Kubernetes project KubeFed.
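For a flavor of what federating a storage resource with KubeFed might look like, here is a hedged sketch using the Kubernetes Python client; it assumes KubeFed’s types.kubefed.io/v1beta1 API is installed and that the PersistentVolumeClaim type has been enabled for federation, and the cluster names, namespace, and sizes are illustrative.

```python
# Hedged sketch, not from the post: creating a KubeFed-style federated resource
# with the Kubernetes Python client. Assumes KubeFed's types.kubefed.io/v1beta1
# API is installed and PVCs are enabled for federation; names are hypothetical.

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

federated_pvc = {
    "apiVersion": "types.kubefed.io/v1beta1",
    "kind": "FederatedPersistentVolumeClaim",
    "metadata": {"name": "shared-data", "namespace": "demo"},
    "spec": {
        # The resource to stamp out on each member cluster.
        "template": {
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "10Gi"}},
            }
        },
        # Which member clusters should receive it.
        "placement": {"clusters": [{"name": "cluster-east"}, {"name": "cluster-west"}]},
    },
}

api.create_namespaced_custom_object(
    group="types.kubefed.io",
    version="v1beta1",
    namespace="demo",
    plural="federatedpersistentvolumeclaims",
    body=federated_pvc,
)
```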

Continue reading “Understanding and Applying Storage Federation Patterns Using KubeFed”

Building a Scalable TensorFlow Twitter Bot for Red Hat Summit

Red Hat’s AI Center of Excellence and PerceptiLabs wanted a way to demonstrate a TensorFlow model to the public during the 2019 Red Hat Summit. The plan was for this model to take images as input, and then respond with the likelihood of a Red Hat fedora being in that image. Here’s what we learned during Red Hat Summit.

This application, which we called Fedora Finder Bot, would be featured during Red Hat CTO Chris Wright’s keynote, where PerceptiLabs demoed their AI platform.

Our initial solution for this objective was a Twitter bot that receives tweets or direct messages and replies with the output from the TensorFlow model. Because Twitter is a public service, we felt it could make the model available to a large number of users: anyone could tweet a picture to the bot, and the bot would respond with the model’s output.
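A rough sketch of that approach is below, with made-up credentials, bot handle, and model file; it uses tweepy’s 3.x streaming API and a saved Keras model, and the actual Summit bot’s code and preprocessing almost certainly differ.

```python
# Rough sketch of the approach, with made-up credentials, handle, and model path.
# Uses tweepy's (v3.x) streaming API plus a saved Keras/TensorFlow model; the
# actual Summit bot's code and preprocessing may differ.

import io

import numpy as np
import requests
import tensorflow as tf
import tweepy
from PIL import Image

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

model = tf.keras.models.load_model("fedora_finder.h5")  # hypothetical model file


def fedora_likelihood(image_url: str) -> float:
    """Download a tweeted image and return the model's fedora probability."""
    raw = requests.get(image_url, timeout=10).content
    image = Image.open(io.BytesIO(raw)).convert("RGB").resize((224, 224))
    batch = np.expand_dims(np.asarray(image) / 255.0, axis=0)
    return float(model.predict(batch)[0][0])


class FedoraFinderListener(tweepy.StreamListener):
    def on_status(self, status):
        media = status.entities.get("media", [])
        if not media:
            return
        score = fedora_likelihood(media[0]["media_url_https"])
        api.update_status(
            status=f"@{status.user.screen_name} Fedora likelihood: {score:.0%}",
            in_reply_to_status_id=status.id,
        )


stream = tweepy.Stream(auth=api.auth, listener=FedoraFinderListener())
stream.filter(track=["@FedoraFinderBot"])  # hypothetical bot handle
```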

Continue reading “Building a Scalable TensorFlow Twitter Bot for Red Hat Summit”

Rook Changes the Kubernetes Storage Landscape

It’s no secret that if you want to run containerized applications in a distributed way, then Kubernetes is the platform for you. Kubernetes’ role as an orchestration platform for containers has taken center stage, automating the deployment, scaling, and management of applications within containers. Red Hat’s own OpenShift Container Platform is a Kubernetes distribution optimized for enterprises.

Storage has been one of the areas of potential optimization. Containers, by their very nature, are usually small enough to be easily distributed and managed. Containers hold applications, but the data those applications use needs to be held somewhere else, for a number of reasons. Of particular interest in this post, we want to avoid the containers themselves becoming too large and unwieldy to be effectively managed.
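To show what “holding the data somewhere else” looks like in practice, here is a hedged sketch that requests a persistent volume claim from a Rook-provisioned Ceph pool via the Kubernetes Python client; the StorageClass name follows Rook’s common examples but may differ in your cluster, and the namespace and size are illustrative.

```python
# Hedged sketch: requesting persistent storage for a container from a
# Rook-provisioned pool via the Kubernetes Python client. The StorageClass
# name "rook-ceph-block" follows Rook's common examples but may differ in
# your cluster; namespace and size are illustrative.

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="rook-ceph-block",
        resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
    ),
)

# The application's pod mounts this claim, so the data lives outside the
# container image and survives restarts and rescheduling.
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```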

Continue reading “Rook Changes the Kubernetes Storage Landscape”

Next Generation Tools for Container Technology

In this video from the 2018 Red Hat Summit, Dan Walsh and Mrunal Patel lead a journey through a set of next generation tools for creating, deploying, and maintaining containers.

This journey covers tools such as CRI-O, Buildah, and Skopeo, which are being developed with other tools by Red Hat and the community into a complete toolchain for developing, operating, and maintaining Open Container Initiative (OCI)-compliant containers.

Continue reading “Next Generation Tools for Container Technology”

Kubernetes and the Platform of the Future

In another installment from the Office of the CTO’s Red Hat Summit track, this video is an informal discussion between Brandon Philips (previously CTO of CoreOS, acquired by Red Hat) and Clayton Coleman (Chief Engineer for OpenShift), interviewed by Steve Watt. They focus on Kubernetes as a platform of the future, identifying interesting trends in the open source ecosystem.

This discussion is a good example of the type of technologists who make up the modern open source ecosystem, epitomized by these three from Red Hat. Their backgrounds in real-world development and operations combine with a genuine desire to help people, and that combination fuels their work in open source communities and product creation.

Continue reading “Kubernetes and the Platform of the Future”
