Red Hat’s Open Source AI Vision

Analytics, Machine Learning, and AI represent a fundamental transformation that over the coming decade will affect every aspect of society, business, and industry. It will fundamentally change how we interact with computers, and how we develop, maintain, and operate systems. Its impact will be visible in our part of the universe much sooner than in the analog world. This deeply affects open source in general, as well as Red Hat, its ecosystem, and its customer base.

In this video from the inaugural DevConf.US 2018, Daniel Riek, who leads the AI Center of Excellence in Red Hat’s Office of the CTO, talks about this coming change.

Continue reading “Red Hat’s Open Source AI Vision”

Transforming IT Operations: A Roadmap

Digital transformation is more than just a fancy buzzword. With 85 percent of Global 2000 CEOs believing in digital innovation as a driver of business success, it is estimated that nearly $2.1 trillion will be invested in digital transformation technologies in 2019.

According to Mary Johnston Turner, Director, Management Software BU Evangelism, the drivers of digital transformation are going to shape IT decision-making for the near-term future. Turner outlined the most significant of those factors in her 2018 Summit breakout session, “Transforming IT Ops: The future of IT automation & management.”

Continue reading “Transforming IT Operations: A Roadmap”

Machine Learning as a Service

Experimenting with machine learning algorithms or integrating such techniques into an existing environment often presents challenges, such as selecting and deploying the right infrastructure and having the necessary data science background and skills. In this post, we present a service that allows users to train machine learning models, run analyses using trained models, and manage the data required for those models and analyses. Machine learning models and their prediction results can then be easily integrated into an existing continuous integration (CI) pipeline or IT infrastructure through a REST API.
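
As a rough sketch of how that REST integration can look, the following Python snippet invokes a hypothetical training action through OpenWhisk’s standard REST API. The host, namespace, action name, credentials, and payload fields are all placeholders; the real action names live in the AI Library repository.

```python
import requests

# All values below are illustrative placeholders, not the service's real
# endpoints: adjust the host, namespace, action name, and payload to match
# your OpenWhisk deployment and the actions shipped in the AI Library.
APIHOST = "https://openwhisk.example.com"
AUTH = ("user", "password")  # OpenWhisk auth key, split at the colon

# POST /api/v1/namespaces/{namespace}/actions/{action} invokes an action;
# blocking=true and result=true wait for completion and return the result.
resp = requests.post(
    f"{APIHOST}/api/v1/namespaces/guest/actions/train",
    params={"blocking": "true", "result": "true"},
    auth=AUTH,
    json={"model": "my-model", "data": "datasets/training.csv"},
)
resp.raise_for_status()
print(resp.json())  # e.g. a model location or a training status
```

A CI job can call the same endpoint after every merge to retrain or re-score a model, which is what makes the REST interface convenient to integrate.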

Overview

The main components of the service are Apache OpenWhisk, Red Hat OpenShift, and Ceph Storage, and our implementation is available in the AI Library at https://gitlab.com/opendatahub/ai-library. OpenWhisk is a serverless computing platform that provides the interface through which users submit HTTP requests to train or execute machine learning models. HTTP requests submitted to OpenWhisk are actually targeted at stateless functions, called actions, that run on the platform. Ceph Storage holds the training and prediction data, the models, and the results. Users can submit data to the Ceph backend through an OpenWhisk action provided in our implementation (s3.py) or through any custom tool, such as the RADOS object storage utility, that can interact with Ceph. The ‘s3.py’ action supports not only Ceph but any S3-compatible storage backend.
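
On the storage side, here is a minimal sketch of pushing a training dataset into an S3-compatible backend such as Ceph’s RADOS Gateway, using the boto3 client; the endpoint, credentials, and bucket name are assumptions for illustration only.

```python
import boto3

# Endpoint, credentials, and bucket below are illustrative placeholders;
# point them at your Ceph RADOS Gateway or any other S3-compatible backend.
s3 = boto3.client(
    "s3",
    endpoint_url="https://ceph-rgw.example.com:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Upload a local file into an existing bucket; an OpenWhisk action such as
# s3.py can then read it back by bucket name and object key.
s3.upload_file("training.csv", "ml-data", "datasets/training.csv")
```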

Continue reading “Machine Learning as a Service”

Blockchain: A Primer on How to Identify Good Use Cases

Everyone has an opinion on how Blockchain will change business and society. Quite a few startups are working on Blockchain-based products or services, and some are even using initial coin offerings (ICOs) as a funding vehicle. However, it’s hard to find good use cases that haven’t already been solved with more traditional technologies and business models.

To overcome this hurdle, I have created a simple framework that might help people evaluate use cases and identify the most promising ones. There are many things to take into account when evaluating a Blockchain use case, but only a few are crucial; the others can be considered implementation details. We need to begin with the most important one: the class of problems Blockchain is designed to address.

Continue reading “Blockchain: A Primer on How to Identify Good Use Cases”

Modularity: Establishing Balance Between Devs and Ops

It’s no secret that to do their jobs well, developers often need to use as many tools as they can get their hands on to build the best application they can. For them, the right tools for the right job may consist of this version of component X and that version of component Y. But for another tool, entirely different versions of the same components might be needed.

For coders, this is usually just a matter of grabbing the different versions of software they need off the internet, installing them, and using them to their heart’s content. No problem, right? Perhaps not for the developer, but from a systems administrator’s point of view, such installations can create systems that are very difficult to manage, particularly on the server side, where having software in packages that are supported and auditable is very much the preferred option.

Continue reading “Modularity: Establishing Balance Between Devs and Ops”

UKL: A Unikernel Based on Linux

Unikernels are customized, single address space bootable images composed of an application and the required bare-minimum kernel functionality. Today’s unikernels have demonstrated substantial performance and security advantages over monolithic and microkernels, but none have yet achieved widespread adoption.
The fundamental problem is that today’s unikernels, which have been developed by forking existing operating systems or as clean-slate designs, have abandoned the evolutionary community process that has made Linux such a success. In this post we describe an alternative approach we are pursuing, with the goal of making unikernels a community-supported, evolving capability of Linux and the GNU C Library (glibc).

Continue reading “UKL: A Unikernel Based on Linux”

Seeing the Trees in the Forest: Anomaly Detection with Prometheus

Red Hat’s work in the field of artificial intelligence is primarily taking three directions right now. First, our engineers see the inclusion of AI features as a workload requirement for our platforms, and they see AI as applicable to Red Hat’s existing core business, where it can make open source development and production more efficient. In short, Red Hat thinks AI can be good for our customers and good for us, too.

Second, Red Hat is collaborating with the Mass Open Cloud project to establish the one thing that all AI tools need the most: data. Our team members are working on the Open Data Hub, a cloud platform that lets data scientists spend less time on dealing with infrastructure administration and more time building and running their data models.

The third aspect of Red Hat’s work in AI right now is at the application level. More to the point: how can developers plug AI tools into applications so that data from those applications can be gathered for storage and later modeling?
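
To make that concrete, here is a minimal sketch of the kind of application instrumentation involved, using the Python prometheus_client library; the metric name and port are illustrative and not taken from the original post.

```python
import random
import time

from prometheus_client import Histogram, start_http_server

# Illustrative metric; a real application would instrument its actual
# request path rather than a simulated one.
REQUEST_LATENCY = Histogram(
    "app_request_latency_seconds", "Time spent handling a request"
)

@REQUEST_LATENCY.time()  # record how long each call takes
def handle_request():
    time.sleep(random.random() / 10)  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)  # expose /metrics for Prometheus to scrape
    while True:
        handle_request()
```

Once Prometheus scrapes and stores these metrics, an anomaly detection model can be trained on the resulting time series to flag values that deviate from the learned baseline.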

Continue reading “Seeing the Trees in the Forest: Anomaly Detection with Prometheus”

Getting Strategic About Security

In this video from the Red Hat Summit 2018, Chief Security Architect Mike Bursell takes an enthusiastic look at three open source security technologies: DevSecOps, serverless computing, and Trusted Execution Environments.

These technologies are examples of where Red Hat’s long view of security is aimed.

Continue reading “Getting Strategic About Security”

A Hub for Open Data at Mass Open Cloud

Open source software is good. Open source plus open data is even better. That makes initiatives such as the Open Data Hub useful both in their own right and as a template for maintaining control over your data.

Access to, and the ability to collaboratively build upon, open source code is genuinely useful. If it weren’t, open source software wouldn’t have become such an important part of how technology has developed over the past couple of decades. There are ideological reasons to prefer open source as well, but its effectiveness as a development model has won over the pragmatists.

Continue reading “A Hub for Open Data at Mass Open Cloud”

Building trust in cloud computing with Keylime

The goal of the Keylime project is to connect the features of Trusted Platform Modules (TPMs) with cloud computing. Keylime is a scalable trusted cloud key management system that provides an end-to-end solution for bootstrapping hardware-rooted cryptographic identities for Infrastructure-as-a-Service (IaaS) nodes and for monitoring the system integrity of those nodes via periodic attestation. Keylime extends the attestation capabilities of the TPM into the cloud, allowing tenants to verify that their applications, their operating systems, and everything down to the hardware have not been tampered with.

A TPM is a chip, present in most modern computers, that can perform various cryptographic operations in a tamper-proof fashion. In particular, through UEFI secure boot, a TPM can be used to verify at boot time that nothing from the firmware up through the kernel and applications has been modified from what the distributor originally shipped.
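
To illustrate the measurement chain a TPM builds up, here is a toy Python model of the PCR (Platform Configuration Register) “extend” operation; it is a conceptual sketch under simplified assumptions, not Keylime’s implementation.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """Toy model of a TPM PCR extend: the register can only be updated
    by hashing its old value together with the new measurement, so the
    final value commits to the entire ordered chain of measurements."""
    return hashlib.sha256(pcr + measurement).digest()

# Simulate measuring a boot chain (component names are illustrative).
pcr = bytes(32)  # PCRs start out zeroed at power-on
for component in [b"firmware", b"bootloader", b"kernel"]:
    pcr = pcr_extend(pcr, hashlib.sha256(component).digest())

print(pcr.hex())
```

Because the extend operation is one-way and order-sensitive, a verifier that compares the final register value against a known-good one detects any modified, missing, or reordered component; Keylime’s periodic attestation applies the same idea to running cloud nodes.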

Continue reading “Building trust in cloud computing with Keylime”