Red Hat’s work within the field of artificial intelligence is primarily taking three directions right now. First, our engineers see support for AI features as a workload requirement for our platforms, and they see AI as applicable to Red Hat’s existing core business, where it can make open source development and production more efficient. In short, Red Hat thinks AI can be good for our customers and good for us, too.
Second, Red Hat is collaborating with the Mass Open Cloud project to establish the one thing that all AI tools need the most: data. Our team members are working on the Open Data Hub, a cloud platform that lets data scientists spend less time dealing with infrastructure administration and more time building and running their data models.
The third aspect of Red Hat’s work in AI right now is at the application level. More to the point, how can developers plug in AI tools to applications so that data from those applications can be gathered for storage and later modeling?
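As a rough illustration of what modeling gathered metrics can look like, here is a minimal sketch of statistical anomaly detection, the technique named in the post below, using a rolling z-score over a metric series. The function, sample data, and threshold are all hypothetical for the sake of the example, not taken from the post.

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=10, threshold=3.0):
    """Flag points that deviate more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu = mean(history)
        sigma = stdev(history)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A steady request-rate metric with one spike injected at index 15.
series = [100.0, 101.0, 99.0, 100.5, 100.0, 99.5, 100.2, 100.8,
          99.7, 100.1, 100.3, 99.9, 100.4, 100.0, 99.8, 250.0,
          100.2, 99.6]
print(detect_anomalies(series))  # → [15]
```

A real deployment would pull these samples from a metrics store such as Prometheus and would likely use a more robust model, but the core idea of comparing each new observation against recent history is the same.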
Continue reading “Seeing the Trees in the Forest: Anomaly Detection with Prometheus”
In this video from Red Hat Summit 2018, Chief Security Architect Mike Bursell takes an enthusiastic look at three open source security technologies: DevSecOps, serverless computing, and Trusted Execution Environments.
These technologies are examples of where Red Hat’s long view is aimed for the security realm.
Continue reading “Getting Strategic About Security”
Open source software is good. Open source plus open data is even better. That makes initiatives such as the Open Data Hub both useful in and of themselves and as a template for maintaining control over your data.
Access to, and the ability to collaboratively build upon, open source code is genuinely useful. If it weren’t, open source software wouldn’t have become such an important part of how technology has developed over the past couple of decades. There are ideological reasons to prefer open source as well, but its effectiveness as a development model has won over the pragmatists.
Continue reading “A Hub for Open Data at Mass Open Cloud”
The goal of the Keylime project is to connect the features of Trusted Platform Modules (TPMs) and cloud computing. Keylime is a scalable trusted cloud key management system that provides an end-to-end solution for bootstrapping hardware-rooted cryptographic identities for Infrastructure-as-a-Service (IaaS) nodes and for monitoring the integrity of those nodes via periodic attestation. Keylime extends the attestation capabilities of the TPM into the cloud, allowing tenants to verify that their applications, operating systems, and everything down to the hardware have not been tampered with.
A TPM (Trusted Platform Module) is a chip, present in most modern computers, that can perform various cryptographic operations in a tamper-resistant fashion. In particular, through UEFI secure boot, a TPM can be used to verify at boot time that nothing from the firmware up through the kernel and applications has been modified from what the distributor originally shipped.
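To make the verification model concrete, here is a minimal sketch (in plain Python, not Keylime’s actual code) of how a TPM’s Platform Configuration Register (PCR) accumulates boot measurements: each boot stage extends the register by hashing the old value together with the new measurement’s digest, so a verifier that replays the expected measurement log can detect any deviation. The stage names are hypothetical.

```python
import hashlib

def pcr_extend(pcr_value: bytes, measurement: bytes) -> bytes:
    """PCR extend: the new register value is the hash of the old
    value concatenated with the digest of the new measurement."""
    return hashlib.sha256(
        pcr_value + hashlib.sha256(measurement).digest()
    ).digest()

# A PCR starts at all zeros; each boot stage extends it in order.
pcr = bytes(32)
for stage in [b"firmware-image", b"bootloader", b"kernel-5.x"]:
    pcr = pcr_extend(pcr, stage)

# A remote verifier replays the expected measurement log and compares.
expected = bytes(32)
for stage in [b"firmware-image", b"bootloader", b"kernel-5.x"]:
    expected = pcr_extend(expected, stage)

print(pcr == expected)  # → True: boot chain matches expectations
print(pcr == pcr_extend(expected, b"rootkit"))  # → False: tampering detected
```

Because the extend operation is one-way and order-sensitive, a compromised stage cannot forge a clean final value; this is the property Keylime builds on when it attests nodes from the hardware up.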
Continue reading “Building trust in cloud computing with Keylime”
The world of multi-tenant bare metal cloud computing in the datacenter is increasingly important. With tenants offered their own servers rather than locked-down VMs or compute services, the potential for innovation is much higher. Mass Open Cloud aims to offer a multi-tenant cloud where hardware is shared among organizations, such as universities, with tenants able to access bare metal instances directly. Here’s how we propose to create a standardized architecture that provides a seamless, elastic bare-metal experience for Mass Open Cloud and similar environments.
Our solution to the bare-metal-as-a-service problem combines two projects: Mass Open Cloud’s Malleable Metal as a Service (M2) and the Red Hat-stewarded Foreman project. Where M2 provides the means for provisioning servers, Foreman provides the orchestration and user interface.
Continue reading “Malleable Metal – Integrating SAN-booting with Foreman”
In this video from the 2018 Red Hat Summit, Dan Walsh and Mrunal Patel lead a journey through a set of next generation tools for creating, deploying, and maintaining containers.
This journey covers tools such as CRI-O, Buildah, and Skopeo, which are being developed with other tools by Red Hat and the community into a complete toolchain for developing, operating, and maintaining Open Container Initiative (OCI)-compliant containers.
Continue reading “Next Generation Tools for Container Technology”
In another installment from the Red Hat Summit track from the Office of the CTO, this video is an informal discussion between Brandon Philips (previously CTO of CoreOS, acquired by Red Hat) and Clayton Coleman (Chief Engineer for OpenShift), interviewed by Steve Watt. They focus on Kubernetes as a platform of the future, identifying interesting trends in the open source ecosystem.
This discussion is a good example of the type of technologists who make up the modern open source ecosystem, epitomized by these three from Red Hat. Their backgrounds in real-world development and operations combine with a genuine desire to help people, and that combination fuels their work in open source communities and product creation.
Continue reading “Kubernetes and the Platform of the Future”
The challenges of maintaining persistent storage in environments that are anything but persistent should not be taken lightly. My recent conversation with Ceph founder Sage Weil certainly made that clear. Thus far, the conversation with Sage has highlighted key areas of focus for the Red Hat Storage team as they look to the horizon, including how storage plans are affected by:
- Hardware trends (examined in Part 1)
- Software platforms (reviewed in Part 2)
- Multi-cloud and hybrid cloud (discussed in Part 3)
In the last segment of our interview, Sage focused on technology that’s very much on the horizon: emerging workloads. Specifically, how will storage work in a world where artificial intelligence and machine learning begin to shape software, hardware, and networking architecture?
Continue reading “The Future of Storage in Container Space: Part 4”
As an industry, we look to open source communities as our core innovation engine. At Red Hat, we’re always monitoring, participating in, and even creating these open source communities. Here’s how you can garner some insight into where the industry, and Red Hat, might be going next.
Continue reading “Introducing now + Next”
It was not that long ago that organizations had in-house servers humming along, running applications and storing data. Today, the opportunity afforded by containers means that applications can live on a public or private cloud platform, or on any one of several available cloud platforms.
But while applications and microservices housed in stateless containers are easy to move from place to place (indeed, that’s a big part of the appeal of containers), the data those applications access is stateful and very, very difficult to relocate while maintaining consistency, latency, and throughput. This is one of the challenges faced by the Red Hat Storage team, and one addressed by Sage Weil in his recent presentation at Red Hat Summit: maintaining data availability with acceptable latency when applications run in multi-cloud and hybrid cloud environments.
Continue reading “The Future of Storage in Container Space: Part 3”