Sysadmins have thoroughly enjoyed the wonders of automation in recent years, with tools like Ansible enabling rapid deployment of applications and services across servers and cloud-based platforms. But as the IT world shifts toward container-based technologies, tools like Ansible have not translated well to orchestration-level actions.
At the first signs of spring, all Red Hatters turn at least one eye toward Red Hat Summit. Over the years, we’ve had many conversations with attendees about what kind of information and perspectives they’d like to hear at Summit. We learned that attendees appreciated the actionable technical information they received, but that they also wanted insight into Red Hat’s point of view on emerging technology trends and its thoughts on the future. That was the motivation behind a new set of sessions from the Office of the CTO that we’re very excited to announce.
Sitting in the frigid air-conditioned room somewhere under the surface of a tropical island, it soon became obvious that I was very likely the dumbest person in the place. And, if the men and women around me have their druthers, in a few years, I might not be the smartest sentient entity in the room, either.
It wasn’t a mad scientists’ convention, but rather Supercomputing Asia 2018 that brought me to this place on Sentosa in Singapore a couple of weeks ago, where engineers, computer scientists, and business people gathered to discuss the trends and technology within the supercomputing realm.
Blockchain is everybody’s latest buzzword, right up there with AI and IoT, but what does it mean, and how is it relevant to the enterprise?
The answer to those questions is likely “a lot,” but before we get to that, let’s define what a blockchain is, and isn’t.
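Before getting to the enterprise angle, a minimal sketch can pin down the definition. The code below is an illustration only (the block layout and field names are my own, not any particular blockchain product): a blockchain is, at its core, a hash-linked list of records in which each block commits to its predecessor, so tampering with any block invalidates everything after it.

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Create a block whose hash commits to both its data and its predecessor."""
    block = {"data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain):
    """A chain is valid if every block's stored hash matches its contents
    and its prev_hash matches the previous block's hash."""
    for i, block in enumerate(chain):
        body = {"data": block["data"], "prev_hash": block["prev_hash"]}
        payload = json.dumps(body, sort_keys=True).encode()
        if block["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("tx: alice -> bob, 5 units", genesis["hash"])]
assert verify_chain(chain)

chain[0]["data"] = "tampered"  # editing any block breaks every later link
assert not verify_chain(chain)
```

Real blockchains add consensus, signatures, and distribution on top of this structure, but the hash-linking above is the part that makes the ledger tamper-evident.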
If you could visualize the code that comprises our current technology landscape, you might imagine in your mind’s eye a glowing field of interconnected lines with bright bits of information flowing along the lines’ paths. Here and there, you might see flaws in the network, places where human error has introduced gaps and openings among the lines.
In the previous blog, my colleague David Bericat discussed why Internet of Things (IoT) architecture should be built with open source. One of the core components of end-to-end IoT architecture listed in that article was an intelligent IoT gateway that can process data near its source in near real time and filter/prioritize the actionable data. In this article, we’ll explore the reasons behind the need for an intelligent IoT gateway.
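To make the gateway’s role concrete, here is a small sketch of the filtering idea described above. The sensor names and the “normal” threshold band are hypothetical, invented for illustration: the point is that a gateway drops routine telemetry at the edge and forwards only the actionable readings upstream.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    temperature_c: float

# Hypothetical threshold: readings inside this band are routine telemetry,
# anything outside it is "actionable" and worth escalating.
NORMAL_RANGE = (10.0, 60.0)

def filter_actionable(readings):
    """Keep only the readings a gateway would forward upstream, discarding
    routine data before it crosses the (costly, possibly intermittent) uplink."""
    lo, hi = NORMAL_RANGE
    return [r for r in readings if not (lo <= r.temperature_c <= hi)]

readings = [
    Reading("boiler-1", 45.2),  # normal: filtered out at the edge
    Reading("boiler-2", 81.7),  # overheating: forwarded upstream
    Reading("intake-3", 4.9),   # too cold: forwarded upstream
]
actionable = filter_actionable(readings)
assert [r.sensor_id for r in actionable] == ["boiler-2", "intake-3"]
```

A production gateway would of course do far more (protocol translation, buffering, local analytics), but even this tiny filter shows why processing near the source cuts upstream bandwidth and latency.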
Designing, implementing, securely operating, managing, and maintaining IoT projects is complex. In fact, there are entire organizations whose sole mission is solving a specific problem within an IoT architecture. These problems range from connectivity all the way to figuring out where applications should live.
Computing styles ebb and flow. The centralized mainframe in the glass room largely ebbed in favor of the PC revolution that itself gave way, at least in part, to the web and the cloud. Today, we have a complex mix of massive datacenters, Internet-of-Things (IoT) devices, and sophisticated computers we can hold in the palm of our hand.
The TripleO project is transitioning from bare-metal to container-based OpenStack deployments. This transition started almost a year ago and was split into two phases. The first phase targets Docker as the container runtime, whereas the second phase moves these container images to Kubernetes. In this post, we will focus on the second phase of the transition, specifically, how to deploy these services.
In January of 2015, the Open vSwitch (OVS) team announced plans to start a new project within OVS called OVN (Open Virtual Network). The timing could not have been better for me, as I was looking around for a new project. I dove in with the goal of figuring out whether OVN could be a promising next generation of Open vSwitch integration for OpenStack, and I have been contributing to it ever since.