It’s no secret that developers need every tool they can get their hands on to build the best applications they can. For them, the right tool for the job may mean this version of component X and that version of component Y, while another tool may require entirely different versions of those same components.
For coders, this is usually just a matter of grabbing whatever versions of software they need off the internet, installing them, and using them to their heart’s content. No problem, right? Perhaps not for the developer, but from a systems administrator’s point of view, such installations can create systems that are very difficult to manage, particularly on the server side, where having software in packages that are supported and auditable is very much the preferred option.
Continue reading “Modularity: Establishing Balance Between Devs and Ops”
Red Hat’s work in the field of artificial intelligence is taking three primary directions right now. First, our engineers see support for AI workloads as a requirement for our platforms, and they see AI as applicable to Red Hat’s existing core business, where it can make open source development and production more efficient. In short, Red Hat thinks AI can be good for our customers and good for us, too.
Second, Red Hat is collaborating with the Mass Open Cloud project to establish the one thing that all AI tools need most: data. Our team members are working on the Open Data Hub, a cloud platform that lets data scientists spend less time dealing with infrastructure administration and more time building and running their data models.
The third aspect of Red Hat’s work in AI right now is at the application level. More to the point: how can developers plug AI tools into applications so that data from those applications can be gathered for storage and later modeling?
Continue reading “Seeing the Trees in the Forest: Anomaly Detection with Prometheus”
The challenges of maintaining persistent storage in environments that are anything but persistent should not be taken lightly. My recent conversation with Ceph founder Sage Weil certainly made that clear. Thus far, the conversation with Sage has highlighted key areas of focus for the Red Hat Storage team as they look to the horizon, including how storage plans are affected by:
- Hardware trends (examined in Part 1)
- Software platforms (reviewed in Part 2)
- Multi-cloud and hybrid cloud (discussed in Part 3)
In the last segment of our interview, Sage focused on technology that’s very much on the horizon: emerging workloads. Specifically, how will storage work in a world where artificial intelligence and machine learning begin to shape software, hardware, and networking architecture?
Continue reading “The Future of Storage in Container Space: Part 4”
Not that long ago, organizations had in-house servers humming along, running applications and storing data. Today, the opportunity afforded by containers means that applications can live on a cloud platform (either public or private), or even across several cloud platforms at once.
But while applications and microservices housed in stateless containers are easy to move from place to place (indeed, that’s a big part of the appeal of containers), the data the applications are accessing are stateful and very, very difficult to relocate while still maintaining consistency, latency, and throughput. This is one of the challenges faced by the Red Hat Storage team, and addressed by Sage Weil in his recent presentation at Red Hat Summit: maintaining data availability with acceptable latency when working with applications in multi-cloud and hybrid cloud environments.
Continue reading “The Future of Storage in Container Space: Part 3”
In Part 1 of Now + Next’s closer look at the future of container storage, we examined the beginnings of the storage solution with a look at how hardware trends will affect the way storage and containers will evolve together.
In this installment, Ceph Project Lead Sage Weil continues our conversation, moving “up” the stack to software platforms. Specifically, Sage discusses where container technology is now and where it is going.
Continue reading “The Future of Storage in Container Space: Part 2”
The rise of container technology has created a new challenge for the storage industry. With containers, applications and compute resources are now incredibly mobile, while storage still has to remain persistent and accessible. Here’s how Red Hat is working to address the storage needs of container workloads.
In modern microservice-based architectures, each container is a transient object. It might live on one server for a while and then get moved over to another if directed by an orchestrator tool. While a container keeps its bundle of application software and dependencies during its lifecycle, it usually does not keep application data within the container. Nor should it. After all, in this model a container is designed to run only what is needed and when it is needed. When done, the container is allowed (in fact encouraged) to disappear. If an application’s data were held inside that same application container, too, then pfft!
That’s a challenge.
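The ephemerality described above is easy to demonstrate. As a minimal sketch (assuming a local Docker installation; the image, volume name, and file path below are illustrative, not from the original posts), data written inside a container’s writable layer vanishes with the container, while data externalized to a named volume survives:

```shell
# Data written inside the container disappears when the container does:
docker run --rm busybox sh -c 'mkdir -p /data && echo "orders" > /data/records.txt'
# The container and its writable layer are now gone -- and the file with them.

# Externalizing state to a named volume keeps it across container lifecycles:
docker volume create appdata
docker run --rm -v appdata:/data busybox sh -c 'echo "orders" > /data/records.txt'
docker run --rm -v appdata:/data busybox cat /data/records.txt
```

The second `cat` succeeds because the volume, not the container, owns the data. This is the same separation of concerns that persistent storage layers like Ceph provide at cluster scale.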
Continue reading “The Future of Storage in Container Space: Part 1”
Sysadmins have thoroughly enjoyed the wonders of automation in recent years, with tools like Ansible enabling rapid deployment of applications and services across servers and cloud-based platforms. But as the IT world evolves toward container-based technologies, tools like Ansible have not translated well to orchestration-level actions.
This is changing rapidly, thanks to the new Automation Broker project. Part of the OpenShift ecosystem, Automation Broker bridges the gap between provisioning servers and provisioning containers.
Continue reading “Bringing Automation to Container Space”
Sitting in the frigid air-conditioned room somewhere under the surface of a tropical island, it soon became obvious that I was very likely the dumbest person in the place. And, if the men and women around me have their druthers, in a few years, I might not be the smartest sentient entity in the room, either.
It wasn’t a mad scientists’ convention, but rather Supercomputing Asia 2018 that brought me to this place on Sentosa in Singapore a couple weeks ago, where engineers, computer scientists, and business people gathered to discuss the trends and technology within the supercomputing realm.
Continue reading “Artificial Intelligence Will Be More than an Upgrade”
If you could visualize the code that comprises our current technology landscape, you might imagine in your mind’s eye a glowing field of interconnected lines with bright bits of information flowing along their paths. Here and there, you might see flaws in the network: places where human error has introduced gaps and openings among the lines.
Continue reading “Open Source Strength Within Distributed Weakness Filing”