The Superfluidity Project was a 33-month European (H2020) research project (July 2015–April 2018) aimed at achieving superfluidity on the Internet: the capability to instantiate services on-the-fly, run them anywhere in the network (core, aggregation, edge), and shift them transparently to different locations. The project focused especially on 5G networks and sought to push the virtualization and orchestration of network elements a step further, covering radio and network processing components such as BBUs, EPCs, P-GWs, S-GWs, PCRFs, MMEs, load balancers, SDN controllers, and others.
For more information, you can visit the official project website.
The wonders of automation have been thoroughly enjoyed by sysadmins in recent years, with tools like Ansible enabling rapid deployment of applications and services across servers and cloud-based platforms. But as the IT world shifts toward container-based technologies, tools like Ansible have not translated well to orchestration-level actions.
At the first signs of Spring, all Red Hatters turn at least one eye toward Red Hat Summit. Over the years, we’ve had many conversations with attendees about what kind of information and perspectives they’d like to hear at Summit. We learned that attendees appreciated the actionable technical information they received, but that they were interested in getting some insight into Red Hat’s point of view on emerging technology trends and their thoughts on the future. That was the motivation behind a new set of sessions from the Office of the CTO that we’re very excited to announce.
Sitting in the frigid air-conditioned room somewhere under the surface of a tropical island, it soon became obvious that I was very likely the dumbest person in the place. And, if the men and women around me had their druthers, in a few years I might not be the smartest sentient entity in the room, either.
It wasn’t a mad scientists’ convention, but rather Supercomputing Asia 2018 that brought me to this place on Sentosa in Singapore a couple of weeks ago, where engineers, computer scientists, and business people gathered to discuss the trends and technology within the supercomputing realm.
Blockchain is everybody’s latest buzzword, right up there with AI and IoT, but what does it mean, and how is it relevant to the enterprise?
The answer to those questions is likely “a lot,” but before we get to that, let’s define what a blockchain is, and isn’t.
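At its core, a blockchain is an append-only list in which each entry carries a cryptographic hash of the entry before it, so any tampering with history breaks the chain. A minimal sketch in Python (the block fields and sample "transactions" here are illustrative, not any particular blockchain's format):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's JSON form; sorted keys make the digest deterministic.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    # Each new block records the hash of its predecessor.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain):
    # Every block must still point at the current hash of the block before it.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, "genesis")
append_block(chain, {"from": "alice", "to": "bob", "amount": 5})
append_block(chain, {"from": "bob", "to": "carol", "amount": 2})
print(is_valid(chain))               # True
chain[1]["data"]["amount"] = 500     # tamper with history
print(is_valid(chain))               # False
```

Real blockchains add consensus, signatures, and distribution on top of this structure, but the tamper-evident hash linkage is the part that distinguishes a blockchain from an ordinary database.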
If you could visualize the code that comprises our current technology landscape, you might imagine in your mind’s eye a glowing field of interconnected lines with bright bits of information flowing along the lines’ paths. Here and there, you might see flaws in the network, places where human error has introduced gaps and openings among the lines.
In the previous blog post, my colleague David Bericat discussed why Internet of Things (IoT) architecture should be built with open source. One of the core components of the end-to-end IoT architecture listed in that article was an intelligent IoT gateway that can process data near its source in near real time and filter/prioritize the actionable data. In this article, we’ll explore the reasons behind the need for an intelligent IoT gateway.
Designing, implementing, securely operating, managing, and maintaining IoT projects is complex. In fact, there are entire organizations whose sole mission is solving a specific problem within an IoT architecture. The problems found within such architectures range from connectivity to figuring out where applications should live.
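The filtering and prioritizing role of an intelligent gateway can be sketched in a few lines of Python. This is a minimal illustration under assumed names: the sensor fields, the temperature threshold, and the priority labels are all hypothetical, not part of any specific gateway product:

```python
# Hypothetical threshold above which a reading is considered actionable.
ACTIONABLE_THRESHOLD_C = 80.0

def triage(readings):
    """Split raw sensor readings into actionable alerts and routine telemetry."""
    alerts, telemetry = [], []
    for r in readings:
        if r["temp_c"] >= ACTIONABLE_THRESHOLD_C:
            alerts.append({**r, "priority": "high"})  # forward upstream immediately
        else:
            telemetry.append(r)                       # batch for periodic upload
    return alerts, telemetry

readings = [
    {"sensor": "pump-1", "temp_c": 42.5},
    {"sensor": "pump-2", "temp_c": 91.0},
    {"sensor": "pump-3", "temp_c": 39.8},
]
alerts, telemetry = triage(readings)
print(len(alerts), len(telemetry))  # 1 2
```

The point of running this triage on the gateway rather than in the cloud is that only the one high-priority reading needs to cross the (often constrained) uplink right away, while routine telemetry can be aggregated locally.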
Computing styles ebb and flow. The centralized mainframe in the glass room largely ebbed in favor of the PC revolution that itself gave way, at least in part, to the web and the cloud. Today, we have a complex mix of massive datacenters, Internet-of-Things (IoT) devices, and sophisticated computers we can hold in the palm of our hand.