Artificial Intelligence Will Be More than an Upgrade

Apr 13, 2018 | AI

Sitting in a frigid, air-conditioned room somewhere beneath the surface of a tropical island, I soon realized that I was very likely the dumbest person in the place. And if the men and women around me have their druthers, in a few years I might not be the smartest sentient entity in the room, either.

It wasn’t a mad scientists’ convention but rather Supercomputing Asia 2018 that brought me to Sentosa in Singapore a couple of weeks ago, where engineers, computer scientists, and business people gathered to discuss the trends and technology within the supercomputing realm.

Supercomputing, or high-performance computing (HPC), is certainly an area where Red Hat has a significant degree of participation: a majority of the student competitors at this event, as at past Supercomputing conferences in 2017, used CentOS, preferring the Red Hat ecosystem of tools to other Linux distributions. And the theme of this, the first Asian edition of the Supercomputing conference franchise, “Convergence of AI and HPC Bringing Transformation,” meshes very well with topics Red Hat is already thinking about: machine learning and artificial intelligence (AI).

AI was a theme that wove through many of the talks during the four-day conference, as scientists outlined highly technical plans to squeeze as much raw computing power as possible from hardware/software combinations. Speed and power were very much on the minds of attendees in discussions that, frankly, flew over my head like a jet airplane. But some of the more accessible conversations gave insight into the challenges that lie ahead for the advancement of AI.

The Center No Longer Holds

First off, it appears that Arthur C. Clarke was right (as he was about so many things) when he placed a supercomputer on board the fictional interplanetary spaceship Discovery in his 1968 novel 2001: A Space Odyssey. For those of us who have seen IT architecture go from centralized (mainframe) to distributed (client/server) and back to centralized (cloud), the thought of launching a supercomputer into space seems, well, like overkill. But according to Dr. Goh Eng Lim, VP and CTO for HPC and AI at Hewlett Packard Enterprise, getting HPC machines closer to the action is an imperative because of one important physical constant: the speed of light.

In his keynote to the conference, Goh reminded the audience that for any sort of interplanetary network, the distances involved will automatically increase latency to levels that render remote supercomputing useless. Even the 1.3-light-second delay between Earth and the Moon could be too much when a life-critical decision has to be made. And you can throw out any notion of effective communications between Earth and Mars, which are on average about 12.5 light-minutes apart, and often much farther, depending on where the two planets are in their orbits.
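The arithmetic is easy to check. Here is a minimal sketch in Python; the distances are rounded averages (the Earth-Mars figure in particular swings widely with orbital positions):

```python
# Back-of-the-envelope check on the light-delay numbers above.
# Distances are rounded averages; the real Earth-Mars distance varies
# between roughly 3 and 22 light-minutes over time.

C = 299_792_458             # speed of light, meters per second
EARTH_MOON_M = 384_400e3    # average Earth-Moon distance, meters
EARTH_MARS_M = 225e9        # rough average Earth-Mars distance, meters

def one_way_delay_s(distance_m: float) -> float:
    """Minimum one-way signal delay imposed by the speed of light, in seconds."""
    return distance_m / C

print(f"Earth-Moon: {one_way_delay_s(EARTH_MOON_M):.2f} s")         # ~1.28 s
print(f"Earth-Mars: {one_way_delay_s(EARTH_MARS_M) / 60:.1f} min")  # ~12.5 min
```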

Better, then, to move the computing power closer to where it is needed. This is why, Goh shared, HPE had an experimental supercomputer platform installed in the U.S. Destiny Lab on the International Space Station last August, to learn how vibrations, shifting gravity, and radiation affect supercomputing hardware.

This kind of approach is not limited to computers in space. Vehicles on the surface of our planet will need the equivalent of a blade server running onboard very soon, because when a self-driving car is zipping down the highway at 80 mph, the last thing you want is a slow network round trip delaying the decision on whether the car should slow down when it “sees” brake lights up ahead.
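To make that concrete, here is a rough sketch (the round-trip times are illustrative assumptions, not measurements): at highway speed, a car covers several meters during even a modest round trip to a remote data center, which is why the inference needs to happen onboard.

```python
# How far does a car at 80 mph travel while waiting on a network reply?
# The round-trip times below are assumed values for illustration only.

MPH_TO_MPS = 0.44704              # miles per hour -> meters per second
speed_mps = 80 * MPH_TO_MPS       # ~35.8 m/s

for rtt_ms in (20, 100, 500):     # assumed round trips to a remote data center
    meters = speed_mps * (rtt_ms / 1000)
    print(f"{rtt_ms:4d} ms round trip -> car has moved {meters:5.1f} m")
```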

More detailed explanations of this new type of architecture for the Internet of Things were recently delivered by my colleagues David Lacima and Ishu Verma. And while AI is not IoT, the need for much stronger edge computing is just as pressing in both sectors.

Elementary, My Dear Watson

Another area where scientists are trying to improve AI is how machines are “taught.”

Right now, most computer scientists follow a top-down process for designing supercomputing applications. Models of laws and rules are built out as completely as possible; once the model is ready, a set of conditions is fed into it and a result (hopefully the expected one) comes out. Climate models fall into this category of reasoning, which is deductive in nature.

But lately, some scientists have been trying a bottom-up, inductive model, where facts and records are fed in and the computer is asked to predict outcomes based on what has happened in the past. A recent experiment Goh outlined took all of the known weather data for the San Francisco, California area and then asked the application: “given all of this data, will it rain today?” No climate models or meteorological rules. The inductive application had an 85% success rate after running for just five minutes. The traditional deductive application ran a climate model for three hours and was 95% accurate.

Deductive applications are still more accurate, but they take a lot more power and time to achieve that better result. It is not hard to imagine that with more and better data, the inductive model may soon catch up and perform with similar accuracy in a much shorter period of time.
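As a sketch of what the bottom-up approach looks like in code (a toy example with synthetic data and a hypothetical feature set, not the San Francisco experiment Goh described), you simply fit a model to past observations and ask it about today:

```python
# Toy illustration of the inductive, bottom-up approach: no meteorological
# rules, just a model fitted to (synthetic) historical observations.
# The features and data here are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fake historical records: [relative humidity %, surface pressure hPa] per day.
humidity = rng.uniform(20, 100, 1_000)
pressure = rng.uniform(990, 1030, 1_000)
X = np.column_stack([humidity, pressure])

# Synthetic "did it rain?" labels: more likely with high humidity, low pressure.
y = (humidity - 2 * (pressure - 1000) + rng.normal(0, 10, 1_000)) > 70

model = LogisticRegression().fit(X, y)     # learn from the past
today = np.array([[85.0, 1002.0]])         # today's observations
print("Rain expected?", bool(model.predict(today)[0]))
```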

What’s Going On With Your Nose?

Human-like AI is still going to be hard to achieve, because there are some things that humans do quite well that are hard to teach a machine.

Many of us are familiar with the 1997 victory of IBM’s Deep Blue supercomputer over chess grandmaster Garry Kasparov. Deep Blue relied on programmed chess knowledge and brute-force search of a game with roughly 10^47 possible positions to ultimately defeat its human opponent. In 2016, Google’s AlphaGo took a more inductive route, learning from recorded games and from playing against itself, and defeated its human opponent Lee Sedol at Go, a game with roughly 10^171 possible positions. (To give you a sense of scale for a 1 with 171 zeros after it, scientists estimate there are “just” about 10^82 atoms in the observable universe.)

But when pitting AI against humans in other games that are far less complex, computers don’t always do so well. In a game like poker, for instance, with “only” about 10^160 possible game states, AI and HPC would seem to have more than enough raw power. But unlike chess and Go, which are built on “pure” tactics and strategies, humans have one huge edge in a game like poker: we lie. Quite well.

In a game where hands are hidden and the rules allow deception, computers don’t do well at all. For now. But the capability to deceive (or, more accurately, to detect deception) is another significant obstacle to developing AI.

Computer scientists are working all of these angles, and in the years to come, we should see a lot of innovation all across the IT landscape to bridge these gaps.