Inside Advanced Scale Challenges|Wednesday, September 20, 2017

The Emerging ‘Internet of Machines,’ FPGAs, and the Discovery of Knowledge We Don’t Know Exists 

The advanced scale computing landscape of the future will have a broader diversity of processors, a focus on matching processors to application domains, and the use of machine learning techniques that teach systems to self-optimize as they take on problems of the highest complexity. A key enabler: FPGA-driven accelerated computing in the cloud. That’s the vision of Steve Hebert, CEO of Nimbix, a provider of HPC cloud computing services. Hebert outlined the path to this future at the recent Nimbix Developer Summit held near Dallas. Here are highlights from Hebert’s keynote:

I’m going to talk about a concept called the Internet of Machines, the interconnection and networking of machines. I want to paint a picture of the world we see coming that’s a direct product of the Internet of Things and all the data that’s being produced that we have to process.

We’ve lived by the idea of Moore’s Law, it’s woven into our DNA. What’s really interesting is not whether Moore’s Law is alive or dead. What’s interesting is the concept of “predictive comfort,” the idea that the entire industry, every equipment manufacturer that builds something that uses chips, has this predictive comfort of knowing that in two years we’re going to have double the transistors and new performance and new capabilities.

It’s hard to step back from this because we’ve thrived for 50 years on Moore’s Law. It raises the question: what happens if this predictive comfort goes away? As we’ve moved into the era of multicore and heterogeneous architectures, we see erosion of predictive comfort. The market is tussling over what comes next. And that very question is forcing innovation in new areas. Specifically, I believe we’re seeing a deeper focus on the applications, the functions these applications are demanding, versus general purpose architectures. And we’re going to have the silicon real estate to start to explore this.

Nimbix CEO Steve Hebert

Our conundrum at this moment, as developers and technology workers in this ecosystem, is this: at the very moment that our chips stopped getting faster we have this tremendous explosion in data that we have to process to help solve problems, to introduce new services, to scale and tackle all the complexities of our world today.

Part of this challenge is that we have a number of different application domains that are extremely demanding. It’s clear in these domains there are particular compute requirements that might be favored in one application domain versus another. Some applications may want particular compute cores, whether it’s x86, POWER, or CUDA. Some applications demand significant amounts of memory; some want different flavors of floating point, and all of these things impact what solutions are brought to market.

What’s interesting is that the general purpose architectures of the past many years are not quite getting us there when it comes to the specific demands of these application domains and what they’re requiring of infrastructure. When I talk about the Internet of Things, data transformation and data analysis is there as well. We have billions of devices attached to the internet that are producing troves and troves of data. So we then have to think about the useful things we want to do with all this information: how do we transform that data into answers and new knowledge? We’re in the midst of transforming and accelerating how we tackle the demands that are going to be thrust upon us in this model.

What are the demands of this paradigm? One of the key demands is real-time answers, or very near real time. For example, consider what Google has done with internet search and voice commands. We’re now simulating whole systems, not just samples or a subset. Or gaming, where NVIDIA is doing multiple physical simulations in real time of fluid dynamics, with smoke, water, and explosions all being computed in real time, on the fly. This has exponentially increased the demand on computation.

So if we argue we’re approaching, or have arrived at, the end of Moore’s Law, this is driving the adoption, from a software perspective, of new architectures. We’ve had alternative architectures for a long time. But we’re finally at a point where the economics are forcing new architectural evolution upon us.

One of the big takeaways of this is that the tools are arriving for software developers to take advantage of this right now. One of the more prolific co-processing architectures is attached GPUs. And because they were designed to do image processing, a specific application, really well, we’ve been able to apply that to a whole host of other problems. The other thing about GPUs is that for specific applications you can see significant speed-ups relative to CPUs; it’s an alternative architecture to a general purpose CPU-based approach to solving a given set of problems.

(At the core) of my thesis on the Internet of Machines are FPGAs. The idea is you have a blank slate of silicon that you can put whatever you want inside to effect your computation or specific function. These have been very important components in the communications industry, and with FPGAs we’re seeing the entry point of a revolution in the computing industry. This is evident in Intel’s purchase of Altera for $16.7 billion. That makes a statement.

As you reach the end of traditional process technology, you re-orient your thinking about what you can do with that silicon real estate. The innovation trajectory becomes more about what to do in the silicon, and that drives costs down, which means we can now start to see more widespread use of this kind of technology.

Let’s look at machine learning. It’s no coincidence it’s emerging around this time of transformation because we need automation to help us chug through all this data. Let’s train machines to help us. We can apply “unsupervised learning” to assemble unstructured data. This is very important because there’s a lot of information we don’t even know, or knowledge we don’t know exists. It’s there, but because we don’t have the capacity to process the petabytes of data to create structure or categorization of unstructured datasets, we need to teach our machines to do that.
This is evolving very rapidly. It’s due to the availability of cloud computing, the ability to scale out to help any developer leverage and create a data model and teach it to learn.
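The "unsupervised learning" idea above, grouping unstructured data into categories without any pre-existing labels, can be sketched with a toy k-means clustering loop. The data points, cluster count, and iteration count below are invented for illustration only:

```python
# Minimal k-means sketch: structure emerges from unlabeled points.
# All data and parameters are illustrative, not from the talk.

def kmeans(points, k, iters=10):
    # Initialize centroids to the first k points (deterministic for this sketch).
    centroids = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Update step: move each centroid to the mean of its assigned points.
        for c in range(k):
            members = [p for i, p in enumerate(points) if assign[i] == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return assign, centroids

points = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
labels, centers = kmeans(points, k=2)
print(labels)  # → [0, 0, 1, 1]: two natural groups, no labels supplied
```

In a real pipeline the points would be feature vectors extracted from petabyte-scale unstructured datasets, and a library or accelerator implementation would replace this loop; the sketch only shows the mechanism by which structure emerges without labels.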

Let’s apply machine learning to reconfigurable silicon (FPGAs) and to data processing as a process. In the Internet of Things we have access to millions and millions of API calls that can be fed in to train machines on how best to process them. And you can actually define the rules that you wish to optimize around. So we might want a machine to, say, look at the most energy-efficient way to process a payload. Or let’s optimize for run time. Or let’s optimize for lowest cost. Whatever the set of rules for the environment, we can teach the machines to tune for those specific things.
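The rule-driven tuning described here, optimizing for energy, run time, or cost, can be sketched as a simple policy selector. Every configuration name and metric value below is invented for illustration:

```python
# Hedged sketch of rule-driven workload tuning: given candidate processing
# configurations, pick the one that best satisfies whichever rule the
# operator optimizes for. Names and numbers are hypothetical.

# Estimated metrics per configuration: runtime (s), energy (J), cost ($).
CONFIGS = {
    "cpu-only":       {"runtime": 120.0, "energy":  900.0, "cost": 0.10},
    "gpu-attached":   {"runtime":  15.0, "energy": 1400.0, "cost": 0.45},
    "fpga-bitstream": {"runtime":  25.0, "energy":  300.0, "cost": 0.30},
}

def choose_config(objective):
    # The "rule" is simply which metric to minimize for this payload.
    return min(CONFIGS, key=lambda name: CONFIGS[name][objective])

print(choose_config("runtime"))  # → gpu-attached
print(choose_config("energy"))   # → fpga-bitstream
print(choose_config("cost"))     # → cpu-only
```

In the learning system Hebert envisions, these metrics would be estimated from observed API-call traces rather than a hard-coded table, but the rule itself stays the same: minimize whichever metric the environment dictates.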

The idea with reconfigurable silicon as an integral part of a cloud paradigm is to take a set of workloads, whether those are grouped as labels, rendering or simulation, and allow the machines to define the optimal way to process those workloads. Further, let the machines teach the machines what needs to be processed to begin with. There may be a set of things we don’t know we should process, but the machines can then develop an idea, based on what we teach them, of what should be processed, what data is valuable, and what information we might gain new insights from.

Where we arrive is at the “engine room” for the Internet of Things. I think of the IoT as the attached devices themselves, and the Internet of Machines sits around it and is the engine room that processes all this information to drive meaningful answers. So think of it as intelligent systems that, with reconfigurable silicon, have the tools to self-optimize, to program their own bitstreams, to be able to not just automate but accelerate the distribution, collection and transformation of the massive amounts of data that we have.

I like the term “accelerate” because we live in a world of exponentials. In the technology industry we see the acceleration of the curves we are on, and these systems help give us yet another exponential component in accelerating how we can process information. Which means we can cure cancer faster, solve world hunger faster, colonize Mars faster, solve transportation logistics faster.

As our population continues to swell, there are problems and challenges we want to meet with answers faster than we could if we were on a traditional linear curve. This is what I’m extremely passionate about: how we help accelerate our time-to-results of some of our most complex problems.

We believe this is the initial evolution of reconfigurable cloud computing for hyperscale data processing, the ability to have a set of machines that can self-optimize at the appropriate time. We’re not there yet. We’re at the first evolution. Let’s introduce the concept, let’s introduce the technology, put it into the hands of the community, into the hands of the smart people who can help write the code, create the algorithms that then allow us to evolve these sets of systems that help us process the data.
