Inside Extreme Scale Tech

Breaking Down Barriers to HPC 

One week ago, this year’s International Supercomputing Conference (ISC’13) was wrapping up one of the largest gatherings the high performance computing (HPC) community will see all year. There, in Leipzig, Germany, some 2,500 attendees gathered to hear the latest news from HPC vendors and to discuss issues facing the community at large.

One such session, hosted by Horst Simon from Lawrence Berkeley National Lab, addressed the all-too-common tale of the “missing middle,” which refers to the small-to-medium sized enterprises (SMEs) that are stuck between using basic workstations and the sort of HPC you see coming out of national labs and universities. 

For the top five percent of U.S. manufacturers—names like Boeing and GM—the digital divide has already been bridged, and the advanced modeling and simulation capability offered by high-performance infrastructure is now integral to their business operations. They use HPC to simulate the crash tests and wind tunnel environments that can cost hundreds of thousands of dollars in a physical facility. And when a change needs to be made to the model or to the simulation conditions, implementing and testing it takes hours rather than weeks.

Although R&D efforts such as these aren’t critical to every small and medium manufacturer (SMM), the competitive advantages that HPC enables should be enough to lead more manufacturers to the HPC watering hole.

To give the discussion an international perspective, Simon brought together in his “chat” representatives from the United States, Europe, and Asia, each with a unique professional background.

The impetus for the gathering was the ongoing exascale debate that looms over high performance computing. Specifically, Simon asked the group whether the relentless drive toward higher and higher FLOPS will impede the flow of HPC infrastructure to industry.

The overall response from the three panelists suggested that the exascale push could be a boon to SMEs if the technologies developed for it trickle down to smaller systems, and none indicated that the exascale agenda endangers the missing middle. Cynthia McIntyre, senior vice president of the Council on Competitiveness, said that in her experience the exascale horizon is expected to bring with it petascale options that are far more affordable and manageable in size, to the point where more and more SMEs will suddenly find petascale within their price range. To put that in perspective, it would be like getting access to today’s best-in-class systems without the hefty price tag and space requirements.

GPUs and other accelerators are one example of a technology that the top ten supercomputers can help pass down to smaller systems. Karin Lukaszewicz, a software developer at MAGMA, saw accelerator adoption as simply delayed, and expects that other systems in the TOP500 (including systems in industry) will soon be taking advantage of these technologies. Jysoo Lee, president of the Korean Institute of Supercomputing and Networking, followed up, noting that accelerators are best suited to certain types of problems, so it is only natural for some customers to overlook them. Ultimately, he pointed to software as the true barrier, since code must be rewritten to take advantage of accelerators before SMEs have a real incentive to invest.


Why the Wait?

When Simon and his panel tried to pinpoint the main barrier to broader HPC adoption, no single obstacle stood out. First to speak was Lee, who pointed to gaps in both infrastructure and human resources. Perhaps surprisingly, he cited human resources as the harder problem to solve, and McIntyre seconded him, noting that most SMEs have neither the internal expertise to run HPC systems nor a way to find that expertise.

In the end, this boils down to a problem manufacturers know all too well: a lack of talent coming down the education pipeline. McIntyre cited the University of Chicago as one of several universities looking at offering degrees tailored specifically to high performance computing, helping end users more easily scale up to HPC.

But Lukaszewicz wants to see HPC adoption stop requiring dedicated rooms full of specialized cooling and HPC experts trained to manage everything from hardware concerns down to proficiency with a non-standard operating system. When it comes down to it, she said, HPC systems “just have to be easier to maintain and use” if they are to make their way into the hands of SMEs.

According to McIntyre, this is already in the works. She reported that universities are building apps on top of the software packages used in high performance computing so that manufacturers can bypass the need for such granular specialization. But even though these apps make entry easier, McIntyre said the need for experts has by no means gone away.

Many SMMs have committed to the cloud to address these concerns, since it is scalable and much easier on the wallet from both an infrastructure and a human resources perspective. But when Simon asked Lukaszewicz whether the cloud was a viable way around these barriers, her answer was an immediate “no.” She cited security risks, which have proved a limiting factor for any SMM with significant R&D investments and intellectual property to protect.

So where does that leave us? In his final question, Simon asked his colleagues what they had picked up during the conference that might prove useful to SMEs in light of the issues they had discussed. The single chord that each panelist struck was one of policy. McIntyre noted that in many countries, governments have embraced HPC adoption as an accelerator of new technologies, which has played a major role in industry implementation.

But as far as industry is concerned, Lukaszewicz believes the best way to speed adoption along will be to market manageably sized clusters and systems that look more like workstations, turning something as daunting as supercomputing into a tool that the missing middle lives by.
