
Making Digital Manufacturing Affordable: A Vendor Perspective 

A new breed of GPU-based workstations has the potential to make advanced computing resources available to small- to medium-sized manufacturers, especially the "missing middle" who are not yet making full use of this technology. NVIDIA's Sumit Gupta lends his perspective on how the barriers to digital manufacturing can be overcome.

If you're building an aircraft carrier, designing a wing for a new jetliner, or engineering a state-of-the-art light water nuclear reactor, chances are you're using a supercomputer and the very latest modeling, simulation, and analytics software. You also probably work at a very large company, government lab, or university. And you have some serious funding.

But if you're a small- to medium-sized manufacturer (SMM) located further down the supply chain, a big high performance computing (HPC) system is probably not part of your development environment. You may have some older but serviceable workstations, some 2D CAD software, a limited budget, and a small, overworked IT staff primarily dedicated to fighting fires. Reducing design and prototyping time and costs through the adoption of HPC is a desirable option, but not yet an affordable one in terms of either money or staffing.

Organizations like the National Center for Manufacturing Sciences (NCMS) and the Alliance for High Performance Digital Manufacturing (AHPDM) are trying to change all that (see last week's feature article, Hope for the Missing Middle). But some HPC industry vendors are stepping up to the plate as well.

NVIDIA and the Pervasive GPU

NVIDIA is one of those companies. We spoke with Sumit Gupta, product lead, computing products, who gave us his perspective on how the benefits of HPC can be made available to SMMs — in particular the "missing middle" who are not yet making full use of the technology.

Gupta points out that 15 to 20 years ago, manufacturing software ran on desktop workstations, and this setup was relatively affordable. But over time, desktop workstations did not keep up with the performance requirements of the software, which migrated to increasingly powerful HPC clusters to take advantage of the raw horsepower these systems provided at a lower cost than the typical high-end supercomputer.

Says Gupta, "As soon as software products migrate off the desktop, they start to become prohibitively expensive for small business users. For HPC to truly make inroads into the SMMs it has to be easily available — and the best way to make this happen is through an affordable desktop machine. Not every office has an HPC cluster; but every office does have a desktop system." He points out that these new affordable workstations are not only powered by multicore CPUs and GPUs, but the software has also evolved to take advantage of this parallel computing capability in the workstations.

One of the problems that has to be overcome is the fundamental wall that manufacturing software has run up against. A macro-level dynamic is at work: applications are not scaling in proportion to the number of cores added. Adding a core or two may yield a 2X speedup, but because of fundamental memory bandwidth limitations, piling more cores onto a system with the same bus and memory can actually choke it.
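A back-of-the-envelope bound illustrates the wall (the bandwidth and byte counts below are assumed round numbers for a typical workstation, not measurements). For a memory-bound solver sweep, no number of cores can beat

$$
t \;\geq\; \frac{\text{bytes moved}}{\text{memory bandwidth}},
$$

so a sweep that streams, say, 24 bytes per unknown over a 25 GB/s memory bus tops out around a billion unknowns per second whether the chip has two cores or twelve.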

Because these applications are extremely memory- and bandwidth-sensitive, NVIDIA's solution, as one might expect, is to put GPUs in these desktop systems. For example, the company recently announced that Dassault Systèmes is pairing NVIDIA Quadro and Tesla GPUs with CPUs to run its computer-aided engineering (CAE) simulations, specifically the Abaqus 6.11 finite element analysis (FEA) suite, twice as fast as with a CPU alone.
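To make the GPU-FEA connection concrete, here is a minimal sketch in CUDA of the kind of kernel such solvers lean on: a sparse matrix-vector product in compressed sparse row (CSR) form, the operation that dominates the iterative solvers inside most FEA packages. This is not Abaqus code; the kernel, names, and tiny test matrix are illustrative assumptions.

```cuda
// spmv_sketch.cu -- illustrative only; not Abaqus code. Build with: nvcc spmv_sketch.cu
// A CSR sparse matrix-vector product (y = A*x), the kind of memory-bound
// kernel at the heart of iterative FEA solvers.
#include <cstdio>

__global__ void spmv_csr(int num_rows, const int *row_ptr, const int *col_idx,
                         const double *values, const double *x, double *y)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per matrix row
    if (row < num_rows) {
        double sum = 0.0;
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
            sum += values[j] * x[col_idx[j]];
        y[row] = sum;
    }
}

int main()
{
    // Tiny 3x3 tridiagonal test matrix in CSR form:
    //   [ 4 1 0 ]
    //   [ 1 4 1 ]
    //   [ 0 1 4 ]
    const int n = 3;
    int    h_row_ptr[] = {0, 2, 5, 7};
    int    h_col_idx[] = {0, 1, 0, 1, 2, 1, 2};
    double h_values[]  = {4, 1, 1, 4, 1, 1, 4};
    double h_x[]       = {1, 1, 1};
    double h_y[n];

    int *d_row_ptr, *d_col_idx; double *d_values, *d_x, *d_y;
    cudaMalloc(&d_row_ptr, sizeof(h_row_ptr));
    cudaMalloc(&d_col_idx, sizeof(h_col_idx));
    cudaMalloc(&d_values,  sizeof(h_values));
    cudaMalloc(&d_x, sizeof(h_x));
    cudaMalloc(&d_y, sizeof(h_y));
    cudaMemcpy(d_row_ptr, h_row_ptr, sizeof(h_row_ptr), cudaMemcpyHostToDevice);
    cudaMemcpy(d_col_idx, h_col_idx, sizeof(h_col_idx), cudaMemcpyHostToDevice);
    cudaMemcpy(d_values,  h_values,  sizeof(h_values),  cudaMemcpyHostToDevice);
    cudaMemcpy(d_x, h_x, sizeof(h_x), cudaMemcpyHostToDevice);

    spmv_csr<<<1, 32>>>(n, d_row_ptr, d_col_idx, d_values, d_x, d_y);
    cudaMemcpy(h_y, d_y, sizeof(h_y), cudaMemcpyDeviceToHost);

    for (int i = 0; i < n; ++i) printf("y[%d] = %g\n", i, h_y[i]);   // expect 5 6 5
    return 0;
}
```

Production solvers use tuned libraries and more elaborate sparse formats, but the access pattern above, lots of memory traffic for very little arithmetic, is exactly why GPU memory bandwidth matters so much.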

Now a 2X speedup may not seem like a huge leap forward, but the fact is that for the past five years, manufacturing has been experiencing only incremental speedups despite trying all sorts of technological fixes. GPUs, however modestly, are breaking the logjam. And GPUs have a history of becoming faster every 18 months to two years through the addition of hundreds of small cores — a technique that works very well with manufacturing application software.

Memory and I/O are still limiting factors, but the memory bandwidth of a GPU is about 10X that of a CPU, and this advantage is expected to hold as solutions such as fast graphics memory are incorporated. For example, the NVIDIA Tesla M2070Q features 6 GB of high-bandwidth GDDR5 memory per GPU. This kind of capability is particularly important for modeling, simulation and analysis, the backbone of digital manufacturing.
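A quick way to see why that bandwidth matters is a STREAM-style "triad" measurement, sketched below in CUDA. The kernel does almost no arithmetic, so its runtime is set almost entirely by how fast memory can be streamed; the array size, block size, and timing scaffolding are arbitrary choices for illustration, not NVIDIA benchmark code.

```cuda
// bandwidth_sketch.cu -- illustrative only; not NVIDIA benchmark code.
// A STREAM-style "triad" kernel: one multiply-add per element, so runtime
// is governed almost entirely by memory bandwidth rather than arithmetic.
#include <cstdio>

__global__ void triad(int n, const float *b, const float *c, float *a, float s)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] = b[i] + s * c[i];   // 12 bytes of traffic per 2 flops
}

int main()
{
    const int n = 1 << 24;               // ~16M elements; size is arbitrary
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMalloc(&a, bytes); cudaMalloc(&b, bytes); cudaMalloc(&c, bytes);
    cudaMemset(b, 0, bytes); cudaMemset(c, 0, bytes);   // contents don't matter for timing

    triad<<<(n + 255) / 256, 256>>>(n, b, c, a, 2.0f);  // warm-up launch

    cudaEvent_t start, stop;
    cudaEventCreate(&start); cudaEventCreate(&stop);
    cudaEventRecord(start);
    triad<<<(n + 255) / 256, 256>>>(n, b, c, a, 2.0f);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    // Two arrays read and one written, once each: 3 * bytes of total traffic.
    printf("effective bandwidth: %.1f GB/s\n", 3.0 * bytes / (ms * 1.0e-3) / 1.0e9);
    return 0;
}
```

Running the same exercise on the host CPU instead of the GPU is what surfaces the roughly 10X bandwidth gap described above.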

So what does all this have to do with SMMs who would like to leverage HPC for their business, but can't afford the price tag and overhead associated with conventional clusters or supercomputers? This is where the new breed of personal supercomputers comes in.

NVIDIA, along with a number of other companies, began offering these powerful desktop systems about three years ago. NVIDIA claims that the Tesla Personal Supercomputer delivers the performance of a cluster in a desktop system — nearly 4 teraflops (up to 250 times faster than your average PC or workstation) — for under $10,000.

The company hopes that by offering a relatively low-cost system that can easily handle advanced modeling and simulation software, it will make inroads into the some 285,000 SMMs that constitute the "missing middle." However, as we noted in a recent blog, there are a number of other speed bumps to clear before digital manufacturing, modeling and simulation, and the personal supercomputers that make them possible enjoy widespread adoption in this nascent mid-market. (HPC in the cloud is another rapidly developing option for manufacturers that don't want to own and support their own HPC system.)

But the odds are that as the price of personal supercomputers continues to drop while their processing power continues to rise, an increasing number of SMMs will be ready to take the plunge.

Dirty Cotton and Microwaved Pizza — Affordable Supercomputer Solutions

When them cotton balls get rotten, you can lose a lot of money. Fortunately, the cotton industry has gotten an assist from researchers at the U.S. Department of Agriculture, who used NVIDIA GPUs to create a machine vision system that does a far better job of detecting contaminants on cotton lint traveling down a processing line for cleaning.

Current CPU-based solutions can't react fast enough to make a precise reading of the level of trash contamination in the cotton. The result is overwashing and significant lint loss. The GPU-based system uses pattern recognition software to identify the dirt level in each batch of cotton and precisely control the washing process.
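The USDA researchers' actual algorithms aren't spelled out here, so what follows is only a toy sketch of the core idea in CUDA: classify each pixel of a grayscale lint image as trash or lint with a simple brightness threshold, tally the trash fraction on the GPU, and hand that number to the wash controller. The frame size, threshold, and synthetic image are invented for illustration.

```cuda
// trash_fraction_sketch.cu -- toy illustration only; not the USDA system.
// Classifies each pixel of a grayscale lint image as "trash" (dark) or
// "lint" (bright) by a simple threshold and tallies the trash fraction.
#include <cstdio>
#include <vector>

__global__ void count_trash(const unsigned char *img, int num_pixels,
                            unsigned char threshold, unsigned int *trash_count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < num_pixels && img[i] < threshold)    // darker than threshold => trash
        atomicAdd(trash_count, 1u);
}

int main()
{
    const int width = 640, height = 480;                 // assumed camera frame size
    const int n = width * height;
    std::vector<unsigned char> frame(n, 200);             // stand-in for a real frame: bright lint...
    for (int i = 0; i < n; i += 97) frame[i] = 30;        // ...with some scattered dark specks

    unsigned char *d_img; unsigned int *d_count, h_count = 0;
    cudaMalloc(&d_img, n);
    cudaMalloc(&d_count, sizeof(unsigned int));
    cudaMemcpy(d_img, frame.data(), n, cudaMemcpyHostToDevice);
    cudaMemcpy(d_count, &h_count, sizeof(h_count), cudaMemcpyHostToDevice);

    count_trash<<<(n + 255) / 256, 256>>>(d_img, n, 80, d_count);  // threshold of 80 is arbitrary
    cudaMemcpy(&h_count, d_count, sizeof(h_count), cudaMemcpyDeviceToHost);

    printf("trash fraction: %.4f\n", (double)h_count / n);
    // A controller could map this fraction to a cleaning intensity setting.
    return 0;
}
```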

The prototype system indicates that lint loss could be reduced by more than 30 percent, speeding up processing and saving a significant amount of cotton fiber that would otherwise be washed away. This simple innovation could save the US cotton industry up to $100 million per year.

Zapping a Pizza

General Mills is not exactly a member of the "missing middle," but the company did recently use a CUDA-based system in a way that could be emulated by SMMs in the food industry.

The question: what's the optimal way to cook a frozen pizza in the microwave? Instead of experimenting with thousands of combinations, the company created virtual pizza models to test the effects of microwave radiation on various formulations of mozzarella cheese, tomato paste and crust. This allowed the researchers to cook up only the best candidates, a great savings in time and money and, presumably, a lot easier on their digestion.
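General Mills hasn't published its model, so the sketch below is only a toy illustration of the general approach: an explicit finite-difference heat-diffusion update on a 2D grid, with a per-cell source term standing in for microwave energy absorbed differently by cheese, sauce, and crust. The grid size, material constants, and time step are all invented numbers, not anything from the company's simulations.

```cuda
// pizza_heating_sketch.cu -- toy illustration only; not General Mills' model.
// Explicit finite-difference heat diffusion on a 2D grid with a per-cell
// source term standing in for absorbed microwave energy.
#include <cstdio>
#include <utility>
#include <vector>

__global__ void heat_step(const float *T_old, float *T_new, const float *absorb,
                          int nx, int ny, float alpha, float dt, float power)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x <= 0 || y <= 0 || x >= nx - 1 || y >= ny - 1) return;   // boundary cells stay fixed

    int i = y * nx + x;
    // 5-point Laplacian (unit grid spacing) plus a microwave heating source term.
    float lap = T_old[i - 1] + T_old[i + 1] + T_old[i - nx] + T_old[i + nx] - 4.0f * T_old[i];
    T_new[i] = T_old[i] + dt * (alpha * lap + power * absorb[i]);
}

int main()
{
    const int NX = 256, NY = 256, n = NX * NY;        // grid size is arbitrary
    std::vector<float> T(n, -18.0f);                   // frozen pizza, degrees C
    std::vector<float> absorb(n, 0.5f);                // crude absorption map: crust everywhere...
    for (int y = NY / 4; y < 3 * NY / 4; ++y)          // ...with a cheese-and-sauce region in the
        for (int x = NX / 4; x < 3 * NX / 4; ++x)      // middle that absorbs microwave energy faster
            absorb[y * NX + x] = 1.0f;

    float *d_T0, *d_T1, *d_abs;
    const size_t bytes = n * sizeof(float);
    cudaMalloc(&d_T0, bytes); cudaMalloc(&d_T1, bytes); cudaMalloc(&d_abs, bytes);
    cudaMemcpy(d_T0, T.data(), bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_T1, T.data(), bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_abs, absorb.data(), bytes, cudaMemcpyHostToDevice);

    dim3 block(16, 16), grid((NX + 15) / 16, (NY + 15) / 16);
    for (int step = 0; step < 5000; ++step) {          // 5000 steps of made-up "cooking time"
        heat_step<<<grid, block>>>(d_T0, d_T1, d_abs, NX, NY, 0.1f, 0.2f, 0.05f);
        std::swap(d_T0, d_T1);                         // ping-pong the temperature buffers
    }

    cudaMemcpy(T.data(), d_T0, bytes, cudaMemcpyDeviceToHost);
    printf("center temperature after simulated heating: %.1f C\n", T[(NY / 2) * NX + NX / 2]);
    return 0;
}
```

A real model would be 3D, would couple the electromagnetic field to the thermal problem, and would track phase change in the frozen ingredients; the point of the toy version is only that each time step is an embarrassingly parallel stencil update, exactly the pattern GPUs handle well.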
