Inside Extreme Scale Tech|Monday, December 29, 2014

GM Invests in HPC Center for Crash Test Simulations 

As the cloud becomes an increasingly attractive option for manufacturers with large IT needs, scalable options such as outsourced datacenters have become a must-have for many companies.

But while the current craze may be outsourcing IT and taking advantage of the cloud, General Motors (GM) took a step in the opposite direction when its $130 million datacenter went online Monday in Warren, Michigan.

The 5,040 sq-ft facility packs 34,000 rack units, most of which house eight-core Intel Sandy Bridge processors. Within the datacenter, a dedicated set of servers for HPC applications used in computational fluid dynamics (CFD) and crash test simulations accounts for about 30,000 cores on its own.

The origin of the Warren datacenter stretches back to 2011, when GM was still outsourcing its IT infrastructure to Hewlett-Packard (HP). After an HP mainframe went on the fritz for three days, nearly causing the automaker to shut down some of its plants, the company decided that an organization of its size and scope needed IT infrastructure to match.

GM has been keen to point out how integral the new infrastructure will be, referring to the datacenter as the “capstone of GM’s efforts to transform its IT operations.” The emphasis on this particular center comes because the new Warren datacenter, together with a recently announced $100 million datacenter coming to Milford Township, will replace the 23 existing facilities spread across the globe.


GM chairman and CEO Dan Akerson said that this will help deliver new car designs and technologies more quickly. “IT is back home where it should be, and it further drives unnecessary complexity from our businesses while improving our operational efficiency and better supporting our business strategy.”

By 2015, GM’s first datacenter will sit a mere 45-minute car ride from its second and last. Together, the two will provide the company with its own private cloud, along with a subset of servers dedicated to HPC applications such as CFD and crash test simulations.

Although most of the cars GM crash tests retail for less than $30,000, producing a single prototype to be crashed and abused in all sorts of creative ways costs over ten times that amount. Every crash test that is simulated, rather than run in a brick-and-mortar test facility, therefore saves the automaker hundreds of thousands of dollars.
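The savings arithmetic can be sketched in a few lines. The retail price and the “over ten times” multiplier come from the figures above; the per-simulation cost is a purely hypothetical placeholder, since the article does not quote one.

```python
# Back-of-the-envelope savings per simulated crash test.
# RETAIL_PRICE and the 10x prototype multiplier come from the article;
# SIMULATION_COST is an assumed, illustrative figure.
RETAIL_PRICE = 30_000                 # typical retail price of a crash-tested model
PROTOTYPE_COST = 10 * RETAIL_PRICE    # "over ten times" retail, per the article
SIMULATION_COST = 10_000              # hypothetical compute/engineering cost per run

def savings_per_simulated_crash(prototype_cost=PROTOTYPE_COST,
                                simulation_cost=SIMULATION_COST):
    """Money saved when one crash is simulated instead of run on a physical prototype."""
    return prototype_cost - simulation_cost

print(savings_per_simulated_crash())  # 290000 with these assumed figures
```

Even with a generous allowance for compute and engineering time, each simulated crash lands in the hundreds of thousands of dollars saved, which is the article’s point.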

What makes these prototypes so expensive is that they are being built without the sophisticated automation processes that are used once a model hits the assembly line for mass production.

And it’s not just one prototype and one crash per car model. For each car GM designs, the car must be crashed multiple times, and in multiple ways. Each crash helps to formulate a comprehensive image of what aspects of the car could be improved for safety. Once these tweaks are implemented, the test process starts over again.

It’s easy to see how bypassing the creation and destruction of numerous six-figure vehicles makes HPC-powered simulations an easy choice as far as costs are concerned. But perhaps more notable is the amount of time saved. Instead of having to wait through the manufacturing of a brand new vehicle just to make an adjustment on the hood, an engineer need simply make a few adjustments in their simulation program and run the test again to determine whether or not a modified part is worthwhile.

But the bottom line of all this is safety. With more time and fewer expenses tied to each crash test, more simulations can be run and more design improvements made, meaning that opting to go digital ultimately drives safer cars off the assembly line.

Of course, this doesn’t remove the need for actual crash testing altogether. Physical tests are still necessary to verify the results of digital simulations, as well as to pass industry safety requirements.

But GM sees its journey to consolidate its distributed global IT into the Warren and Milford enterprise datacenters as key to its future success. Jeff Liedel, executive director of infrastructure engineering, pointed to GM’s HPC environment as one of the largest in the business. Nick Bell, CIO of Global Product Development, agreed, noting how the combination of GM’s scale with the new enterprise datacenters should put the company ahead of the competition.

“We think [the Warren datacenter] positions us to take advantage of our scale and new technologies that emerge. The use of information technology is a key piece of every business function at General Motors and the datacenters are the engine room of all of that,” said Bell.

And, according to Liedel, this reliance on HPC and advanced simulations will only grow stronger as the company—and the industry—moves into the future. “The amount of simulations we can do and the kinds of simulations we can do keeps increasing year after year. This is almost an insatiable demand from engineering,” said Liedel.

Bell pointed out that this will hold especially true as the industry adopts new propulsion technologies, such as electric motors, and automakers continue striving to build even more lightweight vehicles.

“We’ve come so far in 20 years, and that’s only going to continue. The kinds of things you can simulate before building a physical model,” Liedel continued. “I think that’s just an endless opportunity to more efficiently design and build cars by putting that compute power at a very high scale as close to the customer. I think that’s going to bring us some real long-term advantages.”
