SGI Reveals NUMAlink 7, Plans For Data-Intensive Computing
SGI started out as a workstation maker, expanded into supercomputing, and merged with an innovator in hyperscale datacenter systems. The company knows the kinds of systems that governments, research institutions, and large Web application providers need, and it is good at delivering them. Like others who peddle such high-end systems, SGI wants to take what it has engineered and tailor it for high-end enterprise customers who are looking for greater performance and density than they can get out of plain-vanilla clusters or NUMA systems.
The company wants to do this for a number of reasons. First, the opportunity is large – on the scale of hundreds of millions of dollars at the very least, according to executives from SGI who spoke to EnterpriseTech recently. Second, selling UV big memory machines to large enterprises for in-memory and other kinds of processing will no doubt give SGI higher margins than trying to close the next big 10,000-lot server deal at one of the hyperscale datacenter operators.
To get a sense of this evolving market for what SGI calls data-intensive computing, EnterpriseTech sat down with Eng Lim Goh, SGI’s chief technology officer, and Bob Braham, its chief marketing officer, to talk about the market forces at work and the technology that SGI can bring to bear to solve some of the peskiest analytics problems for companies.
Timothy Prickett Morgan: What I am trying to understand is how SGI is going to be deploying technologies that it has developed for supercomputing in the business environment. I know about the deal you have done with SAP on a future HANA system, but this question goes beyond in-memory databases. I just want to get my brain wrapped around the shape of the high-end enterprise market you are chasing and what potential it has. I keep thinking there is an opportunity for ever-larger shared memory systems to simplify the programming model for people and to get faster access to data. Clusters all sounded so cheap and wonderful, but you end up paying for it in terms of software development and inefficiencies in the infrastructure.
One of the founding ideas behind EnterpriseTech is that technologies developed for the national and academic supercomputing market eventually make their way into enterprise datacenters. We also have another set of technologies coming from Google, Yahoo, and others, who have developed their own alternatives to HPC-style technologies for processing large amounts of data. So we are getting a confluence of different styles of computing, and they can mix or they can fight for adoption.
Eng Lim Goh: Brilliant insights. In fact, you will see that a customer of ours, PayPal, has been moving their Hadoop and MapReduce clusters to an HPC system with InfiniBand and ultimately will be going to a shared memory system. This has been a four-year process, lived by PayPal, exactly as you described it.
The term that we are starting to use more instead of big data is data-intensive computing. So in a system that we build, we can build it for high-performance computing or data-intensive computing. They are basically the same structure at a baseline, but they start to diverge depending on needs. With data-intensive, you build in more bandwidth and have more of a focus on data latencies as opposed to more floating point calculations per second.
TPM: How do you characterize this movement from supercomputing centers to the enterprise? In the past, you have worked with Microsoft to scale Windows Server and SQL Server across your UV 2000 systems. But Windows Server can’t span more than 256 threads or cores, depending on whether you have HyperThreading turned on, and the UV 2000 can be much larger than that, with 4,096 threads and 64 TB of shared memory. Obviously, Linux can span an entire UV 2000 system because it does so for HPC workloads, but I am not sure how far a commercial database made for Linux can span.
Bob Braham: Let’s take a step back from feeds and speeds. If I take a look at how we are going to attack this data-intensive market, the first thing I talk about is how the market is bifurcated and that it is a lot broader than Hadoop. There are three different types of players.
The first are the solution stack vendors – Hewlett-Packard, IBM, and Oracle – and I have been running around SGI telling people that we are going to see these players again and again and it is not going to be Dell or Cray or Groupe Bull anymore. What these big players do, if you look at analytics or big data as a solution stack, all the way from presentation to framework down to infrastructure – call it servers, storage, and networking, and some kind of systems software – these three players want to own the whole thing. And they are acquiring companies like crazy to get there.
The second category is the platform players. They are platform as a product – EMC, Dell, SGI – or platform as a service – the classic cloud vendors, such as Google, Microsoft, and Amazon.
The third category that we see is a group of companies that complement the platform vendors, and they are the best-of-breed, point solution vendors that provide the software.
The name of the game, as we see it, is that the platform-as-a-product vendors work with the solution providers – us working with SAP on HANA, for example – and end up competing with the vertical stack players. In fact, the reason SAP was so eager to do that deal with us is that while SAP partners with HP, both IBM and Oracle have become solution stack players, so SAP doesn’t have a high-end hardware vendor to help it compete with those two. That’s where we see ourselves getting traction with HPC in this enterprise space.
TPM: Is there a place for a Microsoft and SGI partnership going forward? Microsoft doesn’t sell hardware.
Bob Braham: That’s an interesting question. We recently reconnected with Microsoft, and we are actually revisiting those conversations right now.
TPM: What about a similar partnership with Red Hat? They have some skin in the enterprise game, obviously, particularly in financial services and they also have Java middleware. But they do not have the applications.
Bob Braham: I have said this over and over again. When you are working with ISV partners, there are really four you have to worry about: Microsoft, SAP, Oracle, and VMware. Beyond that, it falls off a cliff. We work with Red Hat a little bit, but am I anxious to go 30,000 feet deep with them? SAP will keep us busy, and Microsoft is far more interesting. So we have got to prioritize.
Eng Lim Goh: On the server side, you are right. There is the cluster approach, like that used for Hadoop, and this is great for the search-type operations. But really, in data-intensive computing, there are two types – search and discovery – and they are very different. If you know exactly what you want to find in the haystack, then Hadoop clusters are great for this. Where Hadoop falls off is where you are doing more discovery – uncertainty goes up, and you are not quite sure if the data you want to find is there, or you are looking for relationships or if your database constantly changes.
An example of this is the system we installed at the United States Postal Service, which is one of our biggest data-intensive customers. What the USPS is looking for is fraud – specifically, people copying those bar-coded stamps – and this is a huge search problem against a huge database of all stamps. The catch is that every newly scanned and compared stamp needs to join the collective while the very oldest stamp in the database falls off. Hadoop does not handle this very well because it doesn’t like the database to be changed too often.
In the case of the USPS system, it is streaming changes. As such, a big UV 2000 system with 16 TB of shared memory is used to support Oracle’s TimesTen in-memory database. A half billion pieces of mail a day are streamed in, and if there is a copy of a stamp, it is flagged.
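What Goh describes amounts to duplicate detection over a fixed-size window of recent items, where each arrival can evict the oldest entry. Here is a minimal sketch of that access pattern in Python – the class and method names are hypothetical, and this is an illustration of the idea, not the USPS/TimesTen implementation:

```python
from collections import OrderedDict

class SlidingWindowDetector:
    """Flag items already seen within a fixed-size window.

    When the window is full, the oldest entry falls off -- mirroring
    the 'oldest stamp falls out of the database' behavior described
    above. (Hypothetical sketch, not the production system.)
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.window = OrderedDict()  # insertion-ordered: oldest entry first

    def check(self, stamp_id):
        if stamp_id in self.window:
            return True               # duplicate: possible copied stamp
        self.window[stamp_id] = None  # admit the new stamp
        if len(self.window) > self.capacity:
            self.window.popitem(last=False)  # evict the oldest stamp
        return False

d = SlidingWindowDetector(capacity=3)
assert d.check("a") is False
assert d.check("a") is True    # duplicate flagged
assert d.check("b") is False
assert d.check("c") is False
assert d.check("d") is False   # window full: "a" is evicted
assert d.check("a") is False   # "a" has aged out of the window
```

A cluster partitions this index across nodes and pays network latency on every probe; in a shared memory machine the whole window sits in one address space, which is the advantage Goh is pointing at.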
So you can see that when you go from search to discovery, you start to move from clusters to shared memory machines, especially if the data you are analyzing against constantly changes. On top of that, the programming model for shared memory systems is much easier. What we tell these data-intensive customers is that it is essentially like a PC running Linux, except that the memory is huge – up to 64 TB.
We have sold two machines of that size, with 64 TB, to the Japanese government recently for statistical analysis and data-intensive computing. The one at eBay’s PayPal unit is 6 TB. As you might imagine, the focus in these systems is more on the memory capacity than on the core count. On a machine like the one at USPS, you actually want as few cores as possible because Oracle charges per core for the database, and we worked out a configuration that reduced the core count.
TPM: At the Innovator’s Breakfast at SC13 last November, you said that SGI had sold 687 shared memory machines, and you showed a small portion of the customer list. It had the USPS and eBay/PayPal machines on there, as well as “Pangea” at Total and a handful of others that I would consider enterprise customers among a list of 44 machines. How fast does this change happen? Do we end up with the UV systems expanding their market share and these enterprise applications driving a larger portion of SGI’s revenues?
Bob Braham: Let me jump in on this. I think you really articulated it well. With SAP, you look at us getting business from some fraction of their customers that need high end, and you are looking at an opportunity for us that is on the order of hundreds of millions of dollars a year. Are we going to get all of that right away? No. But it is a much bigger market.
Here’s another example. I just got back from Europe, and we will soon have a new customer who is a major banking institution and who is looking at UV systems for their software development environment. They were on VMware for server virtualization and they want to move to KVM, and anytime there is a change like that there is an opportunity to take another look at the hardware infrastructure. So they did some benchmarking and discovered that our cost of ownership for UV was quite a bit less than for clusters.
This is a much different marketplace, and one with much different motion. We can expand the market with our UV product, and we see some others jumping in but we just have a big head start.
Eng Lim Goh: One third of the non-scientific machines we have sold are doing genome analysis. If you think about this, and data-intensive computing in particular, what they do is dump all of the data from banks of genome sequencers into a big memory machine for assembly. A cluster is good for genome alignment, but you need a shared memory system for assembly. With alignment, you start with a template, and you fit all of the sequences into that template. Assembly is when you do not have a template, and you are doing a first principle assembly of a genome – you are trying to complete a jigsaw puzzle with three billion pieces and this is where you need shared memory.
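Goh’s distinction can be made concrete with a toy example: alignment places each read against a known template, while assembly must discover how the pieces fit together by their overlaps – the jigsaw puzzle with no picture on the box. The following greedy overlap-merge sketch is purely illustrative (real assemblers are vastly more sophisticated, and the function names are invented here):

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def assemble(reads):
    """Template-free assembly: repeatedly merge the pair of reads
    with the largest overlap until one sequence remains.

    Every read must be comparable against every other, which is why
    genome-scale assembly favors one big shared address space."""
    reads = list(reads)
    while len(reads) > 1:
        best_k, best_i, best_j = 0, 0, 1
        for i in range(len(reads)):
            for j in range(len(reads)):
                if i != j:
                    k = overlap(reads[i], reads[j])
                    if k > best_k:
                        best_k, best_i, best_j = k, i, j
        merged = reads[best_i] + reads[best_j][best_k:]
        reads = [r for x, r in enumerate(reads) if x not in (best_i, best_j)]
        reads.append(merged)
    return reads[0]

print(assemble(["GATTAC", "TTACA", "ACAT"]))  # GATTACAT
```

Alignment, by contrast, is embarrassingly parallel – each read is matched independently against the fixed template – which is why it maps comfortably onto a cluster.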
In the commercial world, you find analogs to the alignment and assembly functions in genomics. Our intention is to move this proven capability from the genomics world into the commercial world, and we have already seen early value: at USPS for streaming search, at eBay/PayPal for relationship analytics and fraud detection, and at the SEC, using the Blacklight system at the Pittsburgh Supercomputing Center, for serendipitous discovery.
TPM: On the UV machines you have built so far, you have NUMAlink 6 clustering, which has been in the field for a while. Do you need to expand the scalability of the UV machines to attack this enterprise market, or is it big enough for now? Let’s forget about the shape of the CPU core ramp for once and talk about the shape of the memory ramp.
Eng Lim Goh: The memory is big enough for now, mainly because it is limited by what the Intel Xeon processor can see. Every processor in the UV system can see all of the memory, and it is limited to the 46-bit addressing of the Xeon chip, which is 2 to the 46th power or 64 TB. We are waiting for Intel to increase that before we can have more memory. For now, we have only sold two at 64 TB, and most of the systems have less than that. We have government customers who are asking for larger memory spaces, but we cannot increase it until Intel does.
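For readers checking the arithmetic: 46 bits of physical addressing caps the byte-addressable space at 2 to the 46th power bytes, which works out to exactly the 64 TB ceiling Goh cites.

```python
# Physical address bits -> maximum directly addressable memory.
addr_bits = 46                # Xeon physical address width cited above
max_bytes = 2 ** addr_bits
terabytes = max_bytes // 2 ** 40   # bytes -> TB (binary terabytes)
print(terabytes)              # 64 -- the UV memory ceiling
```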
TPM: Do you have any idea when that will happen? Usually, when you have a couple of customers hitting the ceiling, that is a good sign that the ceiling needs to be raised.
Eng Lim Goh: We have communicated with Intel, and we are in a non-disclosure situation so I can’t tell you the plan.
In the meantime, we are working on NUMAlink 7, which is the next-generation of our interconnect that will be used in the SAP HANA machine. The goal with NUMAlink 7 is not more memory, but reducing latency for remote memory.
Even with coherent shared memory, it is still NUMA, and it still takes a bit more time to access remote memory. That gap is still two orders of magnitude smaller than the latencies in a cluster, and our goal is to reduce it even further with NUMAlink 7. As databases stretch across the UV system, you would like more uniformity so you don’t pay a penalty anywhere in the memory. We have, in fact, shipped a test NUMAlink 7 machine to SAP in Germany.
This is going to be big. Microsoft and SAP are the only two without hardware, and we have big and proven shared memory and proven customers. We just have to make sure we get all of our ducks in order and do this right.
We slap the wrists of engineers every time they ask for 10 picoseconds. We are very, very latency conscious because that is where we shine. We have a saying: You can buy bandwidth, but latency you sweat for. This is our key differentiation. With NUMAlink 6, we are talking about a cut-through latency of our switches that is down now to around 100 nanoseconds. A foot of cable consumes 1 nanosecond, so we are even now counting how long the cables are. NUMAlink 7 will be even better, and I know it because when I was an engineer, my wrist got slapped until it was red. So now, I do the slapping, figuratively speaking.
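The figures Goh quotes make for a simple back-of-the-envelope model of a NUMAlink 6 hop – roughly 100 nanoseconds of switch cut-through plus about a nanosecond per foot of cable. The calculation below is an illustration of that arithmetic, not an SGI specification:

```python
# Rough hop-latency model from the figures in the interview.
SWITCH_CUT_THROUGH_NS = 100   # ~100 ns cut-through per NUMAlink 6 switch
CABLE_NS_PER_FOOT = 1         # ~1 ns per foot of cable

def hop_latency_ns(cable_feet, hops=1):
    """Estimated latency for `hops` switch traversals plus cabling."""
    return hops * SWITCH_CUT_THROUGH_NS + cable_feet * CABLE_NS_PER_FOOT

print(hop_latency_ns(cable_feet=10))           # 110 ns: one switch, 10 ft
print(hop_latency_ns(cable_feet=20, hops=2))   # 220 ns: two switches, 20 ft
```

The model makes Goh’s point plain: once the switch is down around 100 ns, cable length stops being noise and becomes a line item, which is why the engineers count feet.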