SGI To Paint SAP HANA In-Memory Appliance UltraViolet
Systems maker SGI and application and database software maker SAP are teaming up to create a big, bad system for running SAP’s HANA in-memory database. The partnership will, says SGI, increase the opportunity for selling its UV shared memory systems by at least an order of magnitude and will allow the largest SAP shops to improve the performance of their HANA systems while reducing the complexity and cost.
The SGI deal with SAP also highlights the fact that the system maker is continuing to expand its software partnerships with the key players in the enterprise software space as it seeks to push systems like its “UltraViolet” UV 2000s beyond its more traditional supercomputing customer base. SGI has been a Microsoft partner for several years, pushing the combination of Windows Server and SQL Server on the UV 2000 systems, and is also an Oracle partner. It is noteworthy that Oracle’s eponymous database was at the heart of the $16.7 million fraud detection and mail sorting system at the United States Postal Service.
The SGI-SAP partnership does not involve an OEM relationship between the two companies, so SGI is not embedding HANA in its systems and SAP is not going to put its badge on SGI’s systems, according to Bob Braham, chief marketing officer at SGI. But the two companies have posted engineers at each other’s headquarters in Fremont, California, and Walldorf, Germany. They are working on tuning up HANA for a future system based on SGI’s NUMAlink interconnect that will support up to 64 TB of shared memory for the SAP database.
This machine, which has the obvious code name “HANA Box” inside of SGI, will be able to address a total of 64 TB of shared memory in a single system image. That is considerably larger than the current HANA clusters available from IBM, Dell, and Hewlett-Packard, which have up to 512 GB of main memory in a single node configuration and which offer up to 8 TB across four nodes. SAP is very strict about the configurations of HANA nodes, right down to the clock speed of the Xeon E7-4800 (for four-socket nodes) or Xeon E7-8800 (for eight-socket nodes) and the memory capacity per node. To scale up a HANA system, you have to manage a cluster and partition the data across the nodes, which can hurt performance and make the setup tougher to administer.
In-memory databases are not just relational database management systems that have database tables that fit into main memory. They are unique pieces of software that are tuned to take advantage of the higher bandwidth and lower latency of storing data in main memory instead of on disk drives, on flash drives, or a mix of the two. Flash memory, depending on the technology, offers somewhere around a factor of 250 better I/O performance and significantly lower latency compared to spinning disk drives; main memory has about a factor of 200 speedup over flash. So moving from disk straight to main memory can radically reduce the time it takes to run transactions or queries.
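The compounding effect of those two factors can be sketched with some back-of-the-envelope arithmetic. This is illustrative only; the 250x and 200x figures are the rough ratios cited above, not measurements of any particular device:

```python
# Illustrative latency arithmetic only; exact figures vary by device and
# workload. These are the approximate factors cited in the article.
DISK_TO_FLASH_SPEEDUP = 250   # flash I/O vs. spinning disk (rough)
FLASH_TO_DRAM_SPEEDUP = 200   # main memory vs. flash (rough)

# Moving data from disk straight to main memory compounds both factors.
disk_to_dram = DISK_TO_FLASH_SPEEDUP * FLASH_TO_DRAM_SPEEDUP
print(f"Disk -> DRAM speedup: roughly {disk_to_dram:,}x")  # roughly 50,000x

# In concrete terms: a query that spends 10 seconds waiting on disk I/O
# would spend on the order of 10 / 50,000 seconds, about 0.2 ms, in DRAM.
io_wait_on_disk_s = 10.0
io_wait_in_dram_ms = io_wait_on_disk_s / disk_to_dram * 1000
print(f"10 s of disk I/O wait becomes ~{io_wait_in_dram_ms:.1f} ms in memory")
```

That multiplicative stacking is why the jump from disk-resident to memory-resident databases feels qualitative rather than incremental.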
“We have been talking to SAP for years, and there was a confluence of factors that came together that made both parties very interested in this partnership,” says Braham. “First, if you look at HANA, a meat-grinding in-memory database that needs a high performance platform, when you use it on clusters, SAP is finding that some percentage of their customers run out of gas. Clusters don’t do it. So shared memory is attractive, which we provide with our UV platform.”
The other big change that has come along concurrently with the advance and adoption of in-memory databases in general and HANA in particular, says Braham, is the shifting competitive landscape in both the database and systems markets.
Here is how Braham sees it. Oracle acquired Sun Microsystems four years ago, and is a system vendor as well as a staunch competitor to SAP in databases and applications. Oracle has announced its Exalytics appliances, based on its TimesTen in-memory database, and aimed them right at HANA; it is also pointing its flash-enhanced Exadata appliances at speeding up online transaction processing and data warehousing workloads. For those who want to run a truly large instance of the Oracle database on a single machine, Oracle has the 32-socket, 32 TB Sparc SuperCluster M6-32 system, announced last fall, and this machine can, in theory, be tripled in size with the current architecture. Of course, that big bad Oracle box requires Solaris and is not as inexpensive as X86 machinery.
That said, Oracle is also a partner of SGI, and in fact the large UV 2000 deal that SGI did for the US Postal Service, mentioned above, had Oracle’s database at its heart.
IBM’s DB2 database, with its BLU Acceleration extensions, has also been morphing into a HANA competitor, says Braham. That said, IBM built the largest HANA installation in the world, at SAP itself, which has over 100 TB of aggregate main memory capacity. So IBM may be competing with Oracle and SAP when it comes to in-memory databases, but it also partners with SAP on the hardware.
But neither Oracle nor IBM has anything like SGI’s UV system, and neither does Dell, by the way. And while Hewlett-Packard is working on a sixteen-socket machine, code-named “Kraken,” which will have up to 12 TB of shared main memory across those sixteen sockets, HP has not said when that machine will come to market. The system, which will use the impending “Ivy Bridge-EX” Xeon E7 v2 chips from Intel, is not available now and probably will not be until after these new high-end processors are announced sometime in the first quarter.
That gives SGI an opportunity to capture some high-end HANA share. But to do so will require the company to do some engineering. With the UV 2000 system, launched in June 2012, SGI moved the NUMAlink 6 interconnect that implements the shared memory across the nodes in the machine to the Xeon E5-4600 processors, which are less expensive and have a less complex architecture than the Xeon 7500 and E7 processors. All of these chips can be used in four-socket machines, and the Xeon 7500, E7 v1, and E7 v2 can also be used in eight-socket machines. With the upcoming HANA Box, SGI is creating a machine based on the Xeon E7 v2, because SAP requires HANA machines to use this class of processor rather than the ones that SGI has chosen for its supercomputer workloads. This should not be a big deal, of course, since SGI already knows how to couple together Intel’s high-end chips.
The precise feeds and speeds of the HANA Box from SGI were not divulged, because Intel has not launched the Xeon E7 v2 chips yet and because SGI and SAP are not expected to unveil the machine until SAP’s Sapphire customer and partner event in June. But Braham did confirm that the memory footprint of the forthcoming HANA machine would be 64 TB, just like the UV 2000. In theory, each Xeon E7 v2 processor can address up to 1.5 TB of main memory using 64 GB memory sticks in 24 memory slots, which is 50 percent more than was possible with the Xeon E7 v1 chip that SGI chose not to use in the UV 2000s. The Xeon E5-4600 chips that were used in the UV 2000 systems topped out at 256 GB per socket. It is a reasonable guess that SGI wants a machine with fewer processors and more memory per socket for the HANA machine. But the specs will really be determined by the rules that SAP sets for HANA configurations.
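Those per-socket capacities make the socket math easy to sketch. The figures below are back-of-the-envelope estimates derived from the numbers cited above, not disclosed HANA Box specifications:

```python
# Back-of-the-envelope socket math for a 64 TB shared memory system,
# using the per-socket capacities cited in the article. The actual
# HANA Box configuration had not been disclosed at press time.
TB = 1024  # GB per TB

# Xeon E7 v2: 24 DIMM slots x 64 GB sticks = 1,536 GB = 1.5 TB per socket
e7v2_per_socket_gb = 24 * 64
# Xeon E5-4600 as configured in the UV 2000: 256 GB per socket
e5_per_socket_gb = 256

target_gb = 64 * TB  # the 64 TB footprint Braham confirmed

print(e7v2_per_socket_gb / TB)               # 1.5 TB per E7 v2 socket
print(target_gb // e5_per_socket_gb)         # 256 E5-4600 sockets for 64 TB
print(-(-target_gb // e7v2_per_socket_gb))   # ~43 E7 v2 sockets, rounded up
```

In other words, the E7 v2 reaches the same 64 TB footprint with roughly one sixth as many sockets as the E5-4600, which is consistent with the guess that SGI wants fewer processors and more memory per socket.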
What has SGI all fired up is the opportunity to sell more UV shared memory systems. “The architecture of UV has a finite opportunity to grow in the traditional HPC space,” says Braham. “It has at least an order of magnitude more opportunity to grow in the enterprise. There is an inflection in the market taking place.” Financial services companies are approaching SGI to see how to use the UV 2000 systems instead of loosely coupled clusters, and the USPS deal above is another example of an enterprise account that wanted a big shared memory system. “I will tell you, SAP HANA alone is an order of magnitude more opportunity for UV.”
The scale of the UV system provides an ease of use advantage over clusters, says Braham, and that is for both programming and administration. And among some financial services companies that are shifting away from VMware ESXi and vSphere tools for their X86 server virtualization layer to a combination of the KVM hypervisor and the OpenStack cloud controller, SGI is able to demonstrate a performance and a price/performance lead with UV systems. (EnterpriseTech will be following up for more details on this.) For in-memory database processing in particular, SGI is forecasting that it can drop administration costs compared to loosely coupled clusters by around 50 percent.
When the HANA Box is unveiled at Sapphire in June, SGI will talk more about the roadmap for the UV product line and provide more technical details. The system is expected to be generally available in the third quarter of this year.