Dell EMC Launches At-Scale All-Flash, Object Storage Products
The newly combined Dell EMC put itself on display this week at Dell EMC World in Austin, making two announcements that flex the company’s storage muscles: an all-flash array (AFA) offering that Dell EMC said sets new density, performance and scalability standards, and an at-scale object storage platform positioned to underprice public cloud-based storage.
The flashier of the two is Isilon All-Flash, which the company said combines the high performance of flash technology with the Isilon scale-out NAS platform and is targeted at unstructured data analytics workloads. A single 4U system includes a 4-node Isilon cluster with a maximum of 924TB of capacity, 250,000 IOPS and up to 15GB/s of aggregate bandwidth. Dell EMC said a single Isilon scale-out NAS cluster can support up to 100 systems with 400 nodes, delivering 92.4PB of capacity, 25M IOPS and up to 1.5TB/s of total aggregate bandwidth within a single file system and single volume.
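The cluster-level maximums quoted above appear to follow from scaling the per-chassis figures linearly to 100 systems. A minimal sketch of that arithmetic, using only the numbers from the announcement (the assumption of strictly linear scaling is mine):

```python
# Back-of-the-envelope check of the Isilon All-Flash scaling claims.
# Per-chassis figures are from the announcement; linear scaling to
# 100 chassis is an assumption the quoted cluster maximums match.

CHASSIS_CAPACITY_TB = 924        # max capacity per 4U, 4-node chassis
CHASSIS_IOPS = 250_000           # IOPS per chassis
CHASSIS_BANDWIDTH_GBPS = 15      # aggregate GB/s per chassis
MAX_CHASSIS = 100                # 100 systems = 400 nodes per cluster

cluster_capacity_pb = CHASSIS_CAPACITY_TB * MAX_CHASSIS / 1000
cluster_iops_m = CHASSIS_IOPS * MAX_CHASSIS / 1_000_000
cluster_bandwidth_tbps = CHASSIS_BANDWIDTH_GBPS * MAX_CHASSIS / 1000

print(f"{cluster_capacity_pb} PB, {cluster_iops_m} M IOPS, "
      f"{cluster_bandwidth_tbps} TB/s")
# → 92.4 PB, 25.0 M IOPS, 1.5 TB/s
```

Each cluster maximum is exactly 100 times the single-chassis figure, consistent with the vendor's stated numbers.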
Isilon All-Flash is intended for wringing value out of unstructured data, which accounts for 80 percent of new data produced and stored by enterprises today, Dell EMC said. To do this, organizations are using increasingly powerful next-generation applications and workloads that require high performance to process massive unstructured data sets. “This unstructured data opportunity (is) the final flash frontier that requires a scale-out architecture designed to take advantage of flash media and provide the same enterprise-grade data protection, management, access and security that these unstructured data applications require today,” the company said.
In developing the product, “what (we) did was re-look at the problem of providing the most performance out of flash if you had a clean slate,” John McCool, SVP and GM, DSSD, at Dell EMC’s Emerging Technologies Division, told EnterpriseTech. “So no legacy protocols, none of the ways you connected compute to storage before – everything was new. They designed a system where the flash devices connect directly into an integrated PCI fabric, which connects directly to servers through a PCI connector. So the native way compute talks to storage in the flash world, NVME, runs over these connects, and provides outstanding performance.”
He said that while the typical AFA might be 1M IOPS, “we run at 10M IOPS with a DSSD fabric. It’s a new architectural segment; we call it rack-scale flash. It’s very different.”
Isilon All-Flash use cases include high performance data analytics for financial trading systems, security analytics and the largest Oracle and SAP databases. According to market watcher IDC, Dell EMC leads all-flash storage with 30.9 percent market share.
“With traditional methods today, you either go with an AFA or a deep pool of storage, which encounters latencies across the network,” McCool said. “So you have some customers trying to avoid that by putting flash directly in the server. The challenge (there) is you get tremendous performance but very limited data sets. It’s physically inside the server, and the problem is that it becomes stranded. You might provision 15TB for one server and 15 for another – it can’t be reused effectively. So by pooling it we can aggregate all the servers into one place. It’s easier to manage, it has high availability, it’s redundant and it has its own proprietary RAID that has some better performance characteristics.”
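McCool's "stranded" flash point can be illustrated with a toy provisioning model: server-local flash is sized per host and cannot be reclaimed, while a pooled fabric serves every host from one shared budget. All figures below are hypothetical, chosen only to echo his 15TB-per-server example:

```python
# Sketch of the "stranded flash" problem: identically provisioned
# server-local flash leaves unused capacity trapped in each host,
# while a shared pool only needs the aggregate demand.
# Demand figures are illustrative, not from the article.

server_demand_tb = [4, 15, 7, 2]     # hypothetical per-server working sets
per_server_flash_tb = 15             # each server provisioned identically

# Server-local flash: every host gets 15 TB regardless of need.
local_total = per_server_flash_tb * len(server_demand_tb)
local_stranded = sum(per_server_flash_tb - d for d in server_demand_tb)

# Pooled flash: one shared budget sized to aggregate demand.
pooled_total = sum(server_demand_tb)

print(f"local: {local_total} TB provisioned, {local_stranded} TB stranded")
print(f"pooled: {pooled_total} TB serves the same workloads")
# → local: 60 TB provisioned, 32 TB stranded
# → pooled: 28 TB serves the same workloads
```

In this sketch more than half the server-local capacity sits idle, which is the waste the pooled-fabric design is meant to recover.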
Put another way, Isilon All-Flash’s IOPS capabilities mean data sets used in analytics workloads aren’t limited to the memory capacity of servers. “We’re seeing a number of customers who actually reduce the footprint of their servers, and forcing the data directly onto the 100 TB that we have available, so they can get the data scientists to use these larger data sets and get better results,” McCool said.
Isilon All-Flash supports multiprotocol access, including NFS, SMB, HDFS, Object, NDMP and FTP, for read and write access to unstructured data. Dell EMC also offers automated storage tiering via Isilon SmartPools and CloudPools, which lowers costs and reserves Isilon All-Flash storage for more demanding applications. The company said the product provides up to 80 percent storage utilization and can leverage Isilon data de-duplication capabilities that can reduce storage requirements by up to 30 percent.
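The utilization and de-duplication figures compound: higher utilization raises usable capacity, and a 30 percent reduction in storage requirements lets the same usable space hold more logical data. A quick sketch of that arithmetic, taking the vendor's best-case numbers at face value (real savings will vary by workload):

```python
# Illustrative arithmetic for the efficiency claims: up to 80%
# storage utilization, and dedup cutting requirements by up to 30%.
# Best-case vendor figures; actual results depend on the data.

raw_tb = 924                 # one all-flash chassis, max configuration
utilization = 0.80           # up to 80% of raw capacity usable
dedup_reduction = 0.30       # dedup reduces requirements by up to 30%

usable_tb = raw_tb * utilization
# Needing 30% fewer stored bytes means the same usable space
# holds 1 / (1 - 0.30) times as much logical data.
effective_tb = usable_tb / (1 - dedup_reduction)

print(f"usable: {usable_tb:.1f} TB, "
      f"effective after dedup: {effective_tb:.1f} TB")
# → usable: 739.2 TB, effective after dedup: 1056.0 TB
```

Under these best-case assumptions, a 924TB chassis would present roughly a petabyte of effective capacity.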
Dell EMC also announced a new version of its Elastic Cloud Storage (ECS), an object-storage platform designed for low-cost, long-term storage of high volumes of data. ECS is positioned as an alternative to public cloud storage, and Dell EMC cited a finding by analyst firm ESG that ECS offers 60 percent lower TCO than public cloud storage options.
ECS runs on PowerEdge servers, such as the Dell EMC DSS 7000 and the Dell EMC R730xd. The high-density appliance, aimed at helping companies with tape replacement efforts, can store up to 6.2PB of data in a single rack.
“Public cloud storage is very valuable if you’re trying out a new application and you want to build something and try it out, or you think your data footprint is going to be on the lower side,” Varun Chhabra, head of product marketing at Dell EMC’s Advanced Software Division, told EnterpriseTech. “What we find with the large customers is as their data footprint grows, the cost of repeatedly renting storage in a public cloud, over large amounts of data, can add up a lot. So we have quite a few customers who…find it’s a lot more costly than they thought it would be.”
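Chhabra's point is that rented capacity is an ongoing cost that compounds with a growing footprint, while owned capacity is largely paid for once. A toy cost model makes the shape of that trade-off concrete; every price and growth rate below is a hypothetical placeholder, not a Dell EMC or cloud-provider list price:

```python
# Toy model of cumulative cloud storage rent versus a one-time
# purchase, as a data footprint compounds monthly. All rates are
# hypothetical placeholders for illustration only.

CLOUD_PER_TB_MONTH = 20.0    # assumed $/TB-month rental rate
ONPREM_PER_TB = 300.0        # assumed one-time $/TB to buy and run

def cloud_rent(start_tb: float, monthly_growth: float, months: int) -> float:
    """Total rent paid over `months` as the footprint grows each month."""
    total, tb = 0.0, start_tb
    for _ in range(months):
        total += tb * CLOUD_PER_TB_MONTH
        tb *= 1 + monthly_growth
    return total

# A 500 TB footprint growing 3% per month, over three years:
rent = cloud_rent(500, 0.03, 36)
final_tb = 500 * 1.03 ** 36
buy = final_tb * ONPREM_PER_TB

print(f"cloud rent: ${rent:,.0f} vs one-time purchase: ${buy:,.0f}")
```

The crossover point depends entirely on the assumed rates, but the model shows why rental costs "add up" once a footprint is large and growing; it is not evidence for the specific 60 percent TCO figure ESG reported.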
Chhabra said that, like most object platforms, ECS is “really built for capacity at scale, as opposed to being a performance optimized platform. So it’s not a backbone where we see a lot of directly live HPC usage. The scale piece is where we play.”
A typical use case, he said, is a financial services company that needs to archive a large quantity of data: “they had a lot of tape data they felt was not super sensitive so they could keep it in the public cloud.” But as cloud costs grow, such customers seek a less expensive alternative.
Chhabra said ECS has “more than 50 PBs in production, we haven’t hit any limits with scale. The capacity limits are something we’ve helped circumvent for a lot of our customers because they don’t have to pay that rent or pay those data access costs that they do every time they hit the public cloud.”