Study: Flash Going Mainstream, Driving Adoption of Big Data Analytics
The rapid emergence of Flash-based storage into the technology mainstream is a primary factor in the enterprise embrace of advanced scale computing – particularly big data analytics projects.
That’s a key finding of a new survey of more than 1,000 IT managers worldwide from industry watcher 451 Research, which reported that enterprises are quickly transitioning to a variety of Flash-based storage architectures for primary storage. Indeed, nearly 90 percent of organizations have some form of Flash-based storage installed in their datacenters, and all-flash array (AFA) approaches are becoming increasingly standard for supporting transactional applications.
In fact, nearly one in five organizations surveyed have entirely replaced HDD technology with Flash for SAN-based storage workloads, or will do so over the next two years.
451 Research estimates the total AFA market at $4.6 billion this year, up from $3.3 billion in 2015. By 2020, the market is projected to reach $9.6 billion, a CAGR of 24 percent. EMC currently leads the market with a 41 percent share, followed by HPE at 25 percent and NetApp at 14 percent, according to 451 Research.
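The projected growth rate can be checked against the article's own figures – a sketch, assuming a 2015 baseline of $3.3 billion growing to $9.6 billion over the five years to 2020:

```python
# Verify the compound annual growth rate (CAGR) implied by 451 Research's
# figures: $3.3B in 2015 growing to a projected $9.6B in 2020 (5 years).
market_2015 = 3.3   # $ billions
market_2020 = 9.6   # $ billions (projected)
years = 5

cagr = (market_2020 / market_2015) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # → roughly 24 percent, matching the article
```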
The survey found that the most common Flash deployment in the datacenter is as a tier in a hybrid SAN array, with 51 percent of organizations using this method and a further 29 percent planning to do so in the next two years. But AFA adoption is growing most rapidly, with 27 percent of enterprises having deployed this technology already, and another 28 percent planning to deploy an AFA within the next two years. About three-quarters of AFA deployments support multiple applications, and while databases and virtual desktop infrastructure are currently the top two use cases, data analytics is expected to be a top-two use case within two years.
Flash technology supports the broadening adoption of advanced scale computing in the enterprise, according to Simon Robinson, research vice president at 451 and research director of its Voice of the Enterprise: Storage service. He told EnterpriseTech that 26 percent of surveyed organizations are running data analytics and business intelligence applications today, and 52 percent said they’ll be running applications of this type by 2018. “That says to me that many organizations have a big data analytics project and they’re looking at all flash to help them deliver that project,” Robinson told EnterpriseTech. “So it’s an enabling technology.”
Flash is displacing HDD technology primarily because it delivers roughly a 10x performance improvement while simplifying storage implementation. Robinson explained that to squeeze more performance out of hard drives, which have seen little improvement in basic performance for more than 20 years, storage administrators have resorted to techniques like “short stroking” – writing data only to the outer tracks of the disk platter to minimize seek time. “Which kind of met the performance challenge, but it also meant you have systems full of disk drives that, from a capacity perspective, go unutilized,” Robinson said. Maximizing HDD performance also meant large storage footprints; high cooling, power, and datacenter space costs; and additional IT staff management and maintenance overhead.
“With flash you spend less time deciding, provisioning and optimizing the system for different levels of performance to ensure that the hot data is on the hot tier and the cold data is on the cold tier,” said Robinson.
The biggest barrier to flash market growth is cost – 51 percent of respondents told 451 Research that AFAs are too expensive. Another 47 percent cited sufficient performance from their existing storage as a reason for not purchasing an AFA. But Robinson said Flash pricing is coming down to the point where it is approaching $5 per gigabyte, which he said is competitive with HDD pricing. He said a blend of Flash and HDD technologies will likely continue.
“On a capacity basis, relatively speaking, hard drives are vastly cheaper,” Robinson said. “But Flash vendors are building optimization technologies into their systems, such as deduplication and compression, which effectively squeeze down the capacity that the applications require. So we’re seeing AFA technologies coming into the same price ballpark as traditional disk-based systems. The precise numbers do vary a lot and the amount of compression or deduplication that you can impose on a given application varies hugely, therefore end users are rightfully skeptical about claims that vendors make about the effectiveness of these technologies. But they are an important component in the process of making this a viable market.”
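The effective-cost argument Robinson makes can be sketched in a few lines. The reduction ratio here is an illustrative assumption – as he notes, achievable deduplication and compression vary hugely by workload:

```python
# Sketch of the data-reduction economics described above: dedup and
# compression shrink the raw capacity an application requires, so
# effective cost = raw $/GB divided by the reduction ratio.
raw_flash_price = 5.0     # $/GB, the ballpark figure cited in the article
reduction_ratio = 4.0     # assumed 4:1 combined dedup + compression

effective_price = raw_flash_price / reduction_ratio
print(f"Effective Flash cost: ${effective_price:.2f}/GB")  # → $1.25/GB
```

At an assumed 4:1 reduction, $5/GB Flash lands at an effective $1.25/GB – the “same price ballpark as traditional disk-based systems” that the survey's vendors claim, though end users are right to test such ratios against their own data.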