
IBM Bolsters Storage for AI, De-Dupe and Cloud DR 


IBM made several announcements today designed to help customers get more out of their existing storage infrastructure, including IBM Storwize and FlashSystem arrays, the SAN Volume Controller (SVC) software-defined storage product, and even non-IBM arrays from Dell EMC and HPE that IBM storage software manages through its VersaStack program.

Let’s start with data de-duplication. For years IBM has offered de-dupe in its high-end FlashSystem A9000 array. Now it’s tweaked that de-dupe code so it can run on IBM’s Storwize, SVC, FlashSystem V9000, and the 440 external storage systems it supports via VersaStack.

This is the first time IBM has offered de-dupe on those products, although it previously supported other data reduction techniques on them, including compression, thin provisioning, compaction, and SCSI unmap (which reclaims capacity on arrays when virtual machines release it). Taken together, IBM claims these five data reduction techniques can deliver up to a five-to-one reduction in the amount of raw data that organizations store on their IBM systems.
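De-duplication works by storing each unique block of data once and replacing every repeat with a reference to the stored copy. The Python sketch below illustrates the principle only; the block size, hash function, and in-memory data structures are illustrative assumptions, not details of IBM's implementation.

    import hashlib

    BLOCK_SIZE = 8 * 1024  # fixed 8 KiB blocks; an illustrative choice, not IBM's actual block size

    def dedupe(data: bytes):
        """Split data into fixed-size blocks and store each unique block once.

        Returns a block store (hash -> block) and a recipe (ordered list of
        hashes) from which the original data can be reassembled.
        """
        store = {}   # content hash -> block bytes (each unique block kept once)
        recipe = []  # ordered hashes; duplicates point at the same stored block
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)
            recipe.append(digest)
        return store, recipe

    def reassemble(store, recipe) -> bytes:
        """Rebuild the original byte stream from the recipe."""
        return b"".join(store[h] for h in recipe)

    # Highly redundant data de-dupes well: 1,000 copies of the same block
    data = b"A" * BLOCK_SIZE * 1000
    store, recipe = dedupe(data)
    assert reassemble(store, recipe) == data
    raw = len(data)
    stored = sum(len(b) for b in store.values())
    print(f"raw {raw} bytes -> stored {stored} bytes ({raw / stored:.0f}:1)")

Production arrays refine this in many ways (variable-size chunking, inline versus post-process operation, collision handling), but the space savings come from exactly this kind of block-level reference counting.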

IBM is guaranteeing customers can cut storage requirements by at least 2x and up to 5x

Reduced storage requirements translate into big savings for customers. IBM says a five-to-one reduction in data can save a client storing 700TB across 7.7TB flash drives in a Storwize V7000F $3.7 million in operating expenditures (opex) and $800,000 in capital expenditures (capex) over the course of three years. Savings that big on big data can't be ignored.

“Storage analysts, depending on who you talk to, say the cost to manage 1TB of storage runs to $1,500 to $2,100 per year per terabyte,” says Eric Herzog, chief marketing officer and VP of worldwide storage channels for IBM’s Storage and Software Defined Infrastructure. “So if you don’t have to buy as many terabytes, guess what? You reduce your capex as well.”
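Taken together, the 700TB example and the per-terabyte management rates Herzog cites allow a rough gut-check of the opex claim. The back-of-the-envelope below uses only figures quoted in this article; the exact inputs behind IBM's $3.7 million number are not public.

    # Rough check on the opex claim, using only figures quoted in the article.
    raw_tb = 700          # data the client needs to store
    reduction = 5         # five-to-one data reduction
    years = 3
    cost_per_tb_year = (1_500, 2_100)  # analyst range Herzog cites, $/TB/year

    physical_tb = raw_tb / reduction   # 140 TB actually provisioned
    saved_tb = raw_tb - physical_tb    # 560 TB no longer managed
    for cost in cost_per_tb_year:
        savings = saved_tb * cost * years
        print(f"at ${cost}/TB/yr: ~${savings / 1e6:.2f}M opex saved over {years} years")
    # -> roughly $2.5M to $3.5M, in the same ballpark as IBM's $3.7M figure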

IBM is so confident that its big data reduction capabilities can save customers big money that it’s making a big guarantee about it.

“We don’t care if you only do de-dupe or only do compression or do a combination — the bottom line is you have data reduction guarantees,” Herzog tells Datanami. “You can get up to five-to-one, which we can guarantee.” (Customers will have to submit to a data analysis to get the five-to-one guarantee; IBM guarantees at least a two-to-one reduction without the data analysis.)

IBM also hopes to reduce the amount of money customers spend on storage through its new Storage Insights offering, which uses Watson-based artificial intelligence and machine learning to right-size storage for customers' specific workloads, provide capacity planning, and otherwise optimize the storage environment.

“The value is how to make sure you don’t overbuy,” Herzog says. “You’ll be able to see the patterns and see what your historical use is so you can predictably look forward.”
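IBM has not published the models behind Storage Insights, but the simplest version of the pattern Herzog describes, fitting a trend to historical usage and projecting it forward, looks something like the sketch below. The monthly usage figures and array capacity are hypothetical.

    # The simplest version of "see your historical use so you can predictably
    # look forward": fit a linear trend to monthly capacity samples and project
    # when the array fills. This only illustrates the idea, not IBM's method.
    def forecast_full(usage_tb, capacity_tb):
        """Least-squares linear fit over monthly samples; returns months until
        projected usage reaches capacity (None if usage is flat or shrinking)."""
        n = len(usage_tb)
        xs = range(n)
        mean_x = sum(xs) / n
        mean_y = sum(usage_tb) / n
        slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, usage_tb)) \
                / sum((x - mean_x) ** 2 for x in xs)
        if slope <= 0:
            return None
        intercept = mean_y - slope * mean_x
        months_to_full = (capacity_tb - intercept) / slope - (n - 1)
        return max(months_to_full, 0.0)

    # Hypothetical 12 months of usage on a 500 TB array, growing ~15 TB/month
    history = [210 + 15 * m for m in range(12)]
    print(f"projected full in ~{forecast_full(history, 500):.0f} months")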

For the rest of this article, which originally appeared in sister publication Datanami, please use this link.

About the author: Alex Woodie

Alex Woodie has written about IT as a technology journalist for more than a decade. He brings extensive experience from the IBM midrange marketplace, including topics such as servers, ERP applications, programming, databases, security, high availability, storage, business intelligence, cloud, and mobile enablement. He resides in the San Diego area.
