The Three Rs of Data Storage: Resiliency, Redundancy, and Rebuilds

This post was originally published on Pure Storage.

At our Pure//Accelerate® conference in June, Pure Storage showed the world some incredible technology with two big announcements. The first was that we would be introducing a 75TB DirectFlash® Module (DFM) this year, with 150TB and 300TB DFMs to follow in the next few years.

The second was what I would call a pretty hot take (as far as hot takes go in the storage industry): With the rapid pace of flash innovation, we predicted that there would be no place for hard drives in the data center by 2028.

Since those announcements, I've gotten the same questions over and over: Isn't Pure worried about making modules that big? What happens when one of them fails? How long would a rebuild take? Does a rebuild affect system performance?

These questions don’t have a simple answer, and that’s because data resiliency at scale is not simple. But the good news is that we’ve made this a non-issue for customers. 
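To put the rebuild question in perspective, here is a minimal back-of-envelope sketch in Python. The module capacities come from the announcement above; the rebuild throughput figures are hypothetical assumptions chosen purely for illustration, not published Pure Storage numbers, and a real system's rebuild behavior depends on far more than a single sequential rewrite.

```python
# Back-of-envelope rebuild-time arithmetic (illustrative only).
# The 75/150/300TB capacities come from the DFM announcement; the
# sustained rebuild rates below are hypothetical assumptions.

def rebuild_hours(capacity_tb: float, rebuild_gbps: float) -> float:
    """Hours to rewrite capacity_tb terabytes at rebuild_gbps GB/s."""
    seconds = (capacity_tb * 1_000) / rebuild_gbps  # TB -> GB, then GB / (GB/s)
    return seconds / 3600

for capacity_tb in (75, 150, 300):
    for rate_gbps in (0.5, 2.0, 8.0):  # assumed sustained rebuild rates
        print(f"{capacity_tb}TB at {rate_gbps} GB/s -> "
              f"{rebuild_hours(capacity_tb, rate_gbps):.1f} hours")
```

Even this crude math shows why module size alone doesn't determine rebuild time: the sustained rate a system can devote to reconstruction matters just as much, which is exactly where system design comes in.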

How Pure Storage Does Resiliency at Scale

As with everything at Pure Storage, it comes down to how we've designed our systems: capable and sophisticated on the inside, yet simple to manage and consume on the outside.

Read the rest of this post, which was originally published on Pure Storage.
