SAN Storage Architectures

Storage arrays come in many shapes and sizes, and I have been involved with quite a few different ones. A question that comes up a lot is: what is the ideal design for the new generations of flash media?

I am going to try a new analogy to explain my thoughts. Storage systems are often described in terms of analogies, and these can be as varied as cruise ships, sushi restaurants or pencil and paper. I am going to try something simple.

All storage arrays have two basic components: controllers that receive and return data, and storage media in which the data is placed. So let’s try people as storage controllers, balls as data, and buckets as the storage medium.

[Diagram: one storage controller (person) placing balls of data into buckets of storage]

The most basic storage arrays had (and have) one controller and a bunch of buckets of storage (disk drives). Of course, controllers do extra work nowadays, like compressing the data and expanding it, protecting it, replicating it and so on. However, that would be hard to draw, so I am going to ignore it; for the moment my thoughts are concentrated on the controller and the media.

Of course, the problem with the basic single-controller design is: what happens when that controller fails or is overwhelmed?

[Diagram: an active controller with a standby controller waiting to take over]

So the next stage of storage array design has a standby storage controller just waiting to take over. This is more reliable, and fairly easy to design and code, but does not help with performance.

[Diagram: two active controllers sharing the IO]

A much better design has both controllers active. It is better because failover can be instant (the second controller is already handling IO), and because you get the use of both of the controllers you have purchased.
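To make the difference concrete, here is a minimal sketch of the point above: only active controllers serve IO, so a standby controller contributes nothing to day-to-day performance. The per-controller IOPS figure is invented purely for illustration.

```python
# Toy model (illustrative numbers only): usable performance of a
# controller pair depends on how many controllers actively serve IO.

PER_CONTROLLER_IOPS = 100_000  # hypothetical capability of one controller


def usable_iops(active_controllers: int) -> int:
    """Aggregate IOPS available to hosts: only active controllers count."""
    return active_controllers * PER_CONTROLLER_IOPS


active_passive = usable_iops(1)  # the standby controller sits idle
active_active = usable_iops(2)   # both controllers serve IO

print(active_passive)  # 100000
print(active_active)   # 200000
```

The same two-controller purchase delivers twice the usable performance when both controllers are active.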

This is quite a common design and works well for a lot of storage requirements, but what happens if your requirements need more capacity? Most arrays support adding more capacity behind the controllers; this is known as a scale-up architecture.

[Diagram: scale up, with more buckets of storage added behind the same controllers]

However, we now have this fantastic new storage medium called flash. In my analogy, flash is a better class of bucket: it can store and retrieve more things, faster. Interestingly, storage controllers can only cope with the performance of about 16-24 flash modules before they can run no faster. So beyond that point, scaling up adds capacity but no extra performance.
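A toy model illustrates that plateau. The 16-24 module range is from the post; the per-module IOPS number and the saturation point of 20 modules are invented for illustration.

```python
# Toy model of the scale-up bottleneck: adding flash modules behind a
# fixed controller pair increases capacity, but delivered performance
# is capped once the controllers saturate. All numbers are invented.

MODULE_IOPS = 50_000                  # hypothetical IOPS per flash module
CONTROLLER_LIMIT = 20 * MODULE_IOPS   # controllers saturate around 20 modules


def array_iops(flash_modules: int) -> int:
    """Delivered IOPS: the lesser of media capability and controller limit."""
    return min(flash_modules * MODULE_IOPS, CONTROLLER_LIMIT)


for modules in (8, 16, 24, 48):
    print(modules, array_iops(modules))
# Beyond ~20 modules, more media adds capacity but no extra performance.
```

Doubling the module count from 24 to 48 changes nothing in this model; the controllers, not the media, are the bottleneck.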

[Diagram: scale out, with additional controllers added alongside the storage]

A storage array that supports scale out will allow you to add controllers.

When looking at scale-out arrays, you need to be aware that some will only allow you to add capacity by also adding controllers. The ideal scenario is one where you can add either capacity or performance independently.
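As a sketch of why that matters, compare a coupled design (every capacity shelf brings a controller with it) with a design that grows capacity and controllers independently. The shelf size and per-controller capacity figures below are invented purely for illustration.

```python
# Toy comparison (invented numbers): controllers required to reach a
# given capacity under coupled vs independent scale-out designs.
import math

SHELF_TB = 25            # hypothetical capacity added per shelf
TB_PER_CONTROLLER = 100  # hypothetical capacity one controller can drive


def controllers_needed_coupled(shelves: int) -> int:
    """Coupled design: a controller arrives with every shelf."""
    return max(2, shelves)


def controllers_needed_independent(shelves: int) -> int:
    """Independent design: add controllers only when capacity demands it."""
    return max(2, math.ceil(shelves * SHELF_TB / TB_PER_CONTROLLER))


print(controllers_needed_coupled(8))      # 8 controllers for 200 TB
print(controllers_needed_independent(8))  # 2 controllers for 200 TB
```

In the coupled design you end up paying for performance you may not need just to get the capacity; in the independent design you buy controllers only when either capacity or performance actually requires them.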

So what is the ideal design for an all-flash array?

To me, the following is clear: active/passive solutions are wasteful of resources, and failover is likely to take longer, so there should be a minimum of two active controllers.

Active/active solutions should mean that all volumes are equally accessible on all controllers; a future blog post will discuss why in detail.

So the final question is: do you need a system that can scale to more than two controllers? There are two reasons you might.

  • If, during the life of the storage array, you may need more capacity than two controllers support. If you do, and you want a global dedupe pool, then you need an array that supports scale out.
  • If, during the life of the storage array, you may need more performance than two controllers support.



Future Impact of Flash memory?

In the consumer world, NAND flash storage has had a massive impact. Since the first commercial flash storage products became available more than 25 years ago, a number of products have exploded in popularity in ways that simply would not have been possible without the high-volume, low-cost availability of flash memory.

If you think of products like sat navs, digital cameras, portable music players, tablet computers or even mobile phones, they would all look very different if they still had to rely on older data storage devices like optical disks (DVDs), magnetic disks (HDDs) or even tape.

[Image: consumer devices built around flash storage]

As the volume of flash memory used in consumer devices has exploded, the price per GB has dropped and the reliability has increased. Falling prices, increasing reliability and increasing capacities have made flash ever more attractive for “Enterprise Workloads”. Pools of flash media have been making their way into data centres for at least the last eight years. We now see customers aiming for “All-Flash Data Centres” because they believe the economic and technological benefits are so large that they want no other medium to be used in their DC.

Having been involved with the deployment of flash-based storage technologies for about six years, I find myself wondering: are any significant technology advances directly attributable to the deployment of enterprise flash?

For example, VDI deployments are more viable with flash storage, and databases are bigger and faster with flash, but both can run without it.

So what technological shift will we see in the next few years, fuelled by the ability to permanently store massively more data, retrieve it near instantly, and do so at a much lower cost than has ever been possible before?

Not just something bigger or faster… something altogether new, something different?