Repost: How do you increase storage utilization?

It's worth getting these concepts clear.

Source: http://blogs.hds.com/hu/2010/03/how-do-you-increase-storage-utilization.html

By: Hu Yoshida on March 16, 2010

A while back we did a storage assessment for a non-HDS customer and showed him that his storage utilization was actually around 30%, which is typical in most accounts. While that was no surprise to the operations people, it was a surprise to the financial people, who could not understand why 70% of their storage capacity, in this case several hundred TB, was not being utilized.

Management was embarrassed, and fingers were immediately pointed at the storage architect and storage administrators, who in turn pointed to the application users who were asking for far more storage than they appeared to need. Management decided that they didn't need to buy more storage and decreed that storage utilization be managed to 60%. They decided to stay with their current vendor and buy that vendor's proprietary software tools to better manage storage utilization. IT operations and storage administrators had to work overtime to implement the tools, monitor allocation and usage, enforce the allocation edicts, and recover from the increasing outages caused by out-of-space conditions.

In my view that was the wrong decision, since decreeing an increase in utilization and working harder was not going to solve the problem. Low storage utilization has been a standard practice to reduce operational costs and provide flexibility. In their case, they bought storage on a three-year cycle, so they had to carry a lead-time buffer of capacity that would hold them until the next acquisition cycle. The IT operations people knew that additional capacity was needed beyond what their users requested, for administration of backup, business continuity, development test, data transformation, and data mining, so they added to the capacity buffer. Application users knew that they needed headroom to grow their applications, and bad things happen if you run out of capacity; lacking a crystal ball to accurately predict their growth, they requested more storage than they expected to use. Storage administrators who wanted to avoid those midnight and weekend calls to shift storage around when someone ran out of capacity added to the buffer as well. Low utilization is one way to manage growth in a dynamic business environment. However, increasing utilization by working harder and micromanaging allocations can lead to more costs and less business agility.

A more effective way to increase utilization is to implement storage virtualization and Dynamic Provisioning. Dynamic Provisioning eliminates the waste of allocated-but-unused space and lets application users over-allocate as much as they think they need, so that they never run out of capacity. Storage virtualization enables existing storage and lower-tier storage to contribute to Dynamic Provisioning pools through virtualization in an enterprise storage controller. With storage virtualization there is no vendor lock-in for storage capacity. Storage virtualization enables the dynamic movement or reallocation of storage capacity during prime time, eliminating the need for midnight and weekend callouts. Storage virtualization and Dynamic Provisioning also eliminate the need for three-to-five-year lead-time buffers for storage acquisition. The lead-time buffers become virtual, and you can add storage capacity incrementally, as you need it, taking advantage of the 30% to 35% yearly price erosion in storage capacity costs.
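To see why the incremental buy wins, here is a back-of-the-envelope sketch in Python. Only the roughly 30% yearly price erosion comes from this post; the $1,000/TB starting price and 100 TB/year growth are made-up figures for illustration.

```python
PRICE_PER_TB = 1000.0    # hypothetical year-0 price in $/TB
EROSION = 0.30           # ~30% yearly price decline (figure from the post)
YEARLY_NEED_TB = 100     # hypothetical capacity growth per year

# Option A: buy the full three-year buffer up front at today's price.
upfront_cost = 3 * YEARLY_NEED_TB * PRICE_PER_TB

# Option B: buy each year's capacity at that year's eroded price.
incremental_cost = sum(
    YEARLY_NEED_TB * PRICE_PER_TB * (1 - EROSION) ** year
    for year in range(3)
)

print(f"up-front buffer : ${upfront_cost:,.0f}")
print(f"incremental buy : ${incremental_cost:,.0f}")
print(f"savings         : {1 - incremental_cost / upfront_cost:.0%}")
```

With these particular numbers, buying each year's 100 TB at that year's price comes out about 27% cheaper than buying the full three-year buffer up front.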

When you increase utilization you run a greater risk of outages caused by out-of-space conditions, especially if you have silos of storage. If you have 10 storage frames each running at 60% utilization, there is a strong likelihood that you will run out of capacity on one of them: demand is rarely spread evenly, so one frame can fill completely while the others sit half empty. Even if you did thin provisioning in each of those frames, one of them could still run out of capacity. That is where storage virtualization can help, by pooling all the frames into one common pool of storage capacity. The excess capacity in any or all of the other storage frames can be used to absorb a peak in demand from any application connected to the pool.
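A quick Monte Carlo sketch makes the silo risk concrete. All the numbers below are hypothetical (ten 50 TB frames, demand averaging 60% utilization with some per-frame variance); the point is how much more often some one silo overflows than the equivalent shared pool.

```python
# Hypothetical demand model: silos vs. one pooled capacity.
import random

FRAMES, FRAME_TB, TRIALS = 10, 50.0, 100_000
random.seed(42)

silo_outage = pool_outage = 0
for _ in range(TRIALS):
    # Per-frame demand: mean 30 TB (60% of 50 TB), std. dev. 10 TB.
    demand = [max(0.0, random.gauss(30.0, 10.0)) for _ in range(FRAMES)]
    if any(d > FRAME_TB for d in demand):    # some silo overflows
        silo_outage += 1
    if sum(demand) > FRAMES * FRAME_TB:      # the shared pool overflows
        pool_outage += 1

print(f"silos: out of space in {silo_outage / TRIALS:.1%} of trials")
print(f"pool : out of space in {pool_outage / TRIALS:.1%} of trials")
```

Under this demand model, roughly one trial in five sees some silo run out of space, while the pooled configuration essentially never does, because per-frame peaks and troughs cancel out.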

In the hype over thin provisioning, some analysts were claiming that thin provisioning could enable users to run their storage at 60% to 80% utilization of real capacity. I would caution against running utilization of real capacity at that level, especially if it is not part of a virtualized pool of storage. For example, if you have a 50 TB storage frame at 80% utilization with thin provisioning, you have only 10 TB of headroom to support that new business application or a sudden spike in demand when something spooks the currency markets on the other side of the globe. Remember, your users may think they have 60 or 80 TB allocated to their applications when the real capacity behind those allocations is only 50 TB, 40 TB of it already consumed. So thin provisioning without the pooling benefit of storage virtualization can be risky when you start to drive higher utilizations.
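Spelling out the arithmetic of that example (the 50 TB frame, 80% utilization, and 80 TB of virtual allocations are the figures from the paragraph above):

```python
real_capacity_tb = 50.0        # physical capacity in the frame
utilization = 0.80             # running at 80% of real capacity
virtual_allocated_tb = 80.0    # what users believe they have

consumed_tb = real_capacity_tb * utilization           # 40 TB in use
headroom_tb = real_capacity_tb - consumed_tb           # 10 TB left
overcommit = virtual_allocated_tb / real_capacity_tb   # 1.6x promised
unwritten_tb = virtual_allocated_tb - consumed_tb      # what users could still write

print(f"consumed : {consumed_tb:.0f} TB of {real_capacity_tb:.0f} TB real")
print(f"headroom : {headroom_tb:.0f} TB")
print(f"risk     : users could still write {unwritten_tb:.0f} TB "
      f"against {headroom_tb:.0f} TB of real free space "
      f"({overcommit:.1f}x overcommitted)")
```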

Even with storage virtualization and Dynamic Provisioning, I would recommend staying below 60% utilization to avoid running out of capacity before you are able to acquire and provision more storage. Another thing to remember is that not all file systems or databases are thin-provisioning friendly, and if your thin provisioning solution cannot reclaim deleted file space, you will have to defragment, or your thin provisioning pool will be eaten up by deleted-file holes.
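A toy model (a hypothetical sketch, not any vendor's API) shows how those deleted-file holes accumulate: without reclamation the pool sees only writes, so its consumption is a high-water mark that keeps rising even when the amount of live data stays small.

```python
class ThinPool:
    """Hypothetical thin pool: pages are drawn on write and, without
    reclamation, never returned when the file system deletes data."""

    def __init__(self, real_tb):
        self.real_tb = real_tb
        self.consumed_tb = 0.0

    def write(self, tb):
        # Every new write draws real pages from the pool.
        self.consumed_tb += tb
        if self.consumed_tb > self.real_tb:
            raise RuntimeError("pool out of space")

    def delete(self, tb, reclaim=False):
        # A file-system delete normally just marks blocks free in the
        # file system; the pool only shrinks if space is reclaimed.
        if reclaim:
            self.consumed_tb -= tb

pool = ThinPool(real_tb=50)
live_tb = 0.0
try:
    for step in range(1, 11):       # churn: write 8 TB, delete 5 TB per step
        live_tb += 8
        pool.write(8)
        live_tb -= 5
        pool.delete(5)              # no reclaim: pool keeps the deleted pages
        print(f"step {step}: live data {live_tb:.0f} TB, "
              f"pool consumed {pool.consumed_tb:.0f} TB")
except RuntimeError as err:
    print(f"step {step}: {err}, with only {live_tb:.0f} TB of live data")
```

Here the 50 TB pool fills up even though only about 26 TB of data is actually live; a reclamation-capable solution (passing reclaim=True in this sketch) would keep pool consumption at the live-data level.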

Storage capacity is cheap and getting cheaper, while operating costs continue to increase. I see nothing wrong in using a buffer of storage capacity as a management tool to reduce operational costs. Yes, you can certainly improve utilization with Dynamic Provisioning and storage virtualization, but don't go overboard. Leave yourself enough headroom for growth and the unexpected spike in demand.