We have all heard the term “tier,” but do you really know what it means and how it can affect your entire enterprise? You might even have tiered storage built out in your environment as you read this, but do you know for sure you are using it to its fullest extent? Tiering your storage simply means separating data based on criteria such as its age or how often it is used. We often hear about tiers 0 through 4, but these labels are a bit misleading and somewhat subjective. Typically, tier 0 or 1 is the most important data set, sitting on the highest-performing array available; tier 2 is usually slower but denser; and so on down to tier 4, cheap, slow cloud storage such as Amazon Glacier. This approach might seem unnecessary these days, but I would like to point out a few reasons why you should consider tiers even in today’s world of ultra-dense SSDs and cheap cloud storage.
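To make the tier labels concrete, here is a minimal sketch in Python of how a dataset might be mapped to a tier by age. The cutoffs and the `classify_tier` function are hypothetical illustrations, not any vendor's actual policy; real tiering engines also weigh access frequency, business criticality, and compliance requirements.

```python
from datetime import datetime, timedelta

# Hypothetical age cutoffs per tier; real policies would also weigh
# access frequency, business criticality, and compliance requirements.
TIER_MAX_AGE = [
    (0, timedelta(days=7)),    # tier 0: hot data on the fastest flash
    (1, timedelta(days=30)),   # tier 1: recently active data
    (2, timedelta(days=180)),  # tier 2: slower, denser disk
    (3, timedelta(days=365)),  # tier 3: on-site archive
]

def classify_tier(last_accessed: datetime, now: datetime) -> int:
    """Map a dataset's last-accessed time to a storage tier (0-4)."""
    age = now - last_accessed
    for tier, max_age in TIER_MAX_AGE:
        if age <= max_age:
            return tier
    return 4  # tier 4: cheap, slow cloud storage such as Amazon Glacier
```

The point is not the specific numbers but that the rules are explicit: once the criteria are written down, the placement decision can be automated.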
The biggest benefit is not necessarily the most obvious, but it is the one that should be driving you to explore this storage strategy: properly designed tiered storage can give you nearly hands-off management. Imagine that the rate at which new data is created in your environment matched the rate at which old data automatically aged off your flash or hybrid array onto something less expensive and less performant, but denser. You would be able to easily spot trends and hot spots and quickly tell which data was used the most: it would be sitting on the most expensive storage. Now imagine that this dense, cheap archive of old data also automatically aged data off to ultra-cheap, long-term cloud storage in the same way. Your on-site storage footprint would stay nearly static, with only slow growth of mostly stale data that changes very little, if at all. You can easily forecast growth, you have a treasure trove of data on secondary tiers that can be mined and analyzed to further refine the new data you create, and your management overhead drops.
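The hands-off aging described above can be sketched as a simple demotion loop. The `Volume` structure, the 90-day idle threshold, and the `age_off` function are all assumptions for illustration; a real array would typically track block-level heat maps rather than whole volumes.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Volume:
    name: str
    tier: int                 # 0 = fastest flash ... 4 = cloud archive
    last_accessed: datetime

# Hypothetical policy: a volume idle for 90 days drops one tier.
IDLE_BEFORE_DEMOTION = timedelta(days=90)

def age_off(volumes: list, now: datetime) -> list:
    """Demote idle volumes one tier; returns (name, old_tier, new_tier) moves."""
    moves = []
    for v in volumes:
        if v.tier < 4 and now - v.last_accessed > IDLE_BEFORE_DEMOTION:
            moves.append((v.name, v.tier, v.tier + 1))
            v.tier += 1
    return moves
```

Run something like this on a schedule and the hot tier stays roughly the same size while stale data drifts toward the archive, which is exactly the steady-state behavior described above.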
When it comes time to scale, a tiered storage architecture gives you more options for how and where to put your dollars. If the flash disks are nearly at capacity, we can look at utilization and move volumes that are not accessed often, and are not benefiting from high-speed drives, to less performant storage. We can also look at scaling our tier 2 to save more space on tier 1, buying slower but larger-capacity drives to take the extra load off the higher-speed tier. Even tiering within the same array can be beneficial. The goal with storage is to utilize as much as possible without impacting performance.
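Choosing which volumes to move off a nearly full flash tier amounts to picking the coldest volumes first. A rough sketch, where the `migration_candidates` helper and its (name, size, IOPS) inputs are hypothetical; real tools would draw on richer telemetry than a single average-IOPS figure.

```python
def migration_candidates(volumes, needed_gb):
    """Pick the least-busy flash volumes until enough capacity is freed.

    volumes: list of (name, size_gb, avg_iops) tuples on the hot tier.
    Returns (names_to_move, gb_freed).
    """
    by_coldness = sorted(volumes, key=lambda v: v[2])  # lowest IOPS first
    chosen, freed = [], 0
    for name, size_gb, avg_iops in by_coldness:
        if freed >= needed_gb:
            break
        chosen.append(name)
        freed += size_gb
    return chosen, freed

# Example: free ~600 GB by demoting the coldest volumes.
flash = [("vol-a", 500, 20), ("vol-b", 200, 5000), ("vol-c", 300, 100)]
print(migration_candidates(flash, 600))  # moves vol-a and vol-c, frees 800 GB
```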
Having all this extra control over where and how you house your data can give you far greater insight into data growth and how your organization responds to that growth. In a one-size-fits-all design (a large array with many of the same drive models), we tend to lump data together by type or by department, without looking at performance requirements or other important attributes like age or last-accessed date. This is called data sprawl, and it can account for up to 70% of all stored data. Moving stale or less critical data off this array onto an archive tier can clear the smoke so you can really see what data is working and what is just taking up space.
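Quantifying that sprawl starts with a simple question: how much of your capacity has not been touched recently? A minimal sketch, assuming a hypothetical inventory of (size, last-accessed) records and a one-year staleness cutoff:

```python
from datetime import datetime

def stale_share(datasets, now, stale_after_days=365):
    """Fraction of total capacity not accessed within the cutoff window.

    datasets: list of (size_gb, last_accessed) pairs -- a hypothetical
    inventory export, not any real array's API.
    """
    stale = sum(size for size, last in datasets
                if (now - last).days > stale_after_days)
    total = sum(size for size, _ in datasets)
    return stale / total if total else 0.0
```

A number like this, tracked over time, is the kind of reporting that tells you whether an archive tier is pulling its weight.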
Tiered storage might be considered one of the older ways to approach the topic, especially with some of the larger, more impressive, and easier-to-scale node-based architectures available today, but remember that storage is really about efficiency and organization, not just capacity. Look at your storage and ask yourself if it could be better, faster, or more organized. Are you struggling to get proper reporting on your data, or maybe wanting to lower your yearly storage spend? Swish has experts who can help you tame data sprawl and declutter your storage, increasing efficiency and scalability while growing your storage infrastructure the right way.