Here are some practical tips for making your data storage run better.
Adding flash in one form or another is a good way to boost performance in many cases, but be aware of the various form factors available. In addition to solid state drives, there are SD cards and various other ways to augment storage with flash.
“There are hardware solution approaches such as adding SSD drives, SSD PCIe cards and so forth,” said Greg Schulz, an analyst at StorageIO Group.
Schulz noted that cache and micro-tiering tools are available, some of which are built into hypervisors (VMware, for example) and operating systems (e.g., Windows Server 2016). There are also add-ons, such as Enmotus FuzeDrive, that provide similar functionality.
Enmotus describes micro-tiering as the automated movement and manipulation of virtual and physical data pages, specifically related to flash and SSD. A data migration engine keeps statistics on each virtual page in the user volume, allowing it to determine when pages should be moved. The idea is to use all of the capacity of the fastest tier of storage, evicting the least-used pages and exchanging them for more frequently requested content.
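The statistics-driven promotion idea can be sketched in a few lines. This is a toy model, not Enmotus's actual engine; the class and method names are hypothetical, and a real engine would also track recency, handle writes, and move pages asynchronously.

```python
from collections import Counter

class MicroTierVolume:
    """Toy model of statistics-driven page tiering: the most-requested
    pages are promoted to the fast (flash) tier, everything else stays
    on the slow tier."""

    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity   # pages the flash tier can hold
        self.access_counts = Counter()       # per-virtual-page statistics
        self.fast_tier = set()               # pages currently on flash

    def read(self, page):
        self.access_counts[page] += 1        # the migration engine's stats

    def rebalance(self):
        # Fill the fast tier with the hottest pages, implicitly demoting
        # the least-used ones.
        hottest = self.access_counts.most_common(self.fast_capacity)
        self.fast_tier = {page for page, _ in hottest}

vol = MicroTierVolume(fast_capacity=2)
for page in [1, 1, 1, 2, 2, 3]:
    vol.read(page)
vol.rebalance()
print(sorted(vol.fast_tier))  # [1, 2] -- pages 1 and 2 are hottest
```

After six reads, pages 1 and 2 have the highest counts and occupy the two-page flash tier, while page 3 remains on the slow tier.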
NAND flash SSDs need garbage collection and optimization done periodically. Some operating systems or storage systems do this automatically, while others let you run it when it suits your needs. TRIM and UNMAP are commands issued to an SSD drive, device or system telling it to perform its low-level NAND flash cleanup and maintenance. On Windows, for example, if you open a volume's properties and click Optimize, it performs a TRIM/UNMAP on an SSD drive, device or card, but runs a defrag on a hard drive.
“TRIM and UNMAP are ways of telling an SSD to self-optimize itself,” said Schulz.
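A maintenance script has to make the same SSD-versus-HDD distinction Windows makes. The sketch below picks a command per media type; `fstrim` (which issues TRIM/UNMAP discards on Linux) and `e4defrag` (ext4 defragmentation) are real utilities, but they are shown only as illustrative examples of the two operations, not a complete or portable solution.

```python
def maintenance_command(mount_point, is_ssd):
    """Choose the periodic maintenance for a volume: TRIM/UNMAP for
    flash, defragmentation for spinning disks (Linux examples)."""
    if is_ssd:
        # fstrim tells the device which blocks are free so its internal
        # garbage collection can reclaim NAND pages.
        return ["fstrim", mount_point]
    # A hard drive gains nothing from TRIM; defragment instead.
    return ["e4defrag", mount_point]

print(maintenance_command("/data", is_ssd=True))    # ['fstrim', '/data']
print(maintenance_command("/data", is_ssd=False))   # ['e4defrag', '/data']
```

In practice the chosen command would be handed to something like `subprocess.run` on a schedule, or left to the OS's own automatic optimization.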
In scale-out storage solutions, adding nodes to boost throughput may not help performance if the network traffic to the cluster is not balanced to take advantage of available throughput. SwiftStack, for example, provides integrated load balancing as a software feature to give admins control over maximizing the throughput performance of a cluster.
“For storage teams that do not have control over load balancing hardware in the network, there are commercial and open-source software options to choose from, including HAProxy, Nginx, and Hipache to name a few popular choices,” said Mario Blandini, chief evangelist, SwiftStack.
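The core job of any of those load balancers is the same: spread client requests evenly across cluster nodes so no single node's network link caps throughput. A minimal round-robin sketch (production tools like HAProxy add health checks, weighting, and connection management on top of this):

```python
from itertools import cycle
from collections import Counter

class RoundRobinBalancer:
    """Minimal software load balancer: hand out cluster nodes in
    rotation so requests spread evenly across available throughput."""

    def __init__(self, nodes):
        self._nodes = cycle(nodes)   # endless rotation over the node list

    def pick(self):
        return next(self._nodes)

lb = RoundRobinBalancer(["node-a", "node-b", "node-c"])
targets = [lb.pick() for _ in range(6)]
print(Counter(targets))  # each node receives exactly two of six requests
```

Without this balancing, adding a fourth node would not help if clients keep hammering the first three.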
Too often, administrators have little visibility into what is really going on in their SAN or with the storage in their server farm. Yet there are plenty of tools that enable IT to see exactly what is happening. SolarWinds and Virtual Instruments are a couple of the well-known names in this field, as are Dell Foglight and Spotlight, among others.
“Gain insight and awareness of your server, storage and the I/O between the server and storage, hardware as well as software,” said Schulz.
A traditional IT skillset that is staging a revival is capacity management. The basic idea is to stay on top of how your capacity is growing, monitor trends, model how capacity is likely to increase in the future, and then buy in such a way that you stay ahead of performance spikes without overbuying. TeamQuest, for example, has a full suite of capacity management products. Other options include SolarWinds, Aptare and BMC.
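The trend-modeling step can be as simple as extrapolating average growth. The sketch below deliberately uses a naive linear model; commercial capacity management tools fit seasonality and richer trends, but the arithmetic illustrates the idea.

```python
def months_until_full(usage_tb, capacity_tb):
    """Forecast when a pool fills up from a history of monthly usage
    samples, using average month-over-month growth (a deliberately
    simple linear model)."""
    growth = [later - earlier for earlier, later in zip(usage_tb, usage_tb[1:])]
    avg_growth = sum(growth) / len(growth)     # TB added per month
    headroom = capacity_tb - usage_tb[-1]      # TB still free
    return headroom / avg_growth

# Six monthly samples growing 2 TB/month, 20 TB of headroom left:
print(months_until_full([40, 42, 44, 46, 48, 50], capacity_tb=70))  # 10.0
```

A forecast like this tells you roughly when to buy, so procurement lead time does not turn a predictable trend into a performance emergency.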
Throwing more exotic and expensive storage at performance problems may, in some cases, only marginally improve the time it takes to fulfill each request.
“Database tuning after system profiling may also be used to boost performance,” said Augie Gonzalez, Director of Product Marketing, DataCore Software.
He gave the example of performance tuning tools such as Heraflux.
“The smoking gun behind unacceptably slow I/O response can often be uncovered by analyzing the depths of I/O queues inside the servers and the number of idle cores,” said Gonzalez.
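One quick way to reason about those queue depths is Little's law: the average number of outstanding I/Os equals the arrival rate times the response time. The sketch below is a back-of-the-envelope check, not a profiling tool; if it shows a deep queue while CPU cores sit idle, the bottleneck is storage, not compute.

```python
def avg_queue_depth(iops, latency_ms):
    """Little's law: average outstanding I/Os = arrival rate x response
    time. A deep queue alongside idle cores points at storage, not
    compute, as the bottleneck."""
    return iops * (latency_ms / 1000.0)

# 5,000 IOPS at 8 ms average latency keeps ~40 I/Os in flight:
print(avg_queue_depth(5000, 8))  # 40.0
```

Cutting latency to 1 ms at the same IOPS drops the in-flight count to 5, which is the kind of improvement flash tiering or caching is meant to deliver.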
As data grows, so does the need to introduce different tiers of storage to serve performance, cost, and location requirements. For environments that need vendor-independent data migration and management across multiple tiers of storage, iRODS (integrated Rule-Oriented Data System) can be a very good option, suggested Laura Shepard, Senior Director Vertical Markets, DDN.
Shepard added that the culprit in lackluster performance for unstructured data can often be file system metadata.
“Look for storage solutions that can do more than simple caching,” said Shepard. “Look for product features that can extend advanced caching to tiers of flash, interfacing to file systems and applications to accelerate data and metadata automatically.”
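Metadata acceleration often comes down to answering repeated lookups from memory instead of the backing store. The sketch below uses Python's `lru_cache` in front of a hypothetical metadata lookup (the backend and its return value are placeholders); it is an illustration of the caching principle, not any vendor's implementation.

```python
from functools import lru_cache

# Counts how many lookups actually reach the (slow) backing store.
backend_calls = {"count": 0}

@lru_cache(maxsize=4096)
def stat_cached(path):
    """Hypothetical file system metadata lookup with an LRU cache in
    front; only cache misses reach the backend."""
    backend_calls["count"] += 1
    return ("metadata-for", path)   # placeholder metadata record

for _ in range(1000):
    stat_cached("/projects/data.bin")
print(backend_calls["count"])  # 1 -- the other 999 lookups hit the cache
```

The same pattern, applied at the storage layer with flash tiers instead of RAM, is what keeps metadata-heavy unstructured workloads from stalling.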
Deploying a hybrid storage system can act as a solid foundation for an effective high-performance storage infrastructure. The Storwize 7000, for instance, is a flexible, modular storage system that includes the storage services required to support the ever-changing business demands placed on storage infrastructures, recommended Don Mead, Vice President of Marketing, SVA Software.
“Applications today are deployed across on-premises data centers, private clouds, and public clouds scattered around the world,” he said. “These environments require an advanced storage performance model that not only cares about the health, utilization and performance of physical storage, but also about the service levels and costs that must be delivered across diverse storage resources.”
Many More Approaches
Of course, the above are far from the only approaches to making data storage run faster, smoother and more economically. There are dozens of additional means of achieving those ends. Just a few not mentioned above include: doing some space cleanup and reclamation using the tools within existing servers and storage arrays; making sure your servers, storage and I/O devices are up to date on software, firmware and drivers; and double-checking that your servers are set to high-performance rather than balanced or low-power modes.
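The space-cleanup tip usually starts with finding what is eating capacity. A minimal first-pass scan, assuming you just want the biggest files under a directory tree (real reclamation tools also find duplicates, stale snapshots, and thin-provisioning waste):

```python
import os

def largest_files(root, top_n=3):
    """Walk a directory tree and return (size_bytes, path) for the
    biggest files -- a first pass at space cleanup and reclamation."""
    sizes = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # file vanished or is unreadable; skip it
    return sorted(sizes, reverse=True)[:top_n]
```

Running it against a data volume gives an immediate short list of candidates to archive, compress, or delete.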
“Also, don’t forget about boosting or addressing server storage I/O (e.g. interfaces, adapters, networks, switches and connectivity bottlenecks),” said Schulz.
Source credit: infostor