Few feats in IT are as rewarding as standing up a new infrastructure and molding it into a production system. Though we may be clicking more than racking these days, thanks to virtualization, it's still exciting to see the fruits of your labor ripen into a solid and reliable resource. In a perfect world, everything comes together perfectly, with every aspect planned and executed precisely as required, when required, and the end result is immediately ready for work.
In reality, we know it's not quite that easy.
I'll use a new virtualization cluster as an example here, but this applies to just about everything in IT, from the network layer to the application layer. The nuts and bolts of the construction are somewhat formulaic. You pull together your chosen server, switching, and storage hardware and start hooking up everything. Your original design is followed more or less to the letter, and as you progress through the build, you hope not to run into anything too surprising, such as an unexpected driver incompatibility or a buggy software stack. Even if all proceeds according to plan and everything looks like it's ready to go, you're far from done. You're really just starting.
Because now you have to hammer the bejesus out of everything until you're confident it's 100 percent ready for production. And you're never going to be 100 percent certain.
It's human nature to hurry toward the light at the end of the tunnel, to hasten your steps as you sense the end of a journey or task approaching. Where we might have been methodical and painstaking with our progress initially, we have an urge to gloss over many seemingly innocuous or minor details when we're nearing the end. And here there be tygers.
Back to our new virtualization cluster: We're replacing an older cluster with a large pile of bigger-better-faster-more, moving from 1G to 10G, from relatively slow storage to fast storage, maybe even as far as from Clovertown to Sandy Bridge. The new gear is going to make everything easier and faster, and there are few who don't look forward to its implementation. Ports are wired, switches configured, storage initialized, shares and LUNs created; the whole miasma of a modern virtualization build happens in relatively rapid succession. After all, if it's a good design, it's essentially a cookie-cutter build.
A few test VMs are built, and they're fast and responsive, blowing the doors off their elderly counterparts. They appear to work perfectly, and quick testing shows everything as 5-by-5. This is precisely where the desire to leap ahead and throw the system into production takes hold -- and where cooler heads need to prevail and spend days or weeks running comprehensive tests on every element before production workloads are introduced.
First, we need to thoroughly exercise the storage from every host in the cluster. Fortunately, that's extremely easy with virtualization. Build a quick Linux VM, script Bonnie++ or even dd runs through a loop, then clone the whole shebang as many times as necessary to put a significant load on each physical host in the cluster, hitting every planned LUN or share on the storage. With randomized sleep times between passes, this produces a randomized workload of streaming reads and writes, or a randomized workload of random reads and writes, or whatever you like. If you really want to stress out a storage subsystem, there are few better ways to do it.
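That loop can be sketched as a small POSIX shell script, one copy running inside each cloned VM. The variable names and tiny defaults here are illustrative assumptions, sized for a safe dry-run; for a real burn-in you'd point TARGET at a mount on each LUN or share under test, raise SIZE_MB well past the VM's RAM to defeat caching, and run hundreds of passes.

```shell
#!/bin/sh
# Illustrative stress loop (names and defaults are assumptions, not a
# standard tool). Each cloned VM runs its own copy; the random sleep
# between passes drifts the clones out of lockstep, so identical scripts
# still produce a randomized aggregate workload on the storage.
TARGET=${TARGET:-/tmp}      # real runs: a mount on the LUN/share under test
PASSES=${PASSES:-2}         # real runs: hundreds of passes over days
SIZE_MB=${SIZE_MB:-8}       # real runs: larger than VM RAM to defeat caching
MAX_SLEEP=${MAX_SLEEP:-2}   # real runs: a minute or more of random drift

i=0
while [ "$i" -lt "$PASSES" ]; do
    f="$TARGET/stress.$$.$i"
    # Streaming write, fsync'd so the data actually reaches the array...
    dd if=/dev/zero of="$f" bs=1M count="$SIZE_MB" conv=fsync 2>/dev/null
    # ...then a streaming read-back of the same file.
    dd if="$f" of=/dev/null bs=1M 2>/dev/null
    rm -f "$f"
    # Random pause (0 to MAX_SLEEP-1 seconds) before the next pass.
    sleep $(( $(od -An -N2 -tu2 /dev/urandom) % MAX_SLEEP ))
    i=$((i + 1))
done
```

Substituting a Bonnie++ invocation (e.g. `bonnie++ -d "$TARGET" -u nobody`) for the dd pair gets you Bonnie++'s mix of sequential and per-character I/O plus seeks, while reading from /dev/urandom or seeking to random offsets turns the streaming workload into a random one.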