“Innovation comes from the producer, not the customer.” – W. Edwards Deming
The Innovation Problem
A classic challenge with new technology and true innovation is that the chasm of change is often too wide and steep for customers and consumers to cross. As a result, it’s easy for great, game-changing ideas to suffer slow adoption. The reason is simple: fear of change gets in the way. Even the thought of change, and the disruption or pain it might cause, can be enough to slow adoption. As I mentioned in my post, Why “Disruptive Technology” makes CIOs run the other way, it took the automobile almost 20 years to gain broad adoption over horses, largely because the change was too great for consumers to embrace. The storage and infrastructure industry has been deeply affected by this. It took over 10 years for the hypervisor to land on 50% of the physical hosts in the datacenter. Creating abstractable software solutions with tremendous innovation and value will help accelerate the next major evolution.
How It Affects Change in Storage Infrastructure
Since data is such a critical piece of the datacenter, customers are hesitant to move the change needle too far for fear of disruption. Forklift upgrades are sometimes easier to accept than real, appreciable change, because a forklift upgrade amounts to a file copy; changing the actual infrastructure design is an entirely different story. With virtualization now mainstream, storage infrastructure has grown tremendously, as it has enjoyed being a core component of the movement. Most of the innovation has come from software residing in the storage controllers or from the incremental upgrade of hardware components over time. As a result, many innovations have trickled into the datacenter through the storage platform.
Convergence, Hyperconvergence and Uberconvergence
A number of new, innovative vendors are now capitalizing on the fact that traditional “backend” storage (the storage array) has become commoditized, with only incremental new features being added. Enter converged and hyperconverged infrastructure. Their approach helps remove the complexities of shared storage in the datacenter by linking storage to the compute layer in the same physical box. While this reduces complexity, it does not solve the innovation barrier or the design challenges of getting the datacenter to its next major evolution. By simply bringing resources closer to the application, you do get some performance benefits. However, a major challenge still exists: there is no abstraction in the actual storage subsystem that allows intelligence and automation to be introduced quickly and easily, without significant reconfiguration (the forklift will still be required). Customers, once again, will be tied to hardware performance advancements and constrained scale-out (tying all components together into a scale-out block, if you will, rather than decoupling resources). As a result, customers will not have the benefit of a truly software-abstracted design within the storage layer.
How the Abstraction Layer Saves the Day
Anytime we’ve witnessed massive change, it’s been carried by the ability to abstract today’s business-as-usual and incrementally add innovation in consumable chunks, if you will. The hypervisor abstracted memory, compute, network and storage from the application as a first step (basic virtual machine abstraction). What came next was the ability to move and change workloads (vMotion), followed by rudimentary dynamic performance intelligence (DRS), and so on. The hypervisor was a pane of glass to the resources that allowed more rapid change to be added over time. I will argue that the hypervisor by itself was not the massive innovation; rather, it was an explosive agent of change as intelligence and automation were incorporated.
With storage, the innovation layer has traditionally lived in the storage controller, but the challenge is that major increases in performance now need to happen in the compute layer, nearest to the application. While the ability to abstract storage IO at the hypervisor kernel level may seem rudimentary today, the rapid introduction of innovation is just being unlocked. It’s unlocked by abstracting the storage control plane for IO and intelligence away from the traditional storage layer and moving it closer to the applications. While the need for shared storage does not go away, the relevance of services layered into the controller comes into question as the storage intelligence layer matures. Flash in the array, deduplication, replication, and the like all become features rather than real change.
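To make the idea concrete, here is a minimal sketch in Python (with entirely hypothetical class names, not any vendor’s actual design) of what such an abstraction layer looks like: an IO layer sits between the application and the array, so intelligence (here, a trivial read cache at the compute tier) can be slotted in without changing the application or the backend.

```python
class BackendStore:
    """Stands in for the traditional shared storage array."""
    def __init__(self):
        self._blocks = {}

    def read(self, lba):
        return self._blocks.get(lba)

    def write(self, lba, data):
        self._blocks[lba] = data


class CachingIOLayer:
    """Hypothetical abstraction layer near the compute tier. It serves
    hot reads from local memory and passes writes through to the array,
    so the array remains the authoritative copy of the data."""
    def __init__(self, backend, capacity=1024):
        self.backend = backend
        self.capacity = capacity
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def read(self, lba):
        # Intelligence lives here, transparent to both sides.
        if lba in self.cache:
            self.hits += 1
            return self.cache[lba]
        self.misses += 1
        data = self.backend.read(lba)
        if data is not None and len(self.cache) < self.capacity:
            self.cache[lba] = data
        return data

    def write(self, lba, data):
        # Write-through: the backend stays consistent, the cache stays warm.
        self.backend.write(lba, data)
        self.cache[lba] = data


store = BackendStore()
io = CachingIOLayer(store)
io.write(0, b"hello")
io.read(0)  # served from the compute-tier cache, not the array
```

The point of the sketch is the seam, not the cache: once IO flows through such a layer, richer intelligence (tiering, replication policy, analytics) can be added there incrementally, without a forklift change to either the application or the array.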
For a deeper view into what is possible when you unlock storage intelligence from the shared array into the server tier, check out the PernixData presentation from Virtualization Field Day 5.
Got questions or comments on storage performance and innovation? Want to learn more about storage intelligence? Send me a note on Twitter @BriVirtual and I will be happy to discuss!
About the Author
Brian Gagnon has over 20 years of consulting experience in the field, as well as in building, leading, and operating technology-based professional services organizations. He spends his time working with technology companies such as VMware, PernixData and others as a technology evangelist, business leader, and services champion. When he’s not talking tech with businesses and partners, he’s most likely on the ski slopes, traveling, enjoying the outdoors, or exploring the world on his motorcycle.