I recently wrote an Expert Paper explaining Data Center Efficiency in practice, using smart tools and methods to ensure energy-efficient operation of a data center without taking on risk in the process.
While writing the paper I came to re-visit one of my big dreams for our beloved data center industry: the truly dynamic data center, capable of self-optimising its efficiency while honouring any dynamic load requirement at any time.
To explain the idea in a little more detail:
- IT compute power is always maintained at a NEED+ level: all applications always have the resources they need to operate, but only a small amount of hot-standby compute power is active at any given time. This obviously requires proactive, dynamic scaling of the IT infrastructure.
- The corresponding network and storage infrastructure is scaled dynamically to reflect changes in IT compute demand.
- The underlying physical infrastructure scales dynamically with the need dictated by IT compute.
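The NEED+ idea can be sketched as a simple control loop. This is a minimal, hypothetical illustration (the function name, the 10% standby fraction, and the "capacity unit" abstraction are my assumptions, not anything from the paper): active capacity tracks current demand plus a small hot-standby buffer, and the same target signal could in principle drive network, storage, and facility scaling.

```python
# Hypothetical sketch of a NEED+ control loop: keep active capacity at
# current demand plus a small hot-standby margin. Names and parameters
# are illustrative assumptions, not a real product API.

def needplus_target(demand_units: int, standby_fraction: float = 0.1,
                    min_standby: int = 1) -> int:
    """Active capacity = current demand plus a small hot-standby buffer."""
    standby = max(min_standby, round(demand_units * standby_fraction))
    return demand_units + standby

# Example: 40 units of demand with a 10% hot-standby margin
print(needplus_target(40))  # 44
```

In a real deployment the demand signal would come from the virtualization layer, and the same target would be handed down to the facility side so power and cooling provisioning can follow the IT load rather than a static worst case.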
Why would that be a good idea, you might ask? Well, first of all, trying to run a dynamic IT infrastructure on top of a very static facility infrastructure will, by definition, not reveal all potential efficiency gains. Furthermore, operating facility equipment significantly below its optimal performance point brings efficiency setbacks of its own. Only by dynamically keeping the two in sync will we be able to get the most out of the equation.
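A small worked example makes the partial-load point concrete. The efficiency figures below are assumptions for illustration only (representative of published double-conversion UPS curves, not any specific vendor's data): the same IT load served at a poor operating point wastes noticeably more power than the same load served near the equipment's sweet spot.

```python
# Illustrative only: assumed efficiency figures, not vendor data.
# The point: the same IT load served on better-loaded equipment
# wastes less power.

def ups_loss_kw(it_load_kw: float, efficiency: float) -> float:
    """Power drawn from the grid minus power delivered to IT."""
    return it_load_kw / efficiency - it_load_kw

# 100 kW of IT load on an oversized UPS at ~20% load (assumed 90% efficient)
static_loss = ups_loss_kw(100, 0.90)
# Same 100 kW on right-sized capacity near its sweet spot (assumed 96% efficient)
dynamic_loss = ups_loss_kw(100, 0.96)
print(round(static_loss, 1), round(dynamic_loss, 1))  # 11.1 4.2
```

Under these assumed numbers, dynamically matching facility capacity to IT load would cut the UPS losses for this load by more than half, and that is before counting similar gains on the cooling side.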
So how realistic is this in today's environment? Well, with Software Defined Infrastructure (SDI) we already have many of the elements needed for this to happen. Virtualization continues marching on, enabling organisations to abstract, pool and automate many elements of the compute side (Software Defined Compute) and the corresponding network (Software Defined Network or Network Functions Virtualization) and storage (Software Defined Storage) needs. This, coupled with the dynamic server power consumption capabilities provided by Intel's Node Manager (CPU firmware) and Data Center Manager SDK, means we're pretty well off on the consumption (IT) side of this equation.
Not as bright is the opportunity on the provider (Facility) side. Though we've been deploying modular concepts around power and cooling for ages (modular UPSes, in-row cooling, hot/cold aisle containment, etc.) and many data centers today are instrumented better than ever before (aggregating loads of sensor data for power consumption, temperature, humidity, etc.), we still seem to lack the actual capability of dynamically controlling the provision of facility capacities.
So let this be a wake-up call to the data center facility equipment manufacturers of the world: You need to stop sub-optimising your UPS and cooling products, step up, and provide the industry with capabilities to dynamically control your equipment. This CANNOT be isolated to your own proprietary domain, since the equation is bigger than you! Your years of inaction are preventing the industry from achieving REAL efficiency goals!
We all know this can be done, and we all know it takes a radically different mindset to get there. However, if we fail to try, we fail at the most important mission we have after guaranteeing uptime and availability.