Is the Software-Defined Data Center just another buzz term?

The software-defined data center (SDDC) describes, in effect, a server-room Utopia: an ideal state of maximum automation, efficiency, performance, reliability and availability where everything in the garden is rosy. It’s a term you hear a lot on the data center conference circuit, where experts gather to tell each other what will come next. But is it for real, or just another buzz-phrase?

The short answer is that while data center efficiency has improved radically in recent years, the idealised SDDC, like other projected Utopian states, will remain in the realms of fiction for many (read: most) organisations.

What is an SDDC? It’s really an umbrella term for a data center where servers, storage and networking have been virtualised, and where various controls and systems adapt automatically to changing circumstances. SDDCs can add and subtract capacity where needed, pool resources and then decouple them, detect security threats, and ‘self-heal’ when issues arise. They are controlled by pre-set policies and require little subsequent human intervention. That, at least, is the theory.
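To make the idea concrete, a ‘pre-set policy’ is essentially a declarative rule that a control loop evaluates continuously. Here is a minimal sketch of that loop in plain Python; the thresholds, pool sizes and one-node scaling step are illustrative assumptions, not any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    """A hypothetical pre-set policy for one resource pool."""
    scale_up_at: float    # add capacity when utilisation exceeds this
    scale_down_at: float  # reclaim capacity when utilisation falls below this
    min_nodes: int
    max_nodes: int

def reconcile(nodes: int, utilisation: float, policy: Policy) -> int:
    """Return the node count the pool should converge to under the policy."""
    if utilisation > policy.scale_up_at and nodes < policy.max_nodes:
        return nodes + 1   # add capacity where needed
    if utilisation < policy.scale_down_at and nodes > policy.min_nodes:
        return nodes - 1   # decouple resources and return them to the pool
    return nodes           # steady state: no human intervention required

if __name__ == "__main__":
    web_pool = Policy(scale_up_at=0.80, scale_down_at=0.30, min_nodes=2, max_nodes=20)
    nodes = 4
    for load in (0.85, 0.90, 0.55, 0.20):  # observed utilisation samples
        nodes = reconcile(nodes, load, web_pool)
        print(f"load={load:.0%} -> pool size {nodes}")
```

In a full SDDC stack the same reconcile-against-policy pattern would also cover storage tiering, network paths and failure recovery, driven by live telemetry rather than a hard-coded list of samples.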

Certainly it’s true that web-scale data centers with hundreds of thousands of servers and many millions of users are today operated by a handful of staff. They represent best practice at the extreme, however, and the needs and capabilities of a Facebook, Amazon.com, Yahoo or Microsoft will be very different from those of the typical enterprise IT shop.

In the real world, mergers and acquisitions will complicate matters by bringing together very different environments. CTOs will then have to work through the resulting tangle as efficiently as time and financial constraints allow. The result might be something of a kludge, at least in the short term. And once that short term elapses, there is likely to be another event that throws a spanner in the data center works…

Also, while the web-scale elite has the benefit of almost unlimited resources, most businesses do not, so it’s vital for them to have insight into the service levels and compute capacity they will need in future. But capacity planning is hard, even for experienced CIOs, and the results are painfully familiar: over-utilisation that puts performance and availability at risk, or under-utilisation that wastes money.
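A toy illustration of the trap, with numbers invented purely for the example: size a pool against average demand and the peaks overwhelm it; size it against the peak and it sits idle most of the day.

```python
# Hourly demand samples (requests/sec) -- invented figures for illustration.
demand = [40, 45, 50, 55, 60, 120, 130, 65, 50, 45]

avg = sum(demand) / len(demand)
peak = max(demand)

# A naive plan: size capacity to the average plus 20% headroom.
capacity = avg * 1.2
print(f"average demand: {avg:.0f} req/s, peak demand: {peak} req/s")
print(f"capacity sized to average+20%: {capacity:.0f} req/s")
print(f"hours over capacity: {sum(d > capacity for d in demand)}")  # over-utilisation

# The opposite plan: size to the peak, and most of the capacity idles.
print(f"average utilisation if sized to peak: {avg / peak:.0%}")    # under-utilisation
```

Either way the business pays: in outages and slow responses, or in hardware that mostly sits dark.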

The macro economy remains in flux, and business is changing fast as virtual capabilities replace physical assets. IT itself is changing too, and few companies have clear insight into what proportion of their estate will move to private cloud, public cloud or managed service providers over the coming years. The very nature of their business might change as new geographies are added and unprofitable business lines are exited in favour of new markets.

And then there are the ghouls of security and business continuity. Governance rules and business leaders alike will demand that human nous stays in the loop. What happens if the service is down, an application has been hacked or the power is out? Some processes will never be fully automated, and telling the CEO ‘don’t worry, the software will fix this’ won’t cut it.

All of this means that SDDC perfection remains tantalisingly out of reach. The best ITers will keep striving to automate wherever possible, removing the need for manual input and always chipping away at inefficiencies. But Utopia is still some way off.