Acuutech: Stay agile in the digital realm - Philip Moss




Philip Moss of Acuutech discusses how to deliver a flexible, scalable, cost-effective IT system in today's world of business agility by using software-defined data centres.


With IT being intrinsic to what we all do within our businesses, and with today's focus on flexibility and change, it seems quite odd that many of the back-end systems deployed into organisations today still follow a very traditional and rigid approach.

If we look back for a second, the complexity and sheer cost of IT solutions meant they were historically treated as infrastructure activities, with a focus on designing and deploying large, rigid solutions that could deliver a set of predefined capabilities over a fixed period.

There tends to be significant investment in a limited number of providers and their proprietary technologies. Costs increase over time and, in most instances, there is a vendor lock-in.

Deployments are fixed-cycle activities - commonly three or five years - with systems being scoped out and deployed with built-in estimates of future requirements, significantly increasing capital requirements and removing the agility to react to changes in capability and technology.

Software-defined data centre

At first glance, it's easy to think of data centres as huge rooms with aisles of machines - systems deployed and operated by only the largest businesses and telecommunication organisations.

Although historically true, in recent years it has morphed into a wider term covering the broader concept of back-end IT systems, from smaller-scale installations through to the traditional rooms of flashing lights and cooling systems.

Therefore, the concept of the data centre applies just as much to the SME or mid-sized deployment as it does to the large enterprise.

Several years ago, a new concept started to emerge, commonly known as the software-defined data centre vision. In simple terms, the goal of the software-defined data centre is to significantly reduce IT systems' capital and operating costs while increasing flexibility, and overall system stability and performance.

This is achieved using industry-standard software to provide all the capabilities traditionally provided by dedicated hardware or proprietary software solutions.

This allows systems to be deployed on industry-standard hardware from multiple providers, with customers able to take advantage of the inherent cost savings of competitive, multi-vendor sourcing. It also removes the need for dedicated subsystems, avoiding the higher capital and operational costs inherent in such locked or proprietary systems.

A key trigger for this change was the very significant increase in computing performance that opened up the world of virtualisation on a mass scale where, broadly speaking, it was possible to virtualise most applications, thus decoupling applications from specific hardware.

In effect, servers and back-end computers changed from being designed and configured around the applications that ran on them to being pools of computing resource, with powerful multicore CPUs and large amounts of memory functioning as multiworkload powerhouses.

Many of the activities traditionally carried out in dedicated hardware could now be undertaken in software; for example, networking or storage management through pure software running on general, rather than specialised, hardware.

Although this approach was technically less efficient than the established dedicated hardware approach, the inherent speed enhancements in off-the-shelf hardware and the resulting cost reductions more than offset this challenge.

The IT sector likes to talk about the concept of 'storage, network and compute'. In a traditional deployment, this means the following:

  • Storage: some form of dedicated data storage solution that lets you access your data and does not lose it if part of the system fails.
  • Network: an interconnection system commonly made up of dedicated networking hardware and cables to allow every part of the IT system to talk to all the other parts.
  • Compute: computers (servers) to run software and services.

In the software-defined data centre world, these capabilities and services still exist; however, they are all delivered via software. That does not mean bits of computers - hard disks or cables, for example - have suddenly vanished. Instead, the intelligence to manage and control them is provided by standard commodity software (Windows, Linux and VMware are common examples), harnessing the power of the underlying hardware to provide the data storage, high-performance networking and computing systems your business needs to operate.
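To make the pooling idea concrete, here is a minimal, purely illustrative Python sketch - every class and host name is invented for this example - of how a software layer can aggregate commodity servers into a single pool of storage, network and compute:

```python
from dataclasses import dataclass


@dataclass
class Host:
    """A commodity server contributing resources to the shared pool."""
    name: str
    cpu_cores: int
    ram_gb: int
    disk_tb: float


class SoftwareDefinedPool:
    """Treats many industry-standard servers as one resource pool,
    rather than dedicating each box to a single application."""

    def __init__(self, hosts):
        self.hosts = list(hosts)

    @property
    def total_cores(self):
        return sum(h.cpu_cores for h in self.hosts)

    @property
    def total_ram_gb(self):
        return sum(h.ram_gb for h in self.hosts)

    @property
    def total_disk_tb(self):
        return sum(h.disk_tb for h in self.hosts)


pool = SoftwareDefinedPool([
    Host("node-1", cpu_cores=32, ram_gb=256, disk_tb=8.0),
    Host("node-2", cpu_cores=32, ram_gb=256, disk_tb=8.0),
    Host("node-3", cpu_cores=32, ram_gb=256, disk_tb=8.0),
])
print(pool.total_cores, pool.total_ram_gb, pool.total_disk_tb)
# 96 768 24.0
```

Real platforms do far more, of course - replication, live migration, virtual networking - but the principle is the same: the hardware is generic, and the intelligence lives in software.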

Make maintenance simple

Management and system maintenance have historically been a complex and costly part of organisations' IT solutions. Many business leaders will be familiar with the time and cost overheads of staff setting up new systems or maintaining them, with constant cycles of patching, reacting to the latest virus or hacking threats and performing housekeeping tasks such as backups.

Software-defined data centres help with this challenge, as they are inherently driven through automation and workflow.

This does not mean that your staff suddenly need to be programmers or command-line gurus. Instead, simple-to-use (commonly web-based) dashboards are used to configure, deploy and manage systems from a single interface, with repeatable, reliable, automated workflow occurring under the covers.
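The "repeatable, automated workflow" idea can be sketched in a few lines of Python. This is a hypothetical example - the function and resource names are invented - but it shows the principle: each provisioning stage is an ordinary function, so the same sequence runs identically every time, with no hand-typed commands at a console.

```python
def provision_vm(name, cores, ram_gb, disk_gb):
    """Hypothetical provisioning workflow: every stage is a function,
    so the sequence is repeatable and auditable."""
    log = []

    def allocate_storage():
        log.append(f"{name}: {disk_gb} GB virtual volume created")

    def configure_network():
        log.append(f"{name}: virtual NIC attached to tenant network")

    def start_compute():
        log.append(f"{name}: VM booted with {cores} cores / {ram_gb} GB RAM")

    # The workflow itself is just data: an ordered list of steps.
    for step in (allocate_storage, configure_network, start_compute):
        step()
    return log


log = provision_vm("web-01", cores=4, ram_gb=16, disk_gb=100)
for line in log:
    print(line)
```

A web dashboard in a real product is driving exactly this kind of scripted sequence under the covers; the operator only fills in the parameters.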

No more staff standing hunched over a small screen and keyboard in front of a server in a darkened room. No more loss of IT services when a new update or patch is released.

Innovation driven benefits

Because the sector has embraced this software-first paradigm, progress and evolution in the area have been rapid, with many of the key providers and vendors on their fifth- or sixth-generation offerings, providing a vast suite of capabilities that deliver end-to-end computing platforms.

This rapid evolution has created the trend towards a concept known as the 'hyper-converged' model, with the traditional roles of data storage, networking and virtualisation all provided within a single server.

System capacity expansion and fault tolerance are then provided by increasing the number of servers - what the IT industry calls the 'scale-out' model.
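The scale-out arithmetic is easy to illustrate. Assuming, for the sake of example, a hyper-converged cluster that keeps three copies of every block of data (a common replication factor), usable capacity grows simply by adding nodes, and the replica count determines how many node failures can be absorbed:

```python
def usable_capacity_tb(nodes, disk_per_node_tb, replicas=3):
    """Raw cluster capacity divided by the replication factor."""
    return nodes * disk_per_node_tb / replicas


def node_failures_tolerated(replicas=3):
    """With r copies of every block, up to r - 1 nodes can fail
    before data becomes unavailable."""
    return replicas - 1


# Growing a cluster of 8 TB nodes simply by adding servers:
for n in (3, 4, 6):
    print(n, "nodes ->", round(usable_capacity_tb(n, 8.0), 1), "TB usable")
# 3 nodes -> 8.0 TB usable
# 4 nodes -> 10.7 TB usable
# 6 nodes -> 16.0 TB usable
```

The figures are illustrative only - real products vary in replication schemes and overheads - but the pattern holds: capacity, performance and resilience all scale by adding identical commodity servers, not by buying a bigger box.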

In many ways, hyper-converged is the ultimate expression of the software-defined data centre approach, through combining the traditionally separate parts of the IT platform into a single unit, and using the power of software to provide the controls and capabilities required to make everything operate correctly.

For your business, this further enhances the overall benefits, with even greater reduction in hardware requirements and system complexity, while still delivering on those core goals of solution agility and scale.

Not all vendors share the same vision

Flexibility in software provider and hardware is one of the key underlying benefits of the software-defined data centre, and we would love to be able to advise you to select your vendor of choice and dive straight in.

Unfortunately, as is so often the case with suppliers, many come with their own slants and vested interests. In this case, the key area of consideration is the technology history of a provider and how that may affect their offering.

Vendors with a strong lineage in hardware-based solutions may, on the surface, appear to fully embrace the software-defined data centre vision; however, they are often ultimately keen to protect existing investments in hardware technologies, and therefore limit software compatibility options or force hardware platform lock-in. This is most commonly seen with some networking or storage hardware providers - both have significant investments in dedicated hardware chips, with associated high profit margins they are keen to protect.

Meeting the business need

This article opened by highlighting the classic business goals of agility, cost-efficiency and quality customer experience, and how, although we all rely on our IT platforms in our business, the way we design, deploy and maintain them does not inherently lead to delivering on those goals.

The software-defined data centre finally breaks that situation. By moving all configuration and services to software, you are empowered through inherent flexibility in your IT systems.

The use of industry-standard hardware drives down capital investment, gaining the direct advantage of the constantly improving performance-per-pound inherent in commodity technology.

As your infrastructure is defined in software, you can easily change your system's configurations or capabilities in line with your business requirements - even the ones you don't know about yet.

Operational overheads are reduced through standardisation, automation and workflow, reducing costs and improving system reliability. By embracing the power of the software-defined data centre, you also open the door to a raft of hybrid and cloud-connected solutions.

In summary, through embracing the software-defined data centre approach, your IT systems finally flex to meet the needs of your business, no longer being a slave to the capabilities or costs of IT.

Philip Moss, group chief product officer at Acuutech.