
After virtualization and cloud, what’s left on premises?

By Craig Mathias, Principal, Farpoint Group
News | Oct 26, 2017 | 13 mins | Virtualization

Extreme virtualization leaves just switches, access points, secure routers


How much of an organization’s IT capabilities – including processors, storage and network – can be virtualized and moved into the cloud?

Quite a bit, according to Craig Mathias, principal at advisory firm Farpoint Group. The enterprise network of the future will consist of access points, switches to provide interconnect and power, and routers that combine security functions, traffic optimization and related capabilities. That’s it. Everything else will be provisioned as services in the cloud.

So-called “extreme virtualization” will enable continuous access to appropriate computing and information, even as requirements evolve over time, Mathias says. Economics will drive the transition, as enterprises look to better manage IT costs and curb spending on traditional capital investments and ongoing maintenance.

The transition will occur over the next decade, Mathias says. It’s already underway, with significant adoption of cloud services for compute, storage and network functions. Some network management and operations platforms have also shifted to the cloud.

Read on to see how we got here and what comes next.

With virtualization hard at work across essentially all of IT, we’ve begun to explore the next phase in the evolution of this powerful and versatile approach to provisioning computation, storage, networking and more.

But while the technical benefits are close to overwhelming, we need to begin our discussion with the kicker that will place an evolved definition of virtualization at the heart of future IT strategies across organizations of all sizes and missions.

And with that, it is, as they say, all about the money.

Many IT practitioners are constantly reminded of a very stark truth: IT budgets in general never recovered from the effects of the Great Recession of almost a decade ago. A commonly restated rule of thumb is that every year brings a demand from senior management that IT accomplish 10% more with 10% less funding. While the situation may not be as dire as that in every IT shop, we have nonetheless seen an overarching emphasis on cost control, affecting both capital expense (CapEx) and operating expense (OpEx) budgets. No surprise here, really; IT is usually considered to be a cost center, not a profit center, and the performance of the IT function overall is gauged by the (admittedly, rather vague) metric of benefits accruing to end-user productivity, and not, of course, by the degree of adoption of cool new technologies.
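Taken literally, that rule of thumb compounds in a hurry. A minimal back-of-the-envelope sketch, in which the five-year horizon and the unit starting values are our illustrative assumptions and nothing more:

```python
# Back-of-the-envelope illustration of the "10% more with 10% less" rule of thumb.
# The five-year horizon and unit starting values are assumptions for illustration only.
budget, demand = 1.0, 1.0
for year in range(1, 6):
    budget *= 0.90   # funding shrinks 10% per year
    demand *= 1.10   # expected output grows 10% per year
    print(f"Year {year}: budget {budget:.2f}x, demand {demand:.2f}x, "
          f"implied productivity gap {demand / budget:.1f}x")
```

By year five the gap between what is asked and what is funded approaches a factor of three, which is why cost control dominates the conversation.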

CapEx vs. OpEx

A general strategy for dealing with this challenge that was recommended in the early days of the recession was to increase CapEx so as to decrease OpEx. CapEx includes the physical gear, software, and non-recurring engineering (NRE) required to get any given installation or upgrade planned, purchased, installed and configured as optimally as possible. CapEx, owing to the faster/better/cheaper tradition in IT, benefits from both manufacturing economies of scale and the higher performance, in terms of both product functionality and operations-staff productivity, inherent in the technological advances central to the introduction of innovative new products.

OpEx, on the other hand, is labor intensive, with the associated cost curves often moving in quite the opposite direction from those of CapEx. No matter how good a given IT professional’s skills, human productivity has limits – you know, sleep, the occasional weekend off, and the ever-present possibility of all-too-human error. So, the strategy at the time was, simply put: Invest in newer technologies so as to increase the productivity of IT staff, thereby limiting OpEx and gaining from the benefits of upgrades to the newer technologies and products that would more often than not be in the planning pipeline regardless. Simple.

Except now it’s becoming clear that such a strategy also has its limits. CapEx upgrade cycles have stretched significantly, because of the above-noted slow growth in CapEx budgets, but also perhaps due to a decreasing overall rate of innovation in high tech itself. And while enhanced operations-staff productivity is often easy to observe, maintenance costs in many cases now constitute a major component of OpEx, often compensating vendors for up-front purchase discounts but limiting savings on the OpEx side. Fortunately, a reexamination of OpEx enabled by a new strategy we have been calling “extreme virtualization” holds great promise for managing IT costs, this time by shifting costs back into the OpEx domain, but with an interesting twist in the form of another key technology trend: the cloud.
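To make that cost-shifting argument concrete, here is a minimal sketch comparing a hypothetical on-premises refresh (up-front CapEx plus annual maintenance) with an equivalent cloud subscription over one lifecycle. Every figure is an assumption chosen purely for illustration, not Farpoint Group data:

```python
# Hypothetical lifecycle cost comparison; every figure below is an assumption
# used only to illustrate the CapEx-to-OpEx shift described in the text.
YEARS = 5

capex_purchase = 500_000       # up-front hardware, software, and NRE
annual_maintenance = 90_000    # support contracts and ongoing upkeep

annual_subscription = 160_000  # cloud service fee replacing the above

on_prem_total = capex_purchase + annual_maintenance * YEARS
cloud_total = annual_subscription * YEARS

print(f"On-premises lifecycle cost: ${on_prem_total:,}")
print(f"Cloud (everything-as-a-service) cost: ${cloud_total:,}")
print(f"Difference over {YEARS} years: ${on_prem_total - cloud_total:,}")
```

The point is not the specific numbers but the shape of the spend: the cloud case has no up-front cliff, and the recurring fee is precisely the line item that competition among suppliers can drive down.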

Extreme virtualization

The concept of extreme virtualization began with several longer-term planning exercises that Farpoint Group participated in over the past few years, and at the root of each of these was a single question: What does IT infrastructure look like in 2025?

The past three decades of CapEx spending have been dominated by performance enhancements (to today’s minimum of 1 Gbps Ethernet, for example), coverage and capacity improvements for wireless LANs, specialized hardware for performance management (for example, accelerators of various forms), and, of course, a broad range of security solutions. The mix-and-match interoperability central to networking made it possible to build highly customized solutions via incremental enhancements as required, but those solutions were often accompanied by complexity that clearly has a negative impact on OpEx. As point-product innovations consolidated into more comprehensive and manageable offerings, however, the 10% rule could still be respected.

This consolidation was followed by an even more important set of innovations with corresponding cost reductions: the rise of the cloud as a platform for networks. While many if not most definitions of virtualization have focused on virtual machines and similar technologies, a more contemporary and appropriate definition of virtualization can also include functions and capabilities that have historically been based in locally provisioned hardware and software, but which are today available as services in the cloud. We thus suggest a strategy that moves costs in the opposite direction from the one previously applied, this time from CapEx to OpEx, by virtualizing as much of the IT infrastructure as possible into cloud-based services.

Such a strategy is already hard at work in many organizations today. Just for starters, computational infrastructure (servers in the traditional sense) is now available, even on demand, as cloud-based services. IT organizations see no difference in capabilities; these virtual servers (and virtual machines on cloud-resident physical servers) can be used in a manner identical to local hardware. Ditto for storage, with WAN performance often the only factor currently blocking the transition to cloud storage as primary storage, and not just for collaborative or backup applications.
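As one small illustration of how indistinguishable a cloud-resident server is from local hardware, the following sketch requests an on-demand virtual server through AWS’s boto3 SDK; the region, machine image ID, instance type, and tag are placeholder assumptions, and any major provider’s equivalent API would serve the same purpose:

```python
# Minimal sketch: provisioning an on-demand virtual server in the cloud.
# The region, AMI ID, instance type, and tag below are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t3.medium",          # modest general-purpose instance
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "cloud-resident-server"}],
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Requested instance {instance_id}; it can now be used like locally installed hardware.")
```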

[Chart: virtualization features and benefits. Credit: Farpoint Group/Network World]

We’re even seeing significant application of network virtualization, particularly with respect to network functions virtualization (NFV). While much of the emphasis here has been on carrier and service-provider solutions, the possibility of moving what formerly required local network hardware into the cloud, again with service and capacity on demand, applies broadly to individual end-user organizations as well. The continuing transition to software-defined networking (SDN) provides further motivation here, with computation in the cloud (SDN controllers, for example) replacing dedicated local networking components in the interest of enhanced flexibility, security, performance – and, again, cost reduction.
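To make the SDN point concrete: rather than logging into a local appliance, policy is pushed to a cloud-resident controller over its northbound API. The controller URL, endpoint path, and rule schema in this sketch are hypothetical, loosely modeled on the general style of such APIs rather than on any specific product:

```python
# Hypothetical sketch: pushing a forwarding/security policy to a cloud-hosted
# SDN controller instead of configuring a local network appliance.
# The URL, endpoint path, and JSON schema are illustrative assumptions only.
import requests

CONTROLLER = "https://sdn-controller.example.com"  # hypothetical cloud controller

flow_rule = {
    "name": "isolate-guest-vlan",
    "match": {"vlan_id": 30, "ip_proto": "tcp", "dst_port": 445},
    "action": "drop",          # block SMB traffic from the guest VLAN
    "priority": 100,
}

resp = requests.post(
    f"{CONTROLLER}/api/v1/flows",   # hypothetical northbound endpoint
    json=flow_rule,
    timeout=10,
)
resp.raise_for_status()
print("Policy accepted by controller:", resp.json())
```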

In addition, significant portions of the management and operations arsenal are also moving to the cloud. Management consoles (wired, wireless, security, other IT, and beyond) are now widely available as cloud-provisioned services, with service charges – including maintenance and enhancements – billed monthly. The cloud brings unparalleled convenience, including anytime/anywhere/any-device access, along with easy scalability, to IT shops of all sizes and missions. VMs on servers in the data center dedicated to network management requirements? Nope – no longer required.

What’s left on premises?

Examining just the networking requirements of IT going forward, we find the need for only a very limited set of functionality, as follows:

  • Wi-Fi access points (APs) With the majority of client devices now connecting to organizational networks wirelessly, the coverage and capacity provisioned by contemporary Wi-Fi is today critical. Enhancements to the IEEE 802.11 standards yield greater security, faster and more reliable connections with higher overall capacity, and lower prices driven by the consolidation of functions into chipsets and by marketplace competition; they also drive a corresponding evolution in the wireless capabilities of client devices. Given advances like Wave 2 of 802.11ac, which features multi-user MIMO (MU-MIMO), the upcoming 10-Gbps 802.11ax, and the 60-GHz technologies 802.11ad and 802.11ay (the latter of which might even reach 100 Gbps!), there appears to be no significant upper bound on overall WLAN capacity, essential for organizational success irrespective of mission.
  • Ethernet switches Interconnecting and powering all those APs, as well as providing the occasional wired drop and implementing security and traffic policies across the network value chain, is the otherwise transparent Ethernet switch. There is some debate as to the long-term viability of the 2.5/5-Gbps products now relatively common, with 10 Gbps a safer (if, for now, somewhat pricier) bet given the above-noted evolution of Wi-Fi technologies. We expect that hierarchies of switches ranging from edge to core will remain the preferred architecture and implementation strategy, with increasing levels of provisioned throughput continuing their traditional outward migration to the edge. And while we expect more distributed and cooperative WLAN control-plane implementations, Wi-Fi controllers, where required, will disappear into the resulting hierarchy of switches – or the cloud.
  • What used to be a router This element provides the essential interface between organizational LANs and service-provider WANs. We have, of course, moved quite a distance from the multiprotocol router of 30 years ago (remember IPX/SPX, NetBEUI, and Burroughs Poll/Select, among others?) and have in fact returned to the roots of the single-protocol Interface Message Processor (IMP), reasonably described as the router of the ARPANET, the direct ancestor of today’s Internet, and arguably a precursor of today’s software-defined networking (SDN) technologies as well. The router going forward will of course provide IP addressing and routing functions (NAT and the like), VLANs, and other familiar capabilities, but also a broad array of security functions, traffic optimization (including class and quality of service as well as load balancing), and related capabilities; a minimal configuration sketch follows this list. Its functionality, while highly configurable in software via the management console, is straightforward, and the device will essentially be transparent in operation. Of particular interest, however, is the provisioning of overlapping, redundant WAN connections, again for performance optimization (moving functionality to the cloud always highlights the need for more capacity here), but also for the resilience that derives from eliminating single points of failure.
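As promised above, here is a minimal configuration sketch of the familiar functions such a device consolidates (a tagged VLAN, NAT toward the WAN, and a simple traffic-management queue), expressed as a Python script driving standard Linux networking tools. The interface names, VLAN ID, and addresses are assumptions for illustration, not a recommended design:

```python
# Minimal sketch of familiar edge-router functions (VLAN, NAT, basic QoS)
# driven from Python via standard Linux tools (iproute2, iptables, tc).
# Interface names (eth0, wan0), the VLAN ID, and addresses are assumptions.
import subprocess

def run(cmd: str) -> None:
    """Run a configuration command, raising if it fails."""
    subprocess.run(cmd.split(), check=True)

# VLAN tagging for a segmented internal network
run("ip link add link eth0 name eth0.30 type vlan id 30")
run("ip addr add 192.168.30.1/24 dev eth0.30")
run("ip link set eth0.30 up")

# NAT so internal addresses share the WAN-facing interface
run("iptables -t nat -A POSTROUTING -o wan0 -j MASQUERADE")

# Simple traffic management on the WAN link (fair queuing with low latency)
run("tc qdisc add dev wan0 root fq_codel")
```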

Everything else as a service

The above rather limited premises-based arsenal, however, leads to the central feature of extreme virtualization introduced above: Essentially all other networking functionality is resident in the cloud and is provisioned and purchased as a service. This includes, of course, servers and their supported applications (and virtual machines), storage (even primary storage in many cases), and, via network functions virtualization, major elements of organizational networks as well. Also included in cloud-provisioned form are unified wired/wireless management consoles, network analytics, and related capabilities.

The advantages of this almost “Everything-as-a-Service” (EaaS) approach are numerous:

  • Availability Given that EaaS infrastructure is resident in the cloud, and the cloud is connected to the Internet, required services are available from wherever connected users happen to be. Note that there is no difference in security provisions between premises-based and cloud-based infrastructure; the same requirements, procedures and solutions apply to both.
  • Reliability and resilience Cloud suppliers will compete not just on capacity and features, but also on availability, reliability, and resilience. As there are already and will continue to be multiple suppliers in the cloud-services domain, end-user organizations will specify and receive these assurances in agreements and contracts as a matter of course.
  • Scalability More capacity required? No problem. Suppliers will compete in this dimension as well. Additional capacity will, in many if not most cases, be available on demand with no advance notice, and at a market-set competitive price. And smaller firms can have “big company” IT and networking that they can grow with from Day 1.
  • Controlled evolution Upgrades and enhancements have traditionally required large budgets, careful planning, and staged deployments. The EaaS model places the responsibility here into the domains of service providers who will develop significant experience and expertise in these activities. Moving to, for example, an SDN-based infrastructure will involve minimal effort on the part of organizational IT, network, and operations management, with end-users seeing minimal if any interruption to service.

As is almost always the case, competition will reduce costs to the absolute minimum possible, with additional improvements over time as suppliers enhance their knowledge, methods, and procedures, and amortize these via economies of scale across potentially very large customer bases. Our conversion of CapEx to OpEx is thus complete, with the potential, we believe, for massive savings across the lifecycle of any given installation.

We can even extend our extreme virtualization concept all the way to the edge of the network, into the devices used to access it. While BYOD has become the norm here today, end-users are still left with the burden of maintaining those elements of their device’s functionality not covered by the organization’s enterprise mobility management (EMM) solution. Imagine instead a device-provisioning model based on renting or “borrowing” a device on a temporary basis. Choose a device on demand from a local cache, selected for the form factor desired at a given moment; insert one’s smart card (which could be a USB key); authenticate with a password or other second factor; and voilà – one’s (virtual, of course) “desktop” appears. Appropriate management and control functions, replacing the traditional OS, transparently assure a given device’s configuration, integrity, and reliability. All done? Remove the smart card, and you were never there.

Transitioning to extreme virtualization

The overarching concept of extreme virtualization is simple: continuous access to appropriate computing and information, even as requirements evolve over time, largely replacing traditional capital investments and the ongoing maintenance of network and IT infrastructures. Note also that the extreme virtualization model could extend even into premises infrastructure, with the rather limited set of hardware elements noted above also leased from a service provider, representing a real opportunity for carriers and integrators to extend their business models – and, once again, with competition here benefiting end-user organizations.

One potential issue for many will be the requirement for continuous network connectivity, both client and WAN, for normal operations. Let’s face it: The viability of offline IT activities expired some time ago, as today’s real-time, collaborative model for information access means that anyone off the network is truly out of the loop. Again, the extreme virtualization model is initially driven by the requirement to minimize ongoing costs, but the provisioning of the enhanced reliability and availability required here ultimately seals the deal.

One final point: We do expect meaningful impacts on the business models of traditional network equipment suppliers as the transition from products to services proceeds. For many of these, however, the evolution to extreme virtualization will introduce new marketplace opportunities and thus keep the network equipment industry viable and growing.

We expect the transition to extreme virtualization to take at least 5-10 years, but managing the cost constraints that began this discussion ultimately dictates that such will be the only direction forward. And many of us, to be sure, cannot wait.

Craig Mathias, Principal, Farpoint Group

Craig J. Mathias is a principal with Farpoint Group, an advisory firm specializing in wireless networking and mobile computing. Founded in 1991, Farpoint Group works with technology developers, manufacturers, carriers and operators, enterprises, and the financial community. Craig is an internationally-recognized industry and technology analyst, consultant, conference speaker, author, columnist, and blogger. He regularly writes for Network World, CIO.com, and TechTarget. Craig holds an Sc.B. degree in Computer Science from Brown University, and is a member of the Society of Sigma Xi and the IEEE.
