
Why microservices are the foundation to a digital future

Opinion
Aug 28, 2017 | 5 mins
Virtualization

To extract full value from the cloud, companies must make sure that they aren’t bringing the equivalent of a cutlass to a gun fight when it comes to migrating existing applications and accelerating software development. Companies will need to change application models to suit this new environment.

Credit: Thinkstock

There’s no doubt that digital transformation (DX) is revolutionizing the way we do business, and cloud computing serves as a key cog in the DX machine. Cloud’s elasticity can indeed help digital businesses communicate more rapidly and increase innovation. But to extract full value from the cloud, companies must make sure that they aren’t bringing the equivalent of a cutlass to a gun fight when it comes to migrating existing applications and accelerating software development.

Here is what I mean: many businesses start their migration journeys by lifting and shifting existing on-premises applications into the cloud, making few to no changes to the applications themselves. But running the same old monolithic application architectures in the cloud means that your applications aren’t built to maximize cloud benefits. Just the opposite: they often present scalability issues, increase cost and require time-consuming application support. Ultimately, this erodes DX strategies, which depend on modernizing, rapidly iterating, and scaling applications.

To fully maximize the cloud, companies need to change application models to suit this new environment. At the same time, this model must also work with existing virtualized infrastructures, as cloud and on-premises IT infrastructure must co-exist for some years.

Apps built for DX

So, what to do? Lift and shift can work as a viable first step if you know that the application already performs well on premises. From there, companies can lift and extend by refactoring the application, making significant adjustments so that its architecture is compatible with a cloud environment. They can also opt for a full redesign and rewrite it as a cloud-native application, a much more work-intensive option reserved for high-value apps that require optimal performance and agility. This is a space where the enterprise can take a much bigger leap than its service operator compatriots, streamlining its own networks of its own accord and liberating itself from vendor lock-in.

How can the enterprise go about this? The answer lies in microservices and containers, two high-growth technologies that are powering DX strategies at companies such as Saks Fifth Avenue and BNY Mellon, according to the Forrester Research report, “Why The CIO Must Care About Containers.”

With a microservices approach to application development, large applications are broken down into small, independently deployable, modular services, each representing a specific business process and communicating through lightweight interfaces such as application programming interfaces (APIs).
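
To make the idea concrete, here is a minimal sketch of one such independently deployable service: a hypothetical “inventory” service exposing a single business capability through a lightweight HTTP/JSON API. The language (Go), the service name, and the endpoint are illustrative assumptions, not details from the article.

```go
// Minimal sketch of a single-purpose microservice: a hypothetical "inventory"
// service that exposes one business capability over an HTTP/JSON API.
// All names and values here are illustrative.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type stockLevel struct {
	SKU      string `json:"sku"`
	Quantity int    `json:"quantity"`
}

func main() {
	// One small, independently deployable service per business process;
	// other services interact with it only through this lightweight API.
	http.HandleFunc("/stock/", func(w http.ResponseWriter, r *http.Request) {
		sku := r.URL.Path[len("/stock/"):]
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(stockLevel{SKU: sku, Quantity: 42})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Because the service owns a single business process and hides everything behind its API, it can be redeployed, scaled, or reused by other projects without touching the rest of the application.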

This approach supports DX activities in several ways. Microservices are easily deployed, scale well, and require less production time, while individual services can be reused in different projects. Thus, developers can work more quickly and update applications more rapidly. There are a couple of drawbacks, however. Frequently accessed microservices require an increased number of API calls, which can add latency and degrade application response time, as the sketch below illustrates. Moreover, the need to have multiple microservices operate in concert at any given moment creates a multitude of interdependencies within the application. It therefore becomes more challenging to monitor the performance of these applications and quickly identify the root cause of performance degradations.
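
The latency drawback is easier to see with a second hypothetical service that cannot answer until it has called the inventory service sketched above: every downstream hop is an extra API call on the critical path. The service names, host, port, and timeout value below are assumptions for illustration only.

```go
// Illustrative sketch of chained API calls: a hypothetical "orders" service
// that must call the inventory service before it can respond. Each extra hop
// adds network latency to the end-to-end response time.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// A short timeout keeps one slow downstream microservice from
	// stalling the whole request chain (value chosen for illustration).
	client := &http.Client{Timeout: 500 * time.Millisecond}

	http.HandleFunc("/orders", func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		// One extra hop: the hostname "inventory" is assumed to resolve
		// to the service from the previous sketch.
		resp, err := client.Get("http://inventory:8080/stock/ABC-123")
		if err != nil {
			http.Error(w, "inventory service unavailable", http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// The downstream round trip is pure added latency for the caller.
		fmt.Fprintf(w, "inventory said %s (downstream call took %v)\n", body, time.Since(start))
	})
	log.Fatal(http.ListenAndServe(":8081", nil))
}
```

Multiply this pattern across dozens of services calling each other and the interdependencies, and the difficulty of pinpointing which hop degraded response time, grow quickly.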

Containerization is a virtualization method that helps solve some of the latency and efficiency problems of microservices. A container bundles an application together with the pieces it depends on, such as files, environment variables, and libraries. Unlike traditional virtual machines, however, containers share the host operating system kernel; without the overhead of hypervisor processing, many more microservices can run on each server, significantly boosting application performance.

Code-independent service assurance helps address the second challenge of microservices: monitoring the multitude of interdependencies. It provides visibility into the communication and transactions across microservices without the need to instrument the bytecode. This methodology is the equivalent of monitoring wire data across traditional networks, adapted to virtualized and containerized environments. It is not only application agnostic, but also capable of providing insights at every layer of the service and application stack.

Empowered with this visibility, the enterprise gains greater clarity on what is going on across the physical and virtual wires of its infrastructure, applications, and services. In a world where data is currency and application and service assurance are the basis for investment, this method of ensuring visibility and performance is crucial. Add to this the ability to detect anomalies that may indicate security breaches, and the resulting solution becomes an integral part of a successful DX and business assurance strategy.

Agility and other benefits

While monitoring and assuring microservices performance may be challenging, doing so is highly advantageous and drives innovation and business agility. With microservices and containers, services can be created and altered quickly and easily. Adopting microservices allows enterprises to refactor their applications effectively, either before migration or after they lift and shift them to the cloud, as well as to develop from scratch applications that are optimized to operate in private and public cloud environments.

Of course, this requires a cultural change that promotes experimenting, adapting, and implementing at a quicker rate. Moving from a fail-safe to a safe-to-fail environment with microservices and containers provides the perfect opportunity for this, provided robust service assurance is in place. Along with encouraging a culture of innovation, it will allow the enterprise to be far more rapid when it comes to implementing new services and fixing problems.

This microservices-led architecture, combined with robust service assurance, will be crucial for bringing the full benefits of agile service delivery and cloud elasticity at reduced cost into play, and will help the enterprise dominate the game.

Contributor

As area vice president of strategic marketing at NETSCOUT, Michael Segal is responsible for market research, enterprise solutions marketing, analyst relations, customer advocacy, advertising, and social marketing.

Michael’s product management experience spans ten years at Cisco Systems, where he managed all aspects of product line life cycles for several successful product lines. Michael's technical areas of expertise include SaaS/cloud, virtualization, mobile IP, security, IP networking, Wi-Fi/wireless, VoIP, and remote access. Michael holds patents in areas of networking and wireless mobility.

The opinions expressed in this blog are those of Michael Segal and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.
