How to Handle the Continuous Delivery of Microservices

The world of API architecture and development is tricky in many ways. Unfortunately, there is no such thing as a “perfect” solution, and with every new implementation, new problems are bound to crop up.

It is important to remember, then, that even the most positive, powerful decisions in API architecture could have significant issues in the long run that, if not recognized in the hazy glow of post-adoption euphoria, could easily interfere with the success of an API or collection of APIs.

Case in point: continuous delivery of microservices. How does a provider handle consistent updates across a suite of microservices? How is communication optimized and managed? More to the point, how is cross- and backwards-compatibility handled between different versions of the same client?

These questions have serious implications for data management, marketability, user adoption, and more. Today, we’re going to look at this problem, and all the issues it creates, while offering some basic solutions that will help to mitigate larger issues and streamline delivery.

What is a Microservice Really?

One way to conceptualize microservices is to think of an API as a book. A book is nothing more than a method adopted by an author, using their own style and language, to translate their thoughts into written form and transfer them, via reading, into your own mind. While the comparison isn’t perfect, it’ll work for our purposes.

Consider the case of Artamène, by Georges and Madeleine de Scudéry. The novel, first published in serial format and later combined into a single volume as initially intended, is arguably the longest book in history, boasting 1,954,300 words over 13,095 pages.

While the content and story are interesting, we’re focused on physical properties here: with such a large book, activities such as error checking, translation, and even reading become a chore. The problem is not the content itself, but the method of delivery.

It might in fact be some of the finest literature of its time, but for many readers, its sheer length, and the resultant weight of the intended single-volume format, greatly reduces its usefulness. Try reading it on a crowded train in that format and you’ll see the negatives first hand.

On the other side, consider the Harry Potter series by J.K. Rowling. Significantly shorter and with less content overall than Artamène, it has nonetheless been a bestseller across the world. Forgetting for a moment how great the story, dialogue, and universe are, Harry Potter owes its success in large part to its digestibility.

Each Harry Potter book covers a story arc tied to the theme of the larger tale, and each is distinct enough to be engaging on its own, functioning somewhat as a standalone work. Each is easy to digest singly, but made more powerful by the rest of the series.

This is fundamentally what a microservice is. While we could use a monolithic, singular API, it brings with it a lack of portability, an absence of digestibility, and difficulty in error checking and translation. By breaking a service up into separate functions and creating “mini APIs” for each, strong enough on their own but more powerful together, we create a suite of APIs that are portable, easy to check for errors, easy to implement, and in most cases simply better.

All of this comes with the caveat that microservices necessitate a dedicated backend in the form of a microservice architecture. This can be a time and effort cost in and of itself, but in many cases, it’s a perfectly acceptable price for the final benefits.

More on microservices architecture: Asynchronous APIs in Choreographed Microservices

The Hidden Challenges of the Microservice Format

It’s not all daisies and sunshine, though; there are some significant issues with this type of content delivery that can’t be explained away with an amazing literary metaphor. Whenever you segment the functionality of one property into the functionality of many, the most significant challenge you run into is the topic that sparked this piece: communication between the individual pieces, and the problems that arise from it.

Simply put, how do we ensure that each microservice application is compatible with the rest of the services? How do we make data compatible between services? Do we deploy all microservice functionality concurrently, or would this negate the benefits of the segmented architecture? How do we handle continuous delivery of microservices?

Data Scaling

When it comes to transferring data from one version of a microservice application to another, perhaps the easiest (if least efficient) method is data scaling. When we talk about data scaling, we’re really talking about converting data, dynamically upscaling or downscaling it between incompatible services.

Take, for instance, a Geolocator API that pulls the geodata from your phone and matches it to an approximate place in a map application. Another API, the CheckIn API, is then called from the microservice collection to notify you of nearby restaurants, hotels, and landmarks, using geodata from other users who have checked in to those places.

Unfortunately, this version of the Geolocator API delivers your location in a different data stream format than the CheckIn API expects, as the CheckIn API requires more efficient data combination and encoding to protect privacy than a simple locator application does.

In this case, the data could simply be dynamically converted. The CheckIn data could be anonymized and sanitized, removing any private personally identifiable information before encoding, and then decoding this content on the native user application.
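To make this concrete, here is a minimal sketch of such a dynamic conversion, assuming hypothetical payload shapes for both services (the field names below are illustrative stand-ins, not any real API’s schema). The adapter downscales coordinate precision and strips personally identifiable information before the data ever reaches the CheckIn side:

```python
# Hypothetical sketch: convert a Geolocator-style payload into a
# CheckIn-style payload, anonymizing it along the way.

def geolocator_to_checkin(geo_payload: dict) -> dict:
    return {
        # Downscale precision so exact coordinates are never shared.
        "lat": round(geo_payload["latitude"], 3),
        "lon": round(geo_payload["longitude"], 3),
        # PII such as device_id and user_name is dropped entirely,
        # never forwarded to the CheckIn service.
        "accuracy_m": geo_payload.get("accuracy_m", 100),
    }

raw = {"latitude": 59.32932, "longitude": 18.06858,
       "device_id": "abc-123", "user_name": "alice"}
print(geolocator_to_checkin(raw))
# {'lat': 59.329, 'lon': 18.069, 'accuracy_m': 100}
```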

The problem here is, of course, overhead. Putting the weight of decoding on the shoulders of the user is not fair, and eats resources on the native application. This could be done “in the cloud”, but this raises further security questions. Users could be notified of this limitation, and although this is the ethical and legal thing to do, it could result in a lower user base.

That being said, dynamic content translation from service to service is a solution, and it is used by many application suites (such as the Google suite of Drive, Gmail, Calendar, etc.).

Ensuring Compatibility

Even setting aside real-time data concerns, there’s the obvious issue of compatibility between versions of the API. When updating one microservice, how do you ensure previous versions are compatible?

One easy way is to simply add support for old data types per the needs of the consumer. Using simple metrics, you can largely track which version of an API your users are utilizing, and which data types they depend on. If you know that 0% of users rely on the old data type of an archaic API version, you can remove support for that data type in a later revision, while maintaining basic support for the 1.5% still using the last update of your service.
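As a rough illustration, such version-aware support might look like the following sketch. The version markers and field names are hypothetical; the point is that legacy payloads are upgraded at the edge while a simple counter records how often each version still appears, telling you when support can safely be dropped:

```python
# Hypothetical sketch: normalize old payload versions to the current
# one, while counting usage per version as a simple metric.

from collections import Counter

version_usage = Counter()  # requests seen per payload version

def normalize(payload: dict) -> dict:
    version = payload.get("version", "1.0")
    version_usage[version] += 1
    if version == "1.0":
        # v1.0 sent a single "coords" string; v2.0 uses numeric fields.
        lat, lon = payload["coords"].split(",")
        return {"version": "2.0", "lat": float(lat), "lon": float(lon)}
    return payload

normalize({"version": "1.0", "coords": "59.329,18.069"})
normalize({"version": "2.0", "lat": 59.329, "lon": 18.069})
print(version_usage)  # Counter({'1.0': 1, '2.0': 1})
```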

While this certainly adds some bloat, it’s a much more elegant development solution than the alternative. Your userbase is your lifeblood, and without them, an API is fundamentally useless. Ensuring that data is still supported regardless of revision might add some bloat, but intelligently tracking datatypes being used can lead to long-term API size reduction.

There is, of course, an argument to be made for decentralizing this data handling. If an API has data type issues from revision to revision, creating a central “routing” API is arguably a strong solution as well. While this adds yet another service to the mix and could create a chokepoint in the data stream, offloading data handling to a central application that can route and convert data regardless of origin keeps that complexity in one place.
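A minimal sketch of such a routing layer, again with hypothetical service names and converters, might look like this: the router knows which payload version each downstream service speaks and converts data on the way through, so no individual service has to.

```python
# Hypothetical sketch: a central "routing" API that converts payloads
# to whatever version the target service expects before forwarding.

CONVERTERS = {
    # (source version, target version) -> conversion function
    ("1.0", "2.0"): lambda p: {**p, "version": "2.0"},
}

SERVICE_VERSIONS = {"geolocator": "2.0", "checkin": "2.0"}

def route(service: str, payload: dict) -> dict:
    source = payload.get("version", "1.0")
    target = SERVICE_VERSIONS[service]
    if source != target:
        payload = CONVERTERS[(source, target)](payload)
    # In a real system this would be an HTTP call to the service;
    # here we just return what would be forwarded.
    return {"service": service, "payload": payload}

print(route("checkin", {"version": "1.0", "note": "lunch spot"}))
```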

When removing endpoints, consider these API Retirement Flaws and Best Practices

Supporting a Function, Not a Network

Many of these problems arise from a failure in viewpoint, however, rather than a failure in architecture. Too many developers unnecessarily segment their API into a collection of microservices when the functionality does not really warrant it. This ends up creating a network of applications that demand attention to ensure that multiple products can use each API, and that each API stays updated and compatible.

Not every API needs this, though. Imagine a notepad API, where you can jot down some basic ideas. Do we really need an API to change pen color, an API to spellcheck, an API to save the file, and an API to share the file? Some of that functionality is a handful of lines of code — so why do we need to unnecessarily segment the functionality into a network?
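To underline the point, here is that “handful of lines” as plain functions inside a single hypothetical notepad service, with no separate APIs in sight:

```python
# Hypothetical sketch: notepad features as plain functions in one
# service, rather than four networked microservices.

def set_pen_color(note: dict, color: str) -> dict:
    return {**note, "pen_color": color}

def save(note: dict, path: str) -> None:
    with open(path, "w") as f:
        f.write(note["text"])

def share(note: dict) -> str:
    # e.g. produce a shareable representation of the note
    return f"shared: {note['text']}"

note = {"text": "jot down some basic ideas", "pen_color": "black"}
print(share(set_pen_color(note, "blue")))
```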

This becomes a nightmare during significant development as well. Do you really want to push beta revisions to every client variation, to every function exposed as an API? Where do you stop?

Let’s make this very clear: you should be supporting a function, not a network. Everything you do in your API’s space, format, organization, and architecture should be to the betterment of the function your API is designed to perform. If this takes the form of a network, then so be it, but simplifying to a function rather than a network can remove many of these issues.

When You Have to Support a Network

Not every situation can be simplified this way, though. Sometimes you have to support a network, and not a function. In these cases, simple planning and development management go a long way toward negating these issues.

Suppose you have multiple products that share some services, plus some specific products that utilize only one or two services and don’t touch the shared ones. Rolling out a beta version of these products or services might be a chore, given the issues of compatibility, of reusing previous services while pushing out new ones, and of handling this top-level communication.

How does a provider do this effectively? Like almost everything, organization will set you free. Plan your revisions, and verify compatibility in an internal environment. Integrate legacy support where needed, and create a “compatibility layer” within specific applications where you know the userbase’s function calls differ vastly from the new functions and calls.
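In sketch form, such a compatibility layer might map old call names onto the new function so that legacy clients keep working through the rollout. The call names here are hypothetical:

```python
# Hypothetical sketch: a compatibility layer that adapts legacy call
# shapes onto the current function.

def find_nearby(lat: float, lon: float) -> list:
    """The new function exposed to up-to-date clients."""
    return [f"places near {lat},{lon}"]

LEGACY_CALLS = {
    # old call name -> adapter onto the new function
    "locate_places": lambda params: find_nearby(params["latitude"],
                                                params["longitude"]),
}

def dispatch(call: str, params: dict) -> list:
    if call in LEGACY_CALLS:      # old userbase, old call shape
        return LEGACY_CALLS[call](params)
    return find_nearby(**params)  # new call shape

print(dispatch("locate_places", {"latitude": 59.3, "longitude": 18.1}))
```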

The goal of versioning is not to drastically change things for your userbase right off the bat, but to slowly draw them to new versions. Wanting to change everything all at once is a noble pursuit, but one that breaks and segments your community.

While you can centralize the data handling so as to remove the beta limitations from the user, the fact is that most of the issues arising in microservice delivery are those of approach, rather than of technical limitations.

Another Solution — Wrappers and Containers

There is another solution, and one that this author considers the “best” of those presented here. Adopting wrappers and containers rather than simplifying functionality through segmentation is perhaps the clearest, if not the easiest, solution possible.

The general concept behind a wrapper is to surround the codebase with a thin layer that establishes a constant, unchanging interface for the user and converts requests to whatever format is needed behind it. One example of this approach is GraphQL, which can act as a wrapper over REST services, collecting multiple endpoints behind a single entry point.

Functionally, this means that the application in question just needs to contact the server in order to know the limitations on data type, format, and structure. The data handling is taken entirely out of the hands of the local application, and is instead handled and dictated by the server handling the data.
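In sketch form, with hypothetical backend names, the wrapper’s constant entry point might look like this: the interface the client sees never changes, while the lookups behind it can be reversioned or swapped out freely.

```python
# Hypothetical sketch: a single, stable entry point that fans requests
# out to whatever backends currently serve each field.

BACKENDS = {
    "location": lambda: {"lat": 59.329, "lon": 18.069},
    "checkins": lambda: ["cafe", "museum"],
}

def entry_point(fields: list) -> dict:
    # The client-facing contract stays constant; only the backend
    # lookups behind it change between versions.
    return {field: BACKENDS[field]() for field in fields}

print(entry_point(["location", "checkins"]))
# {'location': {'lat': 59.329, 'lon': 18.069}, 'checkins': ['cafe', 'museum']}
```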

Likewise, the concept of the container method is to contain like services and their required assets in a singular package. A system like the Docker container solution can package dependencies and functionalities into a single service. This is incredibly important for compatibility, as the compatibility service or layer can be packaged alongside new versions of the application to ensure cross-compatibility with ease.

Conclusion

Issues of cross-compatibility, revision functionality, and continuous delivery are not matters of technical limitation, but of approach. By maintaining legacy support between versions of the same application and offloading much of the data handling away from the user, you can ensure compatibility with little impact on the user.

Any of the solutions here could work, though the wrapper and container format is likely the most effective for most situations. Once an API provider commits to this long-term support, simple solutions such as these will go a long way toward making the continuous delivery of microservices as simple and painless as possible.