It’s all going to be serverless — the question is “When?”

Jouni Heikniemi · Published in Statuscode · Aug 21, 2017

The past 10 years have taught us to embrace the elasticity and manageability of the cloud. The cloud sparked the intoxicatingly powerful notion of being able to have a new server whenever you wanted. Next, we learned to appreciate higher-level platform services: queues, API gateways, authentication and so on. Is Serverless Nirvana next?

Many associate serverless with the current functions-as-a-service products (and understandably so), and having been disappointed with those, they discount serverless altogether. That’s too narrow a view.

In this post, I’ll make a case for how the evolution of serverless platforms will change our view in the near future. I’ll divide general-purpose serverless into three waves, and look at how they work together to provide a platform far broader than the functions-as-a-service (FaaS) products.

Defining serverless

The hardware is still there, so being serverless is always a level-of-service abstraction, an illusion created for your benefit. As such, it has two defining characteristics: invisible infrastructure in lieu of configured VM images and invocation-based billing instead of an hourly fee.

It’s not as nebulous as you’d first think — much of the cloud is already serverless. When using fundamental services like AWS S3 or Azure Storage, you pay per GB stored (the cost of the disk you use), per I/O operation (the cost of the compute resources needed for data access) and per byte transferred (the cost of the network pipe). The servers are there, split between hundreds or thousands of customers; you just don’t think about them.

What makes many developers uneasy is the notion of running your own code serverless — in other words, general-purpose serverless computation.

How do I know my code is going to run? How do I debug and monitor the environment? What’s my strategy for hardening the server?

These are valid gut reactions and questions — the underlying non-functional issues that must be solved before we’re even able to look into the abyss of actually defining a good serverless architecture. Let’s talk about our options.

The event-driven wave

The first piece of serverless compute with broad usage is the functions-as-a-service business: AWS Lambda and Azure Functions. Both services host snippets of code that can be executed on demand. You don’t write applications, you write pieces of applications with event rules that trigger your code when needed. “Run X when I get an HTTP request at /foo”, “Kick off Y when there’s a message on this queue” and that sort of thing.

Functions can be quite simple: just a method or a few, written to execute a single task.
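To make that concrete, here’s a minimal sketch of such a snippet as an HTTP-triggered function, written for AWS Lambda’s Node.js runtime with an API Gateway proxy integration (the event shape and names are illustrative, not from any real application):

    // A minimal HTTP-triggered function: “Run X when I get an HTTP request at /foo”.
    // Illustrative sketch only; assumes an API Gateway proxy integration that
    // delivers the request as an event object and expects a response object back.
    export const handler = async (event: { path: string; body: string | null }) => {
      // The platform decides where and when this runs; we only express the logic.
      return {
        statusCode: 200,
        body: JSON.stringify({ message: `Handled ${event.path}` }),
      };
    };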

You don’t know or care about the server, your code just executes. And now that you’re in functions-land, you’re billed by the fraction of a second, based on memory and CPU usage. Whether you get one invocation a day or a million, it doesn’t matter — you just scale invisibly.

So yes, we’re definitely serverless now. But functions alone are not going to inherit the Earth. Why?

  • Not all workloads lend themselves to an event-based single-operation triggering model.
  • Not all code can be cleanly separated from its dependencies, and some dependencies can require quite intensive installation and configuration.
  • Your development language and/or paradigm may not be supported by the FaaS products.
  • Migrating an existing application to a functions-as-a-service model is typically complicated enough to be financially unjustifiable.

All these are valid architectural concerns.

Additionally, there are a lot of tooling issues that make FaaS products painful to use for some scenarios. I’m not worried about those — all these products improve at a rapid pace, and the problems of 2016 are… well, so 2016 today. If the tooling problems block you now, you’re likely to get unblocked soon. The question is just “When?”.

The flowchart-driven wave

If you broke up with the functions wave because your workload simply doesn’t fit the model of small chunks executed by events, you’ve typically hit one of two things: either your workflow requires more complicated orchestration, or you need to be running continuously — or for a significant period of time, which is kinda the same.

Complicated orchestration was first tackled by products such as Azure Logic Apps and AWS Step Functions, which allow you to draw a flowchart of a long-running workflow. The workflow can invoke your own Functions code, allowing the injection of custom functionality within a framework of straightforward orchestration.

Creating an Azure Logic App to detect updates in CRM and post them onwards. The code required to orchestrate a multi-step asynchronous integration is reduced to actually writing the required steps.

Workflow orchestration products make many complex things easy. Inserting a 24-hour delay into a shopping feedback process? Just add a delay task, and your execution will magically resume a day later. Need to act on somebody mentioning your product on Twitter? A couple of clicks, and you don’t have to think about polling Twitter APIs and reacting on them. The out-of-the-box activities not only ease the pain of orchestration, but also remove the need to write client code for most common services.
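To illustrate, here’s roughly how that 24-hour delay could be modeled with AWS Step Functions, sketched as a TypeScript object in Amazon States Language form (the state names and function ARN are made-up placeholders):

    // Sketch of a feedback workflow: wait 24 hours, then invoke a function.
    // The ARN and the state names are hypothetical placeholders.
    const feedbackWorkflow = {
      StartAt: "WaitOneDay",
      States: {
        WaitOneDay: {
          Type: "Wait",
          Seconds: 86400, // nothing runs during the wait; the platform holds the state
          Next: "RequestFeedback",
        },
        RequestFeedback: {
          Type: "Task",
          Resource: "arn:aws:lambda:us-east-1:123456789012:function:requestFeedback",
          End: true,
        },
      },
    };

Nothing executes while the Wait state is active; consistent with the step-count billing discussed below, you pay per state transition, not for the idle day.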

While a workflow is well-suited for modeling business processes, it is not a panacea for making all workloads serverless. For example, really complex decision trees are still pretty cumbersome to represent as workflows. Raw code retains its unique expressive potential. Also, the step-count billing model for workflows makes frequent polling and overly granular division of labor costly.

In short, neither Functions nor Workflows properly address the question of continuously running tasks with lots of moving parts. And while this may sound like a fairly limited problem, it is actually huge for one extra reason: almost every application created in the last few decades is designed at least partially as a continuously running task.

Therefore, we need one more tool in our serverless bag of tricks.

The container-driven wave

Container technology landed a few years ago, largely with the rise of Docker, and it certainly landed with quite a splash. Containers defined the next generation of the virtual machine. And while the initial offerings focused on the Linux space, Windows support is now getting there, and container orchestration technologies (Mesosphere, Kubernetes etc.) are becoming more mainstream.

The thing about containers isn’t just that they replace VMs with a more lightweight abstraction. They also act as a general-purpose packaging mechanism for applications. If your Node app needs a particular Nginx configuration, set it up in your container. If your ASP.NET web site leverages a background Windows Service, you can layer that into your container image.

“So OK, Jouni, you’re obviously making the case for containers as a software packaging medium, but where does that get us with serverless? Containers are essentially just packaged mini-servers running on a container host, yet another server! Really, they are almost the nemesis of serverless!”

I’m glad you asked. I believe there are two elements the container story needs to get right before it can contribute to a serverless platform: serverless hosts and container maintenance.

Hosting your containers without servers

Container clusters are an efficient hosting platform for lots of uses, and containers can really ramp up the workload density when compared to VMs.

But to go serverless, we need to forget about VMs waiting for work. We need invocation-based billing built on detailed consumption metering.

The first mainstream offering to do this is Azure Container Instances (ACI), released for preview in July 2017. ACI allows us to kick off containers on demand, without thinking about the infrastructure they’re going to run on. You want to spin up a container instance with one virtual CPU and two gigs of memory?

az container create --name JouniDemo --image myregistry/nginx-based-demo:v2 \
    --cpu 1 --memory 2 --registry-login-server <registry> \
    --registry-username <user> --registry-password <password>

There you go. You’ll be hit with a bill of $0.0025 for invoking the container, plus $0.0000125 per second for each GB of memory and each core consumed. So running this thing for 10 seconds will set you back $0.002875. If you do it once an hour for a month, you’ll pay $2.07.

So that’s microbilling in action, and it can be quite efficient indeed. You don’t need to have a VM for your batch tasks, and if you need high burst capacity, serverless containers deliver with a latency measured in seconds or less, instead of the considerably longer start-up required for a VM cluster.

But even with a serverless container hosting story in place, we’re still left with the dilemma of containers containing servers — i.e. the “How to get your base images patched” question.

Container maintenance through automation

One of the tenets of serverless is enabling the developer to focus on the business, not the plumbing. Containers are a great tool, but by definition, they come with a technical payload of maintaining the contained environment.

You construct your application by picking an OS base image, layering the needed services on top and finally injecting your application. When you release a new version, you rebuild your container image and off you go. The thing is, once the base image and dependencies are baked into your image, how are they going to get updated? How is that recent Windows patch or new Linux kernel security fix going to make it into your running application?

Thinking about questions like this is a bit antithetical to the nature of serverless. Routine dependency maintenance is not the focus of a serverless developer, and hence should be automated.

But the line between routine maintenance and critical decision-making is blurred. For example, a typical web site will happily accept its OS updates and see no adverse effects from them. But where do you draw the line? Should your web server get updated without you knowing? A new Node.js version?

This is a tricky problem, but one that is being looked into. For this discussion, I recommend listening for a few minutes to Microsoft’s Steve Lasker being interviewed on .NET Rocks #1459, starting at about 37 minutes into the recording.

The full possibilities of this line of thinking are beyond the scope of this post, but imagine yourself in a future where:

  • Your container ships with an automated test suite that verifies the key functionality of your workload.
  • The platform around you knows about base OS image update semantics (for example, “Alpine Linux 14.1.5 has a critical security fix”).
  • The platform is able to test out new patches for you. When a base image update occurs, it can try to rebuild your application image on it, run the test suite and report on the results, proactively.
  • You may even drive automatic updating with policy-based controls — “When a critical OS update lands, auto-update my running application image as long as no tests marked critical fail” (sketched below).
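No platform offers such controls today, but to make the idea tangible, here’s a purely hypothetical sketch of what a policy like the last bullet could look like; every name in it is invented:

    // Purely hypothetical: a declarative auto-update policy of the kind a future
    // container platform could offer. None of these names correspond to a real API.
    interface TestResult { name: string; critical: boolean; passed: boolean; }

    const updatePolicy = {
      trigger: "base-image-critical-security-fix", // e.g. the Alpine fix above
      action: "rebuild-image-and-run-test-suite",
      deployWhen: (results: TestResult[]) =>
        // proceed only if no test marked critical has failed
        results.every(r => !r.critical || r.passed),
    };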

And suddenly, the server-containing containers seem so much more serverless. We’re not there yet, but this is the path. Containers enable much more complicated workloads than simple code-snippet function frameworks do, and by mitigating the operations-stage side effects of the included dependencies, they become much more business-focused as well.

Everything good now?

The “It’s all going to be serverless” mantra from the title was followed by the When question. It should now be obvious that the answer isn’t “Now”. The platforms still have a lot of holes to plug.

For example, running a container workload 24/7 on Azure Container Instances micro-billing is too expensive; you’ll want a VM host for that. Getting the underlying VM patching totally automated is yet another challenge. Container maintenance automation isn’t there yet. The logging and monitoring parts of FaaS and workflow products have been a constant source of grief for both AWS and Azure developers. And first-class support for Windows containers? Ah yes, that’s still a work in progress.

But the tooling is improving at a rapid rate. Both function-based serverless and the workflow products have taken huge leaps ahead in the last year. The container story is still in the works, but there’s ample pressure on all the parties to deliver on it.

Further, the notion of splitting workloads into smaller pieces, microservices, is ingrained in the idea of serverless. To maximally leverage the orchestration capabilities of a serverless framework, there must be something to orchestrate. This is another field where significant development is happening. Services like Azure Event Grid deliver connectivity between elements of the preceding waves, but also between platforms: You could easily glue AWS Lambda and Azure Logic Apps together, while spinning up Container Instances as a reaction to changes in your data warehouse. All of this is more of the same: small, reaction-based, business-focused workloads coming together to solve a larger problem.
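For a taste of that glue, here’s a sketch of the event envelope Azure Event Grid delivers to subscribers. The field names follow the published Event Grid event schema; the subject and payload are invented examples:

    // The Event Grid event envelope (field names per the Event Grid schema).
    interface EventGridEvent<T> {
      id: string;
      topic: string;     // the resource that raised the event
      subject: string;   // what the event is about
      eventType: string; // e.g. Microsoft.Storage.BlobCreated
      eventTime: string; // ISO 8601 timestamp
      data: T;           // publisher-specific payload
      dataVersion: string;
    }

    // Any webhook subscriber (an Azure Function, a Logic App, or even an AWS
    // Lambda behind an HTTPS endpoint) receives this shape and reacts to it.
    const example: EventGridEvent<{ url: string }> = {
      id: "1807",
      topic: "/subscriptions/<sub-id>/resourceGroups/demo/providers/Microsoft.Storage/storageAccounts/demostore",
      subject: "/blobServices/default/containers/uploads/blobs/report.csv",
      eventType: "Microsoft.Storage.BlobCreated",
      eventTime: "2017-08-21T12:00:00Z",
      data: { url: "https://demostore.blob.core.windows.net/uploads/report.csv" },
      dataVersion: "1",
    };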

Once all of this is done, it will enable a huge chunk of even legacy applications to be hosted on a platform with far fewer server-level dependencies. For architects of new cloud-native applications, the serverless microservices platform is going to demand a whole new design mindset.

I expect the next 12 months to make a significant difference here.
