
Serverless Pricing: Estimating Consumption Costs

Sep 12th, 2018 3:00am

Serverless brings the idea of consumption-based pricing to the forefront of IT budgeting and, in time, will recast decision-making and organizational structure for companies. Already, serverless has made new startups more viable and has allowed enterprises to experiment faster with less risk, as production does not depend on allocating server infrastructure to run new applications or to deploy product features that have not yet been market-tested.

Often, the initial drive toward serverless has been developer velocity. Giving businesses and governments a way to test new ideas quickly, without infrastructure cost overheads, has driven interest and early implementations. But if left unmonitored, serverless costs can grow as large as those of traditional infrastructure.

“The cost of serverless systems is something we are finding that people are worried about more and more,” said Nitzan Shapira, CEO and co-founder at Epsagon. “People want to understand their costs in serverless because it’s pay-per-use. More adopters are now asking: what are we paying, and what for? Is it because the code is slow, or because of an API we are using? They need to be able to analyze costs in a very business and logical way.”

Finding the Sweet Spot in Function Runtime

Pricing of serverless systems, while not totally commoditized, is structured in similar ways by AWS, Azure Functions, Google Cloud Platform and IBM OpenWhisk. Functions are charged per million invocations, plus a compute charge based on execution duration and the amount of memory the user chooses to allocate.

On the face of it, choosing the lowest memory allocation would appear to be the cheapest option, with the trade-off that the runtime might be slower. But as Jeremy Daly shared in his write-up of a startup day held by AWS in Boston earlier this year, “tweaking your function’s computing power has major benefits.”

Because execution time can lengthen at smaller memory allocations, the overall cost can end up higher. Daly shared data presented by Chris Munns, Amazon Web Services’ senior developer advocate for serverless, showing that a 128MB memory allocation cost $0.024628 over 1,000 invocations and took 11.72 seconds. The price rose slightly at 256MB and 512MB, but most striking was that at 1,024MB the execution time dropped to 1.47 seconds while the cost was only $0.00001 more, at $0.024638.
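To see how that math plays out, here is a minimal sketch of the consumption-pricing formula, assuming AWS Lambda’s published on-demand rates of the period ($0.20 per million requests and $0.00001667 per GB-second) and ignoring the free tier and duration rounding, so the figures only approximate what the slide showed:

```python
# Rough cost model for a pay-per-use function platform.
# Rates are assumptions based on AWS Lambda's on-demand pricing at the time;
# free tier and 100ms billing increments are ignored for simplicity.
PRICE_PER_REQUEST = 0.20 / 1_000_000    # $0.20 per 1M invocations
PRICE_PER_GB_SECOND = 0.00001667        # compute charge per GB-second

def function_cost(invocations: int, memory_mb: int, duration_s: float) -> float:
    """Estimate cost: request charge plus memory x duration compute charge."""
    gb_seconds = invocations * (memory_mb / 1024) * duration_s
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# The two configurations from Munns' example: more memory, far shorter runtime.
print(f"128MB  x 11.72s x 1,000 runs: ${function_cost(1000, 128, 11.72):.6f}")
print(f"1024MB x  1.47s x 1,000 runs: ${function_cost(1000, 1024, 1.47):.6f}")
```

Run with the two configurations above, the 1,024MB allocation finishes roughly eight times faster for nearly the same estimated cost, which is the sweet-spot effect Munns was illustrating.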

At Serverlessconf in New York last year, several speakers shared similar findings about finding the sweet spot in serverless pricing, regardless of the cloud vendor.

Pricing Is More Than Just Invocations and Compute

However, in addition to weighing function invocations and the memory allocated for compute time, there are other costs that add up in a complete serverless system or workflow. Jonathan Kosgei, from IP geolocation API firm ipdata, wrote that serverless gave the company “favorable pricing,” along with the scale and throughput needed for a global service that must quickly return a website visitor’s location based on their IP address in order to serve up customized, regionally relevant content. But such a serverless system, designed on AWS, needs not only Lambda functions but also API Gateway, DynamoDB, CloudWatch services and Kinesis.


For Kosgei, it was the unexpected CloudWatch usage, including alarms and requests, that added up; and as CloudWatch only stored logs for 24 hours, there were also additional costs for longer-term log storage. On the plus side, Kosgei notes that his DynamoDB costs were lower than expected: it is possible to over-provision read capacity units (RCUs) while being billed on usage, so only the capacity consumed in a given timeframe is charged. He notes there was also a learning curve in understanding DynamoDB pricing.
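On the log side, one common cost lever is to cap retention on each function’s log group and push anything needed long term to cheaper storage. The sketch below shows one way to do that with boto3’s CloudWatch Logs client; the log group name and the 14-day window are hypothetical and should be adjusted to your own functions and retention requirements.

```python
import boto3

logs = boto3.client("logs")

# Hypothetical log group for one Lambda function; substitute your own.
LOG_GROUP = "/aws/lambda/ip-lookup"

# Keep two weeks of logs in CloudWatch; anything needed longer can be
# exported to cheaper storage such as S3 rather than kept at CloudWatch rates.
logs.put_retention_policy(logGroupName=LOG_GROUP, retentionInDays=14)
```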

Elliot Forbes also found additional costs in a serverless system: while he was able to cut the hosting cost of a single-page web app from $20 a month to $7 a month by going serverless, much of the remaining cost came from CloudFront, with smaller amounts from Route53 and S3 bucket storage.

One other hidden cost comes from API requests. Amiram Shachar, CEO and founder of Spotinst, wrote at the start of the year that this was one of the most costly pricing elements of a serverless system. “Since many serverless apps are heavy on API calls, this can get quite pricey at roughly $3.50 per 1M executions,” he notes.
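Summing those pieces gives a rough sense of where a monthly bill comes from. The back-of-the-envelope estimate below uses the API Gateway rate Shachar cites and the assumed Lambda rates from earlier in this article; the traffic and memory figures are invented for illustration, and a real bill would also include DynamoDB, CloudWatch, data transfer and the other services discussed above.

```python
# Illustrative monthly estimate for a simple API Gateway + Lambda workload.
# Rates are assumptions (per the article and AWS pricing of the period);
# traffic figures are made up for the example.
MONTHLY_REQUESTS = 10_000_000            # hypothetical traffic
AVG_DURATION_S = 0.2                     # average function runtime
MEMORY_GB = 512 / 1024                   # 512MB allocation

LAMBDA_REQUEST_RATE = 0.20 / 1_000_000   # $ per invocation
LAMBDA_COMPUTE_RATE = 0.00001667         # $ per GB-second
API_GATEWAY_RATE = 3.50 / 1_000_000      # $ per request, per Shachar's figure

lambda_requests = MONTHLY_REQUESTS * LAMBDA_REQUEST_RATE
lambda_compute = MONTHLY_REQUESTS * MEMORY_GB * AVG_DURATION_S * LAMBDA_COMPUTE_RATE
api_gateway = MONTHLY_REQUESTS * API_GATEWAY_RATE

print(f"Lambda requests: ${lambda_requests:,.2f}")
print(f"Lambda compute:  ${lambda_compute:,.2f}")
print(f"API Gateway:     ${api_gateway:,.2f}")
print(f"Total:           ${lambda_requests + lambda_compute + api_gateway:,.2f}")
```

Even at modest traffic, the per-request API Gateway charge can dwarf the Lambda compute charge, which is exactly the hidden cost Shachar flags.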

In Serverless, Performance and Cost are Tightly Coupled

Shapira from Epsagon says that the key to understanding serverless costs is to look beyond individual functions to see the full serverless business flow.

He says distributed tracing is the key to understanding a serverless system in order to first monitor performance, which in turn will determine costs.

“To really figure out what is happening in a serverless system, you have to connect the events in some way. That is why we thought about serverless costing and distributed tracing,” said Shapira. “We use code instrumentation and AI tech to group serverless apps into types of flows, for example, one type of flow might be for a user signing up to the system, another one is for a user making a payment. Then you can identify the most frequent flows, and the highest priority flows. Then you can figure out, how would you even know why the business flow takes one second? Is it because your code is slow or because the APIs you are using are degrading performance?”

This is a central question, says Shapira, because when performance suffers in a serverless system, it is paid for in pricing. Shapira explained: “When you think about the entire system, costs come in very quickly. How do you really know that you are not going to get a $100,000 bill at the end of the month, when you are expecting a $5,000 bill? We have seen some companies get a $50,000 bill because they had an error in their code.”

Serverless pricing, Shapira says, essentially comes down to how long your code runs multiplied by the compute resources allocated (plus all the extras mentioned above). That might suggest that the way to cut costs is to focus on writing small, efficient function code.

“The total time and cost of your functions is more affected by APIs, including AWS’ APIs, or any third party like Twilio or Stripe. Even if one of these APIs is working slowly, you may be paying a lot of money. If they work at one second rather than 50 milliseconds, and that happens a million times in a month, that raises costs. You need to identify all the APIs you are using, and our system will tell you these APIs are running slowly,” suggested Shapira.
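Epsagon does this with automatic instrumentation and tracing; as a rough, hand-rolled illustration of the idea, the sketch below wraps each outbound call in a handler with a timer so slow third-party APIs show up in the logs. The handler, helper names and timings are hypothetical placeholders, not Epsagon’s implementation.

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label: str):
    """Log how long an external call takes so slow APIs stand out per invocation."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        # In Lambda, anything printed ends up in CloudWatch Logs for later analysis.
        print(f"external_call={label} duration_ms={elapsed_ms:.1f}")

def charge_customer(event):
    time.sleep(0.05)   # stand-in for a real Stripe API call

def record_payment(event):
    time.sleep(0.02)   # stand-in for a real DynamoDB write

def handler(event, context):
    # Wrap each outbound call so its share of the runtime (and the bill) is visible.
    with timed("stripe.charge"):
        charge_customer(event)
    with timed("dynamodb.put_item"):
        record_payment(event)
    return {"statusCode": 200}

if __name__ == "__main__":
    handler({}, None)
```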

Epsagon dashboard shows link between performance and costs

Another benefit of mapping and identifying all of the serverless workflows in use is that it becomes easier to prioritize the ones integral to the core business, rather than paying higher bills for workflows that do not deliver a corresponding share of business value or are not revenue-generating.

New Approaches to Monitoring Pricing of Serverless in Enterprise

For now, many of those beginning to adopt serverless are not seeing these hidden pricing issues. Those gaining the most are organizations that were paying for underutilized servers and migrated to serverless, seeing massive reductions in their infrastructure costs. Startups and disrupters that have built new serverless applications but have not yet reached scale also benefit from consumption-based pricing, where their usage levels may mean they pay little or nothing each month.

But larger enterprises that see autoscaling as a key benefit of serverless adoption risk handing the financial keys to their cloud providers. Closely monitoring the risk of usage spikes, and ensuring that estimated costs do not balloon because of poorly performing serverless workflows and systems, will be an emerging focus for engineering VPs and enterprise architects in the year to come.

Feature image: Photo by Alvaro Reyes on Unsplash.
