Going Serverless: From Common LISP and CGI to AWS Lambda and API Gateway

Alex Glikson
Feb 23, 2018 · 7 min read

The Function-as-a-Service (FaaS) paradigm is quickly gaining popularity as a programming model to develop and run short-lived non-interactive functions triggered by events or requests. FaaS platforms are often limited to a restricted set of programming languages, such as JavaScript and Python. However, it is typically possible to include arbitrary files packaged together with the function, which may include executables and their dependencies (limited to a particular size — e.g., 50MB). In some FaaS platforms, it is even possible to provide arbitrary Docker images, thus making development of functions in additional programming languages even more flexible.

In this article we share our experience leveraging the ability to include binary files with functions, in order to migrate the backend of an interactive web-based tutorial of the NESL programming language, implemented in Common LISP (using CGI to interact with the frontend), to AWS Lambda and API Gateway. Developed in the ’90s at Carnegie Mellon University, the tutorial is no longer actively used for teaching, but nobody wanted to discontinue it entirely.

TL;DR:

Figure 1: Solution in a nutshell

The NESL tutorial

The tutorial comprises a static web page with examples and exercises in the NESL programming language (which targets efficient development and execution of parallel algorithms). Each example or exercise is a simple text form (often pre-filled), with a “Submit” button that triggers a CGI script (running on another machine), which runs the NESL interpreter on the submitted program text and returns the results (see the screenshot below for an example).

Figure 2: NESL Tutorial

So, the main motivation was to see whether we could get rid of the physical machine serving the CGI backend (sitting under the desk of a faculty member), in a way that would provide a reasonable combination of reliability, security, scalability (in case the tutorial *is* used) and low maintenance. This was also a ‘case study’ as part of a larger effort to teach computer science students and faculty at CMU to take advantage of modern cloud computing platforms, technologies and practices. Given the short-lived, non-interactive and stateless nature of each invocation of the NESL interpreter, FaaS seemed to be a natural choice. Moreover, given the minimal traffic this web tutorial is expected to receive, the backend is likely to fit easily within the free tier of cloud providers, so the infrastructure cost will be essentially zero. And you can’t beat zero cost, really.

NESL interpreter

The NESL interpreter was developed in the early ’90s, in Common LISP and C (using lex and yacc underneath). The executables were produced years ago and haven’t been touched since. When we tried running the interpreter on a different machine (we used a t2.micro instance, which runs a Linux distribution similar to the one used by Lambda), we identified several issues:

  1. The interpreter contained several hard-coded path strings, making it impossible to run it outside of a standard CMU environment
  2. The interpreter depended on certain dynamically linked libraries, installed together with Common LISP
  3. The CGI script wrapping the interpreter invocation was written in Ruby — not one of the languages natively supported on AWS Lambda

Luckily, in order to address issue #1, we were able to find the source code, replace absolute path strings with relative ones, and recompile the executables.

For issue #2, the solution was to identify the missing libraries, install them manually on our development machine, include their binaries in the package we upload with the function, and make sure the Linux loader can find them by adding the corresponding folder to the LD_LIBRARY_PATH environment variable. Luckily, there were not too many of them, and the total size of the package (including the interpreter and all the dependencies) didn’t exceed 3MB. As a side note, if we had decided to use a FaaS platform that supports arbitrary Docker function containers (such as IBM Cloud Functions/OpenWhisk), we could have handled this more elegantly by just adding the missing Linux packages to the Docker image, without being restricted by a particular maximum binary size.
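For example, on the development machine one could list the interpreter’s dynamic dependencies with ldd and copy the ones missing from the Lambda execution environment into a lib folder that gets packaged with the function (the binary name and paths below are placeholders, not the actual layout of the repository):

```bash
# List the dynamic dependencies of the interpreter binary (placeholder name)
ldd bin/nesl

# Copy the libraries that are not present in the Lambda execution environment
# into ./lib, which is later included in the deployment package
mkdir -p lib
cp /usr/lib64/<missing-library>.so.* lib/
```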

For issue #3, we just decided to rewrite the script in Python (as it wasn’t particularly complex).

Solution details

After working through these technical issues, we came up with the following architecture:

Figure 3: Solution Architecture

Now the tutorial web page would point to the corresponding API URL of the API Gateway, which would trigger the NESLAPI function; this would in turn invoke the NESL function that actually runs the interpreter binary, and then generate and return the HTML page with the output, similarly to the way it worked before. The reason for having two functions rather than one is that we wanted the NESL function to be usable via other interfaces, not only via this particular HTML form (an alternative could have been to introduce client-side logic that generates the HTML, but we preferred not to). Moreover, we wanted to handle the scenario in which program execution exceeds the time or resource limits we defined (5 seconds, 256MB of memory), while still returning a valid HTML page to the web browser (indicating the error), e.g.:

Figure 4: Example of a program exceeding time/resource limits

Here are the two functions:

Figure 5: NESLAPI function
Figure 6: NESL function
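
To make the flow concrete, here is a rough, simplified sketch of what the two handlers might look like in Python. The specifics below (the ‘exp’ form field, the deployment-time function name, the bin/ and lib/ paths inside the package) are assumptions for illustration, not the exact code shown in the figures.

```python
import html
import json
import urllib.parse

import boto3

lambda_client = boto3.client('lambda')

PAGE = "<html><body><pre>{}</pre></body></html>"


def handler(event, context):
    # API Gateway (Lambda proxy integration) delivers the submitted form as a
    # URL-encoded string in event['body']
    form = urllib.parse.parse_qs(event.get('body') or '')
    program = form.get('exp', [''])[0]

    try:
        # Synchronously invoke the NESL function that actually runs the interpreter
        resp = lambda_client.invoke(
            FunctionName='nesl-tutorial-dev-nesl',   # deployment-time name (see "Gotchas")
            InvocationType='RequestResponse',
            Payload=json.dumps({'program': program}),
        )
        if 'FunctionError' in resp:
            # The NESL function was killed (e.g., it hit the 5s/256MB limits);
            # still return a valid HTML page indicating the error (Figure 4)
            output = 'Error: the program exceeded the time or resource limits'
        else:
            payload = json.loads(resp['Payload'].read().decode('utf-8'))
            output = payload.get('output', '')
    except Exception as exc:
        output = 'Internal error: {}'.format(exc)

    # Wrap the interpreter output in an HTML page, as the CGI script used to do
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'text/html'},
        'body': PAGE.format(html.escape(output)),
    }
```

And a corresponding sketch of the NESL function, which runs the interpreter binary shipped in the deployment package; note how the bundled lib folder from issue #2 is added to LD_LIBRARY_PATH before the process is spawned:

```python
import os
import subprocess

# The deployment package is unpacked under LAMBDA_TASK_ROOT (/var/task);
# the interpreter binary and its shared libraries are assumed to live in
# bin/ and lib/ inside the package
TASK_ROOT = os.environ.get('LAMBDA_TASK_ROOT', '.')
NESL_BIN = os.path.join(TASK_ROOT, 'bin', 'nesl')
LIB_DIR = os.path.join(TASK_ROOT, 'lib')


def handler(event, context):
    env = dict(os.environ)
    # Make sure the Linux loader can find the bundled shared libraries (issue #2)
    env['LD_LIBRARY_PATH'] = LIB_DIR + ':' + env.get('LD_LIBRARY_PATH', '')

    # Feed the submitted program to the interpreter and capture whatever it prints;
    # if this takes longer than the function's 5-second timeout, Lambda kills it
    proc = subprocess.run(
        [NESL_BIN],
        input=event.get('program', ''),
        env=env,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        universal_newlines=True,
    )
    return {'output': proc.stdout}
```

In this sketch, if the NESL function hits its 5-second timeout or 256MB memory limit, Lambda kills it and the synchronous invocation from NESLAPI comes back with a FunctionError, which NESLAPI turns into the error page shown in Figure 4.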

Furthermore, leveraging the serverless framework made the deployment (and the development) process very easy. Here is the corresponding serverless.yml file:

Figure 7: serverless.yml

Here we specify the two functions (handlers, events for the first one, as well as resource constraints), packaging details (which files to include in the zip file, including the dynamic libraries under the ‘lib’ folder that the interpreter depends on), as well as IAM permissions to enable function-to-function invocation and a “usage plan” to protect our API from excessive usage.
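
For illustration, here is a hedged, abbreviated sketch of what such a serverless.yml might look like (the service name, paths and limit values below are illustrative rather than the exact ones we used; the real file is in the repository):

```yaml
service: nesl-tutorial              # illustrative service name

provider:
  name: aws
  runtime: python3.6
  # Permissions that let the NESLAPI function invoke the NESL function
  iamRoleStatements:
    - Effect: Allow
      Action:
        - lambda:InvokeFunction
      Resource: "*"
  # Usage plan throttling, to protect the public API from excessive usage
  usagePlan:
    throttle:
      burstLimit: 10
      rateLimit: 5

package:
  include:
    - bin/**      # the recompiled interpreter binaries
    - lib/**      # the dynamically linked libraries the interpreter depends on

functions:
  neslapi:
    handler: neslapi.handler
    events:
      - http:
          path: nesl
          method: post
  nesl:
    handler: nesl.handler
    memorySize: 256                 # the resource limits mentioned above
    timeout: 5
```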

Running the application

Deploying the application is now a simple matter of running ‘sls deploy’:

Figure 8: Deployment with serverless framework

Then you just need to note the endpoint URL reported by the API Gateway, and make sure it is used as the target of the ‘POST’ action in the tutorial HTML page. To test this manually, you can use curl (notice that the program itself must be url-encoded):

Figure 9: Manual testing
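
For instance, a hypothetical invocation could look like the following (the endpoint and the form-field name are placeholders; curl’s --data-urlencode flag takes care of the URL encoding):

```bash
curl -X POST \
     --data-urlencode "exp=1 + 2;" \
     https://<api-id>.execute-api.us-east-1.amazonaws.com/dev/nesl
```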

That’s it!

You can check the github repository for detailed deployment and debugging instructions. It also contains the source code of the language interpreter itself.

Advantages

Besides the obvious benefit of zero cost, the resulting solution has several advantages:

  1. Low maintenance: now that our code is hosted on a ‘serverless’ platform, we don’t need to worry about operating system maintenance of the underlying server(s), the corresponding security threats, etc.
  2. Improved manageability: now that our code is hosted on a public cloud platform, we can benefit from various services which are part of the platform, seamlessly available for our functions, out of the box — such as logging, monitoring, version control, etc.
  3. Controlled scaling (up and down): our new solution can seamlessly scale up (if at some point the tutorial becomes extremely popular — e.g., as a result of this blog post), as well as scale down to zero capacity when there are no users. Moreover, the scale-up is carefully protected by limits we implemented at the API Gateway (to avoid DoS-style attacks), as well as limits on the individual function container (e.g., in case someone submits a program that would take very long to execute).
  4. High availability, fault-tolerance: unlike the machine under the professor’s desk, the cloud-based FaaS platform is highly available and tolerant to hardware failures.

Summary and Gotchas

Overall, pretty much everything worked as expected. We were able to migrate the backend of the interactive tutorial to AWS Lambda + API Gateway, resulting in an elegant and robust solution. There were a few minor issues that we didn’t address in this prototype:

  1. Function-to-function invocation: the serverless framework currently doesn’t seem to provide a convenient way to determine the deployment-time names of functions and inject them into other functions (e.g., as parameters or environment variables). This makes function-to-function invocation a bit hacky (a possible workaround is sketched after this list). We didn’t explore other deployment mechanisms (e.g., Terraform or AWS CloudFormation/SAM) to check whether they address this issue.
  2. DNS name: the API URL generated by the AWS API Gateway can change if you remove and re-deploy the solution (with the serverless framework). One way to address this could be to use custom domains (although we didn’t try it).
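
Regarding issue #1, one possible workaround (which we did not implement in this prototype) relies on the fact that the serverless framework names deployed functions ‘<service>-<stage>-<function>’ by default, so the caller can either reconstruct the target name or read it from an environment variable injected via serverless.yml. A minimal sketch:

```python
import os

# Hypothetical helper for the NESLAPI function: determine the deployment-time
# name of the NESL function, preferring an explicitly injected environment
# variable and falling back to the serverless framework's naming convention
def nesl_function_name(service='nesl-tutorial', stage='dev'):
    return os.environ.get('NESL_FUNCTION_NAME',
                          '{}-{}-nesl'.format(service, stage))
```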

The code referred to in this article (as well as the code of the interpreter itself) is available on github.
