Changing a SaaS Architecture to Serverless

There are lots of articles about the benefits of a serverless architecture; this is not one of them. We assume that you already understand the benefits of not having to maintain servers, having an on-demand service, and not paying for services that are not running. What we wanted to learn was: how hard would it be to take an existing, functional architecture and convert it to a 100% serverless architecture? Could you do this and still deploy the microservices separately? And how easy would it be to build and maintain? The rest of this article covers the tools and approaches we used to migrate the original application to a 100% serverless architecture.

The Existing Application

SaaS applications come in many configurations. To make this exercise more realistic, we decided to start with a complete and functional application, which would realistically simulate what your company might go through with one of its own applications. We also wanted something with some complexity, multiple services, and a range of features, so we chose to start with a complete reference application.

Amazon created a Quick Start implementation, SaaS identity and isolation with Amazon Cognito, to walk through many of the design considerations that companies must weigh when building out a SaaS application. This Quick Start discussed options for building a multi-tenant application and focused on authentication and authorization. The implementation was based on Amazon Cognito as the identity provider.

The Quick Start architecture created several microservices that run on EC2 instances. It uses an API Gateway to provide a REST API for the application, and the data is stored in several DynamoDB tables. The main focus of the Quick Start is the use of Cognito to provide authentication and to create per-tenant policies that control access to the data in DynamoDB and, potentially, other services.

This seemed like a great choice. It can be deployed on an AWS account, so you can see the application in action. It also gave us plenty of opportunities to verify that we could manage multiple services and test the integration with API Gateway, DynamoDB, Cognito, and other services, which allowed us to demonstrate one way to integrate these into a serverless architecture.

EC2 to Serverless

With all the buzz around serverless, how hard would it be to convert this to a serverless architecture? It turned out not to be that hard. Because the original code was designed as Node.js microservices, it took only small changes to run each function as a separate Lambda function. As part of the conversion to separate Lambda functions, the minimum version of Node.js was raised to v8.10. This change was not a requirement of creating the Lambda functions, but it allowed us to use promises for AWS API calls, reduce the number of callbacks, and do some minor code cleanup. All the handler events are now written as promise calls instead of callbacks. You can check out the code and build out the environment on your own AWS account. The serverless version is stored in two GitHub repositories: the core services of the application in SaaS Serverless Identity and the client application in SaaS Serverless Client.
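To give a sense of what that cleanup looks like, here is a minimal sketch of a Node.js 8.10 handler written with async/await; the function, table, and key names are hypothetical, not taken from the actual repositories.

// Hypothetical order-lookup handler in the async/await style (Node.js 8.10).
// AWS SDK calls return a promise via .promise(), so no callbacks are needed.
const AWS = require('aws-sdk');
const dynamo = new AWS.DynamoDB.DocumentClient();

module.exports.getOrder = async (event) => {
  const orderId = event.pathParameters.id;
  try {
    const result = await dynamo
      .get({ TableName: process.env.ORDER_TABLE, Key: { orderId } })
      .promise();
    return { statusCode: 200, body: JSON.stringify(result.Item) };
  } catch (err) {
    return { statusCode: 500, body: JSON.stringify({ error: err.message }) };
  }
};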

All the services were created under a single API Gateway. This allowed us to use a single custom authorizer function to authenticate the API calls; it is the same custom authentication routine used by the Amazon Quick Start. While we kept the single API Gateway, we wanted each microservice to remain separate and maintainable. This matches the Quick Start approach, where each microservice runs on its own EC2 instance and can be rebuilt independently, to add functions to the Order microservice for example.

Using the Serverless Framework

The Amazon Quick Start was built using CloudFormation templates, but a popular toolkit for building and managing serverless applications is the Serverless Framework. We used the Serverless Framework to build out all the resources: the API Gateway, all the REST API calls, and the DynamoDB tables. The Framework also provided easy ways to define the options for each API call and any special permissions a function might need. We also defined IAM policies that can be assigned to API functions and are uploaded to AWS when the application is deployed. The Serverless Framework proved an excellent tool for managing and deploying our serverless environment.
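For readers new to the Framework, here is a minimal serverless.yml sketch of the pattern just described; the service, table, and handler names are hypothetical rather than copied from the repositories.

service: order-service            # hypothetical service name

provider:
  name: aws
  runtime: nodejs8.10
  iamRoleStatements:              # permissions granted to the functions
    - Effect: Allow
      Action:
        - dynamodb:GetItem
        - dynamodb:PutItem
      Resource: arn:aws:dynamodb:*:*:table/Orders

functions:
  getOrder:
    handler: handler.getOrder     # the Lambda handler shown earlier
    events:
      - http:                     # creates the API Gateway endpoint
          path: orders/{id}
          method: get

resources:
  Resources:
    OrdersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: Orders
        AttributeDefinitions:
          - AttributeName: orderId
            AttributeType: S
        KeySchema:
          - AttributeName: orderId
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1

Running serverless deploy turns a file like this into a CloudFormation template and creates everything it describes.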

Separately Deployable Microservices

The Serverless Framework takes one configuration file, builds a CloudFormation template, and deploys it to AWS. But as an application and its number of services grow, a single configuration file becomes very large and hard to manage, and it requires that every service related to that file be redeployed each time. We wanted to match the Quick Start approach of separately deployable microservices, so we split the microservices into separate directories, each with its own serverless.yml configuration file for its related resources.

Yet we still needed a starting point: a place to run the first deployment and a place to hold shared resources used by the microservices. To achieve this we started with a main, or as we called it, common configuration. In the common/serverless.yml file we defined the API Gateway as the main resource and created outputs to be used by CloudFormation and other services. We also defined the custom authorizer function as a common resource, so other services could simply reference it. Anything we needed to define once and expose to other services went in the common/serverless.yml file or in a common.yml file that could be included in other serverless.yml files.
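One common way to wire this up, sketched here from our description rather than excerpted from the repositories, is to export the shared API Gateway and authorizer from the common stack as CloudFormation outputs; all names below are hypothetical.

# common/serverless.yml (sketch)
service: common

provider:
  name: aws
  runtime: nodejs8.10

functions:
  authorizer:
    handler: auth/handler.authorize   # shared custom authorizer

resources:
  Resources:
    SharedRestApi:                    # the shared API Gateway
      Type: AWS::ApiGateway::RestApi
      Properties:
        Name: saas-shared-api
  Outputs:
    SharedRestApiId:
      Value:
        Ref: SharedRestApi
    SharedRestApiRootResourceId:
      Value:
        Fn::GetAtt: [SharedRestApi, RootResourceId]
    CustomAuthorizerArn:
      Value:
        Fn::GetAtt: [AuthorizerLambdaFunction, Arn]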

Then we created a directory for each microservice: tenantMgr, userMgr, orderMgr, and so on. In each of these we defined the resources specific to that service; in the orderMgr/serverless.yml file, for example, we defined the DynamoDB table to be built for that service. We also had to link back to the common API Gateway, because we wanted all these services to flow under a common URL. The orderMgr/serverless.yml also defined each API call, handler, and parameter as needed, and we could reference the custom authorizer thanks to the common/common.yml file.
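Attaching a microservice to the shared gateway and authorizer might then look like the following sketch; the ${cf:...} lookups read outputs from the common stack, and the stack and output names here are assumptions.

# orderMgr/serverless.yml (sketch)
service: orderMgr

provider:
  name: aws
  runtime: nodejs8.10
  apiGateway:                     # attach to the shared API Gateway
    restApiId: ${cf:common-dev.SharedRestApiId}
    restApiRootResourceId: ${cf:common-dev.SharedRestApiRootResourceId}

functions:
  createOrder:
    handler: handler.createOrder
    events:
      - http:
          path: order
          method: post
          authorizer:
            arn: ${cf:common-dev.CustomAuthorizerArn}   # shared authorizer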

Some function calls for some services were only used internally and were never called by the client application; the original application used HTTP to access these functions. We had some options: we could expose them on the existing API Gateway, create a separate "internal" API Gateway, or invoke the functions directly. We decided that the existing API Gateway would define only the API calls used by the client. This gave us some advantages: we only have to document and publish the API calls the client needs, developers building against the client application have access to only those calls, and it provides some security for our backend resources.

We considered staying consistent and using an internal API Gateway for these internal function calls, but we decided to invoke the Lambda functions directly instead. It was only a few functions, and it seemed like extra overhead and expense to create another API Gateway. The trade-off was that we had to create a different calling method just for these few functions, but it provides some security and cost savings. We configured the internal Lambda functions in the serverless.yml file just like the API calls, and the Framework built and deployed these functions like any other resource.
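A direct invocation from one service to another might look like this sketch; the deployed function name and payload shape are hypothetical.

// Hypothetical internal call: invoke another service's Lambda directly,
// bypassing API Gateway entirely.
// The caller's IAM role must allow lambda:InvokeFunction on the target.
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

async function getTenant(tenantId) {
  const response = await lambda.invoke({
    FunctionName: 'tenantMgr-dev-getTenant',   // deployed name: service-stage-function
    InvocationType: 'RequestResponse',         // synchronous call
    Payload: JSON.stringify({ tenantId }),
  }).promise();
  return JSON.parse(response.Payload);
}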

We used environment variables to pass configuration information to our functions, defining them with variables in the serverless.yml files. The Framework also allows the creation of IAM policies that give functions permission to perform tasks, so some functions were given IAM policies to create, delete, and attach roles as new tenants and users are created, while others were given access to DynamoDB tables.
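In serverless.yml terms, that combination might look like this sketch; the table name, role prefix, and ARNs are placeholders, not values from the project.

# Sketch: environment variables plus IAM policies for the service's functions
provider:
  environment:
    TENANT_TABLE: tenant-table-dev            # hypothetical variable
  iamRoleStatements:
    - Effect: Allow
      Action:                                 # role management for new tenants
        - iam:CreateRole
        - iam:DeleteRole
        - iam:AttachRolePolicy
      Resource: arn:aws:iam::*:role/tenant-*
    - Effect: Allow
      Action:                                 # data access for other functions
        - dynamodb:Query
        - dynamodb:PutItem
      Resource: arn:aws:dynamodb:*:*:table/tenant-table-dev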

Service Discovery

For a small application like this one, and since we are using a common API for all the services, there is probably not much need to discover services and their status. But as an application grows and services are deployed by different teams, it may become necessary to look up services. It also allows a service to move to a different API gateway or endpoint without the other services or the client having to change. We built a very simplistic service that provides registration and lookup. A true production implementation would have to enhance this, or you could go with a third-party solution like Consul or Apache ZooKeeper.

We created the discovery service as a standalone service with its own API Gateway and DynamoDB table to track the services. Each microservice is responsible for registering its functions with the discovery service. When a service is needed, a function queries the discovery service for the endpoint (either HTTP or internal) and then calls the service. The application client also uses the discovery service to get the API endpoints for each service: the client is only given the service discovery URL and must then look up all the services to get the endpoints it needs.
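A lookup on the calling side might be as simple as this sketch; the discovery route, response shape, and use of axios are assumptions, not details from the repositories.

// Hypothetical lookup: ask the discovery service where a service lives,
// then call the endpoint it returns.
const axios = require('axios');

async function callService(serviceName, path) {
  const catalog = await axios.get(
    `${process.env.DISCOVERY_URL}/catalog/${serviceName}`
  );
  const { endpoint } = catalog.data;   // e.g. an API Gateway stage URL
  return axios.get(`${endpoint}${path}`);
}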

One challenge we faced with having everything serverless is: how do you register a microservice's functions with the discovery service once the microservice is deployed? Once the service is deployed, the Lambda functions are passive; they wait to be called, and nothing starts up to register the service. To solve this problem we created a CloudWatch scheduled event. For the purposes of this architecture, we set up the event to run every 5 minutes; in production this might need to be changed. Our simple event calls a Lambda function that registers all the services. In a real application there might be tests or other checks you would want to run to update the status or health of each service.
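In serverless.yml, the schedule is a single event on the registration function; the names here are hypothetical.

functions:
  registerServices:
    handler: handler.registerServices   # re-registers this service's endpoints
    events:
      - schedule: rate(5 minutes)       # CloudWatch scheduled event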

Automate Build and Deployment

As the services are broken out for manageability, the environment can become harder to deploy and manage. For a SaaS application you may not be rebuilding the whole application each time, just updating certain services; and since the Serverless Framework and AWS support multiple environments, you could also build out dev, test, and other environments to test the application. With all the different directories, building and deploying each microservice becomes tedious, so we used scripts within npm to handle this repetitive process. We also used a package called npm-run-all, which runs several npm scripts either sequentially or in parallel. By defining a few scripts and options in the package.json file, we can force an npm install in all the microservice directories, then run a script to have Serverless deploy each microservice to AWS. We can define the specific order needed to make sure that resources are in place before a dependent service is deployed. And of course we can run a script to remove all the services when we don't need them, for a dev environment for example.
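The package.json scripts might be organized along these lines; the directory and script names are illustrative, and run-s (sequential) and run-p (parallel) come from npm-run-all.

{
  "scripts": {
    "deploy": "run-s deploy:common deploy:services",
    "deploy:common": "cd common && serverless deploy",
    "deploy:services": "run-p deploy:tenant deploy:order",
    "deploy:tenant": "cd tenantMgr && serverless deploy",
    "deploy:order": "cd orderMgr && serverless deploy",
    "remove": "run-s remove:services remove:common",
    "remove:services": "run-p remove:tenant remove:order",
    "remove:tenant": "cd tenantMgr && serverless remove",
    "remove:order": "cd orderMgr && serverless remove",
    "remove:common": "cd common && serverless remove"
  },
  "devDependencies": {
    "npm-run-all": "^4.1.5"
  }
}

Note how the common stack deploys first and is removed last, matching the dependency order described above.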

React Application

The client application is a complete rewrite of the Quick Start client. The Quick Start used AngularJS, and we chose React+Redux. There was no overwhelming reason for the change except to provide another reference alternative. This client application provides all the same functionality as the Quick Start client, with a few small differences; we mentioned the service discovery process in the section above.

One noticeable change is that we moved the install process from the CloudFormation template to the application client. Adding an "/install" path to the URL lets the client collect all the information needed to create the system admin tenant and then post that information to the API. The process easily creates the system tenant.

We also created an S3 bucket using the Serverless Framework, set up as public and configured as a static website. We then used a plugin with Serverless to sync the application files to the S3 bucket, so we now have a website for our application. The next logical steps, if this were a real application, would be to use a valid domain and configure CloudFront to serve the images and other files. But we will leave that for another project.
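One plugin that does this kind of sync is serverless-s3-sync; the article does not name the plugin it used, so treat this sketch, including the bucket name, as an assumption.

# Sketch: static website bucket plus a sync of the built client files
plugins:
  - serverless-s3-sync

custom:
  s3Sync:
    - bucketName: my-saas-client-site   # hypothetical bucket
      localDir: build                   # the React build output

resources:
  Resources:
    ClientBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-saas-client-site
        WebsiteConfiguration:
          IndexDocument: index.html
          ErrorDocument: index.html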

Summary

Our goal was to take a standard microservices SaaS application and see how we could convert it to a 100% serverless application. We wanted to see what popular tools were available to support this architecture and whether they could scale to handle a larger environment. As we have shown, this was a fairly straightforward process. Using the Serverless Framework made configuring and deploying the services very easy. The biggest challenge we had with the Framework was distributing the services and finding good examples of how this can be implemented. Based on research and trial and error, we were able to put together a solution that offers a lot of configurability and a way to centralize some information for use by multiple services.

This solution is not complete; there are several areas that can be cleaned up and improved. But as a tutorial or a reference architecture, it provides a good starting point for others to build serverless applications. You can download and run the code from the following GitHub repositories, which include instructions on how to download, build, and deploy your own serverless application. Check out the core services in SaaS Serverless Identity and the client application in SaaS Serverless Client.

Author: Bill Stoltz is an AWS Certified Solutions Architect with Booster Web Solutions. He has over 25 years of IT experience, including 15 years of pre-sales engineering and infrastructure architecture experience. His current interests are most things serverless, including Lambda functions.