All Articles

10 Things I Wish I Had Known About Serverless

Below is a list of things I learned over the last year working on production-grade projects built with the Serverless Framework on AWS. This is by no means a comprehensive list, and the topic deserves a thoughtful write-up on best practices. At the very least, this list will hopefully save you some time and headaches.

1. RDS and Lambda are complicated

There are two possible setups for connecting a lambda function to an RDS database:

RDS in a VPC but open to all IPs:

This is a security vulnerability. It essentially means configuring your VPC Security Group(s) to allow any inbound IP address to connect to your database (assuming the client has the right username and password). This opens your RDS instances up to automated attacks from bots that trawl the internet for such laissez-faire security controls. Short of a quick-and-dirty development setup, don’t do this.

RDS in a VPC, Lambda in the same VPC:

This is the correct solution, albeit more complicated. For starters, each of your functions must specify the correct subnets and the security group that is shared with your RDS cluster:
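A minimal sketch of what that function-level VPC configuration looks like in serverless.yml (the subnet and security group IDs are placeholders; use the private subnets and the security group shared with your RDS cluster):

```yaml
# serverless.yml -- IDs below are placeholders
functions:
  getUser:
    handler: src/users.get
    vpc:
      securityGroupIds:
        - sg-0123456789abcdef0      # security group shared with the RDS cluster
      subnetIds:
        - subnet-0123456789abcdef0  # private subnets only
        - subnet-0123456789abcdef1
```

The `vpc` block can also be set once at the `provider` level if every function in the service needs VPC access.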

Next, you will need to ensure that you have provided ENI access to your lambdas. There are hard limits on ENI allocation that should be considered before placing any Lambda inside a VPC. The permissions look like so:
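These are the ENI-related permissions a VPC-attached Lambda needs; they match the actions granted by AWS's managed AWSLambdaVPCAccessExecutionRole policy:

```yaml
# serverless.yml -- ENI permissions for Lambdas inside a VPC
provider:
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ec2:CreateNetworkInterface
        - ec2:DescribeNetworkInterfaces
        - ec2:DeleteNetworkInterface
      Resource: "*"
```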

There are real cold-start and latency issues that come with placing Lambdas in a VPC. For some security setups this is impossible to avoid, so make sure that only the necessary Lambdas are placed in a VPC, and utilize caching as much as possible to prevent unnecessary ENI allocation or database connections.

NOTE: For the sake of simplicity I have avoided discussing the setup that includes a NAT Gateway. NAT Gateways incur substantial costs and are more complicated to set up. For more on NAT setups, check out these instructions

2. Make sure your lambda has the correct roles attached

This is a simple mistake that we made several times before we got into the habit of allocating function-level permissions. The Serverless Framework will not kick out any errors on deployment or packaging if you fail to add the correct permissions to your function, and you will find yourself digging through CloudWatch logs for far too long to determine exactly what is wrong. Make it a habit to always review the iamRoleStatements before deploying new functions.
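As an illustration (the table ARN and actions here are placeholders), a statement granting only what a single function actually needs might look like this. Note that core Serverless Framework applies iamRoleStatements at the provider level for the whole service; truly per-function roles require a plugin such as serverless-iam-roles-per-function:

```yaml
# serverless.yml -- scope permissions to exactly what the function uses
provider:
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:GetItem
        - dynamodb:Query
      Resource: arn:aws:dynamodb:us-east-1:123456789012:table/users  # placeholder ARN
```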

3. IAM role names are limited to 64 characters

This is pretty self-explanatory, but a pain to find out in the middle of an sls deploy. Be thoughtful about the length of your function and stage names.

4. There is a resource limit of 200 in CloudFormation

For those not using the Serverless Framework, the solution is simply to create nested stacks in CloudFormation. Nested stacks can be organized on a function-by-function basis or by domain entity; this is a matter of preference.

At the time of writing, it is not (easily) possible to configure nested stacks directly in the serverless.yml file without using a plugin. A mature service with several cross-cutting dependencies may not take kindly to just dropping in such a plugin. If possible, plan ahead if you anticipate the resources of your serverless project growing substantially.
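For reference, one such plugin is serverless-plugin-split-stacks (the article does not name a specific plugin, so this is an assumption); a minimal configuration looks roughly like:

```yaml
# serverless.yml -- splitting resources into nested stacks via a plugin
plugins:
  - serverless-plugin-split-stacks

custom:
  splitStacks:
    perFunction: true   # one nested stack per function
    perType: false      # alternatively, group resources by CloudFormation type
```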

5. You can attach multiple events to a single lambda

This is a great solution for giving different application clients access to the same API: Client A has its own Lambda Authorizer, while Client B has another. A good scenario for this is when you allow a machine-to-machine client (via client credentials) to access routes associated with the client-facing application (via implicit or auth code grants). The other benefit is the ability to apply different API Gateway settings (e.g. caching, tracing, error codes, request limits) to different clients with minimal effort.
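A sketch of this pattern (the paths and authorizer names are hypothetical) — one handler, two http events, each with its own authorizer:

```yaml
# serverless.yml -- one function, two events, two authorizers
functions:
  getOffers:
    handler: src/offers.get
    events:
      - http:
          path: offers
          method: get
          authorizer:
            name: userAuthorizer      # Client A: implicit / auth code grant
      - http:
          path: partner/offers
          method: get
          authorizer:
            name: machineAuthorizer   # Client B: client credentials
```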

6. Custom Authorizers need to have caching manually disabled

It is unclear whether this is a bug in the Serverless Framework, but setting resultTtlInSeconds: 0 does not seem to work correctly. You will need to manually access the AWS console and disable the checkbox. This is important, as you do not want your authorizer caching the deny/allow policies created based on previous requests.
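For context, this is where the setting lives (handler and authorizer names are placeholders) — even with this in place, verify in the API Gateway console that the authorizer cache is actually off:

```yaml
# serverless.yml -- where resultTtlInSeconds is set
functions:
  getOffers:
    handler: src/offers.get
    events:
      - http:
          path: offers
          method: get
          authorizer:
            name: userAuthorizer
            resultTtlInSeconds: 0   # intended to disable authorizer caching
```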

7. Serverless.yml can be configured dynamically

Probably the quickest thing you’ll learn when building anything of size in Serverless is that hardcoding values everywhere is untenable. The framework provides a few options to configure your serverless.yml dynamically.

  • ${file()} operator - lets you import a configuration file of your choosing (e.g. secrets.json) and assign it to a custom variable that is usable anywhere in your serverless.yml.

  • ${opt:stage, self:provider.stage} - allows you to specify the stage as a command-line option, falling back to the default stage set in the provider section of your serverless.yml. At Kard we currently use this to dynamically point to stage-specific resources within a given account.
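Both variable types can be sketched together like this (secrets.json, the custom keys, and the environment variables are placeholders):

```yaml
# serverless.yml -- dynamic configuration via variables
provider:
  stage: dev

custom:
  stage: ${opt:stage, self:provider.stage}   # CLI --stage, else provider default
  secrets: ${file(./secrets.json)}           # import an external config file

functions:
  getUser:
    handler: src/users.get
    environment:
      DB_PASSWORD: ${self:custom.secrets.DB_PASSWORD}
      TABLE_NAME: users-${self:custom.stage}  # stage-specific resource name
```

Running `sls deploy --stage prod` would then resolve every `${self:custom.stage}` reference to `prod`.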

8. Use a warmup plugin for your important endpoints

Cold starts are no joke. At Kard we have clocked a few warmups that took upwards of 20 seconds (20,000 ms) for a Lambda sitting in a VPC. I know Yan Cui has harped on not using Lambdas inside a VPC. However, in some architectures this is unavoidable unless you choose to expose your database directly to the internet.

Important endpoints are any endpoints necessary to complete page loads or to provide a high-performance service. This will vary by application, but in Kard’s environment it includes the endpoints used to query for offers on our landing pages.

This is the warmup plugin we use at Kard.
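The article doesn’t name the plugin inline, but one widely used option is serverless-plugin-warmup; a rough configuration (keys vary by plugin version, so treat this as a sketch) looks like:

```yaml
# serverless.yml -- keeping selected functions warm on a schedule
plugins:
  - serverless-plugin-warmup

custom:
  warmup:
    default:
      enabled: true
      rate: rate(5 minutes)   # how often to ping the functions

functions:
  getOffers:
    handler: src/offers.get
    warmup:
      default:
        enabled: true         # opt this important endpoint in
```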

9. The AWS SDK is part of the lambda runtime

AWS does mention this in their documentation, but it is worth repeating: do not include the SDK in your deployed dependencies, as you will just be bloating your deployment artifact.

10. Package separately and don’t include test files

Our zip files for sls deploy got outrageously large. For single services that contained ~20 lambdas, we were seeing compressed deployment artifacts that were around 15GB. We quickly blew past the 75GB soft limit on lambda and had to request a service increase.

Eventually, bumping against the new 150GB limit, I started to investigate what was going into these packages.

Two simple steps allowed us to drop the deployment artifact size by an order of magnitude:

The first easy win was adding a file exclusion to our webpack configuration, to prevent all of our test files from being bundled into the build.

Next, I packaged all functions independently. This dropped our package size by an order of magnitude and put us well under the 150GB limit on our account.
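Both steps can be sketched in serverless.yml itself (the glob patterns here are illustrative, and our actual test-file exclusion lived in the webpack config rather than here):

```yaml
# serverless.yml -- package each function separately, skip test files
package:
  individually: true       # one artifact per function instead of one per service
  patterns:
    - '!**/*.test.js'      # exclude unit test files
    - '!tests/**'          # exclude test directories
```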

The serverless community has matured a lot in the last year, and I think it will only get better. As with any technology, there is a bit of a learning curve that all engineers must go through to feel comfortable. My recommendation is to stay on top of things by following Yan Cui and the Serverless Chronicle. As always, drop me a message if you have any thoughts, comments, or questions.