All of DashOne’s back end runs on AWS and uses AWS CloudFormation, via the Serverless Framework, to manage the infrastructure. This was my first time using this backend strategy, and I have been very happy with it. However, after a few weeks of smooth usage, I ran into the dreaded “200 resources” limit.

Error --------------------------------------------------

The CloudFormation template is invalid: Template format error: Number of resources, 201, is greater than maximum allowed, 200

You should check out this great article, which gives the backstory on the error and discusses various workarounds. Based on that, I chose to split the service into smaller microservices. That’s easy enough to do, right? Hint: it wasn’t. Why? Because the article does not explain what to do when you have shared resources, i.e. resources (like a DynamoDB table) created in one service but used in another.

Here’s a slimmed-down version of what my original serverless.yml looked like:

service: dashone
custom:
  usersTableName: 'users-table-${self:provider.stage}'

provider:
  name: aws
  runtime: nodejs8.10
  memorySize: 128
  timeout: 10
  stage: ${opt:stage, 'dev'} # Set the default stage used. Default is dev
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource:
        - { 'Fn::GetAtt': ['UsersDynamoDBTable', 'Arn'] }
  environment:
    USERS_TABLE: ${self:custom.usersTableName}
    
functions:
  actionA:
    handler: services.actionA
    events:
      - http:
          method: get
          path: /actionA
          cors: true

  actionB:
    handler: services.actionB
    events:
      - http:
          method: get
          path: /actionB
          cors: true
 
resources:
  Resources:
    UsersDynamoDBTable:
      Type: 'AWS::DynamoDB::Table'
      Properties:
        AttributeDefinitions:
          - AttributeName: email
            AttributeType: S
        KeySchema:
          - AttributeName: email
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 5
          WriteCapacityUnits: 5
        TableName: ${self:custom.usersTableName}

What this essentially does (in the dev environment):

  • Creates an AWS DynamoDB table named users-table-dev
  • Creates two AWS Lambda functions, actionA and actionB, and exposes them via AWS API Gateway at /actionA and /actionB respectively
  • Makes the DynamoDB table available for CRUD operations to the Lambda functions (via iamRoleStatements)

Great, you with me so far?

So, as I sat down to refactor, this is what I had to do:

Split it into two separate services, i.e. two separate serverless.yml files, and deploy them separately. Fine, I can split the services, but what about my database? I want it to be created only once. In my case, the table was already deployed in production with live data, so I didn’t want to risk any data migration while switching. After a bit of digging, I came across Fn::ImportValue. Fn::ImportValue lets you cross-reference resources across services (under the hood, across CloudFormation stacks), but you have to export the values first.

Going back to my original example, I added the following under resources in my original serverless.yml. This creates, or “outputs”, a reference that can be used by a different CloudFormation stack; CloudFormation takes care of all the magic to make it available.

Outputs:
  UsersDynamoDBTable:
    Value:
      Fn::GetAtt:
        - UsersDynamoDBTable
        - Arn
    Export:
      Name: ${self:provider.stage}-UsersTableArn
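A note on placement: in serverless.yml, this Outputs block goes under the top-level resources key, as a sibling of Resources (the Serverless Framework merges everything under resources into the generated CloudFormation template). Combined with the table definition from before, the section looks like this:

resources:
  Resources:
    UsersDynamoDBTable:
      Type: 'AWS::DynamoDB::Table'
      Properties:
        # ... table definition as shown earlier ...
        TableName: ${self:custom.usersTableName}
  Outputs:
    UsersDynamoDBTable:
      Value:
        Fn::GetAtt:
          - UsersDynamoDBTable
          - Arn
      Export:
        Name: ${self:provider.stage}-UsersTableArn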

To use this resource in a different service, you use Fn::ImportValue. Create a new serviceA/serverless.yml where the IAM section looks like this:

iamRoleStatements:
  - Effect: Allow
    Action:
      - dynamodb:Query
      - dynamodb:Scan
      - dynamodb:GetItem
      - dynamodb:PutItem
      - dynamodb:UpdateItem
      - dynamodb:DeleteItem
    Resource:
      - { 'Fn::ImportValue': '${self:provider.stage}-TokensTableArn' } # a second shared table, exported the same way
      - { 'Fn::ImportValue': '${self:provider.stage}-UsersTableArn' }
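One caveat before wiring things up: the export carries the table’s ARN, which is what the IAM statement needs, but your function code still needs the table name at runtime. Since the name follows the same 'users-table-${stage}' convention, the new service can simply rebuild it. Here’s a minimal sketch of what a full serviceA/serverless.yml might look like (the service name dashone-servicea is just a placeholder):

service: dashone-servicea # placeholder name for the split-out service

custom:
  usersTableName: 'users-table-${self:provider.stage}' # same naming convention as the original service

provider:
  name: aws
  runtime: nodejs8.10
  memorySize: 128
  timeout: 10
  stage: ${opt:stage, 'dev'}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource:
        # imported from the service that owns the table
        - { 'Fn::ImportValue': '${self:provider.stage}-UsersTableArn' }
  environment:
    USERS_TABLE: ${self:custom.usersTableName}

functions:
  actionA:
    handler: services.actionA
    events:
      - http:
          method: get
          path: /actionA
          cors: true

Note that there is no resources section here: the table itself lives in, and is deployed by, the original service.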

That’s it! You can now create as many services as you need that “import” the shared DB; just remember to deploy the exporting service before any service that imports its values, since the export has to exist first. And you can use the original service just to manage the DB, which will end up being a permanent resource that does not change often.
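For completeness, here is roughly what the trimmed-down, “DB only” service looks like once the functions have moved out. Keeping the original service name (dashone) matters: it keeps the same CloudFormation stack, so the existing table and its live data are never touched.

service: dashone # same service name, so the existing stack and table are reused

custom:
  usersTableName: 'users-table-${self:provider.stage}'

provider:
  name: aws
  runtime: nodejs8.10
  stage: ${opt:stage, 'dev'}

# no functions anymore - this stack only owns the table and its export
resources:
  Resources:
    UsersDynamoDBTable:
      Type: 'AWS::DynamoDB::Table'
      Properties:
        AttributeDefinitions:
          - AttributeName: email
            AttributeType: S
        KeySchema:
          - AttributeName: email
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 5
          WriteCapacityUnits: 5
        TableName: ${self:custom.usersTableName}
  Outputs:
    UsersDynamoDBTable:
      Value:
        Fn::GetAtt:
          - UsersDynamoDBTable
          - Arn
      Export:
        Name: ${self:provider.stage}-UsersTableArn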

Hope this helps. You can reach me at @SharathPrabhal if you have any questions.