
Wednesday, June 24, 2020

One Bite of Real-world Serverless: Controlling an EC2 with Lambda, API Gateway and Sigma

Originally written for The SLAppForge Blog; Jun 19, 2020

I have been developing and blogging about Sigma, the world's first serverless IDE for serverless developers - but haven't really been using it for my non-serverless work. That was why, when a (somewhat) peculiar situation came up recently, I decided to give Sigma a full-scale spin.

The Situation: a third party needs to control one of our EC2 instances

Our parent company AdroitLogic sells an enterprise B2B messaging platform called AS2 Gateway - which comes as a simple SaaS subscription as well as an on-premise or cloud-installable dedicated deployment. (Meanwhile, part of our own team is also working on making it a completely serverless solution - we'll probably be bothering you with a whole lotta blog posts on that too, pretty soon!)

One of our potential clients needed a customized copy of the platform, first as a staging instance in our own AWS account; they would configure and test their integrations against it, before deciding on a production deployment - under a different cloud platform of their choice, in their own realm.

Their work time zone is several hours ahead of ours; keeping aside the clock skew on emails and Zoom calls, the staging instance had to be made available during their working hours, not ours.

Managing the EC2 across time zones: the Options

Obviously, we did have a few choices:

  • keep the instance running 24/7, so our client can access it anytime they want - obviously the simplest but also the costliest choice. True, one hour of EC2 time is pretty cheap - less than half a dollar - but it tends to add up pretty fast; while we continue to waste precious resources on a mostly-idling EC2 VM instance.
  • get up at 3 AM (figure of speech) every morning and launch the instance; and shut it down when we sign off - won't work if our client wishes to work late nights; besides they don't get the chance to do the testing every day, so there's still room for significant waste
  • fix up some automated schedule to start and stop the instance - pretty much the same caveats as before (minus the "getting up at 3 AM" part)
  • delegate control of the instance to our client, so they can start and stop it at their convenience

Evidently, the last option was the most economical for us (remember, the client is still in evaluation stage - and may decide not to go with us, after all), and also fairly convenient for them (just two extra steps, before and after work, plus a few seconds' startup delay).
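The cost argument above holds up to some back-of-envelope arithmetic. The hourly rate below is hypothetical (substitute your instance type's actual price), as are the client's usage hours:

```python
# Hypothetical hourly rate and usage pattern - adjust to your instance type.
HOURLY_RATE = 0.0416  # assumed $/hour

always_on = 24 * 30 * HOURLY_RATE    # running 24/7 for a month
on_demand = 8 * 12 * HOURLY_RATE     # ~8 h/day on ~12 testing days

print(f"24/7:              ${always_on:.2f}/month")
print(f"client-controlled: ${on_demand:.2f}/month")
```

Even at sub-dollar hourly rates, the always-on option costs several times more than letting the client start the instance only when they need it.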

Client-controlled EC2: how to KISS it, the right way

But on the other hand, we didn't want to overcomplicate the process either:

  • Giving them access to our AWS console was out of the question - even with highly constrained access.
  • An IAM access key pair with just the ec2:StartInstances and ec2:StopInstances permissions on the respective instance ID would have been ideal; but it would still mean they would have to either install the AWS CLI, or write (or run) some custom code snippets every time they wanted to control the instance.
  • AWS isn't, and wasn't going to be, their favorite cloud platform anyway; so any AWS-specific steps would have been an unnecessary overhead for them.

KISS, FTW!

Serverless to the rescue!

Most probably, you are already screaming out the solution: a pair of custom HTTP (API Gateway) endpoints backed by dedicated Lambdas (we're thinking serverless, after all!) that would do that very specific job - and have just that permission, nothing else, keeping with the preached-by-everybody, least privilege principle.

Our client would just have to invoke the start/stop URL (with a simple, random auth token that you choose - for extra safety), and EC2 will obey promptly.

  • No more AWS or EC2 semantics for them,
  • our budget runs smooth,
  • they have full control over the testing cycles, and
  • I get to have a good night's sleep!

ec2-control: writing it with Sigma

There were a few points in this project that required some advanced voodoo on the Sigma side:

  • Sigma does not natively support EC2 APIs (why should it; it's supposed to be for serverless computing 😎) so, in addition to writing the EC2 SDK calls, we would need to add a custom permission to each function's policy, to compensate for the automatic policy generation.
  • The custom policy would need to be as narrow as possible: just ec2:StartInstances and ec2:StopInstances actions, on just our client's staging instance. (If the URL somehow gets out and some remote hacker out there gains control of our function, we don't want them to be able to start and stop random - or perhaps not-so-random - instances in our AWS account!)
  • Both the IAM role and the function itself would need access to the instance ID (for policy minimization and the actual API call, respectively).
  • For reusability (we devs really love that, don't we? 😎) it should be possible to specify the instance ID (and the auth token) on a per-deployment basis - without embedding the values in the code or configurations, which would get checked into version control.

Template Editor FTW

Since Sigma uses CloudFormation under the hood, the solution is pretty obvious: define two template parameters for the instance ID and token, and refer them in the functions' environment variables and the IAM roles' policy statements.

Sigma does not natively support CloudFormation parameters (our team recently started working on it, so perhaps it may actually be supported at the time you read this!) but it surely allows you to specify them in your custom deployment template - which would get nicely merged into the final deployment template that Sigma would run.

Some premium bad news, and then some free good news

At the time of this writing, both the template editor and the permission manager were premium features of Sigma IDE. So if you start writing this on your own, you would either need to pay a few bucks and upgrade your account, or mess around with Sigma's configuration files to hack those pieces in (which I won't say is impossible 😎).

(After writing this project, I managed to convince our team to enable the permission manager and template editor for the free tier as well 🤗 so, by the time you read this, things may have taken a better light!)

But, as part of the way that Sigma actually works, not having a premium account does not mean that you cannot deploy an already template- or permission-customized project written by someone else; and my project is already in GitHub so you can simply open it in your Sigma IDE and deploy it, straightaway.

"But how do I provide my own instance ID and token when deploying?"

Patience. Read on.

"Old, but not obsolete" (a.k.a. more limitations, but not impossible)

As I said before, Sigma didn't natively support CloudFormation parameters; so even if you add them to the custom template, Sigma would just blindly merge and deploy the whole thing - without asking for actual values of the parameters!

While this could have been a cause for deployment failures in some cases, lucky for us, here it doesn't cause any trouble. But still, we need to provide correct, custom values for that instance ID and protection token!

Amazingly, CloudFormation allows you to just update the input parameters of an already completed deployment - without having to touch or even re-submit the deployment template:

aws cloudformation update-stack --stack-name Whatever-Stack \
  --use-previous-template --capabilities CAPABILITY_IAM \
  --parameters \
  ParameterKey=SomeKey,ParameterValue=SomeValue ...

(That command is already there, in my project's README.)

So our plan is simple:

  1. Deploy the project via Sigma, as usual.
  2. Run an update from CloudFormation side, providing just the correct instance ID and your own secret token value.

Enough talk, let's code!

Warning: You may not actually be able to write the complete project on your own, unless we have enabled custom template editing for free accounts - or you already have a premium account.

If you are just looking to deploy a copy on your own, simply open my already existing public project from https://github.com/janakaud/ec2-control - and skip over to the Ready to Deploy section.

1. ec2-start.js, a NodeJS Lambda

Note: If you use a different name for the file, your custom template would need to be adjusted - don't forget to check the details when you get to that point.

const {ec2} = require("./util");
exports.handler = async (event) => ec2(event, "startInstances", "StartingInstances");

API Gateway trigger

After writing the code,

  1. drag-n-drop an API Gateway entry from the left-side Resources pane, on to the event variable of the function,
  2. enter a few details -
    1. an API name (say EC2Control),
    2. path (say /start, or /ec2/start),
    3. HTTP method (GET would be easiest for the user - they can just paste a link into a browser!)
    4. and a stage name (say prod)
  3. under Show Advanced, turn on Enable Lambda Proxy Integration so that we will receive the query parameters (including the auth token) in the request
  4. and click Inject.

Custom permissions tab

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": {
                "Fn::Sub": "arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:instance/${EC2ID}"
            },
            "Action": [
                "ec2:StartInstances"
            ]
        }
    ]
}

2. ec2-stop.js, a NodeJS Lambda

Note: As before, if your filename is different, update the key in your custom template accordingly - details later.

const {ec2} = require("./util");
exports.handler = async (event) => ec2(event, "stopInstances", "StoppingInstances");

API Gateway trigger

Just like before, drag-n-drop and configure an APIG trigger.

  1. But this time, make sure that you select the API name and deployment stage via the Existing tabs - instead of typing in new values.
  2. Resource path would still be a new one; pick a suitable pathname as before, like /ec2/stop (consistent with the previous).
  3. Method is also your choice; natural is to stick to the previously used one.
  4. Don't forget to Enable Lambda Proxy Integration too.

Custom permissions tab

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": {
                "Fn::Sub": "arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:instance/${EC2ID}"
            },
            "Action": [
                "ec2:StopInstances"
            ]
        }
    ]
}

3. util.js, just a NodeJS file

const ec2 = new (require("aws-sdk")).EC2();

const EC2_ID = process.env.EC2_ID;
if (!EC2_ID) {
    throw new Error("EC2_ID unavailable");
}
const TOKEN = process.env.TOKEN;
if (!TOKEN) {
    throw new Error("TOKEN unavailable");
}

exports.ec2 = async (event, method, resultKey) => {
    let tok = (event.queryStringParameters || {}).token;
    if (tok !== TOKEN) {
        return {statusCode: 401};
    }
    let data = await ec2[method]({InstanceIds: [EC2_ID]}).promise();
    return {
        headers: {"Content-Type": "text/plain"},
        body: data[resultKey].map(si => `${si.PreviousState.Name} -> ${si.CurrentState.Name}`).join("\n")
    };
};

The code is pretty simple - we aren't doing much, just validating the incoming token, calling the EC2 API, and returning the state transition result (e.g. running -> stopping) back to the caller as confirmation; it will appear right in our client's browser window.

(If you were wondering why we didn't add aws-sdk as a dependency despite require()ing it; that's because aws-sdk is already available in the standard NodeJS Lambda environment. No need to bloat up our deployment package with a redundant copy - unless you wish to use some cutting-edge feature or SDK component that was released just last week.)
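From the client's side, "controlling the EC2" is then nothing more than a tokenized GET request. A minimal Python sketch (the endpoint URL and token below are placeholders, not real values):

```python
from urllib.parse import urlencode

def control_url(base, token):
    """Append the shared token as a query parameter to an endpoint URL."""
    return f"{base}?{urlencode({'token': token})}"

start = control_url(
    "https://foobarbaz0.execute-api.us-east-1.amazonaws.com/ec2/start",
    "my-secret-token")
print(start)
# opening this URL (browser, curl, or urllib.request.urlopen) returns
# a plaintext state transition, e.g. "stopped -> pending"
```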

The better part of the coordinating fat and glue is in the custom permissions and the template:

4. Custom template

{
  "Parameters": {
    "EC2ID": {
      "Type": "String",
      "Default": ""
    },
    "TOKEN": {
      "Type": "String",
      "Default": ""
    }
  },
  "Resources": {
    "ec2Start": {
      "Properties": {
        "Environment": {
          "Variables": {
            "EC2_ID": {
              "Ref": "EC2ID"
            },
            "TOKEN": {
              "Ref": "TOKEN"
            }
          }
        }
      }
    },
    "ec2Stop": {
      "Properties": {
        "Environment": {
          "Variables": {
            "EC2_ID": {
              "Ref": "EC2ID"
            },
            "TOKEN": {
              "Ref": "TOKEN"
            }
          }
        }
      }
    }
  }
}

Note: If you used some other/custom names for the Lambda code files, two object keys (ec2Start, ec2Stop) under Resources would be different - it's always better to double-check with the auto-generated template and ensure that the merged template also displays the properly-merged final version.

Deriving that one on your own isn't total voodoo magic either; after writing the rest of the project, just have a look at the auto-generated template tab, and write up a custom JSON whose pieces would merge themselves into the right places, yielding the expected final template.

We accept the EC2ID and TOKEN as parameters, and merge them into the Environment.Variables property of the Lambda definitions. (The customized IAM policies are already referencing the parameters via Fn::Sub so we don't need to do anything for them here.)

Once we have the template editor in the free tier, you would certainly have much more cool concepts to play around with - and probably also figure out so many bugs (full disclaimer: I was the one that initially wrote that feature!) which you would promptly report to us! 🤗

Ready to Deploy

When all is ready, click Deploy Project on the toolbar (or Project menu).

(If you came here on the fast-track (by directly opening my project from GitHub), Sigma may prompt you to enter values for the EC2_ID and TOKEN environment variables - just enter some dummy values; we are changing them later anyways.)

If all goes well, Sigma will build the project and deploy it, and you would end up with a Changes Summary popup with an outputs section at the bottom containing the URLs of your API Gateway endpoints.

If you accidentally closed the popup, you can get the outputs back via the Deployment tab of the Project Info window.

Copy both URLs - you would be sending these to your client.

Sigma's work is done - but we're not done yet!

Update the parameters to real values

Grab the EC2-generated identifier of your instance, and find a suitable value for the auth token (perhaps a uuid -v4?).
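If you don't have the uuid tool handy, Python's standard secrets module can mint an equally hard-to-guess token:

```python
# An alternative to `uuid -v4`: a cryptographically random, URL-safe token.
import secrets

token = secrets.token_urlsafe(32)  # 32 random bytes -> 43 URL-safe characters
print(token)
```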

Via AWS CLI

If you have AWS CLI - which is really awesome, by the way - the next step is just one command; as mentioned in the README as well:

aws cloudformation update-stack --stack-name ec2-control-Stack \
  --use-previous-template --capabilities CAPABILITY_IAM --parameters \
  ParameterKey=EC2ID,ParameterValue=i-0123456789abcdef \
  ParameterKey=TOKEN,ParameterValue=your-token-goes-here

(If you copy-paste, remember to change the parameter values!)

We tell CloudFormation "hey, I don't need to change my deployment definitions but want to change the input parameters; so go and do it for me".

The update usually takes just a few seconds; if needed, you can confirm its success by calling aws cloudformation describe-stacks --stack-name ec2-control-Stack and checking the Stacks[0].StackStatus field.
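If you prefer boto3 over the CLI, the same update is a short script. This is a sketch: the stack name matches my project, but the instance ID and token values are yours to supply, and the actual call needs AWS credentials configured:

```python
def stack_params(ec2_id, token):
    """Build the Parameters list for the update-stack call."""
    return [
        {"ParameterKey": "EC2ID", "ParameterValue": ec2_id},
        {"ParameterKey": "TOKEN", "ParameterValue": token},
    ]

def update_stack(ec2_id, token, stack="ec2-control-Stack"):
    import boto3  # imported lazily; the actual call needs AWS credentials
    cf = boto3.client("cloudformation")
    cf.update_stack(StackName=stack, UsePreviousTemplate=True,
                    Capabilities=["CAPABILITY_IAM"],
                    Parameters=stack_params(ec2_id, token))
    # block until CloudFormation reports UPDATE_COMPLETE
    cf.get_waiter("stack_update_complete").wait(StackName=stack)
```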

Via the AWS Console

If you don't have the CLI, you can still do the update via the AWS Console; while it is a bit overkill, the console provides more intuitive (and colorful) feedback regarding the progress and success of the stack update.

Complete the URLs - plus one round of testing

Add the token (?token=the-token-you-picked) to the two URLs you copied from Sigma's deployment outputs. Now they are ready to be shared with your client.

1. Test: starting up

Finally, just to make sure everything works (and avoid any unpleasant or awkward moments), open the start URL in your browser.

Assuming your instance was already stopped, you would get a plaintext response:

stopped -> pending

Within a few seconds, the instance will enter running status and become ready (obviously, this transition won't be visible to the user; but that shouldn't really matter).

2. Test: stopping

Now open the stopper URL:

running -> stopping

As before, stopped status will be reached in background within a few seconds.

0. Test: does it work without the token - hopefully not?

The "unauthorized" response doesn't have a payload, so you may want to use curl or wget to verify this one:

janaka@DESKTOP-M314LAB:~$ curl -v https://foobarbaz0.execute-api.us-east-1.amazonaws.com/ec2/stop
*   Trying 13.225.2.77...
* ...
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
* ...
* ALPN, server accepted to use http/1.1

> GET /ec2/stop HTTP/1.1
> Host: foobarbaz0.execute-api.us-east-1.amazonaws.com
> User-Agent: curl/7.47.0
> Accept: */*
>

< HTTP/1.1 401 Unauthorized
< Content-Type: application/json
< Content-Length: 0
< Connection: keep-alive
< Date: Thu, 18 Jun 2020 06:14:58 GMT
< x-amzn-RequestId: ...

All good!

Now go ahead - share just those two token-included URLs with your client - or whichever third party you wish to delegate EC2 control to; and ask them to use 'em wisely and keep 'em safe.

If the third party loses the URL(s), and whoever gets hold of them starts playing with them unnecessarily (stopping and starting things rapidly - or at random hours - for example): just run an aws cloudformation update-stack with a new TOKEN, to cut off the old access! Then share the new token with your partner, obviously warning them to be a lot more careful.

You can also tear down the whole thing in seconds - without a trace of existence (except for the CloudWatch logs from previous runs) - via:

  • Sigma's Undeploy Project toolbar button or Project menu item,
  • aws cloudformation delete-stack on the CLI, or
  • the AWS console.

Lastly, don't forget to stay tuned for more serverless bites, snacks and full-course meals from our team!

Tuesday, May 28, 2019

AWS Lambda Event Source Mappings: bringing your triggers to order from chaos

Event-driven: it's the new style. (ShutterStock)

Recently we introduced two new AWS Lambda event sources (trigger types) for your serverless projects on Sigma cloud IDE: SQS queues and DynamoDB Streams. (Yup, AWS introduced them months ago; but we're still a tiny team, caught up in a thousand and one other things as well!)

While developing support for these triggers, I noticed a common (and yeah, pretty obvious) pattern on Lambda event source trigger configurations; that I felt was worth sharing.

Why AWS Lambda triggers are messed up

Lambda - or rather AWS - has a rather peculiar and disorganized trigger architecture, to put it lightly. For different trigger types, you have to put up configurations all over the place: targets for CloudWatch Events rules, integrations for API Gateway endpoints, notification configurations for S3 bucket events, and the like. Quite a mess, compared to platforms like GCP where you can configure everything in one place: the "trigger" config of the actual target function.

Configs. Configs. All over the place.

If you have used infrastructure as code (IAC) services like CloudFormation (CF) or Terraform (TF), you would already know what I mean. You need mappings, linkages, permissions and other bells and whistles all over the place to get even a simple HTTP URL working. (SAM does simplify this a bit, but it comes with its own set of limitations - and we have tried our best to avoid such complexities in our Sigma IDE.)

Maybe this is to be expected, given the diversity of services offered by AWS, and their timeline (Lambda, after all, is just a four-year-old kid). AWS surely had to do some crazy hacks to support triggering Lambdas from so many diverse services; hence the confusing, scattered configurations.

Event Source Mappings: light at the end of the tunnel?

Event Source Mappings: light at the end of the tunnel (ShutterStock)

Luckily, the more recently introduced stream-type triggers follow a common pattern: a single event source mapping entity that connects the event source (queue, stream or table) to the target Lambda, plus an execution-role permission that lets the Lambda service read from that source.

This way, you know exactly where you should configure the trigger, and how you should allow the Lambda to consume the event stream.

No more jumping around.

This is quite convenient when you are based on an IAC like CloudFormation:

{
  ...

    // event source (SQS queue)

    "sqsq": {
      "Type": "AWS::SQS::Queue",
      "Properties": {
        "DelaySeconds": 0,
        "MaximumMessageSize": 262144,
        "MessageRetentionPeriod": 345600,
        "QueueName": "q",
        "ReceiveMessageWaitTimeSeconds": 0,
        "VisibilityTimeout": 30
      }
    },

    // event target (Lambda function)

    "tikjs": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "FunctionName": "tikjs",
        "Description": "Invokes functions defined in tik/js.js in project tik. Generated by Sigma.",
        ...
      }
    },

    // function execution role that allows it (Lambda service)
    // to query SQS and remove read messages

    "tikjsExecutionRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "ManagedPolicyArns": [
          "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
        ],
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Action": [
                "sts:AssumeRole"
              ],
              "Effect": "Allow",
              "Principal": {
                "Service": [
                  "lambda.amazonaws.com"
                ]
              }
            }
          ]
        },
        "Policies": [
          {
            "PolicyName": "tikjsPolicy",
            "PolicyDocument": {
              "Statement": [
                {
                  "Effect": "Allow",
                  "Action": [
                    "sqs:GetQueueAttributes",
                    "sqs:ReceiveMessage",
                    "sqs:DeleteMessage"
                  ],
                  "Resource": {
                    "Fn::GetAtt": [
                      "sqsq",
                      "Arn"
                    ]
                  }
                }
              ]
            }
          }
        ]
      }
    },

    // the actual event source mapping (SQS queue -> Lambda)

    "sqsqTriggertikjs0": {
      "Type": "AWS::Lambda::EventSourceMapping",
      "Properties": {
        "BatchSize": "10",
        "EventSourceArn": {
          "Fn::GetAtt": [
            "sqsq",
            "Arn"
          ]
        },
        "FunctionName": {
          "Ref": "tikjs"
        }
      }
    },

    // grants permission for SQS service to invoke the Lambda
    // when messages are available in our queue

    "sqsqPermissiontikjs": {
      "Type": "AWS::Lambda::Permission",
      "Properties": {
        "Action": "lambda:InvokeFunction",
        "FunctionName": {
          "Ref": "tikjs"
        },
        "SourceArn": {
          "Fn::GetAtt": [
            "sqsq",
            "Arn"
          ]
        },
        "Principal": "sqs.amazonaws.com"
      }
    }

  ...
}

(In fact, that was the whole reason/purpose of this post.)
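For comparison, the AWS::Lambda::EventSourceMapping resource above has a direct API-level counterpart. A minimal boto3 sketch (the queue ARN, account ID and function name are placeholders, and the actual call needs AWS credentials):

```python
def mapping_request(queue_arn, function_name, batch_size=10):
    """Build the kwargs for Lambda's CreateEventSourceMapping call."""
    return {
        "EventSourceArn": queue_arn,
        "FunctionName": function_name,
        "BatchSize": batch_size,
    }

def create_mapping():
    import boto3  # imported lazily; the actual call needs AWS credentials
    client = boto3.client("lambda")
    return client.create_event_source_mapping(
        **mapping_request("arn:aws:sqs:us-east-1:123456789012:q", "tikjs"))
```

Note that the execution-role permissions (sqs:ReceiveMessage and friends) still have to exist, just as in the CloudFormation version; the API call only creates the mapping itself.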

Tip: You do not need to worry about this whole IAC/CloudFormation thingy - or writing lengthy JSON/YAML - if you go with a fully automated resource management tool like SLAppForge Sigma serverless cloud IDE.

But... are Event Source Mappings ready for the big game?

Ready for the Big Game? (Wikipedia)

They sure look promising, but it seems event source mappings do need a bit more maturity, before we can use them in fully automated, production environments.

You cannot update an event source mapping via IAC.

For example, even after more than four years from their inception, event source mappings cannot be updated after being created via an IaC like CloudFormation or Serverless Framework. This causes serious trouble: if you update the mapping configuration, you need to manually delete the old one and deploy the new one. Get it right the first time, or you'll have to run through a hectic manual cleanup to get the whole thing working again. So much for automation!

The event source arn (aaa) and function (bbb) provided mapping already exists. Please update or delete the existing mapping...

Polling? Sounds old-school.

There are other, less-evident problems as well; for one, event source mappings are driven by polling mechanisms. If your source is an SQS queue, the Lambda service will keep polling it until the next message arrives. While this is fully out of your hands, it does mean that you pay for the polling. Plus, as a dev, I don't feel that polling exactly fits into the event-driven, serverless paradigm. Sure, everything boils down to polling in the end, but still...

In closing: why not just try out event source mappings?

Event Source Mappings FTW! (AWS docs)

Ready or not, looks like event source mappings are here to stay. With the growing popularity of data streaming (Kinesis), queue-driven distributed processing and coordination (SQS) and event ledgers (DynamoDB Streams), they will become ever more popular as time passes.

You can try out how event source mappings work, via numerous means: the AWS console, aws-cli, CloudFormation, Serverless Framework, and the easy-as-pie graphical IDE SLAppForge Sigma.

Easily manage your event source mappings - with just a drag-n-drop!

In Sigma IDE you can simply drag-n-drop an event source (SQS queue, DynamoDB table or Kinesis stream) on to the event variable of your Lambda function code. Sigma will pop-up a dialog with available mapping configurations, so you can easily configure the source mapping behavior. You can even configure an entirely new source (queue, table or stream) instead of using an existing one, right there within the pop-up.

Sigma is the shiny new thing for serverless.

When deployed, Sigma will auto-generate all necessary configurations and permissions for your new event source, and publish them to AWS for you. It's all fully managed, fully automated and fully transparent.

Enough talk. Let's start!

Thursday, April 19, 2018

Sigma QuickBuild: Towards a Faster Serverless IDE

TL;DR

The QuickBuild/QuickDeploy feature described here is pretty much obsoleted by the test framework (ingeniously hacked together by @CWidanage), that gives you a much more streamlined dev-test experience with much better response time!


In case you hadn't noticed, we have recently been chanting about a new Serverless IDE, the mighty SLAppForge Sigma.

With Sigma, developing a serverless app becomes as easy as drag-drop, code, and one-click-Deploy; no getting lost among overcomplicated dashboards, no eternal struggles with service entities and their permissions, no sailing through oceans of docs and tutorials - above all that, nothing to install (just a web browser - which you already have!).

So, how does Sigma do it all?

In case you already tried Sigma and dug a bit deeper than just deploying an app, you may have noticed that it uses AWS CodeBuild under the hood for the build phase. While CodeBuild gives us a fairly simple and convenient way of configuring and running builds, it has its own set of quirks:

  • CodeBuild takes a significant time to complete (sometimes close to a minute). This may not be a problem if you just deploy a few sample apps, but it can severely impair your productivity - especially when you begin developing your own solution, and need to reflect your code updates every time you make a change.
  • The AWS Free Tier only includes 100 minutes of CodeBuild time per month. While this sounds like a generous amount, it can expire much faster than you think - especially when developing your own app, in your usual trial-and-error cycles ;) True, CodeBuild doesn't cost much either ($0.005 per minute of build.general1.small), but why not go free while you can? :)

Options, people?

Lambda, on the other hand, has a rather impressive free quota of 1 million executions and 3.2 million seconds of execution time per month. Moreover, traffic between S3 and Lambda is free as far as we are concerned!

Oh, and S3 has a free quota of 20000 reads and 2000 writes per month - which, with some optimizations on the reads, is quite sufficient for what we are about to do.

2 + 2 = ...

So, guess what we are about to do?

Yup, we're going to update our Lambda source artifacts in S3, via Lambda itself, instead of CodeBuild!

Of course, replicating the full CodeBuild functionality via a lambda would need a fair deal of effort, but we can get away with a much simpler subset; read on!

The Big Picture

First, let's see what Sigma does when it builds a project:

  • prepare the infra for the build, such as a role and an S3 bucket, skipping any that already exist
  • create a CodeBuild project (or, if one already exists, update it to match the latest Sigma project spec)
  • invoke the project, which will:
    • download the Sigma project source from your GitHub repo,
    • run an npm install to populate its dependencies,
    • package everything into a zip file, and
    • upload the zip artifact to the S3 bucket created above
  • monitor the project progress, and retrieve the URL of the uploaded S3 file when done.

And usually every build has to be followed by a deployment; to update the lambdas of the project to point to the newly generated source archive; and that means a whole load of additional steps!

  • create a CloudFormation stack (if one does not exist)
  • create a changeset that contains the latest updates to be published
  • execute the changeset, which will, at the least, have to:
    • update each of the lambdas in the project to point to the new source zip file generated by the build, and
    • in some cases, update the triggers associated with the modified lambdas as well
  • monitor the stack progress until it gets through with the update.

All in all, well over 60-90 seconds of your precious time - all to accommodate perhaps just one line (or how about one word, or one letter?) of change!

Can we do better?

At first glance, we see quite a few redundancies and possible improvements:

  • Cloning the whole project source from scratch is overkill, especially when only a few lines/files have changed.
  • Every build will download and populate the NPM dependencies from scratch, consuming bandwidth, CPU cycles and build time.
  • The whole zip file is now being prepared from scratch after each build.
  • Since we're still in dev, running a costly CF update for every single code change doesn't make much sense.

But since CodeBuild invocations are stateless and CloudFormation's resource update logic is mostly out of our hands, we don't have the freedom to meddle with many of the above; other than simple improvements like enabling dependency caching.

Trimming down the fat

However, if we have a lambda, we have full control over how we can simplify the build!

If we think about 80% - or maybe even 90% - of the cases for running a build, we see that they merely involve changes to application logic (code); you don't add new dependencies, move your files around or change your repo URL all the time, but you sure as heck would go through an awful lot of code edits until your code starts behaving as you expect it to!

And what does this mean for our build?

80% - or even 90% - of the time, we can get away by updating just the modified files in the lambda source zip, and updating the lambda functions themselves to point to the updated file!

Behold, here comes QuickDeploy!

And that's exactly what we do, with the QuickBuild/QuickDeploy feature!

Lambda to the rescue!

QuickBuild uses a lambda (deployed in your own account, to eliminate the need for cross-account resource access) to:

  • fetch the latest CodeBuild zip artifact from S3,
  • patch the zip file to accommodate the latest code-level changes, and
  • upload the updated file back to S3, overwriting the original zip artifact

Once this is done, we can run a QuickDeploy which simply sends an UpdateFunctionCode Lambda API call to each of the affected lambda functions in your project, so that they can scoop up the latest and greatest of your serverless code!

And the whole thing does not take more than 15 seconds (give or take the network delays): a raw 4x improvement in your serverless dev workflow!

A sneak peek

First of all, we need a lambda that can modify an S3-hosted zip file based on a given set of input files. While it's easy to make with NodeJS, it's even easier with Python, and requires zero external dependencies as well:

Here we go... Pythonic!

import boto3

from zipfile import ZipFile, ZipInfo, ZIP_DEFLATED

s3_client = boto3.client('s3')

def handler(event, context):
  src = event["src"]
  if src.startswith("s3://"):
    src = src[5:]
  
  bucket, key = src.split("/", 1)
  src_name = "/tmp/" + key[(key.rfind("/") + 1):]
  dst_name = src_name + "_modified"
  
  s3_client.download_file(bucket, key, src_name)
  zin = ZipFile(src_name, 'r')
  
  diff = event["changes"]
  zout = ZipFile(dst_name, 'w', ZIP_DEFLATED)
  
  added = 0
  modified = 0
  
  # files that already exist in the archive
  for info in zin.infolist():
    name = info.filename
    if (name in diff):
      modified += 1
      zout.writestr(info, diff.pop(name))
    else:
      zout.writestr(info, zin.read(info))
  
  # files in the diff that are not in the archive
  # (i.e. newly added files)
  for name in diff:
    info = ZipInfo(name)
    info.external_attr = 0o755 << 16
    added += 1
    zout.writestr(info, diff[name])
  
  zout.close()
  zin.close()
  
  s3_client.upload_file(dst_name, bucket, key)
  return {
    'added': added,
    'modified': modified
  }
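
Incidentally, the zip-patching logic is easy to sanity-check locally, minus the S3 round trip - a minimal sketch against an in-memory archive (file names here are made up for illustration):

```python
import io
from zipfile import ZipFile, ZipInfo, ZIP_DEFLATED

# build a sample "artifact" zip with two files
buf = io.BytesIO()
with ZipFile(buf, 'w', ZIP_DEFLATED) as z:
    z.writestr('index.js', 'old handler code')
    z.writestr('util.js', 'old util code')

# the diff: one modified file, plus one brand-new file
diff = {'index.js': 'new handler code', 'lib/extra.js': 'added file'}

out = io.BytesIO()
added = modified = 0
with ZipFile(buf, 'r') as zin, ZipFile(out, 'w', ZIP_DEFLATED) as zout:
    for info in zin.infolist():          # entries already in the archive
        name = info.filename
        if name in diff:
            modified += 1
            zout.writestr(info, diff.pop(name))
        else:
            zout.writestr(info, zin.read(info))
    for name in diff:                    # leftovers = newly added files
        info = ZipInfo(name)
        info.external_attr = 0o755 << 16
        added += 1
        zout.writestr(info, diff[name])

with ZipFile(out) as z:
    print(sorted(z.namelist()), added, modified)
    # → ['index.js', 'lib/extra.js', 'util.js'] 1 1
```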

We can directly invoke the lambda using the Invoke API, hence we don't need to define a trigger for the function; just a role with S3 full access permissions would do. (We use full access here because we would be reading from/writing to different buckets at different times.)
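That said, if full access feels too broad for your taste, a role policy scoped down to the artifact bucket(s) would work just as well - a sketch, with a placeholder bucket name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::your-artifact-bucket/*"
    }
  ]
}
```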

CloudFormation, you beauty.

From what I see, the coolest thing about this contraption is that you can stuff it all into a single CloudFormation template (remember the lambda command shell?) that can be deployed (and undeployed) in one go:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  zipedit:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: zipedit
      Handler: index.handler
      Runtime: python2.7
      Code:
        ZipFile: |
          import boto3
          
          from zipfile import ZipFile, ZipInfo, ZIP_DEFLATED
          
          s3_client = boto3.client('s3')
          
          def handler(event, context):
            src = event["src"]
            if src.startswith("s3://"):
              src = src[5:]
            
            bucket, key = src.split("/", 1)
            src_name = "/tmp/" + key[(key.rfind("/") + 1):]
            dst_name = src_name + "_modified"
            
            s3_client.download_file(bucket, key, src_name)
            zin = ZipFile(src_name, 'r')
            
            diff = event["changes"]
            zout = ZipFile(dst_name, 'w', ZIP_DEFLATED)
            
            added = 0
            modified = 0
            
            # files that already exist in the archive
            for info in zin.infolist():
              name = info.filename
              if (name in diff):
                modified += 1
                zout.writestr(info, diff.pop(name))
              else:
                zout.writestr(info, zin.read(info))
            
            # files in the diff that are not in the archive
            # (i.e. newly added files)
            for name in diff:
              info = ZipInfo(name)
              info.external_attr = 0o755 << 16
              added += 1
              zout.writestr(info, diff[name])
            
            zout.close()
            zin.close()
            
            s3_client.upload_file(dst_name, bucket, key)
            return {
                'added': added,
                'modified': modified
            }
      Timeout: 60
      MemorySize: 256
      Role:
        Fn::GetAtt:
        - role
        - Arn
  role:
    Type: AWS::IAM::Role
    Properties:
      ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      - arn:aws:iam::aws:policy/AmazonS3FullAccess
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Action: sts:AssumeRole
          Effect: Allow
          Principal:
            Service: lambda.amazonaws.com

Moment of truth

Once the stack is ready, we can start submitting our QuickBuild requests to the lambda!

// assuming auth stuff is already done
let lambda = new AWS.Lambda({region: "us-east-1"});

// ...

lambda.invoke({
  FunctionName: "zipedit",
  Payload: JSON.stringify({
    src: "s3://bucket/path/to/archive.zip",
    changes: {
      "path/to/file1/inside/archive": "new content of file1",
      "path/to/file2/inside/archive": "new content of file2",
      // ...
    }
  })
}, (err, data) => {
  if (err) {
    // invocation failed - bail out
    return console.error(err);
  }
  let result = JSON.parse(data.Payload);
  let totalChanges = result.added + result.modified;
  if (totalChanges === expected_no_of_files_from_changes_list) {
    // all izz well!
  } else {
    // too bad, we missed a spot :(
  }
});

Once QuickBuild has completed updating the artifact, it's simply a matter of calling UpdateFunctionCode on the affected lambdas, with the S3 URL of the artifact:

lambda.updateFunctionCode({
  FunctionName: "original_function_name",
  S3Bucket: "bucket",
  S3Key: "path/to/archive.zip"
})
.promise()
.then(() => { /* done! */ })
.catch(err => { /* something went wrong :( */ });

(In our case the S3 URL remains unchanged - our lambda simply overwrites the original file - but it still works, because the Lambda service makes a copy of the code artifact when updating the target lambda.)

To speed up the QuickDeploy for multiple lambdas, we can even parallelize the UpdateFunctionCode calls:

Promise.all(
  lambdaNames.map(name =>
    lambda.updateFunctionCode({ /* params */ })
    .promise()
    .then(() => { /* done for this one! */ })))
.then(() => { /* all good! */ })
.catch(err => { /* failures; handle them! */ });

And that's how we gained an initial 4x improvement in our lambda deployment cycle, sometimes even faster than the native AWS Lambda console!