
Friday, November 29, 2019

Google Cloud has the fastest build now - a.k.a. boo hoo, AWS!

SLAppForge Sigma cloud IDE - for an ultra-fast serverless ride! (courtesy of Pexels)

Building a deployable serverless code artifact is a key functionality of our SLAppForge Sigma cloud IDE - the first-ever, completely browser-based solution for serverless development. The faster the serverless build, the sooner you can get along with the deployment - and the sooner you get to see your serverless application up and running.

AWS was "slow" too - but then came Quick Build.

In the early days, back when we supported only AWS, our mainstream build was driven by CodeBuild. This had several drawbacks: it usually took 10-20 seconds for a build to complete, and it was rather repetitive - cloning the repo and downloading dependencies each time. Plus, CodeBuild only offers 100 free build minutes per month, so it added a cost - small, but real - to ourselves, as well as to our users.

Then we noticed that we only need to modify the previous build artifact in order to get the new code rolling. So I wrote a "quick build"; basically a Lambda that downloads the last build artifact, updates the zipfile with the changed code files, and re-uploads it as the current artifact. This was accompanied by a "quick deploy" that directly updates the code of affected functions, thereby avoiding the overhead of a complete CloudFormation deployment.
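
Conceptually, the quick build could look something like the sketch below - a hypothetical Lambda assuming aws-sdk and jszip, with illustrative event and bucket shapes (not Sigma's actual code):

const AWS = require("aws-sdk");
const JSZip = require("jszip");
const s3 = new AWS.S3();

exports.handler = async (event) => {
    // assumed event shape: {Bucket, Key, changedFiles: [{path, code}]}
    const {Bucket, Key, changedFiles} = event;

    // download the last build artifact...
    const previous = await s3.getObject({Bucket, Key}).promise();

    // ...patch only the changed code files inside the zip...
    const zip = await JSZip.loadAsync(previous.Body);
    changedFiles.forEach(f => zip.file(f.path, f.code));

    // ...and re-upload it as the current artifact
    const Body = await zip.generateAsync({type: "nodebuffer", compression: "DEFLATE"});
    await s3.putObject({Bucket, Key, Body}).promise();
    return {Bucket, Key};
};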

Then our ex-Adroit wizard Chathura built a test environment, and things changed drastically. The test environment (basically a warm Lambda, replicating the content of the user's project) already had everything; all code files and dependencies, pre-loaded. Now "quick build" was just a matter of zipping everything up from within the test environment itself, and uploading it to S3; just one network call instead of two.

GCP build - still in stone age?

When we introduced GCP support, the build was again based on their Cloud Build, a.k.a. Container Builder service. Although GCP did offer 3600(!) free build minutes per month (120 each day; see what I'm talking about, AWS?), theirs was generally slower than CodeBuild. So, for several months, Sigma's GCP support had the bad reputation of having the slowest build-deployment cycle.

But now, it is no longer the case.

Wait, what? It only needs code - no dependencies?

There's a very interesting characteristic of Cloud Functions:

When you deploy your function, Cloud Functions installs dependencies declared in the package.json file using the npm install command.

-Google Cloud Functions: Specifying dependencies in Node.js

This means, for deploying, you just have to upload a zipfile containing the sources and a dependencies file (package.json, requirements.txt and the like). No more npm install, or megabyte-sized bundle uploads.

But, the coolest part is...

... you can do it completely within the browser!

jszip FTW!

That awesome jszip package does it all for us, in just a couple of lines:

let zip = new JSZip();

files.forEach(file => zip.file(file.name, file.code));

/*
a bit more complex, actually - e.g. for a nested file 'folder/file'
zip.folder(folder.name).file(file.name, file.code)
*/

let data = await zip.generateAsync({
 type: "string",
 streamFiles: true,
 compression: "DEFLATE"
});

We just zip up all code files in our project, plus the Node/npm package.json and/or Python/pip requirements.txt...

...and upload them to a Cloud Storage bucket:

let bucket = "your-bucket-name";
let key = "path/to/upload";

gapi.client.request({
 path: `/upload/storage/v1/b/${bucket}/o`,
 method: "POST",
 params: {
  uploadType: "media",
  name: key
 },
 headers: {
  "Content-Type": "application/zip",
  "Content-Encoding": "base64"
 },
 body: btoa(data)
}).then(res => {
 console.debug("GS upload successful", res);

 return {
  Bucket: res.result.bucket,
  Key: res.result.name
 };
});

Now we can add the Cloud Storage object path into our Deployment Manager template right away!

...
{
 "name": "goofunc",
 "type": "cloudfunctions.v1beta2.function",
 "properties": {
  "function": "goofunc",
  "sourceArchiveUrl": "gs://your-bucket-name/path/to/upload",
  "entryPoint": ...
 }
}

So, how fast is it - for real?

  1. jszip runs in-memory and takes just a few millis - as expected.
  2. If it's the first time after the IDE is loaded, the Google APIs JS client library takes a few seconds to load.
  3. After that, it's a single Cloud Storage API call - to upload our teeny tiny zipfile into our designated Cloud Storage bucket sigma-slappforge-{your Cloud Platform project name}-build-artifacts!
  4. If the bucket is not yet available, and the upload fails as a result, we have two more steps - create the bucket and then re-run the upload (sketched below). This happens only once in a lifetime.
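
For the curious, the create-and-retry fallback in step 4 could be sketched like so; the paths follow the Cloud Storage JSON API, while the names and error handling are illustrative:

const uploadArtifact = (bucket, key, data) =>
    gapi.client.request({
        path: `/upload/storage/v1/b/${bucket}/o`,
        method: "POST",
        params: {uploadType: "media", name: key},
        headers: {"Content-Type": "application/zip", "Content-Encoding": "base64"},
        body: btoa(data)
    });

const uploadWithBucketFallback = (project, bucket, key, data) =>
    uploadArtifact(bucket, key, data)
        .then(null, err => {
            // assume a missing bucket surfaces as a 404; anything else is fatal
            if (err.status !== 404) {
                throw err;
            }
            // create the bucket, then retry the upload - happens only once per project
            return gapi.client.request({
                path: "/storage/v1/b",
                method: "POST",
                params: {project},
                body: {name: bucket}
            }).then(() => uploadArtifact(bucket, key, data));
        });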

So for a routine serverless developer, skipping steps 2 and 4, the whole process takes around just one second - the faster your network, the faster it all is!

In comparison to AWS builds - where we first have to run a dependency sync and then a build, each preceded by HTTP OPTIONS requests thanks to CORS restrictions - this is lightning fast!

(And yeah, this is one of those places where the googleapis client library shines; high above aws-sdk.)

Enough reading - let's roll!

I am a Google Cloud fan by nature - perhaps because my "online" life started with Gmail, and my "cloud dev" life started with Google Apps Script and App Engine. So I'm certainly biased here.

Still, when you really think about it, Google Cloud is way simpler and far more organized than AWS. While this could be a disadvantage when it comes to advanced serverless apps - say, "how do I trigger my Cloud Function periodically?" - GCF is pretty simple, easy and fast. Very much so, when all you need is a serverless HTTP endpoint (webhook) or a bucket/queue consumer up and running in a few minutes.

And, when you do that with Sigma IDE, those few minutes could even drop to a matter of seconds - thanks to the brand-new quick build!

So, why waste time reading this - when you can just go and do it right away?!

Monday, July 16, 2018

Google, Here We Come!

Google is catching up fast on the serverless race.

And we are, too.

That's how our serverless IDE Sigma recently started supporting integrations with Google Cloud Platform, pet-named GCP.

What the heck does that mean?

To be extra honest, there's nothing new so far. Sigma had had the capability to interact with GCP (and any other cloud service, for that matter) right from the beginning; we just didn't have the scaffolding to make things easy and fun—the nice drag-n-drops and UI pop-ups, and the boilerplate code for handling credentials.

Now, with the bits 'n' pieces in place, you can drag, drop and operate on your GCP stuff right inside your Sigma project!

(*) Conditions apply

Of course, there are some important points (well, let's face it, limitations):

  • GCP access is confined to one project, meaning that all cloud resources accessed in a Sigma project should come from a single GCP project. You can of course work around this by writing custom authorization logic on your own (see the sketch after this list), but drag-n-drop stuff will still work only for the first project you configured.
  • You can only interact with existing resources, and only as operations (no trigger support). New resource creation and trigger configurations require deeper integration with GCP, which will become available soon (fingers crossed).
  • Only a handful of resource types are currently supported—including Storage, Pub/Sub, Datastore and SQL. These should, however, cover most of the common scenarios; we are hoping to give more attention to big data and machine learning stuff in the immediate future—which Google also seems to be thriving on—but feel free to shout out so we can reconsider our strategy!
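
For example, the single-project workaround could be as simple as hand-rolling a second googleapis client with its own service account key (the key file name and resource type here are illustrative):

const {google} = require("googleapis");

const otherKey = require("./other-project-key.json");   // assumed second key file
const otherAuth = new google.auth.JWT({
    email: otherKey.client_email,
    key: otherKey.private_key,
    scopes: ["https://www.googleapis.com/auth/cloud-platform"]
});

// pass the auth explicitly, leaving the global (first-project) credentials intact
const storage = google.storage({version: "v1", auth: otherAuth});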

So, where can I get this thing?

'GCP Resources' button on the Resources pane

  • Now you can provide a service account key from the desired GCP project in order to access its resources via Sigma. Click Authorize, paste the JSON service account key, and press Save.

GCP Resources pane with 'Authorize' button

GCP Service Account Key pop-up

  • The Resources pane will now display the available GCP resource types. You can drag-n-drop them into your code, configure new operations and edit existing ones, just like you would do with AWS resources.

Resources pane displaying available GCP resource types

(Disclaimer: The list you see may be different, depending on how old this post is.)

But you said there's nothing new!

How could I have done this earlier?

Well, if you look closely, you'll see what we do behind the scenes:

  • We have two environment variables, GCP_SERVICE_TOKEN and GCP_PROJECT_ID, which expose the GCP project identity and access to your Sigma app.
  • Sigma has an Authorizer file that reads these values and configures the googleapis NodeJS client library, using them as credentials.
  • Your code imports Authorizer (which runs the above-mentioned configuration stuff), and invokes operations via googleapis, effectively operating on the resources in your GCP project - roughly like the sketch below.
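
A minimal sketch of what such an Authorizer could look like - assuming GCP_SERVICE_TOKEN holds the service account key as a JSON string (the actual Sigma file may differ):

const {google} = require("googleapis");

// assumption: GCP_SERVICE_TOKEN carries the service account key JSON
const key = JSON.parse(process.env.GCP_SERVICE_TOKEN);
const projectId = process.env.GCP_PROJECT_ID;

const jwtClient = new google.auth.JWT({
    email: key.client_email,
    key: key.private_key,
    scopes: ["https://www.googleapis.com/auth/cloud-platform"]
});
google.options({auth: jwtClient});

module.exports = {google, projectId};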

See? No voodoo.

Of course, back in those days, you wouldn't have seen the nice UI pop-ups, or the drag 'n' drop capability; so we did have to do a bit of work, after all.

Not cool! I want more!!

Relax, the really cool stuff is already on the way:

  • Ability to define new GCP resources, right within the IDE (drag-drop-configure, of course!)
  • Ability to use GCP resources as triggers (say, fire a cloud function via a Cloud Storage bucket or a Pub/Sub topic)

And, saving the best for the last,

  • Complete application deployments on Google Cloud Platform!

And...

  • A complete cross-cloud experience: operating on AWS from within GCP, and vice versa!
  • With other players (MS Azure, Alibaba Cloud and so forth) joining the game, pretty soon!

So, stay tuned!

Sunday, April 22, 2018

Deploying your stuff with Google Cloud Deployment Manager: via NodeJS

This may not be the correct way; heck, this may be the crappiest way. I'm putting this up because I could not find a single decent sample on how to do it with JS.

The approach in this post uses NodeJS (server-side), but it is possible to do the same on the client side by loading the Google API client and subsequently the deploymentmanager v2 module; I'll write about it as well, if/when I get a chance.
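
For reference, the client-side flavor would look roughly like this - a sketch, assuming the gapi script is already loaded on the page and OAuth is configured:

gapi.load("client", () => {
    gapi.client.init({ /* apiKey, clientId, scope - as per your OAuth setup */ })
        .then(() => gapi.client.load("deploymentmanager", "v2"))
        .then(() => gapi.client.deploymentmanager.deployments.get({
            project: "your-gcp-project-id",
            deployment: "your-deployment-name"
        }))
        .then(res => console.log("Fingerprint", res.result.fingerprint));
});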

First you set up authentication so your googleapis client can obtain a token automatically.

Assuming that you have added the googleapis:28.0.1 NPM module to your dependencies list, and downloaded your service account key into the current directory (where the deploymentmanager-invoking code resides):

const google = require("googleapis").google;

const key = require("./keys.json");
const jwtClient = new google.auth.JWT({
    email: key.client_email,
    key: key.private_key,
    scopes: ["https://www.googleapis.com/auth/cloud-platform"]
});
google.options({auth: jwtClient});

I used a service account, so YMMV.

If you like, you can cache the token at dev time by adding some more gimmicks: I used axios-debug-log to intercept the auth response and persist the token to a local file, from which I read the token during subsequent runs. (If the token expires, the JWT client will automatically refresh it, and the interceptor will persist the new one.)

process.env.log = "axios";
const tokenFile = "./token.json";
require("axios-debug-log")({
    // disable extra logging
    request: function (debug, config) {},
    error: function (debug, error) {},
    response: function (debug, response) {
        // grab and save access token for reuse
        if (response.data.access_token) {
            console.log("Updating token");
            require("fs").writeFile(tokenFile, JSON.stringify(response.data));
        }
    },
});

// load saved token; if success, use OAuth2 client with loaded token instead of JWT client
// (avoid re-auth at each run)
try {
    const token = require(tokenFile);
    if (!token.access_token) {
        throw Error("no token found");
    }
    token.refresh_token = token.refresh_token || "refreshtoken";    //mocking
    console.log("Using saved tokens from", tokenFile);
    jwtClient.setCredentials(token);
} catch (e) {
    console.log(e.message);
}

Fair enough. Now to get the current state of the deployment:

const projectId = "your-gcp-project-id";
const deployment = "your-deployment-name";

const deployments = google.deploymentmanager("v2").deployments;

let fingerprint = null;

// (named 'current', since 'deployment' is already taken by the const above)
let current = deployments.get({
    project: projectId,
    deployment: deployment
})
    .then(response => {
        fingerprint = response.data.fingerprint;
        console.log("Fingerprint", fingerprint);
        return Promise.resolve(response);
    })
    .then(response => {
        // continue the logic
    });

The "fingerprint logic" is needed because we need to pass a "fingerprint" to every "write" (update (preview/start), stop, cancelPreview etc.) operation in order to guarantee in-order execution and operation synchronization.

That done, we set up an update for our deployment by creating a deployment preview (shell) within the last .then():

    .then(response => {
        console.log("Creating deployment preview", deployment);
        return deployments.update({
            project: projectId,
            deployment: deployment,
            preview: true,
            resource: {
                name: deployment,
                fingerprint: fingerprint,
                target: {
                    config: {
                        content: JSON.stringify({
                            resources: [
                                /* your resource definitions here; e.g.

                                {
                                    name: "myGcsBucket",
                                    type: "storage.v1.bucket",
                                    properties: {
                                        storageClass: "STANDARD",
                                        location: "US",
                                        labels: {
                                            "keyOne": "valueOne"
                                        }
                                    }
                                }
                                
                                and so on */
                            ]
                        }, null, 2)
                    }
                }
            }
        })
            .catch(e => err("Failed to preview deployment", e))
    })

// small utility function for one-line throws

const err = (msg, e) => {
    console.log(`${msg}: ${e}`);
    throw e;
};

Notice that we passed fingerprint as part of the payload. Without it, Google would complain that it expected one.

But now, we again need to call deployments.get() because the fingerprint would have been updated! (Why the heck doesn't Google return the fingerprint in the response itself?!)

Maybe it's easier to just wrap the modification calls inside a utility code snippet:

const filter = {
    project: projectId,
    deployment: deployment
};

const ensureFingerprint = promise =>
    promise
        .then(response => deployments.get(filter))
        .then(response => {
            fingerprint = response.data.fingerprint;
            console.log("Fingerprint", fingerprint);
            return Promise.resolve(response);
        });

// ...

let preview = ensureFingerprint(Promise.resolve(null))   // only obtain the fingerprint
    .then(response => {
        console.log("Creating deployment preview", deployment);
        return ensureFingerprint(deployments.update({
            // same payload from previous code block
        }))
            .catch(e => err("Failed to preview deployment", e))
    })

True, it's nasty to have a global fingerprint variable. You can pick your own way.

Meanwhile, if the initial deployments.get() fails due to a deployment being not found by the given name, we can create one (along with a preview) right away:

    .catch(e => {
        // fail unless the error is a 'not found' error
        if (e.code === 404) {
            console.log("Deployment", deployment, "not found, creating");
            return ensureFingerprint(deployments.insert({
                // identical to deployments.create(), except for missing fingerprint
                project: projectId,
                deployment: deployment,
                preview: true,
                resource: {
                    name: deployment,
                    target: {
                        config: {
                            content: JSON.stringify({
                                resources: [
                                    // your resource definitions here
                                ]
                            }, null, 2)
                        }
                    }
                }
            }))
                .catch(e => err("Deployment creation failed", e));
        } else {
            err("Unknown failure in previewing deployment", e);
        }
    });

Now let's keep on "monitoring" the preview until it reaches a stable state (DONE, CANCELLED etc.):

// small utility to run a timer task without multiple concurrent requests

const startTimer = (func, timer, period) => {
    let caller = () => {
        func().then(repeat => {
            if (repeat) {
                timer.handle = setTimeout(caller, period);
            }
        });
    };
    timer.handle = setTimeout(caller, period);
};

let timer = {
    handle: null
};
preview.then(response => {
    console.log("Starting preview monitor", deployment);
    startTimer(() => {
        return deployments.get(filter)
            .catch(e => {
                //TODO detect and ignore temporary failures
                err("Unexpected error in monitoring preview", e);
            })
            .then(response => {
                let op = response.data.operation;
                let status = op.status;
                console.log(status, "at", op.progress, "%");

            })
    }, timer, 5000);
});

And check if we reached a terminal (completion) state:

const SUCCESS_STATES = ["SUCCESS", "DONE"];
const FAILURE_STATES = ["FAILURE", "CANCELLED"];
const COMPLETE_STATES = SUCCESS_STATES.concat(FAILURE_STATES);

// ...

            .then(response => {
                // ...

                if (COMPLETE_STATES.includes(status)) {
                    console.log("Preview completed with status", status);
                    if (SUCCESS_STATES.includes(status)) {
                        if (op.error) {
                            console.error("Errors:", op.error);
                        } else {
                            // success - kick off the actual deployment (see deploy() below)
                        }
                    } else if (FAILURE_STATES.includes(status)) {
                        console.log("Preview failed, skipping deployment");
                    }
                    return false;
                }
                return true;

If we reach a success state, we can commence the actual deployment:

                        // ...
                        } else {
                            deploy();
                        }

// ...

const deploy = () => {
    let deployer = () => {
        console.log("Starting deployment", deployment);
        return deployments.update({
            project: projectId,
            deployment: deployment,
            preview: false,
            resource: {
                name: deployment,
                // fingerprint goes inside the resource payload, as in the preview step
                fingerprint: fingerprint
            }
        })
            .catch(e => err("Deployment startup failed", e))
    };

And start monitoring again, until we reach a completion state:

    // ...

    deployer().then(response => {
        console.log("Starting deployment monitor", deployment);
        startTimer(() => {
            return deployments.get(filter)
                .catch(e => {
                    //TODO detect and ignore temporary failures
                    err("Unexpected error in monitoring deployment", e);
                })
                .then(response => {
                    let op = response.data.operation;
                    let status = op.status;
                    console.log(status, "at", op.progress, "%");

                    if (COMPLETE_STATES.includes(status)) {
                        console.log("Deployment completed with status", status);
                        if (op.error) {
                            console.error("Errors:", op.error);
                        }
                        return false;  // stop
                    }
                    return true;  // continue
                })
        }, timer, 5000);
    });
};

Recap:

const SUCCESS_STATES = ["SUCCESS", "DONE"];
const FAILURE_STATES = ["FAILURE", "CANCELLED"];
const COMPLETE_STATES = SUCCESS_STATES.concat(FAILURE_STATES);

const google = require("googleapis").google;

const key = require("./keys.json");
const jwtClient = new google.auth.JWT({
    email: key.client_email,
    key: key.private_key,
    scopes: ["https://www.googleapis.com/auth/cloud-platform"]
});
google.options({auth: jwtClient});

const projectId = "your-gcp-project-id";
const deployment = "your-deployment-name";

// small utility to run a timer task without multiple concurrent requests

const startTimer = (func, timer, period) => {
    let caller = () => {
        func().then(repeat => {
            if (repeat) {
                timer.handle = setTimeout(caller, period);
            }
        });
    };
    timer.handle = setTimeout(caller, period);
};

// small utility function for one-line throws

const err = (msg, e) => {
    console.log(`${msg}: ${e}`);
    throw e;
};

let timer = {
    handle: null
};

const deployments = google.deploymentmanager("v2").deployments;

const filter = {
    project: projectId,
    deployment: deployment
};

let fingerprint = null;
const ensureFingerprint = promise =>
    promise
        .then(response => deployments.get(filter))
        .then(response => {
            fingerprint = response.data.fingerprint;
            console.log("Fingerprint", fingerprint);
            return Promise.resolve(response);
        });

let preview = ensureFingerprint(Promise.resolve(null))   // only obtain the fingerprint
    .then(response => {
        console.log("Creating deployment preview", deployment);
        return ensureFingerprint(deployments.update({
            project: projectId,
            deployment: deployment,
            preview: true,
            resource: {
                name: deployment,
                fingerprint: fingerprint,
                target: {
                    config: {
                        content: JSON.stringify({
                            resources: [
                                // your resource definitions here
                            ]
                        }, null, 2)
                    }
                }
            }
        }))
            .catch(e => err("Failed to preview deployment", e))
    })
    .catch(e => {
        // fail unless the error is a 'not found' error
        if (e.code === 404) {
            console.log("Deployment", deployment, "not found, creating");
            return ensureFingerprint(deployments.insert({
                // identical to deployments.create(), except for missing fingerprint
                project: projectId,
                deployment: deployment,
                preview: true,
                resource: {
                    name: deployment,
                    target: {
                        config: {
                            content: JSON.stringify({
                                resources: [
                                    // your resource definitions here
                                ]
                            }, null, 2)
                        }
                    }
                }
            }))
                .catch(e => err("Deployment creation failed", e));
        } else {
            err("Unknown failure in previewing deployment", e);
        }
    });

preview.then(response => {
    console.log("Starting preview monitor", deployment);
    startTimer(() => {
        return deployments.get(filter)
            .catch(e => {
                //TODO detect and ignore temporary failures
                err("Unexpected error in monitoring preview", e);
            })
            .then(response => {
                let op = response.data.operation;
                let status = op.status;
                console.log(status, "at", op.progress, "%");

                if (COMPLETE_STATES.includes(status)) {
                    console.log("Preview completed with status", status);
                    if (SUCCESS_STATES.includes(status)) {
                        if (op.error) {
                            console.error("Errors:", op.error);
                        } else {
                            deploy();
                        }
                    } else if (FAILURE_STATES.includes(status)) {
                        console.log("Preview failed, skipping deployment");
                    }
                    return false;  // stop
                }
                return true;  // continue
            })
    }, timer, 5000);
});

const deploy = () => {
    let deployer = () => {
        console.log("Starting deployment", deployment);
        return deployments.update({
            project: projectId,
            deployment: deployment,
            preview: false,
            resource: {
                name: deployment,
                fingerprint: fingerprint
            }
        })
            .catch(e => err("Deployment startup failed", e))
    };

    deployer().then(response => {
        console.log("Starting deployment monitor", deployment);
        startTimer(() => {
            return deployments.get(filter)
                .catch(e => {
                    //TODO detect and ignore temporary failures
                    err("Unexpected error in monitoring deployment", e);
                })
                .then(response => {
                    let op = response.data.operation;
                    let status = op.status;
                    console.log(status, "at", op.progress, "%");

                    if (COMPLETE_STATES.includes(status)) {
                        console.log("Deployment completed with status", status);
                        if (op.error) {
                            console.error("Errors:", op.error);
                        }
                        return false;  // stop
                    }
                    return true;  // continue
                })
        }, timer, 5000);
    });
};

That should be enough to get you going.

Good luck!

Serverless is the new Build Server: Google CloudBuild (Container Builder) via NodeJS

Google's CloudBuild (a.k.a. "Container Builder") is an on-demand, container-based build service offered under the Google Cloud Platform (GCP). For you and me, it is a nice alternative to maintaining and paying for our own build server, and a clever addition to anyone's CI stack.

CloudBuild allows you to start from a source (e.g. a Google Cloud Source repo, a GCS bucket, or perhaps even nothing - a blank directory; "scratch"), incrementally apply several Docker container runs upon it, and publish the final result to a desired location, like a Docker repository or a GCS bucket.

With its wide variety of custom builders, CloudBuild can do almost anything - that is, as far as I can see, anything that can be achieved by a Docker container and a volume mount can be fulfilled in CloudBuild as well. To our great satisfaction, this includes fetching sources from GitHub/BitBucket repos (in addition to the native source location options), running custom commands like zip, and much more!

Above all this (how nice of GCP!), CloudBuild gives you 2 whole hours (120 minutes) of build time per day, for free - compared to the equivalent CodeBuild service of AWS, which offers just 1 hour and 40 minutes per month!

So, now let's have a look at how we can run a CloudBuild via JS (server-side NodeJS):

First things first: add the googleapis:28.0.1 NPM module to our dependency list:

{
  "dependencies": {
    "googleapis": "28.0.1"
  }
}

Don't forget the npm install!

In our logic flow, first we need to get ourselves authenticated; with the google-auth-library module that comes with googleapis, this is quite straightforward because the client can be fed with a JWT auth client right from the beginning, which will handle all the auth stuff behind the scenes:

const projectId = "my-gcp-project-id";

const google = require("googleapis").google;

const key = require("./keys.json");
const jwtClient = new google.auth.JWT({
    email: key.client_email,
    key: key.private_key,
    scopes: ["https://www.googleapis.com/auth/cloud-platform"]
});
google.options({auth: jwtClient});

Note that, for the above code to work verbatim, you need to place a service account key file in the current directory (usually obtained by creating a new service account via the Google Cloud console, in case you don't already have one).

Now we can simply retrieve the v1 version of the cloudbuild client from google, and start our magic:

const builds = google.cloudbuild("v1").projects.builds;

First we submit a build "spec" to the CloudBuild service. Below is an example for a typical NodeJS module on GitHub:

builds.create({
    projectId: projectId,
    resource: {
        steps: [
            {
                name: "gcr.io/cloud-builders/git",
                args: ["clone", "https://github.com/slappforge/slappforge-sdk", "."]
            },
            {
                name: "gcr.io/cloud-builders/npm",
                args: ["install"]
            },
            {
                name: "kramos/alpine-zip",
                args: [
                    "-q",
                    "-x", "package.json", ".git/", ".git/**", "README.md",
                    "-r",
                    "slappforge-sdk.zip",
                    "."
                ]
            },
            {
                name: "gcr.io/cloud-builders/gsutil",
                args: [
                    "cp",
                    "slappforge-sdk.zip",
                    "gs://sdk-archives/slappforge-sdk/$BUILD_ID/slappforge-sdk.zip"
                ]
            }
        ]
    }
})
    .catch(e => {
        throw Error("Failed to start build: " + e);
    })

Basically we retrieve the source from GitHub, fetch the dependencies via an npm install, bundle the whole thing using a zip command container (took me a while to figure that out, which is why I'm posting this!) and upload the resulting zip to a GCS bucket.

We can tidy this up a bit (and perhaps make the template reusable for subsequent builds) by extracting the parameters into a substitutions section:

const repoUrl = "https://github.com/slappforge/slappforge-sdk";
const projectName = "slappforge-sdk";
const bucket = "sdk-archives";

builds.create({
    projectId: projectId,
    resource: {
        steps: [
            {
                name: "gcr.io/cloud-builders/git",
                args: ["clone", "$_REPO_URL", "."]
            },
            {
                name: "gcr.io/cloud-builders/npm",
                args: ["install"]
            },
            {
                name: "kramos/alpine-zip",
                args: [
                    "-q",
                    "-x", "package.json", ".git/", ".git/**", "README.md",
                    "-r",
                    "$_PROJECT_NAME.zip",
                    "."
                ]
            },
            {
                name: "gcr.io/cloud-builders/gsutil",
                args: [
                    "cp",
                    "$_PROJECT_NAME.zip",
                    "gs://$_BUCKET_NAME/$_PROJECT_NAME/$BUILD_ID/$_PROJECT_NAME.zip"
                ]
            }
        ],
        substitutions: {
            _REPO_URL: repoUrl,
            _PROJECT_NAME: projectName,
            _BUCKET_NAME: bucket
        }
    }
})
    .catch(e => {
        throw Error("Failed to start build: " + e);
    })

Once the build is started, we can monitor it like so (with a few somewhat-neat wrappers to properly manage the timer logic):

    .then(response => {
        let timer = {
            handle: null
        };

        startTimer(() => {
            return builds.get({
                projectId: projectId,
                id: response.data.metadata.build.id
            })
                .catch(e => {
                    throw e;
                })
                .then(response => {
                    const COMPLETE_STATES = ["SUCCESS", "DONE", "FAILURE", "CANCELLED"];
                    if (COMPLETE_STATES.includes(response.data.status)) {
                        return false;
                    }
                    return true;
                })
        }, timer, 5000);
    });

// small utility to run a timer task without multiple concurrent requests

const startTimer = (func, timer, period) => {
    let caller = () => {
        func().then(repeat => {
            if (repeat) {
                timer.handle = setTimeout(caller, period);
            }
        });
    };
    timer.handle = setTimeout(caller, period);
};

Once the build reaches a terminal state, you are done!

If you want fine-grained details, just dig into response.data within the timer callback blocks.
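
For instance, a per-step progress log could look like this (field names as per the CloudBuild Build resource; treat it as an assumed-shape sketch):

// inside the timer callback, after extracting response.data:
(response.data.steps || []).forEach((step, i) =>
    console.log(`step ${i} [${step.name}]: ${step.status || "PENDING"}`));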

Happy CloudBuilding!