Friday, February 16, 2018

Sigma: The New Kid on the Serverless Block

Despite its young age (barely 73 years, in comparison to, say, the 200+ of automobiles), digital computing is growing and flourishing rapidly; and so are the associated tools and utilities. Today's "hot" topic or tech is no longer hot tomorrow, "legacy" in a week and "deprecated" in a month.

Application deployment and orchestration is no exception: in just three decades we have gone from legacy monoliths to modular systems, P2P integration, middleware, SOA, microservices, and the latest, functions or FaaS. The deployment paradigm has shifted to comply, with in-house servers and data centers, enterprise networks, VMs, containers, and now, "serverless".

Keeping up with things was easy so far, but the serverless paradigm demands quite a shift in the developer mindset (not to mention the programming paradigm). This, combined with the lack of intuitive tooling, has significantly hindered the adoption of serverless application development, even among cutting-edge developers.

And (you guessed it), that's where _____ comes into play.

Missing something?


A way to glue stuff together.

A way to compose a serverless application care-free. Without having to worry—and to read tons of documentation, watch reels of tutorials, or trial-and-error till your head is on fire—about all the bells and whistles of the underlying framework and related services.

Essentially, a sum-up of all that is serverless.




Sigma Logo

What's in a name?

As the name implies (quoting from the official website):

The Sigma editor is a hybrid between the simplicity of drag-and-drop style development,
and the full and unlimited power of raw code.

The drag-and-drop events generate sample or usage snippets to quickly get started,
and introduce a powerful, uniform and intuitive library with auto-completion,
which allow users to quickly become productive in developing Serverless applications
that integrate with a myriad of AWS based services.

Making of...

Before Sigma, a bit of background on its origins.

As a first-time user of AWS Lambda, one of our team members raised an interesting question: if serverless is so cool, why is it so complicated to get an application up and running in Lambda?

(His quest, converted into a presentation, is [right here].)

So we started trying out the same thing ourselves. Guess what: we ran into the very same questions.

So we set out to devise something that could bypass all those tedious steps: something where we could just write our code, save it, and deploy it as a working serverless application, without having to wander from dashboard to dashboard, or sift through heaps of documentation or reels of video tutorials.

And we ended up with Sigma!

Yet another IDE?

At first glance, Sigma looks like another cloud IDE that additionally supports deploying an application directly into a serverless provider environment (AWS so far).

However, there are a few not-to-be-missed distinctions:

  • Unlike many of the existing cloud IDEs, Sigma itself is truly serverless; it runs completely inside your browser, using backend services only for user authentication and analytics, and requires no dedicated server/VM/container to be running in the background. Just fire up your browser, log in, and start coding your dream away.
  • Sigma directly interacts with and configures the serverless platform on your behalf, using the credentials that you provide, saving hours of configuration and troubleshooting time. No more back-and-forth between overcomplicated dashboards and dizzying configurations.
  • Sigma encapsulates the complexities of the serverless platform, such as service entities, access policies, invocation trigger configurations and associated permissions, and even some API invocation syntaxes, saving you the trouble of having to delve into piles of documentation.
  • All of this comes in a fairly simple, intuitive environment, with easy, drag-and-drop composition combined with the full power of written code. Drag and drop a DynamoDB table into the UI, pick your operation and just write your logic, and Sigma will do the magic of automatically creating, configuring and managing the DynamoDB table on your AWS account.

Now, I won't say that's "just another IDE"; what say you?

A serverless platform?

Based on the extent of its capabilities, you may also be inclined to classify Sigma as a serverless platform. This is true to a great extent; after all, Sigma facilitates all of it—composing, building and deploying the application! However...

Hybrid! It's a hybrid!

Yup, Sigma is a hybrid.

A fusion of a cloud IDE (which in itself is a hybrid of graphical composition and granular coding) and a serverless development framework (which automatically deploys and manages the resources, permissions, wiring and other bells and whistles of your serverless application).

One of a kind.

To be precise, the first of its kind.

A new beginning

With Sigma, we hope to redefine serverless development.

Yup. Seriously.

From here onwards, developers shall simply focus on what they need to achieve: workflow, business logic, algorithm, whatever.

Not about all the gears and crankshafts of the platform on which they would deploy the whole thing.

Not about the syntax of, or permissions required by, platform-specific API or service calls.

Not about the deployment, configurations and lifecycle of all the tables, buckets, streams, schedulers, REST endpoints, queues and so forth, that they want to use within their application.

Because Sigma will take care of it all.

And we believe our initiative would

  • make it easy for newcomers to get started with serverless development,
  • improve the productivity of devs who are already familiar with—or even experts in—serverless development,
  • speed up the adoption of serverless development among the not-yet-serverless community,
  • allow y'all to "think serverless", and
  • make serverless way more fun!

We have proof!

While developing Sigma, we also wanted to verify that we were doing the right thing, and doing it right. So we bestowed upon two of our fellows the responsibility of developing two showcase applications using Sigma: a serverless accounting webapp, and a location-based mobile dating app.

To our great joy, both experiments were successful!

The accounting app SLAppBook is now live for public access. By default it runs against one of our test serverless backends, but you can always deploy the serverless backend project on your own AWS account via Sigma and point the frontend to your brand-new backend, after which it is all yours to use!

The dating app HotSpaces is currently undergoing some rad improvements (see, now it's the frontend that takes time to develop!) and will be out pretty soon!

So, once again, we have proof that Sigma really rocks it!

Far from perfection, but getting there; fast!

Needless to say, Sigma is pretty much an infant. It needs quite a lot more—more built-in services, better code suggestions, smarter resource handling, faster builds and deployments, support for other cloud platforms, you name it—before it can be considered "mature".

But we are getting there. And we will get there. Fast.

We will publish our roadmap pretty soon, which will include (among other things) adding more AWS services, supporting integration with external APIs/services and, most importantly, expanding to other cloud providers like GCP and MS Azure.

That's where we need your help.

We need you!

Needless to say, you are most welcome to try out Sigma. Sign up here, if you haven't already, and start playing around with our samples (once you are signed in to Sigma, you can directly open them via the projects page). Or, if you feel adventurous, start off with a clean slate, and start building your own serverless application.

We are continually smoothing out the ride, but you may hit a few bumps here and there. Possibly even hard ones; sometimes even impassable ones. Or maybe none, if you are really lucky.

Either way, we are eagerly waiting for your feedback. Just write to us about anything that comes to mind: a missing functionality, a popular AWS service that you really missed in Sigma (there are hundreds, no doubt!), the next cloud platform you would like Sigma to support; a failed build, a faulty deployment, a nasty error that hogged your browser; or even the slightest of improvements that you would like to see, like a misaligned button, a hard-to-scroll pop-up or a badly-named text label.

You can use our official feedback form or the "Report an Issue" option on the IDE Help menu, post your feedback in our GitHub issue tracker, or send us a direct email.

And if you would like to join hands with us in our forward march towards a "think serverless" future, drop us an email right away.

Welcome to Sigma!

That's it; time to start your journey with Sigma!

(Originally authored on Medium.)

Inside a Lambda Runtime: A Peek into the Serverless Lair

Ever wondered what it is like inside a lambda? Stop wondering. Let's find out.

Ever since they surfaced in 2014, AWS's lambda functions have made themselves a steaming-hot topic, opening up a whole new chapter in serverless computing. The stateless, zero-maintenance, pay-per-execution goodies are changing—if not uprooting—the very roots of the cloud computing paradigm. While other players like Google and MS Azure are entering the game, AWS is the clear winner so far.

Okay, preaching aside, what does it really look like inside a lambda function?

As per AWS folks, lambdas are driven by container technology; to be precise, AWS EC2 Container Service (ECS). Hence, at this point, a lambda is merely a Docker container with limited access from outside. However, the function code that we run inside the container has almost unlimited access to it—except root privileges—including the filesystem, built-in and installed commands and CLI tools, system metadata and stats, logs, and more. Not very useful for a regular lambda author, but it could be if you intend to go knee-deep in OS-level stuff.

Obviously, the easiest way to explore all these OS-level offerings would be to have CLI (shell) access to the lambda environment. Unfortunately this is not possible at the moment; nevertheless, combining the insanely simple syntax of the NodeJS runtime with the fact that lambdas have a keep-alive time of a few minutes, we can easily write a ten-liner lambda that emulates a shell. Although a real "session" cannot be established in this manner (for example, you cannot run top for a real-time updating view), you can repeatedly run a series of commands as if you were interacting with a user console.

let {exec} = require('child_process');

exports.handle = (event, context, callback) => {
  exec(event.cmd, (err, stdout, stderr) => {
    if (err) console.log(stderr);
    console.log(stdout);
    callback(undefined, {statusCode: 200});
  });
};

Lucky for us, since the code is a mere ten-liner with zero external dependencies, we can deploy the whole lambda—including code, configurations and execution role—via a single CloudFormation template:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  # logical resource IDs reconstructed; the GetAtt below expects the IAM role to be named "role"
  fn:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: shell
      Handler: index.handle
      Runtime: nodejs6.10
      Code:
        ZipFile: >
          let {exec} = require('child_process');

          exports.handle = (event, context, callback) => {
            exec(event.cmd, (err, stdout, stderr) => {
              if (err) console.log(stderr);
              console.log(stdout);
              callback(undefined, {statusCode: 200});
            });
          };
      Timeout: 60
      Role:
        Fn::GetAtt:
          - role
          - Arn
  role:
    Type: AWS::IAM::Role
    Properties:
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Action: sts:AssumeRole
            Effect: Allow
            Principal:
              Service: lambda.amazonaws.com

Deploying the whole thing is as easy as:

aws cloudformation deploy --stack-name shell --template-file /path/to/template.yaml --capabilities CAPABILITY_IAM

or by selecting and uploading the template on the CloudFormation dashboard, in case you don't have the AWS CLI to do it the (above) nerdy way.

Once deployed, it's simply a matter of invoking the lambda with a payload containing the desired shell command:

{"cmd":"the command to be executed"}

If you have the AWS CLI, the whole thing becomes way sexier when invoked via the following shell snippet:

echo -n "> "
read cmd
while [ "$cmd" != "exit" ]; do
  aws lambda invoke --function-name shell --payload "{\"cmd\":\"$cmd\"}" --log-type Tail /tmp/shell.log --query LogResult --output text | base64 -d
  echo -n "> "
  read cmd
done

With this script in place, all you have to do is invoke it; you will be given a fake "shell" where you can type your long-awaited command, and the lambda will execute it and return the output to your console right away, dropping you back at the "shell" prompt:

> free

START RequestId: c143847d-12b8-11e8-bae7-1d25ba5302bd Version: $LATEST
2018-02-16T01:28:56.051Z	c143847d-12b8-11e8-bae7-1d25ba5302bd	{ cmd: 'free' }
2018-02-16T01:28:56.057Z	c143847d-12b8-11e8-bae7-1d25ba5302bd	             total       used       free     shared    buffers     cached
Mem:       3855608     554604    3301004        200      44864     263008
-/+ buffers/cache:     246732    3608876
Swap:            0          0          0

END RequestId: c143847d-12b8-11e8-bae7-1d25ba5302bd
REPORT RequestId: c143847d-12b8-11e8-bae7-1d25ba5302bd	Duration: 6.91 ms	Billed Duration: 100 ms 	Memory Size: 128 MB	Max Memory Used: 82 MB


With this contraption you could learn quite a bit about the habitat and lifestyle of your lambda function. I, for starters, came to know that the container runtime environment comprises Amazon Linux instances, with around 4GB of (possibly shared) memory and several (unusable) disk mounts of considerable size (in addition to the "recommended-for-use" 500MB mount on /tmp):

> df

START RequestId: bb0034fa-12ba-11e8-8390-cb81e1cfae92 Version: $LATEST
2018-02-16T01:43:04.559Z	bb0034fa-12ba-11e8-8390-cb81e1cfae92	{ cmd: 'df' }
2018-02-16T01:43:04.778Z	bb0034fa-12ba-11e8-8390-cb81e1cfae92	Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1      30830568 3228824  27501496  11% /
/dev/loop8        538424     440    526148   1% /tmp
/dev/loop9           128     128         0 100% /var/task

END RequestId: bb0034fa-12ba-11e8-8390-cb81e1cfae92
REPORT RequestId: bb0034fa-12ba-11e8-8390-cb81e1cfae92	Duration: 235.44 ms	Billed Duration: 300 ms 	Memory Size: 128 MB	Max Memory Used: 22 MB

> cat /etc/*-release

START RequestId: 6112efb9-12bd-11e8-9d14-d5c0177bc74f Version: $LATEST
2018-02-16T02:02:02.190Z	6112efb9-12bd-11e8-9d14-d5c0177bc74f	{ cmd: 'cat /etc/*-release' }
2018-02-16T02:02:02.400Z	6112efb9-12bd-11e8-9d14-d5c0177bc74f	NAME="Amazon Linux AMI"
ID_LIKE="rhel fedora"
PRETTY_NAME="Amazon Linux AMI 2017.03"
Amazon Linux AMI release 2017.03

END RequestId: 6112efb9-12bd-11e8-9d14-d5c0177bc74f
REPORT RequestId: 6112efb9-12bd-11e8-9d14-d5c0177bc74f	Duration: 209.82 ms	Billed Duration: 300 ms 	Memory Size: 128 MB	Max Memory Used: 22 MB


True, the output format (which is mostly raw CloudWatch Logs content) could be significantly improved, in addition to dozens of other possible enhancements. So let's discuss, in the comments!

Monday, December 11, 2017

Fun with Mendelson AS2: Automating your AS2 Workflows

Mendelson AS2 is one of the most widely used AS2 clients, and is also the unofficial AS2 testing tool that we use here at AdroitLogic (besides OpenAS2 and others).

Mendelson AS2

While Mendelson does offer quite a lucrative handful of features, we needed more flexibility in order to integrate it into our testing cycles—especially when it comes to programmatic test automation of our AS2Gateway.


A spark of hope

If you have a curious eye, you might already have glimpsed the following on the log window of the Mendelson UI, right after it is fired up:

[8:30:42 AM] Client connected to localhost/
[8:30:44 AM] Logged in as user "admin"

So there's probably a server-client distinction among Mendelson's numerous components; a server that handles AS2 communication, and a client that authenticates to it and provides the necessary instructions.

The fact is confirmed by the docs.

What if...

So what if we could manipulate the client component of Mendelson AS2, and use it to programmatically perform AS2 operations: like sending and checking received messages under different, programmatically configured partner and local station configurations?

Guess what? That's totally possible.

Mendelson comes bundled with a wide range of Java clients, in addition to the GUI client that you see every day. Different ones are available for different tasks, such as configuration, general commands, file transfers, etc. It's just a matter of picking and choosing the matching set of clients and request/response pairs, and wiring them together to compose the flow you want.

Which could turn out to be harder than you think, due to the lack of decent client documentation (at least for the stuff I searched for).

Digging for the gold

Fortunately the source is available online, so you could just download and extract it, plug it into an IDE like IntelliJ or Eclipse, and start hunting for classes with suspicious names, e.g. those having "client", "request" or "message" in their class or package names. If your IDE supports class decompilation, you could also simply add the main AS2 JAR (<Mendelson installation root>/as2.jar) to your project's build path (although I cannot guarantee the legality of such a move!)

Well, my understanding may not be perfect, but this is what my findings revealed about tapping into Mendelson's AS2 client ecosystem:

  1. You start by creating a de.mendelson.util.clientserver.BaseClient derivative of the required type, providing either a host-port-user-password combination for a server (which we already have, when running the UI; usually configurable at <Mendelson installation root>/passwd), or another pre-initialized BaseClient instance.
  2. You compose a request entity, picking one out of the wide range of request-response classes deriving from de.mendelson.util.clientserver.messages.ClientServerMessage (yup, I too wished the base class were <something>Request; looks a bit clumsy, but gotta live with it—at least the actual concrete class name ends with "Request"!).
  3. Now you submit the request entity to one of the sender methods of your client (such as sendSync()), and get hold of the response, another ClientServerMessage instance (with a name ending with, you guessed it, "Response").
  4. You now consult the response entity to see whether the operation succeeded (e.g. response.getException() == null) and to retrieve what you were looking for, in case it was a query.

While it sounds simple, some operations, such as sending messages and browsing through old messages, require a bit of insight into how the gears interlock.

Your first move

Let's start by creating a client for sending our commands to the server:

 "NoOpClientSessionHandlerCallback" is a bare-bones implementation of
 you could also use one of the existing implementations, like "AnonymousTextClient"

BaseClient client = new BaseClient(new NoOpClientSessionHandlerCallback(logger));
if (!client.connect(new InetSocketAddress(host, port), 1000) ||
        client.login("admin", "admin".toCharArray(), AS2ServerVersion.getFullProductName())
                .getState() != LoginState.STATE_AUTHENTICATION_SUCCESS) {
    throw new IllegalStateException("Login failed");
// done!

My partners!

For most of the operations, you need to possess, in advance, Partner entities representing the list of configured partners (and local stations; by the way, I wish it were possible to treat local stations as separate entities, for the sake of distinguishing their role, similar to how AS2Gateway does it):

PartnerListRequest listReq = new PartnerListRequest(PartnerListRequest.LIST_ALL);
// you can optionally receive a filtered result, based on the partner ID

// cast() is my tiny utility method for casting the response to the appropriate type (2nd argument)
List<Partner> partners = cast(client.sendSync(listReq), PartnerListResponse.class).getList();

// now you can filter the "partners" list to retrieve the interested partner and local station;
// let's call them "partnerEntity" and "stationEntity"

Sending stuff out

For a send, you first have to upload each outbound attachment individually via a de.mendelson.util.clientserver.clients.datatransfer.TransferClient, accumulating the returned "hashes", and finally submit a de.mendelson.comm.as2.client.manualsend.ManualSendRequest containing the hashes along with the recipient and other information. (If you hadn't noticed, this client-based approach inherently allows you to send multiple attachments in a single message, which is not facilitated via the GUI :) )

// "files" is a String array containing paths of files for upload

// create a new file transfer client, wrapping our existing "client"
TransferClient tc = new TransferClient(client);

ManualSendRequest sendReq = new ManualSendRequest();
// (also set the sender and receiver on "sendReq", using the "stationEntity" and
// "partnerEntity" retrieved earlier)

List<String> hashes = new ArrayList<>();
List<String> fileNames = sendReq.getFilenames();

// upload each file separately
for (String file : files) {
    try (InputStream in = new FileInputStream(file)) {
        // upload as chunks, set returned hash as payload identifier
        // (the chunked-upload call on "tc" returns the hash; add it to "hashes",
        // and add the file's name to "fileNames")
    }
}

// submit actual message for sending
Throwable e = client.sendSync(sendReq).getException();
if (e != null) {
    throw e;
}
// done!

Delving into the history

Message history retrieval is fairly granular, with separate requests for list, detail and attachment queries. A de.mendelson.comm.as2.message.clientserver.MessageOverviewRequest gives you the list of messages matching some filter criteria, whose message IDs can then be used in de.mendelson.comm.as2.message.clientserver.MessageDetailRequests in order to retrieve further AS2-level details of the message.

To retrieve a list of messages:

// retrieve messages received from "sender" on local station "receiver"
MessageOverviewFilter filter = new MessageOverviewFilter();
// (set the sender/receiver and any other criteria on "filter")
List<AS2MessageInfo> msgs = cast(client.sendSync(new MessageOverviewRequest(filter)),
        MessageOverviewResponse.class).getList();

To retrieve an individual message, just send a MessageOverviewRequest with the message ID instead of a filter:

// although it returns a list, it should theoretically contain a single message matching "as2MsgId"
AS2MessageInfo msg = cast(client.sendSync(new MessageOverviewRequest(as2MsgId)),
        MessageOverviewResponse.class).getList().get(0);

If you want the actual content (attachments) delivered in a message, just send a de.mendelson.comm.as2.message.clientserver.MessagePayloadRequest with the message ID; but ensure that you invoke loadDataFromPayloadFile() on each retrieved payload entity, before you attempt to read its content via getData().

for (AS2Payload payload : cast(client.sendSync(new MessagePayloadRequest(msg.getMessageId())),
        MessagePayloadResponse.class).getList()) {

    // load the payload content from its backing file before reading it
    payload.loadDataFromPayloadFile();

    // WARNING: this loads the payload into memory!
    byte[] content = payload.getData();
}

In closing

I hope the above would help you get started in your quest for Nirvana with Mendelson AS2; cheers! And don't forget to check out our new and improved AS2Gateway, which is fully compatible with Mendelson AS2 (or any other AS2 broker, for that matter)!

3 (4) things you should know about B2B security—and about AS2, the one-stop solution

In today's digitally transformed world, security is a key concern when it comes to B2B communication—be it a simple 210-997 exchange or a sophisticated SCM document chain. Being aware of, and up to date on, B2B security will always keep you ahead of the competition, and increase your chances of landing that "largest retailer" or "leading partner" role that you and your business have always been dreaming about.

1. What it is

B2B security naturally derives from the basics of communication security:


Confidentiality

What goes on between you and your business partner (here onwards, let's simply call him/her your "partner") stays between you two; in other words, nobody else (a "third party") gets to see or read whatever it is that you two are communicating. (Imagine a sealed package that can be opened only by the intended recipient.)

Eyes Only


Authenticity

When your partner receives something that seemingly came from you, he/she can verify that it indeed came from you. (Picture a signature or an official seal on a business letter.)



Integrity

What your partner receives is exactly the same thing that you sent out from your end, and vice versa; i.e. nobody else can modify or tamper with the communicated content. (Just imagine if somebody were able to change the inventory in the invoice that you just sent out, before it reaches your partner!)



Non-repudiation

Once your partner receives what you sent him/her, he/she cannot refute the fact that he/she received it. (Think about the receipt that you never forget to collect once you have made a payment.)


2. Why you (and your partner) should care

This part should be fairly obvious from the examples in the previous section: without these measures, all sorts of weird and unpleasant things could happen, eventually leading to the ruination of your business:


  • Your rival could send your supplier a fake purchase order on your behalf—perhaps even a complete order document chain, resulting in a (surprise!) delivery at your doorstep which you never actually ordered.
  • Your rivals could intercept and view every single purchase order—or any other document, for that matter—that you would be sending to your partner.
  • Your rival could impersonate your partner, and start sending fake responses to your documents that are actually intended for your partner's eyes only. You would have no way of knowing that the other party talking is not your actual partner.
  • Even more interestingly, your rival could intercept the document you (or your partner) sent out, take it off the line, make small modifications—nothing much, maybe adding or removing a few zeroes here and there!—and put it back on the line. (I hope it's needless to explain how harmful such a "small" modification could be!)

Now that would explain why your partner might have mentioned in your negotiations, "Ah, and all documents should be exchanged securely over AS2/HTTPS" (or maybe "AS3", "OFTP" or some other weird acronym, for that matter).

3. How to achieve it

Lost your peace of mind? No worries, techniques are already in place to guard you against all of those nasty rivals out there.

WARNING: Things are about to get a little G(r)eeky. Reader discretion is advised.

Encryption for confidentiality

This is all about transforming the content that you send out, into a form that can be interpreted (read) only by your partner. (In other words, no third party would be able to make sense of the document while it is in transit.)

There are actually 2 levels of achieving this, in the "AS2/HTTPS" example above:

  • HTTPS provides transport-layer encryption, meaning whatever that is being sent on the wire (or wireless radio waves, if you insist) would be encrypted, right from the time it leaves your computer right up to the time it arrives at your partner's computer.
  • AS2 provides application-layer encryption, meaning that the application (AS2 client) triggering the sending of your document would already have encrypted it (so that it is fairly secured, even in the absence of HTTPS).

Now, if that was all Greek to you, it's enough to keep in mind that protocols like AS2 include both, providing security equivalent to a dual-lock safe.
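To make the application-layer idea a bit more concrete, here is a minimal, illustrative sketch in Java: the document is encrypted before it ever leaves the application, so whatever travels on the wire is gibberish to a third party. (This is a simplification: real AS2 uses S/MIME with the partner's public certificate, not a pre-shared AES key, and would never use ECB mode; the class name and sample document are mine.)

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class AppLayerEncryptionDemo {
    public static void main(String[] args) throws Exception {
        // the document to protect, e.g. a purchase order
        byte[] document = "PO-2017-1234: 100 units".getBytes(StandardCharsets.UTF_8);

        // a symmetric key shared with the partner (real AS2 instead wraps a one-time
        // key with the partner's public certificate, via S/MIME)
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();

        // encrypt before the document ever leaves the application
        Cipher enc = Cipher.getInstance("AES/ECB/PKCS5Padding"); // ECB only for brevity!
        enc.init(Cipher.ENCRYPT_MODE, key);
        byte[] wire = enc.doFinal(document);

        // what a third party sees on the wire is not the document
        System.out.println("ciphertext differs from plaintext: " + !Arrays.equals(wire, document));

        // only the key holder (the partner) can recover the original
        Cipher dec = Cipher.getInstance("AES/ECB/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, key);
        System.out.println("recovered: " + new String(dec.doFinal(wire), StandardCharsets.UTF_8));
    }
}
```

Transport-layer encryption (HTTPS) then wraps this already-encrypted payload once more while it is in transit.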

Signing, for authenticity and integrity

While a real-world signature deals only with authenticity, a digital signature can ensure integrity as well. This is due to the way a digital signature is calculated; in plain English, a concise form (a hash) of the message is first created (the technical term being digest calculation), and then scrambled (encrypted, if I may) using a special token (a private key, technically speaking) that is unique to the sending party. If the receiver can unscramble the scrambled chunk using another, different token (technically, the public key corresponding to the above private key), it verifies that the message had indeed been scrambled using the exact same partner-specific token (private key) mentioned earlier—and hence that it actually originated from that specific partner and nobody else.

While the unscrambled chunk of data (effectively the hash that was calculated earlier by your partner, before being encrypted) can also be used to verify the integrity (intactness) of the message (e.g. you could calculate the same hash against the received message on your end, and compare it with the hash that was sent to you by your partner), usually the MIC (message integrity code) technique is employed to explicitly enforce the integrity aspect. We'll come to that later.

By the way, if you are wondering how on earth something scrambled using one token (private key) can be unscrambled using a different token (public key), you might want to read up on asymmetric key encryption and public key infrastructure (PKI); only if you dare, of course (I'll pass).
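The hash-then-scramble flow described above can be sketched with Java's standard crypto API, which bundles the digest calculation and private-key scrambling into a single "SHA256withRSA" operation. (An illustrative sketch only; the class name and invoice text are made up, and real AS2 signing happens inside the S/MIME layer.)

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SigningDemo {
    public static void main(String[] args) throws Exception {
        // the sender's key pair: the private key is the "special token" unique to the
        // sender; the public key is the counterpart shared with partners
        KeyPair pair = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        byte[] message = "INVOICE-42: total 1000.00".getBytes(StandardCharsets.UTF_8);

        // sender side: hash the message, then scramble the hash with the private key
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        // receiver side: unscramble with the public key, compare to a fresh hash
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(message);
        System.out.println("authentic and intact: " + verifier.verify(signature));

        // a tampered message (a few zeroes added!) fails verification
        verifier.initVerify(pair.getPublic());
        verifier.update("INVOICE-42: total 100000.00".getBytes(StandardCharsets.UTF_8));
        System.out.println("tampered passes: " + verifier.verify(signature));
    }
}
```

Note how the same check covers both authenticity (only the private-key holder could have produced the signature) and integrity (the tampered message is rejected).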

Encryption and Signing

Receipt for non-repudiation

Just like with a payment receipt, your partner, having successfully received the message you just sent out, is supposed to send back a "receipt" (technically, a disposition notification) saying that he/she was able to successfully receive and interpret the message. By convention, a communication is considered to have been successfully completed only after the receipt has been sent back to the sender (just like a real-world transaction). Once the receipt is sent, your partner has confirmed that he/she received whatever it is that you sent, and can never deny it.

MIC for integrity

This looks like a "repeat back to me what I just said" scheme, although it's usually more concise. Here, your partner is supposed to compute a fixed-length data chunk representing the message that he/she just received from you (a "hash" (again!), technically speaking; the chunk is usually shorter than the actual message) and send it back to you. You can then do the same computation on your end and compare your "chunk" with his/hers. Since the two chunks were calculated independently, if they do match, you can be pretty sure that the message (document) content is the same at both ends—and hence that your partner received exactly what you sent out.
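The compare-two-independently-computed-hashes idea boils down to very little code; a toy sketch in Java (the class name and sample messages are mine, and real AS2 computes its MIC over the MIME body with a negotiated digest algorithm rather than over a bare string):

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class MicDemo {
    // compute a fixed-length "chunk" (hash) representing the message
    static String mic(String message) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(message.getBytes(StandardCharsets.UTF_8));
        return new BigInteger(1, digest).toString(16);
    }

    public static void main(String[] args) throws Exception {
        String sent = "PO-2017-1234: 100 units";

        // the partner computes the MIC over what arrived, and sends it back
        String micFromPartner = mic(sent);

        // the sender recomputes independently and compares
        System.out.println("intact: " + mic(sent).equals(micFromPartner));

        // had the message been modified in transit, the two chunks would differ
        System.out.println("tampered matches: " + mic("PO-2017-1234: 1000 units").equals(micFromPartner));
    }
}
```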

All of the above are readily available in the AS2 (Applicability Statement 2) protocol for secure document exchange. AS2 is the most widely used B2B protocol in the modern e-commerce space, which is probably why your partner explicitly requested it. When it comes to all the nitty gritty details that we discussed above, it's all there in the official AS2 RFC, in case you are curious. (But I won't click that link, if I were you!)

However, the caveat is that AS2 requires you to have a server machine running 24×7, exposed to the public internet, so that your partners can send messages to you whenever they decide to. That in itself could be a substantial problem, especially for small- and medium-scale enterprises, given the painful set-up, configuration and testing steps involved, the operational costs, the maintenance overhead, and all sorts of security concerns (remember, you are exposed to the WWW (wide, wild world), and there are hackers everywhere!).

Receipt and MIC

4. (Bonus!) How the "gateway" achieves it all—and more—for you

If you are overwhelmed by all this alien stuff at this point (or wisely skipped all the way down to here), that is perfectly understandable; all those technicalities are too much for regular, business-oriented minds (and rightly so). That is precisely why we have managed AS2 clients and services—which make AS2-based trading as simple as the click of a button. Better still, there are now online AS2 solutions that free you completely from having to maintain fancy servers or other forms of hard-to-manage AS2 infrastructure.

On the cloud, light as a feather

One such solution is AS2Gateway, a cloud software-as-a-service (SaaS) that brings all those AS2 goodies—and more—right into your favourite web browser. Yup, nothing to download, install or run—just log in to your AS2G account, and you have your own dedicated AS2 space, ready for action.

AS2Gateway logo

Hit-and-run, or click-and-send

Time to forget about all those pesky security entrails that you read earlier (or wisely skipped), because AS2G does it all for you. Just log in, click the New Message icon, select your partner, pick your documents and hit Send. Under the hood, AS2G does all the heavy lifting—composing a message with the uploaded files, encrypting, compressing and signing it as your partner has requested, sending it to your partner, and even accepting the receipt (MDN).

24×7, all ears

When it comes to receiving documents from your partner, AS2G does an equally good job of making things super simple: it automatically accepts the incoming message, saves it in your inbox, and even sends back the receipt (MDN) indicating whether the whole process was successful or not. Not to mention all the complicated stuff like message decryption, decompression, signature verification, and so on.


Space for all, on the house!

Everything is saved under your account, securely and reliably, under stringent security and privacy standards. Just log in and go to your inbox, and everything you have sent and received will be right there, neatly arranged. You can view any message in detail, download its attached documents, and archive or delete old stuff to keep things tidy.

Jekyll and Hyde? No problem.

AS2Gateway allows you to manage multiple partners as well as multiple "trading stations" (meaning that you could use different identities for trading with different partners). Each partner or station can be configured totally independently of the others, giving you enormous flexibility in dealing with different business partners with varying corporate security policies and demands.

Ahoy, mateys! There's more!

On top of all this, AS2Gateway offers many other nifty features, including fine-grained statistics for your partners and stations, email notifications for new messages, and a free SFTP service where you can send out your documents by uploading them into your own private SFTP space and also retrieve documents from incoming messages via the same space—quite handy for integrating with your own internal systems. Coupled with a fully-fledged SFTP integrator such as the UltraESB, the end-to-end solution could soon turn out to be a game changer for your business in terms of efficiency, rapid connectivity and hands-off automation.

AS2Gateway dashboard

Looking for something in-house, on-premise?

Having read all that hands-off, cloud-hosted, zero-maintenance stuff, in case you are actually looking for an on-premise solution—one you can host in your own server, and run on your own terms—there is AS2Gateway's "big brother"—AS2Station—cut out just for the job.

AS2Station logo

So, the next time your partner bothers you with "secure trading", "secure B2B exchange" or "AS2", you know where to look!

Wednesday, November 29, 2017

Integration simplified: Professor Calculus' assignment uploader in ten minutes!

Professor Calculus is no longer at Marlinspike Hall. (So if you happen to go there and ring for him, all you would be getting would be some Blistering Barnacles.) He's now lecturing full-time at the University of Syldavia. (Of course, he hasn't heard a single complaint from any student or staff member, since he simply doesn't hear them.)

Professor Calculus at the University of Syldavia

An age-old tradition of the university has been the submission of calculus assignments via FTP. However, things are really about to take a turn, with the recent changes in administrative powers; every lecturer is now required to set up a website where a student can upload her assignment anytime, anywhere; even from her tab or smartphone.

Web Assignment Upload

Unfortunately, being a rather conservative person, Prof. Calc has only a very vague idea of what needs to be done (from the very few words that he hardly heard during the faculty meeting).

So, dear reader, it is up to you (and me) to implement a quick solution for Prof. Calc.—before he gets heavily scolded (although inaudibly) during the next staff meeting!

The clock is ticking!

So, before we begin, let's see what challenge lies before us:

  • Present the students with a simple website having a file upload form
  • Transfer the uploaded file (with the original filename) to the site backend
  • Upload the received file into the main FTP server
  • Return a response to the frontend indicating whether the upload was successful
  • Display the received response to the student

As for the implementation, we have a few choices:

  • If we use a traditional stack (such as LAMP), we would have to write and maintain code for both the backend and the frontend, probably in different languages.
  • We can reduce the overhead by using a unified language like NodeJS, so that the JS-driven frontend will be fairly compatible with the backend (with similar language semantics etc.); still, we'll have to bear the burden of coding the backend (which would be fairly complex relative to the frontend, as it would have to deal with FTP integration). Plus, we'll need a way to reliably host the NodeJS backend, of course.
  • Cloud services like Zapier may not be an option because we need the app to be hosted in-house (in-university to be exact), connecting to a local FTP server.

Fortunately, the new Project-X framework has just the right balance for all our requirements:

...and, most impressive for our case... a collection of connectors and processing elements that allows us to build our solution without writing a single line of code!

  • An HTTP ingress connector for accepting HTTP traffic (for the web UI and file uploads)
  • A Web Server processing element that can serve the frontend (static portion) of the website
  • An FTP egress connector that can take all FTP upload matters off your hands

OK, now that we have the right tool for the job, let's start with the flashy parts—the frontend, that is.

The frontend stuff can be done easily with HTML and JS. To keep things simple (and save time), we shall build a minimal site (without CSS styling, modals and other "complex" goodies).

As for the upload, if we use a regular <form> with an <input type="file">, it would send a multipart upload request to the backend (containing the file name and payload as fields). Multipart uploads are a bit clumsy to handle on the server side, so here we will resort to a custom approach where we send the filename in an HTTP request header named Upload-Filename and the raw file content in the request body.
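With that approach, an upload of, say, johndoe.docx would go over the wire as an ordinary POST, roughly like this (a hypothetical request sketch; the file name travels in the Upload-Filename header, and the request body is just the raw file content):

```
POST /calculus/submissions/upload HTTP/1.1
Host: localhost:8280
Upload-Filename: johndoe.docx
Content-Type: application/octet-stream

...raw file bytes...
```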

What follows is a very simple frontend that achieves just what we need (don't worry about the horrific look, we could polish it up later on):

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8"/>
    <title>FilePit Uploader!</title>
</head>
<body>
<form method="post" onsubmit="return runUpload()">
    <label for="file">Select the file to upload:</label>
    <input type="file" id="file" name="file"/>
    <input type="submit" value="Upload"/>
</form>
<script type="text/javascript">

    function runUpload() {
        var file = document.forms[0].file.files[0];
        if (!file) {
            alert("Please select a file for uploading :)");
            return false;
        }

        var xhr = new XMLHttpRequest();
        xhr.open("POST", "upload");
        xhr.onload = function () {
            alert(xhr.responseText);
        };
        xhr.onerror = function (e) {
            alert("Failed to upload file: " + e);
        };

        // read the file and ship its raw content, with the name in a header
        var reader = new FileReader();
        reader.onload = function (evt) {
            xhr.setRequestHeader("Content-Type", file.type);
            xhr.setRequestHeader("Upload-Filename",;
            xhr.send(;
        };
        reader.readAsArrayBuffer(file);

        return false;
    }
</script>
</body>
</html>
Now that the frontend is ready, we can download and install UltraStudio and start working on our backend by creating a new project.

One more thing before we begin: when developing the flow, we should better test things using a different FTP server than the actual university server—what if you make a small mistake and all the previously submitted assignments get mixed up, kicking out half the university? You could get hold of a simple FTP server software (e.g. vsftpd for Ubuntu/Debian, FileZilla for Windows, something like this for Mac—unless your Mac is too new), configure it (e.g. in case of vsftpd ensure that you set local_enable=YES and write_enable=YES in /etc/vsftpd.conf—and don't forget to restart the service!), and provide the respective credentials to the coming-up FTP egress connector configuration.
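For reference, the vsftpd bits mentioned above would look something like this in /etc/vsftpd.conf (an excerpt only; your distribution's defaults may differ):

```
# /etc/vsftpd.conf (excerpt)
local_enable=YES    # allow local system users to log in over FTP
write_enable=YES    # allow write/upload commands (STOR etc.)

# then restart the service, e.g.:
#   sudo systemctl restart vsftpd
```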

Now, if you're wondering, "okay, how am I supposed to switch to using the actual university server when actually deploying the end solution?", the answer is right here, in our property configuration docs; you'd externalize the FTP connector properties—by clicking the little toggle button to the right of each field that you fill in—so that you could drop a file (similar to what you would find at src/main/resources of the project) into the final deployment, and things would magically get switched over to the correct FTP server!

Cool, isn't it? (Don't worry, you'll get it later.)

For serving the website, we can get away with a very simple, standard web server flow:

Web Server Flow

Just drag in a NIO HTTP ingress connector and a Web Server processing element, connect them as in the diagram, and configure them as follows:

HTTP ingress connector:

Http port 8280
Service path /calculus/submissions.*

Web Server processing element:

Base Path /calculus/submissions
Base Page index.html

Now, create a calculus directory in the src/main/resources path of the project (via the Project side window), create a submissions directory inside it, and save the HTML code that we wrote above inside that directory by the name index.html (so that it will effectively be at src/main/resources/calculus/submissions/index.html). Henceforth, students will see your simple upload page every time they visit /calculus/submissions/ on the "website" that you would soon be hosting—ironically, without any web hosting server or service!

For the upload part, the flow is slightly more complex:

Web Upload Flow

HTTP Ingress Connector:

Http port 8280
Service path /calculus/submissions/upload

Add Variable processor:

Variable Name filename
Extraction Type HEADER
Value Upload-Filename
Variable Type String

Add New Transport Header processor:

Transport Header Name
Use Variable true (enabled)
Value filename
Header Variable Type String

FTP Egress Connector (make sure to toggle the Externalize Property switch against each property, as described earlier):

Host localhost (or external FTP service host/IP)
Port 21 (or external FTP service port)
Username username of FTP account on the server
Password password of FTP account on the server
File Path absolute path on the FTP server to which the file should be uploaded (e.g. /srv/ftp/uploads)
File Name (leave empty)

String Payload Setter (connected to FTP connector's Response port, i.e. success path):

String Payload File successfully uploaded!

String Payload Setter (connected to FTP connector's On Exception port, i.e. failure path):

String Payload Oops, the upload failed :( With error: @{last.exception}

Response Code Setter (failure path):

Response Code 500
Reason Phrase Internal Server Error

In English, the above flow does the following (scream it out in the Prof's ear, in case he becomes curious):

  • accepts the HTTP file upload request, which includes the file name (Upload-Filename HTTP header) and content (payload)
  • extracts the Upload-Filename HTTP header into a scope variable (temporary stage) for future use
  • assigns the above value back into a different transport header (similar to an HTTP header), which will be used as the name of the file during the FTP upload
  • sends the received message, whose payload is the uploaded file, into an FTP egress connector, configured for the dear old assignment upload FTP server; here we have left the File Name field of the connector empty, in which case the name would be derived from the abovementioned header, as desired
  • if the upload was successful, sets the content of the return message (response) to say so
  • if the upload failed due to some reason, sets the response content to include the error and the response code to 500 (reason Internal Server Error); note that the default response code is 200 (OK) which is why we didn't bother to set it in the success case, and
  • sends back the updated message as the response of the original upload (HTTP) request

Phew, that's it.

Wasn't as bad as writing a few hundred lines of scary code, was it?

Yup, that's the beauty of composable application development, and of course, of UltraStudio and Project-X!

Now you can test your brand new solution right away, by creating a run configuration (say, calculus) and clicking Run → Run 'calculus'!

Run calculus

(Note that, if it's your first time using UltraStudio, you'll have to add your client key to the UltraStudio configuration before you can run the flow.)

Once the run window displays the "started successfully in n seconds" log (within a matter of seconds), simply fire up your browser and visit http://localhost:8280/calculus/submissions/. (Sorry folks, no IE support... Maybe try Edge?)

The (to-be-redesigned) Assignment Upload Page

Oh ho! There's my tiny little upload page!

Just pick a file, and click Upload.

Depending on your stars, you'd either get a "File successfully uploaded!" or "Oops, the upload failed :(" message; hopefully the first :) If not, you may have to switch back to the Run window of the IDE and diagnose what might have gone wrong.

Once you get the successful upload confirmation, just log in to your FTP server, and behold the file that you just uploaded!

That's it!

Now all that is left is to bundle the project into a deployment archive and try it out in the standalone UltraESB-X—which, dear reader, is an exercise left for you :)

And, of course, to shout in our Prof's ear, "IT WORKS, PROFESSOR!!!"

Thursday, November 23, 2017

Connecting the dots in style: Build your own Dropbox Sync in 10 minutes!

Integration, or "connecting the dots", is something that is quite difficult to avoid in the modern era of highly globalized business domains. Fortunately, integration, or "enterprise integration" in more "enterprise-y" terms, is no longer meant to be something that makes your hair stand, thanks to advanced yet user-friendly enterprise integration frameworks such as Project-X.

Today, we shall extend our helping hand to Jane, a nice Public Relations officer of the HappiShoppin supermarket service (never heard the name? yup, neither have I :)) in setting up a portion of her latest customer feedback aggregation mechanism. No worries, though, since I will be helping and guiding you all the way to the end!

The PR Department of the HappiShoppin supermarket service has opened up new channels for receiving customer feedback. In addition to the conventional paper-based feedback drop-ins, they now accept electronic feedback via their website as well as via a public Dropbox folder (in addition to social media, Google Drive, Google Forms, etc.). Jane, who is heading the Dropbox-driven feedback initiative, would like to set up an automated system to sync any newly added Dropbox feedback to her computer so that she can check it offline whenever it is convenient for her, rather than having to keep an eye on the Dropbox folder all the time.

Jane has decided to compose a simple "Dropbox sync" integration flow that would periodically sync new content from the feedback accumulation Dropbox folder, to a local folder on her computer.

  • On HappiShoppin's shared Dropbox account, /Feedback/Inbox is the folder where customers can place feedback documents, and Jane hopes to sync the new arrivals into /home/jane/dropbox-feedback on her computer.
  • Jane has estimated that it is sufficient to sync content once a day, as the company receives only a limited amount of feedback on a given day; however, during the coming Christmas season, the company is expecting a spike in customer purchases, which would probably mean an accompanying increase in feedback submissions as well.
  • For easier tracking and maintenance, she wants the feedback files to be organized into daily subfolders.
  • In order to avoid repeatedly syncing the same feedback file, Jane has to ensure that successfully synced files are removed from the inbox, which she hopes to address by moving them to a different Dropbox folder: /Feedback/Synced.

Design of the Dropbox Sync solution

Now, before we begin, a bit about what Project-X is and what we are about to do with it:

  • Project-X is a messaging engine, which one could also call an enterprise service bus (which is also valid for the scenario we are about to tackle).
  • Project-X ingests events (or messages) from ingress connectors, subjects them to various transformations via processing elements, and emits them to other systems via egress connectors. For a single message, any number of such transformations and emissions can happen, in any order.
  • The message lifecycle described above, is represented as an integration flow. It is somewhat similar to a conveyor belt in a production line, although it can be much more flexible with stuff like cloning, conditional branching, looping and try-catch flows.
  • A set of integration flows makes up an integration project, which is the basic deployment unit when it comes to Project-X runtimes such as UltraESB-X.

So, in our case, we should:

  • create a new integration project
  • create an integration flow inside the project, to represent Jane's scenario
  • add the necessary connectors and processors, and configure and wire them together
  • test the flow to see if what we assembled is actually capable of doing what Jane is expecting
  • build the project into a deployable artifact, ready to be deployed in UltraESB-X

While the above may sound like quite a bit of work, we already have a cool IDE, UltraStudio, that can do most of the work for us. With UltraStudio on your side, all you have to do is drag, drop and connect the required connectors and processing elements, and everything else will be magically done for you. You can even try out your brand-new solution right there, inside the IDE, and trace your events or messages in real time as they pass through your integration flow.

So, before we begin, let's get UltraStudio installed on your system (unless you already have it, of course!).

Once you are ready, create a new Ultra Project using File → New → Project... option on the menu bar and selecting Empty Ultra Project. While creating the project, select the following components on the respective wizard pages (don't worry, in a moment we'll actually get to know what they actually are):

  • Timer Task Connector and Dropbox Connector on the Connectors page
  • JSON Processor and Flow Control processor on the Processors page

If you were impatient and had already created a project, you could always add the above components later on via the menu option Tools → Ultra Studio → Component Registry.

Now we can start by creating a new integration flow dropbox-sync-flow, by opening the Project side pane and right-clicking the src/main/conf directory.

Again, a few tips on using the graphical flow UI (in case you're wondering where on earth it is) before you begin:

  • Inside, an integration flow is an XML (Spring) configuration, which UltraStudio can alternatively represent as a composable diagram for your convenience.
  • You can switch between the XML and graphical views using the two small tabs that would appear at the bottom of an integration flow file while it is opened in the IDE. (These tabs might be missing at certain times, e.g. when the IDE is performing indexing or Maven dependency resolution; at such times, patience is a virtue!)
  • The graphical view contains a side palette with all the components (connectors and processors) that have currently been added to your project (at creation or through the Component Registry). You can browse them by clicking on the collapsible labels on the palette, and add them to the flow by simply dragging-and-dropping them into the canvas.
  • In order to mimic the message flow, components should be connected together using lines drawn between their ports (small dots of different colors that appear around the component's icon). You will get the hang of it once you have had a look at some of the existing integration flows, or at the image of the flow that we will be developing (appearing later in this article).
  • When a component requires configuration parameters, a configuration pane gets automatically opened as soon as you drop an element into the canvas (you can also open it by clicking on the component later on). If the labels or descriptions on the configuration pane are not clear enough, just switch to the Documentation tab and click on the "Read more" URL to visit the complete documentation of the element (on your favourite web browser). Also, make sure that you click the Save button (at the bottom or on the side pane) once you have made any changes.

Start the flow with a Timer Ingress Connector. This is a connector used to trigger a periodic event (similar to a clock tick) for a time-driven message flow. Let's configure it to trigger an event that would set the sync process in motion. For flexibility, we will use a cron expression instead of a simple periodic trigger.

Scheduling tab:

Polling CRON Expression 0/30 * * ? * *

Although Jane wanted to run the check only at 6 PM each day, we have set the polling time to every 30 seconds, for the sake of convenience; otherwise you'll simply have to wait until 6 PM to see if things are working :)
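For reference, assuming the Timer connector follows the common Quartz-style six-field cron format, the expression breaks down as below; Jane's real once-a-day 6 PM schedule would then presumably look like 0 0 18 ? * *:

```
0/30  *  *  ?  *  *
 │    │  │  │  │  └─ day-of-week (any)
 │    │  │  │  └──── month (any)
 │    │  │  └─────── day-of-month (unspecified)
 │    │  └────────── hours (any)
 │    └───────────── minutes (any)
 └────────────────── seconds (every 30, starting at 0)
```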

Next add a Dropbox Egress Connector with a List Entities Connector operation element added to the side port. You can find the connector operations by clicking on the down arrow icon against the Dropbox Connector on the component palette, which will expand a list of available connector operations.

A connector operation is an appendage that you can, well, append to a connector, which will perform some additional processing on the outgoing message in a connector-specific way. For example, for Dropbox we have a main connector, with a bunch of connector operations that represent different API operations that you can perform against your Dropbox account, such as managing files, searching, downloading, etc.

Configure the Dropbox Connector with the shared Dropbox account credentials (App ID and Access Token), and the connector operation with the Path /Feedback/Inbox.

Basic tab:

Client ID
{client ID for your Dropbox app;
visit to create a new app}
Access Token
{access token for your Dropbox account, under the above app;
to obtain an access token for personal use against your own app}

List Entities, Basic tab:

Path /Feedback/Inbox

The above contraption will return a List Folder response, containing all files that are currently inside /Feedback/Inbox, as a wrapped JSON payload:

    "entries": [
            ".tag": "file",
            "name": "johndoe.docx",
            "id": "id:12345_67_890ABCDEFGHIJ",
        }, {
            ".tag": "file",
            "name": "janedoe.txt",
            "id": "id:JIHGF_ED_CBA9876543210",

Ah, now there's the info that we have been looking for; sitting there in boldface. Now we need to somehow pull them out.

Next add a JSON Path Extractor processor to extract out the file paths list from the above JSON response, using a JSON Path pattern: $.entries[*].name. This will store the resulting file name list in a scope variable named files, for further processing. A scope variable is a kind of temporary storage where you can retain simple values for referring later in the flow.

Variable Name files
JSON Path $.entries[*].name

Then add a ForEach Loop to iterate over the previously mentioned scope variable, so that we can process each of the observed files separately. The next processing operations will each take place within a single iteration of the loop.

Basic tab:

Collection Variable Name files
Collection Type COLLECTION
Iterating Variable Name file

Now add a new Dropbox Connector (configured with your app and account credentials as before), along with a Download Entity connector operation, to download the file (file) corresponding to the current iteration from Dropbox into the local directory.

Tip: When you are drawing outgoing connections from ForEach Loop, note that the topmost out port is for the loop termination (exit) path, and not for the next iteration!

Basic tab:

Client ID {client ID for your Dropbox app}
Access Token {access token for your Dropbox account, under the above app}

Advanced tab:

Retry Count 3

Download Entity, Basic tab:

Path /Feedback/Inbox/@{variable.file}
Destination /home/jane/dropbox-feedback/@{current.timestamp.yyyy-MM-dd_HH-mm}

Next add another Dropbox Connector (configured with your app and account credentials) with a Move Entity connector operation, to move the original file to /Feedback/Synced so that we would not process it again. We will set the Retry Count property of the connector to 3, to make a best effort to move the file (in case we face any temporary errors, such as network failures, during the initial move). We will also enable Auto-Rename on the connector operation to avoid any possible issues resulting from files with same name being placed at /Feedback/Inbox at different times (which could cause conflicts during movement).

Move Entity, Basic tab:

Path /Feedback/Inbox/@{variable.file}
Destination /Feedback/Synced/@{variable.file}

Now add a Successful Flow End element to signify that the message flow has completed successfully.

Now we need to connect the processing elements together, to resemble the following final flow diagram:

Dropbox Sync: Sample Flow

Finally, now we are ready to test our brand new Dropbox sync flow!

Before proceeding, ensure that your Dropbox account contains the /Feedback/Inbox and /Feedback/Synced directories.

Create an UltraStudio run configuration by clicking Run → Edit Configurations... on the menu, and selecting UltraESB-X Server under the Add New Configuration (+) button on the top left.

Now, with everything in place, select Run → Run configuration name from the menu to launch your project!

If everything goes fine, after a series of blue-colored logs, you'll see the following line at the end of the Run window:

2017-11-23T11:45:27,554 [] [main] [system-] [XEN45001I013]
INFO XContainer AdroitLogic UltraStudio UltraESB-X server started successfully in 1 seconds and 650 milliseconds

If you get any errors (red) or warnings (yellow) before this, you would have to click Stop (red square) on the Run window to stop the project, and dig into the logs to get a clue as to what might have gone wrong.

Once you have things up and running, open your Dropbox account on your favourite web browser, and drop some files into the /Feedback/Inbox directory.

After a few seconds (depending on the cron expression that you provided above), the files you dropped there will magically appear in a folder /home/jane/dropbox-feedback/. After this, if you check the Dropbox account again, you will notice that the original files have been moved from /Feedback/Inbox to /Feedback/Synced, as we expected.

Now, if you drop some more files into /Feedback/Inbox, they will appear under a different folder (named with the new timestamp) under /home/jane/dropbox-feedback. This would not be a problem for Jane, as in her case the flow will only be triggered once a day, resulting in a single directory for each day.

See? That's all!

Now, all that is left is to call Jane and let her know that her Dropbox integration task is ready to go live!

Sunday, November 19, 2017

Out, you wretched, corrupted cache entry... OUT! (exclusively for the Fox on Fire)

While I'm a Firefox fan, I often run into tiny issues with the browser, many of which cannot be reproduced in clean environments (and hence are somehow related to the dozens of customizations and the horde of add-ons that I take for granted).

I recently nailed one that had been bugging me for well over three years—practically ever since I discovered FF's offline mode.

While the offline mode does an excellent job almost all the time, sometimes it can screw up your cache entries so bad that the only way out is a full cache clear. This often happens if you place the browser in offline mode while a resource (CSS, JS, font... and sometimes even the main HTML page, esp. in the case of Wikipedia) is still being downloaded.

If you are unfortunate enough to run into such a mess, from then onwards, whenever you load the page from cache, the cache responds with the partially fetched (hence partially cached) broken resource—apparently a known bug. No matter how many times you refresh—even in online mode—the full version of the resource will not get cached (the browser would fetch the full resource and just discard it secretly, coughing up the corrupted entry right away during the next offline fetch).

Although FF has a "Forget about this site" option that could have shed some light (as you could simply ask the browser to clear just that page from the cache), the feature is bugged as well, and ends up clearing your whole cache anyway; so you have no easy way of discarding the corrupted entry in isolation.

And the ultimate and unfortunate solution, for getting the site to work again, would be to drop several hundred megabytes of cache so that the browser can start from zero; or to stop using the site until the expiry time of the resource is hit, which could potentially be months in the future.

The good news is, FF's Cache2 API allows you to access the offending resource by URL, and kick it out of the cache. The bad news, on the other hand, is that although there are a few plugins that allow you to do this by hand, all of them are generic cache-browsing solutions, so they take forever to iterate through the browser cache and build the entry index, during which you cannot practically do anything useful. I don't know how things would be on a fast disk like an SSD, but on my 5400-RPM magnetic disk it takes well over 5 minutes to populate the list.

But since you already know the URL of the resource, why not invoke the Cache2 API directly with a few lines of code, and kick the bugger out yourself?

// load the disk cache
var cacheservice = Components.classes["@mozilla.org/netwerk/cache-storage-service;1"]
    .getService(Components.interfaces.nsICacheStorageService);
var {LoadContextInfo} = Components.utils.import("resource://gre/modules/LoadContextInfo.jsm", {});
var hdcache = cacheservice.diskCacheStorage(LoadContextInfo.default, true);

// compose the URL and submit it for dooming
var uri = Components.classes["@mozilla.org/network/io-service;1"]
    .getService(Components.interfaces.nsIIOService)
    .newURI(prompt("Enter the URL to kick out:"), null, null);
hdcache.asyncDoomURI(uri, null, null);

Yes, that's all. Once the script is run on the browser console, with uri populated with the URL of the offending resource (which in this case is read in using a JS prompt()), poof! You just have to reload the resource (usually by loading the parent HTML page), taking care not to hit the offline mode prematurely, to get the site working fine again.

And that's the absolute beauty of Firefox.