Automated Webhook QA

I love automated QA tools like Runscope. Recently at Intercom I investigated running similar tests against our production webhooks offering.

The problem

Let's look at some of the problems you hit when determining "are webhooks working well right now":

  • The act of triggering a webhook (via an API action for example) and receiving the webhook are completely separate. Sometimes they can be seconds or more apart. A test framework would ideally abstract this away into a request/response model.
  • QAing webhooks requires a running web server, registered as a webhook subscriber with the system you are testing.
  • Receiving webhooks from a remote system requires a publicly reachable address.

Shelduck is a tool I've been working on to address these problems. At its core it provides a lens-based DSL for describing webhook expectations. For example:
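As a rough illustration only (every name below is hypothetical, standing in for Shelduck's actual combinators), a lens-based expectation of this shape might be written as:

```haskell
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TemplateHaskell   #-}
-- Hypothetical sketch: WebhookTest and these lenses are illustrative,
-- not Shelduck's real API.
import Control.Lens

data WebhookTest = WebhookTest
  { _endpoint :: String             -- API endpoint to hit
  , _params   :: [(String, String)] -- request parameters
  , _expected :: String             -- webhook topic we expect back
  } deriving Show

makeLenses ''WebhookTest

userCreated :: WebhookTest
userCreated = WebhookTest "" [] ""
  & endpoint .~ "/users"
  & params   .~ [("email", "<random>")]
  & expected .~ "user.created"
```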

This says:

  • When I hit the /users endpoint
  • With a random email
  • I should receive a user.created webhook

Shelduck spins up Spock servers at runtime, and uses ngrok to tunnel public traffic to them, so you can test against production systems without needing a public IP yourself.
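The receiving side of this is just an HTTP server listening for webhook deliveries. A minimal sketch of such a receiver, assuming a recent Spock API (this is my own illustration, not Shelduck's internal server), exposed to the world by running `ngrok http 8080` alongside it:

```haskell
{-# LANGUAGE OverloadedStrings #-}
-- Sketch of a webhook receiver of the kind Shelduck spins up;
-- not Shelduck's actual code.
import Control.Monad.IO.Class (liftIO)
import Web.Spock
import Web.Spock.Config

main :: IO ()
main = do
  cfg <- defaultSpockCfg () PCNoDatabase ()
  runSpock 8080 (spock cfg app)

app :: SpockM () () () ()
app =
  post "webhook" $ do
    payload <- body          -- raw webhook payload
    liftIO (print payload)   -- record it for later assertion
    text "ok"
```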

Logging

A lot of the current decisions made around Shelduck are ad hoc and likely to change. By default, Shelduck writes a JSON-based logfile at ~/shelduck.log. It also provides a Yesod-powered web server on localhost, which parses this log and gives you a friendly way to investigate issues.

Each block (time descending) represents an event that Shelduck has observed.

Other tools

It is interesting that Pusher are working on something that sounds similar, also in Haskell.