Continued from Markup and DSL.
In the modern age, enterprises are being transformed by microservices and reactive principles (asynchronous decoupling, encapsulated microservices, etc.). Requirements, testing, and even architecture and development have not kept pace, though - it's time we took a new, fresh look at requirements, continuous testing and, indeed, the architecture process.
We used to capture requirements in one or more Word documents, which were handed out to architects, vendors, integrators and developer teams. Often, these documents drift out of sync with the actual system needs as well as the implementation, especially across versions.
Acceptance criteria are often not kept up to date with the requirements, and most of the time they are just some text in a document, followed by some (manual?) testing by a QA team; by the time that testing becomes automated, it may already be out of sync with the updated requirements.
We aim to change that status quo. Instead of words in a document, we will build a collaborative requirements website, where we organize the requirements, acceptance criteria and tests. These are "live" and run all the time, throughout the development lifecycle and even in production, increasing quality and reducing errors across the enterprise development lifecycle.
The requirements and acceptance criteria are fully versioned, tagged and kept in sync with the actual tests and even code. Since you're probably familiar with the notion of a user story (https://en.wikipedia.org/wiki/User_story), we organize requirements and acceptance criteria into stories. A story contains both free-text descriptions and coded acceptance tests - see some examples below.
In a microservices architecture, as in most use cases and user stories, we focus on the APIs and flows: the messages and data that flow between the services, the components of the system. Or events - many good stories start with an event like "a user walks into a bar".
In the most generic case, requirements are of the form: "when a message or event is received, we expect this and that". We call these stories, and here's an example: when the event home.guest_arrived appears, we expect the lights.on command to have been sent out, and also the status of the lights to be bright.
Here's how we may write this in an intuitive DSL, using tags like $send and $expect:
$send home.guest_arrived (name="Jane")
$expect lights.on
$expect (light is "bright")
Ok... now what? Well - because of the respective tags, the system has already created an HTTP/REST microservice for home.guest_arrived, which will invoke the test above. Just try it in the REST tab above.
You can send this message directly from the requirement page (such as this very page), which is now "active". It allows you to test any of the services you have - the stories can basically be "told" at any time, without any coding. This is what happens when you click on the Trace tab: the message is simulated and executed, and you will see what happened, as well as the state of the acceptance tests.
The assumption here is that we're dealing with two microservices, home and lights, and two specific calls or pieces of functionality: home.guest_arrived and lights.on. This was a specification for the home.guest_arrived functionality, and we expect it to somehow trigger the other one. We did not implement either functionality; let's say some other teams or vendors will. We just want to specify what it should do.
How does this test the actual microservice, rather than just this simulation? We'll see that soon. We'll also look later at what executing a message means.
These requirement stories, once written, can be validated at any time against the delivered system. They can be triggered manually, or via scripts when the underlying implementation changes.
They could also run against production data, for assurance.
So far, we have defined the expected behaviour for home and lights... without interfering with those actual services.
While designing and architecting solutions, we will not always have access to the other services needed to actually run the system, so one need is to simulate them, in order to test the others. We can simulate microservices right here, via mocks:
$mock lights.check => (light = "bright")
$mock chimes.welcome => (greeting = "Greetings, "+name)
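A story can exercise these mocks directly, with the same $send and $expect tags used earlier (a sketch - the expected greeting value simply follows the chimes.welcome mock rule above):

$send chimes.welcome (name="Jane")
$expect (greeting is "Greetings, Jane")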
Now, the lights.check and chimes.welcome services are available as mocks and will be simulated with the rules above. You can now develop, elsewhere, the logic that connects and/or uses these; see below.
So, because we now know how to mock them, lights.check and chimes.welcome are actually available: you can invoke them and they will be mocked. Check the REST tab for the actual links to invoke these, and play with them.
On the other hand, if we already do have the services we need to complete our architecture, we often have to wait for the functionality that connects those available services to be developed, before we can use it and/or write other higher-level logic.
So the question now becomes: how do we stimulate the services we do have?
With this tool, you don't have to wait for anyone to write any code or start a process or do any of those pesky things. We can create inputs and dispatch calls to stimulate the services we already have, to test the higher-level logic, before the code is even written.
Here are some examples of dispatcher rules. In this case, the AST and REST tabs contain nothing, as these are just rules. However, as soon as you write these rules, they can be executed, and with them in place we can now mock and play with the entire system.
$when home.guest_arrived(name) => lights.on
$when home.guest_arrived(name is "Jane") => chimes.welcome(name="Jane")
$when lights.on => lights.check
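With these dispatcher rules in place, the original story should now play out end-to-end: sending home.guest_arrived triggers lights.on, which in turn triggers the mocked lights.check. A sketch, using only the tags already introduced:

$send home.guest_arrived (name="Jane")
$expect lights.on
$expect (light is "bright")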
Now, the home.guest_arrived microservice is available (as a mock) and will simply connect those others. You can continue developing other logic that depends on this new aggregate service!