The default executors are built-in functionality that is always available - from accessing the context, to flow control, to communication.
There are a few services/categories of messages:
- engine - control flow
- context (ctx)
- lifecycle
- scope
- wiki
- guardian
- cron
Please read Engine
These messages are triggered internally before and after running an engine. You can use them to set environment variables, constants etc.
Each flow will be wrapped in a set of engine-generated messages:
- diesel.vals
- diesel.before
- diesel.after
This is an internally generated and processed message. It will collect all $val declarations at the global scope and execute them, so that all global values and variables are defined before any other rules are triggered.
You cannot handle or intercept this message.
When inspecting a trace, this message is logged at "trace" level.
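For example, a global value declared at the top level of a spec (the name and value here are illustrative) would be collected and executed by diesel.vals:

```
$val DEFAULT_TIMEOUT = 5000
```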
This is a message internally generated as the first message of each flow. You can use it to set up, say, constants and other common statics across all flows.
You can typically use it to define constants:
```
$when diesel.before
=> ctx.set (
  INV_CANCELED="666400011",
  INV_APPROVED="666400016",
  INV_DRAFT="666400002",
  INV_POSTED="666400009",
  INV_FAILED_POSTING="666400010"
)
```
This is an internally generated message sent after any flow has finished. You can intercept it and run some logic.
Exception handling works somewhat differently from a programming language, and it is still a work in progress. Here are some principles (see examples in engine story++ and engine spec).
Remember that each message is executed asynchronously, in a separate context. Normally, an exception does not cause other side effects in the flow (like stopping the flow): it simply results in an EError node being added to the flow and the payload being set to an Exception object. This is a big difference from normal sequential programming, where the flow is interrupted, the stack rolled back etc.
So, an exception (like divide by zero) does not cascade up or stop anything: the payload is set to the exception, but the flow continues.
If you want to instead catch and deal with exceptions in a more traditional manner, you can demarcate a block with diesel.try - diesel.catch, to have all exceptions inside that block caught, execution stopped where the exception occurred, and control handed to the catch block, as in traditional sequential programming.
Alternatively, you can stop the flow (diesel.flow.return) or check the type of the payload (payload is exception or such).
msg diesel.try
msg diesel.catch
Example of checking the payload, with no catch:
```
$when my.lookup.list
=> snakk.thirdparty (...) // assume this returns exception
=> $if (payload is exception) (payload=[])
```
In a story, when you want to ignore exceptions in a block (exceptions are reported as failures) you can surround that block with a try-catch:
```
$send diesel.try
$send ctx.set(oops = 1/0)
$send diesel.catch
```
This will throw a diesel exception within the flow. It behaves much like throwing exceptions in code: diesel.throw will be evaluated and added to the context.

Catching errors works much like a regular catch - with some differences:
- the catch is associated with the enclosing diesel.try; if missing, it will use the closest enclosing scope
- it does not require a diesel.throw either: any node of type EError in the enclosing scope can trigger the catch clause
- the catch marks the EError as handled and it will not show as a failure in tests (handled errors look yellow, not red)
- an exception value is populated in the enclosing context - it has code, message and details.

```
$when a.b
=> do.something.with.errors
=> (payload = {status:"ok"})
=> diesel.catch
|=> (payload = {status:"failed", error:exceptions})
```
When handling a diesel.catch, there are a few values populated:
- exception - the last exception "caught"
- exceptions - all the exceptions that were caught

You can assert a condition. If successful, nothing happens. If the condition is false, the entire flow is stopped with a diesel.flow.return.
```
=> diesel.assert(x = wfHeader not empty)
```
You can pass several arguments to assert:
- the flow is stopped with a diesel.flow.return if a boolean evaluates to false
- if you pass a diesel.http.response and/or a diesel.http.response.status, then you can override the diesel.flow.return defaults, for instance to send back a JSON failure:

```
=> diesel.assert(
  x=actionSpec is defined,
  diesel.http.response = {state:States.INVALID, detailedState: "deviceAction is not found"})
```
(deprecated form: diesel.return)

Stop the current flow and return. You can include a return code, headers etc - if the flow is serving an HTTP request.
Example of an entry point using a flow.return:

```
$mock diesel.rest (path ~path "/account2/404/id")
=> diesel.flow.return(
  diesel.http.response.status=404,
  diesel.http.response.header.myHeader = "mine",
)
```
Stops the current rule and returns to the parent rules. A rule starts with the $when
keyword. This is similar to returning from a function.
Stops the flow execution within the current scope. This should be the most used return, when the logic needs to stop processing something. Scopes are generally automatic, but not always.
msg diesel.debug
msg diesel.later
The engine is by default asynchronous - meaning that each message is executed in a separate actor message/context, on potentially separate threads.
diesel.engine.sync will cause the engine to process the rest of the current flow in synchronous mode - this means that even if multiple paths are available at one point, they will be processed in sequence, on the same thread, as opposed to being processed as separate messages.
The advantage is making the engine a bit faster (the engine uses this mode in very few scenarios), but the disadvantages include potentially impacting the engine if exceptions, timeouts etc appear.
When to use it? No good reason that I can think of, other than looping through collections or such.
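A minimal sketch of switching to synchronous mode mid-flow (the message names here are hypothetical):

```
$when my.process.batch (items)
=> diesel.engine.sync        // the rest of this flow runs in sequence, on one thread
=> my.process.item (item = items)
```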
Like diesel.engine.sync but with the opposite effect: switch the engine back to the default asynchronous mode.
You can pause/continue an engine (like a breakpoint). Execution can then be controlled step by step from the engineView:
msg diesel.engine.pause
msg diesel.engine.play
msg diesel.engine.continue
msg diesel.engine.cancel (id, reason)
Abort/stop another engine.
msg diesel.engine.strict
msg diesel.engine.nonstrict
This is the default executor for messages to the current context (data).
Print the current context to log.
Insert an EInfo - if you click on it, it will dump the value to the browser's console.
Dump the contents of the current context - you can use it to see what values are there.
Echo a value - you can easily see a value on the screen - use it to debug expressions.
Set a value with the given name and value.
Set values: one for each input argument.
Sleep - duration is in millis. Good to simulate timeouts etc.
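A small sketch combining these ctx messages (the values are illustrative, and the sleep parameter is assumed to be named duration):

```
$when my.demo
=> ctx.set (name = "Jane", retries = 3)
=> ctx.echo (name)
=> ctx.sleep (duration = 500)   // simulate a 500ms delay
```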
msg ctx.base64encode (result)
msg ctx.base64decode (result)
Encode or decode BASE64. Each input value is encoded/decoded into payload. If result is passed in, then a value with that name will also be populated.
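For instance (the attribute names are illustrative):

```
$when my.encode.example
=> ctx.base64encode (myValue = "hello", result = "encoded")
=> ctx.echo (encoded)
```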
msg ctx.sha1 (result)
msg ctx.sha256 (result)
Like base64, but sha1 or sha256 encoded as hex. You can pass in many parameters, like parm1, and each will result in a parm1_sha1 or a parm1_sha256 respectively. Additionally, you can pass in a result="outputParm1" if you want a specific output value; otherwise the payload will be used.
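A sketch following the naming convention above (the input value is illustrative):

```
$when my.hash.example
=> ctx.sha256 (parm1 = "hello")
// parm1_sha256 and payload are now populated with the hex digest
```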
Feed in an array of JSON objects and get back CSV as an array of strings.

msg ctx.jsonToCsv (list, separator, useHeaders?, csvStream?)

The result is set in payload.
Turn an array of strings into one bigger string, using the separator (i.e. separator="\n").
Turn a big string CSV into an array of Json objects. If no headers are present, some default attribute names will be used.
msg ctx.csvToJson (separator, hasHeaders?)
Arguments:
The result is one array with one document per row; the attribute names are either the headings or generated like "col3".
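A sketch of parsing a CSV payload (assuming the CSV string is already in payload):

```
$when my.parse.csv
=> ctx.csvToJson (separator = ",", hasHeaders = "true")
// payload is now an array of JSON objects, one per row
```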
Persist the current context between calls / executions - see State Machines for examples:
Clear the current context:
msg ctx.storySync
msg ctx.storyAsync
```
$mock test.diesel.storyVal (theValue=storyValue1)
```
Run tests in the context of a user, configurable per domain. Normally, an automated test runs in the background, so without a user context. This will pose many issues to testing APIs that are meant to work in the context of a user - so this is where this message comes in handy.
Invoke it at the beginning of a story that's meant to run in the context of a user, and configure this special user in the reactor properties, as diesel.testUserEmail=someemail, where the value identifies the test user.
Make sure there is a user. This will prevent public messages from being invoked by the "public" or index engines etc.
Use this to convert a list of documents/objects into a CSV-style list of strings; the first row will be the headers.
Convert a payload of list of strings into a string, using the separator.
The document database model is used to keep state. There are a few kinds available:
- inmem - in memory, per user
- memshared - in memory, per app
- col - persisted in a built-in MongoDb, available for paid accounts, depending on volume
- postgres - persisted in a PostgresDb, when deployed in-house

The generic operations are:
- upsert(collection, id?, document) - update or create a document with the given value, returns the ID created
- get(collection, id) - get; if nothing is found, it will return an Undefined
- getsert(collection, id, default?) - get, or create if a default is given. If no default and nothing is found, it will return an Undefined
- query(collection, parmA, parmB...) - query documents based on document properties, returns a list of documents
- remove(collection, id) - delete the document with the given id
- clear(collection) - delete all entries from one collection / one document type... careful with this one! Not all DBs implement this operation

These are implemented by all DB types and instances:
msg diesel.db.INST.upsert (collection, id?, document)
msg diesel.db.INST.get (collection, id)
msg diesel.db.INST.getsert (collection, id, default?)
msg diesel.db.INST.query (collection)
msg diesel.db.INST.remove (collection, id)
msg diesel.db.INST.clear (collection)
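As a sketch, using the in-memory instance (the collection and attribute names are illustrative; per the description above, upsert returns the ID created in payload):

```
$when my.save.user (user)
=> diesel.db.inmem.upsert (collection = "users", document = user)
// payload is now the ID created
=> diesel.db.inmem.get (collection = "users", id = payload)
```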
The in-memory database is good to mock up functions that require a bit of state. You should not rely on it being available or persisted for too long :).
The data is grouped by user (so if you're logged in, you can access the same collection across flows). For anonymous users running anon fiddles, data is only available within the same flow.
msg diesel.db.inmem.upsert (collection, id?, document)
msg diesel.db.inmem.remove (collection, id)
msg diesel.db.inmem.get (collection, id)
msg diesel.db.inmem.getsert (collection, id, default?)
msg diesel.db.inmem.query (collection)
msg diesel.db.inmem.log
msg diesel.db.inmem.clear
NOTE: this is a shared database per user, so it may be important to clear between sessions.
NOTE: there are small limits as to the number of collections and entries in these.
The shared database is good to mock up functions that require a bit of state across flows in the same realm. This one is available in a cluster (in case of transparent restarts of processing nodes etc).
msg diesel.db.memshared.upsert (collection, id?, document)
msg diesel.db.memshared.remove (collection, id)
msg diesel.db.memshared.get (collection, id)
msg diesel.db.memshared.getsert (collection, id, default)
msg diesel.db.memshared.query (collection)
msg diesel.db.memshared.log
msg diesel.db.memshared.clear
NOTE: this is a shared database per realm, so it may be important to clear it between sessions.
NOTE: there are small limits as to the number of collections and entries in these.
This is an actual persisted DB - available for paid member accounts.
msg diesel.db.col.upsert (collection, id, document)
msg diesel.db.col.remove (collection, id)
msg diesel.db.col.get (collection, id)
msg diesel.db.col.getsert (collection, id, default?)
msg diesel.db.col.query (collection)
msg diesel.db.col.clear (collection)
msg diesel.db.col.clearAll
msg diesel.db.postgres.new (connection, env?, url)
msg diesel.db.postgres.close (connection, env?)
msg diesel.db.postgres.upsert (collection, id, document)
msg diesel.db.postgres.remove (collection, id)
msg diesel.db.postgres.get (collection, id)
msg diesel.db.postgres.getsert (collection, id, default?)
msg diesel.db.postgres.query (collection)
Note that for connected DBs (like postgres), you need to create a connector first, with diesel.db.postgres.new (connection, url):
- the default connection is "default" or ""
- pass connection="x" to specify which connection the operation will go through, if not default
- pass env="x" to specify which environment the entity will belong to; otherwise local or the current diesel.env will be used

Queries can support the following optional parameters:
- size, to limit the size of the returned collection
- from, to start from a different position (in support of pagination)

For query, these cannot be used as attributes in the entities: size, from
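A sketch of creating a connector at startup (the connection URL is hypothetical):

```
$when diesel.realm.loaded
=> diesel.db.postgres.new (connection = "default", url = "jdbc:postgresql://localhost:5432/mydb")
```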
The wiki executor deals with wiki commands.
msg diesel.wiki.follow (userName, wpath, how)
User follows a wiki (for instance following the club's calendar when joining a club).
msg diesel.wiki.content (wpath, result?, type?)
Set a value with the name contained by result
to the content referenced by wpath
. The content is not formatted or pre-processed. This is useful to get schemas, sample data etc - all of these can be saved as topics and loaded into variables like this.
If there is no result
specified, the payload
is set to the respective contents.
The optional type is used to coerce the wiki content string into a given type, also parsing it etc. Valid values are "JSON" or "String" - with String being the default.
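For instance, loading a topic saved as JSON into a variable (the wpath is hypothetical):

```
$when my.load.schema
=> diesel.wiki.content (wpath = "Spec:my-schema", result = "schema", type = "JSON")
```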
msg diesel.wiki.format (wpath, result?)
This will format a topic into HTML - you can use it to make up fragments etc.
msg diesel.wiki.updated (wpath, realm, event, userName)
Generated automatically when a topic is updated. You can attach rules to it and handle it.
JS script executor
This is an automatic executor which will execute blocks of code.
If you define a function with $def
like so:
```
$def func.haha(p1,p2) {{
p1+p2
}}
```
Then this is executed for the func.haha message:

```
$msg func.haha (p1="a", p2="b")

$when ha.ha => func.haha (p1,p2)
$msg ha.ha (p1="a", p2="b")
```
See more complex examples in expr-json-story and expr-json-spec.
This one can make REST calls - it works by defining templates for the calls (see REST and HTTP templates). As you can see, the templates mirror the actual HTTP calls, so you can configure header attributes, content or both.
You can also snakk directly by calling these:
msg snakk.json (url, verb, body, headers, result)
msg snakk.xml (url, verb, body, headers, result)
msg snakk.text (url, verb, body, headers, result)
msg snakk.ssh (host, port:Number? = 22, user, pwd, cmd)
Here's a sample usage (using oauth to get a token):
```
$when api.getAccessToken
=> snakk.json(
  url="${AUTH_URL}",
  verb="POST",
  'Content-type'="application/x-www-form-urlencoded",
  body="grant_type=client_credentials&client_id=${CLIENT_ID}&client_secret=${CLIENT_SECRET_ENC}&resource=${RESOURCE}/"
)
=> (accessToken=payload.access_token)
```
=> (accessToken=payload.access_token)
The result of snakk.json is of type JSON, and you can see the last expression extracts something from it.
See more details in Snakking REST.
msg snakk.parse.json
msg snakk.parse.xml
msg snakk.parse.regex
This is the default message generated for an incoming HTTP event. It will be passed these parameters:
- path - the path of the incoming request
- verb - the verb: GET, POST, PUT, PATCH etc
- queryString - decoded query string
- queryStringEncoded - non-decoded query string, as it came in
- dieselQuery - a JSON object containing all the parsed query params

In path expressions with the ~path operator:
- :env matches a single path segment
- *elkpath matches the rest of the path from that point on

```
$when diesel.rest (path ~path "/v1/:env/elk/*elkpath", verb == "GET")
=> elk.query.passthrough(path=elkpath, query=queryString)
```
You can push and pop scopes - this is important to define independent sub-scopes.
msg diesel.scope.push
msg diesel.scope.pop
See Variables and scopes for more details on scopes, exception handling and variables.
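A sketch of an independent sub-scope (the message and value names are illustrative):

```
$when my.isolated.block
=> diesel.scope.push
=> ctx.set (temp = 1)   // temp lives only in this sub-scope
=> diesel.scope.pop
```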
There are a set of lifecycle events, which can be intercepted. These refer to either a project (or app node) or each flow.
When a realm is loaded in a node (this will be called once per node - many times in a cluster), a set of events will be raised. To enable these messages/events, just map the following events to something in the Environment settings, by using them in a $when rule. The startup of each node/realm combination triggers a single flow with this message sequence:
diesel.realm.configure(realm)
diesel.realm.loaded(realm)
diesel.realm.ready(realm)
If you don't map them, they won't be triggered. Also, you need to handle these in Environment settings and nowhere else... this is an optimization.
msg diesel.realm.configure (realm)
This event is raised whenever the settings for an environment and user combination are needed - normally once per startup, before the realm is active, but it can also be called when the settings change (i.e. if you edit EnvironmentSettings in "dev mode"). Intercepting this is where you set any global variables needed by any of the flows. Note - no flow should directly call this message.
This is called multiple times as EnvironmentSettings is updated - careful with resource leaks etc.
msg diesel.realm.loaded (realm)
This is called when a reactor is loaded on a node. This is generally when the node starts. Set any globals here and do any initialization work here. Initialization flows typically check databases, load configuration files etc.
msg diesel.realm.ready (realm)
This is called on startup, after all the other lifecycle events. It indicates that the realm is considered ready and all initialization work has been done, including whatever you did when handling diesel.realm.loaded - it is triggered after that, and that is its only advantage.
This is the typical convention for configuring individual flows in a realm: intercept diesel.setEnv(env,user)
and add your configuration, for instance:
```
$when diesel.setEnv(env == "sandbox")
=> ctx.set (
  HOST = "https://sandbox1.cloudhub.io",
  URL = "https://sandbox1.cloudhub.io/myService",
  PING_URL = "https://sandbox1.cloudhub.io/status"
)
```
These then become variables in each of the flows that call this.
Typically, the environment is configured on each and every flow by intercepting the diesel.before
and calling diesel.setEnv
there.
Stick to this pattern to make it maintainable... here's an example:
```
$when <trace> diesel.before
=> diesel.setEnv(env=diesel.env, user=diesel.username)
```
Note: diesel.env
and diesel.username
represent the current env and user in the context of the current flow. The only exception to this rule is when you want to invoke a flow in a different environment, for instance let's say you offer a multi-tenant API which sets the environment:
```
$when diesel.rest (path ~path "/v1/:env/someAPI")
=> diesel.setEnv(env=env, user=diesel.username)
```
The `:env` part of the path will supersede the `diesel.env` local environment, so you have to call `diesel.setEnv` manually with this new environment.
Another instance where you need to call diesel.setEnv directly is other flows that run in an environment other than the default, for instance Guardian pollers:
```
$when diesel.guardian.poll(env)
=> diesel.setEnv(env, user=diesel.username)
=> snakk.json (url="${SERVER}/status")
...
```
The guardian is a utility for automated testing and executing flows. See Guardian for more details and examples.
Create a guardian schedule for a specific environment - you have to call this on diesel.realm.loaded
:
msg diesel.guardian.schedule (schedule, env, inLocal:Boolean)
A poll
is executed every time on the schedule - you need to implement this, in Environment settings. Read more in Guardian.
msg diesel.guardian.poll (realm, env)
After a poll, this decides to start a run, read more in Guardian:
msg diesel.guardian.polled (env, stamp, inRealm?, tagQuery?)
msg diesel.guardian.run (realm, env, tagQuery?)
msg diesel.guardian.starts (realm, env)
msg diesel.guardian.ends
msg diesel.guardian.notify
msg diesel.guardian.report
msg diesel.guardian.stats
msg diesel.guardian.clear
The crons are best configured in Environment settings. Every time you issue a diesel.cron.set
on a given name, it will reset that job and re-start it (if there was a counter, the counter is reset). See also how to use crons for polling Looping, polling and waiting.
Note that these jobs fire off in every node in the cluster. If you only want one, like a singleton, use the Diesel Singleton++.
The cron related messages:
cronMsg
doneMsg
msg diesel.cron.set (name, env, time, schedule?, cronExpr?, count?, tquery, singleton, collectCount, cronMsg?, description?, inSequence?)
Start a cron schedule with the given name and frequency (schedule).
- name - the name is mandatory; you can identify different schedules by name
- env - the environment for this cron
- acceptPast - boolean, whether it should accept times in the past - default "false"
- scheduleExpr or schedule - in natural language, e.g. 30 seconds
- cronExpr - instead of schedule, you can use cron expressions, which underneath use the Quartz dispatcher, see docs - e.g. "* * * * * ?"
- time - optional, an absolute time to kick in just once, ISO format
- endTime - optional, an absolute time to stop, ISO format
- count - optional: it will stop after this many occurrences; don't pass anything if you want it to go on and on and on... (until endTime, if any)
- cronMsg - optional: it will call this instead of the default diesel.cron.tick
- tquery - this is an important one: every time this fires, a new engine is created and it will load all the specs that match this tquery (these tags), see tag query++
- singleton - TBD; by default all cron jobs are singletons (i.e. only one per environment) but, on demand, there is an ability to have a job per node (i.e. to load some configuration etc)
- inSequence - if "true", these ticks are guaranteed (within a certain timeout) to be executed in sequence, otherwise they may start in parallel. Note that when running inSequence, each execution that takes a longer time may impact the number of ticks of the cron. Also, the relative timing of the crons will be impacted, as the new "tick" won't actually be triggered until the previous one has completed...
- collectCount - by default, cron job traces are not equal to others, so only the last 10 (system configuration) are collected - you can override this if they're important; 0 will not keep any. A larger number may or may not work, depending on other constraints
- tags - a list of tags applicable to this object
- clusterMode - can be one of singleton, local, all. There is no lb value for this attribute - it can be achieved by using singleton and tagging the rule with <route.lb>
Internally, when using the scheduleExpr it will use an akka scheduler. When using a cronExpr, it will use the Quartz scheduler.
Please see the Quartz Cron Expression tutorial, currently at http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html
Notes on behaviour:
- if you pass only time, without schedule or cronExpr, then it is a one-time tick at that given time; otherwise this is the time at which it will start to kick in, so you should pass in time=now() if you want it to start now
- pass schedule or cronExpr in their respective formats
- the cronMsg will be passed these attributes of the cron:
The result will be an exception or a success message.
Cancel a cron with:
This will list all the current cron jobs / schedules.
msg diesel.cron.tick (name, env)
This is fired off every time - you need to intercept it and do something, in Environment settings, example:
```
$when diesel.cron.tick (name == "keepalive")
=> snakk.json(url="http://this.is.keptalive.com/keepalive")
```
msg diesel.cron.stop (name, env, count)
This is fired off when a count is reached and the timer stops. Use it to signal the end of a cron polling or such.
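For instance, a sketch of intercepting the stop (the cron name and the my.polling.done message are hypothetical):

```
$when diesel.cron.stop (name == "poller")
=> my.polling.done (env)
```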
These messages are used to validate expressions and calculate the next time of an expression:
msg diesel.cron.nextTime (cronExpr)
msg diesel.cron.validate (schedule?, cronExpr?)
The behavior of the cron jobs is controlled by a few properties (some of these may not be available, depending on your plan), such as:
- diesel.cron.await
In a cluster, the crons have some specific features:
- crons with clusterMode=singleton and clusterMode=all are replicated in all nodes in the cluster, automatically

When creating a cron remotely (aka a singleton), the message diesel.cron.remote.create is generated - intercept this if you need to, for instance, persist these crons.
When cancelling a cron remotely, the message diesel.cron.remote.cancel is raised - you may intercept it if needed.
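For instance, a sketch of persisting remotely created crons (my.persist.cron is a hypothetical message):

```
$when diesel.cron.remote.create (name, env)
=> my.persist.cron (name, env)
```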
You can access meta information in several ways.
msg diesel.dom.meta (className)
It's best to set this up when the realm is loaded - this is executed upon startup:
```
$when diesel.realm.loaded
=> diesel.cron.set (name="keepalive", env="sandbox", schedule="5 minutes")
```
These crons are not persisted. If the server restarts for any reason, they are all lost. You may want to make them persisted, which is easy enough:
The ability to send emails is built-in.
msg diesel.mail.send (to, subject, body)
To configure the email sender, you need to set these in the project properties:
```
mail.support.email=The Support Team <support@myorg.com>
mail.replyTo=...
mail.smtp.user=...
mail.smtp.pwd=...
mail.smtp.host=smtp.office365.com
mail.smtp.port=587
```
Note that the mail.smtp.pwd needs to be encoded, see js:wix.utils.enc.
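A minimal usage sketch (the addresses and content are illustrative):

```
$when my.alert (reason)
=> diesel.mail.send (
  to = "ops@example.com",
  subject = "Alert",
  body = "Something happened: ${reason}"
)
```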
General notification emails are sent to mail.admin.email.