Default executors

The default executors are built-in functionality that is always available, covering everything from accessing the context to flow control and communication.

There are a few services/categories of messages:

Special flow control messages

Please read Engine

These messages are triggered internally before and after running an engine. You can use them to set environment variables, constants etc.

Before and after

Each flow will be wrapped in a set of engine-generated messages:

  • diesel.vals
  • diesel.before
  • ... // all rules and messages
  • diesel.after

msg diesel.vals 

This is an internally generated and processed message. This will collect all $val declarations at the global scope and execute them, so that all global values and variables are defined before any other rules are triggered.

You cannot handle or intercept this message.

When inspecting a trace, this message is logged at "trace" level.

msg diesel.before 

This message is internally generated as the first message of each flow. You can use it to set up constants and other common statics across all flows.

For example, defining constants:

$when diesel.before
=> ctx.set (
    INV_CANCELED="666400011",
    INV_APPROVED="666400016",
    INV_DRAFT="666400002",
    INV_POSTED="666400009",
    INV_FAILED_POSTING="666400010"
    )

msg diesel.after 

This is an internally generated message, raised after any flow has finished. You can intercept it and run some logic.
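
For example, a minimal sketch that logs the context at the end of every flow:

$when diesel.after
=> ctx.log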

Exception handling, try-catch-throw

Exception handling works somewhat differently from a typical programming language, and it is still a work in progress. Here are some principles (see examples in engine story++ and engine spec).

Remember that each message is executed asynchronously, in a separate context. Normally, an exception does not cause side effects in the flow, such as stopping it; it simply results in an EError node being added to the flow and the payload being set to an Exception object. This is a big difference from normal sequential programming, where the flow is interrupted, the stack rolled back etc.

So, an exception (like divide by zero) does not cascade up or stop anything: the payload is set to the exception, but the flow continues.

If you want instead to catch and deal with exceptions in a more traditional manner, you can demarcate a block with diesel.try - diesel.catch. All exceptions inside that block are caught, execution stops where the exception occurred, and control is handed to the catch block, as is traditional in sequential programming.

So, it is up to you to either catch the exception and do something (like diesel.flow.return) or check the type of the payload, with `payload is exception` or such.

msg diesel.try 

msg diesel.catch 

Examples of handling errors and exceptions:

Checking the payload, with no catch:

$when my.lookup.list
=> snakk.thirdparty (...) // assume this returns exception
=> $if (payload is exception) (payload=[])

In a story, when you want to ignore exceptions in a block (exceptions are reported as failures) you can surround that block with a try-catch:

$send diesel.try
$send ctx.set(oops = 1/0)
$send diesel.catch

msg diesel.throw 

This will throw a diesel exception within the flow. It will behave much like throwing exceptions in code:

  • other siblings are stopped from processing, only within the parent scope
  • any input values passed to diesel.throw will be evaluated and added to the context
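
For instance, a minimal sketch - the message name and the code/message values are illustrative assumptions, made available to the catch via the context:

$when my.validate (amount < 0)
=> diesel.throw (code="NEGATIVE_AMOUNT", message="amount cannot be negative")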

diesel.catch

Catching errors works much like a regular catch - with some differences:

  • it doesn't require a diesel.try; if one is missing, it will use the closest enclosing scope
  • it doesn't require a diesel.throw either: any node of type EError in the enclosing scope can trigger the catch clause
  • when activated, it will mark the EError as handled and it will not show as a failure in tests (handled errors look yellow, not red)
  • also, when activated, any actions underneath it are executed, see the example below
  • when caught, an exception is populated in the enclosing context - it has code, message and details

$when a.b
=> do.something.with.errors
=> (payload = {status:"ok"})
=> diesel.catch
|=> (payload = {status:"failed", error:exceptions})

When handling a diesel.catch there are a few values populated:

  • exception - the last exception "caught"
  • exceptions - all the exceptions that were caught

diesel.assert

You can assert a condition. If the condition is true, nothing happens. If the condition is false, the entire flow is stopped with a diesel.flow.return.

=> diesel.assert(x = wfHeader not empty)

You can pass several arguments to assert:

  • any boolean argument should be true
  • all other arguments are passed to the diesel.flow.return if a boolean evaluates to false
  • so if you pass a diesel.http.response and/or a diesel.http.response.status then you can override the diesel.flow.return defaults, for instance to send back a JSON failure

=> diesel.assert(
  x=actionSpec is defined,
  diesel.http.response = {state:States.INVALID, detailedState: "deviceAction is not found"})

Other

msg diesel.flow.return 

(deprecated form: diesel.return)

Stop the current flow and return. You can include a return code, headers etc, if the flow is returning as an HTTP response.

Example of an entry point using a flow.return:

$mock diesel.rest (path ~path "/account2/404/id") 
=> diesel.flow.return(
    diesel.http.response.status=404,
    diesel.http.response.header.myHeader = "mine",
  )

msg diesel.rule.return 

Stops the current rule and returns to the parent rules. A rule starts with the $when keyword. This is similar to returning from a function.
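
For instance, a minimal sketch (the message name and condition are illustrative) - stop just this rule early when there is nothing to do:

$when my.process.order (status == "CANCELED")
=> diesel.rule.return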

msg diesel.scope.return 

Stops the flow execution within the current scope. This should be the most used return, when the logic needs to stop processing something. Scopes are generally automatic, but not always.

More

msg diesel.debug 

msg diesel.later 

msg diesel.engine.sync 

The engine is by default asynchronous - meaning that each message is executed in a separate actor message/context, on potentially separate threads.

diesel.engine.sync will cause the engine to process the rest of the current flow in synchronous mode - this means that even if multiple paths are available at one point, they will be processed in sequence, on the same thread, as opposed to being processed as separate messages.

The advantage is making the engine a bit faster (the engine itself uses this mode in very few scenarios); the disadvantages include potentially impacting the engine if exceptions or timeouts appear.

When to use it? No good reason that I can think of, other than looping through collections or such.

msg diesel.engine.async 

Like diesel.engine.sync but with the opposite effect: switch the engine back to the default asynchronous mode.

pause/continue

You can pause/continue an engine (like a breakpoint). Execution can then be controlled step by step from the engineView:

msg diesel.engine.pause 

msg diesel.engine.play 

msg diesel.engine.continue 

Control other engines

msg diesel.engine.cancel  (id, reason)

Abort/stop another engine.

Strict

msg diesel.engine.strict 

msg diesel.engine.nonstrict 

ctx

This is the default executor for messages to the current context (data).

msg ctx.log 

Print the current context to log.

msg ctx.info 

Insert an EInfo - if you click on it, it will dump the value to the browser's console.

msg ctx.test 

msg ctx.debug 

Dump the contents of the current context - you can use it to see what values are there.

msg ctx.echo 

Echo a value - you can easily see a value on the screen - use it to debug expressions.

msg ctx.setVal  (name, value)

Set a value with the given name and value.

msg ctx.set 

Set values: one for each input argument.

msg ctx.sleep  (duration)

Sleep - duration is in millis. Good to simulate timeouts etc.
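
For example, a mock simulating a slow downstream service (the message name is illustrative):

$mock my.slow.service
=> ctx.sleep (duration=2000) // simulate a 2 second timeout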

msg ctx.timer  (duration)

msg ctx.base64encode  (result)

msg ctx.base64decode  (result)

Encode or decode BASE64. Each input value is encoded or decoded into the payload. If result is passed in, then a value with that name will also be populated.
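
A minimal sketch - the input parameter name is an illustrative assumption:

=> ctx.base64encode (input="hello", result="encoded") // payload and "encoded" should hold "aGVsbG8="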

msg ctx.sha1  (result)

msg ctx.sha256  (result)

Like base64 but SHA-1 or SHA-256 hashed, encoded as hex. You can pass in many parameters like parm1 and each will result in a parm1_sha1 or a parm1_sha256 respectively. Additionally, you can pass in result="outputParm1" if you want a specific output value, otherwise the payload will be used.
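
For instance, using the parm1 convention from above:

=> ctx.sha256 (parm1="hello") // populates parm1_sha256 with the hex hash; payload is used, since no result was passed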

Data transformation

Feed an array of JSON objects and get an array of strings of CSV.

msg ctx.jsonToCsv  (list, separator, useHeaders?, csvStream?)

  • if the csvStream is present, then results are sent there line by line as opposed to a resulting payload
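
A minimal sketch, assuming useHeaders=true emits the heading row as the first line:

=> ctx.jsonToCsv (list=[{name:"Jane", age:30}, {name:"Joe", age:40}], separator=",", useHeaders=true)

The payload should become an array of CSV lines, e.g. "name,age", "Jane,30", "Joe,40".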

Turn an array of strings into a bigger string using the separator (i.e. separator="\n")

msg ctx.mkString  (separator)

Turn a big CSV string into an array of JSON objects. If no headers are present, default attribute names will be used.

msg ctx.csvToJson  (separator, hasHeaders?)

Arguments:

  • separator is the separator, normally ","
  • hasHeaders defaults to true, means the first row is the heading and these become the name of the properties in the json objects

The result is one array with one document per row; the attribute names come either from the heading or are generated like "col3".
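
A minimal sketch, assuming the CSV string comes in as the payload from a previous step (the URL is illustrative):

$when my.load.report
=> snakk.text (url="${HOST}/report.csv", verb="GET")
=> ctx.csvToJson (separator=",", hasHeaders=true) // payload becomes a list of documents, one per row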

Others

msg ctx.persisted  (kind, id)

Persist the current context between calls / executions - see State Machines for examples.

msg ctx.clear 

Clear the current context.

Story teller instructions

msg ctx.storySync 

msg ctx.storyAsync 

Story values are available for use in story constructs:

$mock:: test.diesel.storyVal
   . (theValue=storyValue1)

Authorization and authentication

Run tests in the context of a user, configurable per domain. Normally, an automated test runs in the background, without a user context. This poses many issues when testing APIs that are meant to work in the context of a user - this is where this message comes in handy.

Invoke it at the beginning of a story that's meant to run in the context of a user, and configure this special user in the reactor properties, as diesel.testUserEmail=someemail, identifying the test user.

msg ctx.setValAuthUser 

Make sure there is a user. This will prevent public messages from being invoked by the "public" or index engines etc.

msg ctx.authUser 

TODO document

msg ctx.csv  (list, separator)

Use this to convert a list of documents/objects into a list of strings, CSV style; the first row will be the headers.

msg ctx.mkString  (separator)

Convert a payload of list of strings into a string, using the separator.

Databases

The document database model is used to keep state, and there are a few kinds available:

  • inmem - in memory, per user
  • memshared - in memory, per app
  • col - persisted in a built-in MongoDb, available for paid accounts, depending on volume
  • postgres - persisted in a PostgresDb, when deployed in-house

The generic operations are:

  • upsert(collection, id?, document) - update or create document with given value, returns the ID created
  • get(collection,id) - get a document. If nothing is found, it will return an Undefined
  • getsert(collection,id,default?) - get or create, if a default is given. If no default and nothing is found, it will return an Undefined
  • query(collection,parmA,parmB...) - query documents based on document properties, returns a list of documents
  • remove(collection, id) - delete the document with the given id
  • clear(collection) - delete all entries from one collection / one document type... careful with this one! Not all DBs implement this operation

Generic DB operations

These are implemented by all DB types and instances:

msg diesel.db.INST.upsert  (collection, id?, document)

msg diesel.db.INST.get  (collection, id)

msg diesel.db.INST.getsert  (collection, id, default?)

msg diesel.db.INST.query  (collection)

msg diesel.db.INST.remove  (collection, id)

msg diesel.db.INST.clear  (collection)
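
For instance, a minimal sketch using the inmem instance (the message and collection names are illustrative):

$when my.db.sample
=> diesel.db.inmem.upsert (collection="orders", document={status:"new"})
=> (orderId = payload) // upsert returns the ID created
=> diesel.db.inmem.get (collection="orders", id=orderId)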

Mem DB

The in-memory database is good to mock up functions that require a bit of state. You should not rely on it being available or persisted for too long :).

The data is grouped by user (so if you're logged in, you can access the same collection across flows). For anonymous users running anon fiddles, data is only available within the same flow.

msg diesel.db.inmem.upsert  (collection, id?, document)

msg diesel.db.inmem.remove  (collection, id)

msg diesel.db.inmem.get  (collection, id)

msg diesel.db.inmem.getsert  (collection, id, default?)

msg diesel.db.inmem.query  (collection)

msg diesel.db.inmem.log 

msg diesel.db.inmem.clear 

NOTE: this is a shared database per user, so it may be important to clear between sessions.

NOTE: there are small limits as to the number of collections and entries in these.

Shared in-memory DB

The shared database is good to mock up functions that require a bit of state across flows in the same realm. This one is available in a cluster (in case of transparent restarts of processing nodes etc).

msg diesel.db.memshared.upsert  (collection, id?, document)

msg diesel.db.memshared.remove  (collection, id)

msg diesel.db.memshared.get  (collection, id)

msg diesel.db.memshared.getsert  (collection, id, default)

msg diesel.db.memshared.query  (collection)

msg diesel.db.memshared.log 

msg diesel.db.memshared.clear 

NOTE: this is a shared database per realm/app, so it may be important to clear it between sessions.

NOTE: there are small limits as to the number of collections and entries in these.

Persisted DB

This is an actual persisted DB - available for paid member accounts.

msg diesel.db.col.upsert  (collection, id, document)

msg diesel.db.col.remove  (collection, id)

msg diesel.db.col.get  (collection, id)

msg diesel.db.col.getsert  (collection, id, default?)

msg diesel.db.col.query  (collection)

msg diesel.db.col.clear  (collection)

msg diesel.db.col.clearAll 

msg diesel.db.postgres.new  (connection, env?, url)

msg diesel.db.postgres.close  (connection, env?)

msg diesel.db.postgres.upsert  (collection, id, document)

msg diesel.db.postgres.remove  (collection, id)

msg diesel.db.postgres.get  (collection, id)

msg diesel.db.postgres.getsert  (collection, id, default?)

msg diesel.db.postgres.query  (collection)

Note that for connected DBs (like postgres), you need to create a connector first, with diesel.db.postgres.new (connection, url):

  • the name of the default connection is "default" or ""
  • you can pass an additional optional parameter to each DB call, connection="x" to specify which connection the operation will go through, if not default
  • you can pass an additional optional parameter to each DB call, env="x", to specify which environment the entity will belong to; if not passed, the local or current diesel.env will be used
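
For instance, a minimal sketch (the connection name and URL are illustrative assumptions):

$when diesel.realm.loaded
=> diesel.db.postgres.new (connection="mydb", url="jdbc:postgresql://localhost:5432/mydb")

$when my.save.order (order)
=> diesel.db.postgres.upsert (connection="mydb", collection="orders", id="order-1", document=order)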

Pagination and dealing with large collections

Queries can support the following optional parameters

  • size, to limit the size of the returned collection
  • from, to start from a different position (in support of pagination)
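
For example, fetching the third page of 10 documents matching status="new":

=> diesel.db.col.query (collection="orders", status="new", size=10, from=20)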

Reserved attribute keywords

For query, these cannot be used as attributes in the entities: size, from

Wiki functions (EEWiki)

The wiki executor deals with wiki commands.

msg diesel.wiki.follow  (userName, wpath, how)

User follows a wiki (for instance following the club's calendar when joining a club).

msg diesel.wiki.content  (wpath, result?, type?)

Set a value with the name contained by result to the content referenced by wpath. The content is not formatted or pre-processed. This is useful to get schemas, sample data etc - all of these can be saved as topics and loaded into variables like this.

If there is no result specified, the payload is set to the respective contents.

The optional type is used to coerce the wiki content string into a given type, also parsing it etc. Valid values are "JSON" or "String", with String being the default.
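
For instance (the wpath value is an illustrative assumption):

=> diesel.wiki.content (wpath="Spec:my-sample-schema", result="schema", type="JSON")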

msg diesel.wiki.format  (wpath, result?)

This will format a topic into HTML - you can use it to make up fragments etc.

msg diesel.wiki.updated  (wpath, realm, event, userName)

Generated automatically when a topic is updated. You can attach rules to it and handle it.

def

JS script executor

This is an automatic executor which will execute blocks of code.

If you define a function with $def like so:

$def func.haha(p1,p2) {{
p1+p2
}}

Then this is executed for the func.haha message:

$msg func.haha (p1="a", p2="b")

$when ha.ha => func.haha (p1,p2)

$msg ha.ha (p1="a", p2="b")

See more complex examples in expr-json-story and expr-json-spec.

REST (snakk)

This one makes REST calls - it works by defining templates for the calls, see REST and HTTP templates. The templates mirror the actual HTTP calls, so you can configure header attributes, content, or both.

You can also snakk directly by calling these:

msg snakk.json  (url, verb, body, headers, result)

msg snakk.xml  (url, verb, body, headers, result)

msg snakk.text  (url, verb, body, headers, result)

msg snakk.ssh  (host, port:Number?=22, user, pwd, cmd)

Here's a sample usage (using OAuth to get a token):

$when api.getAccessToken
=> snakk.json(
    url="${AUTH_URL}", 
    verb="POST", 
    'Content-type'="application/x-www-form-urlencoded",
    body="grant_type=client_credentials&client_id=${CLIENT_ID}&client_secret=${CLIENT_SECRET_ENC}&resource=${RESOURCE}/"
    )
=> (accessToken=payload.access_token)

The result of snakk.json is of type JSON and you can see the last expression extracts something from it.

See more details in Snakking REST.

You can also use snakk to just parse local data:

msg snakk.parse.json 

msg snakk.parse.xml 

msg snakk.parse.regex 

REST APIs

msg diesel.rest 

This is the default message generated for an incoming HTTP event. It will be passed these parameters:

  • path the request path
    • you can match the path to an extraction pattern with the special ~path operator
    • it will not only match but also extract, in the example below:
    • :env a single path segment
    • *elkpath the rest of the path from that point on
  • other parameters passed in context:
    • path the path of the incoming request
    • verb the verb: GET, POST, PUT, PATCH etc
    • queryString decoded query string
    • queryStringEncoded non-decoded query string, as it came in
    • dieselQuery as a JSON object containing all the parsed query params
    • all the query parameters are also flattened and passed in - note that you should not overwrite any of the reserved parameters in this list - the behaviour is undefined

$when diesel.rest (path ~path "/v1/:env/elk/*elkpath", verb == "GET")
=> elk.query.passthrough(path=elkpath, query=queryString)

Scope

You can push and pop scopes - this is important to define independent sub-scopes.

msg diesel.scope.push 

msg diesel.scope.pop 
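
For instance, in a story, mirroring the try/catch example above:

$send diesel.scope.push
$send ctx.set(temp = 1)
$send diesel.scope.pop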

See Variables and scopes for more details on scopes, exception handling and variables.

Lifecycle events

There are a set of lifecycle events, which can be intercepted. These refer to either a project (or app node) or each flow.

When a realm is loaded in a node (this happens once per node, so many times in a cluster), a set of events is raised. To enable these messages/events, simply map the following events to something in the Environment settings, by using them in a $when rule. The startup of each node/realm combination triggers a single flow with this message sequence:

  • diesel.realm.configure(realm)
  • diesel.realm.loaded(realm)
  • diesel.realm.ready(realm)

If you don't map them, they won't be triggered. Also, you need to handle these in Environment settings and nowhere else... this is an optimization.

msg diesel.realm.configure  (realm)

This event is raised whenever the settings for an environment and user combination are needed - normally once per startup, before the realm is active, but it can also be called when the settings change (i.e. if you edit EnvironmentSettings in "dev mode"). When intercepting this, set any global variables needed by any of the flows. Note: no flow should directly call this message.

This is called multiple times as EnvironmentSettings is updated - careful with resource leaks etc.

msg diesel.realm.loaded  (realm)

This is called when a reactor is loaded on a node. This is generally when the node starts. Set any globals here and do any initialization work here. Initialization flows typically check databases, load configuration files etc.
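
For example, a minimal sketch setting a global (the name and value are illustrative):

$when diesel.realm.loaded
=> ctx.set (GLOBAL_TIMEOUT = 5000)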

msg diesel.realm.ready  (realm)

This is called on startup, after all the other lifecycle events. It indicates that the realm is considered ready and all initialization work has been done, including whatever you did when handling diesel.realm.loaded - being triggered after that is its only advantage.

Configuring flows

msg diesel.setEnv  (env, user)

This is the typical convention for configuring individual flows in a realm: intercept diesel.setEnv(env,user) and add your configuration, for instance:

$when diesel.setEnv(env == "sandbox")
=> ctx.set (
    HOST = "https://sandbox1.cloudhub.io",
    URL = "https://sandbox1.cloudhub.io/myService",
    PING_URL = "https://sandbox1.cloudhub.io/status"
    )
    
These then become variables in each of the flows that call this.

Typically, the environment is configured on each and every flow by intercepting diesel.before and calling diesel.setEnv there. Stick to this pattern to make it maintainable... here's an example:

$when <trace> diesel.before
=> diesel.setEnv(env=diesel.env, user=diesel.username)

Note: diesel.env and diesel.username represent the current env and user in the context of the current flow. The only exception to this rule is when you want to invoke a flow in a different environment, for instance let's say you offer a multi-tenant API which sets the environment:

$when diesel.rest (path ~path "/v1/:env/someAPI")
=> diesel.setEnv(env=env, user=diesel.username)

The `:env` part of the path will supersede the `diesel.env` local environment, so you have to manually call `diesel.setEnv` with this new environment.

Other instances where you need to call diesel.setEnv directly are flows that run in an environment other than the default, for instance Guardian pollers:

$when diesel.guardian.poll(env)
=> diesel.setEnv(env, user=diesel.username)
=> snakk.json (url="${SERVER}/status")
...

Other

msg public diesel.ping 

Guardian messages

The guardian is a utility for automated testing and executing flows. See Guardian for more details and examples.

Create a guardian schedule for a specific environment - you have to call this on diesel.realm.loaded:

msg diesel.guardian.schedule  (schedule, env, inLocal:Boolean)

A poll is executed every time on the schedule - you need to implement this in Environment settings. Read more in Guardian.

msg diesel.guardian.poll  (realm, env)

After a poll, this decides whether to start a run; read more in Guardian:

  • stamp is the stamp you computed from polling
  • inRealm is the target realm (defaults to current realm)
  • tagQuery is the list of tags you want to test, defaults to "story/sanity"

msg diesel.guardian.polled  (env, stamp, inRealm?, tagQuery?)

msg diesel.guardian.run  (realm, env, tagQuery?)

msg diesel.guardian.starts  (realm, env)

msg diesel.guardian.ends 

msg diesel.guardian.notify 

msg diesel.guardian.report 

msg diesel.guardian.stats 

msg diesel.guardian.clear 

Cron functionality

The crons are best configured in Environment settings. Every time you issue a diesel.cron.set on a given name, it will reset that job and restart it (if there was a counter, the counter is reset). See also how to use crons for polling in Looping, polling and waiting.

Note that these jobs fire off in every node in the cluster. If you only want one, like a singleton, use the Diesel Singleton++.

The cron related messages:

  • control:
    • diesel.cron.set
    • diesel.cron.nextTime (cronExpr)
    • diesel.cron.validate (schedule?, cronExpr?)
    • diesel.cron.list
    • diesel.cron.cancel (name)
  • runtime:
    • diesel.cron.tick (name,realm,env) - each tick of a cron, if you did not supply a cronMsg
    • diesel.cron.stop (name,realm,env) - when cron ends, if you did not supply a doneMsg
  • internal
    • diesel.cron.remote.* - internal messages to control crons in a cluster; you may see them fly around, but you should not interfere with them

msg diesel.cron.set  (name, env, count?, time, schedule?, cronExpr?, tquery, singleton, collectCount, cronMsg?, description?, inSequence?)

Start a cron schedule with the given name and frequency (schedule).

  • Parameters:
    • name - the name is mandatory; you can identify different schedules by name.
    • env - the environment for this cron
    • acceptPast - boolean, whether to accept times in the past - default "false"
    • scheduleExpr or schedule - in natural language, for example "30 seconds". The units are:
      • "d day",
      • "h hour",
      • "min minute",
      • "s sec second",
      • "ms milli millisecond",
      • "µs micro microsecond",
      • "ns nano nanosecond"
    • cronExpr - instead of schedule, you can use cron expressions, which use the Quartz dispatcher underneath, see docs
      • Examples:
      • every second: "* * * * * ?"
    • time - optional, an absolute time to kick in just once, ISO format
    • endTime - optional, an absolute time to stop, ISO format
    • count - optional: it will stop after this many occurrences; don't pass anything if you want it to go on and on and on... (until endTime, if any)
    • cronMsg - optional: it will call this instead of the default diesel.cron.tick
    • tquery - this is an important one: every time this fires, a new engine is created and it will load all the specs that match this tquery (these tags), see tag query++
    • singleton - TBD; by default all cron jobs are singletons (i.e. only one per environment), but on demand there is the ability to have a job per node (i.e. to load some configuration etc)
    • inSequence - if "true", these ticks will be guaranteed (within a certain timeout) to be executed in sequence, otherwise they may start in parallel. Note that when running inSequence, each execution that takes a longer time may impact the number of ticks of the cron. Also, the relative timing of the crons will be impacted, as the new "tick" won't actually be triggered until the previous has completed...
    • collectCount - by default, cron job traces are not treated like others: only the last 10 (a system configuration) are collected - you can override this if they're important
      • 0 will not keep any. A larger number may or may not work, depending on other constraints
    • tags - a list of tags applicable to this object
    • clusterMode can be one of singleton, local, all
      • note that we don't support lb for this attribute - it can be achieved by using singleton and tagging the rule with <route.lb>

Internally, when using the scheduleExpr it will use an akka scheduler. When using a cronExpr, it will use the Quartz scheduler.

Be careful with the cron expressions... an expression like "* */5 * * * ?" would still kick in every second; you probably want "0 */5 * * * ?"!

Please see the Quartz Cron Expression tutorial, currently at http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html

Notes on behaviour:

  • if you only pass in time, without schedule or cronExpr, then it is a one-time tick at that given time; otherwise, time is when the schedule starts to kick in, so pass time=now() if you want it to start now
  • if it's a scheduled ticker, you have to pass in either schedule or cronExpr in the respective formats.
  • the cronMsg will be passed these attributes of the cron:
    • name
    • realm
    • env
    • cronCount

The result will be an exception or a success message.
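
For instance, a sketch using a custom cronMsg (the message and cron names are illustrative):

$when diesel.realm.loaded
=> diesel.cron.set (name="poller", env="dev", schedule="30 seconds", cronMsg="my.poll.tick")

$when my.poll.tick (name, env, cronCount)
=> ctx.echo (cronCount)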

msg diesel.cron.list 

This will list all the current cron jobs / schedules.

Cancel a cron with:

msg diesel.cron.cancel  (name)

msg diesel.cron.tick  (name, env)

This is fired off on every tick - you need to intercept it and do something, in Environment settings. Example:

$when diesel.cron.tick (name == "keepalive")
=> snakk.json(url="http://this.is.keptalive.com/keepalive")

msg diesel.cron.stop  (name, env, count)

This is fired off when a count is reached and the timer stops. Use it to signal the end of a cron polling or such.

These messages are used to validate expressions and calculate the next time of an expression:

msg diesel.cron.nextTime  (cronExpr)

msg diesel.cron.validate  (schedule?, cronExpr?)
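
For example:

=> diesel.cron.nextTime (cronExpr="0 */5 * * * ?")
=> diesel.cron.validate (schedule="30 seconds")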

Cron properties

The behavior of the cron jobs is controlled by a few properties. Some of these properties may not be available, depending on your plan.

diesel.cron.await

  • defaulted to true
  • set this to false to allow parallel timers - by default this is enabled, so each cron tick awaits the previous one to finish (basically single threaded)

Crons and cluster

In a cluster, the crons have some specific features:

  • crons with clusterMode=singleton and clusterMode=all are replicated in all nodes in the cluster, automatically
  • cancelling crons is also replicated automatically in the cluster

When creating a cron remotely (aka a singleton), the message diesel.cron.remote.create is generated - intercept this if you need to, for instance to persist these crons.

When cancelling a cron remotely, the message diesel.cron.remote.cancel is raised - you may intercept it if needed.

Meta functions

You can access meta information in several ways.

msg diesel.dom.meta  (className)

Lifecycle and persistence

It's best to set this up when the realm is loaded - this is executed upon startup:

$when diesel.realm.loaded
=> diesel.cron.set (name="keealive", env="sandbox", schedule="5 minutes")

These crons are not persisted. If the server restarts for any reason, they are all lost. You may want to make them persistent, which is easy enough:
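
A hedged sketch of one way to do it, using diesel.cron.remote.create from the cluster section above - the collection name and the attributes assumed here (name, env, schedule) are illustrative:

$when diesel.cron.remote.create (name, env, schedule)
=> diesel.db.col.upsert (collection="crons", id=name, document={name:name, env:env, schedule:schedule})

On diesel.realm.loaded you would then query this collection and diesel.cron.set each entry again.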

IO

  • diesel.io.textFile(path)
  • diesel.io.listFiles(path)
  • diesel.io.listDirectories(path)
  • diesel.io.canRead(path)
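
For instance (the paths are illustrative):

=> diesel.io.listFiles (path="/tmp/reports")
=> diesel.io.textFile (path="/tmp/reports/summary.txt")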

Email

The ability to send emails is built-in.

msg diesel.mail.send  (to, subject, body)
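
For instance:

=> diesel.mail.send (to="ops@myorg.com", subject="Nightly run failed", body="Check the Guardian report.")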

To configure the email sender, you need to set these in the project properties:

mail.support.email=The Support Team <support@myorg.com>
mail.replyTo=...
mail.smtp.user=...
mail.smtp.pwd=...
mail.smtp.host=smtp.office365.com
mail.smtp.port=587

Note that the mail.smtp.pwd needs to be encoded, see js:wix.utils.enc.

General notification emails are sent to mail.admin.email.

