Archive for the ‘Java’ Category

In the latest round of development of the Lapis Server, we’ve added the following functionality:

Hot Folders

You can now add a watcher to monitor file system folders. When files are created, modified or deleted, workflows can be instantiated automatically.
Files that trigger a workflow can be required to match a regular expression. For example, the creation of a new video file can instantiate a different workflow than the modification of a PDF document.
The workflow can be instantiated with a given set of properties. An additional property, “”, is set to the name of the file which triggered the workflow instantiation.
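A natural way to implement this kind of watcher on the JVM is the standard java.nio.file.WatchService API (an assumption on my part about the implementation, not a statement of how Lapis actually does it). A minimal sketch of watching a folder for the three event kinds involved:

```java
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;
import java.util.concurrent.TimeUnit;

import static java.nio.file.StandardWatchEventKinds.*;

public class HotFolderSketch {
    /** Registers the folder for creation, modification and deletion events. */
    static WatchService register(Path folder) throws IOException {
        WatchService watcher = FileSystems.getDefault().newWatchService();
        folder.register(watcher, ENTRY_CREATE, ENTRY_MODIFY, ENTRY_DELETE);
        return watcher;
    }

    /** Waits up to the given number of seconds for an event and returns its kind. */
    static WatchEvent.Kind<?> awaitKind(WatchService watcher, long seconds)
            throws InterruptedException {
        WatchKey key = watcher.poll(seconds, TimeUnit.SECONDS);
        if (key == null) return null; // nothing happened in time
        WatchEvent.Kind<?> kind = null;
        for (WatchEvent<?> event : key.pollEvents()) {
            // Here the engine would resolve the changed path, match it against
            // the configured regular expressions and instantiate the matching
            // workflows with the configured properties.
            kind = event.kind();
        }
        key.reset();
        return kind;
    }
}
```

The engine would loop over `awaitKind` (or `take()`) in a background thread, one watcher per configured folder.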
Below is an example of the watch.xml configuration file:

The server property “” can be set to point to an XML document configuring the folders to watch as well as the workflows to instantiate.
The root element is the “watch” element; it has one attribute, named “runAs”, which defines which user to impersonate when instantiating workflows from hot folders.
The watch element has any number of folder elements. Each folder element has two attributes, “path” which defines the path of the folder to watch and “recurse” which contains a boolean (true or false) indicating if sub-folders must also be watched, recursively.
The watch element also has any number of graph elements. The graph element has one attribute, “path” which defines the path of the workflow file to instantiate relative to the graph store (defined by the server property “lapis.workflow.graphs”).
The graph element has any number of event elements. The event element has one attribute, “kind”, which defines the type of events that may trigger the instantiation of the graph. The kind attribute accepts the following values: “ENTRY_CREATE”, “ENTRY_DELETE” and “ENTRY_MODIFY”, corresponding to file creation, deletion and modification respectively.
The graph element also has any number of regex elements. The regex element has one attribute, “match”, which holds a regular expression compared against the path of the file being created, modified or deleted. If both the event kind and the regular expression match, the graph is instantiated.
Finally, the graph element has any number of parameter elements. The parameter element has two attributes: “id”, which defines the id of the parameter, and “value”, which defines its value. The parameters are passed to the graph during instantiation and set as properties (duplicates are removed).
A specific property, “”, is also added to the list of properties; it is set to the path of the file that was created, modified or deleted.
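Putting the elements described above together, a configuration along these lines should be expected (a hand-written sketch based on the description, not the actual shipped file; paths, regexes and parameter values are invented for illustration):

```xml
<watch runAs="laurent">
  <folder path="/data/incoming" recurse="true" />
  <graph path="video/transcode.xml">
    <event kind="ENTRY_CREATE" />
    <regex match=".*\.(mp4|avi)$" />
    <parameter id="priority" value="high" />
  </graph>
  <graph path="documents/index.xml">
    <event kind="ENTRY_MODIFY" />
    <regex match=".*\.pdf$" />
  </graph>
</watch>
```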

Group tasks

Now that we’ve added the capability of users to belong to groups, the next logical step was to add a group task.
Just like a node can be assigned to a user, it can also be assigned to a group or a number of groups. A group task starts its life without ownership: a user must acquire ownership of the task before they can complete it, and only members of the listed groups can do so.
The group node type requires the following node property to be set:
groups: a comma-separated list of groups whose members can acquire the ownership of the task and complete it.
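As an illustration only (the element and arc names below are my guesses, modelled on the command node example elsewhere in this post; only the “groups” property comes from the description above), a group node might be declared like this:

```xml
<group name="approve expense" owner="">
  <arc name="done" to="end" />
  <parameter name="groups" value="accounting,management" />
</group>
```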

Graph execution and node token filters

Graph execution and node token filters can now be created to accept or reject graph executions or node tokens respectively. They are used to filter the lists of graph executions and node tokens now displayed in the web application.
Graph executions can now be filtered by id, name, owner or description.
Node tokens can be filtered by id, name, owner, type (email, group, sub, etc.), description and group, and you can expand this list by creating your own filters. I am sure more will get created as time goes by.
Because of this, the web application menus have been changed slightly to separate the listing of graph executions from that of node tokens.
/wfe/graphexecutions lists the graph executions unfiltered, whilst /wfe/user/graphexecutions automatically starts with an “owner” filter set to the current user. The same logic applies to /wfe/nodetokens and /wfe/user/nodetokens.
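The engine's actual filter interface isn't shown here, but the idea can be sketched with a simple predicate (all class, record and method names below are illustrative stand-ins, not the real Lapis API):

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class FilterSketch {
    // Illustrative stand-in for a graph execution; not the real Lapis class.
    record GraphExecution(String id, String name, String owner, String description) {}

    /** A filter is just a predicate; custom filters compose via Predicate.and/or. */
    static List<GraphExecution> filter(List<GraphExecution> executions,
                                       Predicate<GraphExecution> filter) {
        return executions.stream().filter(filter).collect(Collectors.toList());
    }

    /** The kind of "owner" filter /wfe/user/graphexecutions would start with. */
    static Predicate<GraphExecution> ownedBy(String user) {
        return e -> e.owner().equals(user);
    }
}
```

Filters on id, name or description would follow the same pattern, each one a predicate over the execution's fields.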
Below is a screenshot of the web application showing this:

Email tasks

The email task now accepts users, groups and email addresses in the to, cc and bcc fields.

  • Groups are expanded into a list of users,
  • Users are expanded into a list of email addresses, and
  • Email addresses are added to the corresponding fields.

Command tasks

The command task is now a bit more stable and also accepts a working directory parameter. I have fixed an issue with the stdout and stderr output streams.

Chrome extension and Ubuntu application

I was playing around more than anything here: I thought it would be easier to start browsing if a Chrome extension opened the web site for me, so I built one for my development environment and another for my production environment. (I have started using the engine for my own workflows now – I figured that if I want a truly fit-for-purpose workflow engine, I may as well use it myself.)

To make starting the engine easier, I created an Ubuntu desktop application launcher which starts the engine and also starts Tomcat, where the web app resides.
Below is a screenshot of the launcher file in ~/.local/share/applications.
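For reference, a desktop entry of this general shape (the Exec paths and names are my guesses, not the actual file) can start both the engine and Tomcat:

```ini
[Desktop Entry]
Type=Application
Name=Lapis Server
Comment=Start the workflow engine and Tomcat
Exec=sh -c "$HOME/LapisServer/bin/start.sh && $HOME/tomcat/bin/startup.sh"
Terminal=false
Categories=Development;
```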

What’s next?

As always, I have a backlog from which to choose what I build next. I'm not sure what that will be, but the list includes an LDAP authentication module, an EZPack installer and improvements to working with “attached” files.


This sprint is almost over and I have been quite the busy guy. Below are the (important) features that have been added since my last post:

  • mail node: send emails from workflows
  • command node: execute external commands from workflows, such as perl scripts
  • encrypt/decrypt SMTP passwords
  • simplified graph execution serialisation to XML
  • password changes by users

Mail Node

I have created a core email task which connects to an SMTP server using credentials stored at the server level. The server can then provide a session easily, without details appearing in the workflows themselves. The password is encrypted in the property file and decrypted whenever a session is required.
Below are the connection details for my SMTP account (the password property provided is an encryption of the password):
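A configuration of roughly this shape would carry those details (the property keys below are illustrative, borrowed from JavaMail conventions, and are not necessarily the real Lapis property names; the values are invented):

```properties
mail.smtp.host=smtp.example.com
mail.smtp.port=587
mail.smtp.user=workflow@example.com
# Stored encrypted; decrypted only when a session is requested.
mail.smtp.password=3a9f0c...
```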

The mail task uses XSL to transform the workflow's XML representation into an HTML email. Over the following days, I extended the email task to also carry a text/plain part, accept custom content via a custom property, and reference a file containing a list of files to attach to the email.

The mail node accepts the following parameters:

  • onSuccess: name of the arc to follow if it all goes well
  • onFailure: name of the arc to follow in case of error. Can be the same arc as the success arc.
  • from: email address to send the email from
  • to: email address to send the email to
  • subject: the subject of the email
  • stylesheet.html: the URL of the stylesheet to use to create the HTML alternative content of the email
  • stylesheet.text: the URL of the stylesheet to use to create the text alternative content of the email
  • xmlContent: the text of an XML document to parse. If not present, the mailer will use the graphExecution’s XML representation.
  • fileList: a path to a file containing a list of files to attach to the email. The content disposition is set to “attachment”
  • relatedFileList: a path to a file containing a list of files that can be referenced by the email. The content disposition is set to “inline”. The content id is set to a named GUID, created from the path of the image so that it can be made easier to reference from HTML emails. The cid will therefore never change for a given path.
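Assembled from the parameter list above, a mail node might be declared like this (the element, node and arc names are illustrative, following the same conventions as the command node example further down; addresses and stylesheet URLs are invented):

```xml
<mail name="notify" owner="laurent">
  <arc name="done" to="end" />
  <arc name="failed" to="end" />
  <parameter name="onSuccess" value="done" />
  <parameter name="onFailure" value="failed" />
  <parameter name="from" value="workflow@example.com" />
  <parameter name="to" value="laurent@example.com" />
  <parameter name="subject" value="Workflow update" />
  <parameter name="stylesheet.html" value="file:styles/mail-html.xsl" />
  <parameter name="stylesheet.text" value="file:styles/mail-text.xsl" />
</mail>
```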

Command node

I also added a command task, which lets you execute system commands. With it, it is now possible to execute Perl, Python or Lua scripts, or any other executable you want. This is starting to make this workflow engine quite powerful.

The command node allows the workflow to execute commands that you would normally type at the prompt. This enables you to interact with scripts or other commands that you may have already written, such as Perl scripts.

Some commands require an interpreter. For example, a Perl program is not called directly; instead, it is passed as an argument to the Perl interpreter, “/usr/bin/perl”.

The command node takes the following parameters:

  • onSuccess: This arc will be followed when the command completes with a return code of zero.
  • onFailure: this arc will be followed if the command cannot be instantiated or if the return code is non-zero.
  • command: the command to execute, with its arguments
  • stdout: the file to create or append to with the content of the standard output
  • stderr: the file to create or append to with the content of the error output
  • stdin: optional argument to use if the program takes its input from standard input

Below is an example of a hello world program:

<command name="perl" owner="laurent" start="true">
 <!-- credit where it is due: this is simply the best hello world script in the world. -->
 <arc name="failed" to="end" />
 <arc name="done" to="end" />
 <parameter name="onSuccess" value="done" />
 <parameter name="onFailure" value="failed" />
 <parameter name="command" value="/usr/bin/perl -w"/>
 <parameter name="stdout" value="logs/perl.out" />
 <parameter name="stderr" value="logs/perl.err" />
 <parameter name="stdin"><![CDATA[package Earth;sub Greet{
         e[2])?!(push@time~~~~~~~~~~~~~~~~Zone,loc ~altime())?rotation?~~~~~~~~~~~~~q~~?The Worl ~~d?:q:[\w]::q=[\~~~~~~~~~~~~~~~~~d~a-f]=:q?..~~ ~~~?:q:.:;"42b3d3~~~~~~~~~~~~~~~~~~~~~728656c6c6f6 ~~~~~0277f627c64672~~~~~~~~~~~~~~~~~~~~~b3072796e647 ~~~~~~~42b3b3rg7d"=Ym~~~~~~~~~~~~~~~~~~~\$;~~*\;p~~~~u ~~~~~~~~~sh@_,$&;bless~~~~~~~~~~~~~~~~~~~~~~~~~$c~~~~~~~ ~~~~~~~~~o~ntine~~~~~nt~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~s=\~~~~~~~$~~~~~~~~~~~~~~~~~~~~~~~pangaea~~~~ ~~~~~~~~~~~~~~~;{l~~~~~~~~~~~~~~~~~~~~~~~~~~~~ocal@_;local$; ~~~~~~~~~~~~~~~~~="o~~~~~~~~~~~~~~~~~~~~~~~~~cean";$^A=(defi ~~~~~~~~~~~~~~~~~~~n~~~~~~~~~~~~~~~~~~~~~~~~~ed$continents)? ~~~~~~~~~~~~~~~~~~~(vec(~~~~~~~~~~~~~~~~~~~~~~$;, YYsplit(\' ~~~~~~~~~~~~~~~~~\',${\$;}~~~~~~~~~~~~~~~~~~~~~~)%3,YYsplit( ~~~~~~~~~~~~~~~~q??,$;)**2-~~~~~~~~~~~~~~~~~~~~~~(($;=Ytr/oa ~~~~~~~~~~~~~~~~eiu//)**2))=~~~~~~~~~~~~~~~~~~~~~~=28160)?q: ~~~~~~~~~~~~~~~~~.::q?!?:\'?~~~~~~~~~~~~~~~~~~~~~~\';}$^A=Ys ~~~~~~~~~~~~~~~~:\Q.\E:pack(~~~~~~~~~~~~~~~~~~~~~~\'h*\',j ~~~~~~~~~~~~~~~~~oin(q(),~~~~~~~~~~~~~~~~~~~~~~~grep{$_= ~~~~~~~~~~~~~~~~~~Ym,$,,}~~~~~~~~~~~~~~~~~~~~~~~split(" ~~~~~~~~~~~~~~~~~",@_~~~~~~~~~~~~~~~~~~~~~~~~~~[0])) ~~~~~~~~~~~~~~~~):e~~~~~~~~~~~~~~~~~~~~~~~~~~~gexe ~~~~~~~~~~~~~~~;$d~~~~~~~~~~~~~~~~~~~~~~~~~~~="s ~~~~~~~~~~~~~~ort~~~~~~~~~~~~~~~~~~~~~~~~~~<= ~~~~~~~~~~~~>,~~~~~~~~~~~~~~~~~~~~~~~~~~YY ~~~~~~~~~~~@_~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~"~~~~~~~~~~~~~~~~~~~';; s,(~|\r|\n|\s),,g;s.Y.\x7e.g; eval};Greet;'the world';]]></parameter> </command>

Other minor improvements

I have created a utility for users to change their own password or the password of another user. You must have been granted the “passwd” privilege to perform this operation (this is provided by JAAS). All passwords are stored encrypted, currently in an XML document used as the user repository (I still have to write the LDAP functionality; it will come).
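The exact scheme used to encrypt the stored passwords isn't described here; purely as an illustration, a symmetric scheme along these lines fits the "encrypted at rest, decrypted on demand" model (a real deployment would load the key from a protected keystore rather than generate one per run):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class PasswordVaultSketch {
    // Illustration only: generated fresh each run, so nothing survives a restart.
    static final SecretKey KEY = newKey();

    static SecretKey newKey() {
        try {
            KeyGenerator gen = KeyGenerator.getInstance("AES");
            gen.init(128);
            return gen.generateKey();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    static String encrypt(String clearText) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, KEY);
        byte[] bytes = cipher.doFinal(clearText.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(bytes); // safe to store in XML
    }

    static String decrypt(String encoded) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.DECRYPT_MODE, KEY);
        byte[] bytes = cipher.doFinal(Base64.getDecoder().decode(encoded));
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```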

I fixed a small issue in which workflow instances caused exceptions in the development environment when the server was stopped and restarted. Exceptions are now captured per workflow so that they no longer block server startup; workflows that raise an exception are simply not resumed.

What’s Coming Next?

I have to start thinking about bundling the release together. I’ve spent some time cleaning up the blog to look presentable and re-linked it to linked-in, google+, Twitter and Facebook. You never know who might be reading…

I also started a bit enthusiastically with the version numbering system: the next version will be version 1.2.2, to slow things down a notch. I have to look at the product backlog (yes, I do sprint planning), but I have some tidying up of the build system to do.
I'd also like to do something around queued tasks (no owner until someone takes ownership of the task), client sessions and OAuth (to make it even more robust and ready for the REST API I'll be putting in for Version 2).

I'll stop blabbing on now and actually organise my sprint. If I don't see you before next year, have a Happy New Year!

Part of what I was developing for this sprint has been to check the permissions when attempting to change the owner of a task.

The commands

laurent@laurent-Aspire-5742:~/Projects/development/LapisServer/bin$ ./ -u laurent -p xxxxx -host localhost -port 12345
test [32bbae32-8944-4c31-b370-60024bf533b3]
	=>	4f480ffe-21a3-4d82-8697-436c2a9fa506 [pause [laurent] [070d3c83-d658-4c22-bc1a-7c4f83660b49] => Complete]
	=>	a12ded01-ce52-4cef-81b6-274954cf8443 [Another pause [laurent] [4ff04705-9818-4637-87dc-5080bc35a50e] => Complete]
	=>	5bbf56c7-10d0-4da7-b3e7-0dc99bccd751 [verify [sarah] [30bd0552-7c1b-4fb7-97ae-34923789e9ed] => Active]
		=>	finish
		=>	start subworkflow
	=>	eab0c785-f523-4fea-afbd-c9589dc73088 [hello [laurent] [fe69d6ad-10b5-4ec8-9f1b-def5fdaa9505] => Complete]
	=>	6dbf3510-aef9-4840-9bf8-3c30bc2930dd [subWorkflow [laurent] [f9c21bfa-fa9d-46ec-950b-88b9d367d6af] => Rejected]

The user task should have been completed by the user Sarah. I then issue the command to change the task's owner to myself, but since I don't have permission to change task ownership, the engine raises an exception:

laurent@laurent-Aspire-5742:~/Projects/development/LapisServer/bin$ ./ -u laurent -p xxxxx -host localhost -port 12345 -w 32bbae32-8944-4c31-b370-60024bf533b3  -t 5bbf56c7-10d0-4da7-b3e7-0dc99bccd751 -o "laurent"
Exception in thread "main" access denied ("" "wfchown")
	at Method)
	at fukoka.lapis.engine.workflow.remote.RemoteNode.setOwner(
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(
	at java.lang.reflect.Method.invoke(
	at sun.rmi.server.UnicastServerRef.dispatch(
	at sun.rmi.transport.Transport$
	at sun.rmi.transport.Transport$
	at Method)
	at sun.rmi.transport.Transport.serviceCall(
	at sun.rmi.transport.tcp.TCPTransport.handleMessages(
	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(
	at sun.rmi.transport.tcp.TCPTransport$
	at java.util.concurrent.ThreadPoolExecutor.runWorker(
	at java.util.concurrent.ThreadPoolExecutor$
	at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(
	at sun.rmi.transport.StreamRemoteCall.executeCall(
	at sun.rmi.server.UnicastRef.invoke(
	at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(
	at java.rmi.server.RemoteObjectInvocationHandler.invoke(
	at com.sun.proxy.$Proxy5.setOwner(Unknown Source)
	at fukoka.lapis.client.clt.WFchown.main(

In order to succeed, I then edit the server policy file to grant the permission:

grant principal "laurent" {
    permission "shutdown";
    permission "wfchown";
};
The listing now shows that I have ownership of the task.

Oooh, not so long ago I announced that version 1.0 of the workflow system was developed. Now, it’s the turn of version 1.1. In this version, we’ve separated the interfaces from their implementations into their own jar file. This way, they are easier to add to your project and you can start developing your own custom actions for your workflows.
This means that you can compile your custom actions and jar them together. Once you have your own jar file full of custom code, you add it to the lib/ext folder and the workflow engine can call your custom node actions!
I re-instated the authorisation mechanism I had started to develop a while back so that sensitive actions cannot be performed without proper authorisation – like shutting down the server! The mechanism uses JAAS. The next version will see other activities only permitted to named users, such as wfchown, which changes the owner of a task.
Finally, I have integrated the sub-workflow task into the core API (as opposed to having it as a custom node implementation). Sub-workflow nodes can be synchronous (the node waits for the sub-workflow to finish) or asynchronous (the node transitions straight away and doesn't really care what the sub-workflow instance is doing).
We had to clean a couple of events in the process so that it would work even after server restarts, but it does work.
I have planned Version 1.2. Promoted from the project backlog to this version are:

  • licensing: this server is not free software; I'd like to make money from it
  • wfchown permission: not everyone will be able to take tasks as their own from another user
  • events cleanup: this will be continued
  • client for current user: the current user will be used to connect; the command line tools will then use this to reduce the number of parameters required


Multi-Threaded workflow system

Posted: 1 November 2013 in Java

I have just finished some touches on version 1.0 of the Workflow Engine, and it already has a few good features to rival even the best out there. Even if it needs a lot of work to make it pretty on the front-end, the back-end API is quite impressive.

  • Authenticated: you must log-in to use the workflow engine,
  • Multi-threaded: each workflow arc creates a new thread, for true concurrency
  • User tasks: the user is responsible for performing the activity and completing the task, whereupon the workflow transitions to the next set of tasks.
  • Programmatic tasks: Java classes can be executed by the workflow.
  • Event system: every action in the workflow creates an event. You can create custom event listeners and react as soon as an event is generated.
  • File persisted: every time the workflow changes, the modification is written to a file (in XML format). When the engine is shut down and restarted, user tasks can be resumed (programmatic tasks cannot yet – there is a product backlog story to make some programmatic tasks resumable).
  • Sub-workflows can be started, enabling you to compartmentalise your processes. This is powered by the event system: the “parent” workflow listens to the sub-workflow and resumes itself when the sub-workflow is finished.
  • XML format graph definitions: dynamically create XML graph documents using XSL, in-house Lapis executable XML documents, or Apache JellyBeans.
  • XML user repository with encrypted passwords (there is a product backlog story to enable LDAP)
  • A series of command line tools to start workflows, list workflows, change task ownership and complete tasks
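The event system mentioned above lends itself to a simple observer pattern. The sketch below is my own illustration of the idea; the event, listener and bus types are invented stand-ins, not the actual Lapis listener API:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class EventSketch {
    // Illustrative event and listener types, not the real Lapis API.
    record WorkflowEvent(String workflowId, String kind) {}

    interface WorkflowListener {
        void onEvent(WorkflowEvent event);
    }

    private final List<WorkflowListener> listeners = new CopyOnWriteArrayList<>();

    void addListener(WorkflowListener l) { listeners.add(l); }

    /** Every workflow action would call this to notify all registered listeners. */
    void fire(WorkflowEvent event) {
        for (WorkflowListener l : listeners) {
            l.onEvent(event);
        }
    }
}
```

This is also how the described parent/sub-workflow coupling would work: the parent registers a listener and resumes itself when it sees the sub-workflow's completion event.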

Work will now start on version 1.1, featuring the following

  • Cancel workflows: a command line tool to cancel workflows. Workflows can already be cancelled with a workaround.
  • API interfaces extracted to their own jars, to be easily added to your own project. This is how you create your custom actions (these are your workflows; they should perform your tasks).
  • Sub-workflows have so far been implemented as custom actions. They will be moved into the core API.
  • Independent sub-workflows: the parent does not need to wait for the sub-workflow to finish before moving on. Both sub-workflow types will be implemented in the core API.
  • 3rd party API: place your jars in lib/ext and we can run your tasks.
  • There will also be a development effort to add more to the javadocs which currently are looking quite bare.

Please contact me if you wish to receive an update when version 1.1 is complete so that you can receive a development license to evaluate the product. I will try to post a video of the product soon.