Lapis Server 1.2.9

Posted: 18 July 2014 in Teamsite

It has been a somewhat short sprint with not a lot to talk about. The interface is getting more and more usable, which is a good thing since I’m using it to manage the sprints. The filters and comparators are really helping with this.

Server instance name

The lapis start program has been modified to take a second argument, which is used to distinguish between multiple running instances. The argument is simply there to appear on the command line when checking the running processes, as shown below:

    $ ps -ef | grep LapisServer.jar
    laurent 21255 21010 5 10:42 ? 00:01:26 java -Djava.util.logging.config.file=etc/logging.properties -DinstanceName=dev -jar lib/LapisServer.jar etc/server.properties
    laurent 21067 21065 5 10:42 ? 00:00:39 java -Djava.util.logging.config.file=etc/logging.properties -DinstanceName=tst -jar lib/LapisServer.jar etc/server.properties

Before this was introduced, there was no real way to find out which process was for which instance. Now I know...

Due Date & Priority

I've added a due date and a priority to workflows and tasks to complete the move towards Getting Things Done. Along with that, I created due date filters and comparators, and priority filters and comparators.
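As a flavour of what these look like, here is a minimal sketch of a priority comparator (the NodeToken type name and its getPriority() accessor are my assumptions, not necessarily the actual Lapis API):

    import java.util.Comparator;

    /* Minimal sketch of a priority comparator; NodeToken and getPriority()
       are assumed names, not necessarily the actual Lapis API. */
    public class PriorityComparator implements Comparator<NodeToken> {
        @Override
        public int compare(NodeToken a, NodeToken b) {
            // Highest priority first.
            return Integer.compare(b.getPriority(), a.getPriority());
        }
    }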

Lapis Server 1.2.8

Posted: 7 July 2014 in Teamsite

It just never stops! Lapis Server 1.2.8 is now done, but it’s a release that is part of a bigger program of work, so I’m straight on to 1.2.9.

I had to create a few things to make it all available as a software platform.

  • A signup process (signup form, signup servlet and signup workflow; it’s pretty nice that after signing up to a workflow engine, the engine uses a workflow to send you a nice email – it uses an email task!)
  • Some terms and conditions and a privacy policy
  • A new stylesheet
  • Changes to the login process to include “forgot username” and “forgot password” functionality. Again, both of these use a workflow to send the emails
  • Rewrite rules to make sure you can’t see the workflows you’re not supposed to.
  • Role-based security constraints to match what will be the various packages. The Tomcat realm class now reads the user’s roles so the web app has the same roles as the Lapis server.
  • Web interface improvements, such as moving the tasks link into its own menu and adding links to the privacy policy and terms and conditions
  • A task status filter (active, completed, rejected, all), which somehow had been missed out in the design process and is DEFINITELY required to make it user-friendly. The default is “active”, so that the task link is effectively a link to your current to-do list.
  • An available workflows datasource to make it more user friendly by selecting workflows from a drop-down list instead of typing them in. The datasource only lists the files the user has access to.
  • Changes to the server so it can bind itself to a specific network interface in multi-homed network environments such as that of the Red Hat cloud.
  • Changes to the client command line tools so they can bind themselves to a specific network interface in the same multi-homed environments.

What’s next?

As part of my 1.2.9 sprint, I am looking at finishing the integration with the Red Hat environment (with the shutdown and startup scripts and a whole round of testing) and sending a link to a few people to do a soft launch and gather feedback before getting myself a real domain name!
After that, I’ll see how to get a payment solution in place to offer upgrades and start the process of making changes towards working with an organisation’s groups.

Lapis Server 1.2.7

Posted: 29 June 2014 in Teamsite

Lapis Server version 1.2.7 is now done.

JCR integration

As promised, the JCR integration has been done.

We can define repositories centrally via an XML configuration file such as the one below.

    <repositories>
        <repository name="jackrabbit.cq5.dev.author.crx.default"
                    path="/jackrabbit/cq5/dev/author"
                    type="jsr170"
                    url="http://localhost:4502/crx/server"
                    username="admin"
                    password="SrAxtm8ihY+hBPYklrYYJQ==">
            <action name="Go to" pattern="http://localhost:4502/crx/de/index.jsp#/crx.default/jcr%3aroot${path}" />
        </repository>
        <!--
        <repository name="jackrabbit.cq5.dev.publish.crx.default"
                    path="/jackrabbit/cq5/dev/publish"
                    type="jsr170"
                    url="http://localhost:4503/crx/server"
                    username="admin"
                    password="SrAxtm8ihY+hBPYklrYYJQ==" />
        -->
    </repositories>

Note that we can associate a number of actions with the repository. These are used by the web app to display links to your repository browser app (in my example, it's Adobe's CQ5).
As per the previous post, the client can retrieve the details of the repository and connect to it. The node action (e.g. the custom Java of your tasks) would connect to the JCR in the following way:
    List<String> repositories = client.getRepositories();
    for (String repository : repositories) {
        Repository jcrRepository = JcrUtils.getRepository(client.getJCRRepositoryURL(repository));
        Credentials credentials = client.getJCRCryptedCredentials(repository);
        Session jcrSession = jcrRepository.login(credentials);
        /* do something with the nodes */
        List<String> attached = nodeToken.getGraphExecution().getAttached();
        for (String anAttached : attached) {
            /* anAttached is e.g. /jackrabbit/cq5/dev/author/some/path/within */
            String basePath = client.getJCRRepositoryPath(anAttached); /* e.g. /jackrabbit/cq5/dev/author */
            LapisPath lapisPath = new LapisPath(basePath, anAttached);
            logger.info(lapisPath.modulatePath()); /* e.g. /some/path/within */
            /* retrieve the node at the modulated path and do something with it */
            Node node = jcrSession.getNode(lapisPath.modulatePath());
        }
    }

Attached JCR nodes

Attached nodes are now also supported by the wfstart.sh command line tool via a "lapis.attached={path}" argument. The new wfupdate.sh command line tool also supports attaching and detaching JCR nodes via the -attach and -detach options.
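Purely as an illustration, invocations might look like the lines below (the graph file name and the argument order and placement are my assumptions; only the lapis.attached argument and the -attach/-detach options come from the tools themselves):

    $ ./wfstart.sh mygraph.xml lapis.attached=//jackrabbit/cq5/dev/author/content/home
    $ ./wfupdate.sh -attach //jackrabbit/cq5/dev/author/content/home/products <execution id>
    $ ./wfupdate.sh -detach //jackrabbit/cq5/dev/author/content/home/products <execution id>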

The web application allows you to browse the repository and select nodes to attach.

Lapis Server on Red Hat OpenShift and SaaS

I've started placing a new version of the web app on OpenShift, the Red Hat cloud platform. This is to be able to offer the software as a service. I still have to define my pricing plan, but the free option will enable users to instantiate a "todo" workflow. The first upgrade option would enable users to instantiate an "assign" workflow, where groups and group tasks would be enabled. The next upgrade option would see the introduction of custom workflows, where email tasks would be enabled. The final upgrade option after that would be a custom instance of the server with our support.

Multi-homed environments

An interesting side effect of looking at Red Hat OpenShift was that the servers have multiple IP addresses, and the default RMI configuration was of course picking the one which had security restrictions. It is now possible to add a server property named "bindAddress" which forces the Lapis server to listen on a particular network interface. Likewise for the client tools, the local end of the socket can be made to pick the correct IP address when connecting to the server.
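I won't reproduce the server internals here, but a minimal sketch of how a bindAddress property can be honoured with RMI (via a custom RMIServerSocketFactory; the class name and wiring are mine, not the actual Lapis code) looks like this:

    import java.io.IOException;
    import java.net.InetAddress;
    import java.net.ServerSocket;
    import java.rmi.server.RMIServerSocketFactory;

    /* Sketch: an RMI server socket factory that listens on one configured
       interface instead of all of them. Not the actual Lapis implementation. */
    public class BindAddressServerSocketFactory implements RMIServerSocketFactory {

        private final InetAddress bindAddress;

        public BindAddressServerSocketFactory(String bindAddress) throws IOException {
            this.bindAddress = InetAddress.getByName(bindAddress);
        }

        @Override
        public ServerSocket createServerSocket(int port) throws IOException {
            // A backlog of 0 tells ServerSocket to use its default queue length.
            return new ServerSocket(port, 0, bindAddress);
        }
    }

The factory can then be supplied when exporting the remote object, e.g. via UnicastRemoteObject.exportObject(remote, port, clientSocketFactory, serverSocketFactory).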

Audit trail

In addition to the workflows and the tasks showing an audit trail of when they are created, activated, rejected and completed, all the events (such as when a property is changed or a JCR node attached) are now stored, thus giving a complete audit trail of what happened during the life of the workflow.

What's next?

I will be focusing on the SaaS web app for signup and the todo workflow (I'm already two-thirds of the way through, so it shouldn't take long) and then start on the assignment workflow and find out what that means for the groups.

I am looking at connecting the web application to JCR repositories at the moment so that I can “attach” JCR nodes to workflows (and also possibly watch nodes that are created/modified/deleted as trigger points for instantiating workflows). I’ve had a few issues along the way but the base logic is there to be able to instantiate JCR sessions from the details located in the Lapis Server.

Classpath

Make sure the jars below are on the classpath. They were downloaded from the Maven repository or copied from within the Jackrabbit standalone jar file:

    commons-codec-1.5.jar
    commons-collections-3.2.1.jar
    commons-httpclient-3.1.jar
    jackrabbit-jcr-commons-2.8.0.jar
    jackrabbit-jcr-rmi-2.8.0.jar
    jackrabbit-jcr2dav-2.8.0.jar
    jackrabbit-jcr2spi-2.8.0.jar
    jackrabbit-spi-2.8.0.jar
    jackrabbit-spi-commons-2.8.0.jar
    jackrabbit-spi2dav-2.8.0.jar
    jackrabbit-spi2jcr-2.8.0.jar
    jackrabbit-webdav-2.8.0.jar
    jcl-over-slf4j-1.7.4.jar
    jcr-2.0.jar
    slf4j-api-1.6.6.jar

Storing repository details

A file containing the repositories can be created and referenced by the server property lapis.repositories.config (we set ours to etc/repositories.xml). Below is an example repositories file "mounting" the 2 JCR repositories of a standard CQ5 installation. Note that the passwords are encrypted.

    <repositories>
        <repository name="jackrabbit.cq5.dev.author.crx.default"
                    path="//jackrabbit/cq5/dev/author"
                    type="jsr170"
                    url="http://localhost:4502/crx/server"
                    username="admin"
                    password="SrAxtm8ihY+hBPYklrYYJQ==" />
        <repository name="jackrabbit.cq5.dev.publish.crx.default"
                    path="//jackrabbit/cq5/dev/publish"
                    type="jsr170"
                    url="http://localhost:4503/crx/server"
                    username="admin"
                    password="SrAxtm8ihY+hBPYklrYYJQ==" />
    </repositories>

Getting connected to the repositories via the Lapis Server

    List<String> repositories = client.getRepositories();
    for (String repository : repositories) {
        Repository jcrRepository = JcrUtils.getRepository(client.getJCRRepositoryURL(repository));
        Credentials credentials = client.getJCRCryptedCredentials(repository);
        Session jcrSession = jcrRepository.login(credentials);
        dump(jcrSession.getRootNode());
    }

The code above first retrieves the Lapis "mount points" of the JCR repositories (in our case, we have two: "//jackrabbit/cq5/dev/author" and "//jackrabbit/cq5/dev/publish"). Each one is then passed to the Jackrabbit utility class to retrieve the repository. In order to get a session to the repository, we must get the username and password, which is provided by the client call to getJCRCryptedCredentials. The method is given a path (it can be anything below a mount point, so //jackrabbit/cq5/dev/author/var/audit/com.day.cq.wcm.core.page/content/home/6efc5a6b-f361-4d6a-9f5d-43a7ffc862ab works just as well as //jackrabbit/cq5/dev/author) and resolves the credentials for us from the encrypted details in the repositories file.

All we need to do then is call the login method of the repository object and get connected. I have in mind that the username/password combination would be for a read-only user and that another method would be available to pass your own username/password (this is yet to be implemented). My development environment is already dumping data from the JCR repository this way (something I will have to replicate when I implement the changes for the web application).
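The dump call itself is just a plain JCR traversal; a minimal sketch of such a helper (my own, not part of the Lapis API) might be:

    import javax.jcr.Node;
    import javax.jcr.NodeIterator;
    import javax.jcr.RepositoryException;

    /* Sketch of a recursive node dump: print the path of a node, then
       recurse into its children. My own helper, not part of the Lapis API. */
    static void dump(Node node) throws RepositoryException {
        System.out.println(node.getPath());
        NodeIterator children = node.getNodes();
        while (children.hasNext()) {
            dump(children.nextNode());
        }
    }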

Attaching JCR nodes to the workflows

In the very near future, it will be possible to attach a reference to a JCR node to workflows. The paths would be the Lapis "mounted" paths, but I will be providing a path modulator/demodulator to translate between the path as the Lapis server knows it across multiple repositories and the path within the repository.
For example, the path //jackrabbit/cq5/dev/author/var/audit/com.day.cq.wcm.core.page/content/home/6efc5a6b-f361-4d6a-9f5d-43a7ffc862ab of an attached node in a Lapis workflow could be translated into /var/audit/com.day.cq.wcm.core.page/content/home/6efc5a6b-f361-4d6a-9f5d-43a7ffc862ab so that a call can be made using the standard JCR API from the workflow. Once I have this developed, I'll post again so that this is documented.
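To make the idea concrete, here is a minimal sketch of what such a modulator/demodulator could look like (my guess at the mechanics, not the actual LapisPath implementation):

    /* Sketch of a path modulator/demodulator; a guess at the mechanics,
       not the actual LapisPath implementation. */
    public class LapisPath {

        private final String basePath;     /* e.g. //jackrabbit/cq5/dev/author */
        private final String attachedPath; /* e.g. //jackrabbit/cq5/dev/author/some/path */

        public LapisPath(String basePath, String attachedPath) {
            this.basePath = basePath;
            this.attachedPath = attachedPath;
        }

        /* Strip the mount point: the path within the repository, e.g. /some/path. */
        public String modulatePath() {
            return attachedPath.substring(basePath.length());
        }

        /* The reverse: prefix a repository path with the mount point to get
           the Lapis "mounted" path back. */
        public String demodulatePath(String repositoryPath) {
            return basePath + repositoryPath;
        }
    }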

Why is that a good thing?

Well, when I get this implementation completed, it will start to open the workflow engine to a lot of "enterprise" level systems (and the companies that use them!). Anything that uses Apache Jackrabbit, JBoss ModeShape, Adobe CQ or IBM WCM, to name but a few, could now gain a workflow engine with very little effort. I believe this would definitely help put the Lapis Engine on the map.

In the latest round of development of the Lapis Server, we’ve added the following functionality:

Hot Folders

You can now add a watcher to monitor file system folders. When files are created, modified or deleted, workflows can be instantiated automatically.
Files that trigger a workflow can be required to match a regular expression, so that, for example, the creation of a new video file instantiates a different workflow than the modification of a pdf document.
The workflow can be instantiated with a given set of properties. An additional property, “lapis.workflow.watch.file”, is set to the name of the file which triggered the workflow instantiation.
Below is an illustrative watch.xml configuration file (the folder path, workflow file names, patterns and parameters are made up for the example; an element-by-element description follows):
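    <watch runAs="admin">
        <folder path="/data/incoming" recurse="true" />
        <graph path="video-ingest.xml">
            <event kind="ENTRY_CREATE" />
            <regex match=".*\.mp4" />
            <parameter id="source" value="hotfolder" />
        </graph>
        <graph path="pdf-review.xml">
            <event kind="ENTRY_MODIFY" />
            <regex match=".*\.pdf" />
        </graph>
    </watch>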

The server property “lapis.workflow.watch.config” can be set to point to an XML document configuring the folders to watch as well as the workflows to instantiate.
The root element is the “watch” element. It has one attribute, named “runAs”, which defines which user to impersonate when instantiating workflows from hot folders.
The watch element has any number of folder elements. Each folder element has two attributes: “path”, which defines the path of the folder to watch, and “recurse”, which contains a boolean (true or false) indicating whether sub-folders must also be watched, recursively.
The watch element also has any number of graph elements. The graph element has one attribute, “path”, which defines the path of the workflow file to instantiate relative to the graph store (defined by the server property “lapis.workflow.graphs”).
The graph element has any number of event elements. The event element has one attribute, “kind”, which defines the type of event that may trigger the instantiation of the graph. The kind attribute accepts the values “ENTRY_CREATE”, “ENTRY_DELETE” and “ENTRY_MODIFY”, corresponding to file creation, deletion and modification respectively.
The graph element has any number of regex elements. The regex element has one attribute, “match”, which holds a regular expression compared to the path of the file being created, modified or deleted. If both the event kind and the regular expression match, the graph is instantiated.
The graph element has any number of parameter elements. The parameter element has two attributes: “id”, which defines the id of the parameter, and “value”, which defines its value. The parameters are passed to the graph during instantiation and set as properties (duplicates are removed).
A specific property, “lapis.workflow.watch.file”, is also added to the list of properties and is set to the path of the file which was created, modified or deleted.

Group tasks

Now that we’ve added the capability for users to belong to groups, the next logical step was to add a group task.
Just like a node can be assigned to a user, a node can also be assigned to a group or a number of groups. A group task starts its life without ownership: a user must acquire ownership of the task before they can complete it, and only members of the groups listed in the node's groups property can do so.
The group node type requires the following node property to be set:
  • groups: a comma-separated list of groups whose members can acquire the ownership of the task and complete it.

Graph execution and node token filters

Graph execution and node token filters can now be created to accept or reject graph executions and node tokens respectively. These are used to filter the lists of graph executions and node tokens we now display in the web application.
Graph executions can now be filtered by id, name, owner or description.
Node tokens can be filtered by id, name, owner, type (email, group, sub, etc.), description and group, and you can expand this list by creating your own filters; I am sure more will get created as time goes by.
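As an illustration, a custom filter boils down to something like the sketch below (the NodeTokenFilter interface name, its accept method and the getDescription() accessor are assumptions on my part, not the documented Lapis API):

    /* Sketch of a custom node token filter that keeps tokens whose
       description contains a keyword. The interface name, accept()
       signature and getDescription() accessor are assumed. */
    public class DescriptionNodeTokenFilter implements NodeTokenFilter {

        private final String keyword;

        public DescriptionNodeTokenFilter(String keyword) {
            this.keyword = keyword;
        }

        @Override
        public boolean accept(NodeToken nodeToken) {
            String description = nodeToken.getDescription();
            return description != null && description.contains(keyword);
        }
    }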
Because of this, the web application menus have been changed slightly to separate the listing of graph executions from that of node tokens.
/wfe/graphexecutions lists the graph executions unfiltered, whilst /wfe/user/graphexecutions automatically starts with an “owner” filter set to the current user. The same logic applies to /wfe/nodetokens and /wfe/user/nodetokens.

Email tasks

The email task now accepts users, groups and email addresses in the to, cc and bcc fields:

  • Groups are expanded into a list of users,
  • Users are expanded into a list of email addresses and
  • Email addresses are added to the corresponding fields
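A sketch of what this expansion amounts to is shown below (the Directory abstraction and its lookup methods are my assumptions about the Lapis API, not its actual interface):

    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Set;

    /* Assumed directory abstraction for the example, not the actual Lapis API. */
    interface Directory {
        boolean isGroup(String name);
        boolean isUser(String name);
        List<String> getGroupMembers(String group);
        String getEmailAddress(String user);
    }

    /* Illustrative sketch of recipient expansion for one of the address fields. */
    static Set<String> resolveRecipients(List<String> entries, Directory directory) {
        Set<String> addresses = new LinkedHashSet<>();
        for (String entry : entries) {
            if (directory.isGroup(entry)) {
                // Groups expand into their member users' addresses...
                for (String member : directory.getGroupMembers(entry)) {
                    addresses.add(directory.getEmailAddress(member));
                }
            } else if (directory.isUser(entry)) {
                // ...users expand into their email address...
                addresses.add(directory.getEmailAddress(entry));
            } else {
                // ...and plain email addresses pass straight through.
                addresses.add(entry);
            }
        }
        return addresses;
    }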

Command tasks

The command task is now a bit more stable and also accepts a working directory parameter. I have fixed an issue with the stdout and stderr output streams.
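I'm not reproducing the task code here, but in plain Java those two concerns map onto ProcessBuilder along these lines (a sketch, not the actual implementation):

    import java.io.File;
    import java.io.IOException;
    import java.util.List;

    /* Sketch: launch a command in a given working directory. Merging stderr
       into stdout is one common way to stop a full stderr buffer from
       blocking the child process. Not the actual Lapis implementation. */
    static Process launch(List<String> command, String workingDirectory) throws IOException {
        ProcessBuilder builder = new ProcessBuilder(command);
        builder.directory(new File(workingDirectory));
        builder.redirectErrorStream(true);
        return builder.start();
    }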

Chrome extension and Ubuntu application

I was playing a bit with this more than anything and thought it would be easier to start browsing if there was a Chrome extension opening the web site for me, so I built one for my development environment and another one for my production environment (I have started using the engine for my own workflows now – I figured if I want a truly fit-for-purpose workflow engine, I may as well use it myself).

To make starting the engine easier, I created an Ubuntu desktop application launcher which starts the engine and also starts Tomcat, where the web app resides.
The launcher file lives in ~/.local/share/applications and looks something like the sketch below (the Exec script path is illustrative, not my actual one):
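    [Desktop Entry]
    Type=Application
    Name=Lapis Server
    Comment=Start the Lapis engine and the Tomcat instance hosting the web app
    # Illustrative path: a script that starts the Lapis server and then Tomcat.
    Exec=/home/laurent/lapis/bin/start-all.sh
    Terminal=true
    Categories=Development;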

What’s next?

As always, I have a backlog to choose what I build next. I’m not sure what that will be, but among the list are an LDAP authentication module, an EZPack installer and improvements to working with “attached” files.