Christopher Meyer

Jan 05, 2018


A Content Server patch is an effective way to fix a bug without having to install a new version of a module. The idea is simple: create a text file with the deltas, copy it to the opentext/patch/ directory, and restart Content Server. Content Server loads the patch at startup and applies the changes to the code base.

Removing a patch is even easier: just delete the patch from the file system and restart Content Server.

The simplicity of installing and removing a patch is why many customers prefer receiving a patch instead of a new version of a module. The overhead is less and rolling back is painless.

But how is the lifecycle of a patch maintained once it’s installed? In other words, how does an administrator know when a patch should be removed? It’s not obvious, and over the years I’ve seen numerous installations littered with old patches. This can cause all sorts of problems when a patch has become obsolete.

In most cases the header of a patch will contain documentation stating its purpose and to what module and version it applies. An administrator can open the patch in a text editor, read the documentation, and make an educated guess as to whether the patch can be removed. However, doing this for potentially hundreds of patch files is a manual, tedious, and error-prone task.

OpenText has addressed this problem with the Cluster Agent, which is now part of a standard Content Server installation. This works well, but doesn’t address third-party modules that are not distributed by OpenText.

After writing numerous patches over the years I needed to find a way to expedite the creation process, automate the lifecycle, and make the entire process less error-prone. The solution is now part of RHCore. Let’s dive in.

Creating a Patch

The traditional steps to creating a patch are as follows:

  • Create an OSpace matching the name of the patch (typically of the form patNNNNNNNNNN.oll).
  • Orphan target objects from your module into the patch OSpace.
  • Override target features and scripts in the orphan.
  • Add a Comments feature to the Root object and add documentation. Each line must be prefixed with a hash (#). The documentation should describe what the patch does and to which module versions it applies. This gets inserted into the header of the patch file.
  • Generate the patch by executing $PatchUtils.Dump('patNNNNNNNNNN').

The resulting patch has the filename patNNNNNNNNNN.txt, and is ready to be deployed (after testing, of course).

RHCore simplifies the last two steps by:

  • providing a generic $RHCore.OScriptUtils.PatchDump() function, which automatically calls $PatchUtils.Dump() on any open OSpace with a name starting with “pat” (this is much easier than having to remember and write out the syntax each time); and
  • automatically populating the Comments feature (correctly formatted with each line prefixed with a hash) with details of the patch.

The last point is a major time saver since the generated documentation describes which module the patch applies to, in which module version the patch was merged (i.e., which module version makes the patch obsolete), who wrote the patch, and the date the patch was created. A developer can add additional comments to the .Documentation feature, which is automatically included in the comment.
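With RHCore installed, generating the patch reduces to a single call from the builder workspace (a sketch based on the description above; exact behaviour may vary by RHCore version):

// Dumps every open OSpace whose name starts with "pat",
// calling $PatchUtils.Dump() on each and inserting the
// generated documentation into the header of the patch file.
$RHCore.OScriptUtils.PatchDump()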

The following is a sample header from a patch generated with RHCore:

# Patch PAT2016081101 created at Fri Oct 07 06:40:12 2016
# Modules:      rhcore
# Author:       Christopher Meyer
# Date:         2016-10-07
# Merged:       rhcore build 279
# Fixes an issue with `$RHCore.DBUtils.FilterToSQL()` when `daterange` has only one value defined.
# <patchinfo>{"author":"Christopher Meyer","build":279,"date":"2016-10-07T06:40:12","module":"rhcore","patch":"pat2016081101"}</patchinfo>

The header tells us the patch can be removed once RHCore build 279 or later is installed. But how can we determine this without having to manually open the patch in a text editor?

The added value is in the last line of the header. Details of the patch are injected as a parsable string in the <patchinfo>...</patchinfo> tags. This can be read by Content Server and leads to the next topic.

Removing Obsolete Patches

RHCore adds a new “Patch Info” page to the admin.index pages. Executing it does the following:

  • iterates the patch files located in opentext/patch/;
  • attempts to extract the information from the <patchinfo>...</patchinfo> tags; and
  • if the information can be extracted, compares it to what’s installed and displays a warning when the patch can be removed.
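The extraction step can be sketched as follows (a minimal sketch; Str.Locate() is assumed to return the position of a substring, and the JSON payload would still need to be handed to a parser):

// Sketch: return the header line containing the <patchinfo> payload
function Dynamic GetPatchInfoLine(String patchPath)

    File f = File.Open(patchPath, File.ReadMode)

    if IsNotError(f)
        Dynamic line = File.Read(f)

        // Header lines are prefixed with a hash; stop at the first non-comment line
        while IsNotError(line) && Str.SubString(line, 1, 1) == '#'
            if Str.Locate(line, '<patchinfo>')
                File.Close(f)
                return line
            end

            line = File.Read(f)
        end

        File.Close(f)
    end

    return Undefined
end

From here the JSON between the tags can be compared against the installed module versions.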

A sample screenshot of the page is as follows:

With this information the Administrator can take the necessary steps to remove any obsolete patches.

Wrapping up

I’ve been using this solution for a few years and it has made patch creation and management a less time consuming and error-prone task. I can create a patch and know it will be appropriately removed in the future once it has become obsolete.

Need help developing for Content Server or interested in using RHCore? Contact me at

Jan 30, 2017


Caching is an effective way to boost the performance of an OpenText Content Server module. Caching works by persisting the return value of an operation (such as an expensive function or SQL call), and reusing the value later without having to execute the operation again.

There are a few ways to implement caching in Content Server, but this post will focus on Memcached.

Using Memcached in OpenText Content Server

Memcached is an open source caching system that was added to Content Server in v10.0. The Memcached website sums up what it does:

Free & open source, high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.

Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.

Sounds great! Content Server provides an OScript API to read and write data to Memcached. Once a value is written to the cache it becomes available to future requests and is accessible from all threads in a cluster.

It’s important to remember that Memcached isn’t a persistent data store; its purpose is to temporarily store transient values to boost performance. A general rule is never to assume Memcached contains a cached value. Memcached purges cached values using a least recently used policy when its memory threshold is reached. For this reason a developer should always first check if a value exists in the cache before trying to use it.

The API for communicating with Memcached is $LLIAPI.MemcachedUtil, which has the following functions (other functions are available, but these are the important ones):

  • SetValue() – write a value to the cache;
  • GetValue() – get a value from the cache; and
  • DeleteKey() – remove a value from the cache.

Each function returns an Assoc with the status of the call (using the standard ok & errMsg keys; see Part XIV on error handling for more information).

Let’s briefly discuss each.


The SetValue() function writes data to Memcached and has the following interface:

function Assoc SetValue(Object prgCtx, String namespace, Dynamic key, Dynamic value, Integer timeout=0)

A few things to note:

  • Care must be taken to choose a namespace/key pair that uniquely maps to the cached value. This is essential to prevent conflicts with other modules that use the cache. Behind the scenes the API concatenates the namespace and key together, which means the pairs hello/world & hellow/orld are effectively the same. It’s a small bug, but I have yet to see it cause problems.
  • The value being cached may only consist of types that are serialisable to a string (e.g., String, Integer, List, Record, RecArray, Assoc, etc.). Non-serialisable types (e.g., Object, Frame, DAPINODE, etc.) cannot be used with the cache.
  • The Undefined type cannot be cached. Caching Undefined may seem like a strange thing to do, but it’s useful when Undefined is a legitimate return value of an operation (a workaround exists in RHCore).
  • An optional timeout parameter sets how long data should live in the cache before expiring. It’s not required, but is useful when no other method to invalidate a cached value exists. More on this later.


The GetValue() function returns a cached value and has the following interface:

function Assoc GetValue(Object prgCtx, String namespace, Dynamic key)

The return Assoc contains a boolean found key, which indicates whether the namespace and key exist in the cache. If found, the value key will contain the cached value.


The DeleteKey() function removes a value from the cache and can be used to expire a value that is no longer valid. It has the following interface:

function Assoc DeleteKey(Object prgCtx, String namespace, Dynamic key)
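Put together, a basic round trip looks something like this (the namespace and key are illustrative):

// Write a value to the cache with a five minute timeout
Assoc setResults = $LLIAPI.MemcachedUtil.SetValue(prgCtx, "mymodule", "greeting", "Hello, world!", 300)

// Read it back; always check the 'found' key before using the value
Assoc getResults = $LLIAPI.MemcachedUtil.GetValue(prgCtx, "mymodule", "greeting")

if getResults.ok && getResults.found
    Echo("Cached value: ", getResults.value)
end

// Remove the value once it's no longer valid
Assoc delResults = $LLIAPI.MemcachedUtil.DeleteKey(prgCtx, "mymodule", "greeting")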

What can be cached?

Any return value from an operation or function in Content Server can be cached as long as it meets the following criteria:

  • the operation doesn’t mutate the state of the system (i.e., it’s read-only);
  • the value being cached isn’t too large (Memcached has a default limit of 1 MB per cached item, which can be configured);
  • the value being cached consists of serialisable data types; and
  • there is a policy to invalidate cached values when they are no longer valid.

Cache invalidation is probably the most difficult part of caching and warrants its own discussion.

Cache Invalidation

There are only two hard things in Computer Science: cache invalidation and naming things. – Phil Karlton

Cache invalidation has to do with managing the conditions to keep a cached value sufficiently fresh. But doing this isn’t always obvious: How do you know if the return value of an operation has changed without running the operation again? Wouldn’t running the operation again defeat the purpose of caching?

As far as I know there are three strategies for cache invalidation (if you know of others please tell me), and which approach is applicable or best is directly related to the makeup of the operation being cached. They are:

  1. a key can be constructed that uniquely maps to the value (also known as key-based expiration);
  2. events to invalidate the cache are known, and callbacks can be implemented to delete the value when these events occur; or
  3. it is satisfactory to expire the cache after a timeout, and stale data during this time isn’t a concern.

Let’s discuss each.

Key-based expiration

In some cases a key can be constructed that uniquely maps to the return value of the operation being cached. A simple example is a pure function, which is a function that:

  • always has the same return value for the same inputs; and
  • doesn’t mutate the state of the system.

For example, consider a simple sum() function (ignoring that you’d never need to cache a function like this):

function Integer sum(Integer a, Integer b)
    return a + b
end

This is a pure function since the same inputs for a and b will always return the same value. Knowing this we can construct a unique namespace and key from the parameters and lazy load it as follows:

function Integer sumCache(Integer a, Integer b)

    String namespace = "sumCache" // something unique for this function
    List key = {a,b} // a key based on the input parameters

    Integer sumValue

    Assoc results = $LLIAPI.MemcachedUtil.GetValue(.fPrgCtx, namespace, key)

    // check if a cached value is found
    if results.found
        // we have a cached value
        sumValue = results.Value
    else
        // no cached value, so compute it
        sumValue = .sum(a, b) // call the original function

        // cache it for the next time sumCache is called with a & b
        $LLIAPI.MemcachedUtil.SetValue(.fPrgCtx, namespace, key, sumValue)
    end

    return sumValue
end


Pure functions don’t require cache invalidation handling since the same inputs will always have the same output (e.g., a + b doesn’t change for the same values of a and b).

But in reality, most operations are not pure functions and require more care. For example, consider a function to return a category value from a node:

function Assoc GetCategoryValue(Object prgCtx, \
                                DAPINODE node, \
                                String categoryName, \
                                String attributeName)

The implementation details are not important, but assume it’s a costly operation. At first glance it might seem like something we can cache using the same pattern as before with the following namespace and key:

String namespace = "GetCategoryValue"
List key = {node.pID, categoryName, attributeName}

Of course, this will not work since the function will have a different return value once the attribute value has changed. But despite this difference, can we still construct a key that uniquely maps to the return value of the function?

In many Content Server instances the modified date of a node is updated whenever an attribute value is changed (this is configurable on the admin.index page, but ignore it for the moment). We can use this information to construct a key that also contains the modified date:

List key = {node.pID, node.pModifyDate, categoryName, attributeName}

This ensures a unique key whenever a category value is changed (since the modified date gets updated), and forces the next GetCategoryValue() call to fetch and cache the updated value.
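Under that assumption, a cached wrapper could be sketched as follows (GetCategoryValueCached() is a hypothetical name; error handling is trimmed for brevity):

function Assoc GetCategoryValueCached(Object prgCtx, \
                                      DAPINODE node, \
                                      String categoryName, \
                                      String attributeName)

    String namespace = "GetCategoryValue"

    // The modified date makes the key unique for each change
    List key = {node.pID, node.pModifyDate, categoryName, attributeName}

    Assoc cached = $LLIAPI.MemcachedUtil.GetValue(prgCtx, namespace, key)

    if cached.found
        // Cache hit: reuse the previously computed value
        return cached.value
    end

    // Cache miss: run the expensive operation and cache the result
    Assoc results = .GetCategoryValue(prgCtx, node, categoryName, attributeName)

    if results.ok
        $LLIAPI.MemcachedUtil.SetValue(prgCtx, namespace, key, results)
    end

    return results
end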

This approach works well when it’s possible, but unfortunately that’s not always the case. In many situations there is no equivalent of a “modified date” or anything else to indicate a value has changed. For these cases we need another strategy.

Manually invalidate a cached value

Cached values can be manually invalidated with the $LLIAPI.MemcachedUtil.DeleteKey() function. This can be called from a callback (or elsewhere) to respond to events that might alter the cached value.

Consider our previous example and say Content Server is configured to not update the modified date on a category update. This would no longer make the key unique each time a category value was changed. So let’s fix this by first simplifying the key without the modified date (since it’s no longer relevant):

List key = {node.pID, categoryName, attributeName}

We can then implement the $LLIAPI.NodeCallbacks.CBCategoriesUpdatePre() callback (which is executed when a category is updated) to manually delete the old value from the cache when a category update occurs. This will force the next GetCategoryValue() call to fetch and cache the updated value.
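Inside the callback the relevant part is a single call (a sketch; categoryName and attributeName would be derived from the callback’s arguments):

// Invalidate the cached value before the category update is applied,
// forcing the next GetCategoryValue() call to recompute it
$LLIAPI.MemcachedUtil.DeleteKey(prgCtx, "GetCategoryValue", {node.pID, categoryName, attributeName})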

Expiring a cached value after a timeout

There are sometimes too many dependencies or unknown factors to efficiently invalidate a cached value. As a last resort you can use the timeout parameter in the SetValue() call to expire the value after a given number of seconds. The compromise is accepting how often the expensive operation should be allowed to execute versus how long a stale value is acceptable. It’s not the best choice, but is sometimes the easiest.
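For example, to let a value live for at most ten minutes:

// The final argument is the timeout in seconds; once it elapses the value
// expires and the next GetValue() call reports found as false
$LLIAPI.MemcachedUtil.SetValue(prgCtx, namespace, key, value, 600)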

Caching strategies in RHCore

RHCore provides some useful extensions to assist in caching. On RHNode (see Part I for information on RHNode) is a cacheKey() method, which returns a unique string that can be used to construct a cache key for the node. It has the same return value until a delta operation is performed on the node (e.g., Records Management data is updated, a category value is changed, the node is renamed, a user is added to the ACLs, etc.). After any such event it returns a new unique value until the next delta operation.

We can use this method with our previous example as follows (which doesn’t require any callback to be implemented):

String namespace = "GetCategoryValue"
List key = {node.cacheKey(), categoryName, attributeName}

A similar method exists in RHModel (see Part II for an introduction to RHModel), and has a few additional options for more advanced and complex caching scenarios.

HTML fragment caching

One of my favourite uses of caching is HTML fragment caching. HTML fragment caching permits blocks of HTML rendering code to be cached such that subsequent calls can be quickly rendered again. I don’t believe Weblingo or WebReports support this, but it’s very easy to do with RHTemplate (see Part III for information on template rendering).

For example, say we had a table to display some Records Management information:

        {% for node in nodes %}
            <tr>
                <td>{{ node.name|escape }}</td>
                <td>{{ node.recman.classifyInfo.Status|escape }}</td>
                <td>{{ node.recman.PhysicalObjectInfo.UniqueID|escape }}</td>
            </tr>
        {% endfor %}

This is a heavy operation since fetching the status and uniqueid requires multiple database hits and is executed on each iteration. Imagine if this were rendering thousands of rows.

A quick and easy way to improve performance is to add caching. This can be done directly in the template by surrounding the block with the {% cache %} template tag. The tag accepts zero or more keys, which should be chosen in a way that uniquely maps to the content of the block being rendered. For example:

        {% for node in nodes %}
            {% cache node.cacheKey %}
            <tr>
                <td>{{ node.name|escape }}</td>
                <td>{{ node.recman.classifyInfo.Status|escape }}</td>
                <td>{{ node.recman.PhysicalObjectInfo.UniqueID|escape }}</td>
            </tr>
            {% endcache %}
        {% endfor %}

This simple addition makes a huge improvement to the rendering time, and is a technique I regularly use in my development.

Wrapping up

Caching can give a massive boost to the performance of a Content Server module. I’m finding new ways of using it and am delighted with how much of an improvement it makes. There is almost no reason not to use it.

Need help developing for Content Server or interested in using RHCore? Contact me at

May 25, 2016


OpenText Content Server OScript does not support exception handling. I used to believe this was a limitation, but after learning more I no longer believe this to be the case.

Exception handling is one way to handle errors and has its critics. One criticism is that exceptions that are not immediately caught can create unpredictable paths in your code. This can lead to problems such as putting your program into an inconsistent state or data corruption.

Another common approach to error handling is error checking. Error checking maintains the flow of code by having functions return a special value when an error occurs. This is the approach taken by OScript.

Content Server OScript provides two approaches for error checking. One approach is baked into the language and the other is a convention. Let’s look at each.

The Error Package

The Error class (or “package”) makes it possible to return an error from a function regardless of the function signature. It’s often used in lower level API calls such as CAPI.IniGet() or DAPI.GetNodeByID(). For example, the File.Open() function (used to open a file on the filesystem) has the following return value (from the documentation):

A File representing the open file if successful; Error otherwise.

The “Error” referred to here is of the Error class, and can be checked for by using the IsError() or IsNotError() function. For example:

File f = File.Open("c:/temp/myfile.txt", File.ReadMode)

if IsNotError(f)
    // we have successfully opened the file
else
    // oops, an error occurred
end
An Error object can also be defined and returned from a custom OScript function (using the Error.Define() function), but this is rarely used.

The OScript Return Value Error Checking Convention

Content Server has a convention of wrapping most function return values in an Assoc datatype with the following keys:

  • ok – a boolean indicating if the function call was successful;
  • errMsg – a string containing a verbose error message if the function call was unsuccessful (this often gets echoed back to the user);
  • apiError – an Error object usually from a failed lower level API call (if applicable); and
  • anything else pertinent to the function, if successful.

I tend to call an Assoc with this structure a return value Assoc. If ok is false then I call it an error Assoc.

With this convention you’ll find the following pattern throughout Content Server:

Assoc results = somefunction()

if results.ok
    // Great! The call to somefunction() was successful.  Keep going.
else
    // Oops, something went wrong.
end

It’s the responsibility of the calling function to handle the error. This usually means ceasing operations and returning the error Assoc to the calling function. This then gets passed up the call stack until it’s finally handled.

What I like about this pattern is its simplicity and consistency. By adopting it a developer can return the error from most function calls and know it’ll be understood and handled up the call stack. The pattern also makes it difficult for a developer to overlook or ignore error handling.
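A function written to this convention typically looks like the following sketch (SomeLowerLevelCall() is a placeholder):

function Assoc DoSomething(Object prgCtx)

    Assoc rtn = Assoc.CreateAssoc()

    Assoc results = .SomeLowerLevelCall(prgCtx)

    if !results.ok
        // Cease operations and pass the error Assoc up the call stack
        return results
    end

    // ...continue processing with the successful result...

    rtn.ok = true
    rtn.errMsg = Undefined

    return rtn
end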

A limitation with this type of error handling is that errMsg (which often gets echoed back to the user) only tells you what went wrong, and provides little context as to where the error occurred or why. This is particularly annoying when the error is only reproducible on a production system. To collect more information we need a more aggressive way to catch errors, which I’ll get to later. Let’s first discuss database transactions.

Database Transactions

A Content Server request often performs multiple database queries to update the database. For example, adding a new document adds table records to DTreeCore, DVersData, DAudit, ProviderData, etc. What happens if an error occurs part way through updating the tables? We certainly don’t want the integrity of the system to be compromised by having only some tables updated. For this we have database transactions.

A database transaction allows a developer to wrap a group of database calls into a single transaction. A transaction can then be committed at once (i.e., all database queries made in the transaction are committed) or rolled back in the event of an error. This prevents a group of queries from being partially applied if an error occurs part way through the request.

Database transactions are started and ended with the StartTrans() and EndTrans() functions. A typical usage pattern is as follows:

if prgCtx.fDbConnect.StartTrans()
    results = ... //  do a bunch of db inserts and updates

    prgCtx.fDbConnect.EndTrans(results.ok)
else
    results = ... // error, transaction could not be started
end
The StartTrans() call returns true if the database transaction could be started. From that point on any database query (regardless of where in the call stack it is executed) is part of the transaction. Only after EndTrans() is called are the cumulative queries either committed (by passing in true) or rolled back (by passing in false). In the example I passed in results.ok, which ties in with the previous section on error handling.

There are a few things to consider when using a database transaction:

  • Transactions can be nested, but only the outer most transaction determines if all queries are committed or rolled back (this is why it’s important for errors to be passed back up the call stack).
  • Every successful StartTrans() call must be balanced with a call to EndTrans(). Failure to do so will leave the database transaction open and cause unexpected behaviour in the current and subsequent request. More on this later.
  • Most override points (i.e., where a developer adds code to Content Server such as in a request handler or WebNodeAction) will execute without an open transaction. It’s the responsibility of the developer to start the transaction, do appropriate error handling, and close the transaction as required.

Fatal errors, crashes, stack traces, server did not respond

The return value Assoc pattern works well for errors that have been anticipated and checked for. The error gets caught, is passed up the call stack, and is handled.

Fatal errors occur when something unexpected happens from which the system can’t recover. Examples include:

  • attempting to divide a number by zero;
  • passing an incorrect number of parameters into a function;
  • attempting to assign a value to a variable of a different type (e.g., String text = 5); etc.

Fatal errors will:

  • immediately halt further processing (i.e., crash the thread);
  • generate a trace file on the server containing debug information;
  • display a jarring “Server did not respond” (aka “SDNR”) error to the user (sort of like a 500 error); and
  • fail to close any open database transactions (because the thread is immediately halted and EndTrans() isn’t called).

Fatal errors are always an indication of a bug. Although a “Server did not respond” error is unsettling for the user, it’s useful because it captures debug information in the trace file containing:

  • the full stack trace showing the execution path to the code that failed;
  • the reason the error occurred; and
  • the local variable state at every level in the call stack.

This information is usually enough for a developer to analyse, debug, and fix the issue.

The unsettling part of a fatal error is that it can leave a database transaction open. Database connections (and hence database transactions) are persisted per thread, and are not automatically cleaned up when a crash occurs. This means if a request crashes with an open transaction then the following request on that thread will begin with the transaction still open.

This corrupted state would persist if it weren’t for the ResetTransactionsIfNecessary() function, which gets called at the end of most requests. This function acts as a cleanup by rolling back any database transaction that may have been left open by some buggy code (i.e., each StartTrans() wasn’t balanced with a call to EndTrans()).

Placing the ResetTransactionsIfNecessary() call at the end of the request prevents it from being called when a crash occurs. Only at the end of the following request on that thread does it get called. This could be a completely unrelated request by a different user, who will have the database transaction on their request unexpectedly rolled back. This could lead to odd behaviour in the request and data loss.

It’s unsettling to have the error from one request bleed into the next, but explains why a subsequent request after a crash may sometimes behave strangely. A possible solution for OpenText might be to move or copy the ResetTransactionsIfNecessary() call to the start of each request to guarantee it begins with no leftover open transactions.

Crash early, use assertions

Something I learned from the Pragmatic Programmer is to crash early. The idea is:

A dead program normally does a lot less damage than a crippled one.

This may seem counterintuitive, but the idea is to immediately crash program execution if an error is detected that could potentially cause more damage if the program were allowed to continue. Many programming languages provide an “assert” function for this, which forces a crash when a condition is false. The interface is usually something like this:

assert(condition, errorMessage)

The function does nothing if condition is true, but crashes the program if condition is false. The Pragmatic Programmer states this is useful for situations where you might think to yourself: “…but of course that could never happen.”

When you think something could never happen then why not back it up with an assert()? If for whatever reason it does happen you’ll immediately be made aware of it. An assert() call isn’t a replacement for proper error handling; instead, assertions are there to catch conditions that should never happen.

OScript doesn’t have an assert() function, and so I added one to RHCore. The interface is simply:

function Void assert(Boolean condition, String errMsg)

The function does a little more than just crash the thread. If condition is false it:

  • closes and rolls back any open database transaction;
  • logs errMsg to the debug window;
  • crashes the request; and
  • generates a trace file.

For example, consider a function that accepts an Assoc or Record datatype:

function Void MyCustomFunction(Dynamic assocOrRecord)
    // ...

Since the argument type is Dynamic it’s technically possible to call the function with a value of another type (e.g., MyCustomFunction({5})). But how do we know this won’t compromise the integrity of our data or cause even bigger problems? To play it safe we can add an assertion to the function:

function Void MyCustomFunction(Dynamic assocOrRecord)
    .assert(Type(assocOrRecord) in {Assoc.AssocType, RecArray.RecordType}, 'Not an Assoc or Record.')
    // ...

Assertions are useful during development to test assumptions and find bugs in your code. However, they are also useful in production environments to catch errors that may have slipped through testing or are only reproducible in that environment.

Crashing a thread may seem like an aggressive thing to do. I’ve been told more than once: “A thread should never crash!” Correct, it shouldn’t. But if a fatal error is going to happen I would prefer to do it in a controlled way that provides me with a trace file and doesn’t leave an open database transaction.

Form Validation Errors

An unfortunate limitation in Content Server is that form validation errors are treated like any other error. The validation error occurs, the error is passed up the call stack, the transaction is rolled back, and an error like the following is presented to the user:

The user must use the back button to recover, which is not obvious and generally discouraged. With some luck the form will return to its previous state where the user can fix the error and submit the form again.

Form validation errors should be handled differently than other Content Server errors. Users should be given the chance to correct their input without having to use the browser back button. Some progress has been made in this area (e.g., the login page), but the majority of forms still don’t support friendly validation.

For more information on form validation see my blog post: Part IV: OpenText Content Server Forms.

Wrapping Up

It’s important not to overlook the significance of error handling. Errors will always happen, but with some care they can be controlled to minimise their impact.

Questions or comments? Please leave a comment below.

Need help developing for Content Server or interested in using RHCore? Contact me at

Oct 13, 2015

For the last few years I’ve worked with Cassia Content Management Inc. to develop their Records Disposition Approval Module. The module simplifies the disposition sign-off process, but is also a showcase of a module built entirely with RHCore.

The module is being featured in the OpenText Live Webinar Series on Wednesday, October 14, 2015 at 11:00 EDT (that’s tomorrow!). You can register here if you haven’t already.


Need help developing for Content Server or interested in using RHCore? Contact me at

Jul 17, 2015


In Part I of this blog series I introduced an object-based approach for developing with OpenText Content Server. In this next blog post I extend the discussion to include workflows.

The Content Server Workflow API is complex. There is little abstraction or encapsulation, which means operations often require traversing complex data structures, converting between workflow representations (e.g., workID, subWorkID, WAPIWork, workData, etc.), and knowing which functions are available to operate on them. It’s not obvious how it all works, and developing with it usually requires a considerable amount of reverse engineering.

For example, consider the following Workflow GeneralCallbackScript to manipulate a workflow attribute value:

function Dynamic MyGeneralCallbackScript( \
        Object      prgCtx, \
        WAPIWORK    work, \
        Integer     workID, \
        Integer     subWorkID, \
        Integer     taskID, \
        Integer     returnSubWorkID, \
        Integer     returnTaskID, \
        Dynamic     extraData = Undefined )

    // Get the workData for this workflow
    RecArray workData = $WFMain.WAPIPkg.LoadWorkData(prgCtx, work)

    // Fetch the task record for the task id, which we'll need later
    Record task = prgCtx.WSession().LoadTaskStatus(workID, subWorkID, taskID)[1]

    Record workItem
    Boolean found = false

    // Get the package for the workflow attributes
    Object obj = $WFMain.WFPackageSubsystem.GetItemByName('WFAttributes')

    if IsDefined(obj)
        // Iterate the workData and find the workItem for the workflow attributes
        for workItem in workData
            if {workItem.TYPE, workItem.SUBTYPE} == {obj.fType, obj.fSubType}
                found = true
                break
            end
        end
    end

    // If we found the attribute workItem then we can manipulate the attribute value
    if found
        // Here we'd modify the attribute value by traversing the workItem structure
        // (which is highly error prone). Simplified to one line for brevity.
        workItem.USERDATA.Content.RootSet.ValueTemplate.Values[1].(2).Values = {'My New Value'}
    end

    // Save the changes.
    return $WFMain.WAPIPkg.SaveWork(prgCtx, task, workData, work)
end


This pattern comes up often when operating on a workflow. It requires knowing the various functions, understanding the package and workData data structures, and knowing how these structures relate in order to extract the data for a given package. The package data has its own structure, which you must also understand before you can do anything with it.

I believe much of this can be abstracted and made easier for the developer. Let’s see how RHCore does this.

Introducing RHWorkStatus & RHWorkStatusTask

RHCore introduces the RHWorkStatus and RHWorkStatusTask classes to programmatically manipulate a workflow. The classes encapsulate the data structures behind a workflow, while abstracting the programming interface into something that is easier to use.

An instance of RHWorkStatus can be created by calling:

Frame wf = $RHCore.RHWorkStatus.NewFromWorkID(prgCtx, workID, subWorkID)

The RHWorkStatus instance abstracts away many of the patterns you typically see when dealing with workflows, namely:

  • fetching and manipulating package data (comments, attributes, attachments, audit, etc.);
  • fetching metadata (including calculated values) about the workflow;
  • allocating and deallocating the WAPIWork instance;
  • fetching task (or “step”) data;
  • applying an action (accept, complete, or reassignment of a task, set the workflow status, etc.);
  • and more…

The classes also encapsulate the data structure behind the workflow, which means you don’t need to traverse anything to get to the data of interest. This provides fail-safes and lowers the risk of an error.

An RHWorkStatus instance provides a number of methods to operate on and fetch information about the workflow. A few examples:

// Get the map node as an RHNode
Frame mapNode = wf.mapNode()

// Get the status colour of the workflow
String statusColour = wf.statusColour()

// Get the due date of the workflow
Date duedate = wf.due()

// Get the URL to open this workflow
String url = wf.url()

// Get the attribute data of the workflow as an instance of RHAttrData
Frame attrdata = wf.attrdata()

// Get the attachments folder as an RHNode
Frame attachments = wf.attachmentsfolder()

// Get the workflow manager as an RHUser
Frame manager = wf.manager()

// Change the status of the workflow

// get all current tasks
Frame currentTasks = wf.tasks().filter("isCurrent", "==", true)

// get all performer tasks that are currently active
Frame currentPerformerTasks = wf.tasks() \
        .filter("isCurrent", "==", true) \
        .filter("isPerformerTask", "==", true)

// Get the task with ID 1 (as an RHWorkStatusTask instance)
Frame task = wf.tasks(1)

// Save any changes back to the workflow (with the current task as a parameter)
Assoc results =

An RHWorkStatusTask instance represents a workflow task (or “step”) and also provides a number of useful methods:

// Is this task active and current?
Boolean taskIsCurrent = task.isCurrent()

// Is the task completed?
Boolean taskIsDone = task.isDone()

// Get the instructions for the task
String instructions = task.instructions()

// Get the performer of the task as an RHUser
Frame performer = task.performer()

// Get the display name of the performer
String performerDisplayName = task.valueForKeyPath('performer.displayName')

// Does this step represent a sub-workflow?
Boolean isSubWorkflow = task.isSubMapTask()

// When was this task assigned?
Date dateAssigned = task.dateAssigned()

// Get the status in a human readable form (e.g., "Current", "Not Used", "Completed", "Waiting")
String status = task.statusVerbose()

// Is the task an unassigned performer task?  If not, assign it to the current user
if task.isPerformerTask() and NOT task.isTaskAssigned()
    results = task.acceptTask()

// Complete a workflow task
Assoc results = task.complete()

These are just a few examples, but they provide an idea of how it works.


Let’s revisit the example from the introduction and rewrite it using the RHWorkStatus class:

function Dynamic MyGeneralCallbackScript( \
        Object      prgCtx, \
        WAPIWORK    work, \
        Integer     workID, \
        Integer     subWorkID, \
        Integer     taskID, \
        Integer     returnSubWorkID, \
        Integer     returnTaskID, \
        Dynamic     extraData = Undefined )

    // Create an instance of RHWorkStatus
    Frame wf = $RHCore.RHWorkStatus.NewFromWorkID(prgCtx, workID, subWorkID)

    // Get the current task as RHWorkStatusTask (required later)
    Frame task = wf.tasks(taskID)

    // Get the attribute frame, which is an instance of RHAttrData
    Frame attrdata = wf.attrdata()

    if IsDefined(attrdata)
        // Use the setter on RHAttrData to modify the attribute value
        attrdata.SetValue(wf.mapobjid(), 2, "My New Value")
    end

    // Save the changes.


I find this faster to develop, easier to read, and less error-prone than the standard approach.

A note about RHAttrData

While standard categories and attributes differ from workflow attributes, there are enough similarities for the APIs to overlap. The wf.attrdata() call returns an instance of RHAttrData, which provides a rich API for setting and getting attribute values. See Part VI – Developing with Categories & Attributes in OpenText Content Server for more information.

Example with attachments

Let’s look at another example using workflow attachments. Say your workflow modifies a set of documents, and you wish to copy the latest document version from the workflow back to the original document. This could be done with the following event script:

function Dynamic MyGeneralCallbackScript( \
        Object      prgCtx, \
        WAPIWORK    work, \
        Integer     workID, \
        Integer     subWorkID, \
        Integer     taskID, \
        Integer     returnSubWorkID, \
        Integer     returnTaskID, \
        Dynamic     extraData = Undefined )

    // Create an instance of RHWorkStatus
    Frame wf = $RHCore.RHWorkStatus.NewFromWorkID(prgCtx, workID, subWorkID)

    // Get the attachments folder as an RHNode
    Frame attachmentsFolder = wf.attachmentsFolder()

    // Get the contents of the attachments folder and filter by document
    Frame children = attachmentsFolder.children().filter('subtype', '==', $TypeDocument)

    // Some variables for the iteration
    Frame child, originalNode

    // Iterate and copy the last version from the WF copy back to the original document
    while IsDefined(

        // Get the original node of the workflow copy
        originalNode = child.wforiginalnode()

        if IsDefined(originalnode)
            // Add the latest document version to the original

    // Skip error handling for brevity
    return true


This example is as much a demonstration of RHNode as RHWorkStatus, but shows how the two can work together to perform basic document management tasks in the context of a workflow. Just consider how many lines of code the equivalent would have taken without these classes.

Instantiating a Workflow

Workflows can also be instantiated with RHCore. For this we use the RHWorkflowMap class, which provides methods to set up the workflow before instantiating it. The constructor is as follows:

// First, get the RHNode representation of the map node
Frame node = $RHCore.RHNode.New(prgCtx, <DataID, nickname, nodeRec, DAPINode, or RHNode>)

// Second, get an instance of RHWorkflowMap by calling wfmap()
Frame wfmap = node.wfmap()

A number of methods are available on the RHWorkflowMap instance to setup the workflow:

//  Set the title of the workflow
wfmap.setName("My Demo Workflow")

// Add an attachment to the workflow (can be called multiple times)
wfmap.addAttachment(<DataID, nickname, DAPINODE, nodeRec, or RHNode>)

// Get an RHAttrData frame to manipulate the attributes
Frame attrdata = wfmap.attrdata()
attrdata.SetValue(wfmap.mapid(), 2, "My initial attribute value")

Once the instance is set up we can initiate the workflow with the start() method:

Assoc results = wfmap.start()
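
Putting these pieces together, a complete instantiation might look like the following sketch (the DataIDs and the attribute ID are hypothetical placeholders):

```oscript
// Get the workflow map node and wrap it in an RHWorkflowMap
// (2000 is a hypothetical DataID of a workflow map)
Frame node = $RHCore.RHNode.New(prgCtx, 2000)
Frame wfmap = node.wfmap()

// Configure the workflow before starting it
wfmap.setName("My Demo Workflow")
wfmap.addAttachment(3000)   // hypothetical DataID of a document to attach

// Set an initial attribute value (attribute ID 2 is hypothetical)
Frame attrdata = wfmap.attrdata()
attrdata.SetValue(wfmap.mapid(), 2, "My initial attribute value")

// Initiate the workflow and check the result
Assoc results = wfmap.start()
```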

Querying Workflows

RHCore provides an abstraction to query workflows. This is the programmatic equivalent of the func=work.workflows request handler, which is found under the Personal menu in Content Server.

A workflow query in RHCore can be constructed as follows:

Frame wfs = $RHCore.RHWorkflowQuery.New(prgCtx)

The constructor defaults to all non-archived workflows, sorted by name, that the user has access to. Some defaults can be changed:

// Return only managed workflows
wfs.setKind('managed')

// Return only archived workflows
wfs.setStatus('archived')
The RHWorkflowQuery instance is a subclass of RHTableQuery (see Part XVII – Table Queries in OpenText Content Server), which allows the results to be paged, sorted, and filtered. However, the page size cannot be changed as it is controlled by the user’s personal workflow settings (more on this later).

// Sort by start date in descending order
// (valid values include "title", "due", "relationship", "start", "status")
wfs.sort('-start')

// Set the page number to 5
wfs.setPageNumber(5)

Once we have setup the query we call the iterator() method to return the result set as an Iterator object. As with many RHCore objects, these calls can be chained into a single expression:

Frame iter = $RHCore.RHWorkflowQuery.New(prgCtx) \
    .setKind('managed') \
    .setStatus('archived') \
    .sort('-start') \
    .setPageNumber(5) \
    .iterator()

The result can then be iterated to perform batch operations or to display in a web page:

Frame wf

while IsDefined(
    // do something with wf
    // wf is an instance of RHWorkStatus


Although RHWorkflowQuery is an RHTableQuery subclass, behind the scenes the query is still executed using the same workflow query functions used by Content Server (i.e., the same code as the func=work.workflows request handler). This comes with a few limitations.

The primary limitation is performance. Content Server workflow queries do not scale. The operation works by fetching all workflows that match the query, iterating over each record to perform some calculations, sorting in memory, and only then slicing the result set for paging. I’ve seen the workflow page take minutes to load on systems with thousands of active workflows. Sorting or paging means having to wait minutes again for the page to reload. It’s unusable.

The second limitation is that paging is forced and the page size cannot be changed. This might cause problems depending on what the query is being used for.

RHCore addresses these limitations by providing a second class for querying workflows called RHWorkflowQuery2. The interface is identical to RHWorkflowQuery, but all paging, sorting, and filtering is applied at the database level by leveraging the features of RHTableQuery. This also has some limitations:

  • sorting and filtering by due date is not possible since it’s a calculated value; and
  • the indentation rules for sub-workflows (i.e., how sub-workflows are indented on the func=work.workflows page) are not calculated.

Despite these minor limitations, the boost in performance and paging control makes it useful in some situations.
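
As a sketch, the earlier chained query rewritten against RHWorkflowQuery2 looks identical; the difference is that the work happens in the database. The setPageSize() call below is an assumption based on RHWorkflowQuery2 inheriting from RHTableQuery, where page size is adjustable:

```oscript
// Same interface as RHWorkflowQuery, but paging, sorting, and
// filtering are pushed down to the database
Frame iter = $RHCore.RHWorkflowQuery2.New(prgCtx) \
        .setStatus('archived') \
        .sort('-start') \
        .setPageSize(50) \
        .setPageNumber(1) \
        .iterator()
```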


RHWorkflowQuery and RHWorkflowQuery2 queries can be filtered using the filter() method. Filters work at the database level for optimal performance. For example, to return all workflows initiated by cmeyer in the last 14 days:

Frame user = $RHCore.RHUser.New(prgCtx, 'cmeyer')

Frame wfs = $RHCore.RHWorkflowQuery.New(prgCtx) \
        .filter('SUBWORK_DATEINITIATED', '=>', $RHCore.DateUtils.AddDays(Date.Now(), -14)) \
        .filter('WORK_OWNERID', '==', user)

A filterAttribute() method is also available to filter on workflow attribute values. It accepts a map ID, attribute ID, operator, and value, and transparently extends the underlying query to join with the WFAttrData table. For example, say you have a workflow map with a “Price” attribute and wish to find all workflow instances where “Price” is at least 500:

Integer MapID = ...
Integer AttrID = ... // ID of "Price" attribute
Integer minimumPrice = 500

Frame wfs = $RHCore.RHWorkflowQuery2.New(prgCtx) \
    .filterAttribute(MapID, AttrID, '=>', minimumPrice)

For more complex queries you can use the extra() method, which is detailed in the Part XVII – Table Queries in OpenText Content Server.

Wrapping Up

Workflows are large and complex, and this blog post only scratches the surface of what RHCore can do. The extension is still a work in progress, and I eventually plan to add support for other workflow step types such as Forms and eSign. With this foundation in place, I don’t anticipate that being difficult.

What type of difficulties have you had developing around workflows? I welcome your questions and comments below.

Need help developing for Content Server or interested in using RHCore? Contact me at

May 22, 2015

A common pattern in OpenText Content Server development is the execution of a database query. This is a low-level operation, which is useful when an API call isn’t available to get the information you require.

Database queries are a quick way to get at the raw data, but they’re tricky when the structure of the query isn’t known until runtime (say, because it depends on values submitted in a request). A common solution is to concatenate the query together dynamically based on conditions. For example:

String selectStmt = "select * from WebNodes"
String whereClause = ""
String orderbyClause = ""
List args = {}

if RecArray.IsColumn(request, "filterValue")
    if Length(whereClause)
        whereClause += " AND "
    end
    whereClause += "myColumn=:A0"
    args = {@args, request.filterValue}
end

if Length(whereClause)
    selectStmt += " where " + whereClause
end

if Length(orderbyClause)
    selectStmt += " order by " + orderbyClause
end

Record recs = CAPI.Exec(selectStmt, args)

It’s a tedious process to build a query this way, and care must be taken to:

  • ensure the query is always valid (including syntax differences between MSSQL and Oracle);
  • prevent SQL injection; and
  • restrict the number of returned items (e.g., a million row result set will cause all sorts of problems).

Unfortunately, the approach provides almost no reusability and needs to be implemented again each time something similar is required.

It was while working with the Django Web framework that I was exposed to a novel way to construct and run a database query without having to write any SQL. It made me wonder if something similar could be done with OScript, and with this idea I developed the RHTableQuery class. It’s now a standard part of RHCore.

Introducing RHTableQuery

RHTableQuery is an abstraction to filter, sort, and page the contents of any table or view in Content Server without having to write any SQL. Let’s jump in with an example to query the contents of the WebNodes view. We start by constructing an instance of RHTableQuery and passing “WebNodes” into the constructor:

Frame nodes = $RHCore.RHTableQuery.New(prgCtx, "WebNodes")

At this point no database query has been executed and the nodes object is just a representation of all records in the WebNodes view.

To fetch the records we call the items() method, which constructs the query, executes it, and returns the results.

RecArray recs = nodes.items()

The underlying query is generated by the sql() method, which can be called to inspect what’s being executed:

echo( nodes.sql() )
> select WebNodes.* from WebNodes

Let’s look at filtering.


Filtering

Filters are applied with the filter() method, which reduces the result set based on a condition. The syntax is as follows (using nodes from our previous example):

nodes.filter(<columnName>, <operator>, <value>)

The parameters are:

  • columnName – the column name on which to apply the filter;
  • operator – the operator to apply (e.g., ==, !=, >, startsWith, contains, in, etc.); and
  • value – the value to query.

The filter() method changes the state of the object to include the condition in the query. For example, the following could be used to limit the nodes query to documents:

nodes.filter('subtype', '==', $TypeDocument)

A subsequent call to items() would now only include documents.

Alternatively, we could use the in operator to limit the results to documents and folders:

nodes.filter('subtype', 'in', {$TypeFolder, $TypeDocument})

The filter() method can be called multiple times to add additional conditions. For example, a second condition could be added to limit the folders and documents to names beginning with “HR”:

nodes \
    .filter('subtype', 'in', {$TypeFolder, $TypeDocument}) \
    .filter('name', 'startswith', 'HR')

Or, a third condition could be added to limit the results to items modified within the last 14 days:

nodes \
    .filter('subtype', 'in', {$TypeFolder, $TypeDocument}) \
    .filter('name', 'startswith', 'HR') \
    .filter('modifydate', '=>', $RHCore.DateUtils.AddDays(Date.Now(), -14))

All filter operations are applied at the database level (i.e., in the “where” clause) for optimal performance.
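
Because every condition is folded into the generated statement, the sql() method shown earlier can be used at any point to inspect what will be sent to the database (the exact SQL emitted depends on the filters applied and on the database flavour):

```oscript
// Build up a filtered query, then inspect the generated SQL
Frame nodes = $RHCore.RHTableQuery.New(prgCtx, "WebNodes")

nodes \
    .filter('subtype', '==', $TypeDocument) \
    .filter('name', 'startswith', 'HR')

// Echo the statement without executing it
echo( nodes.sql() )
```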


Sorting

A sort criterion can be applied with the sort() method, which works much like applying a filter. For example, the following sorts the nodes query by the name field:

nodes.sort('name')

The field name can also be prefixed with a negative sign to sort in reverse order:

nodes.sort('-name')

The method also permits sorting over multiple fields by passing in a list:

nodes.sort({'name', '-modifydate'})

As with filtering, sort is applied at the database level for optimal performance.


Paging

Since RHTableQuery is a subclass of Paginator (see Part V for information on the Paginator class), the results can be paged with the setPageSize() and setPageNumber() methods. For example, to set the page size to 25 and fetch the contents of the 5th page:

nodes.setPageSize(25).setPageNumber(5)

The items() method would now return the 25 items on the 5th page after all filter and sort conditions have been applied.

Paging works by iterating a database cursor, an approach that has proven to work well over large result sets.

Putting it together

You might have noticed the filter, sort, and paging methods each return the query instance. This allows us to chain the methods together and consolidate them into a single expression. For example, we could combine our previous examples as follows:

RecArray recs = $RHCore.RHTableQuery.New(prgCtx, "WebNodes") \
        .filter('subtype', 'in', {$TypeFolder, $TypeDocument}) \
        .filter('name', 'startswith', 'HR') \
        .filter('modifydate', '=>', $RHCore.DateUtils.AddDays(Date.Now(), -14)) \
        .sort('-name') \
        .setPageSize(25) \
        .setPageNumber(5) \
        .items()

Not bad for a few lines of code, and it’s also highly readable.

The same approach can be applied to any other table or view in the Content Server system. For example, we could use RHTableQuery to find all users with last names beginning with “S”:

RecArray users = $RHCore.RHTableQuery.New(prgCtx, "KUAF") \
        .filter('deleted', '==', 0) \
        .filter('type', '==', UAPI.USER) \
        .filter('lastname', 'startswith', 'S') \
        .items()

The compactness, readability, and flexibility of the class make it an ideal way to run queries that depend on dynamic conditions. It’s far easier than manually concatenating a query string.

Additional Methods

The RHTableQuery class has a number of other useful methods for fetching related information.


count()

The count() method returns the total number of records in the result set. It takes the filter conditions into account, evaluates the result at the database level using an aggregate query, and caches the result so repeated calls don’t hit the database. For example, say we want to know how many documents are owned by a particular user. This is simply:

Integer UserID = ...

Integer documentCount = $RHCore.RHTableQuery.New(prgCtx, "WebNodes") \
        .filter('subtype', '==', $TypeDocument) \
        .filter('userid', '==', UserID) \
        .count()


values_list()

The values_list() method returns a list with the values of a column. The method accepts a column name as an argument, and an optional boolean argument to remove duplicates from the list. For example, the following could be used to get all unique user last names whose first name begins with “A”:

List lastNames = $RHCore.RHTableQuery.New(prgCtx, "KUAF") \
        .filter('deleted', '==', 0) \
        .filter('type', '==', UAPI.USER) \
        .filter('firstname', 'startswith', 'A') \
        .values_list('lastname', true)

This method is useful when populating a <select> list for filtering.

min() & max()

The min() and max() methods return the minimum and maximum value of a column. They accept the column name as an argument, take the filter conditions into account, and evaluate the result at the database level using an aggregate query. For example, getting the last modified date of all documents is simply:

Date lastModifiedDate = $RHCore.RHTableQuery.New(prgCtx, "WebNodes") \
        .filter('subtype', '==', $TypeDocument) \
        .max('modifydate')


iterator()

The iterator() method returns the result set wrapped in an Iterator. I won’t get into the advantages of using an Iterator in this post, but you can read about them in Part II of this blog series.


extra()

The filter() method suffices for most query operations, but sometimes a more complex query condition is required. For this there is the extra() method, which allows SQL to be inserted directly into the “where” clause of the underlying query.

For example, the following two statements are functionally equivalent ways to retrieve all nodes containing “RHCore” in the name.

Using the filter() method:

nodes.filter('name', 'contains', 'RHCore')

Or, using the extra() method (with MSSQL):

nodes.extra("LOWER(Name) LIKE LOWER('%'+:A0+'%')", {'RHCore'})

The extra() method is rarely used, but is useful when a complex query statement is required that cannot be expressed with the filter() method.


join()

The join() method is used to create an inner join to another table.

Special Cases: RHNodeQuery

The RHTableQuery class provides a simple and generic way to query a table in Content Server. It works well, but there are some special cases to consider. In particular, the “WebNodes” example misses two important and common requirements:

  • filtering by permissions; and
  • filtering by category attributes.

For this there is the RHNodeQuery class, a direct subclass of RHTableQuery. It behaves the same as its parent (i.e., all the features mentioned earlier still apply), but with some minor differences. First, the constructor doesn’t accept a table or view argument since it’s hardcoded to use the WebNodes view:

Frame nodes = $RHCore.RHNodeQuery.New(prgCtx)

The constructor applies a permission filter by default (See & SeeContents) based on the user defined by the prgCtx context. This is usually what you want, but it can be disabled by passing false as a second argument to the constructor:

Frame nodesNoPermCheck = $RHCore.RHNodeQuery.New(prgCtx, false)

The RHNodeQuery class also provides a filterAttribute() method to filter on a category attribute. The syntax is as follows:

nodes.filterAttribute(<CatID>, <AttrID>, <operator>, <value>)

As with the filter() method, the filterAttribute() method extends the underlying query (including all necessary joins with the LLAttrData table) to permit filtering on the attribute value.

For example, say you want to find all documents with a boolean category attribute set to true (or 1 in the database):

Integer CatID = ...
Integer AttrID = ...

Frame nodes = $RHCore.RHNodeQuery.New(prgCtx) \
        .filter('subtype', '==', $TypeDocument) \
        .filterAttribute(CatID, AttrID, '==', 1)

Again, not bad for a few lines of code.

Wrapping Up

The RHTableQuery class has simplified much of my development. It provides a clean API for querying a table or view, and replaces the need to write complex code to generate a SQL statement. The class also works seamlessly with RHTemplate (see Part III), which allows the results to be rendered as HTML (including pagination) with minimal effort. It’s reusability at its best.

I welcome your questions or comments in the section below. If you like these blog posts you can also subscribe to updates in the “Subscribe to my posts” field at the top of the page.

Need help developing for Content Server or interested in using RHCore? Contact me at

Mar 26, 2015


A common requirement in web applications is to provide a one-time notification message to the user. This is often used to give the user confirmation that something happened. For example, a notification message might confirm a form was successfully submitted or that an error occurred.

Content Server doesn’t always confirm when something happens. For example, the “Submit” and “Cancel” buttons on the category page have very different results, but the redirect (based on nexturl) is identical. This means clicking on either of these buttons appears the same to the user, but in one case the category is saved and in the other it’s not. Wouldn’t it be nice if there was some type of confirmation of the action?

This lack of feedback happens throughout Content Server with most form submissions having no confirmation that the action was successful. I suppose the user might assert success if no error message is displayed, but this is not very user friendly.

One possible solution is to redirect the user to a confirmation page. This already happens after a move or copy, where the status of each copied or moved item is displayed. It adds some extra development effort (i.e., to create the confirmation page and to persist the state of the action), and also requires an extra click to complete the action. What if there was a simpler and less intrusive way of doing this?

Introducing the RHCore Messaging Framework

The messaging framework in RHCore provides a simple and non-intrusive way to provide a one-time notification message to the user. It’s motivated by the Django messages framework, which has been an inspiration for many things in RHCore. Let’s see how this works.

The messaging framework has two components:

  1. a set of functions for creating a notification message; and
  2. a template that can be dropped in anywhere to display them.

Let’s look at both:

Creating a Message

RHCore provides four functions to create a notification message. These can be called from anywhere and queue the message for the current user. The functions provide a convenient way to classify the message as being of type success, info, warning, or error:

$RHCore.Messaging.Success(prgCtx, "This is an example of a success message.")
$RHCore.Messaging.Info(prgCtx, "This is an example of an info message.")
$RHCore.Messaging.Warning(prgCtx, "This is an example of a warning message.")
$RHCore.Messaging.Error(prgCtx, "This is an example of an error message.")

Once a message is queued it can be displayed.

Displaying the Messages

Queued messages can be displayed in a WebLingo or RHTemplate with the following statements:

From WebLingo:

`%L$RHTemplate.Utils.render(request, 'rhcore/messaging.html')`

From RHTemplate:

{% include 'rhcore/messaging.html' %}

The template defined by rhcore/messaging.html loads the queued messages, purges them from the queue (since they should only be displayed once), and displays them according to their type (success, info, warning, or error). The messages are styled by rhcore.css, which can be customised if you prefer something else.

Let’s look at an example to bring it together.


It’s important to understand that where you create the message is independent of where you display it. This means you can generate the message in one request and display it in a subsequent request on a completely different page (as long as the page includes the rhcore/messaging.html template). Let’s look at the example.

RHCore has a simple configuration page with a single option.

Three important things happen when you click the “Save” button:

  1. the setting is saved to the opentext.ini file;
  2. a success or failure notification is generated using $RHCore.Messaging.Success() or $RHCore.Messaging.Error(); and
  3. a redirect is performed to restart the Content Server instance.

Once the server is restarted the user clicks “Continue” to return to the settings page. However, a queued message (from #2 above) is waiting to be displayed. Remember, the creation and display of the message can be in different requests.

The redirect could also have gone somewhere else, but the user would still see the message as long as the resulting page is setup to display them (using the rhcore/messaging.html template).

It’s that easy to create one-time notification messages in an RHCore application.
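
As a sketch, the save handler described above might queue its message like this (the saved flag and the surrounding structure are hypothetical; only the Messaging calls are the RHCore API shown earlier):

```oscript
// Hypothetical outcome of writing the setting to opentext.ini
Boolean saved = true

if saved
    $RHCore.Messaging.Success(prgCtx, "The setting was saved successfully.")
else
    $RHCore.Messaging.Error(prgCtx, "The setting could not be saved.")
end

// The message is displayed on whichever page the user lands on next,
// provided that page includes the rhcore/messaging.html template.
```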

Wrapping Up

I’m using messaging almost everywhere in my development. It’s highly convenient to generate a message with a single line of code, knowing it will be displayed regardless of where the user is redirected [1]. It gives the user confidence in their actions and requires very little effort from the developer to implement.

Questions or comments? Please leave a comment below!

  1. The exception being a page not enabled for messaging. A possible enhancement to RHCore is to include the rhcore/messaging.html template in the standard webnode/html/llmastheadcomponent.html WebLingo file. This would allow messages to be displayed in almost any Content Server page. 

Need help developing for Content Server or interested in using RHCore? Contact me at

Mar 18, 2015


Over the last year I’ve been blogging about various topics in OpenText Content Server development. In most of these blog posts I referred to a module called RHCore, which is the topic of this next post. So what exactly is RHCore?

RHCore is a Content Server application framework. It’s written in OScript, installs like any other module, and provides a new and fresh way to develop applications with OpenText Content Server.

The module works by simplifying many of the complex development patterns commonly used in Content Server development. It lets the developer write beautiful and elegant OScript, which results in a module that can be developed in less time with fewer bugs.

What motivated RHCore?

I’ve been developing with Content Server for over 14 years and often found myself writing the same code over and over again. After working with other frameworks, I realised that much of this repetition could be abstracted away into something reusable and easier to work with. It was during a break in my contract work that I prototyped RHNode (an abstraction of DTree nodes) and was delighted with the results. The rest of the framework snowballed from there.

I’m using the module as a basis for a few applications that are now running on production systems. The results have spoken for themselves: The applications were built in less time, are more feature rich, and are easier to maintain than anything I have ever worked on before.

What does RHCore do?

The last 14 blog posts have highlighted some of the key features of RHCore. In the remainder of this blog post I will summarise some of these features, throw in a bit more, and try to show how it fits together.

Let’s start with the fundamental concept.

An object-oriented programming approach

Much of RHCore is based on an object-oriented programming (OOP) approach. In simple terms, an “object” is a data structure that contains data and functions (or “methods”) to operate on itself. The beauty of this design is that most of what you need is self contained and doesn’t require referencing external functions in different places. RHCore introduces a number of classes (the “blueprint” for an object instance) to simplify common patterns and interaction with many Content Server data structures.

One such class is RHNode, which is used to interact with DTree nodes (it is analogous to the newly introduced CSNode, but more generic). An RHNode object contains the node data (e.g., DataID, User, Extendeddata, etc.), but also methods to fetch related information (DAPINODE, LLNode, user, parent, categories, etc.) and perform common tasks (e.g., copy, move, rename, etc.). Let’s look at an example.

Say you have the DataID of a node and want to get the URL to open it. With the standard Content Server API this looks something like the following (skipping error handling for brevity):

DAPINODE node = DAPI.GetNodeByID(prgCtx.DapiSess(), DAPI.BY_DATAID, DataID)
Record nodeRec = $WebNode.WebNodeUtils.NodeToWebNode(node)
Object webNode = $WebNode.WebNodes.GetItem(nodeRec.SUBTYPE)
Object cmd = webNode.cmd('open')
String url = cmd.url(request, nodeRec)

That’s a lot of code and requires calling functions in various locations to get to the required result. With RHCore we can do the same by creating an RHNode instance and calling the url() method:

Frame node = $RHCore.RHNode.New(prgCtx, DataID)
String url = node.url(request, 'open')

All the internal workings to generate the URL (getting the DAPINODE, WebNode representation, WebNode object, WebNodeCMD, etc.) are abstracted away. It’s a simple example, but it demonstrates how an OOP approach can simplify some otherwise complex code. This pattern of simplification is used throughout RHCore.

Have a look at Part I of this blog series for more information on the OOP approach.

Table Schemas and Data Persistence

Creating a database table in Content Server is a tedious and manual process. The tables must be manually defined (accommodating variations among database flavours) and query, create, read, update, & delete operations require direct database calls. Writing inline SQL isn’t a good idea (it leads to repetition of code and little reusability), and building an API for these operations is another manual and repetitive process.

RHCore simplifies this with its RHModel framework. The framework allows a model to be defined, which automatically generates the table schema and an API for interacting with the data (e.g., setters, getters, and methods for querying independent of the database type). This allows a schema and API to be generated in mere minutes in what used to take hours by conventional means. Best of all, no SQL needs to be written and the generated API provides a natural place for business logic that is tied to the model (e.g., send an e-mail whenever the “status” field is changed to “Completed”).
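To give a flavour of the idea, a model definition might look something like the following sketch. The method names here are illustrative only and are not the actual RHModel API:

```oscript
// Illustrative sketch only: addField() and its arguments are invented
// for this example and are not the actual RHModel API.
Frame model = $RHCore.RHModel.New(prgCtx, 'MyModule_Task')

model.addField('title', 'String')    // becomes a text column
model.addField('status', 'String')
model.addField('duedate', 'Date')

// From a definition like this, the framework generates the table
// schema plus setters, getters, and database-independent query methods.
```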

Have a look at Part II of this blog series for more information on RHModel.

Templates with RHTemplate

RHCore provides a new template language called RHTemplate that can be used as an alternative to WebLingo (plus many other uses). The template syntax is simple and provides tools to quickly build pages with the most common layouts. In particular, it contains functionality to:

  • page, filter, and sort your data (read more in Part V);
  • render your page in a style consistent with Content Server (without having to write extra CSS; read more in Part XI);
  • render interactive widgets (e.g., user picker, date picker, sort headers, paging controls, tabs, etc.) that “just work” without having to import or write any extra JavaScript or CSS (read about this in Part XII); and
  • traverse and resolve complex relationships (e.g., to get the display name of the owner on the parent node: {{ node.parent.user.displayname }}).
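As a taste of the syntax, a fragment that renders a list of nodes might look roughly like this. Only the dotted {{ ... }} lookup appears in the examples above; the loop tag is an assumption for illustration:

```
{% for node in nodes %}
  {{ node.name }} (owned by {{ node.parent.user.displayname }})
{% endfor %}
```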

RHCore provides extensions to the standard request handlers (including WebNodeAction) to support RHTemplate, which makes the creation of requests with RHTemplate a simple task. As a bonus the extensions also abstract away guiComponents and pageManager so you don’t have to deal with those either.

Have a look at Part III of this blog series for more information on RHTemplate.

Form Lifecycle with RHForm

Writing a form in Content Server to present, capture, and persist a value is a tedious and repetitive process. It involves:

  • fetching the initial values and passing them to the WebLingo;
  • hardcoding the form in the WebLingo (including the initial values, layout, UI widgets, etc.);
  • writing a request handler to consume, validate, and persist the submitted values;
  • spending lots of time debugging and maintaining it; and
  • presenting a jarring error page if anything goes wrong (as in the following screenshot).

Little of this is reusable and doing something small like adding a new field requires modifying code in multiple locations.

RHCore introduces RHForm, which consolidates the definition, rendering, and validation of the form into a single object. This is quite useful since it makes it possible to programmatically control the behaviour of the form at runtime without having to write any logic in the WebLingo or RHTemplate.
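Conceptually, the definition might read something like the sketch below. The method names are invented for illustration; the actual RHForm API is covered in Part IV:

```oscript
// Illustrative sketch only: addField() and its arguments are invented names.
Frame form = $RHCore.RHForm.New(prgCtx)

form.addField('title', 'String', TRUE)    // name, type, required
form.addField('duedate', 'Date', FALSE)

// One object defines, renders, and validates the form, so adding a
// field means changing only this definition.
```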

RHForm also supports a validation pattern that displays errors inline, which the user can correct and resubmit:

This is far more user friendly than the Content Server error page that requires the use of the back button to recover.

Have a look at Part IV of this blog series for more information on RHForm.

Generic Admin Configuration Pages

Many modules have a configuration page linked from the admin.index page. Configuration pages are often a set of form fields that allow preferences to be configured and persisted in the KINI table or opentext.ini file. Most configuration pages are built by hardcoding everything (although the lesser-known LLConfig OSpace is available to help here) and creating these pages often means a lot of copy and pasting.

RHCore introduces a framework for defining, rendering, and persisting a configuration page without writing any HTML and with only a minimal amount of OScript. What used to take hours to write, debug, and maintain can now be done in minutes. The following is a screenshot of a sample admin page created with RHCore:

Have a look at Part VII of this blog series for more information on Generic Admin Configuration Pages.

Override System

I always attempt to minimise the intrusiveness of my code by limiting the number of overrides of core functionality. However, overrides are sometimes unavoidable, and it’s unfortunate how often developers will override entire scripts or WebLingo files just to make a small change. The problem with this approach is that it’s not forward compatible: if OpenText changes the original script or WebLingo in a patch or subsequent release, then these changes will no longer be reflected in the override.

RHCore provides tools to minimise the impact of overrides by allowing the original script or WebLingo to be called from the override. This doesn’t satisfy every override scenario, but when it does (and it often does) it provides forward compatibility in the event the original script or WebLingo gets changed.

RHCore also provides a way to monkey patch a script at run time. Sometimes a script is available that does exactly what you want except for a hardcoded value or assumption in the middle of it. A monkey patch allows you to take that script at run time, modify it with a search and replace, and recompile it for execution. This isn’t always failsafe, but it has worked well so far and has saved me from having to inherit hundreds of lines of code from somebody else.

Other Utilities

RHCore does a lot more, and the following is a short overview to highlight some of the other features:

  • Categories and Attributes are notoriously difficult to develop with. RHCore provides an extended API to make this easier (i.e., no more traversing the fData structure; read about it in Part VI).
  • An add-on HTTPClient library is available (based on Apache HTTPClient), which provides a robust interface for making HTTP requests from Content Server.
  • Sending HTML e-mails to users or groups has been simplified to one line of code (read about it in Part XIII).
  • Enumerated types are available for some syntactic sugar (read about it in Part VIII).
  • Documents can be Base64 encoded in one line of code.
  • A messaging framework is available for one-time notifications (this will get its own blog post someday).
  • A framework is available to create custom function menus.
  • Custom Views can be augmented to contain dynamic content and logic (via RHTemplate).
  • Documentation for your project can be generated directly from the OScript source code (in the same spirit as javadoc).
  • A markdown processor is available to convert markdown into HTML.
  • Help pages for your module can be generated from markdown (no more copy and pasting the help page template).
  • Scripts and workflow event callbacks can be edited directly in the Web UI (read about it in Part XIV).
  • and much more…

Wrapping Up

RHCore simplifies many areas in Content Server development. What I also find nice is that modules built on RHCore are easier to migrate to newer versions of Content Server since most of the upgrade logic is rooted in RHCore.

I say it again: RHCore simplifies development and makes it possible to build richer applications with fewer bugs in less time.

If you’re interested in an evaluation, demonstration, or more information about RHCore then please get in touch! I also welcome comments below.


Feb 19, 2015


I was recently confronted with some seemingly simple requirements in OpenText Content Server development. In one case a process was required to search and replace all instances of a category attribute value (a common requirement when updating lookup and popup type attributes). It sounded simple with a LiveReport and WebReport, but grew in complexity once we had to consider multivalued attributes within multivalued sets (try building a generic solution if you’re not convinced of its complexity). I suggested a simple one-off OScript module (based on the AttrData extensions in RHCore), but it was not an option since the target system was highly controlled and installing a module was nearly impossible.

Around the same time a user posted a question to the OpenText Knowledge Center forums asking if it were possible to schedule the generation of a System Report. This can’t be done with standard tools, and so I suggested the development of a small custom module. I received no reply, which I assume meant it was also not an option.

These types of requirements are sometimes just a few lines of code. That’s the easy part, but the logistics of getting a module developed and installed can often be much more difficult or impossible.

Developers will often use tools such as LiveReports or WebReports to get around this. These tools allow complex reports to be written without having to build and install an additional module. This is great and serves a purpose, but is limiting since neither LiveReports nor WebReports is a scripting language (although WebReports has a few “action” tags).

Writing OScript is sometimes the best or only solution to a problem, but there has been no way to write it without building and installing a module. But what if you could write OScript from the web interface, in the same way LiveReports lets you write SQL? This would open many possibilities for creating small scripts and applications without having to deal with the logistics of installing a module. I pondered the idea for a few years and hesitated due to concerns with system integrity and security. However, I came to realise it wasn’t a problem as long as one adopted a good programming style, had a solid foundation to build on, and trusted Content Server permissions. I prototyped the idea, built a few small applications, and was surprised by how simple and powerful it was. Why hadn’t I done this before? I formalised the prototype and turned it into a subtype called ScriptNode, which is now a part of RHCore.

What is ScriptNode?

ScriptNode is a Content Server node type that allows a privileged user to write and execute OScript directly from the web interface. ScriptNode isn’t a front-end for module development; rather, it allows a user to write scripts in the browser that can be executed on demand or on a schedule. Think of it like a LiveReport, but instead of writing SQL you write OScript. ScriptNode is backed by the RHCore API, which makes it a simple and powerful solution for creating small scripts and applications without having to develop, install, or maintain a module. The only requirement is RHCore.

A ScriptNode can be added to a folder or other container through the “Add Item” menu (as long as the user has the permissions and privileges) and has a simple editor. For example, a ScriptNode to display “Hello World!” back to the user could look as follows:

The text area is for the script and is analogous to the script window in Builder. Executing the example outputs the following:

A ScriptNode script is wrapped in a temporary object at runtime, which provides a number of convenient methods via the this context. For example, this.echo() (or just .echo() as in the example) is analogous to the standard echo() function, but is used to write the output to the browser (instead of the debug window or logs). Other methods include shortcuts for fetching an RHNode, RHUser, program context, request, etc.
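For example, the body of the “Hello World!” ScriptNode shown above amounts to a single line:

```oscript
// The entire script: .echo() writes its output back to the browser.
.echo("Hello World!")
```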

Input parameters are defined above the text area, similar to how parameters are defined in a LiveReport. The “Prompt” field sets the display label and the “Field Type” defines the input widget and data type.

Running a ScriptNode with input parameters first prompts the user for the values before calling the script. Arguments can be accessed in the script by matching the function declaration to the parameters or by using the .args() function. The .args() function returns the arguments as a list, but also accepts an integer to return the argument at a specific index (e.g., .args(3) for the third argument).
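In other words, a script with input parameters might read something like this:

```oscript
// Reading the input parameters from within a ScriptNode.
List all = .args()           // every argument, as a list
Dynamic third = .args(3)     // just the third argument

.echo(Str.String(third))     // echo it back to the browser
```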

Let’s look at another example.

Example: E-Mail a Group

Say you require a tool to send an e-mail to the members of an arbitrary group. This can be built with ScriptNode with a few lines of code.

The first step is to define the input fields:

  • a “To” field of type “KUAFGroup” (an autosuggest field for groups) to input the recipient group;
  • a “Subject” field of type “String” to input the e-mail subject; and
  • a “Body” field of type “Text” to input the body of the e-mail.

The next step is to write the OScript to send the e-mail. This could be done with the $Kernel.SMTPClient library, but for the example we’ll use the EMailer class I introduced in Part XIII: Sending E-Mail from OpenText Content Server. Putting it together looks as follows:
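The screenshot of the finished script isn’t reproduced here, but using the EMailer methods described in that post the body would read roughly as follows. The .prgCtx() helper name and the .args() indices are assumptions (the parameters are assumed to be defined in the order above), and the final delivery call is left off since its name isn’t shown in this series:

```oscript
// Rough sketch of the ScriptNode body; the chain would end with the
// EMailer delivery call, whose name isn't shown in this series.
$RHCore.EMailer.New(.prgCtx()) \
    .addRecipients(.args(1)) \    // the "To" group
    .setSubject(.args(2)) \       // the "Subject" string
    .setBody(.args(3))            // the "Body" text
```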

Running the ScriptNode prompts the user with the following form, which can be filled in and submitted to send the e-mail to the group members:

That’s it! Let’s look at a few more examples.

Other Examples

I’ve used ScriptNode to create a number of other small tools and applications. These are conceptually similar to the previous example and include:

  • reporting on the effective permissions of a user on a node;
  • monitoring a file system directory for documents and adding them to Content Server when detected;
  • transferring ownership of all nodes belonging to a user to another user (useful when a user account is to be deleted);
  • searching and replacing category attribute values;
  • generating a System Report and adding it as a document to Content Server (could also be e-mailed);
  • analysing a workflow map to determine where certain actions are taking place;
  • automatic sending of an e-mail to members of a testing team whenever a module under development is updated on the server;
  • monitoring the logs/ directory for trace files and e-mailing them to the administrator when detected; and
  • reporting on installed patches by fetching the list of files in the patch/ directory, extracting the header from each file, and outputting the results.

All of these examples are just a few lines of code and can be added to a system without having to build or install a module each time. Again, the only requirement is RHCore.

Other Features

Run on a Schedule

A ScriptNode can be scheduled to run once at a future date or regularly on a schedule. This is configured on the “Scheduler” tab and looks as follows:

Scheduled ScriptNodes are handled by the standard Content Server agents and run in the context of the Admin user.

Workflow Generic Callback

ScriptNode also integrates with the Workflow Generic Callback subsystem, which means you can use ScriptNode to write Workflow Event Scripts directly from the web interface.

WebReports and Other Node Types

A ScriptNode can also call a WebReport and vice-versa (via a drop-in sub-tag). This creates some interesting possibilities to add custom logic to a WebReport, or use a WebReport to template the output of a ScriptNode.

Of course, a ScriptNode can also call a LiveReport, a Simplate, or any other ScriptNode.

Event Callbacks

This is still a work in progress, but I plan to allow ScriptNodes to respond to node and user events (e.g., run a ScriptNode when a document is added to a specific folder). This could be used to launch a workflow, notify a user with an e-mail, or anything else.

What about security?

You might be thinking this is a security risk. But if you think about it, it’s no different from the security risk associated with a LiveReport. The permission to create, edit, and execute a LiveReport depends on standard permissions and its Object Privilege (via the Administer Object and Usage Privileges admin settings). These must be enforced to prevent an unscrupulous user from writing and executing a LiveReport to escalate their permissions or do something crazy like drop tables. The same argument can be made for ScriptNode.

ScriptNode also takes it a step further by restricting who can edit scripts. Object Privileges normally just restrict the creation of nodes, but with ScriptNode they also restrict editing. I think this is logical given the sensitive nature of the object type.

Finally, ScriptNodes should be developed with the same diligence used to write a module. They should be developed, tested, and vetted in a development environment before moving to production. The reason for this should be obvious: An obscure typo or bug could lead to something destructive (e.g., losing data, creating an infinite loop, etc.). Also, a development environment allows a developer to debug the ScriptNode using Builder or CSIDE.

Wrapping Up

I’m using ScriptNode in a few projects and enjoy the ease at which I can develop a small application or one-off solution with little constraint. This has been especially useful in environments where installing a module is highly controlled and difficult.

Please leave a comment if you have any questions or thoughts of where you might find this useful. Finally, if you’re interested in seeing a demo then please get in touch!

Addendum (18 March 2015)

Some readers have commented on security concerns with ScriptNode. While I don’t necessarily agree with these concerns (see the comments below), it should be noted that ScriptNode is an optional part of RHCore and is disabled by default. It must be manually enabled after RHCore is installed before it can be used.


Oct 29, 2014


If you have ever programmatically sent an e-mail from Content Server you have certainly encountered the $Kernel.SMTPClient library. The library provides the groundwork for sending an e-mail, but has a rather unfriendly API. In a nutshell, to send an e-mail you must:

  • read the SMTP configuration from the notification settings (or get other settings from elsewhere);
  • instantiate an instance of $Kernel.SMTPClient with these settings;
  • render the body of the message (which isn’t trivial with HTML e-mails);
  • call .Login();
  • call .SendMessage() with the details of the e-mail (which can be seven or more parameters);
  • call .Logout(); and
  • implement extensions to support:
    • carbon-copy (CC) and blind carbon-copy (BCC);
    • binary attachments;
    • multiple attachments; and
    • Content Server documents as attachments.

Given these limitations and the lack of abstraction, I decided to take a fresh look at sending e-mail from Content Server. The result is the EMailer class, which is now a part of RHCore.

Introducing EMailer

The RHCore EMailer class is a direct subclass of $Kernel.SMTPClient. The class abstracts away the complexity of sending an e-mail and provides a simplified programming interface to the developer. Let’s look at an example of how one can send a short plain text e-mail with EMailer:

Frame emailer = $RHCore.EMailer.New( prgCtx )

emailer.addRecipients("Admin")
emailer.setSubject("Welcome to RHCore")
emailer.setBody("We hope you're enjoying this blog post.")


Let’s review line-by-line.

The first line constructs an instance of EMailer and sets a few defaults based on the settings in the “Configure Notification” admin pages. Namely:

  • the SMTP settings (i.e., server, port, & host name);
  • the default from address; and
  • the default reply-to address.

Any of these settings can be changed by calling setServer, setPort, setHost, setFromAddress, or setReplyTo on the instance. The body and subject default to blank, and the body mimetype defaults to text/plain.
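For example, to override a few of the defaults (the addresses here are placeholders):

```oscript
Frame emailer = $RHCore.EMailer.New(prgCtx)

// Override the defaults read from the "Configure Notification" pages.
emailer.setFromAddress("noreply@example.com")
emailer.setReplyTo("support@example.com")
```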

The second line adds a recipient to the e-mail. The recipient can be a user id, user name, e-mail address, RHUser, or list. The method can be called multiple times, and provisions are in place to prevent the same user or e-mail address from being added more than once.
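For example, all of the following are valid (the ids, names, and addresses are placeholders):

```oscript
emailer.addRecipients(1000)                  // a user id
emailer.addRecipients("jdoe")                // a user name
emailer.addRecipients("jdoe@example.com")    // an e-mail address
emailer.addRecipients({"Admin", "jdoe"})     // a list
```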

The remaining lines should be self explanatory.

Each of these methods returns the EMailer instance, which allows the methods to be chained. That is, our example could be written as follows:

$RHCore.EMailer.New(prgCtx) \
    .addRecipients("Admin") \
    .setSubject("Welcome to RHCore") \
    .setBody("We hope you're enjoying this blog post.") \

That’s it! The e-mail gets sent and renders as follows:

Of course, nobody wants to look at plain text e-mails. Let’s see what we can do about this.

Rendering HTML E-Mails

HTML e-mails can be rendered by setting the body mimetype to text/html. For example, we could change the previous example to the following:

$RHCore.EMailer.New(prgCtx) \
    .addRecipients("Admin") \
    .setSubject("Welcome to RHCore") \
    .setBodyMimeType('text/html') \
    .setBody("<p>We hope you're enjoying this <strong>blog post</strong>.</p>") \

As you would expect, the mail client renders this with the text “blog post” in bold:

No developer likes to write inline HTML, so the setBody method also accepts markdown. This can be converted to HTML by calling the renderMD method. For example, the following creates the same e-mail as in the previous example:

$RHCore.EMailer.New(prgCtx) \
    .addRecipients("Admin") \
    .setSubject("Welcome to RHCore") \
    .setBodyMimeType('text/html') \  // set the mimetype to text/html 
    .setBody("We hope you're enjoying this **blog post**.") \
    .renderMD() \ // render the markdown as HTML

Things get difficult as the HTML gets more complex and you consider the many variations in e-mail clients. What renders fine in GMail or Thunderbird may look terrible in Outlook or on an iPhone. There are a number of blog posts that describe how to write generic HTML e-mails, but I won’t get into that here.

However, being able to wrap a message in a mail-friendly HTML template is very useful. Specifically, it:

  • provides a consistent look to e-mails (header, footer, colour scheme, etc.);
  • offers some guarantees the e-mail will render nicely on the mail client; and
  • saves the developer a lot of time.

The EMailer class supports this with the setTemplate method, which takes the current body and embeds it within a template. For example:

$RHCore.EMailer.New(prgCtx) \
    .addRecipients("Admin") \
    .setSubject("Welcome to RHCore") \
    .setBodyMimeType('text/html') \
    .setBody("We hope you're enjoying this **blog post**.") \
    .renderMD() \
    .setTemplate() \  // wrap the message in the default template

This renders as follows:

I’ve tested the default template with a few clients and it seems to work well. Of course, if you don’t like the template you can create your own and pass it as a parameter to the setTemplate method.

How about that? In eight lines of code (which is actually just one line) we’ve generated and sent an HTML e-mail from Content Server.

So what else?


Attachments

The EMailer class supports multiple attachments, which can come from the file system or Content Server. These are easy to add:

// add a file from the filesystem

// add the document with DataID 12345

The current user must have See and See Contents permission to attach a document from Content Server. Furthermore, additional parameters are available to control:

  • the display name of the attachment;
  • the mimetype; and
  • the document version (including renditions).

Queueing Mail

E-mails can also be queued and sent later using agents. This serves four purposes:

  • it audits the generation and send status of each e-mail;
  • it avoids tying up the current thread sending potentially thousands of e-mails;
  • it prevents an e-mail from being sent if the transaction is rolled back due to a runtime error; and
  • it allows each recipient to receive a personalized copy of the e-mail.

The queueing of e-mails depends on the RHTaskQueue module, which is a small extension to RHCore and allows a task to be deferred to the agent (this module may become part of RHCore in the future). Usage is identical to sending a regular e-mail with the exception of the last line:

$RHCore.EMailer.New(prgCtx) \
    .addRecipients("Admin") \
    .addRecipients("cmeyer") \
    .setSubject("Welcome to RHCore") \
    .setBody("We hope you're enjoying this blog post.") \
    .queue() // queue the e-mail instead of sending it

This inserts the e-mail into the queue; it is sent out the next time the five-minute agent runs. Failed attempts to send an e-mail (e.g., the SMTP server is offline) are retried five times before giving up. The status of each e-mail is audited and accessible from the admin pages:

The queue method accepts an optional boolean parameter (defaults to false) to generate a single and unique e-mail for each recipient. This has a few advantages:

  • each e-mail can be personalized when wrapped with the setTemplate method (i.e., the recipient is personally addressed in the e-mail body); and
  • the e-mail won’t be overloaded with potentially thousands of e-mail addresses.

The one caveat is that carbon-copy (CC) and blind carbon-copy (BCC) are not supported when sending individual e-mails. It’s a small detail, but makes sense.

Wrapping Up

The EMailer class breathes new life into programmatically sending e-mails from Content Server. I’m using it in a few projects with success, as I can now send rich HTML e-mails with just a few lines of code. I couldn’t imagine still doing it the old way.

Questions, comments, or interested in a private demo? Please contact me by e-mail or leave a comment below.
