OL Learn

Condition on workflow process top level

This is a request from a couple of customers. The idea is to have an optional condition on a process at the top level, based on the value of a global variable. The global variable could be set by another process and in effect turn a process off (still active, just not pulling in files): a “PreCondition” or “PreProcess” step of sorts. This step would be checked just prior to the process being run. If the condition is true, the process runs as normal; if false, the process is skipped until the next cycle.

One of the potential uses is for DEV/DR vs. PROD, where customers want the same configuration in every environment but want to “turn off” a process in DEV/DR that is on in PROD. Another use would be to “turn off” a process in a specific instance for “user workflow” steps/actions for flow control (i.e., user workflow in the sense that User 1 performs a task and it is then routed to User 2 to perform another task, as opposed to the concept of a PlanetPress workflow). There are other situations where this would be helpful.

I see what you’re getting at, but this is already largely doable with the workflow as it stands.

Consider that you’ve got two processes that you want to dynamically enable/disable. Currently they each start with some input, maybe printer inputs or folder captures; it doesn’t really matter which. That initial input would be moved to become the second action in the process. The first would be a new Folder Capture that specifically looks for a ‘trigger’ file. That trigger file in turn would be created by another process that looks something like this:

Each of those conditions checks your global variable. If the condition is true, the trigger file goes off to the trigger folder, which then lets that process run once. This whole process becomes the timing mechanism for your other processes: if it is set to run every 4 seconds, all of the processes it triggers are also potentially running as often as once per 4 seconds.
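The trigger process itself is built in the Workflow configurator, so there is no code to show, but as a rough sketch the logic amounts to something like the following Python (all names here, such as `GLOBALS` and the trigger folders, are made-up illustrations, not Workflow APIs):

```python
import os

# Illustrative stand-ins for Workflow's global variables and trigger folders.
GLOBALS = {"ProcessA_Enabled": "true", "ProcessB_Enabled": "false"}
TRIGGER_DIRS = {
    "ProcessA": r"C:\Triggers\ProcessA",
    "ProcessB": r"C:\Triggers\ProcessB",
}

def run_trigger_cycle(global_vars, trigger_dirs, write_files=True):
    """One pass of the trigger process: for each process whose global
    variable says 'enabled', drop a trigger file into its capture folder
    so that process runs once. Returns the names of processes triggered."""
    fired = []
    for name, folder in trigger_dirs.items():
        # Each branch condition checks the corresponding global variable.
        if global_vars.get(f"{name}_Enabled", "false").lower() == "true":
            if write_files:
                os.makedirs(folder, exist_ok=True)
                with open(os.path.join(folder, "trigger.txt"), "w") as fh:
                    fh.write("go")
            fired.append(name)
    return fired

# Scheduled like the Workflow process, e.g. once every 4 seconds:
# while True:
#     run_trigger_cycle(GLOBALS, TRIGGER_DIRS)
#     time.sleep(4)
```

In this sketch a disabled process simply never receives a trigger file, so its Folder Capture sits idle until the global variable is flipped back.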

Those processes pick up the trigger file through the initial Folder Capture, and the very next step destroys the trigger as they pick up whatever files they’re meant to from whatever other source.

Inputs themselves can also use variables for their configurations. So, perhaps instead of the above method you use your global variable to set the input path. Take a folder capture, for instance. When it’s “on” it’s pulling from the normal source of C:\Work\ProcessA. When it’s “off” it’s pointed at some empty folder like C:\NoCapture. The same concept should work for any input.
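As a rough sketch of that second approach (the folder names are the ones from the example above; the function and variable names are invented for illustration, since in Workflow the input task itself resolves the variable):

```python
NO_CAPTURE = r"C:\NoCapture"  # an empty folder used when a process is "off"

def resolve_input_folder(global_vars, process_name, real_folder):
    """Mimic an input task whose capture path comes from a global variable:
    when the process is 'on' it points at its normal source, otherwise at
    an empty folder, so the capture finds nothing and nothing happens."""
    enabled = global_vars.get(f"{process_name}_Enabled", "false").lower() == "true"
    return real_folder if enabled else NO_CAPTURE

# "On": the capture pulls from the real source.
resolve_input_folder({"ProcessA_Enabled": "true"}, "ProcessA", r"C:\Work\ProcessA")
# "Off": the capture is pointed at the empty folder instead.
resolve_input_folder({"ProcessA_Enabled": "false"}, "ProcessA", r"C:\Work\ProcessA")
```

The appeal of this variant is that no condition step ever runs; the path itself is the switch.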

This actually works nicely with the whole “same config, multiple servers” concept. Each server has its own startup config file stored on disk that sets a pile of global variables unique to it. This file is picked up by a startup process, so as the server is turned on, the first thing it does is read in its config file to set the variables. Those variables are then what the various inputs access, rather than hard-coded values. If you want to change them on the fly, that can be done by feeding a secondary config file into a special process designed for the purpose.

I currently use a similar approach. With a startup process I read values from an external XML file to set global variables. If the input is a folder, then I set the path with a global variable. This allows me to have different paths for DEV and PROD. When there isn’t a DEV location, I use a local folder such as the “NoCapture” folder you mentioned. At other times I use a text condition to check whether the global environment variable is set to PROD and perform actions based on the result of that condition.
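A minimal sketch of that startup step, assuming a simple XML layout (the element and variable names here are invented for illustration; the actual file format isn't shown in the thread):

```python
import xml.etree.ElementTree as ET

# Hypothetical startup config; a PROD server would ship the same file
# with different values, so the Workflow configuration never changes.
SAMPLE_CONFIG = """\
<config>
  <var name="Environment">DEV</var>
  <var name="ProcessA_Input">C:\\Work\\DEV\\ProcessA</var>
  <var name="ProcessB_Input">C:\\NoCapture</var>
</config>
"""

def load_globals(xml_text):
    """Parse the startup config and return the global variables
    as the startup process would set them."""
    root = ET.fromstring(xml_text)
    return {v.get("name"): v.text for v in root.findall("var")}

globals_ = load_globals(SAMPLE_CONFIG)
# globals_["ProcessA_Input"] would then back the folder capture path,
# letting DEV and PROD share the configuration file-for-file.
```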

The customer’s objection is that if this process is running every 4 seconds, there is increased load on the server and additional entries in the log: create file, check condition, delete file, every 4 seconds, even when the value is false.

To my knowledge there isn’t a way to make a specific process inactive using a script (such as in the startup process). The goal is to use the same config file in DEV as in PROD without any modifications. I have a process that works, but the customer is questioning whether it could be done better with this feature request.

I don’t even know if the feature request is feasible, but said that I would post the idea.

> The customer’s objection is if this process is running every 4 seconds, there is increased load on the server and additional entries in the log. Create file, check condition, delete file every 4 seconds when the value is false.

All very good points. I’d just like to address this one in particular. I’m primarily playing devil’s advocate here, as I think there’s some merit to the request otherwise.

In my first method, this is absolutely the case. The trigger process will continue to log every run, writing to the disk both to create its working file and to write its log. Though the load is minuscule, a write is a write.

However, with the second method, where you keep the input paths in global variables and change them mid-run, the logging is at least halted. If a process checks a folder and finds nothing, nothing is logged. This also requires no conditions to be checked; the process simply changes its input location and finds nothing.

Still, it does indeed continue to check the folder, which in turn adds a very tiny amount of overhead as it reads the disk. So disabling the process on the fly would allow you to save that small amount of processing time.

Like I said, this is a feature request from a couple of customers because of their perceptions. I’m guessing they are primarily annoyed by the logging. Both customers want verbose logging, but don’t like the create-and-delete entries that occur when the condition does not match.

I appreciate the detailed responses. I hadn’t thought of the argument that the checks are still being performed, so the performance difference would be small.

I enjoyed reading this thread because it validates the changes we have planned for future releases of Workflow. Many of Uomo’s concerns (as well as many of Albert’s workarounds) will be addressed by those changes. I unfortunately can’t say much more (if I did, they’d have to kill me… :stuck_out_tongue: ) but know that the first major parts of this evolution are scheduled for release this year.

So stay put!