Upland OL User community

Loading json file in the data mapper


How do I load a json file in the data mapper?

A simple loadjson("file:///c:/data/zones.json") assigned to a variable doesn't work!

I tried it as an action step and as a preprocessor script. The result was always:

Error running script (ReferenceError: "loadjson" is not defined.) (DME000019)

Thanks in advance and best regards


Hello iiliev,

Unfortunately, loading JSON files this way is not possible: the DataMapper does not have access to the same APIs that the Designer module does.

If you’re trying to complement your data file from an external source, you might want to take a look at this how-to:



Hi Evie,

In the Question “Various Tips & Tricks for Connect Designer” there is the following sentence:

The loadjson() function is available both in the DataMapper and Designer module, whenever a script is used (for example, in preprocessor scripts and action steps in the DataMapper, or Scripts in the Designer).

This gave me the idea of trying such an approach in the DataMapper…

The referenced how-to only partly works: there is no way to get the enriched metadata back into the main branch after sequencing.

Best regards


Unfortunately, the information in the tips & tricks article was either outdated or wrong; I have removed the references to the DataMapper from it. Worse still, there is indeed an issue with the Metadata Sequencer being used in a branch.

We are investigating the sequencer issue and currently the only workaround would be to use a Script, which would loop through the metadata and complement it directly (in such a script you could also load the JSON file(s) and JSON.parse() them, to have access to their data).
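The JSON part of that workaround can be sketched in plain JavaScript. How the file content is actually loaded depends on the host environment, so here jsonText simply stands in for the already-loaded file content, and the zones structure is a made-up example:

```javascript
// Minimal sketch: once the file content is available as a string,
// JSON.parse() turns it into an object you can look values up in.
// jsonText stands in for the loaded file; its structure is hypothetical.
var jsonText = '{"zones": [{"id": "Z1", "rate": 2.5}, {"id": "Z2", "rate": 4.0}]}';
var data = JSON.parse(jsonText);

// Build a lookup table keyed by zone id for quick access while
// looping through the metadata.
var rateById = {};
for (var i = 0; i < data.zones.length; i++) {
  rateById[data.zones[i].id] = data.zones[i].rate;
}
```

With the lookup table built once up front, each metadata record only costs a single object property access instead of a search through the parsed array.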

The sequencer replaces the original metadata file with a new one on every iteration. Updating the original metadata file in place has never worked (at least in my PlanetPress lifetime). I always had to save an intermediate file, reload it, interpret it, and so on…

I already solved my problem by passing a delimiter-separated string as a variable from the Workflow to the DataMapper and splitting it there. But loading the file as text and using JSON.parse() is also an option; I still have to test it.
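The string-splitting approach mentioned here can be sketched as follows; the separator and the statement IDs are made up for illustration:

```javascript
// Hypothetical: a semicolon-separated skip list passed from the
// Workflow to the DataMapper as a single string variable.
var skipListRaw = "STM-1001;STM-2047;STM-3310";
var skipList = skipListRaw.split(";");

// Check whether a given statement id is on the skip list.
var shouldSkip = skipList.indexOf("STM-2047") !== -1; // true
```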

But now I have another problem: is there any way to skip a record in the DataMapper?

Let's say I have a list of statements to skip and I don't want records to be generated for them.

I know I could also do this in the Workflow, but it would be much easier in the DataMapper. I have already done the definition work, and the input data is pretty weird… It doesn't make sense to replicate the logic in the Workflow.
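One way to approach this, assuming the skip decision can be made from the raw input lines, is to filter the data before extraction (for example in a preprocessor that rewrites the input file). The pure filtering logic might look like the sketch below; the line format and field positions are assumptions:

```javascript
// Hypothetical filter: drop input lines whose first field (assumed to
// be the statement id, semicolon-separated) appears on the skip list.
function filterStatements(lines, skipList) {
  return lines.filter(function (line) {
    var id = line.split(";")[0];
    return skipList.indexOf(id) === -1;
  });
}

var kept = filterStatements(
  ["STM-1;Alice;120.00", "STM-2;Bob;99.50", "STM-3;Carol;10.00"],
  ["STM-2"]
);
// kept contains the STM-1 and STM-3 lines only
```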

Best regards