[Solved] Issues with using JSON as a data source

Recently I started investigating using JSON as a data source for the data mapper. As everything is JSON these days, it seemed a good idea to explore the possibilities.
The data file we receive from our other system is still XML, so in my workflow I use the XML/JSON Conversion plugin.
There are now two issues I am trying to solve: one involves this plugin and the other involves the data mapper.

Suppose I have the following source XML:
<sample>
  <item>
    <mytag1>1</mytag1>
    <mytag2>2</mytag2>
    <mytag3>3</mytag3>
    <mytag4>4</mytag4>
  </item>
  <item>
    <mytag1>5</mytag1>
    <mytag2>6</mytag2>
    <mytag3>7</mytag3>
    <mytag4>8</mytag4>
  </item>
</sample>


After converting via the plugin I get the following JSON:
{
  "sample": {
    "item": [
      { "mytag1": "1", "mytag2": "2", "mytag3": "3", "mytag4": "4" },
      { "mytag1": "5", "mytag2": "6", "mytag3": "7", "mytag4": "8" }
    ]
  }
}

In my data mapper I would like to set a repeat on “sample / item” and use the following JSON path collection: $.sample.item[*]
Other ways to write this would be ".sample.item[*]" or "$..item[*]", which all give the same result.
And it is all perfectly valid according to this website:

As soon as I add an Extract step within my repeat I get the following message:
Extraction: Error occurred [Field mytag1]: A value cannot be extracted twice (mytag1)

Why does this cause an error and how would I solve this?

My second issue is that the plugin does not know anything about arrays.
So if my XML changes because I have just one section instead of two, I get the following XML:
<sample>
  <item>
    <mytag1>1</mytag1>
    <mytag2>2</mytag2>
    <mytag3>3</mytag3>
    <mytag4>4</mytag4>
  </item>
</sample>

Running this through the plugin would give me the following JSON:
{
  "sample": {
    "item": { "mytag1": "1", "mytag2": "2", "mytag3": "3", "mytag4": "4" }
  }
}

And now suddenly it is not an array anymore, which makes sense because the plugin does not know that it needs to be an array. But how do I handle this case within the datamapper?
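To illustrate, the difference between the two shapes is easy to check in plain JavaScript (using the converted JSON from above):

```javascript
// Two <item> elements: the plugin emits "item" as an array.
var twoItems = JSON.parse('{"sample":{"item":[{"mytag1":"1"},{"mytag1":"5"}]}}');

// One <item> element: the plugin emits "item" as a plain object.
var oneItem = JSON.parse('{"sample":{"item":{"mytag1":"1"}}}');

console.log(Array.isArray(twoItems.sample.item)); // true
console.log(Array.isArray(oneItem.sample.item));  // false
```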

For proper display in this forum, add your code using the pre-formatted text (</> icon)

{
  "sample": {
    "item": [
      {
        "mytag1": "1",
        "mytag2": "2",
        "mytag3": "3",
        "mytag4": "4"
      },
      {
        "mytag1": "5",
        "mytag2": "6",
        "mytag3": "7",
        "mytag4": "8"
      }
    ]
  }
}


Using Connect 2022.2.3, I have set up my JSON path to $.sample.item[*] for the repeat.
I then added an extract step and it works fine.

What version of Connect are you working with?

And here is how I did it for your second problem:

JSONArrayExample.OL-datamapper (4.1 KB)

I understand the desire to work with JSON rather than XML (much simpler), but in your case, if you go back to XML you wouldn’t have this issue.

In my case I’d rather work with the data the way it is by default.
The only time I’d convert to JSON is when I need to access the data from a script in Workflow. Then I would definitely convert it, because I can use dot notation in my script to browse through it.
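For example, once parsed into an object (a sketch with made-up sample data):

```javascript
// Hypothetical sample data, parsed from the converted JSON string.
var data = JSON.parse('{"sample":{"item":[{"mytag1":"1"},{"mytag1":"5"}]}}');

// Dot notation makes browsing the structure straightforward:
console.log(data.sample.item.length);    // 2
console.log(data.sample.item[0].mytag1); // "1"
```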


Thank you for your very quick response. I am currently on version 2022.2.1.5127 and this is the message:

I could upgrade to the latest version, but as 2023 is almost here I think I’d rather wait a week or so and then upgrade straight to that version, as it has some nice things that would help me (especially the shared scripts in Workflow, since I make heavy use of scripting).

And your data mapper solution works perfectly.

I expect our other system to migrate to JSON in the future, as the developers are implementing a lot of API interfaces and JSON is the standard for that these days. Once the conversion is in place, it will be very easy to switch on my side by simply removing the conversion plugin.

I doubt that your version is the issue. Can you share your Datamapper?


I have attached my sample datamapper.

sample.OL-datamapper (3.3 KB)

Please note that even though your datamapper solution works I decided to go another route.
The reason for this is that I have a repeat to process an array, but within that repeat I have a lot of extract steps as well as other nested repeats, so this approach would become so complex that it would not be maintainable.
So, after converting the XML to JSON, I now run a script that goes over the sections that could be arrays and converts them on the fly, for example:

// Check if the selected services are already an array. If not, convert them to one.
if (!Array.isArray(JsonObject.xml.document.details.selectedservices.service)) {
    JsonObject.xml.document.details.selectedservices.service =
        JSON.parse("[" + JSON.stringify(JsonObject.xml.document.details.selectedservices.service) + "]");
}

As I am working with a JSON object, replacing values this way is not as expensive as a string find/replace.

Hello @dvdmeer,

I’m glad to hear that you were able to solve the issue by making use of a script instead.

The reason why the following error occurs is because you’re assigning a value two times to the same record field in the current setup. Please change the Data Table value, as applied to the settings of the Extraction Step “Extraction”, from “record” to (something like) “record.detail” to solve this error. This will add the extracted field to the detail table “detail” instead.

Extraction: Error occurred [Field mytag1]: A value cannot be extracted twice (mytag1) (DME000132) (DME000218)

Thank you Marten. This works perfectly.

1 Like


Your script could be much less costly in terms of performance if you skip the unnecessary JSON.parse() call.

if (!Array.isArray(data.sample.item)) {
  data.sample.item = [data.sample.item];
}

That is a great tip. Thank you for this.

It would be nice if the XML/JSON Conversion plugin could be updated with an option to supply a list of XPaths that should always be treated as arrays, as there is no way for the plugin to infer this from the XML.
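Until such an option exists, a small post-conversion helper along these lines could serve as a workaround (a sketch only; the path names are examples and would need to match your own data):

```javascript
// Dot-paths in the converted JSON that must always be arrays
// (example paths only; adjust to your own document structure).
var arrayPaths = [
  "sample.item",
  "xml.document.details.selectedservices.service"
];

// Wrap the value at a dot-path in an array if it is not one already.
function forceArray(root, path) {
  var keys = path.split(".");
  var last = keys.pop();
  var parent = root;
  for (var i = 0; i < keys.length; i++) {
    if (parent == null) return; // path not present in this record
    parent = parent[keys[i]];
  }
  if (parent != null && parent[last] !== undefined && !Array.isArray(parent[last])) {
    parent[last] = [parent[last]];
  }
}

// Example: a record where only one <item> was converted.
var data = { sample: { item: { mytag1: "1" } } };
arrayPaths.forEach(function (p) { forceArray(data, p); });
console.log(Array.isArray(data.sample.item)); // true
```

Paths that do not exist in a given record are simply skipped, so the same list can be applied to every document.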

I will add your suggestion to our wish list.