Upland OL User community

Datamapper Javascript API

I have built a Data Mapper definition to parse data from a PDF file. I am trying to write a JavaScript Action to write some of the parsed data to a text file (CSV). I am able to reference the values captured in the final record through the record.fields object. How do I access all of the other records? Is there an array reference that I can use to access the other records?

You should use a post processor script; that will give you access to all the records that have just been extracted by your DM Config. Something like the following:

// Open the destination CSV file for writing
var fileOut = openTextWriter("c:\\out\\Clients.csv");

// ***** If column names need to be added as the first line of the CSV,
// ***** uncomment the next three lines (after adjusting the labels)
// var labels = 'Fullname;Company;Street;City;Province;Country;zip';
// fileOut.write(labels);
// fileOut.newLine();

// Go through all records and extract the field values
for (var i = 0; i < data.records.length; i++) {
    var name = data.records[i].fields.FullName;
    var company = data.records[i].fields.Company;
    var street = data.records[i].fields.Street;
    var city = data.records[i].fields.City;
    var prov = data.records[i].fields.Province;
    var country = data.records[i].fields.Country;
    var zip = data.records[i].fields.ZipCode;
    // Concatenate the field values with ';' to form one CSV line
    var str = name + ';' + company + ';' + street + ';' + city + ';' + prov + ';' + country + ';' + zip;
    fileOut.write(str);
    fileOut.newLine();
}

// Close the file
fileOut.close();
Thank you very much. This helped a lot. The problem I was having was that I included the Javascript logic in an Action, rather than a post processor script. As an Action script, the data.records object is not available, only the last document data. Once I moved it to a post processor script, the data.records object became available and worked like a charm.
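To illustrate the difference in scope: in a post processor script the whole run is exposed as an array of records, so building the CSV is just a loop over that array. The sketch below mocks a minimal `data.records` structure (the real object, along with `openTextWriter`, is supplied by the DataMapper runtime, and the field names here are placeholders) so the loop pattern can be tried outside of OL Connect.

```javascript
// Hypothetical stand-in for the "data" object that the DataMapper makes
// available in a post processor script. An Action script would only see
// the single record being processed; a post processor sees all of them.
var data = {
  records: [
    { fields: { FullName: "Jane Doe", Company: "Acme",   City: "Ottawa"  } },
    { fields: { FullName: "John Roe", Company: "Globex", City: "Toronto" } }
  ]
};

// Build one semicolon-delimited CSV line per record, the same way the
// post processor loop does before writing each line to the output file.
var lines = data.records.map(function (rec) {
  return [rec.fields.FullName, rec.fields.Company, rec.fields.City].join(";");
});

console.log(lines.join("\n"));
```

In the real post processor you would replace the `console.log` with `fileOut.write(...)` / `fileOut.newLine()` calls inside the loop, as in the script above.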


Digging in this forum’s cellar:

One question on this topic:

Has anyone tried to compare the performance of writing the DataMapper’s data in a post processor loop vs. using the generated metadata (XML) and an XSLT transformation?

I’m currently going the XSLT way and it’s really fast.


The postprocessor will give you much faster performance because Workflow will not have to fetch all the various record information and convert it to metadata.


But, please correct me if I’m wrong: if the DataMapper is running in Workflow, that metadata is created anyway. So why not use it the XSLT way to get that CSV?


Because you can then set the DM task in Workflow to return only the record IDs instead of the entire records. This is much faster, especially for large jobs.


So I’ll rewrite some code and compare both versions; results will follow.