I have a question along these lines: if I use the DataMapper to mine/map a text file containing tables, and I export the data my addressing software needs (name, address, city, st, zip) to a CSV, how does Connect match that data back up with the "detail" data when I bring it back in to create my print output in presorted order?
If you want to export the data directly from the DataMapper, you will need to add a Unique ID field to each and every record. When you pass the file to your addressing software, it should remain untouched, which will allow you to match the new records to the previous ones, presumably through a script in Workflow.
Alternatively - and this would be the recommended way of doing things - you can export the data from Workflow itself by using the Retrieve Items task and using a script to convert the resulting JSON file to CSV. In this case, a unique Record ID is already provided for each record. Once the addressing software has done its magic, you can update the original JSON file with a script, using the Record ID as your indexing field. You can then use the Update Data Records task to store the modified information in the Connect Database.
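To make the conversion step concrete, here is a minimal sketch of turning the retrieved JSON into a CSV keyed on the Record ID. Note the shape of the JSON (`records`, `id`, `values`, and the field names) is an assumption for illustration; the actual layout produced by the Retrieve Items task may differ, so adjust the property names to what your output file contains.

```javascript
// Hypothetical sample of what the Retrieve Items JSON might look like.
// The property names (records, id, values) are assumptions, not the
// documented Connect output format.
const retrieved = {
  records: [
    { id: 10001, values: { name: "Jane Doe", address: "1 Main St", city: "Springfield", state: "IL", zip: "62701" } },
    { id: 10002, values: { name: "John Roe", address: "2 Oak Ave", city: "Peoria", state: "IL", zip: "61602" } }
  ]
};

// Build the CSV, putting the Record ID first so the addressing software
// can pass it through untouched and we can index on it afterwards.
const header = ["RecordID", "name", "address", "city", "state", "zip"];
const csvLines = [header.join(",")];

for (const rec of retrieved.records) {
  const v = rec.values;
  csvLines.push([rec.id, v.name, v.address, v.city, v.state, v.zip].join(","));
}

const csv = csvLines.join("\r\n");
```

On the way back, the same Record ID column lets you look up and update each entry in the original JSON before handing it to the Update Data Records task.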
I recommend adding a .replace(/"/g, '""') to escape double quotes if you are going for standard CSV. Not doing so can really mess with the data, which I learned the hard way.
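For what it's worth, here is a small helper along those lines: it doubles any embedded quotes and wraps the value in quotes when it contains a comma, quote, or newline, per the usual RFC 4180 convention. The function name is just an example.

```javascript
// Escape a single CSV field (RFC 4180 style): double any embedded
// quotes, and wrap the whole value in quotes when it contains a
// comma, a quote, or a line break.
function csvEscape(value) {
  const s = String(value);
  if (/[",\r\n]/.test(s)) {
    return '"' + s.replace(/"/g, '""') + '"';
  }
  return s;
}
```

For example, csvEscape('ACME "Widgets" Inc.') returns "ACME ""Widgets"" Inc." (with the surrounding quotes), while a plain value passes through unchanged.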
One more question: is there a way to write detail data instead of records? I have one record with many detail lines and am looking for the same kind of output.
data.records[i].fields[j].toString() needs to be replaced with…?
You’ll have to add an additional level of looping to use something like:
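Here is a rough sketch of that extra loop level. The data structure below just mimics a record with a nested detail table; the property names (fields, tables, detail, and the column names) are assumptions for illustration, so map them onto whatever your actual record layout exposes. The idea is simply: outer loop over records, inner loop over each record's detail rows, emitting one output line per detail row.

```javascript
// Hypothetical record-with-detail structure; the names fields/tables/detail
// are placeholders, not the exact Connect object model.
const data = {
  records: [
    {
      fields: { name: "Jane Doe", zip: "62701" },
      tables: {
        detail: [
          { item: "A-100", qty: 2 },
          { item: "B-200", qty: 1 }
        ]
      }
    }
  ]
};

const lines = [];
for (let i = 0; i < data.records.length; i++) {
  const rec = data.records[i];
  const details = rec.tables.detail;
  // One output line per detail row, repeating the parent record's fields
  // so every line carries the information needed to match it back up.
  for (let j = 0; j < details.length; j++) {
    lines.push([rec.fields.name, details[j].item, details[j].qty].join(","));
  }
}
```

So instead of one line per record, you get one line per detail row, with the parent record's fields repeated on each.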