OL Learn

Split detail records into multiple documents

We have a mailing where the sheet count for any one document can exceed what will fit in the envelope. The job currently runs in PP7 and we use subreccount() to know when to re-execute the address-bearing page.

I’m trying to conceptualize how to do this in Connect.

If a record has, let’s say, 90 detail records, but we can only “fit” 30 in an envelope, I need a way to create a new mailpiece every 30 detail records, so this one record would produce three documents.

I’m looking to crowdsource ideas… something more elegant than pre-processing the data.
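The splitting logic being asked for can be sketched in plain JavaScript (outside Connect; the `chunkDetails` helper name and the 30-per-envelope limit are just illustrations, not anything from the Connect API):

```javascript
// Split an array of detail records into mailpiece-sized chunks.
// maxPerEnvelope is the number of detail lines that fit in one envelope.
function chunkDetails(details, maxPerEnvelope) {
  const mailpieces = [];
  for (let i = 0; i < details.length; i += maxPerEnvelope) {
    mailpieces.push(details.slice(i, i + maxPerEnvelope));
  }
  return mailpieces;
}

// A record with 90 detail lines and a 30-line limit yields 3 mailpieces.
const pieces = chunkDetails(Array.from({ length: 90 }, (_, i) => i + 1), 30);
console.log(pieces.length);    // 3
console.log(pieces[2].length); // 30
```

The open question in the thread is where this chunking should happen: in the template, in a post-pagination pass, or in the data itself.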

Well, you could handle all your sections through a Control Script, allowing you to call your address-bearing page when needed. Then, using a barcode or inserter mark, you can trigger a new envelope…

Does that make sense?

How is the detail table controlled? If I use a Control Script, it seems I’d be responsible for the pagination myself. So the approach I used with my “label step-and-repeat imposition” job, where I use a Snippet and do my own search/replace, might work.

Or you could insert your address-bearing page using Post Pagination scripts, in between the overflow pages.

This way, you let Connect handle the pagination. That could be done by simply looking at the values from PaginationInfo, available from the Post Pagination Script API.
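The post-pagination idea can be sketched outside Connect: given the page count that pagination produced and the envelope capacity in pages, you can compute where each new mailpiece (and its re-inserted address page) should begin. The function name and the 10-page capacity below are assumptions for illustration, not the actual PaginationInfo API:

```javascript
// Given the total page count from pagination and the number of pages
// that fit in one envelope, list the 1-based page numbers where a new
// mailpiece begins, i.e. where an address-bearing page would be
// inserted in a post-pagination pass.
function addressPagePositions(pageCount, pagesPerEnvelope) {
  const positions = [];
  for (let page = 1; page <= pageCount; page += pagesPerEnvelope) {
    positions.push(page);
  }
  return positions;
}

// A 25-page record with a 10-page envelope capacity splits into three
// mailpieces, starting at pages 1, 11 and 21.
console.log(addressPagePositions(25, 10));
```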

The trouble with all of this is that it’s still creating a multi-mailpiece record, which means the inserter marks are going to be broken too: they will recognize each record as a single mailpiece rather than the 2-3 mailpieces it actually is.

I honestly believe the only sane way to do this would be to pre-process the data to make the splits. Set it up such that one mailpiece equals one record so that the inserter marks can be generated on the page correctly without requiring a second pass through Connect.

Effectively, every method I can think of that doesn’t pre-process the data ends up requiring two passes through Connect in some way, which isn’t ideal. By pre-processing the data, we can end up in an ideal situation for handling the inserter marks in one pass through Connect.

Of course, depending on what sort of data we’re talking about, that pre-processing is going to be more or less difficult. CSV where the address information appears on each line, for example, could be done in the datamapper with nothing more than a little boundary script forcing a break every time the address changes or every time we’ve hit X records. Other data types would probably be more tricky.
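The boundary idea above can be sketched as plain JavaScript. In an actual Connect boundary script you would call something like `boundaries.set()` when the condition fires; here the function just collects the row indices where a boundary would be triggered, and the field name `address` and the limit are assumptions:

```javascript
// Decide, line by line, where a new record (mailpiece) should start:
// whenever the address changes, or after maxDetails lines for the same
// address. Returns the 0-based row indices where a boundary fires.
function recordBoundaries(rows, maxDetails) {
  const starts = [];
  let prevAddress = null;
  let count = 0;
  rows.forEach((row, i) => {
    if (row.address !== prevAddress || count >= maxDetails) {
      starts.push(i); // in a Connect boundary script: boundaries.set()
      prevAddress = row.address;
      count = 0;
    }
    count++;
  });
  return starts;
}

const rows = [
  { address: "A" }, { address: "A" }, { address: "A" },
  { address: "A" }, { address: "B" },
];
// With a 2-detail limit, boundaries fire at indices 0, 2 and 4.
console.log(recordBoundaries(rows, 2));
```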

My guess then is we will rely on AlbertN’s input, as he is far more knowledgeable about inserter marks in Connect than I am.

Well, the data is a mess, so it has to be tackled through programming. The question is where to do that programming: through a lot of complex scripting in Design, or in a pre-processor.

I’ll suggest to my client that the best spot for that kind of programming is a pre-processor.

Thanks for the discussion.

If the data is that messy, you’d probably want to at least lean on the datamapper to simplify your work. Run it through an initial data mapping to get it into a clean format and have the workflow output it to XML. Now you’ve got a clean data set that you can work with in a script. Once that result set is ordered the way it needs to be, just pass it through a second datamapper.
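A sketch of that intermediate step: once the first pass has produced clean rows, a script can regroup them so that one output record equals one mailpiece before the second datamapper runs. The `toMailpieceXml` name and the element names are illustrative, not any Connect convention:

```javascript
// Regroup cleaned detail rows so that one output record corresponds to
// one mailpiece, then serialize to simple XML for a second datamapper.
function toMailpieceXml(rows, maxDetails) {
  const pieces = [];
  for (let i = 0; i < rows.length; i += maxDetails) {
    pieces.push(rows.slice(i, i + maxDetails));
  }
  const body = pieces
    .map(piece =>
      "<mailpiece>" +
      piece.map(r => `<detail>${r}</detail>`).join("") +
      "</mailpiece>")
    .join("");
  return `<records>${body}</records>`;
}

// Three details with a 2-per-mailpiece limit become two mailpieces.
console.log(toMailpieceXml(["a", "b", "c"], 2));
// <records><mailpiece><detail>a</detail><detail>b</detail></mailpiece><mailpiece><detail>c</detail></mailpiece></records>
```

The second datamapper would then use one `<mailpiece>` element per record, so the inserter marks line up without a second pass through Connect's output step.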