I had this problem recently. You can do one of two things:
1. Increase the amount of memory allocated to each WeaverEngine instance. Note that for large outputs, without job grouping and separation, a single WeaverEngine instance will process the whole job, so starting multiple engines has no effect beyond consuming RAM. That said, engines don't grow into their full memory allocation until they're used.
NOTE: The required amount is an unknown - you can't predict where the limit is. I eventually had to allocate 8GB of RAM to WeaverEngine to produce a single output file.
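For reference, the Connect engines run on Java, so this ceiling is effectively the JVM maximum heap size. How you set it depends on your version (the Server Configuration preferences in newer releases, or the engine's .ini file in older ones - the exact location is an assumption here, so check your own install), but an 8GB allocation corresponds to the standard JVM max-heap flag:

```ini
-Xmx8g
```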
2. Separate the output into manageable chunks and then thread those chunks through Weaver for performance. What I did here was:
a) Put all my content (from Print Content) into the Connect DB using Set Properties.
b) Retrieve Items for the entire batch.
c) Use a Metadata Sequencer to sequence (split) the output every 5,000 occurrences of the Document level.
d) Give each iteration of the sequencer an index - the easiest is to use %i (the current iteration of the innermost loop in Workflow).
e) Output the metadata file to a folder along with an identically named .ready file.
f) In a threaded process, look for .ready files, pickup the corresponding metadata file, push that through the Create Job plugin and subsequently the Create Output plugin.
This method gave me two benefits: not only did I avoid the unknown upper memory limit, I also gained performance by threading the output through Weaver.
Either way, whether you self-replicate or not, the output can be kept in check by limiting the number of Documents selected for output via Weaver.
Finally, I also noticed a marked improvement in output size capability when letting Output Creation place the file directly in the filesystem (per the Output Creation settings) rather than using it as an action and continuing with the Workflow process. I suspect this is down to the 32/64-bit architecture differences between the Workflow and Connect engines.
Let me know if you need further info.