I am working with a PDF data file that contains about 65,000 records.
I have created and sorted the metadata at the document level without issues.
Now I need to group the metadata whenever the City field value changes.
The metadata Group Level creation takes over an hour to complete, and the Workflow memory usage rises to about 1.6 GB.
Following that, a Metadata Fields Management task is required to add the City field at the Group Level.
The same Metadata Fields Management task also needs to add a field at the Group Level which is the SUM of another field taken from each document within the group. This part works fine.
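To make the requirement concrete, here is the logic I am after, sketched in plain Python over an in-memory record list. This is only an illustration of the grouping and summing I need, not the actual Metadata API, and the City/Amount field names are just examples:

```python
from itertools import groupby

# Hypothetical per-document records, already sorted by City
# (as my document-level metadata already is).
documents = [
    {"City": "Leeds",  "Amount": 12.50},
    {"City": "Leeds",  "Amount": 3.75},
    {"City": "London", "Amount": 9.99},
]

groups = []
# Start a new group each time the City value changes; because the records are
# pre-sorted, groupby() yields one run of documents per City.
for city, docs in groupby(documents, key=lambda d: d["City"]):
    docs = list(docs)
    groups.append({
        "City": city,                                    # City promoted to the group level
        "AmountTotal": sum(d["Amount"] for d in docs),   # SUM of the document-level field
        "Documents": docs,
    })

print(groups)
```

In the actual job this has to happen on the metadata file itself, which is where the time and memory are going.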
This step also takes over an hour to complete and keeps memory usage at about 1.6 GB, so I am worried the process will fall over in production when running with a much greater number of records. Unfortunately, I can't split the data, as it needs to be sorted, grouped and then barcoded for the inserter.
Are there any ways to optimise the Metadata Group creation whilst allowing me to add and sum the required fields?
I am told a metadata API exists which might speed things up, but quite frankly I have no experience with it.
Perhaps if someone knows of an existing example script which groups metadata when a field value changes, I could use it as a starting point?