I've got a fairly simple package that takes a file, unpivots it, and then sorts it. The problem is that it's taking forever. The file contains about 1.8 million rows and 9 columns (including the pivot key), so after unpivoting we end up with 14.4 million rows. It uses stacks of memory (our test and development servers only have 4GB and production only 12GB). Would I be better off writing to a temp database? Are there any free articles out there that cover this sort of thing, or do I just need to use the good old trial and error approach?
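
For context, by "writing to a temp database" I mean something along these lines. This is just a minimal sketch, assuming the file has been bulk-loaded into a staging table dbo.StagingWide with a key column RowKey and eight value columns Val1 to Val8 (all hypothetical names, and the value columns would need a common data type for UNPIVOT to work):

    -- Unpivot and sort inside the database engine, which can spill to
    -- tempdb instead of holding all 14.4 million rows in package memory.
    SELECT RowKey, ColName, ColValue
    FROM dbo.StagingWide
    UNPIVOT (ColValue FOR ColName IN
        ([Val1], [Val2], [Val3], [Val4], [Val5], [Val6], [Val7], [Val8])) AS u
    ORDER BY RowKey, ColName;

That's the sort of approach I'm wondering about, rather than doing the unpivot and sort entirely in the data flow.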
Many thanks,
Steve