I have code that works well, but I feel it is inefficient. The following table captures the essence of my question:
Code:
tbl_Data_Output_150218
ID (Auto)   P    X    Y    Z
1           1    32   187  945
2           1    116  33   4
3           1    78   877  129
...
32          1    662  14   81
33          7    21   778  903
34          7    144  209  63
35          7    12   9    982
...
64          7    667  62   812
65          10   887  27   893
etc...
Each value of P occurs in exactly 32 records.
I use SQL to apply a test to each record, and I want to find the values of P that 'pass' this test in all 32 cases.
So, I apply the test (not shown here and not relevant to my question), then use code like this:
Code:
SELECT P FROM tbl_Data_Output_150218
WHERE <test>
GROUP BY P
HAVING COUNT(*) = 32
Everything works perfectly, but I suspect the code is grossly inefficient. Here is why: the test applied to each record is a very stringent one - the vast majority of records fail it. Less than 1 in a million values of P will pass the test all 32 times! So, let's say P=1 fails the test on its 1st record; then I'd like to reject P=1 from any further consideration instead of testing it 31 more times.
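To make the idea concrete, here is a rough sketch of the kind of short-circuit query I have in mind, using NOT EXISTS instead of GROUP BY/HAVING (just an illustration - <test> again stands for the actual record-level condition, which I've omitted): a value of P survives only if no record with that P fails the test, so the database could stop checking a given P as soon as it hits one failing record.
Code:
SELECT DISTINCT d.P
FROM tbl_Data_Output_150218 AS d
WHERE NOT EXISTS (
    SELECT *
    FROM tbl_Data_Output_150218 AS f
    WHERE f.P = d.P
      AND NOT (<test>)   -- one failing record is enough to disqualify this P
);
In principle the subquery can bail out at the first failure it finds, whereas my GROUP BY version evaluates the test on all 32 records for every P. Is this the right direction, or is there a better approach?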
I'm just wondering if there is a more efficient algorithm to handle problems like this. Because my data tables are so large, an improvement in efficiency would be really helpful.
Thanks in advance for any hints.
Vicky C.