
Overflow error in AVG function


baugie

Programmer
Apr 9, 2003
I am getting an overflow error when using the AVG function. What would cause this problem?
 
About the only thing I can think of: are any of your fields NULL?
 
Null should cause the record to be ignored and ought not to cause overflow.
 
Didn't think null could happen with a numeric field! If you're trying to get an average of a text field, can that really happen?

Try this once you verify the field is a numeric field:

In Field:

Avg_num: IIf([field] Is Null Or [field] = 0, 0, [field])

Should work.
 
Not only can null exist in a numeric field but that capability is essential. Consider a course with various optional modules. We want to find the average score for each module. Those people who don't take a module will have a null score. Aggregate functions like Average or Sum or Count should ignore nulls so as to give a true average for those who did the module.

If you really want to treat a null as a zero then Access has the nz function which is simpler than the proposed IIF function.

Trying to sum or average a text field simply gives a data type mismatch error, at least in Access 2000. So no overflow.

So, as I said before, nulls should not lead to an overflow error. My best guess would be a corrupt database, and I would try to repair it.
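A minimal Python sketch of the point about nulls (the module scores are made up, and `None` stands in for SQL NULL; Access's Avg behaves like the first calculation, Nz-as-zero like the second):

```python
# Hypothetical module scores; None stands in for SQL NULL
# (a student who did not take the module).
scores = [80, None, 90, None, 70]

# Aggregate-style average: NULLs are ignored, as Access's Avg does.
taken = [s for s in scores if s is not None]
avg_ignoring_nulls = sum(taken) / len(taken)      # 240 / 3 = 80.0

# Nz-style average: NULL treated as 0, which drags the result down.
as_zero = [0 if s is None else s for s in scores]
avg_nulls_as_zero = sum(as_zero) / len(as_zero)   # 240 / 5 = 48.0

print(avg_ignoring_nulls, avg_nulls_as_zero)
```

The two answers differ, which is exactly why treating NULL as zero should be a deliberate choice rather than a default.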
 
Please show the ENTIRE SQL statement, describe how it is being executed (e.g. DB.Execute, DoCmd.RunSQL, in a query, etc.) and specify the data type of the field you're averaging.

Please also give the exact error message you're getting.

One possibility I can imagine is that you're doing an INSERT INTO tablename (columnname)
SELECT AVG(othercolname) FROM (othertblname)
and the target column is too small to hold the average value. There are many other possibilities.

Rick Sprague
Want the best answers? See faq181-2886
To write a program from scratch, first create the universe. - Paraphrased from Albert Einstein
 
I also believed that the problem was database corruption. However, I made a simple adjustment to the equation and the overflow went away. I changed it from

Format(Avg([Field1]/[Field2]),"#.##")

to

Format(Avg([Field1])/Avg([Field2]),"#.##")

I appreciate all of your input.

Thanks,

baugie
 
hmmmmmmmmmmmmmm ... the original issue was probably a very small value in Field2.

on the other hand, the two expressions are not really the same, so perhaps you should look into what value you are really attempting to derive.

The following illustrates the difference:

Code:
Loc	DtSched	SchedProd	ActProd	Avg1	AvgSched	AvgAct	AvgAvg
CLPL	6/15/03	2883	2900	0.994137931034483	2883	2900	1.00589663544919
CLPL	6/16/03	4000	5000	0.8	4000	5000	1.25
CLPL	6/17/03	3067	2909	1.05431419731867	3067	2909	0.948483860449951
TU	6/15/03	2700	2800	0.964285714285714	2700	2800	1.03703703703704
TU	6/16/03	2900	2870	1.01045296167247	2900	2870	0.989655172413793
TU	6/17/03	3741	6528	0.573069852941177	3741	6528	1.74498797113071
WP	6/15/03	10000	12343	0.810175808150369	10000	12343	1.2343
WP	6/16/03	3000	3039	0.987166831194472	3000	3039	1.013
WP	6/17/03	3000	3443	0.871333139703747	3000	3443	1.14766666666667
[/code]

as generated by:

SELECT tblReconProd_Start.Loc, tblReconProd_Start.DtSched, tblReconProd_Start.SchedProd, tblReconProd_Start.ActProd, Avg(([SchedProd]/[ActProd])) AS Avg1, Avg(tblReconProd_Start.SchedProd) AS AvgSched, Avg(tblReconProd_Start.ActProd) AS AvgAct, [AvgAct]/[AvgSched] AS AvgAvg
FROM tblReconProd_Start
GROUP BY tblReconProd_Start.Loc, tblReconProd_Start.DtSched, tblReconProd_Start.SchedProd, tblReconProd_Start.ActProd
WITH OWNERACCESS OPTION;
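The same distinction in a short Python sketch (the numbers below are invented, not taken from the table above):

```python
# Hypothetical scheduled and actual production figures.
sched = [4000, 3000]
act   = [5000, 3000]

# Avg([Sched]/[Act]): average each row's ratio.
avg_of_ratios = sum(s / a for s, a in zip(sched, act)) / len(sched)

# Avg([Sched])/Avg([Act]): ratio of the two averages.
ratio_of_avgs = (sum(sched) / len(sched)) / (sum(act) / len(act))

print(avg_of_ratios)   # (0.8 + 1.0) / 2 = 0.9
print(ratio_of_avgs)   # 3500 / 4000 = 0.875
```

Which one is "right" depends entirely on what quantity the report is supposed to measure.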




MichaelRed
m.red@att.net

Searching for employment in all the wrong places
 
I agree with Michael - your two solutions will not give the same answer.

Part of the confusion in the replies is that the overflow has nothing to do with the average function. It is because you are trying to divide by a number that is so small that the answer is too big for the target field. For example, 1/0.000001 = 1000000.

One possibility would be to use a MIN function. For example, if the target field can only store numbers up to 32000, you could replace your calculation with MIN(32000, oldcalculation). Every time the old calculation gives an answer over 32000, the MIN function will restrict it to 32000. If the old calculation is under 32000 then the old calculation will still apply.

My solution will not help if you are trying to divide by zero as the old calculation will fail totally. If that is a possibility then you need to address that separately.

Another possible solution is to stop division by small numbers by making them bigger. If the old calculation was a/b you could have a/MAX(b,0.001). Now if b is under 0.001 you will divide by 0.001.

If a or b could be negative the logic becomes more tricky but can be resolved.
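Both ideas can be sketched in Python (Access/VBA would spell these differently; the cap of 32000 and the floor of 0.001 are just the example values above):

```python
CAP = 32000      # example: largest value the target field can hold
FLOOR = 0.001    # example: smallest divisor we are willing to allow

def capped(value):
    """The MIN idea: clamp the result so it fits the target field."""
    return min(CAP, value)

def floored_divide(a, b):
    """The MAX idea: floor the divisor so it can never be tiny.
    Assumes b is non-negative; negative b needs the sign handling
    discussed just above."""
    return a / max(b, FLOOR)

print(capped(1 / 0.000001))         # ~1000000, clamped to 32000
print(floored_divide(1, 0.000001))  # divisor floored to 0.001, so ~1000
```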

Ken
 
cheerio,


If a or b could be negative the logic becomes more tricky but can be resolved??????????


How so?






MichaelRed
m.red@att.net

Searching for employment in all the wrong places
 
This is one of those statements you make and possibly regret, but I think you could look at replacing a/b with

sgn(b) * a / max(abs(b), 0.001)

Here we guarantee that we divide by a positive number and then reinstate the effect of the sign of b by using the sgn function.

The problem with my original solution is that if b was -3 it would divide by 0.001, which is higher than -3. With the new solution it will divide by 3 and then multiply the answer by -1 to get the sign back.
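A Python sketch of the sign-preserving version (same hypothetical floor of 0.001; Python has no built-in sgn, so one is defined to match VBA's Sgn):

```python
FLOOR = 0.001  # hypothetical smallest magnitude allowed as a divisor

def sgn(x):
    """Mimic VBA's Sgn(): -1, 0 or 1 depending on the sign of x."""
    return (x > 0) - (x < 0)

def safe_divide(a, b):
    """Divide by at least FLOOR in magnitude, then restore b's sign.
    Note that b == 0 yields 0 rather than a division error."""
    return sgn(b) * a / max(abs(b), FLOOR)

print(safe_divide(6, -3))   # divides by 3, restores the sign: -2.0
```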

Ken

 
Looks like you guys are having some fun...

Here are the limitations on our data set so that I am not so worried about some of the areas you are discussing.

First, Field1 is a subset of Field2, so the ratio will always be less than 1. Second, this is production data, and the numbers will always be positive whole numbers. Third, this customer is not paying for support, so as long as the report works I am happy :)

So I appreciate all of your time and effort. Thanks

baugie
 
Two 'final' comments:

1. " ... not paying for support ... " should not generally be true. Where you have a fixed-price contract, you should build in a warranty clause and a warranty fund. At least part of the specification for a fixed-price contract should include language which is quite specific re the "acceptance criteria", including the development of test data and procedures. Very little "un-paid" work should even be considered where the product acceptance testing is passed satisfactorily. You SHOULD NOT blindly accept the responsibility to modify (e.g. "FIX") what isn't broken, and the definition of "isn't broken" is that it passed agreed-to tests based on agreed-to data. I would usually involve the customer in the development of both the tests and the test data. This is because they have (or at least SHOULD have) a good grasp of the data and know how they want the product to perform. It also places some degree of responsibility directly on the customer. Not much in the world is perfect, and some (VERY FEW) exceptions can occur where you can agree that there was an oversight by BOTH parties, and you might work out an agreement to modify a design on a lower-cost basis. Beyond that, you need to be A LOT more careful with your work commitments.

2. " ... works ... " is a relative term - at best! In the world of my occupation, "works" includes ACCURATE. Thus while your "formula" does not produce an error, whether it "WORKS" is far from settled. One area where you can be liable - regardless of your contract - is where you generate information which is in error. If your client EVER finds out the "formula" produces "incorrect" results and he has acted on the basis of these in a manner which has (or even MIGHT have) harmed any party - you could be in serious legal trouble. A brief example:

Suppose that the calculation is ~~ "ratio of objective to objective achieved". Further suppose that employees' status (e.g. firing / pay-raises / bonus) is - in part - based on these calculations. At some point, the customer decides the calcs are 'wrong' and demonstrates this via a separate (manual?) calculation. Next they review the personnel actions based on YOUR version and compare them with the actions they "shoulda - woulda - coulda" taken based on the revised calculation. Here the customer 'finds' that - based on the inaccurate information produced by YOUR error - they have 'paid out' more in salary and bonuses than was proper. GUESS WHO may be liable (in the strictest LEGAL sense) not only for the overpayment, but interest on the amounts as well as THE CLIENT'S legal costs ... and more?

This is a simplistic but all too real scenario. It almost certainly does not match your and your customer's situation in any detail. The central issue remains that " ... works ... " in the sense that it doesn't cause an error is not "WORKS" in any practical sense.


'___________________________________________________

You need not bother to reply, as I do not intend to get (back) on the soapbox of legal and/or moral business activities, only to caution you to be VERY sure that you do not join the rogues' gallery of "Stupid Crooks" - via the quick fix.



MichaelRed
m.red@att.net

Searching for employment in all the wrong places
 
The key point here is that the proposed fix to the problem will give a totally different answer.

My suggested fix, even in the earlier simple form, preserves the original logic except where the system would fall over. However, as a developer I would not apply my own fix without first discussing it with the end users. They might prefer that the extreme values that cause the overflow be excluded from the calculation. Fundamentally this is not an Access problem but a problem of data that is producing wild results.
 