I don't know what you wanted to learn or test with this.
First thing I ask myself: Why even use _Calcmem? It's not a specially accelerated variable, it only plays a role as the "memory" of the calculator window.
I also saw you using _diarydate in a previous post. Why? Who told you to use these system variables? Do you think that because they already exist and don't need to be declared, they give you a performance advantage? They are bound to system windows of VFP, the calculator and the calendar, and used there. That rather means extra work for these windows to update if you change the variable values. Even if the windows are not visible, you trigger controlsource behavior of invisible windows, which takes extra time for nothing.
Just use normal variables, Klaus. The only use of _Calcmem or _Diarydate is with those windows, when those windows are made visible. If the windows they are defined for are not visible, these variables only have disadvantages when used as plain memory variables.
But maybe what you wanted to learn has nothing to do with optimal performance at all. Regarding performance, the bigger sin is to check all the non-squares. Why? Simply turn the problem around: if you want to calculate squares up to 2000, then don't iterate over all numbers and test whether they are perfect squares; go the other way around and compute squares until a square becomes larger than 2000.
Code:
Clear
? "Number............SQRT"
Local lnI, lnSquare
lnI = 0
Do While .T.
   lnI = m.lnI + 1
   lnSquare = m.lnI * m.lnI
   If m.lnSquare > 2000
      Exit
   Else
      ? m.lnSquare, m.lnI
   Endif
Enddo
Notice "perfect square" is just a fancy term for "square number", which points out that it's the square of an integer, so its square root also is an integer. See
Wikipedia said:
a square number or perfect square is an integer that is the square of an integer
That the square of an integer by definition has that integer as its square root, and that this square root has no decimals, means you don't need to check that; you already know it, as you put it in at the outset.
All numbers between these perfect squares are not even checked, so you save a lot of processing time and iterations. It's even cheaper to compute the one too-high square number once than to compute Sqrt(2000) as the upper limit of a For loop, because Sqrt() is a more complex calculation than multiplying. Also, your code won't be optimized by some intelligent optimization algorithm of a compiler that notices the Sqrt() calculation could stop the moment it encounters the first decimal place <> 0. That's something you may perhaps expect when programming in C/C++ or .NET languages, but I doubt even those compilers are advanced enough to discover the context of the overall code and draw such conclusions. Even if you had a function at hand that calculated a whole square plus a remainder, it wouldn't be faster than simply computing the squares.
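For comparison, here is a minimal sketch (not from the original post) of the For loop variant mentioned above, with Sqrt(2000) as the upper limit. VFP evaluates the loop bound once at loop entry, so the Sqrt() is computed only once, but it still is the costlier way to establish the limit:
Code:
* Sketch: same output as above, but with a For loop
* bounded by Sqrt(2000), evaluated once at loop entry.
Local lnI
For lnI = 1 To Sqrt(2000)
   ? m.lnI * m.lnI, m.lnI
Endfor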
It's not measurable with Seconds(), or put more simply: your original code also produces its output faster than you can read it. But raise the limit and you get into territory where it really matters.
The major lesson from a performance point of view is that it often pays to think about the inverse problem.
I don't know if you know about the Sieve of Eratosthenes. It's a very iconic example of this inversion. Instead of testing whether a number isn't prime by finding a factor of it, you compute the primes by eliminating all their multiples, thereby excluding them from further investigation. For the simple use case of checking just one number for primality, it is over the top to compute all primes up to and including it, but when you want to generate all primes below some upper limit, that's the perfect way to do it.
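As a minimal sketch (not from the original post, limit of 100 chosen arbitrarily), the sieve could look like this in VFP:
Code:
* Sketch: Sieve of Eratosthenes up to an assumed limit of 100
Local lnLimit, lnI, lnJ
lnLimit = 100
Dimension laIsPrime(m.lnLimit)
laIsPrime = .T.     && start by assuming every number is prime
laIsPrime(1) = .F.  && 1 is not prime by definition
For lnI = 2 To Sqrt(m.lnLimit)
   If laIsPrime(m.lnI)
      * eliminate all multiples of lnI, starting at lnI squared
      For lnJ = m.lnI * m.lnI To m.lnLimit Step m.lnI
         laIsPrime(m.lnJ) = .F.
      Endfor
   Endif
Endfor
For lnI = 2 To m.lnLimit
   If laIsPrime(m.lnI)
      ?? Transform(m.lnI) + " "
   Endif
Endfor
Notice the outer loop only needs to run up to Sqrt(lnLimit): any composite number below the limit has at least one factor no larger than that.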
As a final conclusion: if you wanted to write a function that checks whether a number is a perfect square, then I'd test whether the square of the rounded square root is the original number. That a result has no decimals may be merely an effect of approximation once you go into the range of very high numbers.
For example:
Code:
? sqrt(100000001),Int(Sqrt(100000001)), sqrt(100000001) = Int(Sqrt(100000001)), Int(Sqrt(100000001))*Int(Sqrt(100000001))=100000001
Your check would tell you 100000001 is a perfect square, because the unrounded result Sqrt(100000001) compares equal to the rounded Int(Sqrt(100000001)). But that's just luck; bad luck or good luck depends on your point of view. And this already happens far from the end of the range of numbers supported by integers: 100 million is far less than a few billion.
It is best to check whether Int(Sqrt(x)) squared actually results in x again.
So a good perfect square check function would be
Code:
Function IsPerfectSquare(x)
   Local r
   r = Floor(Sqrt(m.x))
   Return (m.r * m.r = m.x)
Endfunc
It can also report wrong results once the gap between adjacent floating point values of x becomes 2 or larger. If you want a function that only returns verifiable truth, it would also need to check whether its own calculations stay within the number range that allows exact results, i.e. such a function should know the limit of numbers it can verify, which is at about 2^53.
You can see it this way:
Code:
? 2^52+1 = 2^52 && prints .f.
? 2^53+1 = 2^53 && prints .t.
Clearly n+1 is not n, but at about 2^53 the precision step of double floating point becomes larger than 1, so that 2^53+1 is converted to the same floating point representation as 2^53, and so VFP tells you they are the same. The comparison isn't comparing the expressions 2^53+1 and 2^53, which clearly differ; it compares the results, which are the floating point representations of these expressions after they are compiled and the operations are executed. You can't even trust that r*r=x exactly hits x and not x+1 or x-1, once r*r is beyond that limit. So numbers above 2^53 would need more precise data types to be verifiable.
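A guarded version could enforce that limit itself. Here is a minimal sketch (the function name and the choice to return .F. for unverifiable numbers are my own, not from VFP; raising an error would be another valid design):
Code:
* Sketch: only claim .T. when the result is verifiable
* within the exact integer range of doubles (up to 2^53).
Function IsVerifiedPerfectSquare(x)
   Local r
   If m.x > 2^53
      Return .F.  && beyond the exactly representable range, refuse to verify
   Endif
   r = Floor(Sqrt(m.x))
   Return (m.r * m.r = m.x)
Endfunc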
Chriss