
Time accessing records


teknik
Programmer
Feb 27, 2001
I had written about the time needed to access records from a DBF which has 20,000 records. The solution given was to index on the fields for which the filter is set and then write a SQL query. But even this does not work. Is there any setting by which the time taken to execute a SQL query can be reduced?
 
Hi,
Can I have a look at the SQL code as used by you (those few lines)? Definitely, either the index you are using is not used by the SQL, or the SQL is not Rushmore optimised. The solution can be found. If you cannot for any reason post the code, you can email it to me. The only thing is you will miss the opinions of the many other experts in the forum.

ramani :-9
(Subramanian.G)
FoxAcc
ramani_g@yahoo.com
LET KNOW IF THIS HELPED. ENOUGH EXPERTS ARE HERE TO HELP YOU OUT! BEST OF LUCK :)
 
Besides knowing the exact SQL statement you are executing, it's important to know the environment you are working in. Is the data on the local workstation, or on a file server?

If on a file server, what's the connection to it? (LAN: speed, network topology; WAN: leased line, dialup?) What kind of server is it (Win9x, WinNT, Win2K, Novell, Unix, ...)? What are the hardware specs on it (CPUs, speed, total memory, memory per user, disk capacity and type: IDE, EIDE, SCSI, RAID level, etc.)? What other major apps are being supported?

What are the hardware specs of the workstation (CPUs, speed, total memory, disk capacity and type)? Are all the workstations of a similar configuration?

How long is it taking for the query and what are your best case expectations?

There are many things that influence the speed of a query (including table design), and I doubt you'll get a simple "change this one thing in your FoxPro program and everything will be much better" answer.

I had one situation where I first tested a very complex query that took ~10 seconds when it ran on my local workstation. I then moved the data to our Novell network, and on the same workstation it now took ~20 seconds. Then I installed the application at the customer's site and I was ready to impress them (especially since their workstations were all new and had twice the memory I had and ~75% faster processors). I fired up the same query on exactly the same data loaded on their network, and 15 minutes later it wasn't finished. That corporate network was extremely busy (thousands of users) and was not configured to supply large chunks of data in "real-time"; rather, it was a huge repository of archived documents and an e-mail server.

Note: By restructuring some of the tables, adding some strategic indexes, and requiring more exacting queries, I got it so the query above normally ran in less than 4 minutes on their network. This was more than acceptable to them. Before, they would have had to submit a research request that could take weeks to get back results, and if they hadn't worded it correctly (or the researcher misunderstood), the data was worthless!
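As a sketch of the kind of change Rick describes (a strategic index plus a more exacting query), here is a FoxPro illustration. The table DOCS and its fields are purely hypothetical, not from the application above:

```foxpro
* One-time: build index tags whose expressions match how the query filters,
* so Rushmore can use them (table and field names are hypothetical)
USE DOCS
INDEX ON DEPT TAG DEPT
INDEX ON DTOS(DOCDATE) TAG DOCDATE

* A more exacting query: every WHERE clause matches an index expression
SELECT DOCNO, TITLE ;
   FROM DOCS ;
   WHERE DEPT = "ACCT" .AND. DTOS(DOCDATE) >= "19940101" ;
   INTO CURSOR RESULT
```

The point is that each condition in the WHERE clause mirrors an index tag's expression exactly, which is what makes the query Rushmore-optimisable.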

Rick



 
The SQL query uses two files: MH, ACC
MH has fields: N_MH, C_MH_NAME, C_MHTYPE (DEBTORS, CREDITORS, etc.)
ACC has fields: N_MH, N_AC_HEAD, C_AC_NAME (ACC contains all the accounts such as Debtors, Creditors, Assets, Liabilities, etc.)

The data is on a stand-alone PC. It has a 2 GB hard disk and 64 MB RAM.

The query is :

SELECT ACC.N_AC_HEAD, ACC.C_AC_NAME, ACC.N_MH, ACC.N_SH ;
FROM ACC, MH ;
WHERE ACC.N_MH = MH.N_MH AND ;
MH.C_MHTYPE = "DEBTORS" ;
INTO CURSOR TEMP

 
1. I suggest having a compound index (CDX) on N_MH for MH and on N_MH for ACC.
2. I hope MH and ACC are the actual DBF file names and not ALIAS() names. I suggest you use the actual DBF file names in the SQL SELECT rather than alias names, since SQL has its own way of choosing the available index.
3. SET DELETED ON should be set; or else your SQL can include AND !DELETED().
4. So, the real SQL code I suggest (you can copy the following and use it as is):

SELECT ACC.N_AC_HEAD, ACC.C_AC_NAME, ACC.N_MH, ACC.N_SH ;
FROM ACC, MH INTO CURSOR TEMP ;
WHERE ALLT(MH.C_MHTYPE) == "DEBTORS" .AND. ;
ACC.N_MH = MH.N_MH

Hope this gives you the result.

5. Also I suggest you make the field ACC.N_MH as short as possible and use a uniform code-length convention, for example:
DR=DEBTOR
CR=CREDITOR
BK=BANK
AS=ASSETS
LI=LIABILITIES
CA=CURRENT ASSETS ...etc., whatever.
If required you can make it three characters. Once it is uniformly filled, you don't have to use ALLTRIM in the SQL statement. Another way is to make the "DEBTORS" value the same length as the field, for example "DEBTORS ". While I am not sure exactly how the SQL does the job, I am convinced we won't be trapped and that Rushmore is optimised with my suggestion.
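To illustrate point 5, here is a sketch of the padded-comparison idea. It assumes C_MHTYPE is 10 characters wide, which is a guess and may not match your table:

```foxpro
* Pad the search value to the full field width so no ALLTRIM is needed
* and an index on C_MHTYPE stays usable (the width of 10 is an assumption)
mtype = PADR("DEBTORS", 10)
SELECT ACC.N_AC_HEAD, ACC.C_AC_NAME, ACC.N_MH, ACC.N_SH ;
   FROM ACC, MH ;
   WHERE MH.C_MHTYPE == mtype .AND. ACC.N_MH = MH.N_MH ;
   INTO CURSOR TEMP
```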
ramani :-9
(Subramanian.G)
FoxAcc
ramani_g@yahoo.com
LET KNOW IF THIS HELPED. ENOUGH EXPERTS ARE HERE TO HELP YOU OUT! BEST OF LUCK :)
 
Thanks for your reply, but the problem still exists. The tables already have a CDX on N_MH, and I use actual file names, not alias names. I face this problem with all SQL queries when I try to access data from a table having a large number of records. And the time taken to execute the query is long only when the query is run for the first time; if it is run again immediately, the results are shown in a fraction of a second. Please help me.

Thank you
 
Add more memory! 64 MB is barely enough to start up VFP 6.0 - especially on WinME/WinNT/W2K, or if ANYTHING else is running on the system. Since the whole basis of VFP's speed (Rushmore) is built on getting indexes (actually bit arrays representing the indexes) into memory, when it doesn't have enough room it really loses all its advantages. Consider at least 128 MB; 256+ MB is really better. (I found a lot of mysterious VFP runtime errors just disappeared on my old Win98 system when I went from 64 MB to 96 MB, and even more when I went to 128 MB!)

Also, if you happen to have any "active" virus checkers running, check the configuration and make sure it's only scanning code files and not all files (or at least not FoxPro's tables, indexes, etc.).

Rick

 
Hi and thanks, but the code I am talking about is not used in VFP. It is written and executed in FOXPRO 2.5 for DOS. Are there any additional settings in CONFIG.FP ?
 
Sorry about the assumption of VFP (I switch between the forums and sometimes "forget" where I am).

Since you still haven't indicated the OS, I'll assume Win 9x. Again are you running any other applications at the same time - especially an active Virus checker?

On the FP side, are you running 2.5b? Are you running the standard (expanded) memory version - FOXPRO.EXE, or the extended memory version FOXPROX.EXE?

My copy of FPD 2.5b shows:
?VERS(1)
FoxPro 2.5b (X) [Sep 28 1993 11:47:04] Serial # 00-XXX-XXXX-XXXXXXXX


I'm running FOXPROX.EXE with the following CONFIG.FP:
resource=c:\foxpro25\foxuser
help=c:\foxpro25\foxhelp.dbf
tmpfiles=c:\foxtemp
sortwork=c:\foxtemp
editwork=c:\foxtemp
progwork=c:\foxtemp
MVCOUNT=512
MEMLIMIT = 80, 2048, 12288


Note: MEMLIMIT means: use up to 80% of the available memory; don't start if you can't get at least 2,048 KB; and don't take more than 12,288 KB even if it is available.


Rick

 
The OS I am using is Win 95. There are no other applications running at the same time. My copy of FoxPro shows:
Foxpro 2.5 (X) [Feb 26 1993 08:55:57] Serial # XXX-XXXX.
I tried the settings you specified in Config.fp, but it does not work.
 
Well, that version means you are not even running the "last" version of FPD 2.5. While I know MS no longer has the patches to upgrade to 2.5b online, they may be posted somewhere on the web. First consider getting them, or even "upgrading" to FPD 2.6a (July 13, 1994). This is essentially 3+ versions "improved" over the one you are running.

<soap box on>
In a field where software (as well as hardware) changes generations every 2-3 years, an 8 1/2 year old application can't be expected to function at its peak. I don't believe FPD 2.5 was even tested / "certified" to run under Win 3.x - just in a pure DOS environment - and sometimes it amazes me that it runs at all.
<soap box off>

Rick
 
I have now upgraded to FPD 2.6. The query now takes 2 seconds less than it used to. I would like to know whether it is reasonable for a SQL query to take 6-7 seconds to access 24,000 records. If the query is run again immediately it takes about 1 second.
 
I agree that you don't have enough RAM. You may try booting your computer from a DOS floppy and running your query; you may gain another few seconds from not loading Windows. Also try sending the output to a file instead of a cursor (Fox may be taking extra time to decide which chunks fit in memory and which chunks go to .tmp files).
 
I have found that a faster way on a large database is to loop through the DBF with
SCAN ... ENDSCAN
or
DO WHILE NOT EOF() ... ENDDO
Then use
IF ACC.N_MH = MH.N_MH AND MH.C_MHTYPE = "DEBTORS"
* copy needed fields to cursor/temp dbf
* and append to cursor/temp dbf
ENDIF

Of course, set a relationship between ACC and MH first!

By setting up several loops like this instead of SQL, and using loops instead of SUM statements, one report I modified took minutes instead of hours!
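A minimal sketch of this loop approach, using the MH and ACC tables from earlier in the thread. It assumes MH has a CDX tag named N_MH (as suggested above), and the cursor field widths are guesses that should be matched to the real table structure:

```foxpro
* Open both tables and let a relation position MH for each ACC record
SELECT 0
USE MH ORDER N_MH          && assumes a CDX tag N_MH exists on MH
SELECT 0
USE ACC
SET RELATION TO N_MH INTO MH

* Collect matching rows into a cursor (field widths are assumptions)
CREATE CURSOR TEMP (N_AC_HEAD N(6), C_AC_NAME C(30), N_MH N(4), N_SH N(4))

SELECT ACC
SCAN
   IF MH.C_MHTYPE = "DEBTORS"
      SCATTER MEMVAR
      INSERT INTO TEMP FROM MEMVAR
   ENDIF
ENDSCAN
```

The relation means FoxPro performs the seek into MH automatically on every record move, so the IF only has to test the already-positioned MH record.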
 
larryboy is on the right track; there's less overhead with SCAN ... ENDSCAN. (Be sure to let the user know the thing is running; sometimes they hit the reset key while waiting if the screen doesn't do something.)
 
If you are going to use LarryBoy's idea, to increase the speed even more:

1. If the main table where you are looking for data has no index to use, do not have an index open on the main table. Only have the index open on the lookup tables, and DO NOT use a relation.
[tt]
SELECT 0
* create a free collection table with the fields needed from the 2 tables
SELECT 0
USE lookup ORDER uniquecode
SELECT 0
USE main               && note: no index open
SCAN ALL
   IF conditions are NOT met
      LOOP
   ENDIF
   DO commonfunction
ENDSCAN
[/tt]

2. If the main table has an index you can use: && this is the faster way
[tt]
SELECT 0
* create a free collection table with the fields needed from the 2 tables
SELECT 0
USE lookup ORDER uniquecode
SELECT 0
USE main ORDER whatever
SEEK item2query
SCAN REST WHILE main.item == item2query
   IF conditions are NOT met
      LOOP
   ENDIF
   DO commonfunction
ENDSCAN
[/tt]

[tt]
PROCEDURE commonfunction
SCATTER MEMVAR             && fields from main
SELECT lookup
SEEK m.uniquecode
SCATTER MEMVAR             && adds the fields from lookup
INSERT INTO collection FROM MEMVAR
SELECT main
RETURN
[/tt]


Reasons for setting/not setting a relation.
In a SCAN or DO WHILE loop, when you move from record to record FoxPro will do a seek in the lookup table for you. BUT if the records you want from the main table are only 100 out of 200,000, you just did 199,900 too many seeks. Yes, it only takes a fraction of a second to do a seek, but 199,900 fractions add up to minutes when speed is required. Only do the seek if the record is one you want.

The opposite is also true: if you will be pulling lots of records from the main table, then set the relation first; coding the seek yourself will slow you down in that case.

David W. Grewe
Dave@internationalbid.com
ICQ VFP ActiveList #46145644
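Dave's two cases can be sketched as follows. This is a sketch only: the tables MAIN and LOOKUP, the tag UNIQUECODE, and the fields QTY and DESCRIPT are all hypothetical placeholders:

```foxpro
SELECT 0
USE lookup ORDER uniquecode
SELECT 0
USE main

* Case: most MAIN records will be used - let a relation do the seeks
SET RELATION TO uniquecode INTO lookup
SCAN
   * lookup is already positioned on the matching record here
   ? main.uniquecode, lookup.descript
ENDSCAN

* Case: only a few records qualify - clear the relation and seek manually,
* so the many records that fail the filter cost no seek at all
SET RELATION TO
SCAN
   IF main.qty > 0         && hypothetical filter: most records fail this
      SELECT lookup
      SEEK main.uniquecode
      SELECT main
      ? main.uniquecode, lookup.descript
   ENDIF
ENDSCAN
```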
 
Teknik,
I didn't see it mentioned in the thread anywhere, but I can explain one part of your problem. The reason a re-query immediately afterward returns so quickly is that the data you are fetching is already cached.

Now, your second issue, I'm going to assume a few things, but I think we might be able to speed this up. (Similar ideas from above, but I have a slightly weird view on these things that sometimes works...)

Based on your code below:

SELECT ACC.N_AC_HEAD, ACC.C_AC_NAME, ACC.N_MH, ACC.N_SH ;
FROM ACC, MH ;
WHERE ACC.N_MH = MH.N_MH AND ;
MH.C_MHTYPE = "DEBTORS" ;
INTO CURSOR TEMP

Now, try doing the following:

I'm assuming that the MH table is the more significant one, since that is where your "criteria" really is. First, if it does not already exist, create the following index:
(With MH in the actively selected work area):

INDEX ON C_MHTYPE TAG C_MHTYPE

(With ACC in the actively selected work area):

INDEX ON N_MH TAG N_MH

(Don't mean to insult your intelligence, but ONLY do this once, from the command line; don't put it in your code, as it will take unnecessary time.)

Next, if I'm going to use a basic structure over and over again, I usually don't use CURSORs with SELECT statements. Mostly, because if something else happens after the actual select, and it's a big one, I don't want to go through all that hassle again. Just a personal preference. In this case, we will not use SELECT anyway, to see if we can get better performance from 'hack by hand'. To that end, create a table (I'd call it TDEBTORS, or something; the prefix T tells me I can always use a ZAP command on it with confidence, because it is strictly Temporary). Make the table with the following fields: ACC.N_AC_HEAD, ACC.C_AC_NAME, ACC.N_MH, ACC.N_SH, and their appropriate types and sizes. Now, run this code:

IF NOT USED('TDEBTORS')
   SELECT 0
   USE TDEBTORS
ENDIF
* empty the collection table every run, even if it was already open
SELECT TDEBTORS
ZAP
*
IF NOT USED('ACC')
   SELECT 0
   USE ACC
   SET ORDER TO N_MH
ENDIF
*
IF NOT USED('MH')
   SELECT 0
   USE MH
   SET ORDER TO C_MHTYPE
ENDIF
*
SELECT MH
SEEK 'DEBTORS'
IF FOUND()
   DO WHILE MH.C_MHTYPE = 'DEBTORS' .AND. .NOT. EOF()
      SELECT ACC
      SEEK MH.N_MH
      * loop so every matching ACC record is copied, not just the first
      DO WHILE ACC.N_MH = MH.N_MH .AND. .NOT. EOF()
         SCATTER MEMVAR
         SELECT TDEBTORS
         APPEND BLANK
         GATHER MEMVAR
         SELECT ACC
         SKIP
      ENDDO
      SELECT MH
      SKIP
   ENDDO
ENDIF
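For reference, the temporary TDEBTORS table Scott describes could be created once from the command window like this. The field widths here are assumptions; match them to your actual ACC structure:

```foxpro
* One-time creation of the temporary collection table (widths are guesses)
CREATE TABLE TDEBTORS (N_AC_HEAD N(6), C_AC_NAME C(30), N_MH N(4), N_SH N(4))
USE
```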


In a database with only 20,000 records, with these indexes, (assuming I have what it is you are trying to capture correct from looking at your SQL code), this should run very, very fast. If this is being accessed across a Novell Network that is using a version of Netware 4.0 or higher on a 10Base-T connection... well, good luck. I have had HUGE problems getting things to work quickly across those networks, but usually on tables that have 1,000,000+ records. Please let me know if this helps.
-Scott
 