
DB problems

Status
Not open for further replies.

jfreak53

IS-IT--Management
Apr 30, 2004
44
GT
Ok, so my previous questions have all been worked out, thanks everyone. Now I have one regarding DB problems. I have MySQL DAC installed and I'm querying data from a remote MySQL DB. Everything works fine; the only problem is that one of my clients has a HUGE DB, we're talking 10K-plus items in it. The way the program currently gets and refreshes items is: connect to the server, build the SQL, activate it, copy all the info, then disconnect. This is fine for smaller DBs, but with this huge one it's very cumbersome and the app isn't working the way it should. So is there any way to make this faster? Is there a way to cache all the DB info locally in the program, read only from the program's local tables, and make updates remotely?

Or is there a better way for this?

Thanks in advance.

Also, one of the queries is very large. It is categories, and I query each one to get all its subs, so it takes a long, long time. Would it be easier to download both tables into a local table component? And does anyone know if I can then query that local table instead of the remote one, to cut down on time?
 
Is there a legitimate reason why you are copying all the DB data? You should fashion your SQL statement so it only requests the data your program needs to function. Doing that should be much more efficient all the way around, especially in remote situations where all the data must be copied over the network.

It is not possible for anyone to acknowledge truth when their salary depends on them not doing it.
 
Yes, actually, my SQL statements grab only what's needed. The problem is that for the program to function, the WHOLE list of categories is needed no matter what; the user doesn't know what they are doing if the whole list of cats isn't there. In other scenarios it only gets what's needed, though. For instance, with manufacturers it only gets the name and ID, which is all I need, and the same with categories, since that's all I need there.

But it is still slow, as you can imagine, with 10K categories. Am I doing it wrong by having it connect then disconnect? Should I leave it connected, and for each one that I pull create a new TSQL component to keep it connected and stored?
 
For a big database, you might have to work it so you are pre-caching parts of the category list. It might also pay off to check the design and maintenance of the database in question so it works better, and to check how its current indexes are used, then compare that with how the programs access it. In this program's case, an index on the category field would probably be very prudent, so the whole table is not swept for this category query.

It is not possible for anyone to acknowledge truth when their salary depends on them not doing it.
 
Ok, so now the big question: how do I cache a DB in Delphi? I have been working with DBs in Delphi for many years but never learned how to cache one. How do I do this?

Right now it uses the category_id field as the table's index, so that's already set and actually works quite nicely. I also cannot change it, so I have to work with what is there currently.

I think the caching might be what I want to do, but how?

Thanks for all the help.
 
Can you elaborate on specifically what is slow?

Is the entire program slow to respond to everything the user does, or is it slow only on tasks related to the 10K categories? Is the program slow to respond while waiting for the server to chew on big/slow queries?

In general, how fast is the connection to the server?

What are you doing with the category information? Are you expressing the data using standard visual VCL (TListBox, TComboBox, etc...)?

Is your interface more of a drill-down interface, or is the user supposed to choose one category out of ten thousand? Can the categories be normalized so a given category can be chosen by drilling down through only a handful of categories that are SELECTed from the database just before the user has to select one? If so, this could drastically reduce the total data pulled into memory.


Steve.
 
No, the program itself is fast; if I use a localhost connection to the DB it is fast. It is the fact that it is pulling 10K cats over the net. I have a 1024 connection, so that's not it; it gets slow waiting for the cats. The other queries are fine: downloading the manufacturers or the VAT list is fast. It's when the returned list is huge, like the cats.

Once the category info is received I iterate through it to get all the subs. I know this is slowing things down, but I don't have a choice: sub-cats are linked through a second xref table. So for each cat I run the procedure again to find its sub-cats, so it double-dips, but it doesn't select anything it doesn't find, so it's not pulling all the same data.

But it's not the only one; when I have a list of 10K manufacturers it is slow too. It's having to download it all, I think. Once it's downloaded, it loads into a TListBox VCL component.

The user is supposed to select one in 10K. I don't have another way to do it, either, since the program is made to be quick and to the point, so I can't go with a drill-down sequence; it would take too long to find what the user wants to get to.

The cat query is:

procedure GetCats(lbox: TListBox; cat: string = '0'; level: integer = 0);
var
  mine: TMySQLQuery;
  spacer: string;
begin
  // Indent sub-categories one space per nesting level
  spacer := StringOfChar(' ', level);

  // Create with a nil owner: the original passed the uninitialized
  // variable "mine" as Owner, which is a bug
  mine := TMySQLQuery.Create(nil);
  try
    mine.Database := Form1.sqdb;
    mine.SQL.Text :=
      'SELECT category_id, category_child_id, category_name ' +
      'FROM jos_vm_category, jos_vm_category_xref ' +
      'WHERE jos_vm_category_xref.category_parent_id=' + cat + ' ' +
      'AND jos_vm_category.category_id=jos_vm_category_xref.category_child_id ' +
      'ORDER BY jos_vm_category.list_order, jos_vm_category.category_name ASC';
    mine.Open;

    while not mine.Eof do
    begin
      lbox.Items.AddObject(spacer + mine.FieldValues['category_name'],
        TObject(StrToInt(mine.FieldValues['category_id'])));

      // Recurse for this category's children: one query per category,
      // which is what makes this so slow over a remote link
      GetCats(lbox, mine.FieldValues['category_id'], level + 1);

      mine.Next;
    end;

    mine.Close;
  finally
    mine.Free; // Free, rather than calling Destroy directly
  end;
end;

But like I said, they are all slow, and none of the others are like this; the rest are just simple queries.

Now one question. I think I have a solution myself, but I need to run it by someone else. Would it be quicker if I created two TTable components and pulled both the xref and the main cat table into them locally, all records once per session, THEN ran my query on the local tables in the program? It would still iterate through everything, but since it's local, would it take half the time? Yes or no?
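For what it's worth, the "pull both tables once, then work locally" idea could be sketched roughly like this with a TClientDataSet. This is only a hedged sketch, not tested against the poster's project: it assumes the same MySQL DAC components as the code above, and the procedure and component names (LoadAllCats, FillCats, cds) are illustrative.

```pascal
// Fetch the whole joined category tree in ONE round trip and copy it
// into an in-memory TClientDataSet.
procedure LoadAllCats(cds: TClientDataSet);
var
  q: TMySQLQuery;
  i: Integer;
begin
  q := TMySQLQuery.Create(nil);
  try
    q.Database := Form1.sqdb;
    q.SQL.Text :=
      'SELECT x.category_parent_id, c.category_id, c.category_name ' +
      'FROM jos_vm_category c ' +
      'JOIN jos_vm_category_xref x ON c.category_id = x.category_child_id ' +
      'ORDER BY c.list_order, c.category_name';
    q.Open;

    cds.Close;
    cds.FieldDefs.Assign(q.FieldDefs);
    cds.CreateDataSet;
    while not q.Eof do
    begin
      cds.Append;
      for i := 0 to q.FieldCount - 1 do
        cds.Fields[i].Value := q.Fields[i].Value;
      cds.Post;
      q.Next;
    end;
    // Local index so child lookups don't scan all 10K rows
    cds.IndexFieldNames := 'category_parent_id';
  finally
    q.Free;
  end;
end;

// Recurse entirely in memory: no network traffic after the single fetch.
procedure FillCats(cds: TClientDataSet; lbox: TListBox;
  parent: Integer = 0; level: Integer = 0);
var
  kids: TStringList;
  i: Integer;
begin
  kids := TStringList.Create;
  try
    // Snapshot this parent's children first, because the recursion
    // below moves the shared dataset cursor
    if cds.FindKey([parent]) then
      while (not cds.Eof) and
            (cds.FieldByName('category_parent_id').AsInteger = parent) do
      begin
        kids.AddObject(cds.FieldByName('category_name').AsString,
          TObject(cds.FieldByName('category_id').AsInteger));
        cds.Next;
      end;
    for i := 0 to kids.Count - 1 do
    begin
      lbox.Items.AddObject(StringOfChar(' ', level) + kids[i], kids.Objects[i]);
      FillCats(cds, lbox, Integer(kids.Objects[i]), level + 1);
    end;
  finally
    kids.Free;
  end;
end;
```

The key win is not that the local iteration is faster, but that the 10K-query recursion collapses into a single SELECT; the per-query network round trips are where the time goes.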
 
Ok so now the big question, how do I cache a DB in Delphi? I have been working with DB's in delphi for many years but never learned how to cache one, how do I do this?

Sorry, maybe that's not the right word (but it might be). On similar programs in the past, I wouldn't pull the entire list for the user to select from, but would only pull what they see and hold it in memory. Then if they needed to see more, the program would pull a little more data (usually a screenful). If they wanted to go back, I wouldn't query the DB but would use the part of it already cached in the program's memory.

That turned out to be quicker and more responsive for the end user, since most of the DB time was spent while the screen was being drawn, and there wasn't a long loading sequence to get the list.

But as was said, given that much data for such a sequence (large data sets usually aren't best for pure browsing interfaces), the best path would probably be a UI change so the user can get to the desired data faster. The users will probably request this as well, once one of them says something to the effect of "why do I have to spend all my time going through 10K items just to get what I want?"

Usually when a performance issue like this arises, the best thing to do is think through all your processes and change them to fit the new requirements (large data, in this case).

It is not possible for anyone to acknowledge truth when their salary depends on them not doing it.
 
I would implement a 'Google'-like search, i.e. firing queries as the user types part of the word, and LIMIT the number of records returned (to 100 or so).

/Daddy

-----------------------------------------------------
What You See Is What You Get
Never underestimate tha powah of tha google!
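The incremental-search suggestion above might look something like this. A hedged sketch only: it assumes the thread's MySQL DAC components, that TMySQLQuery supports ParamByName (most DAC-style components do, but check yours), and the edtSearch/lbox names are made up for illustration.

```pascal
// Re-query on each keystroke, returning at most 100 matches
// instead of all 10K categories.
procedure TForm1.edtSearchChange(Sender: TObject);
var
  q: TMySQLQuery;
begin
  if Length(edtSearch.Text) < 2 then
    Exit; // wait for a useful prefix before hitting the server
  q := TMySQLQuery.Create(nil);
  try
    q.Database := sqdb;
    // A parameter keeps user-typed text out of the SQL string itself
    q.SQL.Text :=
      'SELECT category_id, category_name FROM jos_vm_category ' +
      'WHERE category_name LIKE :p ORDER BY category_name LIMIT 100';
    q.ParamByName('p').AsString := edtSearch.Text + '%';
    q.Open;

    lbox.Items.BeginUpdate; // suppress repainting while refilling
    try
      lbox.Items.Clear;
      while not q.Eof do
      begin
        lbox.Items.AddObject(q.FieldByName('category_name').AsString,
          TObject(q.FieldByName('category_id').AsInteger));
        q.Next;
      end;
    finally
      lbox.Items.EndUpdate;
    end;
  finally
    q.Free;
  end;
end;
```

With an index on category_name, a prefix LIKE plus LIMIT stays fast even on a 10K-row table, and the wire only ever carries 100 rows.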
 
Well I think I want to go with caching the DB locally. But like I said, how do I cache a DB locally and then run my SQL commands on that local version?
 
Client datasets can be used for that, as can in-memory databases. I use Accuracer and AbsoluteDatabase; kbmMemTable is another.

But you'll still have to load the list. Loading it into memory will be pretty quick, however.

You might consider making your list not visible while loading it. Definitely make sure that you aren't sorting the list within the control.
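The two tips above (hide or freeze the list while loading, and don't sort inside the control) could be sketched like this; a hedged illustration using the thread's own GetCats routine and an assumed lbox name.

```pascal
// Suspend per-item repainting and in-control sorting while
// 10K items are appended, then repaint once at the end.
lbox.Sorted := False;     // let the ORDER BY in the query do the sorting
lbox.Items.BeginUpdate;   // no repaint per AddObject call
try
  GetCats(lbox);          // the loading routine from earlier in the thread
finally
  lbox.Items.EndUpdate;   // single repaint here
end;
```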

Make sure your query is read-only or forward-only so it is super fast.

You might also consider the Virtual TreeView component:
 
