I'm currently writing an application that makes use of DataTables and DataViews populated from a SQL Server 2000 database.
One of the important factors of the application is processing speed (the time it takes a method to complete).
Due to this fact I've run some testing.
What surprised me is the time taken when dealing with DataViews.
I've got an initial DataTable which hits the database for the first 4000 records from a database table (for example).
I then run one of two processes, timing each to compare them.
The first process has a DataView (derived from the first DataTable).
I then do a loop.
For each entry in my first DataTable I get the key piece of information (i.e. the company code) and use it to set a RowFilter on the DataView, then cycle through the view to count its records (with the RowFilter applied there will only be one).
For 4000 iterations I clock up about 50 seconds (give or take).
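Roughly, the first process looks like this (a minimal C# sketch; the table, column, and values are illustrative stand-ins for my real 4000-row result set):

```csharp
using System;
using System.Data;

class RowFilterDemo
{
    static void Main()
    {
        // Illustrative table standing in for the result set from SQL Server.
        DataTable table = new DataTable("Companies");
        table.Columns.Add("CompanyCode", typeof(string));
        table.Rows.Add("A001");
        table.Rows.Add("B002");

        // Process one: a DataView derived from the DataTable.
        DataView view = new DataView(table);

        foreach (DataRow row in table.Rows)
        {
            string companyCode = (string)row["CompanyCode"];

            // Set the RowFilter for this row's company code.
            view.RowFilter = "CompanyCode = '" + companyCode + "'";

            // Count the records in the filtered view; with the
            // filter applied there is only one.
            Console.WriteLine(view.Count);
        }
    }
}
```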
My second process runs a similar iteration but makes use of a DataRow array generated by a call to DataTable.Select("CompanyCode = " + companyCode).
I then cycle through this DataRow array and count the records again (with this filter applied there will only be one).
For 4000 iterations I clock up about a second or two.
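The second process, sketched the same way (again with illustrative names and values):

```csharp
using System;
using System.Data;

class SelectDemo
{
    static void Main()
    {
        // Illustrative table standing in for the result set from SQL Server.
        DataTable table = new DataTable("Companies");
        table.Columns.Add("CompanyCode", typeof(string));
        table.Rows.Add("A001");
        table.Rows.Add("B002");

        // Process two: filter via DataTable.Select, which returns a DataRow array.
        foreach (DataRow row in table.Rows)
        {
            string companyCode = (string)row["CompanyCode"];

            DataRow[] matches = table.Select("CompanyCode = '" + companyCode + "'");

            // Count the records in the array; with this filter
            // applied there is only one.
            Console.WriteLine(matches.Length);
        }
    }
}
```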
This raises the question: should I only be using DataViews when I want their additional features (sorting and so on)? If all I'm looking to do is create a subset of the DataTable's data and work with it, whether that's counting the entries in the subset or editing them, is Select the better choice?
Can anyone shed any light on this?
I truly didn't think I'd be looking at this kind of difference in speed when changing my tactics.
Steve