bont,
I think the guys have already said what needs to be said about listening to experience. Don't dismiss it; being open to suggestions can make your job a lot easier. Judging by the type of question you've asked and the environment you're working in (shared hosting), the people who have responded have far more experience in much more complex environments - listen to them, they're only trying to help.
1. Yes, a database server would likely be more effective for what you want on the server side if there is any real volume involved. Otherwise you will be building a pseudo-filesystem and an XML-file-based data store. Take away the XML bit and you have C-ISAM... there is a reason RDBMSs became more prevalent.
2. Must have XML? OK, even better: use a database with native XML support, such as DB2, Oracle or SQL Server 2005. These all have Express editions which are free and highly functional, and you can then use both XPath and XQuery to get at your data very efficiently.
3. Can't install on your ISP? Use a database file and ADO.NET to access it, rather than installing a full database server.
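To sketch the "db file instead of db server" idea, here's a rough Python example with SQLite standing in for an ADO.NET database file (the table and data are made up for illustration - the point is that the "server" is just a file on disk):

```python
import sqlite3

# A single-file database holding the data that would otherwise live in
# XML files. Nothing to install on the host - the "server" is the file.
conn = sqlite3.connect(":memory:")  # use a real path like "music.db" on the host
conn.execute("CREATE TABLE artist (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO artist (name) VALUES (?)",
                 [("Miles Davis",), ("Nina Simone",)])
conn.commit()

# One indexed query replaces scanning and parsing a whole XML file.
names = [row[0] for row in
         conn.execute("SELECT name FROM artist ORDER BY name")]
print(names)  # ['Miles Davis', 'Nina Simone']
```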
4. If you want the client to do all the processing, then you have to SEND all the data to the client to process. In this case you need to redesign your XML files so that a single master file contains the top-level links (as already mentioned); you can then selectively drill down to the supporting files - either by passing an XML string in a hidden field when posting, or by using XMLHttpRequest to dynamically retrieve content (preferred).
5. If you can't change the XML files you're given, then write an import program to scan the new files and create an index file (in XML, or in a database) which fulfils this purpose. If it's used server side, consider caching it in memory to speed things up a bit more - if it is accessed a lot and rarely changes, it is better off in memory, as that removes the load/unload cost of each file access... mind you, I think IIS / ASP.NET half does this anyway with frequently accessed text-based files.
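A rough sketch of such an import pass, in Python for brevity (the one-file-per-artist layout and the element names are invented for the example):

```python
import os
import tempfile
import xml.etree.ElementTree as ET

# Pretend these are the XML files you were given and can't change:
# one small file per artist.
src = tempfile.mkdtemp()
for name in ("miles", "nina"):
    doc = ET.ElementTree(ET.fromstring(f"<artist><name>{name}</name></artist>"))
    doc.write(os.path.join(src, f"{name}.xml"))

# One-off import pass: scan every file once, record only the top-level
# info plus the filename, and write that out as a small index document.
index = ET.Element("index")
for fname in sorted(os.listdir(src)):
    root = ET.parse(os.path.join(src, fname)).getroot()
    entry = ET.SubElement(index, "entry", file=fname)
    entry.text = root.findtext("name")

index_xml = ET.tostring(index, encoding="unicode")
print(index_xml)
```

After this runs once, lookups go against the small index (on disk or cached in memory) instead of re-parsing every source file per request.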
6. XSLT and XPath will enable you to present whatever information you want to the client. Just remember that the parser still has to read the entire file from the OS, and will have to navigate down to the level of the nodes you are searching for to return a result. So try to keep the nodes you're after at the top level (just below the root) of the XML document; otherwise the parser will need to iterate down each level to get the information it needs, and that is where the process becomes intensive. The entire file will be "read" from the OS, but how much parsing is done depends on a) the parser (whether it supports streaming or not) and b) your XML design and the XPath you request.
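To illustrate the point about node depth, a small Python example (the two document shapes are made up, and ElementTree's limited XPath support stands in for a full XPath engine):

```python
import xml.etree.ElementTree as ET

# Two shapes for the same data: deeply nested vs. flat below the root.
deep = ET.fromstring(
    "<library><genres><genre><artists>"
    "<artist><name>Miles</name></artist>"
    "</artists></genre></genres></library>")
flat = ET.fromstring(
    "<library><artist genre='jazz'><name>Miles</name></artist></library>")

# The deep layout forces the query (and the parser) down four levels...
assert deep.findtext("genres/genre/artists/artist/name") == "Miles"
# ...while the flat layout answers the same question one level below root.
assert flat.findtext("artist/name") == "Miles"
```

Both return the same answer, but the flat design gives the parser far less tree to walk on every request.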
7. The more files the OS has to read, the more I/O is required, and thus the slower it will be.
8. ADO.NET supports XML files as a data source/provider - maybe this will help - I would expect that the ADO.NET team have made the process more efficient than you could in a short space of time.
9. To guarantee a client can see the data, do the processing server side and send ALL the data as XHTML. You cannot guarantee that a client will a) support JavaScript or b) support CSS - so whatever you do, you will have issues for that small percentage.
10. If you want to load ALL sub-nodes in a tree, then think about the fact that you would have to go to the server for every node you defined - e.g. if you create files at the artist level and have 50 artists in the tree, the server gets hit with 50 requests for data... may not sound like much, but for 100,000 users that's 5 million requests... now that WILL eat up your memory maintaining the request queue, and it only gets worse with more nodes.
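The arithmetic, for the record:

```python
# Back-of-the-envelope: per-node loading vs. one index request per user.
artists = 50
users = 100_000

per_node_requests = artists * users   # one hit per artist per user
index_requests = 1 * users            # one index file per user

print(per_node_requests, index_requests)  # 5000000 100000
```

Loading on demand via a single index cuts the request volume by a factor of 50 in this (made-up) scenario.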
11. You should make sure that you go through a single access point (e.g. one page) to retrieve the server-side content, even if it is XML-based, as this provides a level of abstraction that will help with any additional functionality and future changes to your architecture. The only reason not to do this is if you want an offline version, where all the XML files can be downloaded and referenced relative to the HTML page.
So... yes, you can do what you ask. At its simplest: create an index file as mentioned by ca8msm, send that to the client, and use AJAX to get the content of each sub-entity (e.g. artist) on request.
Even better, if you are building the XHTML manually, use JSON, which is much lighter on the wire than XML. XML, however, is obviously the better format for use with XSLT - which can be quite efficient.
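A quick illustration of the size difference (Python, with a made-up artist list serialised both ways):

```python
import json
import xml.etree.ElementTree as ET

# The same list serialised as XML and as JSON; JSON carries much less
# markup overhead, which is why it's the lighter choice when you build
# the XHTML yourself on the client.
artists = ["Miles Davis", "Nina Simone", "John Coltrane"]

root = ET.Element("artists")
for a in artists:
    ET.SubElement(root, "artist").text = a
as_xml = ET.tostring(root, encoding="unicode")
as_json = json.dumps(artists)

print(len(as_xml), len(as_json))
assert len(as_json) < len(as_xml)
```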
JavaScript, XPath and XSLT are your friends... as is Google.
You can even get a framework to do most of the work for you, e.g. dojo, which has a prebuilt dynamically loading tree widget.
And listen to clever people like ca8msm and onpnt - they can help make your work a lot easier.
A smile is worth a thousand kind words. So smile, it's easy! 