While there are third-party tools that can report Disk I/O by file, the proper use of that information is for balancing Disk I/O between drives (or mount points). If your database is spread across 120 disk drives, you want approximately the same amount of data to come from each drive.
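You don't even need a third-party tool for the per-file numbers: Oracle keeps cumulative physical I/O counters in `v$filestat`. A sketch, assuming you have SELECT privilege on the V$ views:

```sql
-- Physical reads/writes per datafile since instance startup
-- (join v$filestat to v$datafile to get the file names)
SELECT df.name, fs.phyrds, fs.phywrts
  FROM v$filestat fs
  JOIN v$datafile df ON df.file# = fs.file#
 ORDER BY fs.phyrds + fs.phywrts DESC;
```

Files that dominate this list are the candidates for moving to a less busy drive.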
The SGA (my guess as to what you mean by the 'oracle cache') can be tuned, but I find it simplest to tune by looking at the SQL statements that do the most disk reads (physical I/O) and the ones that do the most SGA reads (logical I/O), not by looking at files.
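To find those statements, `v$sql` carries cumulative counters per statement. A sketch of the two orderings, assuming Oracle 10g or later (earlier versions use `address`/`hash_value` instead of `sql_id`); the `ROWNUM` wrapper just limits the output to the top 10:

```sql
-- Top 10 statements by physical I/O (disk reads)
SELECT * FROM (
  SELECT sql_id, disk_reads, executions, SUBSTR(sql_text, 1, 80) AS sql_text
    FROM v$sql
   ORDER BY disk_reads DESC
) WHERE ROWNUM <= 10;

-- Top 10 statements by logical I/O (buffer gets from the SGA)
SELECT * FROM (
  SELECT sql_id, buffer_gets, executions, SUBSTR(sql_text, 1, 80) AS sql_text
    FROM v$sql
   ORDER BY buffer_gets DESC
) WHERE ROWNUM <= 10;
```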
High physical I/O can mean indexes are not being used, causing full table scans; you can usually add indexes that dramatically reduce physical I/O. High logical I/O can mean the indexes are a poor fit for the SQL queries you run most often; better indexes may be needed.
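As a concrete (hypothetical) case: if the top disk-read offender turns out to filter on an unindexed column, adding the index and re-checking the plan is usually the whole fix. The table and column names here are made up; the pattern is what matters:

```sql
-- Hypothetical table/column: queries filtering on customer_id were
-- doing full table scans of orders
CREATE INDEX orders_cust_ix ON orders (customer_id);

-- Confirm the optimizer now uses the index instead of a full scan
EXPLAIN PLAN FOR
  SELECT * FROM orders WHERE customer_id = :cust;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```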
In general, individual blocks of a given table (often 4k to 16k each) are cached in the SGA, not whole tables, whole tablespaces, or whole files. (If you have a small table, perhaps a single row, or say the beginning and ending ZIP codes of the 50 states, it may be worthwhile to cache the entire table, but for most tables, let Oracle move blocks in and out of the SGA automatically.)
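If you do decide a tiny, hot table should stay resident, one way (a sketch, assuming you have sized a KEEP buffer pool via `db_keep_cache_size`; the table name is hypothetical) is to assign it to the KEEP pool:

```sql
-- Blocks read from this table go to the KEEP buffer pool,
-- which Oracle is much less eager to age out
ALTER TABLE state_zip_ranges STORAGE (BUFFER_POOL KEEP);
```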
You want to make the SGA as large as possible without causing swapping, as reads from the SGA are about 30,000 times faster than reads from disk. (Dirty reads, done dirt cheap)
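On 10g and later, Oracle will even estimate the payoff of growing the SGA for you. A sketch:

```sql
-- Predicted physical reads at various candidate SGA sizes
SELECT sga_size, sga_size_factor, estd_physical_reads
  FROM v$sga_target_advice
 ORDER BY sga_size;

-- Then grow it (with SCOPE=SPFILE this takes effect at next startup)
ALTER SYSTEM SET sga_target = 4G SCOPE = SPFILE;
```

Check free memory on the host first; an SGA large enough to push the OS into swap is worse than a smaller one.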
I tried to remain child-like; all I achieved was childish.