Adding an index is always a trade-off: queries on the indexed columns will tend to run faster, but updates, inserts, and deletes will tend to get slower because of the extra overhead of maintaining the index.
I certainly don't think a 49,000-row table is too small to justify extra indexes. Your job, as DBA, should be to verify that the indexes result in an overall reduction in I/O. I would take the following steps:
1. Identify SQL that might benefit from the new indexes. Your application programmers might be able to help with this; they should be able to explain why they think the new indexes will help. You can also query v$sqlarea, looking for statements that use the proposed index columns in their WHERE clauses and have relatively high I/O (a sample query follows this list).
2. Verify that the new indexes actually help performance. Take the candidate queries to your test environment and run explain plans, both before and after adding the new indexes (and gathering statistics). The access paths should change to use the indexes (see the second sketch below).
3. Estimate the cost of maintaining the indexes by monitoring insert, update, and delete activity on the table (the third sketch below shows one way to do this).
4. Calculate the net benefit of each index, keeping in mind that disk reads are generally faster than writes. The higher the net benefit, the better the index; a rough worked example closes out the sketches below.
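
For step 1, something like the query below against v$sqlarea can surface candidate statements. This is only a sketch: MY_TABLE and the 1,000-read cutoff are placeholders for your own table and thresholds, and a text match on sql_text will pick up some noise you'll need to filter by eye.

   -- Sketch: MY_TABLE and the disk_reads cutoff are placeholders.
   SELECT sql_text,
          executions,
          disk_reads,
          buffer_gets,
          ROUND(disk_reads / GREATEST(executions, 1)) AS reads_per_exec
   FROM   v$sqlarea
   WHERE  UPPER(sql_text) LIKE '%MY_TABLE%'
   AND    disk_reads > 1000
   ORDER  BY disk_reads DESC;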
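
For step 2, the before-and-after comparison in the test environment might look like the sketch below. The table, column, and index names are made up, and DBMS_XPLAN assumes 9i or later; on older versions you'd query PLAN_TABLE directly.

   -- Sketch: my_table, indexed_col, and my_table_ix1 are made-up names.
   EXPLAIN PLAN FOR
     SELECT col1, col2
     FROM   my_table
     WHERE  indexed_col = 12345;

   SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

   -- create the candidate index, gather statistics, then repeat the explain plan
   CREATE INDEX my_table_ix1 ON my_table (indexed_col);

   BEGIN
     DBMS_STATS.GATHER_TABLE_STATS(
       ownname => USER,
       tabname => 'MY_TABLE',
       cascade => TRUE);
   END;
   /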
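
For step 3, one option (8i and later) is to turn on table monitoring and read the modification counts back after a representative period. Again a sketch with placeholder names:

   -- Sketch: MY_TABLE is a placeholder.
   ALTER TABLE my_table MONITORING;

   -- ...let the application run for a representative period, then:
   -- (counts reach the view on a delayed flush; on 9i and up you can force
   --  it with DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO)
   SELECT table_name, inserts, updates, deletes, timestamp
   FROM   user_tab_modifications
   WHERE  table_name = 'MY_TABLE';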
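
For step 4, the arithmetic is back-of-the-envelope, so here are some made-up numbers purely as an illustration: if step 1 shows the candidate queries doing tens of thousands of physical reads a day against this table and the new explain plans suggest the index cuts that to a few hundred, while step 3 shows only a few hundred rows modified a day, each paying a small index-maintenance cost, the index is a clear win. If instead the table takes tens of thousands of modifications a day and only a handful of queries would use the index, it probably isn't worth it.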