
Audit: perf and space impact?

Status
Not open for further replies.

PaulinaK

IS-IT--Management
Jan 8, 2004
DE
Question: is it a good/bad idea to turn statement auditing on for all INSERTs, UPDATES, DELETEs on all tables? Purpose: we need to be able to go back and see who changed what data in the db.

How much of an impact would it have on performance (overhead)? How much of a space issue would it be?

Sorry if this question does not make any sense at all. I am new to Oracle...

Thanks in advance for any thoughts
 
Paulina,

The Oracle AUDIT capability is, frankly, not the best method for implementing what we generally accept as an audit trail; it tends to chew up more-than-necessary space and CPU cycles, while not giving enough audit information.

The best method, IMHO, is to use "database triggers" for auditing purposes. I use audit triggers to show WHEN did WHO make WHAT changes (including OLD and NEW values and whether the change resulted from an INSERT, UPDATE, or DELETE).

Since you are new to Oracle, you might need some syntactical help with such trigger(s), but I'll start with answering your immediate question: Oracle AUDIT is not the best audit.
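To give you a head start, here is a rough sketch of what such an audit trigger can look like. The table and column names (emp, emp_audit, empno, sal) are made up for illustration; adapt them to your own schema:

```sql
-- Hypothetical audit table; one row per audited change.
CREATE TABLE emp_audit (
  changed_by VARCHAR2(30),
  changed_on DATE,
  operation  VARCHAR2(6),   -- 'INSERT', 'UPDATE', or 'DELETE'
  empno      NUMBER,
  old_sal    NUMBER,
  new_sal    NUMBER
);

CREATE OR REPLACE TRIGGER emp_audit_trg
AFTER INSERT OR UPDATE OR DELETE ON emp
FOR EACH ROW
DECLARE
  v_op VARCHAR2(6);
BEGIN
  -- The INSERTING/UPDATING/DELETING predicates tell you which DML fired.
  IF INSERTING THEN
    v_op := 'INSERT';
  ELSIF UPDATING THEN
    v_op := 'UPDATE';
  ELSE
    v_op := 'DELETE';
  END IF;
  INSERT INTO emp_audit
  VALUES (USER, SYSDATE, v_op,
          NVL(:new.empno, :old.empno), :old.sal, :new.sal);
END;
/
```

On an INSERT the :old values are NULL, and on a DELETE the :new values are NULL, so both the before and after images end up in the audit table.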

[santa]Mufasa
(aka Dave of Sandy, Utah, USA @ 22:35 (14Jan04) GMT, 15:35 (14Jan04) Mountain Time)
 
Thanks Mufasa. This is what I suspected based on common sense.

Quick follow-up question regarding UPDATE transactions and their audit (while on this topic): does the Oracle Audit feature let me clearly store/see old and new values, or do I have to dig through the DML statement to see the new value? And where can I find the old value then?

Thanks again for your help.
 
PaulinaK -
Auditing will tell you virtually nothing about an update. About the best you will get is who did the update, what table they updated, and when they did it. It will not tell you which rows or how many rows, much less old or new values.
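For comparison, event-based statement auditing is enabled roughly like this (assuming audit_trail=db is set in the init file; the schema and table names are examples only):

```sql
AUDIT INSERT, UPDATE, DELETE ON scott.emp BY ACCESS;

-- The trail records who, what object, and when -- but no row values:
SELECT username, action_name, obj_name, timestamp
FROM   dba_audit_trail
WHERE  obj_name = 'EMP';
```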

Triggers, on the other hand, allow you to capture both the old and new values, as well as the row's primary key value.

Elbert, CO
1325 MST
 
Hi PaulinaK

Hmm. FUD answers

It's not a good idea to audit all columns on all tables for all transactions – it's going to hurt performance on DML statements (INSERT, UPDATE, and DELETE). If your system is, for example, 50% DML and 50% queries, then performance drops a lot in the 50% DML part – but the biggest problem is contention under concurrent activity, so performance may drop all over the system.

If you really need a thorough audit (on some public systems it’s very important for security reasons), then I would move to 9i and implement the audit from the database, not from triggers. I would use tools like LogMiner and other built-in features of the database. And it’s fast – very fast.
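For the curious, a LogMiner session follows roughly this pattern (the file paths here are purely illustrative; in 8i you must first generate a dictionary file with DBMS_LOGMNR_D.BUILD):

```sql
BEGIN
  -- Register the (archived) log file to analyze.
  DBMS_LOGMNR.ADD_LOGFILE(
    LogFileName => '/u01/arch/arch_1234.arc',
    Options     => DBMS_LOGMNR.NEW);
  -- Start the analysis using the previously built dictionary file.
  DBMS_LOGMNR.START_LOGMNR(
    DictFileName => '/u01/logmnr/dictionary.ora');
END;
/

-- Reconstructed DML, including the statement that would undo it:
SELECT username, timestamp, sql_redo, sql_undo
FROM   v$logmnr_contents
WHERE  seg_name = 'EMP';

EXECUTE DBMS_LOGMNR.END_LOGMNR;
```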

Read more on Oracle Technology Network

Regards
Allan
Icq: 346225948
 
Allan,

Last time I checked, "FUD" stood for "F***ed Up Disinformation". Is that what you intended to communicate? If so, can you clarify what answers you believe are "FUD"?

[santa]Mufasa
(aka Dave of Sandy, Utah, USA @ 23:43 (15Jan04) GMT, 16:43 (15Jan04) Mountain Time)
 
Allan -
FUD? Please define.

If PaulinaK needs VALUE-BASED auditing (on some public AND PRIVATE systems it’s very important for security reasons), and not EVENT-BASED auditing, then how is Oracle's EVENT-BASED auditing going to help?

You are correct - auditing on every column of every table is going to put a whack on your DML performance (as would putting an index on every column of every table!). But if it's a verifiable business requirement, then the tradeoff is your DML is going to run like a dog!

PaulinaK - perhaps it DOES bear mentioning: you will be a lot happier if you are selective about the tables/columns you put triggers on (and indexes, for that matter!).

Incidentally, Allan, by following your helpful link, I was able to find an excellent Oracle 8i White Paper on security that provides the following:

Extensible Auditing
To record customized information that is not automatically included in audit records, you can use triggers (described in "Triggers to Customize Functionality") to further define your own auditing conditions and audit record contents. For example, you could define a trigger on the EMP table to generate an audit record whenever an employee's salary is increased by more than 10 percent and
include selected information, such as before and after values of SALARY:

CREATE TRIGGER audit_emp_salaries
AFTER INSERT OR DELETE OR UPDATE ON employee_salaries
FOR EACH ROW
BEGIN
  IF (:new.salary > :old.salary * 1.10) THEN
    INSERT INTO emp_salary_audit VALUES (
      :new.employee_no,
      :old.salary,
      :new.salary,
      user,
      sysdate);
  END IF;
END;

This looks vaguely familiar.

Elbert, CO
1652 MST
 
Hi all

Sorry – I did not know you weren’t familiar with FUD.

FUD = (Fear, Uncertainty, Doubt) - it's a common term among top Oracle experts. (Try AskTom at Oracle.)
“The Oracle AUDIT capability is, frankly, not the best method for implementing what we generally accept as an audit trail;” = FUD

“but I'll start with answering your immediate question: Oracle AUDIT is not the best audit.” = FUD

I am not saying FUD because I want you to look silly or stupid, but because the above statements aren’t right. Who are “we”? And “not the best method …. generally accepted as an audit trail”? And “Oracle AUDIT is not the best audit”??? – Why do you think Oracle implemented LogMiner?

I would, as far as possible, try to use the built-in functionality in Oracle. I would use logon triggers to audit session information – not a trigger on each DML. And I would use Oracle's built-in AUDIT to see queries on data. And I would use LogMiner to track down changes in the log files (DML and DDL statements). I need the logon trigger (session info) to be able to track the exact user, because LogMiner (log files) does not have that information.
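A minimal logon trigger along those lines might look like this (the session_audit table name and columns are made up for the example):

```sql
CREATE TABLE session_audit (
  username   VARCHAR2(30),
  osuser     VARCHAR2(30),
  machine    VARCHAR2(64),
  logon_time DATE
);

CREATE OR REPLACE TRIGGER logon_audit_trg
AFTER LOGON ON DATABASE
BEGIN
  -- SYS_CONTEXT exposes session details the log files do not carry.
  INSERT INTO session_audit VALUES (
    SYS_CONTEXT('USERENV', 'SESSION_USER'),
    SYS_CONTEXT('USERENV', 'OS_USER'),
    SYS_CONTEXT('USERENV', 'HOST'),
    SYSDATE);
END;
/
```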

LogMiner is improved in 9i. Before 9i, the DDL statements appeared in the log files as a set of DML statements. In 9i you see the DDL statement.

I avoid implementing audit in the application with triggers, because the audit suddenly becomes part of the application. (Note that Oracle FGA – Fine-Grained Auditing – applies to queries, while we are using the word "audit" for tracing changes to the data. Oracle uses LogMiner for that.)

No one can implement an audit that works faster than the built-in functionality. (Actually, that’s also a FUD statement – can I prove it? No. Is it common knowledge? Maybe. But because the logs are always present, you get the LogMiner functionality for free, without a drop in performance; it must be faster than a trigger that writes data to a table and thereby also generates data in the log files.)

I admit that trigger-based audit as you describe is fine on a small number of tables – but then it’s functionality of the application, not a general part of the database.

I hope you are not offended – because I also produce FUD – but I try hard to avoid it with test and test and test – and learning and learning and reading books. Some are very good, like one of the latest books I have been reading:

“Effective Oracle by Design”. Thomas Kyte Vice President, Oracle Core Technology (first chapter – The right approach to building Applications - Use supplied functionality and Beware of Universal “Bests” and Other Myths).

Are we still friends?

Regards
Allan
Icq: 346225948
 
Thank-you for clearing up FUD - having worked with and for Oracle for a number of years, I had never run across it. Now I have a new TLA to throw around so I too can sound like a top world Oracle expert!

Still friends? Absolutely!
And we have very few points of contention here.

Yes, absolutely - minimizing triggers is the way to go; generating more redo and rollback per DML is certainly undesirable if it can be avoided.

Yes, using an Oracle-supplied, built-in solution is probably going to work better than something you create yourself (which is why we endorse using built-in constraints when feasible).

But Oracle built-in auditing is still event-based whereas the question was seeking a value-based auditing method. I suppose that is why Oracle suggests using triggers to capture this information (and again, I will echo your caveat: apply with an eye to overall performance impact!).

And since the user is on 8i, a 9i improvement is of precious little use at this point (sort of like altitude above you and runway behind you when you are flying a plane!). But I will second your recommendation: if you can upgrade to 9i, by all means do so!

My experience with 8i log miner was that it was cumbersome to use. 9i seems to be a lot smoother and DOES answer many of the needs that auditors might want.

Another bit of advice that I might offer to PaulinaK is that whether you use Oracle's built-in audit or triggers, if you are storing the audit information in a table, be sure to periodically clean out your tables; they will grow to a very large size very quickly.
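Something as simple as a scheduled DBMS_JOB can do that housekeeping; the 90-day retention and the emp_audit table name below are just placeholders:

```sql
VARIABLE jobno NUMBER;
BEGIN
  DBMS_JOB.SUBMIT(
    job       => :jobno,
    what      => 'DELETE FROM emp_audit WHERE changed_on < SYSDATE - 90; COMMIT;',
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1');   -- re-run daily
  COMMIT;
END;
/
```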

And to underscore what I believe is the bottom line of all of the preceding, you probably want to refine your auditing requirements. Every column in every table? NOT a good idea! I would want a very good business justification for this. If you are going to use anything besides log miner, you need a good justification for every item that is being audited, since it IS going to impose a storage/performance penalty on you.

Elbert, CO
0921 MST
 
I would like to voice my support for the logminer method.

If the database is in archivelog mode, your only administrative tasks are to define archive log destinations, and to make sure that you have enough space on disk for the logs.
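For reference, putting a database into archivelog mode is a one-time task (parameter names as of 8i; the destination path is an example):

```sql
-- In init.ora:
--   log_archive_start  = true
--   log_archive_dest_1 = 'location=/u01/arch'

-- Then, from a mounted (not open) instance:
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```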

Then you can relatively quickly run logminer on the archive logs and query whatever you want. The user data is in there as well as the sql statements that were run by the user.

No space is taken up in the database, and the only performance issue is when a log switch takes place. As long as there are enough online redo log groups and they are sufficiently large, the database should run much better than if auditing were enabled.

I'd rather not write triggers and then have to remember where I put them, and at some point in time disable or drop them.

Aryeh Keefe
 
Is it possible to get the value of a specific field at some point in time using LogMiner? Doesn't it require quite complex parsing?

Regards, Dima
 
I've written a VB app that creates triggers and audit tables for any schema. Although I find it does hinder performance, it's extremely useful for tracking what transactions are doing at the database level, which is damned important when you're managing other people's badly written software. If your requirement is similar to this, then I'd suggest trying something similar.
Regards

SOL
I'm only guessing but my guess work generally works for me.
 