We bought and installed Lumigent Log Explorer.
I think it's possible to explore logs taken before installation, so long as they are available, e.g. extracted hourly to disk as part of a maintenance plan activity. What it can't do is relate SPIDs to user identities in logs from before installation...
We always back up everything (to disk) before starting our regular re-index, which is done as part of the job. The transaction log increases dramatically during the process. We re-index each table in turn; it's easier that way, as all the commands are in a SQL job (it generates a list of tables on each run)...
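For illustration, a minimal sketch of that kind of job step, assuming SQL Server 2000-style DBCC DBREINDEX; the table list is regenerated on each run, and all names are hypothetical:

    -- Rebuild every user table's indexes in turn; the list is
    -- regenerated from INFORMATION_SCHEMA on each run.
    DECLARE @tbl sysname
    DECLARE tbls CURSOR FOR
        SELECT TABLE_SCHEMA + '.' + TABLE_NAME
        FROM INFORMATION_SCHEMA.TABLES
        WHERE TABLE_TYPE = 'BASE TABLE'
    OPEN tbls
    FETCH NEXT FROM tbls INTO @tbl
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC ('DBCC DBREINDEX (''' + @tbl + ''')')
        FETCH NEXT FROM tbls INTO @tbl
    END
    CLOSE tbls
    DEALLOCATE tbls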
Has the Transaction Log ever been backed up? We arrange to backup all TLogs regularly as some were growing. Actions like re-index also caused our TLogs to grow, which caused the next TLog backup to be large, but reduced the log usage.
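As a hedged example of the kind of regular TLog backup we schedule (database name and path are illustrative):

    -- Backing up the log lets SQL Server truncate the inactive
    -- portion, so log usage drops back after a big re-index.
    BACKUP LOG MyDatabase
        TO DISK = 'D:\Backups\MyDatabase_log.trn'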
We used Log Explorer to read backup TLogs generated before its installation. But it can't identify who initiated the change. It could also possibly be used to undo or redo the changes, though that is easier if done close to the change. We didn't try that. Log Explorer would have to be licensed...
This is an intermittent problem, described in KB241643, so we changed all our accounts from Windows logins to SQL logins and have had no problems since. This has the advantage that changes to Windows accounts (e.g. deletion) do not affect SQL jobs. When our jobs were installed, they used the account of the...
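As an illustrative sketch (the login name, password, and job name are hypothetical), changing a job's owner to a SQL login looks like this:

    -- Create a SQL login and make it the job owner, so deleting
    -- the original Windows account no longer affects the job.
    EXEC sp_addlogin @loginame = 'JobOwner', @passwd = 'a_strong_password'
    EXEC msdb.dbo.sp_update_job
        @job_name = 'Nightly re-index',
        @owner_login_name = 'JobOwner'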
Have you enabled DTS logging in the DTS package? Open the DTS package in design mode, deselect all objects in the package, right-click somewhere in white (unoccupied) space, select "Package Properties", select the "Logging" tab, then enable logging to SQL Server. I think that's how it's done (but may...
This can also be done by a SQL job, which can be scheduled. Create a T-SQL step with the select statement containing the required command, go to the Advanced tab and change the "Output file" field to your required file, then define a schedule. It could also be done by a DTS package.
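A minimal sketch of such a step, created through T-SQL rather than the GUI; the job name, query, and file path are illustrative:

    -- Add a T-SQL step whose result set is written to a file; the
    -- "Output file" field on the Advanced tab maps to @output_file_name.
    EXEC msdb.dbo.sp_add_jobstep
        @job_name = 'Export query',
        @step_name = 'Run select',
        @subsystem = 'TSQL',
        @command = 'SELECT name FROM master..sysdatabases',
        @output_file_name = 'D:\Output\databases.txt'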
Deletion of database backup files during prime time caused a problem (100% CPU usage, users locked out of SQL; KB228206 seems to apply). Is anybody aware whether using the WITH INIT option to remove old backups may cause the same problem?
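For reference, WITH INIT overwrites the backup sets already in the device rather than appending, so old backups go away without any file deletion (names illustrative):

    -- Overwrite existing backup sets in the file instead of appending.
    BACKUP DATABASE MyDatabase
        TO DISK = 'D:\Backups\MyDatabase.bak'
        WITH INIT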
Thanks.
Try the event log, assuming the option to "write completion status to event log" is enabled (search BOL for that string). Open your DTS package in design mode, right-click on a blank space, select "Package Properties" and select the "Logging" tab. It's at the bottom.
We have recently increased the size of the log_buffer from 10MB to 20MB. Since then we have seen far more archive redo logs being created than before.
The users are not aware of any additional work being done and the log buffer increase is the only obvious change that has been made.
I cannot...
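To check what the buffer is actually set to (standard Oracle SQL, nothing assumed beyond the usual dynamic views):

    -- Current redo log buffer size, in bytes.
    SELECT name, value
    FROM   v$parameter
    WHERE  name = 'log_buffer';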
From what I remember, check whether any databases are not in "Full" recovery mode. If they are not, some backups may fail. If there is a mixture of recovery modes, try defining two maintenance plans: one for the databases in "Full" recovery and one for those that are not.
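A quick way to see the split, assuming SQL Server 2000 (the system table and property names are standard):

    -- List each database's recovery model so the two maintenance
    -- plans can be scoped correctly.
    SELECT name,
           DATABASEPROPERTYEX(name, 'Recovery') AS recovery_model
    FROM   master..sysdatabases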
We are about to buy and install Lumigent LogExplorer. One of the options is to record "session login information". This uses SQL Profiler APIs and results are stored in a table. Has anyone who uses LogExplorer used this option and noticed any performance issues? The system has up to 2500...
Looks like we are going to purchase this tool. One of the options is to "capture login information". The datasheet says this is done by Profiler APIs and stored in a table. Our query is: will this have a noticeable effect on performance? Do any other Tippers have opinions? If you use this, how...
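We can't speak to Lumigent's internals, but as a rough illustration of what a Profiler-API login capture amounts to, here is a hedged server-side trace on the Audit Login event (the file path is illustrative; a trace of this kind writes to a file rather than a table):

    -- Server-side trace capturing login name, SPID and start time
    -- for each Audit Login event (event id 14).
    DECLARE @traceid int, @maxfilesize bigint, @on bit
    SET @maxfilesize = 5
    SET @on = 1
    EXEC sp_trace_create @traceid OUTPUT, 0, N'D:\Traces\logins', @maxfilesize
    EXEC sp_trace_setevent @traceid, 14, 11, @on   -- LoginName
    EXEC sp_trace_setevent @traceid, 14, 12, @on   -- SPID
    EXEC sp_trace_setevent @traceid, 14, 14, @on   -- StartTime
    EXEC sp_trace_setstatus @traceid, 1            -- start the trace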
We are setting up some alerting mechanisms on a Linux server running MySQL.
We have a program that will search for text strings in files and alert us if it detects one.
We could set it to look for ERROR or WARNING in the general or error log.
Does anyone have any suggestions on anything...
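One small check worth doing first: confirm which file the server is actually writing errors to, so the search program watches the right path (plain MySQL SQL):

    -- Show where the error log is written.
    SHOW VARIABLES LIKE 'log_error';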
We have set up a standby database on a different server and enabled automatic shipping of archive redo logs.
This is working ok.
The live database is backed up every night as a cold backup. Since the implementation of the standby database we have noticed that the database shutdown is...
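For context, a hedged sketch of the kind of archive destination setting that drives the shipping (Oracle 9i-style dynamic parameter; the net service name is hypothetical):

    -- Send archived redo logs to the standby's net service name.
    ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=standby_db'
        SCOPE = BOTH;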
Is it possible to keep an audit trail of commands issued through sqlplus?
We have a system where several users have sqlplus access, and I would like to be able to record what has been done through sqlplus without having to run auditing on the entire database.
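A hedged sketch of per-user statement auditing (requires the audit_trail parameter to be enabled; the user name is illustrative), which records activity for the named accounts only rather than the whole database:

    -- Audit all auditable statements for one user, one row per call.
    AUDIT ALL BY scott BY ACCESS;

    -- Later, review what was captured.
    SELECT username, action_name, timestamp
    FROM   dba_audit_trail
    WHERE  username = 'SCOTT';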
I am starting it with the mysql.server script and it is starting a mysqld process which spawns another mysqld process immediately which also spawns a mysqld process. This carries on until there are 11 processes running.
I would like to be able to have a file created showing the output from the alert.tcl script. This would be to show whether it is an alert event or a warning event.
Does anyone have any suggestions?
We are now running with the PerfMon service stopped on the passive node, and set to Manual startup. Failovers were done recently and our counters are still present. Because of the erratic nature of the failure, we can't say this is a fix, but if it reduces the problem, it is low cost and low risk.
We...