I have a server which runs the Oracle Enterprise Manager repository and website. I'm experiencing high disk activity on the C drive. It's a mirrored 18GB disk; the bulk of the Oracle datafiles are on another, striped drive. Oracle is reporting that the C drive is 600% utilised - whatever that...
How are you referencing the central tnsnames? Some parts of Oracle can't use UNC paths.
We map a drive to x:\blah\tnsnames_master.ora, and in the local tnsnames.ora we have:
ifile=x:\blah\tnsnames_master.ora
If you're on a Pentium IV, it's a known bug with the installer.
Copy the CD to a directory on the hard disk, remove the file symcjit.dll, and then run setup from the hard-disk copy.
Are you getting a write error, or is it just that your users can't read the file?
If it's a write error, you need to add the path to the utl_file_dir parameter in your init file to "authorize" UTL_FILE to write to that directory.
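For example (a sketch only - the path is hypothetical, and this applies to init.ora-based instances, where a restart is needed for the change to take effect):

```
# init.ora -- each directory UTL_FILE is allowed to write to must be listed
utl_file_dir = /u01/app/reports
```

On 9i and later you would normally use a CREATE DIRECTORY object instead of utl_file_dir.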
Thanks for the clarification. I've got the programmer to stick some dbms_output.put_line calls at various points in the trigger to prove it's firing (or not).
I'll get the programmer to RTFM on autonomous transactions too.
Cheers
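If dbms_output proves awkward, a table-based logger using an autonomous transaction is worth a look, since the log rows survive the rollback caused by the failed insert. A sketch - the table and procedure names are hypothetical:

```sql
-- hypothetical log table and logging procedure; the PRAGMA makes the
-- INSERT commit in its own transaction, so the log row survives the
-- caller's rollback when the NOT NULL constraint fires
CREATE TABLE debug_log (logged_at DATE, msg VARCHAR2(4000));

CREATE OR REPLACE PROCEDURE log_msg (p_msg IN VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO debug_log (logged_at, msg) VALUES (SYSDATE, p_msg);
  COMMIT;  -- commits only this autonomous transaction
END;
/
```

Call log_msg from the trigger wherever you would have put the dbms_output lines.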
The trigger does not fire because the constraints seem to be checked before the insert is performed: the constraint fails, so the insert never occurs, and thus the trigger never fires.
A BEFORE trigger is exactly what we want, but it only seems to work if the database decides it *will* insert the row.
We seem to have a bug in our code that is allowing some NULL values to be inserted into a table; however, those columns have NOT NULL constraints on them, so the insert fails.
We're trying to track down the circumstances of the event, and the programmer tasked with this came up with the...
Your username has to be part of the DBA group. Do a search on Metalink - there is a document on how to do it step by step.
It's a hardcoded name, so you can't use any old group name.
Create a user in Oracle with the same name as the Unix user, e.g. instead of
create user fred identified by password
use
create user fred identified externally
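Putting it together - a sketch, assuming os_authent_prefix is set to '' (the default is OPS$, in which case the user would need to be created as OPS$FRED):

```sql
-- externally identified user: Oracle trusts the OS authentication
CREATE USER fred IDENTIFIED EXTERNALLY;
GRANT CREATE SESSION TO fred;

-- then, logged in to the OS as unix user fred, connect without a password:
--   sqlplus /
```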
Thanks,
However, it seems to have nothing to do with passing the pattern on the command line: in the second example the pattern is hard-coded in the BEGIN section, and it still complains about the open parenthesis.
I tried what you suggested anyway, and it still fails.
I have the following script....
if (!found && match($0, PATstart)) {
    found = 1;
    printf("%s", substr($0, 1, RSTART - 1));
    next;
}
if (found && match($0, PATend)) {...
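For what it's worth, match() treats its second argument as a dynamic regex, so an unescaped "(" in PATstart is a regex syntax error rather than a literal character. A minimal sketch (the pattern and input are hypothetical) showing the parentheses escaped with a doubled backslash:

```shell
# "\\(" in an awk string becomes \( in the dynamic regex, matching a literal "("
printf 'text before (start) and after\n' |
  awk '{ if (match($0, "\\(start\\)")) printf("%s\n", substr($0, 1, RSTART - 1)) }'
# prints "text before " (everything up to the opening parenthesis)
```

The doubling is needed because the string literal eats one level of backslash before the regex engine sees it.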
The export file itself should be platform-independent.
If your SunOS does not support files larger than 2GB, you will need to split the export file into several chunks using the FILESIZE parameter and multiple FILE entries, e.g. file=(a,b,d,e,f).
FTP them in binary mode and you should be OK with that...
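A sketch of an export parameter file using those options (file names and sizes are hypothetical):

```
# export.par -- exp rolls over to the next FILE entry each time
# the current dump file reaches FILESIZE
FILE=(exp_a.dmp, exp_b.dmp, exp_c.dmp, exp_d.dmp)
FILESIZE=2000M
FULL=Y
```

Run it with something like: exp system/password parfile=export.par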
There are a couple of ways.
1) Export the data and then import it into the new instance using exp/imp.
2) Flag the originating tablespace as transportable. You can then copy the .dbf files to the new instance and add the new tablespace.
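Option 2 in outline - a sketch, assuming a tablespace called USERS and matching platforms on source and target (transportable tablespaces of this vintage require the same OS):

```sql
-- on the source: make the tablespace read-only, then export its metadata
ALTER TABLESPACE users READ ONLY;
-- exp ... transport_tablespace=y tablespaces=users file=ts_users.dmp

-- copy the tablespace's .dbf files plus ts_users.dmp to the target, then:
-- imp ... transport_tablespace=y datafiles='/u01/oradata/users01.dbf' file=ts_users.dmp

-- finally, put the tablespace back into read-write mode
ALTER TABLESPACE users READ WRITE;
```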
Could it be that the date formats don't match?
Try doing a TO_DATE on the date columns and force them into the Oracle way of doing things.
We've had that when reading from DB2.
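Something along these lines, assuming the incoming values arrive as strings (the column, table, and format mask are hypothetical):

```sql
-- force the incoming string into an Oracle DATE with an explicit mask,
-- rather than relying on the session's NLS_DATE_FORMAT
SELECT TO_DATE(order_dt, 'YYYY-MM-DD') AS order_dt
FROM   staging_orders;
```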