Thursday, December 11, 2008

Oracle System Help

The man pages on UNIX are the best place to learn every nook and corner of a command; is there similar help on an Oracle server as well? There is a feature in any Oracle database that prints the basic information needed about a particular SQL*Plus command.

A SYSTEM.HELP table is not present on a database by default; it has to be created by running a script from the $ORACLE_HOME/sqlplus/admin/help directory. When the hlpbld.sql script is executed with helpus.sql as its command line argument, it creates a table called HELP. This table is usually created under the SYSTEM schema, which means hlpbld.sql needs to be executed as the SYSTEM user, as shown below.


SQL> conn system/manager
Connected.
SQL> !pwd
/export/oracle/home/product/10.2.0/sqlplus/admin/help


SQL> @hlpbld.sql "helpus.sql" --- This creates the help table and populates it

Now the index of the help on SQL*Plus commands can be viewed with the "HELP INDEX" command from the SQL prompt; INDEX is one of the topics the HELP table offers.



Alternatively, "? INDEX" prints the same output. All the HELP/? command does is run a select query against the SYSTEM.HELP table in the background, matching the argument you pass against the TOPIC column in the where clause.



SQL> set line 1000
SQL> col topic for a5
SQL> col seq for 999
SQL> select * from system.help where topic='INDEX';


SQL> ? RECOVER

SQL> HELP SHUTDOWN
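As a toy model of what HELP does behind the scenes — a lookup keyed on the TOPIC column — here is a flat-file sketch. The file layout and the info strings are invented for illustration and are not Oracle's actual HELP table:

```shell
#!/bin/sh
# Toy model of SYSTEM.HELP: topic|seq|info columns in a flat file.
help=$(mktemp)
cat > "$help" <<'EOF'
INDEX|1|Index of SQL*Plus help topics
RECOVER|1|Recovers one or more tablespaces or datafiles
SHUTDOWN|1|Shuts down a currently running Oracle instance
EOF

# "HELP RECOVER" boils down to: select info ... where topic = 'RECOVER'.
awk -F'|' -v t=RECOVER '$1 == t {print $3}' "$help"
```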

Wednesday, October 29, 2008

Understanding Oracle password hashing

This is an explanation of how password hashing works on Oracle databases across 9i and 10g. For the demonstration, two users named user1 and user2 are created on a 10g and on a 9i database, and their hashed passwords are displayed. The test is first done on the 10g database.

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
PL/SQL Release 10.2.0.3.0 - Production
CORE 10.2.0.3.0 Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.3.0 - Productio
NLSRTL Version 10.2.0.3.0 - Production


SQL> select username,password from dba_users where username like 'USER%';

USERNAME PASSWORD
------------------------------ ------------------------------
USER1 BBE7786A584F9103
USER2 1718E5DBB8F89784


Both user1 and user2 have their own usernames as their passwords, as shown below.

SQL> conn user1/user1
Connected.
SQL>
SQL> conn user2/user2
Connected.

The password of user2 is changed to 'user1', just to see how the password hashing mechanism behaves.

SQL> conn /as sysdba
Connected.
SQL>
SQL> alter user user2 identified by user1;

User altered.

User2's old password does not work anymore, but the new password is in effect now as shown below.


SQL> conn user2/user2
ERROR:
ORA-01017: invalid username/password; logon denied


Warning: You are no longer connected to ORACLE.
SQL>
SQL>
SQL> conn user2/user1
Connected.
SQL>

Querying the passwords of user1 and user2 this time shows different values for the two users, in spite of the fact that both users have the same password assigned to them.


SQL> conn /as sysdba
Connected.
SQL>
SQL> select username,password from dba_users where username like 'USER%';


USERNAME PASSWORD
------------------------------ ------------------------------
USER1 BBE7786A584F9103
USER2 83566E16913442F3


This shows that Oracle does not hash the password alone; some other factor is combined with the password before hashing. That other factor can reasonably be assumed to be the username.
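Oracle's real algorithm is not shown here, but the observed behaviour — same password, different user, different hash — can be modelled with an ordinary hash as a stand-in. In this sketch md5sum is only an analogue, not Oracle's actual function; it simply shows what hashing username||password, rather than the password alone, buys you:

```shell
#!/bin/sh
# Illustrative stand-in only: Oracle does NOT use md5, but like Oracle's
# scheme, hashing USERNAME||PASSWORD makes the digest depend on both.
hash_pw() {
    # Uppercase the username and concatenate the password, as a toy model.
    printf '%s%s' "$(printf '%s' "$1" | tr '[:lower:]' '[:upper:]')" "$2" \
        | md5sum | cut -d' ' -f1
}

h1=$(hash_pw user1 secret)
h2=$(hash_pw user2 secret)   # same password, different user
h3=$(hash_pw user1 secret)   # same user and password again

[ "$h1" != "$h2" ] && echo "different users, same password -> different hashes"
[ "$h1" = "$h3" ] && echo "same inputs -> same hash (deterministic, no random salt)"
```

The second property matches what the post observes: re-hashing the same username/password pair always yields the identical stored value.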

The above was tested on a 10g database. The same test on a 9i database shows no difference in the password hashing mechanism, as is evident from the output below.


SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
PL/SQL Release 9.2.0.8.0 - Production
CORE 9.2.0.8.0 Production
TNS for IBM/AIX RISC System/6000: Version 9.2.0.8.0 - Production
NLSRTL Version 9.2.0.8.0 - Production



SQL> create user user1 identified by user1;

User created.

SQL> grant connect,resource to user1;

Grant succeeded.

SQL> create user user2 identified by user1;

User created.

SQL> grant connect,resource to user2;

Grant succeeded.

SQL> select username,password from dba_users where username like '%USER%';

USERNAME PASSWORD
------------------------------ ------------------------------
USER1 BBE7786A584F9103
USER2 83566E16913442F3


SQL> conn user1/user1
Connected.
SQL>
SQL> conn user2/user2
ERROR:
ORA-01017: invalid username/password; logon denied


Warning: You are no longer connected to ORACLE.
SQL>
SQL> conn user2/user1
Connected.
SQL>
SQL> conn /as sysdba
Connected.
SQL>
SQL> alter user user2 identified by user2;

User altered.

SQL> select username,password from dba_users where username like '%USER%';

USERNAME PASSWORD
------------------------------ ------------------------------
USER1 BBE7786A584F9103
USER2 1718E5DBB8F89784

The password here is 'user2', which produces the same hashed value as in the first query output at the top of this page.



Whether it is a CREATE USER or an ALTER USER, the password hashing is the same irrespective of the type of DDL statement.

Wednesday, October 8, 2008

Oracle 10gR2 Database Parameters - Reference

This is in continuation of my earlier post on the 9iR2 Database Parameters Reference. Here is a link to the master index of parameters and data dictionary descriptions in 10gR2, and a link to the descriptions of the v$ views in 10gR2.

Wednesday, September 17, 2008

Duplicate database for cloning

This post is aimed at providing an understanding of how to clone a PROD database to a TEST/DEV database using the DUPLICATE DATABASE command. The following specifications of the environment will aid in understanding the setup and implementation. This is implemented on a single Windows host.

Target DB (TDB) - The PROD database which is cloned to form the TEST/DEV database.
Auxiliary DB (ADB) - The TEST/DEV database which is cloned from the PROD database.

The above acronyms will be widely used in the entire post.

Before we move any further, make an entry for both the Target Database (TDB) and the Auxiliary Database (ADB) in the tnsnames.ora file. Also, make sure the listener is running and that the services of both TDB and ADB are registered with the listener.


1. Start the ADB (duplicate database) in nomount mode and exit the SQL prompt, since the instance goes down during the cloning process. The existing init.ora parameter file for the ADB can be used to start it in nomount mode.

C:\Documents and Settings\individual\Desktop>set ORACLE_SID=ADB

C:\Documents and Settings\individual\Desktop>sqlplus "/as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Sep 17 16:32:29 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount
ORACLE instance started.

Total System Global Area 264241152 bytes
Fixed Size 1248428 bytes
Variable Size 83886932 bytes
Database Buffers 171966464 bytes
Redo Buffers 7139328 bytes
SQL>
SQL>


2. From another command prompt session, connect to the Target Database (TDB) using RMAN as SYS or any user with SYSDBA privileges, and take a full backup of the TDB along with the archive logs. This is the backup that will be used to create the ADB.


C:\Documents and Settings\individual\Desktop>rman target sys/manager@TDB

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Sep 17 16:29:13 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: TDB (DBID=1449103016)

RMAN>
RMAN>
RMAN> run
2> {
3> allocate channel c1 device type disk;
4> backup format 'C:\oracle\rman_bkup\TDB_duplicate_database.bkup' database;
5> backup format 'C:\oracle\rman_bkup\TDB_duplicate_archives.bkup' archivelog all;
6> release channel c1;
7> }


3. Once the backup is complete, connect to the auxiliary instance from the same RMAN session.


RMAN> connect auxiliary sys/sys@ADB

connected to auxiliary database: ADB (not mounted)

RMAN>
RMAN>

It is to be noted that the password used for the SYS account here is actually an imaginary one; the server lets you in anyway. Any text could be used as the password to connect to the auxiliary instance.

4. Now we manually rename every file from the TDB to a valid location in the ADB. This is where we actually clone the database, with a single "DUPLICATE TARGET DATABASE TO" command.

This step is analogous to the "CREATE CONTROLFILE" step of the conventional cloning process using a physical backup copy, and that statement does indeed get executed in the background. It also requires naming the logfiles for the ADB as shown below; omitting them would end in an error.


RMAN> run
2> {
3> allocate auxiliary channel aux_c1 device type disk;
4> set newname for datafile 'C:\ORACLE\TDB\SYSTEM01.DBF' to 'C:\ORACLE\ADB\SYSTEM01.DBF';
5> set newname for datafile 'C:\ORACLE\TDB\UNDOTS.DBF' to 'C:\ORACLE\ADB\UNDOTS.DBF';
6> set newname for datafile 'C:\ORACLE\TDB\SYSAUX.DBF' to 'C:\ORACLE\ADB\SYSAUX.DBF';
7> set newname for datafile 'C:\ORACLE\TDB\TOOLS.DBF' to 'C:\ORACLE\ADB\TOOLS.DBF';
8> set newname for datafile 'C:\ORACLE\TDB\USERS.DBF' to 'C:\ORACLE\ADB\USERS.DBF';
9> set newname for tempfile 'C:\ORACLE\TDB\TEMP.DBF' to 'C:\ORACLE\ADB\TEMP.DBF';
10> duplicate target database to ADB
11> logfile
12> group 1('C:\ORACLE\ADB\LOG1A.RDO') size 5M,
13> group 2('C:\ORACLE\ADB\LOG2A.RDO') size 5M;
14> }
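With dozens of datafiles, typing every SET NEWNAME line by hand gets tedious. Here is a sketch of generating them with sed from a plain text list; the file names and the /tmp path are invented for illustration, and in practice the list could be spooled from v$datafile on the TDB:

```shell
#!/bin/sh
# Hypothetical datafile list; in practice it could be spooled from
# "select name from v$datafile" on the target database.
cat > /tmp/tdb_datafiles.txt <<'EOF'
C:\ORACLE\TDB\SYSTEM01.DBF
C:\ORACLE\TDB\UNDOTS.DBF
C:\ORACLE\TDB\SYSAUX.DBF
EOF

# Rewrite every TDB path into a SET NEWNAME command pointing at the ADB copy.
newnames=$(sed "s|^\(.*\)TDB\(.*\)$|set newname for datafile '\1TDB\2' to '\1ADB\2';|" \
    /tmp/tdb_datafiles.txt)
printf '%s\n' "$newnames"
```

The generated lines can then be pasted into the RMAN run block.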


After hitting the return key at the 14th line, a lot of script output scrolls across the screen. Capturing every snapshot of it would make this post really lengthy, so only a gist of the scripts is shown below to give a brief picture of what actually happens.

allocated channel: aux_c1
channel aux_c1: sid=36 devtype=DISK

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting Duplicate Db at 17-SEP-08


contents of Memory Script:
{
set until scn 271346;
set newname for datafile 1 to
"C:\ORACLE\ADB\SYSTEM01.DBF";
set newname for datafile 2 to
.
.
}


sql statement: CREATE CONTROLFILE REUSE SET DATABASE "ADB" RESETLOGS ARCHIVELOG
MAXLOGFILES 32
MAXLOGMEMBERS 2
MAXDATAFILES 32
MAXINSTANCES 16
MAXLOGHISTORY 1752
LOGFILE
.
.
.

}


contents of Memory Script:
{
set until scn 271346;
recover
clone database
delete archivelog
;
}


database dismounted
Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started


renamed temporary file 1 to C:\ORACLE\ADB\TEMP.DBF in control file

cataloged datafile copy
datafile copy filename=C:\ORACLE\ADB\UNDOTS.DBF recid=1 stamp=665686637


contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 17-SEP-08

RMAN>

RMAN>

A few memory scripts run in the background as part of the DUPLICATE DATABASE creation. The cloned TEST/DEV database is now ready for use by the testers/developers. A comparison between the TDB and the ADB now shows different values for the DBID.


SQL> select DBID,NAME,CREATED,OPEN_MODE,LOG_MODE from v$database;

DBID NAME CREATED OPEN_MODE LOG_MODE
---------- --------- --------- ---------- ------------
1449103016 TDB 08-SEP-08 READ WRITE ARCHIVELOG



SQL> select DBID,NAME,CREATED,OPEN_MODE,LOG_MODE from v$database;

DBID NAME CREATED OPEN_MODE LOG_MODE
---------- --------- --------- ---------- ------------
4023825067 ADB 17-SEP-08 READ WRITE ARCHIVELOG

Tuesday, September 16, 2008

UNIX alias

How many times have you wished for shortcuts to long commands on a UNIX host? Here is a way to do away with such hassles. The "alias" command, available in all modern UNIX shells, allows one to associate a letter or a short word with a long command.

Listed below are the ways of using "alias" in the two major shells of UNIX that I have worked with. Say, for instance, I want to see the number of databases running on a host with the command "ps -ef|grep pmon"; this is now going to have an alias of pmon.

ksh $> alias pmon='ps -ef|grep pmon'

ksh $> pmon
oracle 12219 1 0 Jun 02 ? 11:32 ora_pmon_DEV1
oracle 26642 1 0 Mar 21 ? 99:51 ora_pmon_DEV2
oracle 23512 21604 0 16:34:50 pts/3 0:00 grep pmon
oracle 26901 1 0 Mar 21 ? 118:19 ora_pmon_DEV3


csh $> alias pmon 'ps -ef|grep pmon'

csh $> pmon
oracle 12219 1 0 Jun 02 ? 11:32 ora_pmon_DEV1
oracle 26642 1 0 Mar 21 ? 99:51 ora_pmon_DEV2
oracle 23512 21604 0 16:34:50 pts/3 0:00 grep pmon
oracle 26901 1 0 Mar 21 ? 118:19 ora_pmon_DEV3



Any valid UNIX command can be associated with an alias; below are a couple more examples.

Alias to open ORATAB file of Oracle

ksh $> alias oratab='vi /var/opt/oracle/oratab'

csh $> alias oratab 'vi /var/opt/oracle/oratab'

Here is another example, navigating to a directory like ORACLE_HOME.

ksh $> alias o_h='cd $ORACLE_HOME'
csh $> alias o_h 'cd $ORACLE_HOME'


Now, the crux of this concept is where to place the "alias" entry so you need not execute it for every session you open. The entry goes in .profile for a ksh account and in .cshrc for a csh account. But back up the .profile/.cshrc file before making any changes, as a general rule of thumb, so as not to compromise the integrity of these files.

To remove an alias permanently, comment out or remove the alias line from the .profile/.cshrc file, which takes effect at the next login; or run the unalias command in the current session as a temporary fix, as shown below.

ksh $> unalias o_h
csh $> unalias o_h
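A runnable sketch of the alias/unalias mechanics follows. One caveat worth knowing: aliases created inside a script exist only for that shell session, and bash does not even expand them in non-interactive scripts unless told to, so the demo below only defines, lists, and removes them:

```shell
#!/bin/sh
# Define an alias, then list it back: "alias name" prints its definition.
alias pmon='ps -ef | grep pmon'
alias pmon

# unalias removes it for the current session only.
alias o_h='cd $ORACLE_HOME'
unalias o_h
alias o_h 2>/dev/null || echo "o_h removed"
```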

Saturday, September 6, 2008

Crontab

Here is some information about crontab and the flags that a crontab command can accept.

crontab -e --> edits the crontab file so you can make entries of scheduled jobs to run.

crontab -l --> lists the crontab file; it's like running a "cat" command on a text file.

crontab -r --> removes the crontab file. This is the most dangerous flag; make sure you have a backup of your crontab file before you run this command, even for test reasons.

One has to be very careful with the -r flag. If you inadvertently type -r instead of -e and hit enter, the next thing you know, your crontab file is gone. Since E and R sit next to each other on a regular QWERTY keyboard, a slip of the finger makes this a real nightmare.

This once happened to me, as I am not good with finger positioning on a QWERTY keyboard myself. Only after hitting the return key did I realize I had typed the wrong command. But a backup of the crontab should save the day, as it did for me.

It's always a good practice to keep a backup of the crontab file. Use the command "crontab -l" to back it up, as shown below.

00 01 * * * crontab -l > /export/oracle/crontab_backup.txt

The above line takes a backup of the crontab at 01:00 hours every day.

Now, to restore a blank crontab, all you need to do is run the following command. Sometimes this might require a bounce of the cron daemon.

crontab < /export/oracle/crontab_backup.txt

Let's discuss the conventions for making entries in a crontab file. There are six fields in a crontab entry; below is a depiction that explains the fields. Fields are separated by white space or a tab.





* * * * * COMMAND_OR_SCRIPT_LOCATION
--------------------------------------------------------------------
| | | | | |_ Command or script location
| | | | |
| | | | |___ Day_Of_Week (0 - 6) (0=Sun, 1=Mon, 2=Tue,3=Wed,
| | | | 4=Thu, 5=Fri, 6=Sat)
| | | |_____ Month_Of_Year (1 - 12)
| | |
| | |_______ Day_Of_Month (1 - 31)
| |
| |_________ Hour (0 - 23)
|
|___________ Minutes (0 - 59)
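The six-field split above can be checked mechanically. A small awk sketch that takes the backup entry from earlier and names each schedule field (fields 1-5) and the command (field 6 onward):

```shell
#!/bin/sh
entry='00 01 * * * crontab -l > /export/oracle/crontab_backup.txt'

# Fields 1-5 are the schedule; everything from field 6 on is the command.
echo "$entry" | awk '{
    printf "minute=%s hour=%s day_of_month=%s month=%s day_of_week=%s\n",
           $1, $2, $3, $4, $5
    cmd = $6; for (i = 7; i <= NF; i++) cmd = cmd " " $i
    printf "command=%s\n", cmd
}'
```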





White spaces in filenames

Recently I faced an issue transferring a file from Windows to a UNIX host. Unlike UNIX convention, Windows happily accepts white spaces in filenames. A file called "Copy of sqlnet.log" was transferred to a UNIX host, and commands like cat or more reported the usual "No such file or directory" error, since the single filename is treated as three different filenames due to the white spaces.

But with double quotes around the filename, it works just fine.

UNIX-$> more "Copy of sqlnet.log" or cat "Copy of sqlnet.log"

Now, removing such a file is the daunting part. A simple 'rm Cop*' takes the file off your server, but takes any other files in the directory starting with 'Cop' along with it.


Method-1

Conventional method of removing a file.

UNIX-$> rm "Copy of sqlnet.log"

Removing such a file with double quotes around its name works fine, as shown above. But just to try something new and flaunt it to my colleagues, I used the method below.


Method-2

Get the inode number of the file with a simple 'ls -li'; this shows an extra field in the long listing of files, and the first field of the output is the inode number. For an explanation of inodes, see my earlier post, Inodes and Links.

UNIX-$> ls -li

121604 -rwxr-xr-x 1 oracle oracle 96 Feb 21 18:00 oratab_bkup.txt
121621 -rw-rw-r-- 1 oracle oracle 576 Feb 21 23:55 db_check_ora_home.parm
121557 -rw-rw-r-- 1 oracle oracle 4244 Feb 22 01:00 cron_backup.txt
121638 -rw-r--r-- 1 oracle oracle 868 Feb 22 11:28 Copy of sqlnet.log

Now, use the 'find' command to remove the file with the '-inum' and '-exec' arguments. The -inum argument expects the file's or directory's inode number. Note the backslash before the semicolon; the terminating ';' of -exec must be escaped so the shell passes it through to find.

UNIX-$> find ./ -inum "121638" -exec rm {} \;

This method precisely fits the following scenario. Imagine a directory full of files with the extension '.dbf'. A user unknowingly makes a mistake and issues the command 'ls -ltr > *.dbf' intending to list all the files in long format.

The above command creates a file called '*.dbf' whose contents are the long listing of all the '.dbf' files. Now, would anyone really be foolish enough to try 'rm *.dbf'? The next thing you know, he/she is out of the organisation (chuckles!). Well, a good backup strategy should save the day.

Though the existence of such a file using up a few bytes is not a big deal, an entry named '*.dbf' gives your file listing an awkward look. All you need to do for a pleasant listing of your '.dbf' files is get the inode number of the file '*.dbf' and take it down.

UNIX-$> ls -li

1603 -rwxr-xr-x 1 oracle oracle 96 Feb 21 18:00 *.dbf
1628 -rw-rw-r-- 1 oracle oracle 576 Feb 21 23:55 three.dbf
1637 -rw-rw-r-- 1 oracle oracle 4244 Feb 22 01:00 two.dbf
1630 -rw-r--r-- 1 oracle oracle 868 Feb 22 11:28 one.dbf

UNIX-$> find ./ -inum "1603" -exec rm {} \;

Now, the file "*.dbf" is gone forever and a long listing of *.dbf only shows the actual files as shown below.

UNIX-$> ls -li

1628 -rw-rw-r-- 1 oracle oracle 576 Feb 21 23:55 three.dbf
1637 -rw-rw-r-- 1 oracle oracle 4244 Feb 22 01:00 two.dbf
1630 -rw-r--r-- 1 oracle oracle 868 Feb 22 11:28 one.dbf
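The whole inode-removal recipe can be rehearsed safely in a throwaway directory before trying it on real files; a self-contained sketch:

```shell
#!/bin/sh
# Rehearse the inode-based removal in a throwaway directory.
dir=$(mktemp -d)
cd "$dir" || exit 1

touch 'Copy of sqlnet.log' one.dbf two.dbf

# First field of 'ls -i' is the inode number of the awkward file.
inum=$(ls -i 'Copy of sqlnet.log' | awk '{print $1}')
echo "inode: $inum"

# find -exec passes the name as a single argument, so the embedded
# spaces are harmless; note the escaped semicolon.
find . -inum "$inum" -exec rm {} \;

ls    # only one.dbf and two.dbf remain
```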

Monday, July 14, 2008

Mounting a filesystem on Solaris

It happened once that I got a call on a Sunday night, around 3:30 AM Monday morning, that a database was down and would not come up after the server was rebooted. Like any other DBA, I went into the server, tried bringing up the database manually, and found an error at the SQL prompt that a file could not be created / the filesystem was not in use.

I immediately had to check the status of all the filesystems that constitute this database and found about four mount points missing. It was almost Sunday evening in the US and I could not get hold of an SA to bring up the filesystems. This required me to get in there and try to fix things myself; the privilege of root access through sudo was a vital advantage I had.

Normally it is the SA who knows the filesystem-related parameters needed to bring the filesystems up. A person like me, who was not involved when the filesystems were created, who did not know what parameters to pass to the UNIX mount command, and who only knew the simple mount syntax, is hardly the right one for such a scenario.

Yet, I gave it a shot at my own risk and started looking for ways to get the parameters to pass to the mount command. Where to look was the question I was asking myself. Then I remembered the file /etc/mnttab, which has an entry for every filesystem with its type, mount directory, and other filesystem-specific arguments.

Let me briefly explain the mnttab file, which is only accessible to the root user. It is a dynamic file whose contents change as filesystems are mounted and unmounted. Although in my situation the filesystems were not mounted, I still saw entries for them in the mnttab file, most importantly the mount options for each filesystem. This was presumably possible only because the cache had not been cleared.

Mount options are the fourth field in the mnttab file. The mount command takes a number of arguments, but the ones I used in this scenario were:

-F - Takes a value for the filesystem type (ufs/vxfs/nfs, etc.)
-o (lowercase) - Takes multiple other options for the filesystem type as a comma-separated list

Here is an example of using flags with "mount" command.

UNIX> mount -F ufs -o rw,intr,largefiles,logging,onerror=panic,suid /dev/dsk/c0t0d0s3 /export/oracle

The multiple values following the -o flag are explained below.

rw (read/write) - Indicates whether reads and writes are allowed on the file system.

suid (setuid) - Permits the execution of setuid programs in the file system.

intr/nointr - Allows and forbids keyboard interrupts to kill a process that is waiting for an operation on a locked file system.

nologging/logging - Controls the logging behaviour of the filesystem; logging here refers to the filesystem metadata.

largefiles - Allows for the creation of files larger than 2 Gbytes. A file system mounted with this option can contain files larger than 2 Gbytes.

onerror - Specifies the action that the ufs file system should take to recover from an internal inconsistency on a filesystem. An action can be specified as:

panic — Causes a forced system shutdown. This is the default.
lock — Applies a file system lock to the file system.
umount — Forcibly unmounts the file system.

/dev/dsk/c0t0d0s3 - This is the device to be mounted with a mount point directory.

/export/oracle - Mount point directory
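Given that layout (options in the fourth field of mnttab), the pieces needed for the mount command can be pulled out of an mnttab-style line with awk. A sketch against a sample line, since the real /etc/mnttab is root-only; the device, path, and timestamp below are made up:

```shell
#!/bin/sh
# Sample /etc/mnttab-style line: device, mount point, fstype, options, time.
line='/dev/dsk/c0t0d0s3 /export/oracle ufs rw,intr,largefiles,logging,onerror=panic,suid 1221680000'

# Field 4 holds the comma-separated mount options.
opts=$(echo "$line" | awk '{print $4}')
echo "options: $opts"

# Rebuild the mount command from the parsed pieces.
echo "$line" | awk '{printf "mount -F %s -o %s %s %s\n", $3, $4, $1, $2}'
```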

If the mount options are not available, it is still possible to bring the mount points online with just the device name and mount point directory, as shown below. A simple "mountall" also does the job; it mounts all the filesystems listed in the /etc/vfstab file.

The /etc/vfstab file has an entry for every mount point in the system, made manually by the SA, denoting that the filesystem should come online automatically when the system reboots. Below are the methods of mounting filesystems without specifying any mount options.

UNIX> mount /dev/dsk/c0t0d0s3 /export/oracle

or

UNIX> mountall

or

UNIX> mountall -l

Here -l restricts the action to local filesystems, excluding network-mapped filesystems.

The mount command above was executed for all the missing filesystems, with their appropriate mount-specific values from /etc/mnttab, to bring them up. To double-check the newly mounted filesystems, either a mount or a df command can be used to see that they are really up.

Sunday, July 13, 2008

ORA-01031: insufficient privileges

How many times have you come across the ORA-01031 error on Windows? The scenario in this post is specific to Windows; the same error on UNIX could often be resolved with a simple "chown -R" command.

ORA-01031: insufficient privileges

The error puzzles you so much that you consider changing user privileges on the very first go, which is indeed one of the steps to resolve it. But I had to do something else to get back to the SQL prompt.

I had installed an Oracle server on my PC, and the installation completed successfully with a starter database. I created some schemas, tables, and other objects of my choice, then shut the DB down, and shut my computer down as well when done. The next day, when I wanted to log in to the DB for some more R & D, I got the so-called "ORA-01031: insufficient privileges" error.

I was struck dumb for a while, since no one else had logged into my PC since it went down, and I had logged into the system with the usual login ID I use. But then I remembered changing something in the sqlnet.ora file the previous day. I thought of rolling back the changes, and fortunately that worked.

I have always had a habit of commenting out the "NAMES.DEFAULT_DOMAIN" parameter, as I did not want it interfering with connections between the different databases on my PC; but along with it I had commented out "SQLNET.AUTHENTICATION_SERVICES", whose value is "NTS" within parentheses. When "SQLNET.AUTHENTICATION_SERVICES" was uncommented and the login tried again, I was able to get back into the database. The contents of my sqlnet.ora file are shown below.


# SQLNET.ORA Network Configuration File: E:\oracle\ora91_EE\network\admin\sqlnet.ora
# Generated by Oracle configuration tools.

#NAMES.DEFAULT_DOMAIN = home.com

SQLNET.AUTHENTICATION_SERVICES= (NTS)

NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME)
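Instead of hand-editing, the commenting mistake can be undone with sed. A sketch against a scratch copy that reproduces the broken state (both parameters commented out) and then uncomments the authentication line; always work on a copy of the real sqlnet.ora first:

```shell
#!/bin/sh
# Work on a scratch copy of sqlnet.ora; never edit the real file untested.
f=$(mktemp)
cat > "$f" <<'EOF'
#NAMES.DEFAULT_DOMAIN = home.com
#SQLNET.AUTHENTICATION_SERVICES= (NTS)
NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME)
EOF

# Uncomment the authentication line by stripping the leading '#'.
sed 's/^#\(SQLNET.AUTHENTICATION_SERVICES\)/\1/' "$f" > "$f.new"
grep SQLNET.AUTHENTICATION_SERVICES "$f.new"
```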


Upon doing some more R & D with different PCs having multiple Oracle installations, it was found that the error ORA-12705: invalid or unknown NLS parameter value specified is also reported when the ORACLE_HOME is not set appropriately for a particular ORACLE_SID. Before proceeding any further, we need to understand that ORACLE_HOME is taken from two locations.

Location-1
Start -> Programs -> Oracle Installation Products -> Home Selector: This shows a drop-down box with the names of the Oracle installations and their default ORACLE_HOME paths. This has to be set to the appropriate value for the version of DB you want to work with.

Location-2
Right-click My Computer -> Properties -> Advanced (tab) -> Environment Variables: The System Variables section shows all the system-wide environment variables that are already set; ORACLE_HOME is one of them. Unless ORACLE_HOME is set here, echo %ORACLE_HOME% from the command prompt does not show its value. It is observed that this takes precedence over Location 1 above.

The ORA-12705 error came out of the blue when the ORACLE_HOME for a particular ORACLE_SID was not set correctly, and it was resolved immediately once the precise value of ORACLE_HOME was set in System Variables (Location 2).

There are many other workarounds for resolving the ORA-01031 error, which one applies only when confronted with such situations, or by simply poking around on MetaLink.

Tuesday, July 8, 2008

Recursive search for a pattern

I wish I could use "grep -R" the way other commands take an -R flag, viz., rm -R, chmod -R, chgrp -R, etc.

This is a situation most of us have come across: you know a particular file contains a specific pattern, but you do not know where exactly the file is located. This gets worse when the file is about 5 levels down the path; is that easy? Or make it 10, 15, 20 levels down.

Let us walk through this in detail. We are currently in "/user/oracle/dba/tst/tst1/" as the present working directory (pwd). Let us see the files and directories here with a simple ls -ltr.


HOST : /user/oracle/dba/tst/tst1> ls -ltr
total 6
drwxrwxr-x 3 oracle oracle 512 Mar 3 15:36 ./
drwxrwxr-x 3 oracle oracle 512 Mar 29 20:46 tst2/
drwxrwxr-x 3 oracle oracle 512 Mar 29 20:48 ../


Let us now change to tst2 and do an ls -ltr


HOST : /user/oracle/dba/tst/tst1> cd tst2
HOST : /user/oracle/dba/tst/tst1/tst2> ls -ltr
total 8
drwxrwxr-x 3 oracle oracle 512 Mar 3 15:36 ../
drwxrwxr-x 3 oracle oracle 512 Mar 3 15:36 tst3/
drwxrwxr-x 3 oracle oracle 512 Mar 29 20:46 ./
-rw-rw-r-- 1 oracle oracle 45 Mar 29 20:46 one


Under tst2 there is a file and a directory. This file "one" has the following contents.


HOST : /user/oracle/dba/tst/tst1/tst2> cat one
This is file named one under tst2/ directory


The directory "tst3" leads further down to "tst4", which contains another directory "tst5", which in turn holds a file named "two" whose contents are shown below.


HOST : /user/oracle/dba/tst/tst1/tst2> cd tst3
HOST : /user/oracle/dba/tst/tst1/tst2/tst3> ls -ltr
total 6
drwxrwxr-x 3 oracle oracle 512 Mar 3 15:36 tst4/
drwxrwxr-x 3 oracle oracle 512 Mar 3 15:36 ./
drwxrwxr-x 3 oracle oracle 512 Mar 29 20:46 ../


HOST : /user/oracle/dba/tst/tst1/tst2/tst3> cd tst4
HOST : /user/oracle/dba/tst/tst1/tst2/tst3/tst4> ls -tlr
total 6
drwxrwxr-x 3 oracle oracle 512 Mar 3 15:36 ../
drwxrwxr-x 3 oracle oracle 512 Mar 3 15:36 ./
drwxrwxr-x 2 oracle oracle 512 Mar 29 20:44 tst5/


HOST : /user/oracle/dba/tst/tst1/tst2/tst3/tst4> cd tst5
HOST : /user/oracle/dba/tst/tst1/tst2/tst3/tst4/tst5> ls -tlr
total 6
drwxrwxr-x 3 oracle oracle 512 Mar 3 15:36 ../
drwxrwxr-x 2 oracle oracle 512 Mar 29 20:44 ./
-rw-rw-r-- 1 oracle oracle 46 Mar 29 20:45 two


HOST : /user/oracle/dba/tst/tst1/tst2/tst3/tst4/tst5> cat two


This is file named two under tst5/ directory.

Let us not bother about anything except the files one and two, their contents, and their locations. File "one" is located under the "tst2/" directory, and file "two" is located about 3 directories further down, under the "tst5/" directory.

Now, from the "/user/oracle/dba/tst/tst1/" directory, we are looking for a file containing the pattern "file named", in the current directory and recursively through the nested directories. The crux of the exercise is that we do not know how far down to go; it could be 5 levels down the path or more.

Say, for instance, the file sits about 30 levels down the path. That sure makes me a little jittery if I have to walk through each directory and grep every file for the required pattern. Instead, I would grin at it and use a simple "find" command to achieve the result, as shown below.


HOST : /user/oracle/dba/tst/tst1> find ./ -print -exec grep "file named" {} \;
./
./tst2
./tst2/tst3
./tst2/tst3/tst4
./tst2/tst3/tst4/tst5
./tst2/tst3/tst4/tst5/two
This is file named two under tst5/ directory
./tst2/one
This is file named one under tst2/ directory


Picking out the exact filename you are looking for takes a little care. The pattern is displayed on the screen interleaved with the names of the files being searched; thank goodness grep prints the pattern it matched, like it always does.

The filename just above the pattern is the actual source of the contents. As shown above, "./tst2/tst3/tst4/tst5/two" is the file that has the pattern we are looking for. File "./tst2/one" is easy to spot, since it is at the end of the output.

USAGE: This method can be used when one needs to find a file with a known pattern that has to be changed for a fix. I have come across this situation quite a lot of times; I hope you have too.
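When only the filenames matter, grep's -l flag prints just the names of the files that contain the pattern, which saves reading the mixed output by eye. A reproducible sketch of the same directory tree in a scratch area:

```shell
#!/bin/sh
# Recreate the tst1/.../tst5 layout in a scratch area.
base=$(mktemp -d)
mkdir -p "$base/tst2/tst3/tst4/tst5"
echo 'This is file named one under tst2/ directory'  > "$base/tst2/one"
echo 'This is file named two under tst5/ directory.' > "$base/tst2/tst3/tst4/tst5/two"

# -type f skips directories; grep -l prints only the names of matching files.
cd "$base" && find . -type f -exec grep -l "file named" {} \;
```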

Friday, July 4, 2008

Oracle 9i Database Parameters - Reference

A lot of times we come across init.ora parameters we are not exactly sure about: what they actually do, their default values, whether they are static or dynamic, and all that jazz. Here is a quick link to the parameter descriptions for Oracle 9iR2, a link to the parameters that can be changed at session level with an ALTER SESSION statement, and a link to the explanation of all the v$ views.

Thursday, July 3, 2008

Inodes and Links

INODES
When a filesystem is created on a UNIX OS, a range of inode numbers is assigned to it. Any file/directory created under the filesystem gets an inode number from that range. So there is an association of a file/directory with an inode number, and the inode provides access to the OS data blocks.

In general, inodes contain two parts logically.
First: inodes contain information about the file, including its owner, its permissions, and its size.
Second: inodes contain pointers to data blocks associated with the file.

The first field of a 'ls -li' command output denotes the inode of the corresponding file/directory. Within a single filesystem, no two files/directories can have the same inode number (unless they are hard links to each other); across different filesystems, however, the same inode number can legitimately reappear. It is the (filesystem, inode) pair that is unique in the whole of the OS.
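This identity can be inspected with GNU stat, which reports both the device (filesystem) number and the inode number. A minimal sketch, assuming Linux with GNU coreutils:

```shell
# A file's identity is the (device, inode) pair: %d is the device
# number and %i the inode number in GNU stat's format syntax.
cd "$(mktemp -d)"
touch demo
stat -c 'device=%d inode=%i' demo
```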

LINKS
A link can be thought of as a pointer from one file to another. The third field of a 'ls -li' command output denotes the number of links for a file/directory. Links are of two types.

Hard links
Syntax : ln source_filename link_name

A hard link always increments the link count of the file/directory it points to; the link count of a file/directory equals the number of directory entries (names) that refer to its inode.

UNIX-$> ls -ltri one
121674 -rwxr-xr-x 1 oracle oracle 184 Feb 22 12:07 one

In the above output the third field is equal to 1. Now, a hard link is created for the file 'one'

UNIX-$> ln one one_1
UNIX-$> ls -ltri one*

121674 -rwxr-xr-x 2 oracle oracle 184 Feb 22 12:07 one_1
121674 -rwxr-xr-x 2 oracle oracle 184 Feb 22 12:07 one

The above example shows that only the filename differs; everything else remains unchanged. Both names share the same inode number and the same link count. This is also why a hard link cannot span filesystems: the link is just another directory entry pointing at the same inode, and an inode number is only meaningful within the filesystem it belongs to. Removing one of the two names does not affect the existence of the other.

UNIX-$> rm one
UNIX-$> ls -ltr one*

121674 -rwxr-xr-x 1 oracle oracle 184 Feb 22 13:19 one_1
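The whole hard-link sequence above can be replayed in a scratch directory. A sketch assuming GNU coreutils, where stat's %i is the inode number and %h the link count:

```shell
cd "$(mktemp -d)"
echo "payload" > one
ln one one_1                   # add a second name for the same inode
stat -c '%i %h %n' one one_1   # identical inodes, link count 2 for both
rm one                         # drop one of the two names...
stat -c '%i %h %n' one_1       # ...the link count falls back to 1
cat one_1                      # the data is still intact
```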

Soft links
Syntax : ln -s source_filename link_name

This is also called a symbolic link, though it is more commonly known as a soft link in technical jargon (as far as I know). A soft link does not share the inode number of the file it points to. A point to note about a soft link is the 'l' in the file-type position of the permissions field in the 'ls -ltri' output. Soft links can span filesystems, and the inode number of the link depends on where the link itself is created.

A soft link always gets its inode number from the filesystem on which the link itself is created: if that is the same filesystem as the source file, the inode comes from the same range; if not, it comes from the range of the filesystem holding the link.

UNIX-$> ln -s one_1 one_2
UNIX-$> ls -ltri one*

121674 -rwxr-xr-x 1 oracle oracle 184 Feb 22 13:19 one_1
121644 lrwxrwxrwx 1 oracle oracle 3 Feb 22 13:20 one_2 -> one_1

The above soft link is created in the same location as that of the source file. The one shown below is created in a different filesystem, hence a different inode number range.

846 lrwxrwxrwx 1 oracle oracle 30 Feb 22 13:21 one_3 -> /ora1/oracle/dba/one_1

In both the examples above the link count is unchanged, but the size differs, and the name of the link carries a pointer to the original file. So a soft link stores only the path of the original file. However, commands like vi, cat, more or pg run on 'one_3' or 'one_2' behave as if they were executed on the original file.
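A quick sketch of that pointer behaviour in a scratch directory: readlink prints the stored target path, while cat follows it through to the real file.

```shell
cd "$(mktemp -d)"
echo "payload" > one_1
ln -s one_1 one_2
readlink one_2   # the stored pointer: prints "one_1"
cat one_2        # follows the pointer: prints "payload"
```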

Removing a soft link calls for caution: one should not copy the 'link -> target' text from the ls output into the rm command, as it has adverse effects.

UNIX-$> rm one_2 -> one_1

In the above command the shell parses '>' as output redirection, so it removes the soft link and then truncates the source file 'one_1' to zero bytes (rm also complains about the stray '-'). The advisable command to remove a soft link names only the link itself, as shown below.

UNIX-$> rm one_2
UNIX-$> ls -ltri one*

121674 -rwxr-xr-x 1 oracle oracle 184 Feb 22 13:19 one_1
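The mishap described above can be reproduced harmlessly in a scratch directory; the shell truncates the target via the '>' redirection before rm even runs. A sketch only, do not try this on real files:

```shell
cd "$(mktemp -d)"
echo "original data" > one_1
ln -s one_1 one_2
# Copying the "one_2 -> one_1" text from ls into rm: the shell
# reads "-" as an operand rm cannot remove and "> one_1" as a
# redirection that truncates one_1 to zero bytes.
rm one_2 -> one_1 2>/dev/null || true
ls -l one_1   # one_1 survives, but is now empty
```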

I hope this material provides a simple but effective explanation of inodes and links.