Adding a New ASM Disk Group in VMware OEL.
Oracle Database Version: 10.2.0.1
Oracle Real Application Clusters
Oracle ASM
Oracle Enterprise Linux
VMware Server 1.0.4
Log in as the root user on both nodes; as this is a cluster database, the ASM disks must be configured on both nodes.
Before going further, make sure that the hardware (the new raw devices) has been added and is available to both nodes.
1. Check the existing raw devices and verify their availability by scanning the disks as shown below. This should be done on both nodes.
****************************
[root@rac1 ~]# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
[root@rac1 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
[root@rac1 ~]#
[root@rac2 ~]# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
[root@rac2 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
[root@rac2 ~]#
*****************************
2. Check the newly added disks using fdisk -l. The new devices (/dev/sdf, /dev/sdg) should show up without a valid partition table. This should be done on both nodes.
************************************************
[root@rac1 ~]# fdisk -l
[…Truncated]
Device Boot Start End Blocks Id System
/dev/sde1 1 261 2096451 83 Linux
Disk /dev/sdf: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdf doesn't contain a valid partition table
Disk /dev/sdg doesn't contain a valid partition table
[root@rac1 ~]#
******************************************************
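If the newly presented virtual disks are not visible in fdisk -l yet, the SCSI bus can usually be rescanned without a reboot. A minimal sketch, assuming a 2.6-era kernel with the sysfs scan interface (host numbers vary per system); run as root on both nodes:

```shell
# List the block devices the kernel currently knows about
cat /proc/partitions

# Rescan every SCSI host for newly presented virtual disks
# ("- - -" is a wildcard: all channels, all targets, all LUNs); run as root
for host in /sys/class/scsi_host/host*; do
    [ -e "$host" ] || continue
    echo "- - -" 2>/dev/null > "$host/scan"
done

# The new devices (e.g. /dev/sdf, /dev/sdg) should now appear
cat /proc/partitions
```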
3. Create a partition on each of the newly added disks (/dev/sdf, /dev/sdg), preparing them as raw disks for Oracle ASM.
********************************************
[root@rac1 ~]# fdisk /dev/sdf
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 1044.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1044, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1044, default 1044):
Using default value 1044
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@rac1 ~]#
* Repeat the same fdisk steps for /dev/sdg.
***********************************************************
4. Check the partition information using fdisk -l again.
Disk /dev/sdf: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdf1 1 1044 8385898+ 83 Linux
5. Create the ASM disks on any one of the nodes as the root user.
[root@rac1 ~]# /etc/init.d/oracleasm createdisk VOL4 /dev/sdf1
Marking disk "/dev/sdf1" as an ASM disk: [ OK ]
[root@rac1 ~]#
[root@rac1 ~]# /etc/init.d/oracleasm createdisk VOL5 /dev/sdg1
Marking disk "/dev/sdg1" as an ASM disk: [ OK ]
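After creating the ASM disks, the label-to-device mapping can be double-checked with the querydisk subcommand of the same oracleasm init script. A sketch, assuming the default ASMLib mount point:

```shell
# Confirm each ASM disk label and the device it is bound to
/etc/init.d/oracleasm querydisk VOL4
/etc/init.d/oracleasm querydisk VOL5

# The labels are also visible as device files under the ASMLib mount point
ls -l /dev/oracleasm/disks/
```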
6. Make sure the ASM disks are visible from every node.
*********************************************
[root@rac1 ~]# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
[root@rac1 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
VOL4
VOL5
On node rac2:
[root@rac2 ~]# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
[root@rac2 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
VOL4
VOL5
[root@rac2 ~]#
*********************************
7. Log in as the oracle user and set up your X display to run DBCA.
8. Operations: Select Configure Automatic Storage Management from the Database Configuration Assistant (DBCA) and click Next.
9. Node Selection: Select all the nodes listed; as it is a cluster database, the disk group should be visible to both nodes.
10. ASM Disk Groups: Click Create New to create a new disk group.
11. Create Disk Group: Enter the name and redundancy (normal), and select the member disks for the new group: VOL4 and VOL5. Select both, as normal redundancy requires at least two member disks.
12. ASM Disk Groups: The disk group just created now appears on the screen.
13. Click Finish to complete the new disk group creation. You can now add disks and create tablespaces in the new disk group.
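As an alternative to DBCA, the same disk group can be created from SQL*Plus against the ASM instance. A minimal sketch, assuming a hypothetical disk group name DG3 and the ASMLib disk strings for the volumes created above:

```shell
# Run on one node only, as the oracle user, against the ASM instance
export ORACLE_SID=+ASM1
sqlplus / as sysdba <<'EOF'
-- ORCL:VOL4 / ORCL:VOL5 are the ASMLib disk strings for the new volumes
CREATE DISKGROUP DG3 NORMAL REDUNDANCY
  DISK 'ORCL:VOL4', 'ORCL:VOL5';
EOF
```

On 10.2 the new disk group is mounted only on the instance where CREATE DISKGROUP was issued; it must then be mounted on the other ASM instance (ALTER DISKGROUP DG3 MOUNT), which DBCA otherwise handles for you.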
14. Verify from ASMCMD as well:
rac1-> export ORACLE_SID=+ASM1
rac1-> asmcmd
ASMCMD> lsdg
State    Type    Rebal  Unbal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Name
MOUNTED  NORMAL  N      N         512   4096  1048576      6134     3352                0            1676              0  DG1/
MOUNTED  NORMAL  N      N         512   4096  1048576     16378    16192                0            8096              0  DG2/
MOUNTED  EXTERN  N      N         512   4096  1048576      2047     1873                0            1873              0  RECOVERYDEST/
ASMCMD> ls
DG1/
DG2/
RECOVERYDEST/
ASMCMD>
Automatic Storage Management is a feature introduced in Oracle Database 10g that provides a solution to the DBA's storage management challenges.
ASM enables the DBA to change the storage configuration without having to take the database offline, and it automatically rebalances files across the disk group after disks have been added or dropped.
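This rebalance behaviour can also be driven and observed manually from the ASM instance. A sketch, assuming the disk group DG1 from above and a hypothetical spare ASMLib volume VOL6:

```shell
export ORACLE_SID=+ASM1
sqlplus / as sysdba <<'EOF'
-- Adding a disk triggers an automatic rebalance; POWER raises its speed (0-11)
ALTER DISKGROUP DG1 ADD DISK 'ORCL:VOL6' REBALANCE POWER 4;

-- Watch the rebalance progress; no rows selected means the operation has finished
SELECT group_number, operation, state, power, est_minutes
  FROM v$asm_operation;
EOF
```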