Applies to: Oracle Server - Enterprise Edition - Version 11.2.0.1.0 to 11.2.0.2 [Release 11.2]
Information in this document applies to any platform.
It is not possible to directly restore a manual or automatic OCR backup if the OCR is located in an ASM disk group, because the command 'ocrconfig -restore' requires ASM to be up and running in order to restore an OCR backup to an ASM disk group. However, for ASM to be available, the CRS stack must have been started successfully. In addition, the OCR must not be in use (r/w) during the restore, i.e. no CRS daemon may be running while the OCR is being restored.
A description of the general procedure to restore the OCR can be found in the documentation. This document explains how to recover from a complete loss of the ASM disk group that held the OCR and Voting files in an 11gR2 Grid Infrastructure environment.
When using an ASM disk group for CRS, there are typically three types of files in the disk group that may need to be restored or recreated:
- the Oracle Cluster Registry file (OCR)
- the Voting file(s)
- the shared SPFILE for the ASM instances
Since the CRS disk group has been lost, the CRS stack will not be available on any node.
The following settings used in the example would need to be replaced according to the actual configuration:
GRID user: oragrid
GRID home: /u01/app/11.2.0/grid ($CRS_HOME)
ASM disk group name for OCR: CRS
ASM/ASMLIB disk name: ASMD40
Linux device name for ASM disk: /dev/sdh1
Cluster name: rac_cluster1
Nodes: racnode1, racnode2
1. Locate the latest automatic OCR backup
When using a non-shared CRS home, automatic OCR backups can be located on any node of the cluster; consequently, all nodes need to be checked for the most recent backup:
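A sketch of this check, using the Grid home and cluster name from the example configuration above (the exact backup file names reported may differ on your system):

```shell
# As the grid user, list the known OCR backups on this node
/u01/app/11.2.0/grid/bin/ocrconfig -showbackup

# Automatic backups are kept under the Grid home by default; compare
# timestamps across all nodes to find the most recent copy
ls -lt /u01/app/11.2.0/grid/cdata/rac_cluster1/
```

Repeat on every node (racnode1, racnode2) and note which node holds the newest backup; the remaining steps are run on that node.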
2. Make sure the Grid Infrastructure is shut down on all nodes
Given that the OCR disk group is missing, the GI stack will not be functional on any node; however, various daemon processes may still be running. On each node, shut down the GI stack using the force (-f) option:
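For example, using the Grid home from the example configuration:

```shell
# As root, on each node: force-stop whatever remains of the GI stack
/u01/app/11.2.0/grid/bin/crsctl stop crs -f
```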
3. Start the CRS stack in exclusive mode
On the node that has the most recent OCR backup, log on as root and start CRS in exclusive mode. This mode allows ASM to start and stay up without the presence of a Voting disk and without the CRS daemon process (crsd.bin) running.
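A sketch of the exclusive-mode startup; the -nocrs variant shown in the comment applies to 11.2.0.2 and later, where it additionally prevents the CRS daemon from starting:

```shell
# As root, on the node holding the most recent OCR backup:
/u01/app/11.2.0/grid/bin/crsctl start crs -excl

# On 11.2.0.2, use the -nocrs option as well:
# /u01/app/11.2.0/grid/bin/crsctl start crs -excl -nocrs
```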
4. Label the CRS disk for ASMLIB use
If ASMLIB is used, the disk for the CRS disk group needs to be stamped first. As the root user, do:
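For example, with the ASMLIB disk name and Linux device from the example configuration:

```shell
# As root: label the partition so ASMLIB presents it as ASMD40
/usr/sbin/oracleasm createdisk ASMD40 /dev/sdh1
```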
5. Create the CRS diskgroup via sqlplus
The disk group can now be (re-)created via sqlplus as the grid user. The compatible.asm attribute must be set to 11.2 in order for the disk group to be used by CRS:
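A sketch of the disk group creation; external redundancy is an assumption here, so adjust the redundancy clause (and the disk list) to match the original configuration:

```shell
# As the grid user, connected to the local ASM instance
sqlplus / as sysasm <<'EOF'
create diskgroup CRS external redundancy
  disk 'ORCL:ASMD40'
  attribute 'compatible.asm' = '11.2';
EOF
```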
6. Restore the latest OCR backup
Now that the CRS disk group is created and mounted, the OCR can be restored. This must be done as the root user:
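For example, assuming the backup identified in step 1 was the default most recent automatic backup (the file name backup00.ocr is illustrative; use the path reported by 'ocrconfig -showbackup'):

```shell
# As root: restore the OCR from the backup located in step 1
/u01/app/11.2.0/grid/bin/ocrconfig -restore /u01/app/11.2.0/grid/cdata/rac_cluster1/backup00.ocr
```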
7. Start the CRS daemon on the current node (11.2.0.1 only!)
Now that the OCR has been restored, the CRS daemon can be started; this is needed to recreate the Voting file. Skip this step for 11.2.0.2.0.
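For example:

```shell
# As root, on the current node only: start the crsd resource
/u01/app/11.2.0/grid/bin/crsctl start res ora.crsd -init
```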
8. Recreate the Voting file
The Voting file needs to be initialized in the CRS disk group:
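Using the disk group name from the example configuration:

```shell
# As root: place a new Voting file in the CRS disk group
/u01/app/11.2.0/grid/bin/crsctl replace votedisk +CRS
```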
9. Recreate the SPFILE for ASM (optional)
Prepare a pfile (e.g. /tmp/asm_pfile.ora) with the ASM startup parameters; these may vary from the example below. If in doubt, consult the ASM alert log, as the ASM instance startup should list all non-default parameter values. Please note that the last startup of ASM (in step 3, via the exclusive-mode CRS start) will not have used an SPFILE, so a startup prior to the loss of the CRS disk group needs to be located.
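A hypothetical pfile for illustration only; the parameter values below are placeholders and must be replaced with the values found in the ASM alert log:

```shell
# Write an example pfile - every value here is an assumption,
# taken from a typical default 11.2 ASM configuration
cat > /tmp/asm_pfile.ora <<'EOF'
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oragrid'
*.instance_type='asm'
*.large_pool_size=12M
EOF
```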
Now the SPFILE can be created using this PFILE:
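For example, placing the SPFILE in the restored CRS disk group:

```shell
# As the grid user, connected to the running ASM instance
sqlplus / as sysasm <<'EOF'
create spfile='+CRS' from pfile='/tmp/asm_pfile.ora';
EOF
```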
10. Shut down CRS
Since CRS is running in exclusive mode, it needs to be shut down to allow CRS to run on all nodes again. Use of the force (-f) option may be required:
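For example:

```shell
# As root, on the node that was running in exclusive mode
/u01/app/11.2.0/grid/bin/crsctl stop crs -f
```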
11. Rescan ASM disks
If using ASMLIB rescan all ASM disks on each node as the root user:
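For example:

```shell
# As root, on every node: pick up the newly labeled ASMD40 disk
/usr/sbin/oracleasm scandisks
```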
12. Start CRS
As the root user submit the CRS startup on all cluster nodes:
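For example:

```shell
# As root, on every node: start the full Grid Infrastructure stack
/u01/app/11.2.0/grid/bin/crsctl start crs
```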
13. Verify CRS
To verify that CRS is fully functional again:
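A sketch of the verification, run as the root or grid user once the stack has had time to come up on all nodes:

```shell
# Check the health of the cluster stack on all nodes
/u01/app/11.2.0/grid/bin/crsctl check cluster -all

# List all resources and their current states
/u01/app/11.2.0/grid/bin/crsctl stat res -t

# Confirm OCR integrity after the restore
/u01/app/11.2.0/grid/bin/ocrcheck
```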