Tuesday, April 4, 2017

Upgrading Oracle Single Instance with ASM (Oracle Restart) from 12.1.0.2 to 12.2.0.1

This post lists the steps for upgrading an Oracle Restart environment (single instance non-CDB on ASM) from 12.1.0.2 to 12.2.0.1. The 12.1 setup was on RHEL 7 and had an OS upgrade from RHEL 7.0 to RHEL 7.3. The current versions of the GI are:
[grid@rhel7 ~]$ crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
[grid@rhel7 ~]$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [12.1.0.2.0]
The Oracle Restart environment used is a role-separated setup where GI is installed as the grid user while the Oracle database software is installed as the oracle user. The GI and Oracle homes had the latest PSU applied (Database Patch Set Update: 12.1.0.2.170117). However, the orachk pre-upgrade check listed the following patches as missing.
./orachk -u -o pre
...
 WARNING => Oracle patch 21856522 is not applied on RDBMS_HOME /opt/app/oracle/product/12.1.0/dbhome_2
 WARNING => Oracle patch 20958816 is not applied on RDBMS_HOME /opt/app/oracle/product/12.1.0/dbhome_2
...
MOS note 2180188.1 lists the patches to apply before upgrading to 12.2. Cluvfy could be used to further check the system setup.
./runcluvfy.sh stage -pre hacfg
This option is not an upgrade check as such, but can be used to verify the readiness of the Oracle Restart installation.
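Before downloading the patches from MOS note 2180188.1, the DB home's inventory can be queried to confirm which ones are genuinely missing. A minimal sketch (the home path and patch IDs are the ones from this setup):

```shell
# Check the 12.1 DB home inventory for the patches orachk flagged
# as missing (home path and patch IDs are from this setup).
DB_HOME=/opt/app/oracle/product/12.1.0/dbhome_2
applied=$("$DB_HOME/OPatch/opatch" lspatches 2>/dev/null || true)
for p in 21856522 20958816; do
  case "$applied" in
    *"$p"*) echo "patch $p: applied" ;;
    *)      echo "patch $p: NOT applied" ;;
  esac
done
```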
Unlike previous versions, the 12.2 grid infrastructure software must be unzipped directly into the grid home; there is no GI home location selection in the OUI. Create the new directory path for the 12.2 GI and unzip the GI bundle.
mkdir -p /opt/app/oracle/product/12.2.0/grid
cp ~/linuxx64_12201_grid_home.zip /opt/app/oracle/product/12.2.0/grid
cd /opt/app/oracle/product/12.2.0/grid
unzip linuxx64_12201_grid_home.zip 
ASM will be upgraded as part of the upgrade process. Therefore, stop the database before starting the GI upgrade.
srvctl stop database -d se2db
Start the GI upgrade by running the gridSetup.sh from the grid home.
[grid@rhel7 grid]$ ./gridSetup.sh
This starts the GI installer; select the upgrade GI option.
As mentioned earlier, the installer prompts to shut down the DB, as ASM is upgraded.
Optionally, the GI could be registered with Cloud Control.
OS groups for ASM authentication; these are the same as in the 12.1 setup.
The Oracle base location can be selected but not the GI home location, which is fixed by where the GI software was extracted.
Running of the root scripts can be automated by providing the root password or a user with sudo privileges.
Prerequisite checks. The minimum memory requirement is 8GB. As this is a test setup, this requirement was ignored before proceeding to the next step.
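The memory prerequisite can be checked up front from the shell before launching the installer. A small sketch (8GB is the minimum flagged by the installer; reads /proc/meminfo, so Linux only):

```shell
# Compare physical memory against the 8 GB minimum flagged by the
# 12.2 installer prerequisite check (Linux; reads /proc/meminfo).
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo 2>/dev/null)
mem_kb=${mem_kb:-0}
if [ "$mem_kb" -ge $((8 * 1024 * 1024)) ]; then
  echo "memory OK: ${mem_kb} kB"
else
  echo "below the 8 GB minimum: ${mem_kb} kB (ignorable on a test setup)"
fi
```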
Summary page and progress.
Depending on the option selected (automatically running the root scripts or running them manually), one of the following will be presented.
In either case (auto run or manual run), the root upgrade script failed due to an ACFS-related known issue. The output below shows when rootupgrade.sh was run manually.
[root@rhel7 grid]# ./rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /opt/app/oracle/product/12.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/oracle/product/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/app/oracle/crsdata/rhel7/crsconfig/roothas_2017-04-03_05-38-44PM.log
2017/04/03 17:38:45 CLSRSC-363: User ignored prerequisites during installation

ASM has been upgraded and started successfully.

Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node rhel7 successfully pinned.
2017/04/03 17:39:32 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'

CRS-4123: Oracle High Availability Services has been started.

2017/04/03 17:58:46 CLSRSC-482: Running command: 'srvctl upgrade model -s 12.1.0.2.0 -d 12.2.0.1.0 -p first'
2017/04/03 17:58:51 CLSRSC-482: Running command: 'srvctl upgrade model -s 12.1.0.2.0 -d 12.2.0.1.0 -p last'

rhel7     2017/04/03 17:58:54     /opt/app/oracle/product/12.2.0/grid/cdata/rhel7/backup_20170403_175854.olr     0

rhel7     2015/09/29 16:40:16     /opt/app/oracle/product/12.1.0/grid/cdata/rhel7/backup_20150929_164016.olr     0
2017/04/03 17:59:27 CLSRSC-400: A system reboot is required to continue installing.
The command '/opt/app/oracle/product/12.2.0/grid/perl/bin/perl -I/opt/app/oracle/product/12.2.0/grid/perl/lib -I/opt/app/oracle/product/12.2.0/grid/crs/install 
/opt/app/oracle/product/12.2.0/grid/crs/install/roothas.pl  -upgrade' execution failed
Looking at the log file, it can be seen that the error is related to ACFS:
2017-04-03 17:59:26: /opt/app/oracle/product/12.2.0/grid/bin/crsctl disable has ... disabling CRS in preparation for reboot.
2017-04-03 17:59:26: Executing cmd: /opt/app/oracle/product/12.2.0/grid/bin/crsctl disable has
2017-04-03 17:59:27: Command output:
>  CRS-4621: Oracle High Availability Services autostart is disabled.
>End Command output
2017-04-03 17:59:27: /opt/app/oracle/product/12.2.0/grid/bin/crsctl disable has ... CRS disabled, ready for reboot.
2017-04-03 17:59:27: USM driver install status is 3
2017-04-03 17:59:27: ACFS drivers unable to be installed.
2017-04-03 17:59:27: Executing cmd: /opt/app/oracle/product/12.2.0/grid/bin/clsecho -p has -f clsrsc -m 400
2017-04-03 17:59:27: Command output:
>  CLSRSC-400: A system reboot is required to continue installing.
>End Command output
2017-04-03 17:59:27: CLSRSC-400: A system reboot is required to continue installing.
According to MOS note 2025056.1, this issue can be ignored if ACFS is not used or not supported. Rootupgrade scripts are restartable when upgrading to 11.2.0.2 or above (1364947.1). During the subsequent execution of rootupgrade.sh, the script completed without any issue.
[root@rhel7 grid]# ./rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /opt/app/oracle/product/12.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/oracle/product/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/app/oracle/crsdata/rhel7/crsconfig/roothas_2017-04-03_06-09-16PM.log
2017/04/03 18:09:16 CLSRSC-363: User ignored prerequisites during installation
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel7'
CRS-2673: Attempting to stop 'ora.evmd' on 'rhel7'
CRS-2677: Stop of 'ora.evmd' on 'rhel7' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel7' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/04/03 18:10:35 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
When the root script failure happens during auto run of the root scripts, click the Retry button to run the root script again. It was found that after the initial failure, the subsequent execution completes successfully.
The GI software versions are now upgraded to 12.2:
[grid@rhel7 ~]$ crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [12.2.0.1.0]
[grid@rhel7 ~]$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [12.2.0.1.0]
Cluvfy could be used for post upgrade verification.
cluvfy stage -post hacfg

Verifying Oracle Restart Integrity ...PASSED
Verifying OLR Integrity ...PASSED

Post-check for Oracle Restart configuration was successful.

CVU operation performed:      stage -post hacfg
Date:                         04-Apr-2017 13:05:07
CVU home:                     /opt/app/oracle/product/12.2.0/grid/
User:                         grid


The next step is the upgrade of the database software. This is done with an out-of-place, software-only installation of the 12.2 DB software. One new thing introduced in 12.2 is an OS group specific to RAC administration. Quoting the install guide: "You must designate a group as the OSRACDBA group during database installation. Members of this group are granted the SYSRAC privileges to perform day–to–day administration of Oracle databases on an Oracle RAC cluster". This group could be used for Oracle Restart as well (this does lead to an issue, which is mentioned towards the end of the post). In order to use this OS group, create it as root and assign it to the oracle user.
[root@rhel7 oracle]# groupadd racdba
[root@rhel7 oracle]# usermod -g oinstall -G dba,oper,asmoper,asmdba,asmadmin,backupdba,dgdba,kmdba,racdba oracle
[root@rhel7 oracle]# id oracle
uid=1001(oracle) gid=1001(oinstall) groups=1001(oinstall),1002(dba),1003(oper),1005(asmoper),1006(asmdba),1007(asmadmin),1008(backupdba),1009(dgdba),1010(kmdba),1011(racdba)
During the software installation select this group for RAC administration.
Summary
Once the DB software is installed, the last step is to upgrade the database. 12.2 provides a new pre-upgrade check tool which comes in the form of a jar file (12.2_Oracle_Home/rdbms/admin/preupgrade.jar). This can be used to verify the pre-upgrade status of the database. Before running the tool, set the ORACLE_HOME, ORACLE_BASE, ORACLE_SID and PATH environment variables. Below is the output of running the pre-upgrade tool.
[oracle@rhel7 ~]$ $ORACLE_HOME/jdk/bin/java -jar preupgrade.jar TERMINAL
Report generated by Oracle Database Pre-Upgrade Information Tool Version
12.2.0.1.0

Upgrade-To version: 12.2.0.1.0

=======================================
Status of the database prior to upgrade
=======================================

      Database Name:  SE2DB
     Container Name:  SE2DB
       Container ID:  0
            Version:  12.1.0.2.0
         Compatible:  12.1.0.2.0
          Blocksize:  8192
           Platform:  Linux x86 64-bit
      Timezone File:  18
  Database log mode:  ARCHIVELOG
           Readonly:  FALSE
            Edition:  SE

  Oracle Component                       Upgrade Action    Current Status
  ----------------                       --------------    --------------
  Oracle Server                          [to be upgraded]  VALID
  Oracle Workspace Manager               [to be upgraded]  VALID
  Oracle Text                            [to be upgraded]  VALID
  Oracle XML Database                    [to be upgraded]  VALID


==============
BEFORE UPGRADE
==============

  Run /preupgrade_fixups.sql to complete all
  of the BEFORE UPGRADE action items below marked with '(AUTOFIXUP)'.

  REQUIRED ACTIONS
  ================
   + Adjust TABLESPACE SIZES as needed.
                                                Auto      12.2.0.1.0
     Tablespace                        Size     Extend    Min Size    Action
     ----------                     ----------  --------  ----------  ------

     SYSAUX                             550 MB  DISABLED      500 MB  None
     SYSTEM                             700 MB  ENABLED       797 MB  None
     TEMP                                20 MB  ENABLED       150 MB  None
     UNDOTBS1                           200 MB  ENABLED       400 MB  None

     Note that 12.2.0.1.0 minimum sizes are estimates.
     If you plan to upgrade multiple pluggable databases concurrently,
     then you must ensure that the UNDO tablespace size is equal to at least
     the number of pluggable databases that you upgrade concurrently,
     multiplied by that minimum.  Failing to allocate sufficient space can
     cause the upgrade to fail.

  RECOMMENDED ACTIONS
  ===================
   + Run 12.1.0.2.0 $ORACLE_HOME/rdbms/admin/utlrp.sql to recompile invalid
     objects.  You can view the individual invalid objects with

       SET SERVEROUTPUT ON;
       EXECUTE DBMS_PREUP.INVALID_OBJECTS;

     5 objects are INVALID.

     There should be no INVALID objects in SYS/SYSTEM or user schemas before
     database upgrade.

   + (AUTOFIXUP) Gather stale data dictionary statistics prior to database
     upgrade in off-peak time using:

      EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

     Dictionary statistics do not exist or are stale (not up-to-date).

     Dictionary statistics help the Oracle optimizer find efficient SQL
     execution plans and are essential for proper upgrade timing. Oracle
     recommends gathering dictionary statistics in the last 24 hours before
     database upgrade.

     For information on managing optimizer statistics, refer to the 12.1.0.2
     Oracle Database SQL Tuning Guide.

=============
AFTER UPGRADE
=============

  Run /postupgrade_fixups.sql to complete all
  of the AFTER UPGRADE action items below marked with '(AUTOFIXUP)'.

  REQUIRED ACTIONS
  ================
  None

  RECOMMENDED ACTIONS
  ===================
   + Upgrade the database time zone version using the DBMS_DST package.

     The database is using timezone datafile version 18 and the target
     12.2.0.1.0 database ships with timezone datafile version 26.

     Oracle recommends using the most recent timezone data.  For further
     information, refer to My Oracle Support Note 1585343.1.

   + (AUTOFIXUP) Gather dictionary statistics after the upgrade using the
     command:

       EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

     Oracle recommends gathering dictionary statistics after upgrade.

     Dictionary statistics provide essential information to the Oracle
     optimizer to help it find efficient SQL execution plans. After a
     database upgrade, statistics need to be re-gathered as there can now be
     tables that have significantly changed during the upgrade or new tables
     that do not have statistics gathered yet.

Preupgrade generated files:
    /opt/app/oracle/cfgtoollogs/se2db/preupgrade/preupgrade_fixups.sql
    /opt/app/oracle/cfgtoollogs/se2db/preupgrade/postupgrade_fixups.sql
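The UNDO sizing note in the report above implies a simple rule: minimum UNDO size scales with the number of PDBs upgraded concurrently. Although this database is a non-CDB, the arithmetic is worth noting for CDB upgrades. A back-of-the-envelope sketch (the 400 MB minimum is from the report above; the PDB count is a hypothetical example):

```shell
# Minimum UNDO size = concurrent PDB upgrades x per-PDB minimum.
# 400 MB is the UNDOTBS1 minimum reported by the pre-upgrade tool;
# the PDB count here is a hypothetical example.
min_undo_mb=400
concurrent_pdbs=3
required_mb=$((min_undo_mb * concurrent_pdbs))
echo "UNDO tablespace should be at least ${required_mb} MB"
```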
Run the preupgrade_fixups.sql script to rectify any pre-upgrade issues raised. The timezone file version used by 12.1 is 18, and this will be upgraded to 26 as part of the DB upgrade.
SQL> select * from v$timezone_file;

FILENAME                VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_18.dat              18          0
Run the DBUA from 12.2 home to begin the database upgrade.
Upgrade Summary
Upgrade Results
Once the DBUA completes the upgrade, run the post-upgrade fixup script mentioned by the pre-upgrade tool.
@/opt/app/oracle/cfgtoollogs/se2db/preupgrade/postupgrade_fixups.sql

Executing Oracle POST-Upgrade Fixup Script

Auto-Generated by:       Oracle Preupgrade Script
                         Version: 12.2.0.1.0 Build: 1
Generated on:            2017-03-31 18:27:51

For Source Database:     SE2DB
Source Database Version: 12.1.0.2.0
For Upgrade to Version:  12.2.0.1.0

                          Fixup
Check Name                Status  Further DBA Action
----------                ------  ------------------
old_time_zones_exist      Passed  None
post_dictionary           Passed  None

PL/SQL procedure successfully completed.
Verify the timezone file has been updated
SQL> select * from v$timezone_file;

FILENAME                VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_26.dat              26          0
After the upgrade, using srvctl to administer the database results in an error as shown below.
[oracle@rhel7 ~]$ srvctl stop database -db se2db
PRCD-1334 : failed to stop database se2db
PRCD-1124 : Failed to stop database se2db and its services
PRCD-1131 : Failed to stop database se2db and its services on nodes
PRCR-1133 : Failed to stop database se2db and its running services
PRCR-1132 : Failed to stop resources using a filter
ORA-01017: invalid username/password; logon denied
CRS-2675: Stop of 'ora.se2db.db' on 'rhel7' failed
The reason for this is the RAC admin OS group that was created during the database software installation. The Oracle documentation only mentions making grid part of the OSDBA group. Quoting the Oracle install guide: "When you manually add a database to the Oracle Restart configuration, you must also add the Oracle grid infrastructure software owner as a member of the OSDBA group of that database. This is because the grid infrastructure components must be able to connect to the database as SYSDBA to start and stop the database. For example, if the host user who installed the grid infrastructure home is named grid and the OSDBA group of the new database is named dba, then user grid must be a member of the dba group". However, if a separate OS group is used for RAC administration, then the grid user must be a member of that group as well to prevent errors such as the above.
[root@rhel7 ~]# id grid
uid=1002(grid) gid=1001(oinstall) groups=1001(oinstall),1002(dba),1005(asmoper),1006(asmdba),1007(asmadmin)
[root@rhel7 ~]# usermod -g oinstall -G dba,asmoper,asmdba,asmadmin,racdba grid
[root@rhel7 ~]# id grid
uid=1002(grid) gid=1001(oinstall) groups=1001(oinstall),1002(dba),1005(asmoper),1006(asmdba),1007(asmadmin),1011(racdba)
After this change, srvctl could be used to manage the DB without any issue.
If satisfied with the upgrade and application testing, change the compatible parameter on the DB and the ASM disk groups.
SQL> alter system set compatible='12.2.0.1.0' scope=spfile;
shutdown immediate;
Log in as the grid user and connect to the ASM instance as SYSASM.
SQL>  alter diskgroup FRA SET attribute 'compatible.asm'='12.2.0.1.0';
SQL>  alter diskgroup DATA  SET attribute 'compatible.asm'='12.2.0.1.0';
SQL> alter diskgroup fra set attribute 'compatible.rdbms'='12.2.0.1.0';
SQL>  alter diskgroup data set attribute 'compatible.rdbms'='12.2.0.1.0';

SQL>  select g.name,a.name,a.value from v$asm_diskgroup g, v$asm_attribute a where g.group_number=a.group_number and a.name like '%compat%';

NAME                                     NAME                                     VALUE
---------------------------------------- ---------------------------------------- ----------------------------------------
DATA                                     compatible.asm                           12.2.0.1.0
DATA                                     compatible.rdbms                         12.2.0.1.0
FRA                                      compatible.asm                           12.2.0.1.0
FRA                                      compatible.rdbms                         12.2.0.1.0
Start the database and run the orachk post-upgrade check and the cluvfy post check.
./orachk -u -o post

 cluvfy stage -post hacfg

Verifying Oracle Restart Integrity ...PASSED
Verifying OLR Integrity ...PASSED

Post-check for Oracle Restart configuration was successful.

CVU operation performed:      stage -post hacfg
Date:                         04-Apr-2017 13:05:07
CVU home:                     /opt/app/oracle/product/12.2.0/grid/
User:                         grid
Useful Metalink Notes
ACFS Drivers Install reports CLSRSC-400: A system reboot is required to continue installing [ID 2025056.1]
Patches to apply before upgrading Oracle GI and DB to 12.2.0.1 [ID 2180188.1]
12.2 Grid Infrastructure Installation: What's New [ID 2024946.1]
How to Upgrade to/Downgrade from Grid Infrastructure 12.2 and Known Issues [ID 2240959.1]

Related Posts
Upgrading Single Instance on ASM from 11.2.0.3 to 11.2.0.4
Upgrading Grid Infrastructure Used for Single Instance from 11.2.0.4 to 12.1.0.2

Wednesday, March 15, 2017

First Look at 12.2 (12.2.0.1) Installation, CDB & PDB Creation

Oracle Database 12.2 (12.2.0.1) for on-premises is now available. This is a brief look at the changes in 12.2 compared to the previous version of 12c. The full list of changes (new, deprecated and desupported) is available here.
One of the first things to note is that RHEL5/OEL5 kernels are no longer supported (see supported kernels). 12.1 had support for OEL5 and RHEL5. Therefore, any upgrade plan must consider OS support; if the current DB is running on an unsupported kernel, then the OS must be upgraded before the DB upgrade.
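A quick way to capture the current kernel and distribution before planning the upgrade (a generic sketch, not specific to this post's setup):

```shell
# Record the running kernel and distribution release; 12.2 does not
# support RHEL5/OEL5 kernels, so this is the first thing to verify.
kernel=$(uname -r)
release=$(cat /etc/redhat-release 2>/dev/null || echo "not a RHEL-family system")
echo "kernel:  $kernel"
echo "release: $release"
```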
There's not much difference in the (software-only) installation process. However, at the end there's an option to install the Trace File Analyzer. Previously this prompt appeared only for grid installations; with 12.2 it is available for single instance installations as well.
Do you want to setup Oracle Trace File Analyzer (TFA) now ? yes|[no] :
yes
Installing Oracle Trace File Analyzer (TFA).
Log File: /opt/app/oracle/product/12.2.0/dbhome_1/install/root_hpc6.domain.net_2017-03-15_15-46-55-462047607.log
Finished installing Oracle Trace File Analyzer (TFA)
The following images show the changes in the DBCA.
12.2 has a new feature which allows local undo tablespaces for PDBs. Quoting the admin guide: "CDB undo mode - A CDB can run in local undo mode or shared undo mode. Local undo mode means that every container in the CDB uses local undo. Shared undo mode means that there is one active undo tablespace for a single-instance CDB. For an Oracle RAC CDB, there is one active undo tablespace for each instance for each PDB in local undo mode." DBCA gives an option to select this mode at creation time. It is also possible to change the undo mode to local after the creation of the CDB.
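Whether an existing CDB runs in local undo mode can be checked from database_properties. A sketch (requires a running 12.2 instance, so it falls back to a message when sqlplus is not available):

```shell
# Query the CDB undo mode; LOCAL_UNDO_ENABLED is TRUE when every
# container uses its own undo tablespace.
if command -v sqlplus >/dev/null 2>&1; then
  undo_mode=$(sqlplus -s / as sysdba <<'EOF'
set heading off feedback off
select property_value from database_properties
where  property_name = 'LOCAL_UNDO_ENABLED';
EOF
)
else
  undo_mode="unknown (sqlplus not found; run on the database server)"
fi
echo "CDB local undo: $undo_mode"
```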
It appears that non-CDBs could be created with 12.2 as well. However according to upgrade guide "The non-CDB architecture was deprecated in Oracle Database 12c. It can be desupported and unavailable in a release after Oracle Database 12c Release 2. Oracle recommends use of the CDB architecture."
It is now possible to remove DB components that are not required, keeping in line with license policies and avoiding accidental use of features not licensed for. This was not possible with 12.1, especially when CDBs were created, and manual removal of these options was the only way available (refer 2001512.1).
Memory management option defaults to automatic shared memory management.
The default process count is set at 640, increased from 300.
The default character set option is Unicode AL32UTF8. Previously this defaulted to the OS-specific language setting (WE8MSWIN1252).
If a DB template is created, with 12.2 it is possible to specify a custom location for the template file.
The all initialization parameters dialog box allows addition of extra parameters to the SPFILE.
Summary page (compare with previous version)
The progress page no longer has the activity log and alert log buttons to view the log files.
At the end of the CDB creation, the following components are installed.
SQL> select comp_name,status from dba_registry;

COMP_NAME                                STATUS
---------------------------------------- --------------------------------------------
Oracle Database Catalog Views            VALID
Oracle Database Packages and Types       VALID
Oracle Real Application Clusters         OPTION OFF
Oracle XML Database                      VALID
Oracle Workspace Manager                 VALID
Oracle Text                              VALID

6 rows selected.


Next is the creation of a PDB. DBCA now allows OS authentication when connecting to the CDB to create PDBs. If OS authentication is disabled, then password-based authentication is needed.
The option of creating a clone has been added in addition to the previously available options.
There is an option to lock existing users and specify a new admin user.
The PDB's data file location is determined by the CDB's file location. In this case the CDB uses OMF, so the PDB data files are placed in a path relative to the CDB and cannot be changed at creation time.
Summary
Once the PDB is created it will be available in read write mode.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB122                         READ WRITE NO
Datafile location
SQL> select con_id,file_name from cdb_data_files order by 1;

    CON_ID FILE_NAME
---------- --------------------------------------------------------------------------------------------------------------
         1 /opt/app/oracle/oradata/CDB122/datafile/o1_mf_system_ddlt81f7_.dbf
         1 /opt/app/oracle/oradata/CDB122/datafile/o1_mf_users_ddltbf4d_.dbf
         1 /opt/app/oracle/oradata/CDB122/datafile/o1_mf_undotbs1_ddlt8wow_.dbf
         1 /opt/app/oracle/oradata/CDB122/datafile/o1_mf_sysaux_ddlt8k2f_.dbf
         4 /opt/app/oracle/oradata/CDB122/4AC96F423E65320FE0536800A8C0EC46/datafile/o1_mf_system_ddlzp0jv_.dbf
         4 /opt/app/oracle/oradata/CDB122/4AC96F423E65320FE0536800A8C0EC46/datafile/o1_mf_users_ddlzp0k3_.dbf
         4 /opt/app/oracle/oradata/CDB122/4AC96F423E65320FE0536800A8C0EC46/datafile/o1_mf_undotbs1_ddlzp0k2_.dbf
         4 /opt/app/oracle/oradata/CDB122/4AC96F423E65320FE0536800A8C0EC46/datafile/o1_mf_sysaux_ddlzp0k0_.dbf

8 rows selected.
This has been a brief look at installing 12.2 and creating a CDB and PDB.

Related Posts
Installing 12c (12.1.0.1) RAC on RHEL 6 with Role Seperation - Creating CDB & PDB

Tuesday, January 17, 2017

ASMCMD cp Fails on CIFS

Copying a file using asmcmd cp to a CIFS location fails as follows:
ASMCMD> cp annnf0_ARCHIVE_LOG_0.382.923545643 /mnt/dbbackup
copying +flash/livedb/backupset/2016_09_26/annnf0_ARCHIVE_LOG_0.382.923545643 -> /mnt/dbbackup/annnf0_ARCHIVE_LOG_0.382.923545643
ASMCMD-08016: copy source->'+flash/livedb/backupset/2016_09_26/annnf0_ARCHIVE_LOG_0.382.923545643' and target->'/mnt/dbbackup/annnf0_ARCHIVE_LOG_0.382.923545643' failed
ORA-19510: failed to set size of 173851 blocks for file "/mnt/dbbackup/annnf0_ARCHIVE_LOG_0.382.923545643" (block size=512)
ORA-27045: unable to close the file
Linux-x86_64 Error: 5: Input/output error
Additional information: 4
ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 258
ORA-06512: at line 3 (DBD ERROR: OCIStmtExecute)
However, if the destination is examined, the file is there with the same size as the source. But the integrity of the file cannot be guaranteed, as the cp command has failed. The error message says "unable to close the file", so it could be that the file was copied but the file descriptor could not be closed. Copying to other file systems (ext3) worked fine.
ASMCMD> cp annnf0_ARCHIVE_LOG_0.382.923545643 /home/oracle
copying +flash/livedb/backupset/2016_09_26/annnf0_ARCHIVE_LOG_0.382.923545643 -> /home/oracle/annnf0_ARCHIVE_LOG_0.382.923545643
So the issue was assumed to be not with asmcmd but with the file system.
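Since the copy to ext3 succeeded, the suspect CIFS copy can at least be compared byte-for-byte against that known-good local copy. A sketch using the file names from above:

```shell
# Compare the suspect CIFS copy with the known-good local copy made
# by asmcmd cp to ext3 (paths are the ones used in this post).
good=/home/oracle/annnf0_ARCHIVE_LOG_0.382.923545643
suspect=/mnt/dbbackup/annnf0_ARCHIVE_LOG_0.382.923545643
if cmp -s "$good" "$suspect"; then
  status="CIFS copy matches the local copy"
else
  status="CIFS copy differs or is missing"
fi
echo "$status"
```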
Trying to create a backup at the CIFS location also failed.
RMAN> backup current controlfile format '/mnt/dbbackup/controlc.tl';

Starting backup at 28-Sep-2016 09:14:12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1029 instance=livedb2 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 28-Sep-2016 09:14:15
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 09/28/2016 09:14:16
ORA-19504: failed to create file "/mnt/dbbackup/controlc.tl"
ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 22: Invalid argument


Searching MOS, several notes were found which suggest limitations in using certain Oracle utilities with CIFS. For backups, it was suggested to change filesystemio_options to asynch or, failing that, to mount the CIFS location with file system type cifs and the oracle user's user id and group id.
mount -t cifs -o username=oracle,uid=oracle,gid=oinstall //<cifs location> <mount point>
The second solution had already been implemented, and asmcmd cp and RMAN backup still failed. However, after changing filesystemio_options to asynch, RMAN backups succeeded but asmcmd cp continued to fail. It seems asmcmd is another utility that cannot be used with CIFS (disclaimer: no official Oracle document could be found to verify this though).
Besides asmcmd, Data Pump is also not supported on CIFS (793805.1). CIFS could be used for RMAN backups, but it is not a certified file system, which means that if there's any issue when using RMAN with CIFS, there won't be any Oracle support to resolve it (444809.1).
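When troubleshooting cases like this, it helps to first capture how the backup location is actually mounted, since the mount type and uid/gid options are what the MOS notes above ask about. A generic sketch (the mount point is the one from this post):

```shell
# Show how the CIFS backup location is mounted, if at all; the
# options shown (type, uid, gid) are what the MOS notes point at.
mnt_info=$(mount 2>/dev/null | grep /mnt/dbbackup || echo "/mnt/dbbackup not mounted")
echo "$mnt_info"
```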

Useful Metalink Notes
Data Pump Fails With Errors ORA-39097 ORA-39065 ORA-31643 ORA-19510 Using CIFS [793805.1]
Is use of CIFS Protocol for RMAN backups supported? [444809.1]
'cp' Fails with 'cp: cannot create symbolic link' When Copying Files to CIFS Filesystem [728223.1]
ORA-19504 ORA-27040 During Rman Backup To A Mounted Windows Directory In Linux [464267.1]
Which are the mount settings for SAMBA? [557057.1]
Product Support Document for Common Internet File System (CIFS)[1398067.1]

How to mount Windows share on Red Hat Enterprise Linux system using CIFS?