Friday, February 9, 2018

JDBC Auto Commit and Log File Sync

By default, when a connection is checked out of a JDBC connection pool (either an Oracle Data Source or UCP), it has auto commit set to true. This means that for DML, a commit happens implicitly after each statement execution. The effect of this is a high number of log file sync (foreground) and log file parallel write (background) waits on the database.
The Java test case at the end of the post counts the log file sync waits and user commits for inserting 10,000 rows into a single-column table (create table x (a number)). The table below shows a summary of running the test with auto commit on and off. The database used was a standard edition.
Measurement             Auto Commit On    Auto Commit Off
Log File Sync Waits     10000             1
User commits            10000             1
Elapsed Time (sec)      50                23
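The arithmetic behind the table can be sketched directly (a minimal illustration, not part of the original test case): with auto commit on, every statement execution triggers a commit and hence a log file sync wait, while with it off only the single explicit commit at the end does.

```java
public class CommitCount {

    // Expected number of commits (and hence log file sync waits) for n
    // single-row DML statements, depending on the auto commit setting
    static int commitsFor(int statements, boolean autoCommit) {
        return autoCommit ? statements : 1;
    }

    public static void main(String[] args) {
        System.out.println(commitsFor(10000, true));  // matches the 10000 in the table
        System.out.println(commitsFor(10000, false)); // matches the 1 in the table
    }
}
```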
Ideally, JDBC batching should be used when multiple rows are inserted like this (which requires setting auto commit to false anyway). However, if this is not possible, then changing the auto commit property on the connection is the next best thing. The output below is from APConsole, which gives a comparison of the wait times associated with having auto commit on and off. The log file sync wait is implicitly shown on the wait class graph as it's the only wait event in the commit wait class.
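Where batching is an option, the same 10,000-row insert can be sent with addBatch/executeBatch and a single commit at the end. This is a minimal sketch, assuming the same table x and an open connection from the pool; the batch size of 500 is an arbitrary choice for illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsert {

    static final int BATCH_SIZE = 500; // arbitrary batch size for this sketch

    // Number of database round trips needed to send rowCount rows in batches
    static int roundTripsFor(int rowCount) {
        return (rowCount + BATCH_SIZE - 1) / BATCH_SIZE;
    }

    // Insert rowCount rows into table x using JDBC batching and a single commit
    static void insertBatched(Connection con, int rowCount) throws SQLException {
        con.setAutoCommit(false); // batching requires manual commit anyway
        try (PreparedStatement pr = con.prepareStatement("insert into x values (?)")) {
            for (int i = 0; i < rowCount; i++) {
                pr.setInt(1, i);
                pr.addBatch();
                if ((i + 1) % BATCH_SIZE == 0) {
                    pr.executeBatch(); // one round trip per BATCH_SIZE rows
                }
            }
            pr.executeBatch(); // flush any remaining rows
        }
        con.commit(); // single commit, so a single log file sync wait
    }
}
```

With this approach the 10,000 rows go to the database in 20 round trips and generate one commit, instead of 10,000 statement executions each followed by an implicit commit.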

The Java code below could be used for testing other DML statements as well.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class CMTest {

    private final HashMap<String, Integer> valueMap = new HashMap<>();
    private final PoolDataSource ds;

    public CMTest() throws SQLException {

        ds = PoolDataSourceFactory.getPoolDataSource();
        ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        // placeholder connection details, change to match the environment
        ds.setURL("jdbc:oracle:thin:@//dbhost:1521/service");
        ds.setUser("dbuser");
        ds.setPassword("dbpassword");
    }

    public Connection getConnection() throws SQLException {

        return ds.getConnection();
    }

    public static void main(String[] args) throws SQLException {

        CMTest test = new CMTest();
        Connection con = test.getConnection();
        con.setAutoCommit(false); // comment out to run with auto commit true

        test.printStat(con, 1); // capture starting statistic values
        test.runTest(con);
        test.printStat(con, 2); // print the difference after the test
        con.close();
    }

    public void printStat(Connection con, int i) throws SQLException {

        String SQL = "select st.name,s.value from v$statname st, v$sesstat s where st.STATISTIC#=s.STATISTIC# and s.sid=SYS_CONTEXT ('USERENV', 'SID') and st.name='user commits' "
                + "union "
                + "select e.event,e.TOTAL_WAITS from v$session_event e where e.event='log file sync' and e.sid=SYS_CONTEXT ('USERENV', 'SID')";
//                + "union "
//                + "select e.event,e.TIME_WAITED_MICRO from v$session_event e where e.event='log file sync' and e.sid=SYS_CONTEXT ('USERENV', 'SID')";

        PreparedStatement pr = con.prepareStatement(SQL);
        ResultSet rs = pr.executeQuery();
        while (rs.next()) {
            if (i == 1) {
                // first call, remember the starting values
                valueMap.put(rs.getString(1), rs.getInt(2));
            } else {
                // second call, print the delta since the first call
                Integer x = valueMap.get(rs.getString(1));
                x = (x == null) ? 0 : x;

                System.out.println(rs.getString(1) + " " + (rs.getInt(2) - x));
            }
        }
        rs.close();
        pr.close();
    }

    public void runTest(Connection con) throws SQLException {

        String sql = "insert into x values (?)";
//        String sql = "delete from x where a = ?";

        PreparedStatement pr = con.prepareStatement(sql);
        for (int i = 0; i < 10000; i++) {

            pr.setInt(1, i);
            pr.executeUpdate();
        }
        pr.close();
        if (!con.getAutoCommit()) {
            con.commit(); // single explicit commit when auto commit is off
        }
    }
}

Monday, January 22, 2018

RAID 10 vs RAID 5 Some Empirical Evidence

There are many trade-offs between RAID 10 (or 1+0) and RAID 5 in terms of cost and performance. RAID 10 is considered better than RAID 5 for writes, as RAID 5 has an overhead associated with writes. In terms of cost, RAID 10 requires twice the disk storage of RAID 5 to get the same capacity, thus doubling the cost. RAID 10 is the preferred RAID level for Oracle, but in the real world this isn't always followed due to financial constraints. So it's not uncommon to find Oracle databases deployed on RAID 5. The metalink document 30286.1 lists the preferred RAID level for each file type (there's no mention of RAID 10 (1+0); the document needs updating).
The post shows, by way of empirical evidence, the change in physical writes and reads after a database was moved from RAID 5 to RAID 10. The DB was a two node RAC using ASM for storage. It is a production database running property software that could be considered to have OLTP characteristics. With RAID 5 it had two ASM disk groups, with the online redo logs and control files multiplexed between the two disk groups. Data files resided in one disk group and archive logs in the other. At the physical level, RAID 5 consisted of 4 x 400GB disks with one hot spare.
The new RAID 10 configuration consisted of 7 x 600GB disks with one hot spare. During the move, all redo logs, control files and data files were moved to an ASM disk group created using disks from the RAID 10 LUN. Archive logs were still going to RAID 5.
Below is a comparison of physical reads, writes and log file sync waits for a one month period before and after the change. The two periods had no statistically significant difference in the mean values for physical reads, physical writes and log file sync waits at the 95% confidence level. Thus it is considered that the two periods had the same amount of "work". Physical read/write values were obtained from STATS$FILESTATXS (the DB is Standard Edition, thus no AWR) for the data files in the main tablespace of the application. Statspack had hourly snapshots.
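The kind of significance check used throughout this comparison can be sketched with Welch's two-sample t statistic (a hand-rolled illustration with made-up numbers, not the actual test tool or data from the post): if |t| stays below the critical value, roughly 1.96 at the 95% level for large samples, the difference in means is not statistically significant.

```java
public class TTest {

    static double mean(double[] x) {
        double s = 0;
        for (double v : x) s += v;
        return s / x.length;
    }

    static double var(double[] x) {
        double m = mean(x), s = 0;
        for (double v : x) s += (v - m) * (v - m);
        return s / (x.length - 1); // sample variance
    }

    // Welch's t statistic for two independent samples
    static double welchT(double[] a, double[] b) {
        return (mean(a) - mean(b)) / Math.sqrt(var(a) / a.length + var(b) / b.length);
    }

    public static void main(String[] args) {
        // made-up hourly snapshot values for two periods
        double[] before = {120, 130, 125, 128, 122, 131, 127, 124};
        double[] after  = {119, 129, 126, 127, 123, 130, 125, 126};
        double t = welchT(before, after);
        System.out.println(Math.abs(t) < 1.96
                ? "no significant difference" : "significant difference");
    }
}
```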

Physical Reads
Considering each period had the same amount of physical reads, it appears that RAID 10 took less read time compared to RAID 5. This was confirmed by a statistical significance test.

Physical Writes
Unlike the physical reads earlier, there was no statistically significant reduction in the physical write times. This doesn't necessarily imply that RAID 10 and RAID 5 perform the same for writes. One would expect RAID 10 to outperform RAID 5, especially when it comes to writes, but there was no empirical evidence for it, at least for the sample periods looked at (as this seems to contradict what's said about RAID 10 writes vs RAID 5 writes, several sample periods were compared, and all indicated no statistically significant reduction in write times). All that can be said is that, for this application at least, moving from RAID 5 to RAID 10 didn't result in a significant reduction in physical write time (RAID 10 write times were neither higher nor lower compared to RAID 5 write times). However, as seen in the subsequent section, other types of writes (redo log writes) have benefited significantly from going to RAID 10.

Log File Sync Waits
Log file sync wait times represent the time spent flushing the log buffer to disk, i.e. to the redo log files (34592.1). This is the other significant write in the system and happens in real time. Though physical writes didn't show a reduction, there was a significant reduction in the log file sync times, which means redo log file writes have benefited from the RAID change.

In summary, the empirical evidence is that RAID 10 outperforms RAID 5 for OLTP-like applications. However, there could be cases where there's no visible gain, as in this case with physical writes.

Wednesday, November 15, 2017

Unable to obtain current patch information due to error: 20001, ORA-20001: Latest xml inventory is not loaded into table

After upgrading a single instance Oracle Restart setup to 12.2, the following was observed in the alert log
Unable to obtain current patch information due to error: 20001, ORA-20001: Latest xml inventory is not loaded into table
ORA-06512: at "SYS.DBMS_QOPATCH", line 777
ORA-06512: at "SYS.DBMS_QOPATCH", line 864
ORA-06512: at "SYS.DBMS_QOPATCH", line 2222
ORA-06512: at "SYS.DBMS_QOPATCH", line 740
ORA-06512: at "SYS.DBMS_QOPATCH", line 2247
Checking in the $ORACLE_HOME/QOpatch/qopatch_log.log showed
KUP-05007:   Warning: Intra source concurrency disabled because the preprocessor option is being used.

Field Definitions for table OPATCH_XML_INV
  Data in file has same endianness as the platform
  Reject rows with all null fields

  Fields in Data Source:

    XML_INVENTORY                   CHAR (100000000)
      Terminated by "UIJSVTBOEIZBEFFQBL"
      Trim whitespace same as SQL Loader
KUP-04095: preprocessor command /opt/app/oracle/product/12.2.0/dbhome_1/QOpatch/qopiprep.bat encountered error "/opt/app/oracle/product/12.2.0/dbhome_1/QOpatch/qopiprep.bat: 
line 55: /opt/app/oracle/product/12.2.0/dbhome_1/rdbms/log/stout_std11g2.txt: Permission denied
As per 1602089.1, this is due to running the command as a different user who doesn't have permission on the $ORACLE_HOME/QOpatch folder. But in this case that was not true; the DB start was done by the oracle user, who has permissions on the QOpatch folder. Even then, the same error was shown in the alert log.

This is a role separated single instance setup and GI is owned by the grid user. As mentioned in the password store post, a single instance setup with role separation has the problem of certain commands being run as the grid user instead of the oracle user. In this case the grid user doesn't have write permission on the $ORACLE_HOME/rdbms/log folder, which has 755 as its permission. That's the root cause of the issue. Change this log directory's permission to 775 so that the grid user is able to write to it. After this, during startup the full patch list is shown in the alert log.
Dumping current patch information
Patch Id: 26710464
Patch Description: Database Release Update : (26710464)
Patch Apply Time: 2017-10-20T13:05:21+01:00
Bugs Fixed: 14690846,17027695,17533661,19285025,19327292,19614243,20003668,
The issue was only observed in the single instance role separated setup. In a role separated RAC configuration there weren't any permission issues.

Useful metalink notes
Queryable Patch Inventory - Issues/Solutions for ORA-20001: Latest xml inventory is not loaded into table [ID 1602089.1]