
TimesTen JDBC Connection Pool Using UCP (Universal Connection Pool)

A JDBC connection to the TimesTen in-memory database could be obtained either through the TimesTenDataSource or by loading one of the TimesTen drivers (TimesTenDriver or TimesTenClientDriver) using the Class.forName method.
But as stated in the Java Developer's Guide for TimesTen IMDB, the TimesTen driver does not implement any connection pooling. Looking at the sample code provided with the installation (the TimesTenConnectionPool and TimesTenPooledConnection classes), it's clear that at the heart of it is the TimesTenDataSource, which is a non-pooling data source. However, instead of writing one's own code for TimesTen connection pooling, the Universal Connection Pool (UCP) could be used instead.
However, no Oracle documentation could be found on UCP for TimesTen connection pooling. Therefore, if there are any caveats to using UCP with TimesTen they are not known yet, and this post will be updated when such information comes to light.
Example Java code for creating a TimesTen connection pool using UCP is as follows.
String urlc = "jdbc:timesten:client:ttc_server=192.168.0.99;tcp_port=53397;ttc_server_dsn=bximcdb_1122;oraclePassword=asa";
// String urld = "jdbc:timesten:direct:bximcdb_1122";

PoolDataSource poolds = PoolDataSourceFactory.getPoolDataSource();
poolds.setConnectionFactoryClassName("com.timesten.jdbc.TimesTenDataSource"); // connection type determined by URL
// poolds.setConnectionFactoryClassName("com.timesten.jdbc.TimesTenDriver"); // Direct connections
// poolds.setConnectionFactoryClassName("com.timesten.jdbc.TimesTenClientDriver"); // client/server connections
poolds.setConnectionPoolName("TTPool");

poolds.setURL(urlc); // or urld
poolds.setUser("asanga"); // TT IMDB schema username
poolds.setPassword("asa2"); // TT IMDB schema password

poolds.setInitialPoolSize(5);
poolds.setMinPoolSize(5);
poolds.setMaxPoolSize(25);

Connection con = poolds.getConnection();

// application work using the connection
urlc is an example of a URL used for a client/server type of connection. This is typically the case when the TimesTen database and the JDBC client are in two separate locations and connect over the network. ttc_server is the IP or hostname of the server where the TimesTen (TT) database is running. tcp_port is the TT server's listening port.
$ ttStatus

Daemon pid 25149 port 53396 instance tt1122
TimesTen server pid 25158 started on port 53397
ttc_server_dsn is the data source name set up for the TT DB. Finally, oraclePassword is the password of the corresponding Oracle database schema (not the TT DB password). Unlike TimesTenDataSource, UCP doesn't have a "setOraclePassword" method, therefore the Oracle password must be included in the URL (refer to 1404604.1).

urld is an example of a URL used for a direct connection to the TT DB. For this, both the JDBC client and the TT DB must be on the same host; no network connection is involved.

UCP requires a connection factory class name and, as shown in the example code above, any one of TimesTenDataSource, TimesTenDriver or TimesTenClientDriver could be used. If TimesTenDataSource is used then the connection type is determined by the value in the URL (jdbc:timesten:client or jdbc:timesten:direct), whereas the other two classes explicitly specify the connection type.

The username and password are those of the schema in the TT DB. Finally, the connection pool's initial, minimum and maximum sizes are set.
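
Once the pool is configured, connections are borrowed and returned in the usual JDBC way; calling close() on a connection obtained from UCP returns it to the pool rather than closing the physical connection. A minimal usage sketch follows (the table and query here are hypothetical, for illustration only):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// borrow a connection from the pool created above
try (Connection con = poolds.getConnection();
     PreparedStatement ps = con.prepareStatement("select col2 from some_table where col1 = ?")) {
    ps.setInt(1, 1);
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
    }
} // closing the connection here returns it to the pool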



Similar to the JDBC thick client, TT JDBC clients also require native libraries to be available, as well as a data source name (DSN) setup. These steps are not listed here, and it is assumed all these prerequisites have been completed successfully.
Running some test Java code shows the connections being pooled. Below is the output for a client/server connection using TimesTenDataSource.
$ ttStatus

Daemon pid 25149 port 53396 instance tt1122
TimesTen server pid 25158 started on port 53397
------------------------------------------------------------------------
Data store /opt/timesten/DataStore/bximcdb_1122
There are 28 connections to the data store
Shared Memory KEY 0x670615d8 ID 16482308
PL/SQL Memory KEY 0x680615d8 ID 16515077 Address 0x7fa0000000
Type PID Context Connection Name ConnID
Cache Agent 25324 0x0000000002d70be0 Handler 2
Cache Agent 25324 0x0000000002ed7720 Timer 3
Cache Agent 25324 0x000000000302cc30 CacheGridRec 7
Cache Agent 25324 0x000000000309ae80 CacheGridEnv 5
Cache Agent 25324 0x0000000003235290 CacheGridSend 6
Cache Agent 25324 0x00000000032837f0 BMReporter(1107736896) 4
Cache Agent 25324 0x0000000003329920 CacheGridRec 8
Cache Agent 25324 0x000000000344edb0 CacheGridRec 9
Cache Agent 25324 0x00000000034d7390 Refresher(S,5000) 10
Cache Agent 25324 0x000000000393bc90 LogSpaceMon(1095366976) 11
Cache Agent 25324 0x0000000003b48e00 Marker(1097472320) 12
Cache Agent 25324 0x0000000011d7e150 Refresher(D,5000) 13
Server 25763 0x00000000032b0a90 java 1
(Client Information: pid: 25743; IPC: TCP/IP;
Node: hpc5.domain.net (192.168.0.99))
Server 25768 0x000000001beeaa90 java 14
(Client Information: pid: 25743; IPC: TCP/IP;
Node: hpc5.domain.net (192.168.0.99))
Server 25773 0x000000000daf7a90 java 15
(Client Information: pid: 25743; IPC: TCP/IP;
Node: hpc5.domain.net (192.168.0.99))
Server 25778 0x0000000017887a90 java 16
(Client Information: pid: 25743; IPC: TCP/IP;
Node: hpc5.domain.net (192.168.0.99))
Server 25783 0x000000001fb90a90 java 17
(Client Information: pid: 25743; IPC: TCP/IP;
Node: hpc5.domain.net (192.168.0.99))

Subdaemon 25153 0x0000000001094570 Manager 142
Subdaemon 25153 0x00000000010eb3f0 Rollback 141
Subdaemon 25153 0x0000000002488580 Aging 136
Subdaemon 25153 0x000000000251d1f0 Checkpoint 133
Subdaemon 25153 0x000000000270ed90 AsyncMV 139
Subdaemon 25153 0x00000000027841b0 Log Marker 138
Subdaemon 25153 0x0000000002798da0 Deadlock Detector 137
Subdaemon 25153 0x000000000284f0e0 Flusher 140
Subdaemon 25153 0x00002aaac00008c0 Monitor 135
Subdaemon 25153 0x00002aaac0055710 HistGC 134
Subdaemon 25153 0x00002aaac00aa560 IndexGC 132
Replication policy : Manual
Cache Agent policy : Manual
TimesTen's Cache agent is running for this data store
PL/SQL enabled.
------------------------------------------------------------------------
Accessible by group oinstall
End of report
The pooled connections are listed as "Server" type, and there are five of them, which is the initial pool size.
Following is the TT status output when direct connections are used.
$ ttStatus

Daemon pid 25149 port 53396 instance tt1122
TimesTen server pid 25158 started on port 53397
------------------------------------------------------------------------
Data store /opt/timesten/DataStore/bximcdb_1122
There are 28 connections to the data store
Shared Memory KEY 0x670615d8 ID 16482308
PL/SQL Memory KEY 0x680615d8 ID 16515077 Address 0x7fa0000000
Type PID Context Connection Name ConnID
Cache Agent 25324 0x0000000002d70be0 Handler 2
Cache Agent 25324 0x0000000002ed7720 Timer 3
Cache Agent 25324 0x000000000302cc30 CacheGridRec 7
Cache Agent 25324 0x000000000309ae80 CacheGridEnv 5
Cache Agent 25324 0x0000000003235290 CacheGridSend 6
Cache Agent 25324 0x00000000032837f0 BMReporter(1107736896) 4
Cache Agent 25324 0x0000000003329920 CacheGridRec 8
Cache Agent 25324 0x000000000344edb0 CacheGridRec 9
Cache Agent 25324 0x00000000034d7390 Refresher(S,5000) 10
Cache Agent 25324 0x000000000393bc90 LogSpaceMon(1095366976) 11
Cache Agent 25324 0x0000000003b48e00 Marker(1097472320) 12
Cache Agent 25324 0x0000000011d7e150 Refresher(D,5000) 13
Process 25691 0x000000004a53b4f0 java 1
Process 25691 0x000000004a614820 java 14
Process 25691 0x000000004a695540 java 15
Process 25691 0x000000004a717270 java 16
Process 25691 0x000000004a798fa0 java 17

Subdaemon 25153 0x0000000001094570 Manager 142
Subdaemon 25153 0x00000000010eb3f0 Rollback 141
Subdaemon 25153 0x0000000002488580 Aging 136
Subdaemon 25153 0x000000000251d1f0 Checkpoint 133
Subdaemon 25153 0x000000000270ed90 AsyncMV 139
Subdaemon 25153 0x00000000027841b0 Log Marker 138
Subdaemon 25153 0x0000000002798da0 Deadlock Detector 137
Subdaemon 25153 0x000000000284f0e0 Flusher 140
Subdaemon 25153 0x00002aaac00008c0 Monitor 135
Subdaemon 25153 0x00002aaac0055710 HistGC 134
Subdaemon 25153 0x00002aaac00aa560 IndexGC 132
Replication policy : Manual
Cache Agent policy : Manual
TimesTen's Cache agent is running for this data store
PL/SQL enabled.
------------------------------------------------------------------------
Accessible by group oinstall
End of report
In this case the connections are of "Process" type, indicating direct connections, and again there are five connections created against the TT DB.
The code was tested against a read-only cache group without any issues.

Useful metalink notes
How To Pass Oraclepwd Via Xml To Tomcat Server using JDBC interface? [ID 1404604.1]

Session Cached Cursors and JDBC PreparedStatement

Given the two pieces of pseudocode below, which would be better in terms of performance?


Case 1:
Create PreparedStatement
For Loop
    bind values
    execute
End Loop
Close PreparedStatement

Case 2:
For Loop
    Create PreparedStatement
    bind values
    execute
    Close PreparedStatement
End Loop

From a developer's perspective it could be argued that the first code creates only one PreparedStatement, uses it multiple times and closes it once, whereas the second creates and closes a PreparedStatement with each iteration, incurring additional overhead; as such the first code is better. What about the Oracle perspective? Would there be multiple parses with the second code? Will it incur additional database resources? The tkprof output for these codes shows the following (the Java code used is given at the end of the post).
Case 1:
call     count
------- ------
Parse        1
Execute   1000
Fetch     1000
------- ------
total     2001

Case 2:
call     count
------- ------
Parse     1000
Execute   1000
Fetch     1000
------- ------
total     3000

The tkprof output shows the second code resulted in 1000 parses while the first had only one parse, so again the first code must be better(?). The answer is yes, the first set of code is better, but not because it does fewer parses than the second. If the session cached cursors parameter is set to a value other than 0 (default 50) and the cursor (or the SQL in the PreparedStatement) is cached, then the number of actual parses is exactly the same for both sets of code. Any performance loss in the second set of code is due to session cache lookup, not parsing.
Cursor access could be ordered in the following way, from fastest to slowest:
1. Finding an open cursor in the session cache. Does not require parsing, not even soft.
2. Finding a closed cursor in the session cache. Does not require parsing, not even soft.
3. Finding the cursor in the shared pool. Results in a soft parse.
4. Constructing the cursor from scratch. Results in a hard parse.
For more detailed information refer to the Oracle performance tuning student guide, Chapter 9, section on cursor usage and parsing.
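Before running the tests it's worth confirming which of these scenarios is in effect; the current value of the session_cached_cursors parameter can be checked from the same JDBC connection. A minimal sketch, assuming a connection con (as in the test code below) whose user has select privilege on v$parameter:

// check the session_cached_cursors setting for the instance
PreparedStatement p = con.prepareStatement(
        "select value from v$parameter where name = 'session_cached_cursors'");
ResultSet r = p.executeQuery();
if (r.next()) {
    System.out.println("session_cached_cursors = " + r.getString(1));
}
r.close();
p.close();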
The Java code used for testing is given at the end of the post. In the Java code, SQL1 denotes the "business" SQL. SQL2 is used to get the session statistics (total and hard parses, number of session cached cursors and session cache hits). SQL3 lists all the open cursors in the session. The Java code checks out a single connection from the UCP and executes the entire test case before closing it. Therefore statistics could be calculated by comparing values at the time of connection checkout and at the time of connection close.

The first set of test cases is run with the session cached cursors parameter set to 50 (the default). Run the Java code several times until the hard parse count is 0 when the connection is taken from the connection pool; the test begins from this point onwards. Executing test case 1 (comment out the code related to case 2 in the Java code) will give an output similar to the one below.
start got connection
1 select /*+ connect_by_filtering */ privilege#,level from sys DICTIONARY LOOKUP CURSOR CACHED SYS 34
2 insert into sys.aud$( sessionid,entryid,statement,ntimestamp OPEN-RECURSIVE ASANGA 34
3 select value$ from props$ where name = 'GLOBAL_DB_NAME' DICTIONARY LOOKUP CURSOR CACHED SYS 34
4 select SYS_CONTEXT('USERENV', 'SERVER_HOST'), SYS_CONTEXT('U DICTIONARY LOOKUP CURSOR CACHED SYS 34
5 select decode(failover_method, NULL, 0 , 'BASIC', 1, 'PRECON DICTIONARY LOOKUP CURSOR CACHED SYS 34
6 select privilege# from sysauth$ where (grantee#=:1 or grante DICTIONARY LOOKUP CURSOR CACHED SYS 34
7 select rownum,sql_text,cursor_type,USER_NAME,sid from v$open OPEN ASANGA 34

Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU
9 0 0 7 5
end got connection

start execute
1 select * from (SELECT name, value FROM v$sesstat, v$statna DICTIONARY LOOKUP CURSOR CACHED ASANGA 34
2 select /*+ connect_by_filtering */ privilege#,level from sys DICTIONARY LOOKUP CURSOR CACHED SYS 34
3 insert into sys.aud$( sessionid,entryid,statement,ntimestamp OPEN-RECURSIVE ASANGA 34
4 select value$ from props$ where name = 'GLOBAL_DB_NAME' DICTIONARY LOOKUP CURSOR CACHED SYS 34
5 select SYS_CONTEXT('USERENV', 'SERVER_HOST'), SYS_CONTEXT('U DICTIONARY LOOKUP CURSOR CACHED SYS 34
6 select decode(failover_method, NULL, 0 , 'BASIC', 1, 'PRECON DICTIONARY LOOKUP CURSOR CACHED SYS 34
7 select privilege# from sysauth$ where (grantee#=:1 or grante DICTIONARY LOOKUP CURSOR CACHED SYS 34
8 select b from x where a = :1 DICTIONARY LOOKUP CURSOR CACHED ASANGA 34
9 select rownum,sql_text,cursor_type,USER_NAME,sid from v$open DICTIONARY LOOKUP CURSOR CACHED ASANGA 34

Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU
10 0 2 9 93
end execute
The output of SQL3 lists the row number, the SQL text of the cursor, the cursor type, the username and finally the SID. Interestingly, although the Oracle reference doc says that for v$open_cursor the username refers to the "User that is logged in to the session", some cursors list the username as SYS while the session ID is that of the current session, which belongs to user asanga. The second output (from SQL2) lists total parses, hard parses, session cursor cache hits, session cursor cache count and the CPU used by the session.
Looking at the output at the time of the connection checkout, the session (or JDBC connection) had done 9 parses, all soft, as there are 0 hard parses listed. It has 7 cursors in its session cache, no session cursor cache hits, and so far has used 5 CPU centi-seconds. The 7 cursors and their types are also listed. What's missing from this initial output is SQL2, as it had not been run when the session cursor content was listed. But once SQL2 is executed and its cursor is closed, it too will be added as a session cached cursor.
After executing the test case the total parse count has increased to 10, and as there are no hard parses, the additional parse is a soft parse. However, during this period at least 3 different SQLs were executed (SQL1, SQL2 and SQL3), and SQL1 had multiple executions as well. Yet only one soft parse resulted during this time. There are two session cursor cache hits; these are the result of running SQL2 and SQL3, as they had already been cached. So the additional parse is for SQL1, and even though it was run 10000 times it resulted in only one soft parse (as shown in the trace file output listed earlier). This shows that finding an open cursor in the session cache does not result in soft parses when the cursor is used for multiple executions. Total CPU usage for test case 1 is 88 (93-5) centi-seconds.
To run the test case 2 uncomment the case 2 section and comment the case 1 section. Output for this test case is shown below.
start got connection
1 select /*+ connect_by_filtering */ privilege#,level from sys DICTIONARY LOOKUP CURSOR CACHED SYS 34
2 insert into sys.aud$( sessionid,entryid,statement,ntimestamp OPEN-RECURSIVE ASANGA 34
3 select value$ from props$ where name = 'GLOBAL_DB_NAME' DICTIONARY LOOKUP CURSOR CACHED SYS 34
4 select SYS_CONTEXT('USERENV', 'SERVER_HOST'), SYS_CONTEXT('U DICTIONARY LOOKUP CURSOR CACHED SYS 34
5 select decode(failover_method, NULL, 0 , 'BASIC', 1, 'PRECON DICTIONARY LOOKUP CURSOR CACHED SYS 34
6 select privilege# from sysauth$ where (grantee#=:1 or grante DICTIONARY LOOKUP CURSOR CACHED SYS 34
7 select rownum,sql_text,cursor_type,USER_NAME,sid from v$open OPEN ASANGA 34

Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU
9 0 0 7 6
end got connection

start execute
1 select * from (SELECT name, value FROM v$sesstat, v$statna DICTIONARY LOOKUP CURSOR CACHED ASANGA 34
2 select /*+ connect_by_filtering */ privilege#,level from sys DICTIONARY LOOKUP CURSOR CACHED SYS 34
3 insert into sys.aud$( sessionid,entryid,statement,ntimestamp OPEN-RECURSIVE ASANGA 34
4 select value$ from props$ where name = 'GLOBAL_DB_NAME' DICTIONARY LOOKUP CURSOR CACHED SYS 34
5 select SYS_CONTEXT('USERENV', 'SERVER_HOST'), SYS_CONTEXT('U DICTIONARY LOOKUP CURSOR CACHED SYS 34
6 select decode(failover_method, NULL, 0 , 'BASIC', 1, 'PRECON DICTIONARY LOOKUP CURSOR CACHED SYS 34
7 select privilege# from sysauth$ where (grantee#=:1 or grante DICTIONARY LOOKUP CURSOR CACHED SYS 34
8 select b from x where a = :1 SESSION CURSOR CACHED ASANGA 34
9 select rownum,sql_text,cursor_type,USER_NAME,sid from v$open DICTIONARY LOOKUP CURSOR CACHED ASANGA 34

Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU
10 0 10001 9 151
end execute
The statistics values at connection checkout are identical to those of case 1. The test case output shows the total parse count increased by 1. This is the same as case 1: even though in case 2 a PreparedStatement was created and closed with each iteration, this has not resulted in additional parses. The reason is that a closed cursor was found in the session cache, resulting in a session cursor cache hit. This value is 10001 in the second test case. From test case 1 it's known that SQL2 and SQL3 result in 2 session cache hits. This means SQL1 resulted in 9999 session cache hits when executed 10000 times. The one cache miss, on the initial execution, resulted in a single soft parse. Thus in terms of the number of parses done, case 1 and case 2 are the same. However, a tkprof output would list 10000 parses, which is not necessarily the case. The total session cursor count is 9, the same as test case 1, but the CPU time is higher compared to the previous test case, at 145 (151-6) centi-seconds. This is the overhead of the session cursor cache lookup.

What would be the parse overhead for the two cases when the session cached cursors parameter is set to 0? This would mean no cursors would be found in the session cache. Below is the output for case 1 when run with session cached cursors set to 0.
start got connection
1 insert into sys.aud$( sessionid,entryid,statement,ntimestamp OPEN-RECURSIVE ASANGA 157
2 select rownum,sql_text,cursor_type,USER_NAME,sid from v$open OPEN ASANGA 157

Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU
9 0 0 0 3
end got connection

start execute
1 insert into sys.aud$( sessionid,entryid,statement,ntimestamp OPEN-RECURSIVE ASANGA 157
2 select rownum,sql_text,cursor_type,USER_NAME,sid from v$open OPEN ASANGA 157

Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU
12 0 0 0 74
end execute
The number of parses at connection checkout time is the same as in the test executed with SCC (session_cached_cursors) = 50. However there are no session cached cursors, only open ones. At the end of the execution the total number of parses increased by 3. This is due to the execution of SQL1, SQL2 and SQL3. Even though SQL1 was executed multiple times with the same cursor (or PreparedStatement) it resulted in only one soft parse, which is the same as test case 1 with SCC=50. Since there's no session cache, SQL2 and SQL3 executed the second time around also result in soft parses. The total CPU usage is 71 (74-3) centi-seconds, which is less compared to test 1 with SCC=50.
Output for test case 2 with SCC=0 is shown below.
start got connection
1 insert into sys.aud$( sessionid,entryid,statement,ntimestamp OPEN-RECURSIVE ASANGA 157
2 select rownum,sql_text,cursor_type,USER_NAME,sid from v$open OPEN ASANGA 157

Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU
9 0 0 0 3
end got connection

start execute
1 insert into sys.aud$( sessionid,entryid,statement,ntimestamp OPEN-RECURSIVE ASANGA 157
2 select rownum,sql_text,cursor_type,USER_NAME,sid from v$open OPEN ASANGA 157

Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU
10011 0 0 0 198
end execute
Compared to the previous case 1 there's a huge increase in the number of soft parses. The total parse count increased by 10002 (10011-9). This is the result of 10000 parses for SQL1 and 1 parse each for SQL2 and SQL3. As a result the CPU usage has also increased, to 195 (198-3) centi-seconds for case 2. So out of all the scenarios, case 2 with SCC=0 would be the worst in terms of parse overhead and CPU.



It is possible to enable statement caching on the JDBC connection pool or on a non-pooled physical connection, as opposed to the database session cache. This is useful to lower the CPU and parse overhead when SCC=0 and there's no way to set the SCC initialization parameter. This could be tested in the Java code by uncommenting the setMaxStatements line. Below is the output for test case 2 with the setMaxStatements value set to 10.
start got connection
1 insert into sys.aud$( sessionid,entryid,statement,ntimestamp OPEN-RECURSIVE ASANGA 157
2 select rownum,sql_text,cursor_type,USER_NAME,sid from v$open OPEN ASANGA 157

Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU
9 0 0 0 3
end got connection

start execute

1 select * from (SELECT name, value FROM v$sesstat, v$statna OPEN ASANGA 157
2 insert into sys.aud$( sessionid,entryid,statement,ntimestamp OPEN-RECURSIVE ASANGA 157
3 select b from x where a = :1 OPEN ASANGA 157
4 select rownum,sql_text,cursor_type,USER_NAME,sid from v$open OPEN ASANGA 157

Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU
10 0 0 0 107
end execute
In this case the total parse count increased by only 1 and the CPU usage is low compared to test case 2 with SCC=0. This is comparable to the test cases with SCC=50 in terms of CPU usage and parse count.
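For reference, statement caching could be enabled either at the pool level or on an individual physical connection. A minimal sketch, assuming the UCP pool data source ds from the test code below, a JDBC 4 driver and the Oracle extension interface oracle.jdbc.OracleConnection:

// pool level: each pooled connection caches up to 10 statements
ds.setMaxStatements(10);

// connection level: Oracle implicit statement caching
Connection con = ds.getConnection();
OracleConnection ocon = con.unwrap(OracleConnection.class);
ocon.setImplicitCachingEnabled(true); // turn on implicit caching
ocon.setStatementCacheSize(10);       // statements cached on this connection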
The parse counts and CPU usage for each test case could be summarized as below. SCC is the session cached cursors value and MaxStm is the value set with the setMaxStatements method.
Test                          Parse count    CPU (centi-seconds)
SCC=50, Case=1, MaxStm=0      1              88
SCC=50, Case=2, MaxStm=0      1              145
SCC=0,  Case=1, MaxStm=0      3              71
SCC=0,  Case=2, MaxStm=0      10002          195
SCC=0,  Case=1, MaxStm=10     1              97
SCC=0,  Case=2, MaxStm=10     1              104

CPU usage graph shown below.

The tests have shown that setting or not setting the session cached cursors parameter has a greater influence on parse count than the choice between case 1 and case 2. When session cached cursors is not set, statement caching could be achieved with the help of connection pools, and in terms of parse count this is comparable to the scenario with session cached cursors set.
Table used for test
create table x (a number, b varchar2(10));

begin
for i in 1 .. 10
loop
insert into x values (i, 'abc'||i);
end loop;
end;
/
Java code used for test
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.logging.Level;
import java.util.logging.Logger;

import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

/**
*
* @author Asanga
*/
public class CursorTest {

    static String SQL1 = "select b from x where a = ?";
    static String SQL2 = "select * from (SELECT name, value FROM v$sesstat, v$statname WHERE v$sesstat.statistic#=v$statname.statistic# "
            + "AND (v$statname.name in ('parse count (total)','parse count (hard)','session cursor cache hits','session cursor cache count','CPU used by this session'))"
            + " AND v$sesstat.sid= sys_context('USERENV','SID')"
            + " )pivot (sum(value) for name in ('parse count (total)' as Parse_Total,'parse count (hard)' Hard_Parse,'session cursor cache hits' as SES_CUR_CACHE_HITS,'session cursor cache count' as SES_CUR_CACHE_COUNT, 'CPU used by this session' as CPU))";
    static String SQL3 = "select rownum,sql_text,cursor_type,USER_NAME,sid from v$open_cursor where sid=sys_context('USERENV','SID')";

    public static void main(String[] args) throws Exception {
        try {

            PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
            ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
            ds.setConnectionPoolName("TTPool");
            ds.setURL("jdbc:oracle:thin:@192.168.0.93:1521/dbsrv1");
            ds.setUser("asanga");
            ds.setPassword("asa");
            // ds.setMaxStatements(10); // uncomment to enable statement caching on the pool

            Connection con = ds.getConnection();
            PreparedStatement pr = null;

            System.out.println("start got connection");
            stat(con);
            System.out.println("end got connection");

            System.out.println("start execute");
            ResultSet rs = null;

            /****************************** Start CASE 1 ************************************/
            // single PreparedStatement reused for all executions
            pr = con.prepareStatement(SQL1);
            for (int i = 0; i < 10000; i++) {
                pr.setInt(1, i % 10);
                rs = pr.executeQuery();

                while (rs.next()) {
                    rs.getString(1);
                }
                rs.close();
            }

            pr.close();
            /****************************** End CASE 1 ************************************/
            /****************************** Start CASE 2 ************************************/
            // PreparedStatement created and closed in each iteration
//            for (int i = 0; i < 10000; i++) {
//                pr = con.prepareStatement(SQL1);
//                pr.setInt(1, i % 10);
//                rs = pr.executeQuery();
//
//                while (rs.next()) {
//                    rs.getString(1);
//                }
//                rs.close();
//                pr.close();
//            }
            /****************************** End CASE 2 ************************************/
            stat(con);
            System.out.println("end execute");
            con.close();

        } catch (SQLException ex) {
            Logger.getLogger(CursorTest.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    // prints the session's open cursors (SQL3) and parse/cache statistics (SQL2)
    public static void stat(Connection con) throws SQLException {

        PreparedStatement pr = null;
        ResultSet rs = null;

        pr = con.prepareStatement(SQL3);
        rs = pr.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getString(1) + " " + rs.getString(2) + "\t" + rs.getString(3) + " " + rs.getString(4) + " " + rs.getString(5));
        }
        rs.close();
        pr.close();

        System.out.println("Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU");
        pr = con.prepareStatement(SQL2);
        rs = pr.executeQuery();

        while (rs.next()) {
            System.out.println("\t" + rs.getString(1) + "\t " + rs.getInt(2) + "\t " + rs.getString(3) + "\t\t " + rs.getInt(4) + "\t " + rs.getInt(5));
        }

        rs.close();
        pr.close();
    }
}

Oracle Invoked Out-of-Memory Killer (oom-killer)

One node of a 2-node RAC (11.1.0.7) crashed, and the following could be seen in /var/log/messages.
Feb 11 09:46:01 server02 logger: Oracle CSSD waiting for OPROCD to start
Feb 11 22:46:21 server02 kernel: oracle invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
Feb 11 22:46:21 server02 kernel:
Feb 11 22:46:21 server02 kernel: Call Trace:
Feb 11 22:46:21 server02 kernel: [] out_of_memory+0x8e/0x2f3
Feb 11 22:46:21 server02 kernel: [] __alloc_pages+0x27f/0x308
Feb 11 22:46:21 server02 kernel: [] getnstimeofday+0x10/0x28
Feb 11 22:46:21 server02 kernel: [] __do_page_cache_readahead+0x96/0x179
Feb 11 22:46:21 server02 kernel: [] filemap_nopage+0x14c/0x360
Feb 11 22:46:21 server02 kernel: [] __handle_mm_fault+0x1fa/0xfaa
Feb 11 22:46:21 server02 kernel: [] do_page_fault+0x4cb/0x874
Feb 11 22:46:21 server02 kernel: [] thread_return+0x62/0xfe
Feb 11 22:46:21 server02 kernel: [] error_exit+0x0/0x84
Feb 11 22:46:21 server02 kernel:
Feb 11 22:46:23 server02 kernel: Mem-info:
Feb 11 22:46:23 server02 kernel: Node 0 DMA per-cpu:
Feb 11 22:46:23 server02 kernel: cpu 0 hot: high 0, batch 1 used:0
Feb 11 22:46:24 server02 kernel: cpu 0 cold: high 0, batch 1 used:0
Feb 11 22:46:25 server02 kernel: cpu 1 hot: high 0, batch 1 used:0
Feb 11 22:46:26 server02 kernel: cpu 1 cold: high 0, batch 1 used:0
Feb 11 22:46:26 server02 kernel: cpu 2 hot: high 0, batch 1 used:0
Feb 11 22:46:26 server02 kernel: cpu 2 cold: high 0, batch 1 used:0
Feb 11 22:46:27 server02 kernel: cpu 3 hot: high 0, batch 1 used:0
Feb 11 22:46:27 server02 kernel: cpu 3 cold: high 0, batch 1 used:0
Feb 11 22:46:28 server02 kernel: Node 0 DMA32 per-cpu:
Feb 11 22:46:29 server02 kernel: cpu 0 hot: high 186, batch 31 used:16
Feb 11 22:46:29 server02 kernel: cpu 0 cold: high 62, batch 15 used:29
Feb 11 22:46:29 server02 kernel: cpu 1 hot: high 186, batch 31 used:26
Feb 11 22:46:29 server02 kernel: cpu 1 cold: high 62, batch 15 used:14
Feb 11 22:46:29 server02 kernel: cpu 2 hot: high 186, batch 31 used:27
Feb 11 22:46:29 server02 kernel: cpu 2 cold: high 62, batch 15 used:32
Feb 11 22:46:29 server02 kernel: cpu 3 hot: high 186, batch 31 used:9
Feb 11 22:46:30 server02 kernel: cpu 3 cold: high 62, batch 15 used:59
Feb 11 22:46:30 server02 kernel: Node 0 Normal per-cpu:
Feb 11 22:46:30 server02 kernel: cpu 0 hot: high 186, batch 31 used:80
Feb 11 22:46:30 server02 kernel: cpu 0 cold: high 62, batch 15 used:54
Feb 11 22:46:30 server02 kernel: cpu 1 hot: high 186, batch 31 used:38
Feb 11 22:46:30 server02 kernel: cpu 1 cold: high 62, batch 15 used:55
Feb 11 22:46:30 server02 kernel: cpu 2 hot: high 186, batch 31 used:30
Feb 11 22:46:30 server02 kernel: cpu 2 cold: high 62, batch 15 used:58
Feb 11 22:46:31 server02 kernel: cpu 3 hot: high 186, batch 31 used:38
Feb 11 22:46:31 server02 kernel: cpu 3 cold: high 62, batch 15 used:51
Feb 11 22:46:31 server02 kernel: Node 0 HighMem per-cpu: empty
Feb 11 22:46:31 server02 kernel: Free pages: 76148kB (0kB HighMem)


Oracle provides a metalink note for lowMem region memory pressure (452326.1), but this crash is related to the highMem region, so that note wasn't much help. If it's not related to a bug (551991.1) then the root cause could be memory pressure (1502301.1). It must be verified that the system is indeed under memory pressure. System statistics (from either sysstat or OSWatcher) could be used to check whether memory consumption deviated from normal due to any recent application changes.
Other solutions include setting the vm.lower_zone_protection (only on 32-bit systems) or vm.lowmem_reserve_ratio kernel parameter, depending on the architecture and kernel version.
Useful Metalink notes
Linux: Out-of-Memory (OOM) Killer [452000.1]
How to Check Whether a System is Under Memory Pressure [1502301.1]
Linux Kernel: The SLAB Allocator [434351.1]
BUG 6167888: RMAN-10038 ERROR ON LINUX AFTER CREATING A LARGE BACKUPSET ON NFS [551991.1]
Linux Kernel Lowmem Pressure Issues and Kernel Structures [452326.1]

Checking If Server Is Physical Or Virtual

There are several ways to check whether a server is a physical server or a virtual server. This post lists just two such methods.

1. Using dmidecode
Among the many pieces of information listed by the dmidecode command (requires root access) is the system-manufacturer string. On a physical server the command will output the hardware vendor's name; on a virtual server it will output the virtualization software vendor's name.
On physical servers
# dmidecode -s system-manufacturer
Dell Inc.

# dmidecode -s system-manufacturer
IBM
On Virtual servers

# dmidecode -s system-manufacturer
VMware, Inc.

# dmidecode -s system-manufacturer
innotek GmbH
innotek GmbH is reported for a VM created with VirtualBox.
There are other strings in the dmidecode output that could also be used to find out whether the server is physical or virtual.
Grepping for Product on physical servers returns
# dmidecode | grep Product
Product Name: PowerEdge M610
Product Name: 02Y41P
Product Name:

# dmidecode | grep Product
Product Name: IBM System x3550 M4 Server -[7914ZT7]-
Product Name: 00J6242
On virtual servers it returns
# dmidecode | grep Product
Product Name: VMware Virtual Platform
Product Name: 440BX Desktop Reference Platform

# dmidecode | grep Product
Product Name: VirtualBox
Product Name: VirtualBox
It's also possible to grep on the string Virtual. Output from physical servers.
# dmidecode | grep Virtual
VME (Virtual mode extension)

# dmidecode | grep Virtual
VME (Virtual mode extension)
Enhanced Virtualization
Output from virtual servers
# dmidecode | grep Virtual
Product Name: VMware Virtual Platform
VME (Virtual mode extension)
VME (Virtual mode extension)
VME (Virtual mode extension)
VME (Virtual mode extension)
String 2: Welcome to the Virtual Machine

# dmidecode | grep Virtual
Version: VirtualBox
Product Name: VirtualBox
Family: Virtual Machine
Product Name: VirtualBox


2. Using virt-what
virt-what (included in the virt-what-*.rpm, which comes with RedHat and other distros) checks whether the running system is physical or virtual. This command must be run as the root user.
On physical servers the command returns nothing and goes back to the prompt.
[root@srv6 ~]# virt-what
[root@srv6 ~]#
On virtual servers
# virt-what
vmware

# virt-what
virtualbox
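
Where running dmidecode as root is not an option, the same DMI strings are typically exposed by the kernel under /sys/class/dmi/id and are usually world-readable. A minimal Java sketch based on that assumption (path availability varies by distribution and kernel):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class DmiCheck {
    public static void main(String[] args) throws IOException {
        // sys_vendor holds the same value as dmidecode -s system-manufacturer
        String vendor = new String(Files.readAllBytes(
                Paths.get("/sys/class/dmi/id/sys_vendor"))).trim();
        // e.g. "Dell Inc." or "IBM" on physical servers,
        // "VMware, Inc." or "innotek GmbH" on virtual servers
        System.out.println("system-manufacturer: " + vendor);
    }
}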

Session Cached Cursors and JDBC CallableStatement

Similar to the earlier post that compared the various PreparedStatement scenarios, this post examines the use of CallableStatements and the effects of the session cached cursors (SCC) parameter. Case 1 and Case 2 are the same as with the PreparedStatements.

Case 1:
Create CallableStatement
For Loop
    bind values
    execute
End Loop
Close CallableStatement

Case 2:
For Loop
    Create CallableStatement
    bind values
    execute
    Close CallableStatement
End Loop

However, the execution of server-side PL/SQL code via JDBC results in two calls: one for the JDBC call invoking the PL/SQL and one for the SQL within the PL/SQL. First up is case 1 with SCC=50, where the CallableStatement is executed 10000 times. The output format is the same as earlier. The Java code and the PL/SQL script used for the test are given at the end of the post.
start got connection
1 select rownum,sql_text,cursor_type,USER_NAME,sid from v$open OPEN ASANGA 19
2 select /*+ connect_by_filtering */ privilege#,level from sys SESSION CURSOR CACHED SYS 19
3 insert into sys.aud$( sessionid,entryid,statement,ntimestamp OPEN-RECURSIVE ASANGA 19
4 select value$ from props$ where name = 'GLOBAL_DB_NAME' DICTIONARY LOOKUP CURSOR CACHED SYS 19
5 select SYS_CONTEXT('USERENV', 'SERVER_HOST'), SYS_CONTEXT('U DICTIONARY LOOKUP CURSOR CACHED SYS 19
6 select decode(failover_method, NULL, 0 , 'BASIC', 1, 'PRECON DICTIONARY LOOKUP CURSOR CACHED SYS 19
7 select privilege# from sysauth$ where (grantee#=:1 or grante DICTIONARY LOOKUP CURSOR CACHED SYS 19

Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU
9 0 3 7 3

end got connection


start execute
1 SELECT B FROM X WHERE A = :B1 PL/SQL CURSOR CACHED ASANGA 19
2 begin :1 := cursorfunc(:2 ); end; DICTIONARY LOOKUP CURSOR CACHED ASANGA 19
3 select rownum,sql_text,cursor_type,USER_NAME,sid from v$open DICTIONARY LOOKUP CURSOR CACHED ASANGA 19
4 select * from (SELECT name, value FROM v$sesstat, v$statna DICTIONARY LOOKUP CURSOR CACHED ASANGA 19
5 select /*+ connect_by_filtering */ privilege#,level from sys SESSION CURSOR CACHED SYS 19
6 insert into sys.aud$( sessionid,entryid,statement,ntimestamp OPEN-RECURSIVE ASANGA 19
7 select value$ from props$ where name = 'GLOBAL_DB_NAME' DICTIONARY LOOKUP CURSOR CACHED SYS 19
8 select SYS_CONTEXT('USERENV', 'SERVER_HOST'), SYS_CONTEXT('U DICTIONARY LOOKUP CURSOR CACHED SYS 19
9 select decode(failover_method, NULL, 0 , 'BASIC', 1, 'PRECON DICTIONARY LOOKUP CURSOR CACHED SYS 19
10 select privilege# from sysauth$ where (grantee#=:1 or grante DICTIONARY LOOKUP CURSOR CACHED SYS 19

Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU
11 0 10004 10 587
end execute
At the time of the connection checkout the connection (session) had done 9 parses in total, already had 3 session cache hits and 7 cursors cached, and had spent 3 centi-seconds of CPU. After the execution of the CallableStatement the output shows that two additional (soft) parses occurred, with 10001 session cache hits. Since the CallableStatement is never closed, it's never cached; the 10001 cache hits occurred due to the SQL called inside the PL/SQL function (row 1 in the second output above). When the PL/SQL is called for the first time, the SQL inside it will be missing from the session cache. At the completion of the PL/SQL this cursor is implicitly closed (even though the Java-side CallableStatement is never closed), resulting in it being added to the session cache and being found there on subsequent executions from the same connection. This results in 9999 session cache hits. The other two are the statistics-gathering SQLs that were executed a second time.
Tkprof formatted output shows the following for this case.
begin :1  := cursorfunc(:2 ); end;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 10000 2.89 3.23 0 6 0 10000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 10001 2.89 3.23 0 6 0 10000

SELECT B FROM X WHERE A = :B1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 10000 0.28 0.39 0 0 0 0
Fetch 10000 0.66 0.75 0 60000 0 10000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 20001 0.95 1.14 0 60000 0 10000
Both the SQL and the CallableStatement were parsed once and executed multiple times. The CPU usage for this case is 584 (587-3) centi-seconds.
Next is case 2 with SCC=50; the output is given below.
start got connection

1 select rownum,sql_text,cursor_type,USER_NAME,sid from v$open OPEN ASANGA 19
2 select /*+ connect_by_filtering */ privilege#,level from sys SESSION CURSOR CACHED SYS 19
3 insert into sys.aud$( sessionid,entryid,statement,ntimestamp OPEN-RECURSIVE ASANGA 19
4 select value$ from props$ where name = 'GLOBAL_DB_NAME' DICTIONARY LOOKUP CURSOR CACHED SYS 19
5 select SYS_CONTEXT('USERENV', 'SERVER_HOST'), SYS_CONTEXT('U DICTIONARY LOOKUP CURSOR CACHED SYS 19
6 select decode(failover_method, NULL, 0 , 'BASIC', 1, 'PRECON DICTIONARY LOOKUP CURSOR CACHED SYS 19
7 select privilege# from sysauth$ where (grantee#=:1 or grante DICTIONARY LOOKUP CURSOR CACHED SYS 19

Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU
9 0 3 7 2

end got connection

start execute

1 SELECT B FROM X WHERE A = :B1 PL/SQL CURSOR CACHED ASANGA 19
2 begin :1 := cursorfunc(:2 ); end; SESSION CURSOR CACHED ASANGA 19
3 select rownum,sql_text,cursor_type,USER_NAME,sid from v$open DICTIONARY LOOKUP CURSOR CACHED ASANGA 19
4 select * from (SELECT name, value FROM v$sesstat, v$statna DICTIONARY LOOKUP CURSOR CACHED ASANGA 19
5 select /*+ connect_by_filtering */ privilege#,level from sys SESSION CURSOR CACHED SYS 19
6 insert into sys.aud$( sessionid,entryid,statement,ntimestamp OPEN-RECURSIVE ASANGA 19
7 select value$ from props$ where name = 'GLOBAL_DB_NAME' DICTIONARY LOOKUP CURSOR CACHED SYS 19
8 select SYS_CONTEXT('USERENV', 'SERVER_HOST'), SYS_CONTEXT('U DICTIONARY LOOKUP CURSOR CACHED SYS 19
9 select decode(failover_method, NULL, 0 , 'BASIC', 1, 'PRECON DICTIONARY LOOKUP CURSOR CACHED SYS 19
10 select privilege# from sysauth$ where (grantee#=:1 or grante DICTIONARY LOOKUP CURSOR CACHED SYS 19

Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU
11 0 20003 10 721

end execute
The case 2 test results in two additional soft parses after the CallableStatements are executed, but the total session cache hits during the test are 20000. Since the callable statement is recreated with each iteration, this results in 9999 session cache hits, plus the 9999 from the SQL inside the PL/SQL function, which makes 19998; together with the two cache hits for the statistics-gathering SQLs this makes 20000 session cache hits. Total CPU used is 719 (721-2) centi-seconds, higher compared to case 1. Tkprof formatted output shows the following; the trace file incorrectly lists 10000 parses for the CallableStatement (the same as in the case of the PreparedStatements).
begin :1  := cursorfunc(:2 ); end;

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 10000 0.23 0.30 0 0 0 0
Execute 10000 3.08 3.15 0 6 0 10000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 20000 3.31 3.45 0 6 0 10000

SELECT B FROM X WHERE A = :B1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 10000 0.38 0.39 0 0 0 0
Fetch 10000 0.63 0.72 0 60000 0 10000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 20001 1.02 1.12 0 60000 0 10000




In the second phase the same two test cases were carried out, but with the session cached cursors parameter set to 0. Below is the output from case 1 with SCC=0.
start got connection

1 select rownum,sql_text,cursor_type,USER_NAME,sid from v$open OPEN ASANGA 151
2 insert into sys.aud$( sessionid,entryid,statement,ntimestamp OPEN-RECURSIVE ASANGA 151

Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU
12 0 0 0 6

end got connection

start execute
1 select rownum,sql_text,cursor_type,USER_NAME,sid from v$open OPEN ASANGA 151
2 insert into sys.aud$( sessionid,entryid,statement,ntimestamp OPEN-RECURSIVE ASANGA 151

Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU
10015 0 0 0 717
end execute
As there are no session cached cursors, the execution of the CallableStatement resulted in 10003 soft parses (10015-12). This count is made up of 10000 soft parses on the SQL inside the PL/SQL, one soft parse on the CallableStatement and two for the statistics-gathering statements. In this case, repeated use of the open CallableStatement does not result in additional parses. The tkprof output that confirms this is given below.
begin :1  := cursorfunc(:2 ); end;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 10000 4.16 4.32 0 6 0 10000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 10001 4.16 4.32 0 6 0 10000

SELECT B FROM X WHERE A = :B1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 10000 0.24 0.31 0 0 0 0
Execute 10000 0.32 0.46 0 0 0 0
Fetch 10000 0.59 0.73 0 60000 0 10000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 30000 1.17 1.50 0 60000 0 10000
The output of case 2 is given below.
start got connection

1 select rownum,sql_text,cursor_type,USER_NAME,sid from v$open OPEN ASANGA 139
2 insert into sys.aud$( sessionid,entryid,statement,ntimestamp OPEN-RECURSIVE ASANGA 139

Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU
12 0 0 0 5

end got connection

start execute

1 select rownum,sql_text,cursor_type,USER_NAME,sid from v$open OPEN ASANGA 139
2 insert into sys.aud$( sessionid,entryid,statement,ntimestamp OPEN-RECURSIVE ASANGA 139

Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU
20014 0 0 0 945

end execute
Since the CallableStatement is closed after each iteration, this results in a soft parse each time it's called. Therefore in this case the total number of soft parses is 20002 (20014-12). This count is made up of 10000 soft parses for the CallableStatement, 10000 for the SQL inside the PL/SQL and two for the statistics-gathering SQLs. The tkprof output also lists the soft parses.
begin :1  := cursorfunc(:2 ); end;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 10000 0.37 0.41 0 0 0 0
Execute 10000 4.45 4.48 0 6 0 10000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 20000 4.83 4.90 0 6 0 10000

SELECT B FROM X WHERE A = :B1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 10000 0.30 0.32 0 0 0 0
Execute 10000 0.40 0.50 0 0 0 0
Fetch 10000 0.69 0.75 0 60000 0 10000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 30000 1.40 1.58 0 60000 0 10000
As with the PreparedStatement, it is possible to reduce some of the soft parses when SCC=0 by enabling statement caching on the connection pool or connection. The output of these tests is not shown, but the parse count and CPU are included in the summary table.
Test                          Parse count    CPU (centi-seconds)
SCC=50, Case=1, MaxStm=0      2              584
SCC=50, Case=2, MaxStm=0      2              719
SCC=0,  Case=1, MaxStm=0      10003          711
SCC=0,  Case=2, MaxStm=0      20002          940
SCC=0,  Case=1, MaxStm=10     10001          724
SCC=0,  Case=2, MaxStm=10     10001          731

CPU usage graph for each case.

These tests have shown that setting SCC is helpful in reducing the parse count and the CPU usage associated with parsing. When SCC is not set or cannot be set, similar behavior could be achieved by enabling statement caching on the connection pool.

Related Post
Session Cached Cursors and JDBC PreparedStatement

Database test case Code
create table x (a number, b varchar2(10));

begin
for i in 1 .. 10
loop
insert into x values (i, 'abc'||i);
end loop;
end;
/

create or replace function cursorfunc(inval in number)
return varchar2 as
retval varchar2(10);
begin
select b into retval from x where a = inval;
return retval;
end;
/
Java Code
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.logging.Level;
import java.util.logging.Logger;

import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

/**
*
* @author Asanga
*/
public class PLSQLCursorTest {

    static String SQL1 = "begin ? := cursorfunc(?); end;";

    static String SQL2 = "select * from (SELECT name, value FROM v$sesstat, v$statname WHERE v$sesstat.statistic#=v$statname.statistic# "
            + "AND (v$statname.name in ('parse count (total)','parse count (hard)','session cursor cache hits','session cursor cache count','CPU used by this session'))"
            + " AND v$sesstat.sid= sys_context('USERENV','SID')"
            + " )pivot (sum(value) for name in ('parse count (total)' as Parse_Total,'parse count (hard)' Hard_Parse,'session cursor cache hits' as SES_CUR_CACHE_HITS,'session cursor cache count' as SES_CUR_CACHE_COUNT, 'CPU used by this session' as CPU))";
    static String SQL3 = "select rownum,sql_text,cursor_type,USER_NAME,sid from v$open_cursor where sid=sys_context('USERENV','SID')";

    public static void main(String[] args) throws Exception {
        try {

            PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
            ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
            ds.setConnectionPoolName("TTPool");
            ds.setURL("jdbc:oracle:thin:@192.168.0.85:1521/dbsrv1");
            ds.setUser("asanga");
            ds.setPassword("asa");
            // ds.setMaxStatements(10); // uncomment to enable statement caching on the pool

            Connection con = ds.getConnection();
            CallableStatement pr = null;

            System.out.println("start got connection");
            stat(con);
            System.out.println("end got connection");

            System.out.println("start execute");

            /****************************** Start CASE 1 ************************************/
            // single CallableStatement reused for all executions
//            pr = con.prepareCall(SQL1);
//            pr.registerOutParameter(1, java.sql.Types.VARCHAR);
//            for (int i = 0; i < 10000; i++) {
//                pr.setInt(2, (i % 10) + 1);
//                pr.execute();
//                pr.getString(1);
//            }
//
//            pr.close();
            /****************************** End CASE 1 ************************************/

            /****************************** Start CASE 2 ************************************/
            // CallableStatement created and closed in each iteration
            for (int i = 0; i < 10000; i++) {
                pr = con.prepareCall(SQL1);
                pr.registerOutParameter(1, java.sql.Types.VARCHAR);
                pr.setInt(2, (i % 10) + 1);
                pr.execute();
                pr.getString(1);
                pr.close();
            }
            /****************************** End CASE 2 ************************************/

            stat(con);
            System.out.println("end execute");
            con.close();

        } catch (SQLException ex) {
            Logger.getLogger(PLSQLCursorTest.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    // prints the session's open cursors (SQL3) and parse/cache statistics (SQL2)
    public static void stat(Connection con) throws SQLException {

        PreparedStatement pr = null;
        ResultSet rs = null;

        pr = con.prepareStatement(SQL3);
        rs = pr.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getString(1) + " " + rs.getString(2) + "\t" + rs.getString(3) + " " + rs.getString(4) + " " + rs.getString(5));
        }

        rs.close();
        pr.close();

        System.out.println("Parse_Total Hard_Parse Ses_Cur_Cache_Hit Ses_Cur_Cache_Count CPU");
        pr = con.prepareStatement(SQL2);
        rs = pr.executeQuery();

        while (rs.next()) {
            System.out.println("\t" + rs.getString(1) + "\t " + rs.getInt(2) + "\t " + rs.getString(3) + "\t\t " + rs.getInt(4) + "\t " + rs.getInt(5));
        }

        rs.close();
        pr.close();
    }
}


Oracle vs Standard JDBC Batching For Updates

Oracle provided two flavors of batching with the JDBC drivers. One was called standard batching, which followed the JDBC 2.0 specification; the other flavor was an Oracle-specific implementation (called Oracle update batching) which is independent of JDBC 2.0. It is not possible to mix these two modes in the same application. However, with the latest implementation of the JDBC driver (12c), Oracle update batching is deprecated. The evolution of the documentation over the years is given below.
From the 10.2 JDBC Developer's Guide: "Oracle update batching is a more efficient model because the driver knows ahead of time how many operations will be batched. In this sense, the Oracle model is more static and predictable. With the standard model, the driver has no way of knowing in advance how many operations will be batched. In this sense, the standard model is more dynamic in nature."
From the 11.2 JDBC Developer's Guide: "Oracle recommends that you use JDBC standard features when possible. This recommendation applies to update batching as well. Oracle update batching is retained primarily for backwards compatibility."
From the 12.1 JDBC Developer's Guide: "Starting from Oracle Database 12c Release 1 (12.1), Oracle update batching is deprecated. Oracle recommends that you use standard JDBC batching instead of Oracle update batching."
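To make the difference concrete, the sketch below shows the two styles against the update statement used in the tests. With standard batching the application queues updates with addBatch and sends them with executeBatch; with the (now deprecated) Oracle update batching a batch value is set on the OraclePreparedStatement (oracle.jdbc.OraclePreparedStatement) and the driver sends automatically, with sendBatch flushing any remainder. This is a sketch only; the full test code is given at the end of the post.

// standard (JDBC 2.0) update batching
PreparedStatement pr = con.prepareStatement("update x set b = ? where a = ?");
for (int i = 1; i <= 1000; i++) {
    pr.setString(1, "khf" + i);
    pr.setInt(2, i);
    pr.addBatch();                    // queue the update
}
int[] counts = pr.executeBatch();     // send the whole batch explicitly

// Oracle update batching (deprecated as of the 12c driver)
PreparedStatement opr = con.prepareStatement("update x set b = ? where a = ?");
((OraclePreparedStatement) opr).setExecuteBatch(500); // driver sends every 500
for (int i = 1; i <= 1000; i++) {
    opr.setString(1, "khf" + i);
    opr.setInt(2, i);
    opr.executeUpdate();              // queued, sent automatically in batches
}
((OraclePreparedStatement) opr).sendBatch(); // flush any queued updates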
As Oracle update batching is deprecated, the comparison only holds for older versions of the JDBC drivers; the tests related to this post were carried out using the 11.2 JDBC driver. The Java code used for the tests is given at the end of the post. Two tests were carried out. The first involved updating a number of rows in a table using standard batching, a standard-batching implementation of Oracle batching, and Oracle batching. The standard batching test executes the update batch only once. The standard-batching implementation mimics what Oracle batching does by executing the update batch at fixed intervals (500 and 1000 updates at a time). Finally, Oracle batching uses the values of 500 and 1000 as batch values. This test checks if there's any considerable difference in the timing of the updates. The result table is given below.
# Updates    Standard Batching    Standard Batching - 500    Standard Batching - 1000    Oracle Batching - 500    Oracle Batching - 1000
100          0.35                 0.35                       0.36                        0.36                     0.36
200          0.62                 0.62                       0.65                        0.63                     0.63
500          1.43                 1.41                       1.47                        1.43                     1.45
1000         2.76                 2.73                       2.88                        2.81                     2.81
2000         5.72                 5.38                       5.64                        5.54                     5.68
5000         13.55                13.43                      13.99                       13.67                    13.79
10000        27.82                26.58                      28.05                       27.31                    27.58
25000        80.01                75.43                      78.65                       76.42                    76.93
50000        165.24               158.99                     164.32                      159.13                   159.04
75000        243.25               242.92                     248.73                      242.17                   242.79
100000       330.92               330.84                     333.34                      327.65                   331.22

There's not much difference in the time it takes to complete all the updates, whether via standard batching or Oracle batching. A scatter plot is shown below.




The next test involved updating a column with a blob (size 8KB). The results table is given below.
# Updates    Standard Batching    Standard Batching - 500    Standard Batching - 1000    Oracle Batching - 500    Oracle Batching - 1000
100          1.47                 1.53                       1.50                        1.68                     1.76
200          2.79                 2.96                       2.91                        3.14                     3.19
500          6.88                 7.37                       7.14                        7.68                     7.87
1000         13.60                14.70                      14.20                       15.30                    15.50
2000         27.31                29.25                      30.13                       30.07                    30.41
5000         68.30                74.10                      74.89                       77.23                    74.82
10000        143.31               151.05                     143.14                      151.68                   152.43
25000        348.83               351.34                     365.54                      384.77                   394.99
50000        701.67               727.28                     707.79                      857.79                   859.15

There's not much difference in the update time between the different standard batching tests.

However there's some advantage to standard batching compared to Oracle batching when the number of updates is 25,000 or over. But the increased batch size requires more memory, and this situation may be uncommon. For lower values, again there's not much difference in terms of time saved.

These tests have shown that the batching mode (Oracle or standard) does not have a real influence on the time taken for the updates. As mentioned earlier, as of 12c Oracle update batching is deprecated, and if these results hold true for the 12c driver as well, there shouldn't be any performance degradation.

Test java code used
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.logging.Level;
import java.util.logging.Logger;
import oracle.jdbc.pool.OracleDataSource;

/**
*
* @author Asanga
*/
public class Update {

    public static void main(String[] args) {
        try {
            OracleDataSource dataSource = new OracleDataSource();
            dataSource.setURL("jdbc:oracle:thin:@192.168.0.66:1521:ent11g2");
            dataSource.setUser("asanga");
            dataSource.setPassword("asa");

            // String SQL = "update x set b = ?, c = ? where a = ?"; // update blob
            String SQL = "update x set b = ? where a = ?";

            Connection con = dataSource.getConnection();
            con.setAutoCommit(false);

            long t1 = System.nanoTime();

            // byte[] lob = new byte[8 * 1024];
            // lob[10] = 10;
            // lob[8000] = 20;

            PreparedStatement pr = con.prepareStatement(SQL);
            // ((OraclePreparedStatement) pr).setExecuteBatch(1000); // Oracle update batching
            for (int i = 1; i <= 75000; i++) {

                StringBuilder b = new StringBuilder("khf").append(i);

                pr.setString(1, b.toString());
                // pr.setBytes(2, lob);
                pr.setInt(2, i);

                pr.addBatch();

                // standard batching at a fixed interval (here every 1000 updates)
                if ((i > 0) && (i % 1000 == 0)) {
                    int[] ret = pr.executeBatch();
                    System.out.println(ret.length);
                }
                // pr.executeUpdate(); // used with Oracle update batching instead of addBatch
            }

            int[] ret = pr.executeBatch();
            // ((OraclePreparedStatement) pr).sendBatch(); // used with Oracle update batching
            con.commit();
            long t2 = System.nanoTime();

            System.out.println("returned " + ret.length + " time : " + (t2 - t1));
            // System.out.println("time : " + (t2 - t1));

            pr.close();
            con.close();

        } catch (SQLException ex) {
            Logger.getLogger(Update.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}

Deleting a Node From 12cR1 RAC

Deleting a node from a 12cR1 RAC is similar to doing so in 11gR2. Deletion has three distinct phases: removing the database instance, removing the Oracle database software and finally the clusterware. However, as per the Oracle documentation, "you can remove the Oracle RAC database instance from the node before removing the node from the cluster but this step is not required. If you do not remove the instance, then the instance is still configured but never runs". It must also be noted from the Oracle documentation that "deleting a node from a cluster does not remove a node's configuration information from the cluster. The residual configuration information does not interfere with the operation of the cluster". For example, it is possible to see some information with regard to the deleted node in an OCR dump file, and this shouldn't be a cause for concern.
The RAC setup in this case is a 2-node RAC, and the node named rhel12c2 will be removed from the cluster. The database is a CDB which has a single PDB.
SQL>  select instance_number,instance_name,host_name from gv$instance;

INSTANCE_NUMBER INSTANCE_NAME HOST_NAME
--------------- ---------------- --------------------
1 cdb12c1 rhel12c1.domain.net
2 cdb12c2 rhel12c2.domain.net # node and instance to be removed

SQL> select con_id,dbid,name from gv$pdbs;

CON_ID DBID NAME
---------- ---------- ---------
2 4066687628 PDB$SEED
3 476277969 PDB12C
2 4066687628 PDB$SEED
3 476277969 PDB12C
1. The first phase includes removing the database instance from the node to be deleted. For this, run DBCA on any node except the node that hosts the instance being deleted. In this case DBCA is run from node rhel12c1. Follow the instance management option to remove the instance.

The following message is shown as the pdbsvc service is created to connect to the PDB.

2. At the end of the DBCA run the database instance is removed from the node to be deleted
SQL> select instance_number,instance_name,host_name from gv$instance;

INSTANCE_NUMBER INSTANCE_NAME HOST_NAME
--------------- ---------------- -------------------
1 cdb12c1 rhel12c1.domain.net

SQL> select con_id,dbid,name from gv$pdbs;

CON_ID DBID NAME
---------- ---------- ---------
2 4066687628 PDB$SEED
3 476277969 PDB12C

[oracle@rhel12c1 ~]$ srvctl config database -d cdb12c
Database unique name: cdb12c
Database name: cdb12c
Oracle home: /opt/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/cdb12c/spfilecdb12c.ora
Password file: +DATA/cdb12c/orapwcdb12c
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: cdb12c
Database instances: cdb12c1 # only shows the remaining instance
Disk Groups: DATA,FLASH
Mount point paths:
Services: pdbsvc
Type: RAC
Start concurrency:
Stop concurrency:
Database is administrator managed
3. Check if the redo log threads for the deleted instance are removed from the database
SQL> select inst_id,group#,thread# from gv$log;

INST_ID GROUP# THREAD#
---------- ---------- ----------
1 1 1
1 2 1
As it only shows redo threads for instance 1, no further action is needed. With this step the first phase is complete. If DBCA had not removed the redo log threads, they could be disabled manually with alter database disable thread thread#.
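For example, had the second redo thread been left behind in this setup, it could have been disabled with:

SQL> alter database disable thread 2;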
4. The second phase is to remove the Oracle database software. In 12c, by default the listener runs out of the grid home. However, it is possible to set up a listener to run out of the Oracle home (RAC home) as well. If this is the case then stop and disable any listeners running out of the RAC home. In this configuration there are no listeners running out of the RAC home.
5. On the node to be deleted, update the node list to include only the node being deleted. Before the node update command is run the inventory.xml has all the nodes under the Oracle home; after the command is run this reduces to only the node being deleted. However, the inventory.xml on the other nodes will still have all the nodes in the cluster under the Oracle home.
<HOME NAME="OraDB12Home1" LOC="/opt/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="2">
<NODE_LIST>
<NODE NAME="rhel12c1"/>
<NODE NAME="rhel12c2"/>
</NODE_LIST>
</HOME>


[oracle@rhel12c2 ~]$ /opt/app/oracle/product/12.1.0/dbhome_1/oui/bin/runInstaller.sh -updateNodeList ORACLE_HOME=/opt/app/oracle/product/12.1.0/dbhome_1 "CLUSTER_NODES={rhel12c2}" -local
Starting Oracle Universal Installer...

<HOME NAME="OraDB12Home1" LOC="/opt/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="2">
<NODE_LIST>
<NODE NAME="rhel12c2"/>
</NODE_LIST>
</HOME>
6. Run the deinstall command with the -local option. Without the -local option this will remove the Oracle home from all the nodes!
[oracle@rhel12c2 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /opt/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /opt/app/oracle/product/12.1.0/dbhome_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /opt/app/oracle
Checking for existence of central inventory location /opt/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/app/12.1.0/grid
The following nodes are part of this cluster: rhel12c2,rhel12c1
Checking for sufficient temp space availability on node(s) : 'rhel12c2,rhel12c1'
## [END] Install check configuration ##

Network Configuration check config START
Network de-configuration trace file location: /opt/app/oraInventory/logs/netdc_check2014-02-27_11-08-10-AM.log
Network Configuration check config END
Database Check Configuration START
Database de-configuration trace file location: /opt/app/oraInventory/logs/databasedc_check2014-02-27_11-08-16-AM.log
Use comma as separator when specifying list of values as input

Specify the list of database names that are configured locally on this node for this Oracle home. Local configurations of the discovered databases will be removed []:
Database Check Configuration END
Oracle Configuration Manager check START
OCM check log file location : /opt/app/oraInventory/logs//ocm_check9786.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/app/12.1.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:rhel12c2,rhel12c1
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rhel12c2'.

Oracle Home selected for deinstall is: /opt/app/oracle/product/12.1.0/dbhome_1
Inventory Location where the Oracle home registered is: /opt/app/oraInventory
Checking the config status for CCR
rhel12c2 : Oracle Home exists with CCR directory, but CCR is not configured
rhel12c1 : Oracle Home exists and CCR is configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/opt/app/oraInventory/logs/deinstall_deconfig2014-02-27_11-07-43-AM.out'
Any error messages from this session will be written to: '/opt/app/oraInventory/logs/deinstall_deconfig2014-02-27_11-07-43-AM.err'
######################## CLEAN OPERATION START ########################
Database de-configuration trace file location: /opt/app/oraInventory/logs/databasedc_clean2014-02-27_11-08-52-AM.log

Network Configuration clean config START
Network de-configuration trace file location: /opt/app/oraInventory/logs/netdc_clean2014-02-27_11-08-52-AM.log
Network Configuration clean config END
Oracle Configuration Manager clean START
OCM clean log file location : /opt/app/oraInventory/logs//ocm_clean9786.log
Oracle Configuration Manager clean END
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/opt/app/oracle/product/12.1.0/dbhome_1' from the central inventory on the local node : Done
Delete directory '/opt/app/oracle/product/12.1.0/dbhome_1' on the local node : Done
The Oracle Base directory '/opt/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/opt/app/12.1.0/grid'.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END

## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2014-02-27_11-06-59AM' on node 'rhel12c2'
Clean install operation removing temporary directory '/tmp/deinstall2014-02-27_11-06-59AM' on node 'rhel12c1'
## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
Cleaning the CCR configuration by executing its binaries
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/opt/app/oracle/product/12.1.0/dbhome_1' from the central inventory on the local node.
Successfully deleted directory '/opt/app/oracle/product/12.1.0/dbhome_1' on the local node.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
7. After the deinstall has completed, run the node update command on any remaining node. This updates the node list by removing the deleted node from the Oracle home's node list. The inventory.xml output before and after the command has executed is shown. The command shown is for non-shared Oracle homes; for shared homes follow the Oracle documentation.
<HOME NAME="OraDB12Home1" LOC="/opt/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="2">
<NODE_LIST>
<NODE NAME="rhel12c1"/>
<NODE NAME="rhel12c2"/>
</NODE_LIST>
</HOME>

[oracle@rhel12c1 ~]$ /opt/app/oracle/product/12.1.0/dbhome_1/oui/bin/runInstaller.sh -updateNodeList ORACLE_HOME=/opt/app/oracle/product/12.1.0/dbhome_1 "CLUSTER_NODES={rhel12c1}" LOCAL_NODE=rhel12c1
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 5119 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

<HOME NAME="OraDB12Home1" LOC="/opt/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="2">
<NODE_LIST>
<NODE NAME="rhel12c1"/>
</NODE_LIST>
</HOME>
This concludes the second phase. The final phase is to remove the clusterware.


8. Check that the node to be deleted is active and unpinned. If the node is pinned then unpin it with the crsctl unpin command. The following could be run as either the grid user or root.
[grid@rhel12c2 ~]$ olsnodes -t -s
rhel12c1 Active Unpinned
rhel12c2 Active Unpinned
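Had a node been pinned, it could have been unpinned as root with the crsctl unpin command, for example:

# crsctl unpin css -n rhel12c2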
9. On the node to be deleted, run the node update command to update the node list for the grid home so that it includes only the node being deleted. The inventory.xml output before and after the command has been executed is shown below.
<HOME NAME="OraGI12Home1" LOC="/opt/app/12.1.0/grid" TYPE="O" IDX="1" CRS="true">
<NODE_LIST>
<NODE NAME="rhel12c1"/>
<NODE NAME="rhel12c2"/>
</NODE_LIST>
</HOME>

[grid@rhel12c2 ~]$ /opt/app/12.1.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/app/12.1.0/grid "CLUSTER_NODES={rhel12c2}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 5119 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

<HOME NAME="OraGI12Home1" LOC="/opt/app/12.1.0/grid" TYPE="O" IDX="1" CRS="true">
<NODE_LIST>
<NODE NAME="rhel12c2"/>
</NODE_LIST>
</HOME>
10. If the GI home is non-shared, run the deinstall with the -local option. If the -local option is omitted this will remove GI from all nodes.
[grid@rhel12c2 ~]$ /opt/app/12.1.0/grid/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2014-02-27_04-47-10PM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /opt/app/12.1.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /opt/app/oracle
Checking for existence of central inventory location /opt/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/app/12.1.0/grid
The following nodes are part of this cluster: rhel12c2,rhel12c1
Checking for sufficient temp space availability on node(s) : 'rhel12c2,rhel12c1'
## [END] Install check configuration ##

Traces log file: /tmp/deinstall2014-02-27_04-47-10PM/logs//crsdc_2014-02-27_04-48-27PM.log
Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall2014-02-27_04-47-10PM/logs/netdc_check2014-02-27_04-48-29-PM.log
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall2014-02-27_04-47-10PM/logs/asmcadc_check2014-02-27_04-48-29-PM.log
Database Check Configuration START
Database de-configuration trace file location: /tmp/deinstall2014-02-27_04-47-10PM/logs/databasedc_check2014-02-27_04-48-29-PM.log
Database Check Configuration END

######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/app/12.1.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:rhel12c2,rhel12c1
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rhel12c2'.

Oracle Home selected for deinstall is: /opt/app/12.1.0/grid
Inventory Location where the Oracle home registered is: /opt/app/oraInventory
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2014-02-27_04-47-10PM/logs/deinstall_deconfig2014-02-27_04-48-01-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2014-02-27_04-47-10PM/logs/deinstall_deconfig2014-02-27_04-48-01-PM.err'

######################## CLEAN OPERATION START ########################
Database de-configuration trace file location: /tmp/deinstall2014-02-27_04-47-10PM/logs/databasedc_clean2014-02-27_04-48-33-PM.log
ASM de-configuration trace file location: /tmp/deinstall2014-02-27_04-47-10PM/logs/asmcadc_clean2014-02-27_04-48-33-PM.log
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2014-02-27_04-47-10PM/logs/netdc_clean2014-02-27_04-48-34-PM.log
Network Configuration clean config END

---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "rhel12c2".
/tmp/deinstall2014-02-27_04-47-10PM/perl/bin/perl -I/tmp/deinstall2014-02-27_04-47-10PM/perl/lib -I/tmp/deinstall2014-02-27_04-47-10PM/crs/install /tmp/deinstall2014-02-27_04-47-10PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2014-02-27_04-47-10PM/response/deinstall_OraGI12Home1.rsp"
Press Enter after you finish running the above commands

<----------------------------------------

[root@rhel12c2 ~]# /tmp/deinstall2014-02-27_04-47-10PM/perl/bin/perl -I/tmp/deinstall2014-02-27_04-47-10PM/perl/lib -I/tmp/deinstall2014-02-27_04-47-10PM/crs/install /tmp/deinstall2014-02-27_04-47-10PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2014-02-27_04-47-10PM/response/deinstall_OraGI12Home1.rsp"
Using configuration parameter file: /tmp/deinstall2014-02-27_04-47-10PM/response/deinstall_OraGI12Home1.rsp
Network 1 exists
Subnet IPv4: 192.168.0.0/255.255.255.0/eth0, static
Subnet IPv6:
VIP exists: network number 1, hosting node rhel12c1
VIP Name: rhel12c1-vip
VIP IPv4 Address: 192.168.0.89
VIP IPv6 Address:
VIP exists: network number 1, hosting node rhel12c2
VIP Name: rhel12c2-vip
VIP IPv4 Address: 192.168.0.90
VIP IPv6 Address:
ONS exists: Local port 6100, remote port 6200, EM port 2016

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rhel12c2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.FLASH.VOLUME1.advm' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.CLUSTERDG.dg' on 'rhel12c2'
CRS-2677: Stop of 'ora.FLASH.VOLUME1.advm' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'rhel12c2'
CRS-2677: Stop of 'ora.FLASH.dg' on 'rhel12c2' succeeded
CRS-2677: Stop of 'ora.CLUSTERDG.dg' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rhel12c2'
CRS-2677: Stop of 'ora.DATA.dg' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rhel12c2'
CRS-2677: Stop of 'ora.asm' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rhel12c2'
CRS-2677: Stop of 'ora.net1.network' on 'rhel12c2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rhel12c2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.storage' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rhel12c2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rhel12c2' succeeded
CRS-2677: Stop of 'ora.storage' on 'rhel12c2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rhel12c2' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rhel12c2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.asm' on 'rhel12c2'
CRS-2677: Stop of 'ora.ctssd' on 'rhel12c2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rhel12c2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rhel12c2'
CRS-2677: Stop of 'ora.cssd' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rhel12c2'
CRS-2677: Stop of 'ora.gipcd' on 'rhel12c2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel12c2' has completed
CRS-4133: Oracle High Availability Services has been stopped.

2014/02/27 16:53:43 CLSRSC-336: Successfully deconfigured Oracle clusterware stack on this node

Failed to delete the directory '/opt/app/oracle/product/12.1.0'. The directory is in use.
Failed to delete the directory '/opt/app/oracle/diag/rdbms/cdb12c/cdb12c2/log/test'. The directory is in use.
Removal of some of the directories failed but this has no impact on removing the node from the cluster. These directories could be manually cleaned up afterwards.
11. From any remaining node, run the following command with the remaining nodes as the node list. The inventory.xml output is given before and after the command is run.
<HOME NAME="OraGI12Home1" LOC="/opt/app/12.1.0/grid" TYPE="O" IDX="1" CRS="true">
<NODE_LIST>
<NODE NAME="rhel12c1"/>
<NODE NAME="rhel12c2"/>
</NODE_LIST>
</HOME>

[grid@rhel12c1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/opt/app/12.1.0/grid "CLUSTER_NODES={rhel12c1}" CRS=TRUE -silent
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 5119 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

<HOME NAME="OraGI12Home1" LOC="/opt/app/12.1.0/grid" TYPE="O" IDX="1" CRS="true">
<NODE_LIST>
<NODE NAME="rhel12c1"/>
</NODE_LIST>
</HOME>
12. Finally use the cluster verification utility to check that the node deletion has completed successfully.
[grid@rhel12c1 bin]$ cluvfy stage -post nodedel -n rhel12c2

Performing post-checks for node removal

Checking CRS integrity...
CRS integrity check passed
Clusterware version consistency passed.
Node removal check passed
Post-check for node removal was successful.
This concludes the deletion of a node from 12cR1 RAC.
Related Posts
Deleting a Node From 11gR2 RAC
Deleting a 11gR1 RAC Node

Adding a Node to 12cR1 RAC

This post lists the steps for adding a node to a 12cR1 standard cluster (not to a flex cluster), which is similar to adding a node to 11gR2 RAC. Node addition is done in three phases. Phase one is to add the clusterware to the new node. The second phase will add the database software and the final phase will extend the database to the new node by creating a new instance on it. It is possible to do the node addition in silent mode or in an interactive mode with the use of GUIs. This post uses the latter method (an earlier post for 11gR2 used silent mode and the steps for 12c are similar).
1. It is assumed that physical connections (shared storage connections, network) are made to the new node being added. The pre node add steps could be checked with cluvfy by executing the pre node add command from an existing node and passing the hostname of the new node (in this case rhel12c2 is the new node).
[grid@rhel12c1 ~]$ cluvfy stage -pre nodeadd -n rhel12c2

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "rhel12c1"

Checking user equivalence...
User equivalence check passed for user "grid"
Package existence check passed for "cvuqdisk"

Checking CRS integrity...

CRS integrity check passed

Clusterware version consistency passed.

Checking shared resources...

Checking CRS home location...
Location check passed for: "/opt/app/12.1.0/grid"
Shared resources check for node addition passed

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity using interfaces on subnet "192.168.0.0"
Node connectivity passed for subnet "192.168.0.0" with node(s) rhel12c1,rhel12c2
TCP connectivity check passed for subnet "192.168.0.0"

Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) rhel12c1,rhel12c2
TCP connectivity check passed for subnet "192.168.1.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.0.0".
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed.

Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rhel12c2:/usr,rhel12c2:/var,rhel12c2:/etc,rhel12c2:/opt/app/12.1.0/grid,rhel12c2:/sbin,rhel12c2:/tmp"
Free disk space check passed for "rhel12c1:/usr,rhel12c1:/var,rhel12c1:/etc,rhel12c1:/opt/app/12.1.0/grid,rhel12c1:/sbin,rhel12c1:/tmp"
Check for multiple users with UID value 501 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "nfs-utils"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed
Group existence check passed for "asmadmin"
Group existence check passed for "asmoper"
Group existence check passed for "asmdba"

Checking ASMLib configuration.
Check for ASMLib configuration passed.

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
No NTP Daemons or Services were found to be running

Clock synchronization check using Network Time Protocol(NTP) passed

User "grid" is not part of "root" group. Check passed
Checking integrity of file "/etc/resolv.conf" across nodes

"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes

Check for integrity of file "/etc/resolv.conf" passed

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Pre-check for node addition was successful.
2. To extend the cluster by installing the clusterware on the new node, run addNode.sh in the $GI_HOME/addnode directory as the grid user from an existing node. As mentioned earlier, this post uses the interactive method to add the node; a silent-mode sketch follows for reference.
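For reference only, a sketch of the silent-mode equivalent of this step (option names as used by addNode.sh; hostnames from this post; not executed in this walkthrough):

[grid@rhel12c1 addnode]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rhel12c2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rhel12c2-vip}"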
Click the Add button and add the hostname and VIP name of the new node.

Fix any pre-req issues and click install to begin the GI installation on the new node.

Execute the root scripts on the new node

Output from root script execution
[root@rhel12c2 12.1.0]# /opt/app/12.1.0/grid/root.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /opt/app/12.1.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /opt/app/12.1.0/grid/crs/install/crsconfig_params
2014/03/04 16:16:06 CLSRSC-363: User ignored prerequisites during installation

OLR initialization - successful
2014/03/04 16:16:43 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rhel12c2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rhel12c2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel12c2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel12c2'
CRS-2672: Attempting to start 'ora.evmd' on 'rhel12c2'
CRS-2676: Start of 'ora.evmd' on 'rhel12c2' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel12c2'
CRS-2676: Start of 'ora.gpnpd' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rhel12c2'
CRS-2676: Start of 'ora.gipcd' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel12c2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rhel12c2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rhel12c2'
CRS-2676: Start of 'ora.diskmon' on 'rhel12c2' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'rhel12c2'
CRS-2676: Start of 'ora.cssd' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rhel12c2'
CRS-2672: Attempting to start 'ora.ctssd' on 'rhel12c2'
CRS-2676: Start of 'ora.ctssd' on 'rhel12c2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rhel12c2'
CRS-2676: Start of 'ora.asm' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rhel12c2'
CRS-2676: Start of 'ora.storage' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rhel12c2'
CRS-2676: Start of 'ora.crsd' on 'rhel12c2' succeeded
CRS-6017: Processing resource auto-start for servers: rhel12c2
CRS-2672: Attempting to start 'ora.ons' on 'rhel12c2'
CRS-2676: Start of 'ora.ons' on 'rhel12c2' succeeded
CRS-6016: Resource auto-start has completed for server rhel12c2
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2014/03/04 16:22:11 CLSRSC-343: Successfully started Oracle clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/03/04 16:22:37 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
This concludes phase one. The next phase is to add the database software to the new node.



3. To add the database software, run addNode.sh in the $ORACLE_HOME/addnode directory as the oracle user from an existing node. When the OUI starts, the new node comes selected by default. A silent-mode sketch follows for reference.
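Again for reference only, a sketch of the silent-mode equivalent for the database home:

[oracle@rhel12c1 addnode]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rhel12c2}"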

At the end of the database software installation a message is shown to invoke DBCA to extend the database to the new node.

This concludes phase two.
4. The final phase is to extend the database to the new node. Before invoking DBCA, change the permissions on the directory $ORACLE_BASE/admin to include write permission for the oinstall group so that the oracle user is able to write into the directory. After the database software was installed the permissions on this directory were as follows.
[oracle@rhel12c2 oracle]$ ls -l
drwxr-xr-x. 3 grid oinstall 4096 Mar 4 16:21 admin
Since the oracle user doesn't have write permission (as the oinstall group doesn't have write permission), DBCA fails with the following.
Change permissions with
chmod 775 admin
and invoke the DBCA.
5. Select instance management from DBCA and then add instance.
Select which database to extend (if there are multiple databases in the cluster) and confirm the new instance details (which come auto-populated).

6. Check the instance is visible on the cluster
[oracle@rhel12c1 addnode]$ srvctl config database -d cdb12c
Database unique name: cdb12c
Database name: cdb12c
Oracle home: /opt/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/cdb12c/spfilecdb12c.ora
Password file: +DATA/cdb12c/orapwcdb12c
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: cdb12c
Database instances: cdb12c1,cdb12c2
Disk Groups: DATA,FLASH
Mount point paths:
Services: pdbsvc
Type: RAC
Start concurrency:
Stop concurrency:
Database is administrator managed

SQL> select inst_id,instance_name,host_name from gv$instance;

INST_ID INSTANCE_NAME HOST_NAME
---------- ---------------- -------------------
1 cdb12c1 rhel12c1.domain.net
2 cdb12c2 rhel12c2.domain.net


SQL> select con_id,name from gv$pdbs;

CON_ID NAME
---------- ---------
2 PDB$SEED
3 PDB12C
2 PDB$SEED
3 PDB12C
The service created for this PDB is not yet available on the new node. As seen below, only one instance appears as a preferred instance and none as available instances.
[oracle@rhel12c1 ~]$ srvctl config service -d cdb12c -s pdbsvc
Service name: pdbsvc
Service is enabled
Server pool: cdb12c_pdbsvc
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Failover type:
Failover method:
TAF failover retries:
TAF failover delay:
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Pluggable database name: pdb12c
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Replay Initiation Time: 300 seconds
Session State Consistency:
Preferred instances: cdb12c1
Available instances:
Modify the service to include the instance on the newly added node as well
[oracle@rhel12c1 ~]$ srvctl modify service -db cdb12c -pdb pdb12c -s pdbsvc -modifyconfig -preferred "cdb12c1,cdb12c2"

[oracle@rhel12c1 ~]$ srvctl config service -d cdb12c -s pdbsvc
Service name: pdbsvc
Service is enabled
Server pool: cdb12c_pdbsvc
Cardinality: 2
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Failover type:
Failover method:
TAF failover retries:
TAF failover delay:
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Pluggable database name: pdb12c
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Replay Initiation Time: 300 seconds
Session State Consistency:
Preferred instances: cdb12c1,cdb12c2
Available instances:
7. Use cluvfy to perform the post node add checks
[grid@rhel12c1 ~]$ cluvfy stage -post nodeadd -n rhel12c2
This concludes the addition of a new node to 12cR1 RAC.

Related Posts
Adding a Node to 11gR2 RAC
Adding a Node to 11gR1 RAC

ORA-20006: Number of RAC active instances and opatch jobs configured are not same

The following error could be observed when running the "Loading Modified SQL Files into the Database" section under the "Patch Post-Installation Instructions" during a PSU apply. In this case the system was a two-node 12c RAC and the 12.1.0.1.3 Patch Set Update was being installed.
[oracle@rhel6m1 OPatch]$ ./datapatch -verbose
SQL Patching tool version 12.1.0.1.0 on Wed May 7 17:04:06 2014
Copyright (c) 2014, Oracle. All rights reserved.

Connecting to database...OK
Determining current state...
Currently installed SQL Patches: 17552800
DBD::Oracle::st execute failed: ORA-20006: Number of RAC active instances and opatch jobs configured are not same
ORA-06512: at "SYS.DBMS_QOPATCH", line 1007
ORA-06512: at line 4 (DBD ERROR: OCIStmtExecute) [for Statement "DECLARE
x XMLType;
BEGIN
x := dbms_qopatch.get_pending_activity;
? := x.getStringVal();
END;" with ParamValues: :p1=undef] at /opt/app/oracle/product/12.1.0/dbhome_1/sqlpatch/sqlpatch.pm line 1227.
The patch apply was carried out in a rolling fashion: node 1 was patched and brought up, and then node 2 was patched. While the node 2 patch apply was going on, the post-patch installation steps were carried out on the already started node (as these steps only need to be carried out on a single instance).
From the error above it seems this method (which worked on 11g) is no longer possible with the new datapatch utility. Metalink notes 1530108.1 and 1599479.1 list "ORA-20006: Number of RAC active instances and opatch jobs configured are not same" as an "error which might be encountered during execution of Queryable Patch Inventory".



The solution provided in the above notes for this issue is to obtain support from Oracle!
The other workaround is to wait until all the instances have started and then run the post-patch steps, which doesn't result in an error.
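Before rerunning datapatch it could be confirmed that all instances are up, for example with:

SQL> select inst_id,instance_name,status from gv$instance;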
[oracle@rhel6m1 OPatch]$ ./datapatch -verbose
SQL Patching tool version 12.1.0.1.0 on Wed May 7 17:15:00 2014
Copyright (c) 2014, Oracle. All rights reserved.

Connecting to database...OK
Determining current state...
Currently installed SQL Patches: 17552800
Currently installed C Patches: 18031528
Adding patches to installation queue and performing prereq checks...
Installation queue:
Nothing to roll back
The following patches will be applied: 18031528
Bundle patch 17552800 is included in bundle patch 18031528, so it does not need to be rolled back
Installing patches...
Useful metalink notes
Oracle Database 12.1 : FAQ on Queryable Patch Inventory [ID 1530108.1]
Datapatch errors at "SYS.DBMS_QOPATCH" [ID 1599479.1]

Skipping Tablespace During Backup and Recovery

Oracle provides skip and exclude options to keep tablespaces out of a full database backup. However, when the restore happens this exclusion must be handled before the database can be used by the application, and how it is handled depends on what effect the exclusion has on the application's data structures.
The scenario given in this post shows a table using two tablespaces for storage: while the table remains in one tablespace, the lob column related segments of that table are stored in a different tablespace. This type of table configuration is used to store transient data such as application session information. The session id is stored in a 2kB block size tablespace while the session data (Java serialized objects) is stored in a 32kB block size tablespace. However, for the testing here, 8kB block size tablespaces are used for both table and lob segments. The database version is 11.2.0.3.
Set up the test case by creating the two tablespaces. Here the excbackup tablespace will be excluded from the backup.
create tablespace EXCBACKUP DATAFILE size 10m;
create tablespace INCBACKUP DATAFILE size 10m;

alter user asanga quota unlimited on excbackup quota unlimited on incbackup;
Create the table and populate it with data. The disable storage in row clause is important: as the inserted data is less than 4k, without disabling storage in row the lob segment would also get stored in the incbackup tablespace.
create table y (a number, b blob) tablespace incbackup lob(b) store as lobseg (tablespace excbackup DISABLE STORAGE IN ROW);

begin
for i in 1 .. 100
loop
insert into y values (i,utl_raw.cast_to_raw('ahgahgaashaghag'));
end loop;
end;
/

select segment_name,tablespace_name,bytes from user_segments;

SEGMENT_NAME TABLESPACE_NAME BYTES
------------------------------ ------------------------------ ----------
Y INCBACKUP 65536
SYS_IL0000105332C00002$$ EXCBACKUP 131072
LOBSEG EXCBACKUP 8388608
It could be seen that lob segments are stored in the excbackup tablespace.
Configure RMAN such that excbackup tablespace is excluded when a full database backup is done.
RMAN> CONFIGURE EXCLUDE FOR TABLESPACE excbackup;

using target database control file instead of recovery catalog
Tablespace EXCBACKUP will be excluded from future whole database backups
new RMAN configuration parameters are successfully stored

RMAN> show exclude;

RMAN configuration parameters for database with db_unique_name FGACDB are:
CONFIGURE EXCLUDE FOR TABLESPACE 'EXCBACKUP';
Once the configuration is done, run a full database backup including archive logs. The truncated output below shows that the datafile belonging to excbackup is not part of the backup.
RMAN> backup database plus archivelog delete all input;
...
...
archived log file name=/opt/app/oracle/fast_recovery_area/FGACDB/archivelog/2014_06_06/o1_mf_1_7_9s3s0jxs_.arc RECID=62 STAMP=849545633
Finished backup at 06-JUN-14

Starting backup at 06-JUN-14
using channel ORA_DISK_1
file 5 is excluded from whole database backup
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00012 name=/opt/app/oracle/oradata/FGACDB/datafile/o1_mf_cp_9rs1nq2y_.dbf
input datafile file number=00013 name=/opt/app/oracle/oradata/FGACDB/datafile/o1_mf_dumps_9rs1pg40_.dbf
input datafile file number=00014 name=/opt/app/oracle/oradata/FGACDB/datafile/o1_mf_cms_9rs1pgkf_.dbf
input datafile file number=00015 name=/opt/app/oracle/oradata/FGACDB/datafile/o1_mf_monitor_9rs1ph0s_.dbf
input datafile file number=00017 name=/opt/app/oracle/oradata/FGACDB/datafile/o1_mf_cachetbs_9rs1rjlm_.dbf
input datafile file number=00001 name=/opt/app/oracle/oradata/FGACDB/datafile/o1_mf_system_9rs1rll5_.dbf
input datafile file number=00002 name=/opt/app/oracle/oradata/FGACDB/datafile/o1_mf_sysaux_9rs1rm61_.dbf
input datafile file number=00003 name=/opt/app/oracle/oradata/FGACDB/datafile/o1_mf_undotbs1_9rs1smgo_.dbf
input datafile file number=00004 name=/opt/app/oracle/oradata/FGACDB/datafile/o1_mf_users_9rs1ss96_.dbf
input datafile file number=00006 name=/opt/app/oracle/oradata/FGACDB/datafile/o1_mf_incbacku_9s3rrzr3_.dbf
channel ORA_DISK_1: starting piece 1 at 06-JUN-14
channel ORA_DISK_1: finished piece 1 at 06-JUN-14
piece handle=/opt/app/oracle/fast_recovery_area/FGACDB/backupset/2014_06_06/o1_mf_nnndf_TAG20140606T165402_9s3s0tyd_.bkp tag=TAG20140606T165402 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
Finished backup at 06-JUN-14
...
...
Next step is the restore and recovery.



To simulate a situation where all data files are lost, database was shutdown and all data files removed (using OS utility) before the restore. The restore steps are given below.
RMAN> startup nomount;
RMAN> restore controlfile from autobackup;
RMAN> alter database mount;
RMAN> restore database;
....
....
file 5 is excluded from whole database backup
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /opt/app/oracle/oradata/FGACDB/datafile/o1_mf_system_9rs1rll5_.dbf
channel ORA_DISK_1: restoring datafile 00002 to /opt/app/oracle/oradata/FGACDB/datafile/o1_mf_sysaux_9rs1rm61_.dbf
channel ORA_DISK_1: restoring datafile 00003 to /opt/app/oracle/oradata/FGACDB/datafile/o1_mf_undotbs1_9rs1smgo_.dbf
channel ORA_DISK_1: restoring datafile 00004 to /opt/app/oracle/oradata/FGACDB/datafile/o1_mf_users_9rs1ss96_.dbf
channel ORA_DISK_1: restoring datafile 00006 to /opt/app/oracle/oradata/FGACDB/datafile/o1_mf_incbacku_9s3rrzr3_.dbf
channel ORA_DISK_1: restoring datafile 00012 to /opt/app/oracle/oradata/FGACDB/datafile/o1_mf_cp_9rs1nq2y_.dbf
channel ORA_DISK_1: restoring datafile 00013 to /opt/app/oracle/oradata/FGACDB/datafile/o1_mf_dumps_9rs1pg40_.dbf
channel ORA_DISK_1: restoring datafile 00014 to /opt/app/oracle/oradata/FGACDB/datafile/o1_mf_cms_9rs1pgkf_.dbf
channel ORA_DISK_1: restoring datafile 00015 to /opt/app/oracle/oradata/FGACDB/datafile/o1_mf_monitor_9rs1ph0s_.dbf
channel ORA_DISK_1: restoring datafile 00017 to /opt/app/oracle/oradata/FGACDB/datafile/o1_mf_cachetbs_9rs1rjlm_.dbf
channel ORA_DISK_1: reading from backup piece /opt/app/oracle/fast_recovery_area/FGACDB/backupset/2014_06_06/o1_mf_nnndf_TAG20140606T165402_9s3s0tyd_.bkp
channel ORA_DISK_1: piece handle=/opt/app/oracle/fast_recovery_area/FGACDB/backupset/2014_06_06/o1_mf_nnndf_TAG20140606T165402_9s3s0tyd_.bkp tag=TAG20140606T165402
....
....
Again the skipping of the file belonging to the excbackup tablespace is mentioned during the restore. Since the backupset used here doesn't contain excbackup, it is possible to use the plain restore database command. However, if the restore process were using a backupset that did contain a particular tablespace and it needed to be excluded, then the RESTORE DATABASE SKIP TABLESPACE command could be used, as sketched below.
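For this scenario such a restore would be along the lines of:

RMAN> restore database skip tablespace 'EXCBACKUP';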
Unlike the restore command, the recover database command will not work as-is.
RMAN> recover database;

Starting recover at 06-JUN-14
using channel ORA_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 06/06/2014 16:58:43
RMAN-06094: datafile 5 must be restored
Recovery complains that the excluded file must be restored. To overcome this, use the recover command with the skip tablespace clause. The tablespace name must be given in upper case.
RMAN> recover database skip tablespace 'excbackup'; # failure due to tablespace name being specified in lower case

Starting recover at 06-JUN-14
using channel ORA_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 06/06/2014 16:59:00
RMAN-06094: datafile 5 must be restored

RMAN> recover database skip tablespace 'EXCBACKUP';

Executing: alter database datafile 5 offline
starting media recovery

archived log for thread 1 with sequence 8 is already on disk as file /opt/app/oracle/fast_recovery_area/FGACDB/onlinelog/o1_mf_4_86mco7kb_.log
archived log for thread 1 with sequence 9 is already on disk as file /opt/app/oracle/fast_recovery_area/FGACDB/onlinelog/o1_mf_1_86mcp3kk_.log
archived log file name=/opt/app/oracle/fast_recovery_area/FGACDB/onlinelog/o1_mf_4_86mco7kb_.log thread=1 sequence=8
archived log file name=/opt/app/oracle/fast_recovery_area/FGACDB/onlinelog/o1_mf_1_86mcp3kk_.log thread=1 sequence=9
media recovery complete, elapsed time: 00:00:01
Finished recover at 06-JUN-14
Finally open the database with reset logs
RMAN> alter database open resetlogs;
At this stage the database is open and could be used as long as the segments that were in the excluded tablespace are not needed. All table data that was part of the incbackup tablespace could be accessed without any error.
SQL> select a from y;

A
----------
1
2
3
4
5
Any attempt to access the blob segment, which was in the excluded tablespace, results in an error.
SQL> select b from y;
ERROR:
ORA-00376: file 5 cannot be read at this time
ORA-01110: data file 5:
'/opt/app/oracle/oradata/FGACDB/datafile/o1_mf_excbacku_9s3rryro_.dbf'
It is not possible to delete or truncate the table either
SQL> delete from y;
delete from y
*
ERROR at line 1:
ORA-00376: file 5 cannot be read at this time
ORA-01110: data file 5:
'/opt/app/oracle/oradata/FGACDB/datafile/o1_mf_excbacku_9s3rryro_.dbf'


SQL> truncate table y;
truncate table y
*
ERROR at line 1:
ORA-00376: file 5 cannot be read at this time
ORA-01110: data file 5:
'/opt/app/oracle/oradata/FGACDB/datafile/o1_mf_excbacku_9s3rryro_.dbf'
During this time the table segment information is listed as below.
SQL> select segment_name,tablespace_name,bytes from user_segments;

SEGMENT_NAME TABLESPACE_NAME BYTES
-------------------------- ------------------------------ ----------
Y INCBACKUP 65536
SYS_IL0000105365C00002$$ EXCBACKUP 65536
LOBSEG EXCBACKUP
Dropping the table doesn't remove the lob segments either (it could be that these don't occupy any space, but entries remain under different names).
SQL> drop table y purge;

SQL> select segment_name,tablespace_name,bytes from user_segments;

SEGMENT_NAME TABLESPACE_NAME BYTES
-------------- ------------------------------ ----------
5.130 EXCBACKUP
5.138 EXCBACKUP 65536
It is also possible to take the missing data file offline with the drop option, add another data file to the tablespace and recreate the table.
SQL>  alter database   datafile '/opt/app/oracle/oradata/FGACDB/datafile/o1_mf_excbacku_9s3rryro_.dbf' offline drop;

SQL> alter tablespace excbackup add DATAFILE size 10m;

SQL> create table y (a number, b blob) tablespace incbackup lob(b) store as lobseg (tablespace excbackup DISABLE STORAGE IN ROW);


begin
for i in 1 .. 100
loop
insert into y values (i,utl_raw.cast_to_raw('ahgahgaashaghag'));
end loop;
end;
/
But the segment information from the earlier table still remains.
SQL> select segment_name,tablespace_name,bytes from user_segments;

SEGMENT_NAME TABLESPACE_NAME BYTES
-------------------------- ------------------------------ ----------
Y INCBACKUP 65536
SYS_IL0000105337C00002$$ EXCBACKUP 65536
LOBSEG EXCBACKUP 983040
5.130 EXCBACKUP
5.138 EXCBACKUP
The only way to get rid of this phantom segment information is to drop the tablespace and recreate it.
Therefore, to have a clean restore and recovery in this type of scenario, the best course of action is: once the database is open, drop the table, drop the tablespace, and then recreate the tablespace and the table as before.

ORA-30012: undo tablespace does not exist or of wrong type

The following error message was observed in the alert logs of all but one instance when trying to start them after a switchover using the Data Guard broker.
Undo initialization errored: err:30012 serial:0 start:568825214 end:568825224 diff:10 (0 seconds)
Errors in file /opt/app/oracle/diag/rdbms/xa04/xa04s2/trace/xa04s2_ora_19364.trc:
ORA-30012: undo tablespace 'UNDOTBS8' does not exist or of wrong type
Errors in file /opt/app/oracle/diag/rdbms/xa04/xa04s2/trace/xa04s2_ora_19364.trc:
ORA-30012: undo tablespace 'UNDOTBS8' does not exist or of wrong type
Error 30012 happened during db open, shutting down database
USER (ospid: 19364): terminating the instance due to error 30012
Instance terminated by USER, pid = 19364
ORA-1092 signalled during: ALTER DATABASE OPEN /* db agent *//* {3:48375:37745} */...
opiodr aborting process unknown ospid (19364) as a result of ORA-1092
Tue Jun 24 00:41:56 2014
ORA-1092 : opitsk aborting process
Both the primary and standby databases are RACs in a Data Guard configuration and the problem was happening on the new primary (the old standby).
Checking the tablespace and datafile views shows the presence of the above mentioned tablespace and the undo data file. Also, each instance has its own undo tablespace assigned to it, as it should be in a RAC environment.
xa04s1.undo_tablespace='UNDOTBS7'
xa04s2.undo_tablespace='UNDOTBS8'
xa04s3.undo_tablespace='UNDOTBS9'
xa04s4.undo_tablespace='UNDOTBS10'
xa04s5.undo_tablespace='UNDOTBS11'
Undo management was already set to auto when the Data Guard configuration was created. Yet it was not possible to start the instances (xa04s2 to xa04s5).



There was no Data Guard specific information on this error, but MOS note 1344944.1 shows how to create an undo tablespace when the same error occurs for a newly added RAC instance. Following this note, new undo tablespaces were created for each of the instances, after which it was possible to bring up all the instances. It is not clear, however, why this error occurred when each instance was assigned a specific undo tablespace and the file is physically present in the system.
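As a sketch of the fix following that note (the tablespace name UNDOTBS8A and the datafile size here are hypothetical), for each affected instance the steps would be along these lines, run from the running instance:

SQL> create undo tablespace UNDOTBS8A datafile size 1G;
SQL> alter system set undo_tablespace='UNDOTBS8A' sid='xa04s2';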

Useful metalink notes
How to create undo tablespace for a newly added RAC instance (ORA-30012) [ID 1344944.1]
ORA-30012 Database Does Not Start With UNDO_MANAGEMENT=AUTO [ID 258506.1]

Recreating Dataguard Broker Configuration After ORA-16816: Incorrect Database Role

Following a failed switchover using the Data Guard broker, the database role information was in an inconsistent state.
DGMGRL> show configuration

Configuration - APDG

Protection Mode: MaxPerformance
Databases:
XA04 - Primary database
Error: ORA-16816: incorrect database role

XA04S - Physical standby database
Error: ORA-16810: multiple errors or warnings detected for the database

Fast-Start Failover: DISABLED

Configuration Status:
ERROR
Even though the Data Guard broker still considers XA04 the primary database and XA04S the standby database, the databases themselves have undergone the role change and the new role is reflected in the v$database view. From XA04S, which the broker still considers the standby:
SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PRIMARY
And from XA04, which the broker considers the primary:
SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PHYSICAL STANDBY
It's clear that the Data Guard broker configuration contains erroneous information.



The solution for this situation is to recreate the Data Guard broker configuration. This does not entail dropping the configuration; dropping the configuration would remove all Data Guard related configuration parameters from the spfile. To recreate the broker configuration, remove the broker configuration files using an OS utility (rm, delete) if the files are stored in the OS, or using ASMCMD rm if they are stored in ASM (as in the case of RAC).
Once the configuration files are removed, recreate the Data Guard broker configuration by connecting to the new primary database instance, as sketched below.
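A minimal sketch using DGMGRL, assuming the database names from this post and that matching TNS connect identifiers exist:

DGMGRL> create configuration 'APDG' as primary database is 'XA04S' connect identifier is XA04S;
DGMGRL> add database 'XA04' as connect identifier is XA04 maintained as physical;
DGMGRL> enable configuration;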

Useful metalink notes
Step By Step How to Recreate Dataguard Broker Configuration [ID 808783.1]
Unable To Recreate Data Guard Fast Start Failover Configuration With DGMGRL [ID 454418.1]

GNS Setup for RAC

Setting up GNS is not a must for installing a RAC unless it's a flex cluster, where the use of GNS is mandatory. There are some advantages to using GNS, especially when it comes to adding and removing nodes and their IP assignment. This post lists the steps for a GNS setup that could be used for a clusterware installation with GNS. The clusterware used in this case is 12cR1; the GNS setup is independent of the cluster version and the steps listed here could be used for a GNS setup with 11gR2 clusterware as well. In this configuration public host names are resolved through the DNS and the private IPs are resolved through the hosts files on the nodes.
GNS was set up on a separate server; in the following text 192.168.0.85 is the IP of this separate server (unimaginatively named rhel5new) where the DNS will run, 192.168.0.87 is the GNS VIP and the GNS sub-domain is rac.mydomain.net.
It must be stated that this is by no means a comprehensive GNS setup; it is intended to help DBAs get a test system set up. For a production system always seek the services of a network administrator to set up the GNS.
1. Install the RPMs required to set up the GNS; these include DHCP related RPMs (dhcp-3.0.5-31.el5_8.1) and DNS related RPMs.
2. Modify the /etc/dhcpd.conf file and add the domain, the DNS server IP and the range of IPs handed out by DHCP.
cat /etc/dhcpd.conf
#
# DHCP Server Configuration file.
# see /usr/share/doc/dhcp*/dhcpd.conf.sample
#
ddns-update-style interim;
ignore client-updates;

subnet 192.168.0.0 netmask 255.255.255.0 {

option subnet-mask 255.255.255.0;
option domain-name "rac.mydomain.net";
option domain-name-servers 192.168.0.85;

range 192.168.0.86 192.168.0.98;
default-lease-time 21600;
max-lease-time 43200;

}
3. Edit the /etc/named.conf file and add the entries related to DNS setup.
# cat /etc/named.conf
options {
listen-on port 53 { 192.168.0.85; 127.0.0.1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
recursion yes;
allow-transfer {"none";};
};

logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};

zone "mydomain.net" IN {
type master;
file "mydomain.net.zone";
allow-update { none; };
};

zone "0.168.192.in-addr.arpa" IN {
type master;
file "rev.mydomain.net.zone";
allow-update { none; };
};

#include "/etc/named.rfc1912.zones";
#include "/etc/named.root.key";
4. Create the forward look-up file with an entry for sub-domain delegation.
cat /var/named/mydomain.net.zone
$TTL 1H ; Time to live
$ORIGIN mydomain.net.
@ IN SOA rhel5new root.mydomain.net. (
2009011201 ; serial (todays date + todays serial #)
3H ; refresh 3 hours
1H ; retry 1 hour
1W ; expire 1 week
1D ) ; minimum 24 hour

A 192.168.0.85
NS rhel5new

rhel5new A 192.168.0.85
gns A 192.168.0.87

$ORIGIN rac.mydomain.net.
@ IN NS gns.mydomain.net.
5. Create the reverse look-up file. In this case reverse look-up entries are added only for the DNS server and the GNS VIP.
cat /var/named/rev.mydomain.net.zone
$ORIGIN 0.168.192.in-addr.arpa.
$TTL 1H
@ IN SOA rhel5new root.mydomain.net. ( 2
3H
1H
1W
1H )
0.168.192.in-addr.arpa. IN NS rhel5new.

85 IN PTR rhel5new.mydomain.net.
87 IN PTR gns.mydomain.net.


6. Use the cluvfy tool with the precrsinst option to check the suitability of the GNS setup. This seems to check mainly whether the GNS sub-domain and VIP are in use; if so the check will be flagged as unsuccessful. It doesn't check whether the actual delegation happens, which could only be checked after the clusterware has been installed.
$ ./runcluvfy.sh comp gns -precrsinst -domain rac.mydomain.net -vip 192.168.0.87 -verbose -n rhel12c1,rhel12c2

Verifying GNS integrity

Checking GNS integrity...
Checking if the GNS subdomain name is valid...
The GNS subdomain name "rac.mydomain.net" is a valid domain name
Checking if the GNS VIP is a valid address...
GNS VIP "192.168.0.87" resolves to a valid IP address
Checking the status of GNS VIP...

GNS integrity check passed

Verification of GNS integrity was successful.
7. Use the GNS VIP and the sub-domain name during the clusterware installation.

When using GNS the virtual hostname is auto-generated.
Summary

8. Use nslookup to verify the delegation is working. If the delegation is working, nslookup with the DNS IP will resolve the SCAN name with a non-authoritative answer.
$ nslookup rhel12c-scan.rac.mydomain.net 192.168.0.85
Server: 192.168.0.85
Address: 192.168.0.85#53

Non-authoritative answer:
Name: rhel12c-scan.rac.mydomain.net
Address: 192.168.0.89
Name: rhel12c-scan.rac.mydomain.net
Address: 192.168.0.96
Name: rhel12c-scan.rac.mydomain.net
Address: 192.168.0.88

$ nslookup rhel12c-scan.rac.mydomain.net 192.168.0.85
Server: 192.168.0.85
Address: 192.168.0.85#53

Non-authoritative answer:

Name: rhel12c-scan.rac.mydomain.net
Address: 192.168.0.88
Name: rhel12c-scan.rac.mydomain.net
Address: 192.168.0.89
Name: rhel12c-scan.rac.mydomain.net
Address: 192.168.0.96

$ nslookup rhel12c-scan.rac.mydomain.net 192.168.0.85
Server: 192.168.0.85
Address: 192.168.0.85#53

Non-authoritative answer:

Name: rhel12c-scan.rac.mydomain.net
Address: 192.168.0.96
Name: rhel12c-scan.rac.mydomain.net
Address: 192.168.0.88
Name: rhel12c-scan.rac.mydomain.net
Address: 192.168.0.89
A non-authoritative answer is given when the query is answered with the help of another name server. Querying the GNS VIP directly will also resolve the SCAN name, but this will be a "direct" answer.
$ nslookup rhel12c-scan.rac.mydomain.net 192.168.0.87
Server: 192.168.0.87
Address: 192.168.0.87#53

Name: rhel12c-scan.rac.mydomain.net
Address: 192.168.0.96
Name: rhel12c-scan.rac.mydomain.net
Address: 192.168.0.89
Name: rhel12c-scan.rac.mydomain.net
Address: 192.168.0.88

$ nslookup rhel12c-scan.rac.mydomain.net 192.168.0.87
Server: 192.168.0.87
Address: 192.168.0.87#53

Name: rhel12c-scan.rac.mydomain.net
Address: 192.168.0.96
Name: rhel12c-scan.rac.mydomain.net
Address: 192.168.0.89
Name: rhel12c-scan.rac.mydomain.net
Address: 192.168.0.88
When nslookup is called specifying the GNS VIP, the IPs associated with the SCAN do not rotate, whereas when the SCAN is resolved through the DNS IP they rotate in a round-robin fashion. Oracle has confirmed that this is expected behavior; 11gR2 also exhibited the same behavior.
dig could be used to find out what is in the authority section.
dig rhel12c-scan.rac.mydomain.net

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6 <<>> rhel12c-scan.rac.mydomain.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35411
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:
;rhel12c-scan.rac.mydomain.net. IN A

;; ANSWER SECTION:
rhel12c-scan.rac.mydomain.net. 120 IN A 192.168.0.96
rhel12c-scan.rac.mydomain.net. 120 IN A 192.168.0.88
rhel12c-scan.rac.mydomain.net. 120 IN A 192.168.0.89

;; AUTHORITY SECTION:
rac.mydomain.net. 3600 IN NS gns.mydomain.net.


;; ADDITIONAL SECTION:
gns.mydomain.net. 3600 IN A 192.168.0.87

;; Query time: 5 msec
;; SERVER: 192.168.0.85#53(192.168.0.85)
;; WHEN: Tue Jun 10 12:40:50 2014
;; MSG SIZE rcvd: 128
Besides the SCAN, the host VIPs could also be resolved through the GNS
$ nslookup rhel12c1-vip.rac.mydomain.net 192.168.0.85
Server: 192.168.0.85
Address: 192.168.0.85#53

Non-authoritative answer:
Name: rhel12c1-vip.rac.mydomain.net
Address: 192.168.0.95

$ nslookup rhel12c2-vip.rac.mydomain.net 192.168.0.85
Server: 192.168.0.85
Address: 192.168.0.85#53

Non-authoritative answer:
Name: rhel12c2-vip.rac.mydomain.net
Address: 192.168.0.91
9. Edit resolv.conf to include the DNS IP so that SCAN resolution and delegation happen automatically. Edit nsswitch.conf and place the nis entry at the end of the search list, as sketched below. For more on this follow the Oracle documentation.
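A minimal sketch of the relevant entries, assuming the DNS server IP and domain names used in this post (actual values depend on the environment):
cat /etc/resolv.conf
search mydomain.net rac.mydomain.net
nameserver 192.168.0.85

grep hosts /etc/nsswitch.conf
hosts:      files dns nis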

10. Cluvfy also provides a postcrsinst option to check the GNS.
$ cluvfy comp gns -postcrsinst -verbose

Verifying GNS integrity

Checking GNS integrity...
Checking if the GNS subdomain name is valid...
The GNS subdomain name "rac.mydomain.net" is a valid domain name
Checking if the GNS VIP belongs to same subnet as the public network...
Public network subnets "192.168.0.0, 192.168.0.0, 192.168.0.0, 192.168.0.0, 192.168.0.0" match with the GNS VIP "192.168.0.0, 192.168.0.0, 192.168.0.0, 192.168.0.0, 192.168.0.0"
Checking if the GNS VIP is a valid address...
GNS VIP "gns.mydomain.net" resolves to a valid IP address
Checking the status of GNS VIP...
Checking if FDQN names for domain "rac.mydomain.net" are reachable

GNS resolved IP addresses are reachable

GNS resolved IP addresses are reachable

GNS resolved IP addresses are reachable
Checking status of GNS resource...
Node Running? Enabled?
------------ ------------------------ ------------------------
rhel12c1 no yes
rhel12c2 yes yes

GNS resource configuration check passed
Checking status of GNS VIP resource...
Node Running? Enabled?
------------ ------------------------ ------------------------
rhel12c1 no yes
rhel12c2 yes yes

GNS VIP resource configuration check passed.

GNS integrity check passed

Verification of GNS integrity was successful.
11. srvctl config gns will list all GNS related information.
srvctl config gns -list -a
GNS is enabled.
GNS is listening for DNS server requests on port 53
GNS is using port 5,353 to connect to mDNS
GNS status: OK
Domain served by GNS: rac.mydomain.net
GNS version: 12.1.0.1.0
Globally unique identifier of the cluster where GNS is running: 4217101cdaea4fbebf2339cfa673b58b
Name of the cluster where GNS is running: rhel12c
Cluster type: server.
GNS log level: 1.
GNS listening addresses: tcp://192.168.0.87:60360.
Oracle-GNS A 192.168.0.87 Unique Flags: 0x15
rhel12c-scan A 192.168.0.88 Unique Flags: 0x81
rhel12c-scan A 192.168.0.89 Unique Flags: 0x81
rhel12c-scan A 192.168.0.96 Unique Flags: 0x81
rhel12c-scan1-vip A 192.168.0.96 Unique Flags: 0x81
rhel12c-scan2-vip A 192.168.0.89 Unique Flags: 0x81
rhel12c-scan3-vip A 192.168.0.88 Unique Flags: 0x81
rhel12c.Oracle-GNS SRV Target: Oracle-GNS Protocol: tcp Port: 60360 Weight: 0 Priority: 0 Flags: 0x15
rhel12c.Oracle-GNS TXT CLUSTER_NAME="rhel12c", CLUSTER_GUID="4217101cdaea4fbebf2339cfa673b58b", NODE_ADDRESS="192.168.0.87", SERVER_STATE="RUNNING", VERSION="12.1.0.1.0", DOMAIN="rac.mydomain.net" Flags: 0x15
rhel12c1-vip A 192.168.0.95 Unique Flags: 0x81
rhel12c2-vip A 192.168.0.91 Unique Flags: 0x81
The IPs assigned to the VIPs and SCAN are stored in the OCR (possible to read from an ocrdump file) but could change across cluster reboots, as the outputs below show.
srvctl config gns -list -a
GNS is enabled.
GNS is listening for DNS server requests on port 53
GNS is using port 5,353 to connect to mDNS
GNS status: OK
Domain served by GNS: rac.mydomain.net
GNS version: 12.1.0.1.0
Globally unique identifier of the cluster where GNS is running: 4217101cdaea4fbebf2339cfa673b58b
Name of the cluster where GNS is running: rhel12c
Cluster type: server.
GNS log level: 1.
GNS listening addresses: tcp://192.168.0.87:60360.
Oracle-GNS A 192.168.0.87 Unique Flags: 0x15
rhel12c-scan A 192.168.0.88 Unique Flags: 0x81
rhel12c-scan A 192.168.0.89 Unique Flags: 0x81
rhel12c-scan A 192.168.0.96 Unique Flags: 0x81
rhel12c-scan1-vip A 192.168.0.96 Unique Flags: 0x81
rhel12c-scan2-vip A 192.168.0.89 Unique Flags: 0x81
rhel12c-scan3-vip A 192.168.0.88 Unique Flags: 0x81

rhel12c.Oracle-GNS SRV Target: Oracle-GNS Protocol: tcp Port: 60360 Weight: 0 Priority: 0 Flags: 0x15
rhel12c.Oracle-GNS TXT CLUSTER_NAME="rhel12c", CLUSTER_GUID="4217101cdaea4fbebf2339cfa673b58b", NODE_ADDRESS="192.168.0.87", SERVER_STATE="RUNNING", VERSION="12.1.0.1.0", DOMAIN="rac.mydomain.net" Flags: 0x15
rhel12c1-vip A 192.168.0.95 Unique Flags: 0x81
rhel12c2-vip A 192.168.0.91 Unique Flags: 0x81

srvctl config gns -list -a
GNS is enabled.
GNS is listening for DNS server requests on port 53
GNS is using port 5,353 to connect to mDNS
GNS status: OK
Domain served by GNS: rac.mydomain.net
GNS version: 12.1.0.1.0
Globally unique identifier of the cluster where GNS is running: 4217101cdaea4fbebf2339cfa673b58b
Name of the cluster where GNS is running: rhel12c
Cluster type: server.
GNS log level: 1.
GNS listening addresses: tcp://192.168.0.87:28251.
Oracle-GNS A 192.168.0.87 Unique Flags: 0x15
rhel12c-scan A 192.168.0.89 Unique Flags: 0x81
rhel12c-scan A 192.168.0.92 Unique Flags: 0x1
rhel12c-scan A 192.168.0.96 Unique Flags: 0x81
rhel12c-scan1-vip A 192.168.0.96 Unique Flags: 0x81
rhel12c-scan2-vip A 192.168.0.89 Unique Flags: 0x81
rhel12c-scan3-vip A 192.168.0.92 Unique Flags: 0x1

rhel12c.Oracle-GNS SRV Target: Oracle-GNS Protocol: tcp Port: 28251 Weight: 0 Priority: 0 Flags: 0x15
rhel12c.Oracle-GNS TXT CLUSTER_NAME="rhel12c", CLUSTER_GUID="4217101cdaea4fbebf2339cfa673b58b", NODE_ADDRESS="192.168.0.87", SERVER_STATE="RUNNING", VERSION="12.1.0.1.0", DOMAIN="rac.mydomain.net" Flags: 0x15
rhel12c1-vip A 192.168.0.98 Unique Flags: 0x81
rhel12c2-vip A 192.168.0.91 Unique Flags: 0x81

srvctl config gns -list -a
GNS is enabled.
GNS is listening for DNS server requests on port 53
GNS is using port 5,353 to connect to mDNS
GNS status: OK
Domain served by GNS: rac.mydomain.net
GNS version: 12.1.0.1.0
Globally unique identifier of the cluster where GNS is running: 4217101cdaea4fbebf2339cfa673b58b
Name of the cluster where GNS is running: rhel12c
Cluster type: server.
GNS log level: 1.
GNS listening addresses: tcp://192.168.0.87:28251.
Oracle-GNS A 192.168.0.87 Unique Flags: 0x15
rhel12c-scan A 192.168.0.88 Unique Flags: 0x81
rhel12c-scan A 192.168.0.89 Unique Flags: 0x81
rhel12c-scan A 192.168.0.96 Unique Flags: 0x81
rhel12c-scan1-vip A 192.168.0.96 Unique Flags: 0x81
rhel12c-scan2-vip A 192.168.0.89 Unique Flags: 0x81
rhel12c-scan3-vip A 192.168.0.88 Unique Flags: 0x81

rhel12c.Oracle-GNS SRV Target: Oracle-GNS Protocol: tcp Port: 28251 Weight: 0 Priority: 0 Flags: 0x15
rhel12c.Oracle-GNS TXT CLUSTER_NAME="rhel12c", CLUSTER_GUID="4217101cdaea4fbebf2339cfa673b58b", NODE_ADDRESS="192.168.0.87", SERVER_STATE="RUNNING", VERSION="12.1.0.1.0", DOMAIN="rac.mydomain.net" Flags: 0x15
rhel12c1-vip A 192.168.0.98 Unique Flags: 0x81
rhel12c2-vip A 192.168.0.91 Unique Flags: 0x81
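To look at the raw entries, the OCR could be dumped to a file and searched (a sketch; assumes a writable dump location and sufficient privileges, and the key names may vary by version):
# ocrdump /tmp/ocr.dmp
# grep -i scan /tmp/ocr.dmp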

Useful metalink notes
DNS and DHCP Setup Example for Grid Infrastructure GNS [ID 946452.1]

Creating Extended Statistics With Function-based Column Groups

There are nine restrictions on creating extended statistics. These restrictions are the same for versions 11.1, 11.2 and 12.1. One of the constraints is "A column group can not contain expressions". Oracle documentation also provides an example of what is a column group and what is an expression when it comes to extended statistics extensions: an example column group can be "(c1, c2)" and an example expression can be "(c1 + c2)". In short, if columns are comma separated then it will be considered a column group. However this causes a problem when creating extended statistics with functions. Given below is an example.
SQL> create table exstat (a number, b date);

SQL> select dbms_stats.create_extended_stats(user,'EXSTAT','(a,trunc(b))') from dual;
select dbms_stats.create_extended_stats(user,'EXSTAT','(a,trunc(b))') from dual
*
ERROR at line 1:
ORA-20001: Invalid Extension: Column group can contain only columns seperated by comma
Trying to create extended statistics with column a and trunc(b) results in an error. What's clear from the error is that 1. Oracle was expecting a column group and 2. it must only contain columns separated by commas.
It is expecting a column group but the second portion of the extension is not recognized as a column, hence the error. To overcome this create a function-based index. For the above extended statistics extension the following index was created
create index aidx on exstat(a,trunc(b));
After which the creation of the extended statistics works.
SQL> select dbms_stats.create_extended_stats(user,'EXSTAT','(a,trunc(b))') from dual;

DBMS_STATS.CREATE_EXTENDED_STATS(USER,'EXSTAT','(A,TRUNC(B))')
--------------------------------------------------------------------------------
SYS_STUE4B2X1G802ME0XHTBYWFY_Q
Simply create the index first and then the extended statistics.
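The extension created can be verified by querying user_stat_extensions; given the output above, something like the following would be expected:
SQL> select extension_name, extension from user_stat_extensions where table_name='EXSTAT';

EXTENSION_NAME                 EXTENSION
------------------------------ ---------------
SYS_STUE4B2X1G802ME0XHTBYWFY_Q (A,TRUNC(B))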



Trying to drop an extended statistics extension that uses a function results in the following error
SQL>  exec dbms_stats.drop_extended_stats(user,'EXSTAT','(A,TRUNC(B))');
BEGIN dbms_stats.drop_extended_stats(user,'EXSTAT','(A,TRUNC(B))'); END;

*
ERROR at line 1:
ORA-20000: extension "(A,TRUNC(B))" does not exist
ORA-06512: at "SYS.DBMS_STATS", line 13055
ORA-06512: at "SYS.DBMS_STATS", line 45105
ORA-06512: at line 1
Even though the extension is present, the error message says "does not exist". To resolve this, drop the index created for the extended statistics extension
SQL> DROP INDEX AIDX;

Index dropped.
After which the extended statistics extension is dropped without any issue.
SQL> exec dbms_stats.drop_extended_stats(user,'EXSTAT','(A,TRUNC(B))');

PL/SQL procedure successfully completed.
This was tested on 12.1, 11.2.0.4 and 11.2.0.3 and all exhibited the same behavior. But this test failed on 11.1.0.7, even after creating the index.
SQL> create table exstat (a number, b date);

SQL> select dbms_stats.create_extended_stats(user,'EXSTAT','(a,trunc(b))') from dual;
select dbms_stats.create_extended_stats(user,'EXSTAT','(a,trunc(b))') from dual
*
ERROR at line 1:
ORA-20001: Invalid Extension: Column group can contain only columns seperated by comma

SQL> create index aidx on exstat(a,trunc(b));

SQL> select dbms_stats.create_extended_stats(user,'EXSTAT','(a,trunc(b))') from dual;
select dbms_stats.create_extended_stats(user,'EXSTAT','(a,trunc(b))') from dual
*
ERROR at line 1:
ORA-20001: Invalid Extension: Column group can contain only columns seperated by comma
The index has created a hidden virtual column. Trying to add a virtual column with the same expression complains of duplication
SQL>  alter table exstat add (c as (trunc(b)));
alter table exstat add (c as (trunc(b)))
*
ERROR at line 1:
ORA-54015: Duplicate column expression was specified
But having a virtual column with the function expression didn't help either.
SQL> drop index aidx;

SQL> alter table exstat add (c as (trunc(b)));

SQL> select dbms_stats.create_extended_stats(user,'EXSTAT','(a,trunc(b))') from dual;
select dbms_stats.create_extended_stats(user,'EXSTAT','(a,trunc(b))') from dual
*
ERROR at line 1:
ORA-20001: Invalid Extension: Column group can contain only columns seperated by comma
It's tempting to use the virtual column itself in the extended statistics column group. But "extension cannot contain a virtual column" is restriction number one!
SQL> select dbms_stats.create_extended_stats(user,'EXSTAT','(a,c)') from dual;
select dbms_stats.create_extended_stats(user,'EXSTAT','(a,c)') from dual
*
ERROR at line 1:
ORA-20001: Error when processing extension - virtual column is referenced in a column expression

ASM Disk Group Dependency Exists Even After Being Dropped

A database using ASM for storage has dependencies on the ASM disk groups.
[oracle@rhel6m1 ~]$ srvctl config database -d std11g2
Database unique name: std11g2
Database name: std11g2
Oracle home: /opt/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/STD11G2/PARAMETERFILE/spfile.257.806251953
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: std11g2
Database instances: std11g21,std11g22
Disk Groups: DATA,FLASH
Mount point paths:
Services: myservice,srv.domain.net
Type: RAC
Database is administrator managed
But it seems that even if no real dependency exists between the database and an ASM disk group, the disk group is listed as a resource. By "no real dependency" it is meant that there are no database objects currently existing on the disk group in question. Following are the steps of the test case (tested on 11.2.0.3).
Create a new disk group and mount it on all nodes
SQL> create diskgroup test external redundancy disk '/dev/sdg1';

Diskgroup created.

SQL> select name,state from v$asm_diskgroup;

NAME STATE
------------------------------ -----------
CLUSTER_DG MOUNTED
DATA MOUNTED
FLASH MOUNTED
TEST MOUNTED
As there are no database objects on it, it's still not part of the DB configuration
[oracle@rhel6m1 ~]$ srvctl config database -d std11g2
Database unique name: std11g2
Database name: std11g2
Oracle home: /opt/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/STD11G2/PARAMETERFILE/spfile.257.806251953
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: std11g2
Database instances: std11g21,std11g22
Disk Groups: DATA,FLASH
Mount point paths:
Services: myservice,srv.domain.net
Type: RAC
Database is administrator managed
But it is listed as a resource
[oracle@rhel6m1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CLUSTER_DG.dg
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.DATA.dg
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.FLASH.dg
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.MYLISTENER.lsnr
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.TEST.dg
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2

ora.asm
ONLINE ONLINE rhel6m1 Started
ONLINE ONLINE rhel6m2 Started
ora.gsd
OFFLINE OFFLINE rhel6m1
OFFLINE OFFLINE rhel6m2
ora.net1.network
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.ons
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.registry.acfs
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.MYLISTENER_SCAN1.lsnr
1 ONLINE ONLINE rhel6m2
ora.cvu
1 ONLINE ONLINE rhel6m2
ora.oc4j
1 ONLINE ONLINE rhel6m2
ora.rhel6m1.vip
1 ONLINE ONLINE rhel6m1
ora.rhel6m2.vip
1 ONLINE ONLINE rhel6m2
ora.scan1.vip
1 ONLINE ONLINE rhel6m2
ora.std11g2.db
1 ONLINE ONLINE rhel6m1 Open
2 ONLINE ONLINE rhel6m2 Open
ora.std11g2.myservice.svc
1 ONLINE ONLINE rhel6m1
2 ONLINE ONLINE rhel6m2
ora.std11g2.srv.domain.net.svc
1 ONLINE ONLINE rhel6m1
2 ONLINE ONLINE rhel6m2
Dismount the disk group from all but one node and drop it from the node where it is still mounted
SQL> alter diskgroup test dismount;
SQL> drop diskgroup test;
SQL> select name,state from v$asm_diskgroup;

NAME STATE
------------------------------ -----------
CLUSTER_DG MOUNTED
DATA MOUNTED
FLASH MOUNTED
As seen from the above output the disk group no longer exists and is also not listed on the resource list
[grid@rhel6m1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CLUSTER_DG.dg
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.DATA.dg
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.FLASH.dg
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.MYLISTENER.lsnr
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.asm
ONLINE ONLINE rhel6m1 Started
ONLINE ONLINE rhel6m2 Started
ora.gsd
OFFLINE OFFLINE rhel6m1
OFFLINE OFFLINE rhel6m2
ora.net1.network
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.ons
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.registry.acfs
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.MYLISTENER_SCAN1.lsnr
1 ONLINE ONLINE rhel6m2
ora.cvu
1 ONLINE ONLINE rhel6m2
ora.oc4j
1 ONLINE ONLINE rhel6m2
ora.rhel6m1.vip
1 ONLINE ONLINE rhel6m1
ora.rhel6m2.vip
1 ONLINE ONLINE rhel6m2
ora.scan1.vip
1 ONLINE ONLINE rhel6m2
ora.std11g2.db
1 ONLINE ONLINE rhel6m1 Open
2 ONLINE ONLINE rhel6m2 Open
ora.std11g2.myservice.svc
1 ONLINE ONLINE rhel6m1
2 ONLINE ONLINE rhel6m2
ora.std11g2.srv.domain.net.svc
1 ONLINE ONLINE rhel6m1
2 ONLINE ONLINE rhel6m2
This would be the expected behavior. Drop the disk group and it should be removed from the cluster. Next is the oddity.



Create the disk group as before and create some database objects. In this case a tablespace is created
SQL> create diskgroup test external redundancy disk '/dev/sdg1';
SQL> create tablespace testtbs datafile '+test(datafile)' SIZE 10M;
SQL> ALTER USER ASANGA QUOTA UNLIMITED ON TESTTBS;
Creating the tablespace makes the disk group part of the configuration.
[oracle@rhel6m1 ~]$ srvctl config database -d std11g2
Database unique name: std11g2
Database name: std11g2
Oracle home: /opt/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/STD11G2/PARAMETERFILE/spfile.257.806251953
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: std11g2
Database instances: std11g21,std11g22
Disk Groups: DATA,FLASH,TEST
Mount point paths:
Services: myservice,srv.domain.net
Type: RAC
Database is administrator managed
A table is created on the tablespace created earlier and a few rows are inserted to simulate some DB activity.
SQL> create table test (a number) tablespace testtbs;
SQL> insert into test values(10);
SQL> commit;
SQL> select * from test;

A
----------
10
Remove the database objects and drop the disk group


SQL> drop table test purge;
SQL> alter user asanga quota 0 on testtbs;
SQL> drop tablespace testtbs including contents and datafiles;
Even though there are no longer any database objects on this disk group, it is still part of the DB configuration
[oracle@rhel6m1 ~]$ srvctl config database -d std11g2
Database unique name: std11g2
Database name: std11g2
Oracle home: /opt/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/STD11G2/PARAMETERFILE/spfile.257.806251953
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: std11g2
Database instances: std11g21,std11g22
Disk Groups: DATA,FLASH,TEST
Mount point paths:
Services: myservice,srv.domain.net
Type: RAC
Database is administrator managed
Dropping the disk group doesn't make any difference either
SQL> drop diskgroup test;

SQL> select name,state from v$asm_diskgroup;

NAME STATE
------------------------------ -----------
CLUSTER_DG MOUNTED
DATA MOUNTED
FLASH MOUNTED

[grid@rhel6m1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CLUSTER_DG.dg
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.DATA.dg
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.FLASH.dg
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.MYLISTENER.lsnr
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.TEST.dg
OFFLINE OFFLINE rhel6m1
OFFLINE OFFLINE rhel6m2

ora.asm
ONLINE ONLINE rhel6m1 Started
ONLINE ONLINE rhel6m2 Started
ora.gsd
OFFLINE OFFLINE rhel6m1
OFFLINE OFFLINE rhel6m2
ora.net1.network
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.ons
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.registry.acfs
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.MYLISTENER_SCAN1.lsnr
1 ONLINE ONLINE rhel6m2
ora.cvu
1 ONLINE ONLINE rhel6m2
ora.oc4j
1 ONLINE ONLINE rhel6m2
ora.rhel6m1.vip
1 ONLINE ONLINE rhel6m1
ora.rhel6m2.vip
1 ONLINE ONLINE rhel6m2
ora.scan1.vip
1 ONLINE ONLINE rhel6m2
ora.std11g2.db
1 ONLINE ONLINE rhel6m1 Open
2 ONLINE ONLINE rhel6m2 Open
ora.std11g2.myservice.svc
1 ONLINE ONLINE rhel6m1
2 ONLINE ONLINE rhel6m2
ora.std11g2.srv.domain.net.svc
1 ONLINE ONLINE rhel6m1
2 ONLINE ONLINE rhel6m2


[oracle@rhel6m1 ~]$ srvctl config database -d std11g2
Database unique name: std11g2
Database name: std11g2
Oracle home: /opt/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/STD11G2/PARAMETERFILE/spfile.257.806251953
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: std11g2
Database instances: std11g21,std11g22
Disk Groups: DATA,FLASH,TEST
Mount point paths:
Services: myservice,srv.domain.net
Type: RAC
Database is administrator managed
As seen from the above outputs, even though the disk group no longer exists it is listed as part of the database configuration and listed as a resource. Trying to delete the resource results in the following error.
[grid@rhel6m1 ~]$ crsctl delete resource ora.TEST.dg
CRS-2730: Resource 'ora.std11g2.db' depends on resource 'ora.TEST.dg'
CRS-4000: Command Delete failed, or completed with errors.
The solution is to remove the database dependency on the disk group.
[oracle@rhel6m1 ~]$ srvctl modify database -d std11g2 -a "DATA,FLASH"
After which the disk group is no longer part of the DB configuration
[oracle@rhel6m1 ~]$ srvctl config database -d std11g2
Database unique name: std11g2
Database name: std11g2
Oracle home: /opt/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/STD11G2/PARAMETERFILE/spfile.257.806251953
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: std11g2
Database instances: std11g21,std11g22
Disk Groups: DATA,FLASH
Mount point paths:
Services: myservice,srv.domain.net
Type: RAC
Database is administrator managed
As there are no dependencies the delete command executes without any errors
[grid@rhel6m1 ~]$ crsctl delete resource ora.TEST.dg
Once deleted the disk group is no longer listed as a resource
[grid@rhel6m1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CLUSTER_DG.dg
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.DATA.dg
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.FLASH.dg
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.MYLISTENER.lsnr
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.asm
ONLINE ONLINE rhel6m1 Started
ONLINE ONLINE rhel6m2 Started
ora.gsd
OFFLINE OFFLINE rhel6m1
OFFLINE OFFLINE rhel6m2
ora.net1.network
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.ons
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
ora.registry.acfs
ONLINE ONLINE rhel6m1
ONLINE ONLINE rhel6m2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.MYLISTENER_SCAN1.lsnr
1 ONLINE ONLINE rhel6m1
ora.cvu
1 ONLINE ONLINE rhel6m2
ora.oc4j
1 ONLINE ONLINE rhel6m2
ora.rhel6m1.vip
1 ONLINE ONLINE rhel6m1
ora.rhel6m2.vip
1 ONLINE ONLINE rhel6m2
ora.scan1.vip
1 ONLINE ONLINE rhel6m1
ora.std11g2.db
1 ONLINE ONLINE rhel6m1 Open
2 ONLINE ONLINE rhel6m2 Open
ora.std11g2.myservice.svc
1 ONLINE ONLINE rhel6m1
2 ONLINE ONLINE rhel6m2
ora.std11g2.srv.domain.net.svc
1 ONLINE ONLINE rhel6m1
2 ONLINE ONLINE rhel6m2
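A targeted status check could also confirm the resource is gone (a sketch; CRS-2613 is the message expected for a nonexistent resource):
[grid@rhel6m1 ~]$ crsctl stat res ora.TEST.dg
CRS-2613: Could not find resource 'ora.TEST.dg'.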

RMAN-05609: Must specify a username for target connection when using active duplicate

In 11gR2 it was possible to run an active duplication command, for data guard setups and for cloning a DB via duplication, without explicitly specifying a username for the target instance.
But in 12c1 this leads to the "RMAN-05609: Must specify a username for target connection when using active duplicate" error.
$ rman target / auxiliary sys/ent12c1db@ent12c1stns

Recovery Manager: Release 12.1.0.1.0 - Production on Mon Aug 4 13:08:03 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

connected to target database: ENT12C1 (DBID=209099011)
connected to auxiliary database: ENT12C1S (not mounted)

RMAN>duplicate target database for standby from active database
2> spfile
3> parameter_value_convert 'ent12c1','ent12c1s','ENT12C1','ENT12C1S'
4> set db_unique_name='ent12c1s'
5> set db_file_name_convert='/data/oradata/ENT12C1','/opt/app/oracle/oradata/ENT12C1S'
6> set log_file_name_convert='/data/oradata/ENT12C1','/opt/app/oracle/oradata/ENT12C1S','/data/flash_recovery/ENT12C1','/opt/app/oracle/fast_recovery_area/ENT12C1S'
7> set control_files='/opt/app/oracle/oradata/ENT12C1S','/opt/app/oracle/fast_recovery_area/ENT12C1S'
8> set db_create_file_dest='/opt/app/oracle/oradata'
9> set db_recovery_file_dest='/opt/app/oracle/fast_recovery_area'
10> set log_archive_max_processes='10'
11> set fal_client='ENT12C1STNS'
12> set fal_server='ENT12C1TNS'
13> set log_archive_dest_2='service=ENT12C1TNS LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=ent12c1'
14> set log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=ent12c1s';

Starting Duplicate Db at 04-AUG-14
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 08/04/2014 13:08:11
RMAN-05501: aborting duplication of target database
RMAN-05609: Must specify a username for target connection when using active duplicate




The solution is to include the username and password for the target instance as well.
$ rman target sys/ent12c1 auxiliary sys/ent12c1@ent12c1stns
This is different to how duplication was done on 11gR2 and as such may require changes to duplication scripts when used with 12c.
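If placing passwords on the command line is a concern, the same connections could be made interactively so that RMAN prompts for each password (a sketch using the auxiliary TNS alias from above):
$ rman
RMAN> connect target sys
RMAN> connect auxiliary sys@ent12c1stns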

Upgrade Oracle Database 12c1 from 12.1.0.1 to 12.1.0.2

This post lists the steps for upgrading from 12.1.0.1 to 12.1.0.2 for single instance databases in a data guard configuration (physical standby). The single instance databases are non-CDB. When upgrading databases in a data guard configuration the upgrade process is initiated on the standby site by upgrading the standby database software first. The upgrade of the database software to 12.1.0.2 is done as an out-of-place upgrade (as opposed to an in-place upgrade). As such the database software could be installed in a different oracle home while the redo apply is taking place. Installing the 12.1.0.2 database software is identical to that of 12.1.0.1 and there are no new steps to be carried out.
Once the database software is installed copy the spfile, init file, oracle password file, tnsnames.ora and listener.ora files from the 12.1.0.1 oracle home into the 12.1.0.2 oracle home. In this case the 12.1.0.1 oracle home is /opt/app/oracle/product/12.1.0/dbhome_1 and the 12.1.0.2 oracle home is /opt/app/oracle/product/12.1.0/dbhome_2

echo $ORACLE_HOME
/opt/app/oracle/product/12.1.0/dbhome_1 ## 12.1.0.1 ORACLE HOME
cd $ORACLE_HOME/dbs
$ cp spfileent12c1s.ora initent12c1s.ora orapwent12c1s ../../dbhome_2/dbs/
$ cd ../network/admin/
$ cp tnsnames.ora listener.ora ../../../dbhome_2/network/admin/
Open the listener.ora file that is in the new ORACLE_HOME/network/admin and edit the static listener entries to reflect the new oracle home path. These static listener entries were created as part of the data guard configuration.

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = ent12c1)
(SID_NAME = ent12c1)
(ORACLE_HOME = /opt/app/oracle/product/12.1.0/dbhome_2)
)
)
Defer the redo transport on the primary until the standby is mounted using the new 12.1.0.2 oracle home.
SQL> alter system set log_archive_dest_state_2='defer';
and cancel the redo apply on the standby
SQL> alter database recover managed standby database cancel;
Modify /etc/oratab to reflect the new oracle home association with the standby instance
cat /etc/oratab
ent12c1s:/opt/app/oracle/product/12.1.0/dbhome_2:N
Stop the listener started out of the 12.1.0.1 home.
Modify the environment variables so that they point to the new 12.1.0.2 oracle home (ORACLE_HOME, PATH etc.); see the sketch below. Then start the listener using the 12.1.0.2 oracle home and verify that it is started from the new home.
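A minimal sketch of the environment changes, assuming a bash shell and the home paths used in this post:
$ export ORACLE_HOME=/opt/app/oracle/product/12.1.0/dbhome_2
$ export PATH=$ORACLE_HOME/bin:$PATH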
$ lsnrctl start

Starting /opt/app/oracle/product/12.1.0/dbhome_2/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.1.0.2.0 - Production
System parameter file is /opt/app/oracle/product/12.1.0/dbhome_2/network/admin/listener.ora
Log messages written to /opt/app/oracle/diag/tnslsnr/hpc5/listener/alert/log.xml
...
Services Summary...
Service "ent12c1s" has 1 instance(s).
Instance "ent12c1s", status UNKNOWN, has 1 handler(s) for this service..
Mount the standby database using the 12.1.0.2 oracle home and start the redo apply
SQL> startup mount;
SQL> alter database recover managed standby database disconnect;
Verify from the alert log that the database is started using the 12.1.0.2 oracle home parameter file.
ORACLE_HOME = /opt/app/oracle/product/12.1.0/dbhome_2
System name: Linux
Node name: hpc5.domain.net
Release: 2.6.18-194.el5
Version: #1 SMP Tue Mar 16 21:52:39 EDT 2010
Machine: x86_64
Using parameter settings in server-side spfile /opt/app/oracle/product/12.1.0/dbhome_2/dbs/spfileent12c1s.ora
Enable the redo transport on the primary
SQL> alter system set log_archive_dest_state_2='enable';
This concludes the upgrade activity on the standby. The standby database will be upgraded once the redo generated during the primary database upgrade is transported and applied onto the standby.



Although the setup used in this post consists of a data guard configuration, the primary site steps are valid for single instances without a data guard configuration as well. The software upgrade on the primary is done as an out-of-place upgrade. Once the 12.1.0.2 software is installed run the preupgrade script from the new home.
cd /opt/app/oracle/product/12.1.0/dbhome_2/rdbms/admin
SQL> @preupgrd.sql


Loading Pre-Upgrade Package...


***************************************************************************
Executing Pre-Upgrade Checks in ENT12C1...
***************************************************************************


************************************************************

====>> ERRORS FOUND for ENT12C1 <<====

The following are *** ERROR LEVEL CONDITIONS *** that must be addressed
prior to attempting your upgrade.
Failure to do so will result in a failed upgrade.


1) Check Tag: PURGE_RECYCLEBIN
Check Summary: Check that recycle bin is empty prior to upgrade
Fixup Summary:
"The recycle bin will be purged."

You MUST resolve the above error prior to upgrade

************************************************************

************************************************************

====>> PRE-UPGRADE RESULTS for ENT12C1 <<====

ACTIONS REQUIRED:

1. Review results of the pre-upgrade checks:
/opt/app/oracle/cfgtoollogs/ent12c1/preupgrade/preupgrade.log

2. Execute in the SOURCE environment BEFORE upgrade:
/opt/app/oracle/cfgtoollogs/ent12c1/preupgrade/preupgrade_fixups.sql

3. Execute in the NEW environment AFTER upgrade:
/opt/app/oracle/cfgtoollogs/ent12c1/preupgrade/postupgrade_fixups.sql

************************************************************

***************************************************************************
Pre-Upgrade Checks in ENT12C1 Completed.
***************************************************************************

***************************************************************************
As instructed in the output, run the fixup sql on the primary database
@/opt/app/oracle/cfgtoollogs/ent12c1/preupgrade/preupgrade_fixups.sql
Run the pre-upgrade sql again and check for errors and warnings. If there are any errors or warnings fix them before continuing with DBUA. To begin the upgrade run DBUA from the 12.1.0.2 home.
Select the database upgrade option.

Select the source oracle home and the instance to upgrade.

The DBUA detects that a data guard configuration is in place and prompts to sync the standby database. This is not an error but information provided by the DBUA. There should not be any archive gaps prior to the primary upgrade.
The network configuration page did not detect the listener running out of the 12.1.0.1 home. Because of this, when the upgrade finished the redo transport failed: there was no tnsnames.ora file in the 12.1.0.2 home and the standby was unable to fetch the archive logs as the listener was running out of the old home. If during the upgrade no listener is detected, manually copy the listener.ora and tnsnames.ora files to the 12.1.0.2 home and edit the oracle home entry for the static listener registration.
Take a backup before the upgrade or allow DBUA to take a backup as part of the upgrade process.
(Screenshots of the summary and upgrade pages and the upgrade results follow.)

Once the upgrade is completed set the environment variables to reflect the new oracle home.
Also run the post-upgrade script as suggested by the pre-upgrade sql
@/opt/app/oracle/cfgtoollogs/ent12c1/preupgrade/postupgrade_fixups.sql
Verify the listener is running out of the new home. If, as mentioned earlier, the listener.ora and tnsnames.ora files were not moved to the new oracle home, move them manually to the 12.1.0.2 home. Once moved, stop the listener and start it so it runs out of the new 12.1.0.2 oracle home.
To validate that the data guard configuration is working, carry out a few log file switches and verify they are received at the standby and applied; a sketch of such a check follows.
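A minimal check, assuming SQL*Plus sessions on the primary and standby (the APPLIED column of v$archived_log shows whether redo has been applied on the standby):
-- on the primary
SQL> alter system switch logfile;

-- on the standby
SQL> select sequence#, applied from v$archived_log order by sequence#;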
Once tested and satisfied with the upgrade, update the compatible initialization parameter to 12.1.0.2. It is not possible to downgrade the database once this has been set. In this case the standby was the first to be set to compatible = 12.1.0.2, followed by the primary. Read the data guard admin guide for the exact steps.
This concludes the upgrade from 12.1.0.1 to 12.1.0.2. This is not an extensive "how to" guide but only the highlights. For complete information refer to the oracle upgrade guide and the following metalink notes.

Useful metalink notes
Oracle 12cR1 Upgrade Companion [ID 1462240.1]
Oracle Database Upgrade Path Reference List [ID 730365.1]
Master Note For Oracle Database 12c Release 1 (12.1) Database/Client Installation/Upgrade/Migration Standalone Environment (Non-RAC) [1520299.1]


Related Posts
Upgrade from 11.1.0.7 to 11.2.0.4 (Clusterware, ASM & RAC)
Upgrading from 10.2.0.4 to 10.2.0.5 (Clusterware, RAC, ASM)
Upgrade from 10.2.0.5 to 11.2.0.3 (Clusterware, RAC, ASM)
Upgrade from 11.1.0.7 to 11.2.0.3 (Clusterware, ASM & RAC)
Upgrading from 11.1.0.7 to 11.2.0.3 with Transient Logical Standby
Upgrading from 11.2.0.1 to 11.2.0.3 with in-place upgrade for RAC
In-place upgrade from 11.2.0.2 to 11.2.0.3
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 1
Upgrading from 11.2.0.2 to 11.2.0.3 with Physical Standby - 2
Upgrading from 11gR2 (11.2.0.3) to 12c (12.1.0.1) Grid Infrastructure

enq: HW - contention and latch: enqueue hash chains

An application uses serialized java objects stored in the database to overcome application server failovers. The java objects are stored as BLOBs. The application has been using the same session storing mechanism for a number of years (7+) and gone through upgrades of Oracle versions 10.2 -> 11.1 -> 11.2 without any performance regression. During the inception (with 10.2) high wait times on enq: HW - contention were encountered, which was remedied with the use of event 44951 (refer 740075.1).
However during a test involving a large number of concurrent application sessions (x10 baseline), the test system encountered high enq: HW - contention even though event 44951 had been set with level 1024.

The following actions were tried in order to reduce the number of, and time spent on, HW - contention events.
1. Large initial extent size for the lob segment.
  STORE AS "LOB_SEG"
(
...(INITIAL 5368709120 NEXT 134217728 ...
);
2. Large value for next extent, ideally this should be greater than the (average size of the LOB inserted x number of concurrent inserts).
STORE AS "LOB_SEG"
(
...INITIAL 5368709120 NEXT 134217728 ...
);
3. Tablespace with uniform extent allocation where the extent size is greater than the (average size of the LOB inserted x number of concurrent inserts); a sketch of such a tablespace is shown after this list.
4. Large chunk size for the lob segment and a tablespace block size equal to the chunk size. The lob segment was placed in a tablespace with a block size of 32K and a chunk size of 32K was used for the lob segment.
  STORE AS SECUREFILE "LOB_SEG"
(
TABLESPACE "TBS32K"CHUNK 32768
);
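For action 3, a sketch of creating such a uniform-extent tablespace (the file path and sizes are illustrative only):
create tablespace lobtbs datafile '/data/oradata/lobtbs01.dbf' size 10G
extent management local uniform size 128M;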
However these actions only slightly reduced the HW - contention wait events and the performance was not satisfactory. At this point it was decided to use securefile for the lob segments as it is considered better in performance compared to basicfile.



Investigating securefile related issues led to the following MOS note (1532311.1), which mentions high waits on buffer busy waits and enq: TX - contention when there are frequent updates on securefile lobs. The solution for this is to increase the securefile concurrency estimate hidden parameter (_securefiles_concurrency_estimate). However, instead of relying on a parameter that depends on the concurrency, the application was modified by replacing the statement sequence of (insert/update) with an (insert/delete/insert).
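For reference, setting the hidden parameter would look something like the following (a sketch only; the value shown is illustrative and hidden parameters should only be changed under Oracle Support's guidance):
SQL> alter system set "_securefiles_concurrency_estimate"=50 scope=spfile;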
Running the same test as earlier resulted in high latch: enqueue hash chain waits.

This wait event was a result of bug 13775960, which results in high enqueue hash chains latch contention for concurrent inserts on securefiles (refer 13775960.8). Luckily there's a patch for the bug (13775960, which supersedes patch 13395403) and once applied the enqueue hash chain wait events were resolved.
Even though Oracle documentation says that securefiles outperform basicfiles, as seen in this case securefiles pose some issues of their own which must be tested against.

Useful metalink notes
Bug 13775960 - "enqueue hash chains" latch contention for delete/insert Securefile workload [ID 13775960.8]
Securefiles DMLs cause high 'buffer busy waits'&'enq: TX - contention' wait events leading to whole database performance degradation [ID 1532311.1]
Bug 2530125 - Hang possible with "enqueue hash chains" latch held during deadlock detection [ID 2530125.8]
Bug 13395403 - "enqueue hash chains" latch contention on Securefile blob DMLs - superseded [ID 13395403.8]
'enq HW - contention' For Busy LOB Segment [ID 740075.1]
How To Analyze the Wait Statistic: 'enq: HW - contention'[ID 419348.1]

Nologging vs Logging for LOBs and enq: CF - contention

Changing the logging option is one of the possible performance tuning tasks when dealing with LOBs. However use of nologging will make recovery of these lob segments impossible, so the use of this option also depends on the nature of the application data. For transient data, where recovery is not expected in case of database failure, the nologging option would be suitable. When the logging option is not explicitly mentioned the tablespace's logging option is inherited by the lob segment. Refer to the Oracle documentation for more logging related information on basicfile and securefile.
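The logging and caching options in effect for a lob column can be checked and changed along these lines (a sketch using the ses table created below):
SQL> select column_name, cache, logging from user_lobs where table_name='SES';
SQL> alter table ses modify lob (sesob) (nocache nologging);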
This post presents the results of a simple test carried out comparing nologging vs logging for both basicfile and securefile LOB segments. The java code used for the test is given at the end of the post. The first case is the basicfile. Nologging is not possible if cache is enabled on the lob segment, therefore the nocache option is chosen for both logging and nologging. The table definition is given below.
create table ses(sesid varchar2(100), sesob blob) SEGMENT CREATION IMMEDIATE TABLESPACE users
LOB
(
sesob
)
STORE AS object_lob_seg (
TABLESPACE lob32ktbs
DISABLE STORAGE IN ROW
CHUNK 32K
NOCACHE NOLOGGING
--NOCACHE LOGGING
PCTVERSION 0
STORAGE (MAXEXTENTS UNLIMITED)
INDEX object_lob_idx (
TABLESPACE lob32ktbs
STORAGE (MAXEXTENTS UNLIMITED)
)
)
/
The test involves inserting an 800KB LOB into the table 100 times and then later updating it with a similar size LOB. Since the LOB is greater than 4K, the table is created with disable storage in row. Chunk size is set to 32k and the lob segment is stored in a tablespace of 32K block size (lob32ktbs) while the table resides in a different tablespace (users) of 8K block size. Pctversion is set to 0 as no consistent reads are expected. The only option that is changed is the logging option. The test database version is 11.2.0.3.
Redo size statistic and the log file sync wait event times are compared for the two test runs as these are directly related to the logging option. Graphs below show the comparison of these for each test run.

There's no surprise that the nologging option generates the lowest amount of redo. The logging test case generated around 83-84MB of redo for insert and update, which is roughly the same size as the LOBs inserted/updated (800KB x 100). There's minimal logging during the nologging test. Since redo is counted for the entire test, the redo seen could be the redo generated for the table data insert (as opposed to the lobs) which still is on a tablespace with logging. Nevertheless a clear difference could be observed in the amount of redo generated when nologging is used. This is also reflected in the log file sync time for the two test cases, which got reduced from several minutes to under a minute.
Next the same test was executed but this time the lob segment was stored as securefile. The table DDL is given below. The only difference apart from the securefile keyword is that pctversion 0 has been replaced with retention none. All other settings are the same as the basicfile test (tablespaces, database etc.). Chunk size is deprecated in securefile, and when specified is considered only as guidance.
create table ses(sesid varchar2(100), sesob blob) SEGMENT CREATION IMMEDIATE TABLESPACE users
LOB
(
sesob
)
STORE AS securefile object_lob_seg (
TABLESPACE lob32ktbs
DISABLE STORAGE IN ROW
CHUNK 32K
NOCACHE NOLOGGING
--NOCACHE LOGGING
RETENTION NONE
STORAGE (MAXEXTENTS UNLIMITED)
INDEX object_lob_idx (
TABLESPACE lob32ktbs
STORAGE (MAXEXTENTS UNLIMITED)
)
)
/
Similar to the earlier test, nologging generated a low amount of redo compared to logging and resulted in short waits on the log file sync event.

Comparing redo size and log file sync time for the nologging option between basicfile and securefile shows a mixed bag of results. Basicfile performed well for inserts in reducing redo generated and log file sync time while securefile performed well for updates.

Comparing the IO types on OEM console during the nologging test it was noted that securefile uses predominantly large writes while the basicfile uses small writes.
Comparing the test results with logging enabled, securefile outperforms basicfile for inserts and updates in terms of log file sync wait time. Both securefile and basicfile generate roughly the same amount of redo. It must be noted nocache logging is the default for securefile.
The table below shows all the test results.




It seems that when the application permits it's best to use nologging for lobs, which reduces the amount of redo generated and log file sync waits. However there are some drawbacks to using nologging on LOBs which only come to light when there are multiple sessions doing LOB related DMLs. Following is an AWR snippet from a load test on a pre-production system.
After CPU the top wait event is log file sync and a high portion of this wait event is due to a basicfile LOB related insert statement that stores some transient data. Changing the lob segment option to nologging resulted in a lower log file sync time but it also introduced high enq: CF - contention wait times.
According to 1072417.1, enq: CF - contention is normal and expected when doing DML on LOBs with nocache nologging. CF contention occurs as oracle records the unrecoverable system change number (SCN) in the control file. From the application perspective the overall response time degraded after changing to nologging and the option had to be reverted back to cache logging.

Useful metalink notes
Performance Degradation as a Result of 'enq: CF - contention'[ID 1072417.1]
LOB Performance Guideline [ID 268476.1]
LOBS - Storage, Redo and Performance Issues [ID 66431.1]
LOBS - Storage, Read-consistency and Rollback [ID 162345.1]
Master Note - RDBMS Large Objects (LOBs) [ID 1268771.1]
Performance problems on a table that has hundreds of columns including LOBs [ID 1292685.1]
POOR PERFORMANCE WITH LOB INSERTS [ID 978045.1]
Securefiles Performance Appears Slower Than Basicfile LOB [ID 1323933.1]


Java Code Used for Testing
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import oracle.jdbc.OraclePreparedStatement;
import oracle.jdbc.pool.OracleDataSource;

public class LobLoggingTest {

final String URL = "jdbc:oracle:thin:@192.168.0.66:1521:ent11g2";
final String USERNAME = "asanga";
final String PASSWORD = "asa";

public static void main(String[] args) {

LobLoggingTest test = new LobLoggingTest();
//Insert test
test.insertTest();

System.out.println("\n\n************* end of insert test **************\n\n");

//Update test
test.updateTest();

}

public void insertTest() {
try {

OracleDataSource pool = new OracleDataSource();
pool.setURL(URL);
pool.setUser(USERNAME);
pool.setPassword(PASSWORD);

Connection con = pool.getConnection();
con.setAutoCommit(false);

long t1 = System.currentTimeMillis();

LobStat.displayStats(con);
LobStat.displayWaits(con);

byte[] x = new byte[800 * 1024];
x[1] = 10;
x[798 * 1024] = 20;

for (int i = 0; i < 100; i++) {

OraclePreparedStatement pr = (OraclePreparedStatement) con.prepareStatement("insert into ses values(?,?)");

String sesid = "abcdefghijklmnopqrstuvwxy" + Math.random();
pr.setString(1, sesid);
pr.setBytes(2, x);

pr.execute();
con.commit();
pr.close();

}

long t2 = System.currentTimeMillis();

LobStat.displayStats(con);
LobStat.displayWaits(con);

con.close();

System.out.println("time taken " + (t2 - t1));

} catch (Exception ex) {
ex.printStackTrace();
}
}

public void updateTest() {
try {

OracleDataSource pool = new OracleDataSource();
pool.setURL(URL);
pool.setUser(USERNAME);
pool.setPassword(PASSWORD);

Connection con = pool.getConnection();
con.setAutoCommit(false);

String[] sesids = new String[100];

PreparedStatement pr1 = con.prepareStatement("select sesid from ses");
ResultSet rs1 = pr1.executeQuery();
int i = 0;
while (rs1.next()) {

sesids[i] = rs1.getString(1);
i++;
}
rs1.close();
pr1.close();
con.close();

con = pool.getConnection();
LobStat.displayStats(con);
LobStat.displayWaits(con);

OraclePreparedStatement pr = (OraclePreparedStatement) con.prepareStatement("update ses set SESOB=? where sesid=?");

byte[] xx = new byte[800 * 1024];
xx[1] = 10;
xx[798 * 1024] = 20;

long t1 = System.currentTimeMillis();
for (String x : sesids) {

pr.setBytes(1, xx);
pr.setString(2, x);
pr.execute();
con.commit();
}

long t2 = System.currentTimeMillis();
System.out.println("time taken " + (t2 - t1));

LobStat.displayStats(con);
LobStat.displayWaits(con);

pr.close();
con.close();

} catch (Exception ex) {
ex.printStackTrace();
}
}
}


import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LobStat {

public static void displayStats(Connection con) {
try {
PreparedStatement pr = con.prepareStatement("select name,value from v$mystat,v$statname where v$mystat.statistic#=v$statname.statistic# "
+ " and v$statname.name in ('CPU used when call started',"
+ "'CPU used by this session','db block gets','db block gets from cache','db block gets from cache (fastpath)',"
+ "'db block gets direct','consistent gets','consistent gets from cache','consistent gets from cache (fastpath)',"
+ "'consistent gets - examination','consistent gets direct','physical reads','physical reads direct',"
+ "'physical read IO requests','physical read bytes','consistent changes','redo size','redo writes',"
+ "'lob writes','lob writes unaligned','physical writes direct (lob)','physical writes','physical writes direct','physical writes from cache','physical writes direct temporary tablespace'"
+ " ,'physical writes direct temporary tablespace','securefile direct read bytes','securefile direct write bytes','securefile direct read ops'"
+ " ,'securefile direct write ops') order by 1");

ResultSet rs = pr.executeQuery();


while(rs.next()){

System.out.println(rs.getString(1)+" : "+rs.getDouble(2));
}

rs.close();
pr.close();
} catch (SQLException ex) {
ex.printStackTrace();
}

}

public static void displayWaits(Connection con){
try {
PreparedStatement pr = con.prepareStatement("select event,total_waits,TIME_WAITED_MICRO from V$SESSION_EVENT where sid=SYS_CONTEXT ('USERENV', 'SID') and event in ('log file sync','enq: CF - contention')");
ResultSet rs = pr.executeQuery();

System.out.println("event : total waits : time waited micro");
while(rs.next()){

System.out.println(rs.getString(1)+" : "+rs.getLong(2)+" : "+rs.getLong(3));

}

rs.close();
pr.close();
} catch (SQLException ex) {
ex.printStackTrace();
}

}
}

LOB Chunk and Tablespace Block Size

The chunk value corresponds to the data size used by oracle when reading or writing a lob value. Once set, the chunk size cannot be changed. Though it doesn't matter for lobs stored in row, for out of row lobs the space is used in multiples of the chunk size.
From Oracle documentation (for basicfile lobs) A chunk is one or more Oracle blocks. You can specify the chunk size for the BasicFiles LOB when creating the table that contains the LOB. This corresponds to the data size used by Oracle Database when accessing or modifying the LOB value. Part of the chunk is used to store system-related information and the rest stores the LOB value. If the tablespace block size is the same as the database block size, then CHUNK is also a multiple of the database block size. The default CHUNK size is equal to the size of one tablespace block, and the maximum value is 32K. Once the value of CHUNK is chosen (when the LOB column is created), it cannot be changed. Hence, it is important that you choose a value which optimizes your storage and performance requirements.
For securefile CHUNK is an advisory size and is provided for backward compatibility purposes.
From a performance perspective it is considered that accessing lobs in big chunks is more efficient. You can set CHUNK to the data size most frequently accessed or written. For example, if only one block of LOB data is accessed at a time, then set CHUNK to the size of one block. If you have big LOBs, and read or write big amounts of data, then choose a large value for CHUNK. The chunk size in effect can be checked as sketched below.
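A quick way to confirm the chunk size in effect for a lob column is to query user_lobs (a sketch; run as the owning schema):
SQL> select table_name, column_name, chunk from user_lobs;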
This post shows the results of a test case carried out to compare the performance benefits of using a large chunk size along with a tablespace with a large block size (8k vs 32k). The blob used for the test case is 800KB. The java code used for the test case is given at the end of the post. For each chunk size (8k vs 32k) the caching option was also changed (nocache vs cache) as it also has a direct impact on IO usage. The lob is stored out of row in a separate tablespace from the table.
The table creation DDL used for the basicfile test is shown below (comment and uncomment each option based on the test case). LOB32KTBS is a tablespace of 32k block size while LOB8KTBS is a tablespace of 8k block size.
create table ses(sesid varchar2(100), sesob blob) SEGMENT CREATION IMMEDIATE TABLESPACE users
LOB
(
sesob
)
STORE AS object_lob_seg (
TABLESPACE LOB32KTBS
--TABLESPACE LOB8KTBS
DISABLE STORAGE IN ROW
CHUNK 32K
--CHUNK 8K
CACHE
--NOCACHE
PCTVERSION 0
STORAGE (MAXEXTENTS UNLIMITED)
INDEX object_lob_idx (
TABLESPACE LOB32KTBS
--TABLESPACE LOB8KTBS
STORAGE (MAXEXTENTS UNLIMITED)
)
)
/
The test cases comprised reading a lob column for a row and inserting lobs. The IO related statistics comparison for the select test case is given below. Based on the graphs it could be seen that a 32K chunk size on a tablespace with a block size of 32K requires a lower number of logical or physical reads compared to having an 8k chunk and the lob segment on an 8k block size tablespace. Though not shown on the graphs, in a separate test, using a 32k chunk size and placing the lob segment on an 8K block size tablespace had the same performance characteristics as having an 8k chunk on an 8k block size tablespace. On the other hand, having a chunk of 8k and placing the lob segment on a 32k block size tablespace had the same performance characteristics as having a 32k chunk on a 32k block size tablespace. This means that the chunk size alone is not going to reduce the amount of IO; the block size of the tablespace where the lob segment is stored has an influence as well.

The next test was the insert of lobs. The results are shown on the following two graphs. Similar to the read test, having a large chunk size and tablespace block size for the lob reduces the IO.




The same test was carried out for securefile lob segments. The table creation DDL is given below. The only difference in the DDL compared to basicfile is the "retention none". All other parameters/options are the same.
create table ses(sesid varchar2(100), sesob blob) SEGMENT CREATION IMMEDIATE TABLESPACE users
LOB
(
sesob
)
STORE AS securefile object_lob_seg (
TABLESPACE LOB32KTBS
--TABLESPACE LOB8KTBS
DISABLE STORAGE IN ROW
CHUNK 32K
--CHUNK 8K
CACHE
--NOCACHE
RETENTION NONE
STORAGE (MAXEXTENTS UNLIMITED)
INDEX object_lob_idx (
TABLESPACE LOB32KTBS
--TABLESPACE LOB8KTBS
STORAGE (MAXEXTENTS UNLIMITED)
)
)
/
The results of the select test are shown on the graphs below. Similar to basicfile, the larger chunk size advisory and tablespace block size combination outperforms the smaller chunk/block size combination. In all cases securefile outperforms basicfile for the amount of logical or physical reads.

The outcome of the insert test is also the same as that of the basicfile insert test, where the larger chunk/block size combination outperforms the smaller chunk/block size combination. Also, between basicfile and securefile, the securefile outperforms the basicfile.

These tests have shown that it's better to use large chunk/tablespace block sizes for larger LOBs to reduce the logical/physical IO related to LOBs.

Useful White papers
SecureFile Performance
Oracle 11g: SecureFiles

Related Post
Nologging vs Logging for LOBs and enq: CF - contention

Java code used for the test. For the code of the LobStat class refer to the earlier post.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import oracle.jdbc.OraclePreparedStatement;
import oracle.jdbc.pool.OracleDataSource;

public class LobChunkTest {

final String URL = "jdbc:oracle:thin:@192.168.0.66:1521:ent11g2";
final String USERNAME = "asanga";
final String PASSWORD = "asa";

public static void main(String[] args) {

LobChunkTest test = new LobChunkTest();
//Insert test
test.insertTest();

System.out.println("\n\n************* end of insert test **************\n\n");

//select test
test.selectTest();

}

public void insertTest() {
try {

OracleDataSource pool = new OracleDataSource();
pool.setURL(URL);
pool.setUser(USERNAME);
pool.setPassword(PASSWORD);

Connection con = pool.getConnection();
con.setAutoCommit(false);

long t1 = System.currentTimeMillis();

LobStat.displayStats(con);

byte[] x = new byte[800 * 1024];
x[1] = 10;
x[798 * 1024] = 20;

for (int i = 0; i < 100; i++) {

OraclePreparedStatement pr = (OraclePreparedStatement) con.prepareStatement("insert into ses values(?,?)");

String sesid = "abcdefghijklmnopqrstuvwxy" + Math.random();
pr.setString(1, sesid);
pr.setBytes(2, x);

pr.execute();
con.commit();
pr.close();

}

long t2 = System.currentTimeMillis();

LobStat.displayStats(con);

con.close();

System.out.println("time taken " + (t2 - t1));

} catch (Exception ex) {
ex.printStackTrace();
}
}

public void selectTest() {
try {

OracleDataSource pool = new OracleDataSource();
pool.setURL("jdbc:oracle:thin:@192.168.0.66:1521:ent11g2");
pool.setUser("asanga");
pool.setPassword("asa");

Connection con = pool.getConnection();
con.setAutoCommit(false);

String[] sesids = new String[100];

PreparedStatement pr1 = con.prepareStatement("select sesid from ses");
ResultSet rs1 = pr1.executeQuery();
int i = 0;
while (rs1.next()) {

sesids[i] = rs1.getString(1);

i++;

}
rs1.close();
pr1.close();
con.close();

con = pool.getConnection();
LobStat.displayStats(con);

OraclePreparedStatement pr = (OraclePreparedStatement) con.prepareStatement("select SESOB from ses where sesid=?");

long t1 = System.currentTimeMillis();
for (String x : sesids) {


pr.setString(1, x);
ResultSet rs = pr.executeQuery();

while (rs.next()) {

byte[] blob = rs.getBytes(1);

}

rs.close();
}

long t2 = System.currentTimeMillis();
System.out.println("time taken " + (t2 - t1));

LobStat.displayStats(con);

pr.close();

con.close();

} catch (Exception ex) {
ex.printStackTrace();
}
}
}