Hi Johannes, you said it yourself: "reasonable size". We need to calculate a reasonable hugepages size, lock that memory for Oracle's exclusive use, and relieve the OS kernel of the overhead of managing it. Blindly applying a hugepages setting is not a best practice; following this rule of thumb (based on the total SGA sizes of all the databases running on that server) is the right way to do it, as sketched below.
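A minimal sketch of that rule of thumb, modeled on the approach of Oracle's well-known hugepages_settings.sh script; the variable names are illustrative, and it assumes all instances whose SGAs should be counted are running so their shared memory segments are visible to ipcs:

#!/bin/bash
# Estimate vm.nr_hugepages from the shared memory segments (roughly the
# SGAs) of all running instances on this server.
HPG_SZ_KB=$(grep Hugepagesize /proc/meminfo | awk '{print $2}')  # huge page size in KB
NUM_PG=0
for SEG_BYTES in $(ipcs -m | awk '{print $5}' | grep '[0-9][0-9]*'); do
    MIN_PG=$(( SEG_BYTES / (HPG_SZ_KB * 1024) ))
    [ "$MIN_PG" -gt 0 ] && NUM_PG=$(( NUM_PG + MIN_PG + 1 ))
done
echo "Suggested setting: vm.nr_hugepages = $NUM_PG"

Set the result in /etc/sysctl.conf and recheck it whenever the SGA sizes on the server change.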
Comment on SGA & Hugepages Myth
Wiki Page: Connecting to Oracle Database 12c from Oracle R with ROracle
Connecting to Oracle Database from Oracle R with ROracle

R is open source software for computing statistics and generating statistical graphs. The Oracle R Distribution is a repackaged, enhanced version of open source R. ROracle is a high-performance R package that uses the Oracle Call Interface (OCI) libraries for Oracle Database connection handling. In this tutorial we discuss connecting to Oracle Database 12c from the Oracle R Distribution using ROracle. This tutorial has the following sections.

Overview of ROracle Methods
Setting the Environment
Loading the ROracle Library
Loading the Oracle Database Driver
Creating a Connection
Creating an Oracle Database Table
Adding Table Data
Running an SQL SELECT Query
Fetching and Displaying Query Result
Listing Fields, Column Info and Tables
Committing and Rolling Back a Transaction
Getting Metadata
Closing Resources

Overview of ROracle Methods

ROracle provides methods to connect to Oracle Database, run SQL statements, fetch results, commit/rollback a transaction, get metadata, and close connection resources. The main methods, which we discuss in this tutorial, are listed in the following table.

Oracle: Creates and instantiates an Oracle client and returns an object with which a connection to Oracle Database may be established.
dbConnect: Creates a connection object with which SQL queries may be run on Oracle Database.
dbSendQuery: Runs an SQL query statement but does not fetch the data, which has to be fetched with the fetch method.
dbGetQuery: Runs an SQL statement and fetches the data; a separate invocation of fetch is not required.
fetch: Fetches the data returned by a previously run query.
dbCommit: Commits the current transaction in an Oracle connection.
dbGetInfo: Gets metadata.
dbColumnInfo: Gets column metadata.
dbRollback: Rolls back the current transaction in an Oracle connection.
dbDisconnect: Disconnects the current connection and frees all resources associated with the connection object.
dbDriver: Loads the Oracle Database driver.
dbUnloadDriver: Unloads the Oracle Database driver.
dbListTables: Lists tables.
dbListConnections: Lists connections.
dbListResults: Returns a list of all associated result sets.
dbClearResult: Clears the result set and frees the associated resources.
dbWriteTable: Adds rows to a new table or appends rows to an existing table; auto-commits the current transaction and the data added or appended.
dbExistsTable: Returns a boolean indicating whether a table exists.
dbRemoveTable: Removes a table.
dbGetStatement: Gets the SQL statement associated with a query.
dbHasCompleted: Returns a boolean indicating whether a query has completed.
dbGetRowsAffected: Returns the number of rows affected by a SQL query.
dbGetRowCount: Returns the number of rows fetched by a SQL SELECT query.
show: A metadata method for a driver, result set, or connection object.
summary: A metadata method for a summary of a driver, result set, or connection object.

Setting the Environment

First, download the following software:
- Oracle Database 12c from http://www.oracle.com/us/corporate/features/database-12c/index.html
- the Oracle R Distribution .exe file from http://www.oracle.com/technetwork/database/database-technologies/r/r-distribution/downloads/index.html
- the ROracle zip file from http://www.oracle.com/technetwork/database/database-technologies/r/roracle/downloads/index.html

Windows OS is used in this article. Install Oracle Database 12c. Double-click on the Oracle R Distribution executable to install Oracle R. Start the Oracle R GUI.
Select Packages > Install package(s) from local zip files and choose the ROracle zip. The ROracle package gets installed. The ROracle package has a dependency on the DBI package, so install DBI next: select Packages > Install package(s), select a CRAN mirror and click on OK, then select the DBI package and click on OK. The DBI package gets installed.

Loading the ROracle Library

To load the ROracle package run the following command in the R command shell.

library(ROracle)

The ROracle library gets loaded.

Loading the Oracle Database Driver

Next, load the Oracle Database driver. Either of the following commands loads the Oracle Database driver.

driver <- Oracle()
driver <- dbDriver("Oracle")

After the Oracle driver has been loaded, a connection with Oracle Database may be established.

Creating a Connection

The dbConnect(drv, username = "", password = "", dbname = "", prefetch = FALSE, bulk_read = 1000L, stmt_cache = 0L, external_credentials = FALSE, sysdba = FALSE, ...) method is used to connect to Oracle Database from R using ROracle over OCI. At minimum, a driver instance, a username, and a password must be provided to connect to a local Oracle database instance.

connection <- dbConnect(driver, username = "OE", password = "OE")

To verify that a connection has been established, run the following dbListTables command.

dbListTables(connection, schema = "OE", all = FALSE, full = FALSE)

The database tables in the OE schema get listed. An Oracle instance may also be specified in dbConnect with the dbname attribute.

connection <- dbConnect(driver, username = "OE", password = "OE", dbname = "ORCL")

Subsequently, run the dbListTables command to list the Oracle Database tables in the OE schema. Another option for connecting to Oracle Database is to use a connect string constructed from the hostname, port, and SID as follows.

host <- "localhost"
port <- 1521
sid <- "ORCL"
connect.string <- paste(
  "(DESCRIPTION=",
  "(ADDRESS=(PROTOCOL=tcp)(HOST=", host, ")(PORT=", port, "))",
  "(CONNECT_DATA=(SID=", sid, ")))", sep = "")
connection <- dbConnect(driver, username = "OE", password = "OE", dbname = connect.string)

Subsequently, the tables in the OE schema may be listed using the connection object.

Creating an Oracle Database Table

Next, create an Oracle Database table. A method specific to creating an Oracle Database table is not provided; use dbGetQuery or dbSendQuery to run a CREATE TABLE SQL statement. Create a connection object and specify a SQL statement to create an Oracle Database table. Run the SQL statement using the dbGetQuery method with the connection object and the SQL statement string as args.

connection <- dbConnect(driver, username = "OE", password = "OE", dbname = "ORCL")
createTable <- "CREATE TABLE wlsserver (time_stamp VARCHAR2(55), category VARCHAR2(15), type VARCHAR2(55), servername VARCHAR2(15), code VARCHAR2(15), msg VARCHAR2(255))"
dbGetQuery(connection, createTable)

The command returns TRUE to indicate that the SQL statement ran without error. A database table WLSSERVER gets created in the OE schema.

Adding Table Data

Next, add rows to the wlsserver table using an INSERT SQL statement run with the dbGetQuery method. Create an INSERT statement with bind-variable position holders for the bind data. Create variables for the bind data. Invoke the dbGetQuery method with the connection object, the INSERT statement string, and the bind data as args.
insertString - "insert into wlsserver values(:1, :2, :3, :4, :5, :6)"; time_stamp - "Apr-8-2014-7:06:16-PM-PDT"; category - "Notice"; type - "WebLogicServer"; servername - "AdminServer"; code - "BEA-000365"; msg - "Server state changed to STANDBY"; dbGetQuery(connection, insertString, data.frame(time_stamp, category, type, servername, code, msg)); When the dbGetQuery is run one row gets added to the wlsserver table. Multiple row data may be bound in a single data.frame bind data object. For example bind data for two rows is created as follows. time_stamp1 - "Apr-8-2014-7:06:17-PM-PDT"; msg1 - "Server state changed to STARTING"; time_stamp2 - "Apr-8-2014-7:06:18-PM-PDT"; msg2 - "Server state changed to ADMIN"; log.data = data.frame(time_stamp = c(time_stamp1, time_stamp2), category= c("Notice", "Notice"), type= c("WebLogicServer", "WebLogicServer"), servername= c("AdminServer", "AdminServer"), code= c("BEA-000365", "BEA-000365"), msg = c(msg1, msg2)); Invoke the dbSendQuery method with a connection object, an INSERT statement, and the bind data as args. rs - dbSendQuery(connection, "insert into WLSSERVER (time_stamp, category, type, servername, code, msg) values (:1, :2, :3, :4, :5, :6)", data = log.data) When the dbSendQuery is run two rows get added. Using the dbGetQuery method add two more rows. The same bind variable values may be reused if some of the column values are the same in different rows. time_stamp - "Apr-8-2014-7:06:19-PM-PDT"; msg - "Server state changed to RESUMING"; dbGetQuery(connection, insertString, data.frame(time_stamp, category, type, servername, code, msg)); time_stamp - "Apr-8-2014-7:06:20-PM-PDT"; code - "BEA-000331"; msg - "Started WebLogic AdminServer"; dbGetQuery(connection, insertString, data.frame(time_stamp, category, type, servername, code, msg)); When the preceding dbGetQuery method is run two more rows get added. An INSERT statement may also be constructed without using bind position holders and a data.frame object as follows. insertRow - "INSERT INTO WLSSERVER (time_stamp, category, type, servername, code, msg) values ('Apr-8-2014-7:06:21-PM-PDT', 'Notice', 'WebLogicServer', 'AdminServer', 'BEA-000365', 'Server state changed to RUNNING')" Run the INSERT statement using the dbSendQuery method with a connection object and the INSERT statement string as args. queryresult -dbSendQuery(connection, insertRow); Another row gets added. The dbWriteTable method may be used to create a new table and add rows or append to an existing table. Specify the row data to be added as variables. Invoke the dbWriteTable method with a connection object, the database table name, a data.frame bind data object as args. Specify append=TRUE and overwrite=FALSE to append data and not overwrite. Specify the schema name with OE . time_stamp - "Apr-8-2014-7:06:22-PM-PDT"; code - "BEA-000360"; msg - "Server started in RUNNING mode"; dbWriteTable(connection, "WLSSERVER", data.frame(time_stamp, category, type, servername, code, msg), overwrite = FALSE, append = TRUE, schema = "OE") When the preceding dbWriteTable method is invoked another row gets added to the OE.WLSSERVER table. The data added may be output by invoking the dbReadTable method with a connection object and table name as args. dbReadTable(connection, "WLSSERVER") The 7 rows added get listed. Running an SQL SELECT Query To select table data run a SELECT SQL statement using either the dbSendQuery method or the dbGetQuery method. For example, the following statement selects all rows in the OE.WLSSERVER table. 
queryresult <- dbSendQuery(connection, "select * from OE.WLSSERVER")

When the preceding statement is run, a result set object is returned, but the data in the result set object is not fetched. To fetch the data in the result set, invoke the fetch method. The dbGetRowCount method returns the number of rows in the result set data. The dbGetRowsAffected method returns the number of rows affected by the SQL statement.

queryresult <- dbSendQuery(connection, "select * from OE.WLSSERVER")
resultdata <- fetch(queryresult)
dbGetRowCount(queryresult)
dbGetRowsAffected(queryresult)

As the query returns 7 rows, the dbGetRowCount method returns 7. As the SELECT statement does not affect any rows, the dbGetRowsAffected method returns 0.

Fetching and Displaying Query Result

Next, we shall fetch and display the data in the result set object. The following statement fetches the result set data into the resultdata object.

resultdata <- fetch(queryresult)

To display the data fetched with fetch, do not assign the result set data to a variable.

queryresult <- dbSendQuery(connection, "select * from OE.WLSSERVER")
fetch(queryresult)

The result set data gets displayed. A specific number of rows may be fetched by specifying the number of rows as an arg to the fetch method.

queryresult <- dbSendQuery(connection, "select * from OE.WLSSERVER")
fetch(queryresult, n = 5)

The preceding fetch fetches only 5 rows from the result set object. The execute method may be used to run a query statement without returning a result set object (as dbSendQuery does) or the result set data (as dbGetQuery does).

queryresult <- dbSendQuery(connection, "select * from OE.WLSSERVER")
execute(queryresult)

The execute method returns NULL. The dbHasCompleted method may be used to find out whether a query statement has completed; a query statement has not completed until all the data in its result set has been fetched. For example, invoke the dbSendQuery method to run a query statement but do not fetch the result set data, then invoke the dbHasCompleted method. The dbHasCompleted method should return FALSE, as the query result data has not been fetched yet.

queryresult <- dbSendQuery(connection, "select * from OE.WLSSERVER")
dbHasCompleted(queryresult)

If the dbHasCompleted method is invoked after fetching data with the fetch method, the query statement is indicated to have completed with a return value of TRUE.

queryresult <- dbSendQuery(connection, "select * from OE.WLSSERVER")
resultdata <- fetch(queryresult)
dbHasCompleted(queryresult)

Without fetching the result set, dbHasCompleted returns FALSE; after fetching the result set data, dbHasCompleted returns TRUE. All the data in the result set must be fetched for dbHasCompleted to return TRUE. For example, fetch only 5 rows from a result set that has 7 rows and invoke the dbHasCompleted method.

queryresult <- dbSendQuery(connection, "select * from OE.WLSSERVER")
fetch(queryresult, n = 5)
dbHasCompleted(queryresult)

The dbHasCompleted method returns FALSE, as not all the data in the query result has been fetched yet. In contrast to the dbSendQuery method, the dbGetQuery method returns the result data as an array of column data. For example, run a query statement and output the result data as follows.
resultdata <- dbGetQuery(connection, "select * from OE.WLSSERVER")
resultdata[1]
resultdata[2]
resultdata[3]
resultdata[4]
resultdata[5]
resultdata[6]

resultdata[1] displays the first column, resultdata[2] the second column, and so on up to resultdata[6], the sixth column. The output from the dbGetQuery is listed:

resultdata[1]
                 TIME_STAMP
1 Apr-8-2014-7:06:16-PM-PDT
2 Apr-8-2014-7:06:17-PM-PDT
3 Apr-8-2014-7:06:18-PM-PDT
4 Apr-8-2014-7:06:19-PM-PDT
5 Apr-8-2014-7:06:20-PM-PDT
6 Apr-8-2014-7:06:21-PM-PDT
7 Apr-8-2014-7:06:22-PM-PDT

resultdata[2]
  CATEGORY
1   Notice
2   Notice
3   Notice
4   Notice
5   Notice
6   Notice
7   Notice

resultdata[3]
            TYPE
1 WebLogicServer
2 WebLogicServer
3 WebLogicServer
4 WebLogicServer
5 WebLogicServer
6 WebLogicServer
7 WebLogicServer

resultdata[4]
   SERVERNAME
1 AdminServer
2 AdminServer
3 AdminServer
4 AdminServer
5 AdminServer
6 AdminServer
7 AdminServer

resultdata[5]
        CODE
1 BEA-000365
2 BEA-000365
3 BEA-000365
4 BEA-000365
5 BEA-000331
6 BEA-000365
7 BEA-000360

resultdata[6]
                               MSG
1  Server state changed to STANDBY
2 Server state changed to STARTING
3    Server state changed to ADMIN
4 Server state changed to RESUMING
5      Started WebLogic AdminServer
6  Server state changed to RUNNING
7   Server started in RUNNING mode

Once the result set data has been fetched with the fetch method, a subsequent invocation of fetch generates an error. For example, re-invoke the fetch method after fetching the result set data.

queryresult <- dbSendQuery(connection, "select * from OE.WLSSERVER")
resultdata <- fetch(queryresult)
fetch(queryresult)

An error "no more data to fetch" is returned.

Listing Fields, Column Info and Tables

The dbExistsTable method may be used to find out whether a table exists. For example, the following dbExistsTable invocation returns a boolean indicating whether the OE.WLSSERVER table exists.

dbExistsTable(connection, "WLSSERVER", schema = "OE")

The fields in the OE.WLSSERVER table may be listed with the dbListFields method as follows.

dbListFields(connection, "WLSSERVER", schema = "OE")

The column metadata in a query result set may be output using the dbColumnInfo method on a result set object.

queryresult <- dbSendQuery(connection, "select * from OE.WLSSERVER")
dbColumnInfo(queryresult)

The dbExistsTable method returns TRUE. The dbListFields method lists the fields in the OE.WLSSERVER table. The dbColumnInfo method lists the column metadata in the result set for a query statement selecting all rows in the OE.WLSSERVER table.

Committing and Rolling Back a Transaction

The dbCommit method is used to save permanently in the database all changes applied in a connection object. The dbRollback method is used to roll back the changes to the previous save point. For example, run a query to delete all rows from the OE.WLSSERVER table and roll back the transaction if the number of rows affected by the query is more than 0.

queryresult <- dbSendQuery(connection, "DELETE from WLSSERVER")
if (dbGetInfo(queryresult, what = "rowsAffected") > 0) {
  warning("Don't delete data -- rolling back transaction")
  dbRollback(connection)
}

To commit a query run with dbSendQuery, invoke the dbCommit method with the connection object used to run the query as the arg.

queryresult <- dbSendQuery(connection, "select * from OE.WLSSERVER")
dbCommit(connection)

When the preceding statements are run, the connection used to delete all rows from the table is rolled back to the previous save point, as the number of rows affected is more than 0.
The dbCommit method returns TRUE after the changes in the connection object have been saved permanently, regardless of whether the query statement actually affects any rows.

Getting Metadata

Several methods are provided to output metadata. The following show method invocation outputs the result set metadata.

queryresult <- dbSendQuery(connection, "select * from OE.WLSSERVER")
fetch(queryresult, n = 5)
show(queryresult)

The output includes the query statement run, whether the statement is a SELECT statement, the number of rows affected, whether the statement completed, and the row count. The summary method is used to output the summary of a driver, connection, result set, or result set data object. For example, run a query to select all rows from the OE.WLSSERVER table which have code BEA-000360, then fetch the result set data and output the summary of the result set data. Also output the dimension of the result set data using the dim method.

queryresult <- dbSendQuery(connection, "select * from OE.WLSSERVER where code = :1", data = data.frame(code = "BEA-000360"))
resultdata <- fetch(queryresult)
summary(resultdata)
dim(resultdata)

The summary method returns the columns in the result set data and the number of rows. The dim method returns the dimension as 1 6, which implies 1 row and 6 columns. The dbListConnections method returns all connections associated with a driver object.

driver <- Oracle()
dbListConnections(driver)

The connection information includes the user name, the connect string if any, the Oracle database server version, the server type, the number of results processed, and open results. The dbGetInfo method returns metadata about a driver, connection, or result set object. The names method may be used to output the attribute names in the metadata returned by the dbGetInfo method. For example, output the metadata about a driver, and the attributes in that metadata, as follows.

names(dbGetInfo(driver))
dbGetInfo(driver)

The driver name, version, client version, total number of connections, open connections, and connection metadata for each connection are returned. The result sets associated with a connection are returned by the dbListResults method with the connection object as arg.

dbListResults(connection)

The output includes each of the query statements, the number of rows affected by the query, the row count, whether the query statement is a SELECT statement, and whether the query statement completed. Metadata about a connection object may be output using the dbGetInfo method with the connection object as an arg.

dbGetInfo(connection)

The user name, the Oracle database instance, the database version, the server type, the total number of result sets associated with the connection, and metadata about each of the result set objects are output. To output the query statement for a result set object, invoke the dbGetStatement method. To output a result set's metadata, invoke the dbGetInfo method with the result set object as arg. The names method may be used to output the metadata attributes for a result set's metadata.

dbGetStatement(queryresult)
names(dbGetInfo(queryresult))
dbGetInfo(queryresult)

The dbGetStatement method returns the associated query statement. The dbGetInfo method also returns the query statement, in addition to whether the statement is a SELECT statement, the number of rows affected, whether the statement completed, and the fields in the result set.
Closing Resources

As a best practice, all the resources associated with an Oracle database connection session must be closed after completing the session. To clear a result set object, invoke the dbClearResult method. If a database table is to be deleted, invoke the dbRemoveTable method. To disconnect a connection use the dbDisconnect method, and to unload a database driver object invoke the dbUnloadDriver method.

dbClearResult(queryresult)
dbRemoveTable(connection, "WLSSERVER")
dbDisconnect(connection)
dbUnloadDriver(driver)

Each of these methods returns TRUE to indicate that the resource has been closed or removed. If a query making use of any of the closed resources is run, an error is generated; for example, if a closed connection object is used, an "invalid connection" error is generated. In this tutorial we discussed connecting to Oracle Database from Oracle R using ROracle.
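Pulling the pieces together, a minimal end-to-end sketch of the session used throughout this tutorial (it assumes the OE schema, a local ORCL instance, and the WLSSERVER table created above):

library(ROracle)

drv <- dbDriver("Oracle")                              # load the Oracle driver
con <- dbConnect(drv, username = "OE", password = "OE",
                 dbname = "ORCL")                      # connect to the ORCL instance
res <- dbSendQuery(con, "select * from WLSSERVER")     # run a query
resultdata <- fetch(res, n = -1)                       # fetch all rows
print(resultdata)
dbClearResult(res)                                     # free the result set
dbDisconnect(con)                                      # close the connection
dbUnloadDriver(drv)                                    # unload the driver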
Wiki Page: Integrating Apache Solr Data into Oracle Database in Oracle Data Integrator 11g
Integrating Apache Solr Data into Oracle Database in Oracle Data Integrator 11g

Apache Solr is a search and index engine providing fast indexing and full-text search; it is the most commonly used search engine among NoSQL database management systems. In an earlier tutorial (http://www.toadworld.com/platforms/oracle/w/wiki/11014.loading-solr-data-into-oracle-database-with-oracle-loader-for-hadoop-3-0-0.aspx) we loaded data from Solr into Oracle Database using Oracle Loader for Hadoop (OLH) 3.0.0; that approach involved configuring and invoking the OLH tool from the command line. Oracle Data Integrator provides a user interface in which Apache Solr data may be loaded into Oracle Database using the integration module IKM File-Hive To Oracle. An IKM for direct integration from Apache Solr is not available, but a Hive table may be created over an Apache Solr collection and the IKM File-Hive To Oracle may then be used to integrate the Hive table data into Oracle Database. In this tutorial we shall integrate Solr data in Oracle Data Integrator (ODI) 11g. This tutorial has the following sections.

Setting the Environment
Creating the Hive and Oracle Database Tables
Creating the Physical Architecture for Apache Solr
Creating the Physical Architecture for Oracle Database
Creating the Logical Architecture for Apache Solr
Creating the Logical Architecture for Oracle Database
Creating the Model for Solr
Creating the Model for Oracle Database
Creating an Integration Project
Creating an Integration Interface
Running the Interface

Setting the Environment

We shall use the same environment as in the tutorial on loading Solr data with OLH; one additional piece of software, Oracle Data Integrator 11g, is required for this tutorial. Start ODI with the following commands.

cd /home/dvohra/dbhome_1/oracledi/client
sh odi.sh

In ODI select Operator > Connect to Repository to connect to the repository. In Oracle Data Integrator Login click on OK to log in to ODI. Start HDFS, the NameNode and DataNode, with the following commands.

hadoop namenode
hadoop datanode

Start the Hive server with the following command.

hive --service hiveserver

From the directory with the example Solr instance, start the Solr server with the following command.

cd /solr/solr-4.9.0/example/
java -jar start.jar

Creating the Hive and Oracle Database Tables

In this section we shall create the Hive table to load data from and the Oracle Database table to integrate data into. In the OLH-Solr tutorial we created a Hive external table over a Solr data collection using a Hive storage handler for Solr. While OLH and ODI do not support direct loading/integration from the Hive external table defined over the Solr collection, OLH/ODI may be used to load/integrate from a table defined over the Hive external table. Next, create a Hive managed table into which we shall load data from the Hive external table wlslog, created in the OLH-Solr tutorial. Start the Hive shell with the following command.

hive

Create a Hive managed table with the same columns as the Hive external table wlslog. Run the following Hive script in the Hive shell.

create table solr (
  time_stamp STRING,
  category STRING,
  type STRING,
  servername STRING,
  code STRING,
  msg STRING);

A Hive table gets created. Run the following command in the Hive shell to load data from the wlslog table into the solr table.

insert into table solr select * from wlslog;

The wlslog table data gets loaded into the solr table. To create the target database table OE.WLSLOG run the following command in SQL*Plus.
CREATE TABLE OE.wlslog (time_stamp VARCHAR2(4000), category VARCHAR2(4000), type VARCHAR2(4000), servername VARCHAR2(4000), code VARCHAR2(4000), msg VARCHAR2(4000));

The Oracle Database table gets created and its structure may be listed with the DESC command.

Creating the Physical Architecture for Apache Solr

The physical architecture for Solr comprises a Hive technology data server and a physical schema defined over the default database in Hive. The Hive table is not defined in the physical architecture; it is defined in the logical architecture. In ODI select Topology > Physical Architecture > Technologies > Hive. Right-click on the Hive technology and select New Data Server. In the Data Server Definition specify a Name; the Technology is pre-selected as Hive. Select the Flexfields tab. De-select the checkbox in the Default column and specify thrift://localhost:10000 in the Value column. Select the JDBC tab. Select the JDBC Driver as org.apache.hadoop.hive.jdbc.HiveDriver and the JDBC Url as jdbc:hive://localhost:10000/default. Click on Test Connection to test the connection. Click on OK in the Confirmation dialog indicating that the data will be saved, and on OK in the Information dialog indicating that at least one physical schema should be registered for the data server. Click on Test in the Test Connection dialog. If the connection gets established, a Successful Connection message gets displayed in an Information dialog. Click on OK, then click on Save. A data server for Solr gets created. The data server is not directly defined over the Solr search engine, as a Solr-based technology is not available in ODI; the data server is defined over a Hive database. As prompted earlier, we need to create a physical schema for the data server. Right-click on the Solr data server and select New Physical Schema. In the Physical Schema Definition the Name is pre-specified as Solr.default. Specify default in the Schema (Schema) and Schema (Work Schema) fields. Click on Save to save the physical schema definition. An Information dialog prompts to specify a context for the physical schema; click on OK. We shall specify a context in a subsequent section. A physical schema Solr.default gets created.

Creating the Physical Architecture for Oracle Database

The physical architecture for Oracle Database also consists of a data server and a physical schema. Select Topology > Physical Architecture > Technologies > Oracle. Right-click on Oracle and select New Data Server. In Data Server Definition specify a Name (OracleDatabase) and specify the Instance as ORCL. The Technology is pre-selected as Oracle. Specify the User and Password for the schema in which the OE.WLSLOG target database table was created earlier. Select the JDBC tab. Select the JDBC Driver as oracle.jdbc.OracleDriver and the JDBC Url as jdbc:oracle:thin:@127.0.0.1:1521:ORCL. Click on Test Connection to test the JDBC connection. Click on OK in the Confirmation dialog that indicates that the data will be saved, and on OK in the Information dialog that prompts that at least one physical schema should be registered with the data server; we shall register a physical schema after we have created the data server. Click on Test in the Test Connection dialog. If a connection with Oracle Database gets established, a Successful Connection message gets displayed. Click on OK, then click on Save to complete the data server. A new data server OracleDatabase gets created in the Oracle technology. Next, create the physical schema that was prompted for earlier.
Right-click on the OracleDatabase data server and select New Physical Schema. In Physical Schema Definition specify the Schema (Schema) and Schema (Work Schema) as OE. The Name is pre-specified as OracleDatabase.OE. Click on Save to save the physical schema configuration. Click on OK in the Information dialog that prompts that a context should be specified for the physical schema; we shall specify a context in a subsequent section. A new physical schema gets created in the OracleDatabase data server.

Creating the Logical Architecture for Apache Solr

The logical architecture is the logical, or abstract, interface to the physical architecture and comprises a logical schema. To create a logical schema for Solr select Topology > Logical Architecture > Technologies > Hive. As a Solr-specific technology is not provided in ODI, we create a Hive technology logical schema defined over the Hive technology physical schema. In Logical Schema Definition specify a Name. In the Physical Schemas column for the Global Context select Solr.default. Click on Save. A new logical schema gets created in the Hive technology.

Creating the Logical Architecture for Oracle Database

The logical schema is the abstract interface to the physical schema. In this section we define a logical schema for the Oracle Database physical schema defined earlier. A logical schema comprises a Context; a context was prompted for when creating the physical schemas. Select the Topology > Logical Architecture > Technologies > Oracle node. Right-click on Oracle and select New Logical Schema. In Logical Schema Definition specify a Name. In the Physical Schemas column for the Global Context select the OracleDatabase.OE physical schema created earlier. Click on Save. A new logical schema OracleDatabase gets created.

Creating the Model for Solr

While the logical schema for Solr defines the abstract interface through which Oracle Data Integrator connects to Solr, we have still not defined a data model including the Hive table defined over the Solr data collection. Unless a data model is defined we won't know which Hive table to load data from. In this section we define a model for Solr. First, create a model folder, which is not required but is recommended, especially if multiple models are to be created. Select Designer > Models and select New Model Folder from the drop-down list. In Model Folder Definition specify a Name. Click on Save. A model folder gets created. To add a model to the model folder, right-click on the model folder and select New Model. In Model Definition specify a Name (Solr) and select the Technology as Hive. Select the Logical Schema Solr created earlier. Select the Action Group as Generic Action. Click on Save to save the model definition. Next, add a datastore to the model. Right-click on the Solr model and select New Datastore. In Datastore Definition specify a Name and select the Datastore Type as Table. In Resource Name specify solr, which is the Hive table created from the Solr data. Select the Columns tab. Click on Add Column to add columns to the datastore. Add columns TIME_STAMP, CATEGORY, TYPE, SERVERNAME, CODE and MSG, which correspond to the columns in the Hive table default.solr. Also specify the column Type and Logical Length for each column, and select the Not Null checkbox for each column. Click on Save to create the datastore. A datastore gets listed in the Solr model. To display the data in the model, right-click on the datastore and select View Data.
The data in the datastore gets displayed. It is the same data that was added to the Solr collection and later loaded into the Hive table solr from the Hive external table wlslog defined over the Solr collection.

Creating the Model for Oracle Database

We created a logical schema as an interface through which ODI may access Oracle Database, but we have not yet defined a data model for the Oracle Database table into which to load the Solr data. To create a model for Oracle Database, select New Model from the drop-down list in Designer > Models. In Model Definition specify a Name (OracleDatabase) and select the Technology as Oracle. Select the Logical Schema OracleDatabase. Select the Oracle Default Action Group. Select the Reverse Engineer tab. Specify the Mask as WLSLOG and the Characters to Remove from Table Alias also as WLSLOG. Select Types of objects to reverse-engineer as Table. Select Standard and Context as Global. All of these are pre-selected by default except the Mask and Characters to Remove from Table Alias. Click on Reverse Engineer. Click on Yes in the Confirmation dialog that prompts that the data will be saved. A new model WLSLOG gets reverse-engineered from the Oracle Database table OE.WLSLOG. The datastore should initially be empty, as we have yet to integrate data from Solr. Right-click on the WLSLOG datastore and select View Data. The empty table for the WLSLOG datastore gets displayed.

Creating an Integration Project

Next, create an integration project for the integration of the Hive-Solr data into Oracle Database. Select Designer > Projects. Select New Project from the drop-down list. In Project Definition specify a Name. Click on Save to add the integration project. As we shall be using the IKM File-Hive To Oracle to integrate the Hive data into Oracle Database, we need to import the IKM into the integration project. Right-click on Knowledge Modules > Integration and select Import Knowledge Modules. In Import Knowledge Modules select the IKM File-Hive To Oracle and click on OK. An Import Report gets displayed; click on Close. The IKM File-Hive To Oracle gets imported into the integration project.

Creating an Integration Interface

An integration project alone does not contain enough configuration to integrate the Hive-Solr data; we need to define an integration interface. Right-click on First Folder > Interfaces and select New Interface. In Interface Definition specify a Name (Solr-OracleDatabase). Select the Mapping tab. Two regions get displayed: one for the dataset of the source datastores and another for the target datastore. Select the Solr datastore created from the Hive table solr and drag and drop it into the region for the source dataset. The source dataset gets added, and the diagram lists the columns in the source datastore. Similarly, select the target datastore WLSLOG and drag and drop it into the region for the target datastore. Click on Yes in the Automap dialog. The target datastore also gets added. Click on the Quick-Edit tab. The source and target datastores are listed. For the Target Datastore WLSLOG table, select Staging Area in the Execute On column for each of the columns. Select the Flow tab. The default flow diagram shows the flow of data into a staging area in the target datastore; we need the staging area to be in the source datastore. Select the Overview tab. Select the Staging Area Different From Target checkbox. Select the Hive:Solr model datastore. Select the Flow tab.
The flow diagram now shows the staging area in the source datastore, and the IKM Selector is set to IKM File-Hive To Oracle. Click on Save to save the integration interface configuration. A new integration interface gets added.

Running the Interface

In this section we run the integration interface to integrate the Hive-Solr data into Oracle Database. Right-click on the Solr-OracleDatabase interface and select Execute. Click on OK in the Execution dialog. An Information dialog indicates that the session has started; click on OK. Three MapReduce jobs run to integrate the Hive data into Oracle Database. The OE.WLSLOG Oracle Database model datastore that was empty should have data integrated into it after the interface is run. Right-click on the WLSLOG datastore and select View Data. The data integrated from Hive gets displayed. The data integrated into Oracle Database may also be selected using a SELECT query in SQL*Plus (for example, SELECT * FROM OE.WLSLOG). The 7 documents integrated from Solr via Hive get listed. In this tutorial we integrated data from the Solr search engine, via a Hive table, into Oracle Database 11g in Oracle Data Integrator 11g.
Blog Post: SQL is Dead (NOT), Long Live SQL!
I participated in an Oracle Academy Ask the Experts webcast in November 2014, with the title "The Code Under the Covers, aka the Database Under the App." The objective of the webcast was to encourage professors to teach, and students to learn, SQL (the Structured Query Language) and PL/SQL (Procedural Language extensions to SQL). In preparing for the talk, I found myself thinking about Platonic Idealism. Strange, eh? So I thought I'd share some of these thoughts on my blog. I hope you find them interesting enough to share and comment. First, it is very much worth noting that while SQL is "old" - the first commercial SQL database was released by Oracle waaaay back in 1979 - it's not going anywhere. In fact, it is getting more important than ever, more entrenched, and more widely used. And I think that part of the reason for the stickiness of SQL has to do with the aforementioned Platonic Idealism, or to be more precise, it has to do with how different the underlying philosophy of SQL is from that of Platonic Idealism. Platonic Idealism is summarized on Wikipedia as follows: "A particular tree, with a branch or two missing, possibly alive, possibly dead, and with the initials of two lovers carved into its bark, is distinct from the abstract form of Tree-ness. A Tree is the ideal that each of us holds that allows us to identify the imperfect reflections of trees all around us." Surely you remember that from your Uni philosophy course? I did, sort of. And then the following idea came to me a few weeks ago: Wait a minute! Platonic idealism sounds an awful lot like object-orientation. You have a class, which is the "perfect" and "ideal" thing. Then you have instances of that class and sub-classes of that class, all of which may vary in some way from the ideal. Don't they seem quite similar? The Platonic Ideal and the Class. It was even mentioned at Wikipedia: "The language for much of the article talks about 'instantiations', inherence, forms, etc... Sounds very much like inheritance/etc... from computer science. Perhaps this is deliberate, perhaps written by a comp. sci. person, perhaps it's totally my perception." So here's the thing: I don't believe that humans see the world according to Platonic Idealism. In other words, while object orientation may have some value inside computer systems, I don't think that humans exist in the world in an O-O fashion. When I stand in the forest down the street from my house and look at the trees, I don't see them as variations from some ideal Tree. There is no such thing in the world or in my head. Instead, I see lots of discrete entities that share characteristics. Here's another way to put it: I ingest data through my senses (see the different kinds of bark on the trees, hear the winds rustling through the leaves, taste the air blowing off the river), and my brain identifies patterns. It then uses those patterns to develop strategies for surviving, reproducing and thriving. So I can look at an invasive buckthorn (which I cut down by the hundreds each week) and a big, old oak tree and think to myself: "They are both trees." Which really means that they can be grouped together by common attributes. They are, in short, a set. I believe that humans naturally think about sets of things in the world as a basic, evolved mental strategy for getting by in the world. And if that is true, it is very easy to see why SQL was such a remarkable breakthrough back in the 70s (kudos to Codd, Date, IBM research labs, and Ellison).
And why it played (and plays) such a critical role in the Information, Internet, Mobile and Internet of Things Eras. SQL synchs up so well with how our brains naturally operate that it is hard to imagine another data language different enough to supplant it. A much more likely scenario is that the SQL language will be changed to meet new requirements (as will the underlying SQL engines, such as Oracle Database, MySQL and so on). I'm not really arguing that SQL is "forever." I expect that at some point the whole computing paradigm will shift in ways we can't even imagine today, and SQL will then become irrelevant somehow. But as long as we write code the way we do today, still build apps the way we do, and still need databases to hold data, we'll find ourselves relying on an ever-more-powerful SQL language to manipulate our data. SQL is Dead (NOT), Long Live SQL!
Blog Post: Tnsnames Checker Utility
I have made available for free a utility that will parse a tnsnames.ora file and report back on anything it doesn't like, such as duplicate entries, invalid characters, redefinitions of parameters, errors, etc. It's a small utility, based on the ANTLR4 parser/compiler generator tool. I've had an interest in compilers and parsers for many, many years (more than I care to remember) but this is only my second ever parser tool of any great use. You will need Java 6 (which is actually version 1.6 when you run java -version) or higher. It has been tested with 1.6 and 1.7; I don't have 1.8 yet. (I ~~loathe~~ don't actually like Java, but in this case I ~~gritted my teeth and got dirty~~ made an exception!) It works on Windows or Linux. The program itself is downloadable from here, and there is a small pdf file which tries to explain it as well. Enjoy. Source code will be up on GitHub at some point soon, but it's getting too late tonight to start fiddling! Its grammar is based on the Oracle specification for an 11gR2 tnsnames.ora file.
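For illustration, a hypothetical tnsnames.ora fragment showing the kind of duplicate entry the utility is meant to flag (the alias and host names here are made up):

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost1)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )

# Duplicate alias: a second definition of ORCL, which the checker should report
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost2)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )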
Blog Post: How to Download Files from the Command Line
More and more websites offer the option of downloading data of interest as comma-separated files, Excel spreadsheets, or other formats. We usually access those websites from a browser such as Firefox or Chrome and perform the download. But what if we need to download the files programmatically? Can we do the download from a script using the Linux, Unix, or Windows command line? The answer is Curl. Curl is a command-line tool that lets us transfer data using URL syntax. Precisely because it is a command-line tool, we can use it inside our scripts to transfer data unattended. We can download curl from the following URL: http://curl.haxx.se/ In my particular case, and as an exercise for this post, I downloaded and installed the Windows version of Curl. Next I download, from the command line, a comma-separated file published by the Government of the City of Buenos Aires as part of the Buenos Aires Data initiative.

C:\Users\PC\Documents> curl -O https://recursos-data.buenosaires.gob.ar/ckan2/acceso-informacion-publica/acceso-informacion-publica-2013.csv
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  283k  100  283k    0     0   230k      0  0:00:01  0:00:01 --:--:--  230k

C:\Users\PC\Documents> dir acc*.*
 El volumen de la unidad C no tiene etiqueta.
 El número de serie del volumen es: 3C13-8990

 Directorio de C:\Users\PC\Documents

02/12/2014  11:01 p.m.           290,342 acceso-informacion-publica-2013.csv
               1 archivos        290,342 bytes
               0 dirs  618,304,126,976 bytes libres

So now you know: if you need to download files from the command line, you can use Curl.
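For unattended use inside a script, it helps to check curl's exit status; a small shell sketch using the same URL as above (-f makes curl fail on HTTP errors, -sS keeps it quiet except for error messages, --retry retries transient failures):

#!/bin/bash
URL="https://recursos-data.buenosaires.gob.ar/ckan2/acceso-informacion-publica/acceso-informacion-publica-2013.csv"
# -O saves the file under its remote name in the current directory
if curl -fsS --retry 2 -O "$URL"; then
    echo "Downloaded: $(basename "$URL")"
else
    echo "Download failed" >&2
    exit 1
fi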
Blog Post: OMS and OMR Performance- Part III, Metrics Page
Before heading off to UKOUG's Tech 14 conference, I thought I would jump back from the agents performance page and look into a very important page in the Oracle Management Service (OMS) and Oracle Management Repository (OMR) regarding metrics. Standard metrics collection is demanding in itself, so when we add plugins, metric extensions and manual metric collection changes, it's important to know how these changes can impact the performance of the Enterprise Manager 12c (EM12c) Cloud Control environment. We return to the same location in the EM12c console: Setup – Manage Cloud Control – Repository. The Repository page has three tabs; we are currently working with the middle tab, Metrics.

Metric Collections and Rows via Bubble Map

We'll jump into the first graph, which is a fantastic bubble graph (love seeing us use these more in EM12c console pages!). Note how the data is distributed across the graph: the left axis shows the Number of Rows Loaded vs. the bottom axis, the Number of Collections. This information is important, as it shows that our heaviest hitters are the blue and the orange circles. If we hover our mouse over the blue circle, we get to see a few more details: we can now see that the blue circle represents the metric collections for cluster information, and we can see not just the two graphing points but the amount of data loaded in MB. We can use this information to make decisions on updating collection intervals to ease stress on the EM12c OMS and OMR. If we then focus on the orange circle, we can view the same type of detailed information: there are considerably more collections on the InfiniBand. This is expected, as this is the network connectivity between the nodes on our engineered systems. The number of rows is higher, too, but note that the MB of data is no more than what the EM12c had to handle for the cluster data being uploaded. We can use this data to see whether the collection interval pressure justifies the impact to the OMS and OMR. As we work through the rest of the data offered on this page, these are two important pain points to keep in the back of our mind: metric data uploaded vs. number of collections.

Investigating Bubble Map Performance Data

Now let's say we want to dig deeper into the InfiniBand metric info that's been shown here. We can double-click on the orange circle in the graph and we'll be taken to specific detail regarding this one metric, broken down by target and metric. Now we see the top 25 metrics' data load information not just for the parent metric target type, but broken down by metric specifics. We can quickly see that the Switch Port Configuration Monitor accounts for the most metric data. As the data is compiled and uploaded as one collection, the bubbles are superimposed on top of each other. If we switch to a Target view, a very different story is presented: six different collection interval schedules are [most likely] displayed here. In the center, you can see the superimposed, interconnected bubbles, but the large red bubble is of significant interest. If we hover our cursor over this bubble: one scan listener (scan listener 3 for an Exalogic environment) is uploading more data, and more often, than the rest of the environment? This is over 26% of the total metric impact for InfiniBand on the EM12c. Investigating this, reviewing agent patch levels and comparing collection intervals would be a good idea!
Classic View of Metric Data

For those who prefer a more standard graph of the data, the right-side graph displays the high-level data in a second format. You also have the option, instead of the top 10, to display the 15 or 20 top metrics.

Metric Alerts Per Day

Awareness of metric alerts per day can be valuable, especially when there are OMS, OMR or agent patches missing! I can commonly look at this graph and tell quickly whether an environment has skipped applying important EM12c patches (these can be located in the master note Enterprise Manager Base Platform (All Releases) (Doc ID 822485.1)). Now, you may see the breaks in the graph and wonder what's up with that: this environment has been patched and down for quarterly maintenance. We can see this when we click on the Table View link and see the Unavailable sections. This also gives you quick access to the recent, raw data without having to query the MGMT$EVENTS view in the OMR directly or use EM CLI. Once we close this view, we can go back and highlight the different links below the graph to show advanced options. For the Opened, Closed and Backlogged Metric Alerts, we can view Problem Analysis, Metric Details or go to the Target home for this metric data. This is so cool that I'm going to do a separate blog post to do it justice, so be patient on this topic…

Top Ten Metric Collection Errors

The last graph on this page is another one that can quickly give away missing patches. It covers 30 days, so if you are seeing hundreds of metric collection errors, you should first verify that there aren't any patches that address metric collections of the type you are experiencing. If that isn't the case, investigate the error messages for the collections in the MGMT$METRIC_ERROR_HISTORY view in the OMR. You can start with something as simple as:

SELECT TARGET_NAME, METRIC_NAME, COLL_NAME, ERROR_MESSAGE
FROM   MGMT$METRIC_ERROR_HISTORY
WHERE  ERROR_TYPE = 'ERROR'
AND    COLLECTION_TIMESTAMP >= SYSDATE - 30;

There is still a lot to cover in this series, but now it's time to get ready to head over to Liverpool, England next week!

Tags: EM12c Performance
Comment on How to Download Files from the Command Line
Another good option is wget.
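For reference, the same download from the post above done with wget would simply be:

wget https://recursos-data.buenosaires.gob.ar/ckan2/acceso-informacion-publica/acceso-informacion-publica-2013.csv

Like curl -O, wget saves the file under its remote name in the current directory.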
Wiki Page: Container Database (CDB) with Pluggable Databases (PDBs) in Oracle 12c RAC
Creating Pluggable Databases (PDBs) and Cloning Pluggable Databases (PDBs) in a RAC Environment

This technique copies a source PDB from a CDB and plugs the copy into the same CDB or into another CDB. The Oracle Multitenant option together with Oracle 12c RAC enables consolidated, scalable and reliable environments. The technique of cloning pluggable databases (PDBs) is suitable for situations such as:

1. Testing an application patch for your production RAC pluggable database: you first clone your production application into a cloned PDB, then patch and test the clone.
2. Diagnosing performance issues or performing performance regression tests on your application: because this cannot be done in parallel with production in the same database, you clone the PDB into another CDB.

Creating a Multitenant Database in an Oracle RAC Environment

Create a multitenant database ('orcl') with two pluggable databases (PDBs), pdb1 and pdb2, in the Oracle RAC environment. While creating it, select both RAC nodes from the 'Selected Nodes' list. Check the summary for the two pluggable databases on the selected nodes with the 'Node List' and 'No of Pluggable Databases' options. Check the pluggable database (PDB) instances in the RAC instances (orcl1, orcl2), and check the status of instances orcl1 and orcl2 along with the ASM instances. A pluggable database (PDB) can be opened on selected instances; for example, pluggable database PDB2 can be in mounted mode in instance 1 (orcl1) and read-write mode in instance 2 (orcl2). Before cloning pluggable database PDB1, keep PDB1 open in read-only mode. The cloning method will create the 'clone_pdb1' pluggable database automatically in both instances (a SQL sketch of the clone flow is given below). Open the cloned pluggable database (CLONE_PDB1) in both database instances (orcl1, orcl2); the cloned pluggable database will have con_id 5 in both database instances.

Connecting to Cloned Pluggable Databases Using the SCAN Address

A default service is started for the container database (CDB1) and the pluggable databases (PDB1 and PDB2). The connection method for the container database, pluggable databases, and cloned pluggable database using the SCAN address:

SQL> connect sys/oracle@racnroll-scan:1521/cdb1 as sysdba
SQL> connect sys/oracle@racnroll-scan:1521/pdb1 as sysdba
SQL> connect sys/oracle@racnroll-scan:1521/pdb2 as sysdba
SQL> connect sys/oracle@racnroll-scan:1521/clone_pdb1 as sysdba

Conclusion: We might assume that when a pluggable database (PDB) is cloned in a cluster it is available on only one node, but that is wrong. Whenever you clone a pluggable database (PDB), it is available on all the nodes of the cluster, which means the PDB remains scalable through the clone method. This behavior is the same with 2 nodes or more. Cloning in an Oracle 12c RAC environment also makes our life simpler.
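The clone itself is performed through the console screens described above; in SQL the equivalent flow would look roughly like the following sketch (the names match the walkthrough; in 12.1 the source PDB must be open read-only while it is copied):

SQL> alter pluggable database pdb1 close immediate;
SQL> alter pluggable database pdb1 open read only;
SQL> create pluggable database clone_pdb1 from pdb1;
SQL> alter pluggable database clone_pdb1 open;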
Common User Effects in Container Database (CDB) vs Pluggable Databases (PDBs)

Test-1: Change the password of a common user while one pluggable database (PDB) is closed.

[oracle@12casm ~]$ . oraenv
ORACLE_SID = [cdb1] ? cdb1
The Oracle base remains unchanged with value /u01/app/oracle

Log in to the container database (cdb1) as 'sysdba':

[oracle@12casm ~]$ sqlplus /nolog
SQL> connect sys/oracle@192.168.56.150:1521/cdb1 as sysdba
SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           MOUNTED
PDB2                           MOUNTED
PDB3                           MOUNTED

SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL> create user c##1 identified by oracle container=all;

User created.

SQL> grant create session to c##1 container=all;

Grant succeeded.

SQL> select distinct username from cdb_users where username='C##1 ';

no rows selected

SQL> select username, common, con_id from cdb_users where username like 'C##%';

USERNAME   COM     CON_ID
---------- --- ----------
C##1       YES          1
C##1       YES          3

SQL> grant create session to c##1 container=all;

Grant succeeded.

Check the connection for the newly created user 'c##1' against pluggable database pdb1:

SQL> connect c##1/oracle@localhost:1521/pdb1
Connected.

Log in to the container database (cdb1) as 'sysdba', close the pluggable database (pdb1), and change the password for the user 'c##1':

SQL> connect sys/oracle@192.168.56.150:1521/cdb1 as sysdba
Connected.
SQL> alter pluggable database pdb1 close;

Pluggable database altered.

SQL> alter user c##1 identified by oracle123;

User altered.

SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL> connect c##1/oracle@localhost:1521/pdb1
ERROR:
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.

SQL> connect c##1/oracle123@localhost:1521/pdb1
Connected.

Conclusion: Changing the password of a common user in the container database (CDB) affects all pluggable databases (PDBs) in that container.

Test-2: Create a new pluggable database (PDB) in a container database (cdb1) where common users already exist.

[oracle@localhost ~]$ . oraenv
ORACLE_SID = [oracle] ? cdb1
The Oracle base has been set to /u01/app/oracle
[oracle@localhost ~]$ sqlplus /nolog
SQL> connect sys/oracle@192.168.56.101:1521/cdb1 as sysdba
Connected.
SQL> create user c##2 identified by oracle container=all;

User created.

SQL> grant create session to c##2 container=all;

Grant succeeded.

SQL> select username, con_id from cdb_users where username='C##2';

USERNAME       CON_ID
---------- ----------
C##2                1
C##2                4
C##2                3

SQL> select name, open_mode, con_id from v$pdbs;

NAME                 OPEN_MODE      CON_ID
-------------------- ---------- ----------
PDB$SEED             READ ONLY           2
PDB2                 READ WRITE          3
PDB1                 READ WRITE          4

SQL> show con_id

CON_ID
------------------------------
1

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> CREATE PLUGGABLE DATABASE pdb5 ADMIN USER pdb5_admin IDENTIFIED BY oracle ROLES=(CONNECT) FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdbseed','/u01/app/oracle/oradata/cdb1/pdb5');

Pluggable database created.

SQL> select name, con_id, open_mode from v$pdbs;

NAME                     CON_ID OPEN_MODE
-------------------- ---------- ----------
PDB$SEED                      2 READ ONLY
PDB2                          3 READ WRITE
PDB1                          4 READ WRITE
PDB5                          5 MOUNTED

SQL> alter pluggable database pdb5 open;

Pluggable database altered.

SQL> connect pdb5_admin/oracle@pdb5
Connected.
SQL> show con_id

CON_ID
------------------------------
5

SQL> show con_name

CON_NAME
------------------------------
PDB5

SQL> connect sys/oracle@192.168.56.101:1521/cdb1 as sysdba
Connected.
SQL> select username, con_id from cdb_users where username='C##2';

USERNAME       CON_ID
---------- ----------
C##2                1
C##2                5
C##2                3
C##2                4

Conclusion: An existing common user in the container database (CDB), created with the container=all option, is automatically created in newly created pluggable databases (PDBs).

Test-3: Create a new common user in the container database (cdb1) while a pluggable database (PDB) is in the closed state.

SQL> connect sys/oracle@192.168.56.101:1521/cdb1 as sysdba
Connected.
SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ------------------
PDB$SEED                       READ ONLY
PDB2                           READ WRITE
PDB1                           READ WRITE
PDB5                           READ WRITE

SQL> alter pluggable database pdb5 close;
Pluggable database altered.
SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ------------------
PDB$SEED                       READ ONLY
PDB2                           READ WRITE
PDB1                           READ WRITE
PDB5                           MOUNTED

SQL> create user c##3 identified by oracle container=all;
User created.
SQL> grant create session to c##3 container=all;
Grant succeeded.
SQL> select username, con_id from cdb_users where username='C##3';

USERNAME                      CON_ID
------------------------- ----------
C##3                               1
C##3                               3
C##3                               4

SQL> alter pluggable database pdb5 open;
Pluggable database altered.
SQL> select username, con_id from cdb_users where username='C##3';

USERNAME                      CON_ID
------------------------- ----------
C##3                               1
C##3                               5
C##3                               4
C##3                               3

SQL> connect c##3/oracle@pdb1
Connected.
SQL> connect c##3/oracle@pdb2
Connected.
SQL> connect c##3/oracle@pdb5
Connected.

Conclusion: A newly created common user in the container database (CDB) is automatically created in every pluggable database, even one that is closed at the time; the user becomes visible in, and can connect to, the PDB once it is opened.
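For readers who want to reproduce the cloning workflow described at the top of this page from SQL*Plus rather than from the GUI, the session below is a minimal sketch under stated assumptions: the SCAN connect string, the clone name clone_pdb1 and the INSTANCES clause match the scenario above, but the exact commands are my reconstruction, not taken verbatim from the article (on 12.1 the source PDB must be open read-only before it can be cloned; with OMF/ASM no FILE_NAME_CONVERT clause is needed).

SQL> connect sys/oracle@racnroll-scan:1521/cdb1 as sysdba
-- Put the source PDB into read-only mode on all RAC instances
SQL> alter pluggable database pdb1 close immediate instances=all;
SQL> alter pluggable database pdb1 open read only instances=all;
-- Create the clone from the read-only source
SQL> create pluggable database clone_pdb1 from pdb1;
-- Open the clone on all instances, then return the source to read-write
SQL> alter pluggable database clone_pdb1 open instances=all;
SQL> alter pluggable database pdb1 close immediate instances=all;
SQL> alter pluggable database pdb1 open instances=all;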
↧
Wiki Page: Upgrading
Upgrading Databases to the latest available release
↧
Blog Post: Influence execution plan without adding hints
RSS content We often encounter situations where a SQL statement runs optimally when it is hinted but sub-optimally otherwise. We can use hints to get the desired plan, but it is not desirable to use hints in production code: hints are extra code that must be managed, checked and controlled with every Oracle patch or upgrade. Moreover, hints freeze the execution plan, so you will not be able to benefit from a possibly better plan in the future. So how can we make such queries use the optimal plan, until a provably better plan comes along, without adding hints? The answer is to use SQL Plan Management, which ensures that you get the desired plan while still letting it evolve over time as the optimizer discovers better ones.

To demonstrate the procedure, I have created two tables, CUSTOMER and PRODUCT, having CUST_ID and PROD_ID respectively as primary keys. The PROD_ID column in the CUSTOMER table is a foreign key and is indexed.

SQL> conn hr/hr
drop table customer purge;
drop table product purge;
create table product(prod_id number primary key, prod_name char(100));
create table customer(cust_id number primary key, cust_name char(100), prod_id number references product(prod_id));
create index cust_idx on customer(prod_id);
insert into product select rownum, 'prod'||rownum from all_objects;
insert into customer select rownum, 'cust'||rownum, prod_id from product;
update customer set prod_id = 1000 where prod_id > 1000;
exec dbms_stats.gather_table_stats (USER, 'customer', cascade => true);
exec dbms_stats.gather_table_stats (USER, 'product', cascade => true);

-- First, let's have a look at the undesirable plan, which does not use the index on the PROD_ID column of the CUSTOMER table.

SQL> conn / as sysdba
alter system flush shared_pool;
conn hr/hr
variable prod_id number
exec :prod_id := 1000
select cust_name, prod_name from customer c, product p where c.prod_id = p.prod_id and c.prod_id = :prod_id;
select * from table (dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------
SQL_ID b257apghf1a8h, child number 0
-------------------------------------
select cust_name, prod_name from customer c, product p where c.prod_id = p.prod_id and c.prod_id = :prod_id

Plan hash value: 3134146364
---------------------------------------------------------------------------------------
| Id | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |              |       |       |   412 (100)|          |
|  1 |  NESTED LOOPS               |              | 88734 |   17M |   412   (1)| 00:00:01 |
|  2 |   TABLE ACCESS BY INDEX ROWID| PRODUCT     |     1 |   106 |     2   (0)| 00:00:01 |
|* 3 |    INDEX UNIQUE SCAN        | SYS_C0010600 |     1 |       |     1   (0)| 00:00:01 |
|* 4 |   TABLE ACCESS FULL         | CUSTOMER     | 88734 | 9098K |   410   (1)| 00:00:01 |
---------------------------------------------------------------------------------------

-- Load the undesirable plan into a baseline, to establish a SQL plan baseline for this query into which the desired plan will be loaded later.

SQL> variable cnt number
exec :cnt := dbms_spm.load_plans_from_cursor_cache(sql_id => 'b257apghf1a8h');
col sql_text for a35 word_wrapped
col enabled for a15
select sql_text, sql_handle, plan_name, enabled from dba_sql_plan_baselines where sql_text like 'select cust_name, prod_name%';

SQL_TEXT                     SQL_HANDLE            PLAN_NAME                       ENABLED
---------------------------- --------------------- ------------------------------- -------
select cust_name, prod_name  SQL_7d3369334b24a117  SQL_PLAN_7ucv96d5k988rfe19664b  YES

-- Disable the undesirable plan so that it will not be used.
SQL> variable cnt number
exec :cnt := dbms_spm.alter_sql_plan_baseline (-
  sql_handle      => 'SQL_7d3369334b24a117',-
  plan_name       => 'SQL_PLAN_7ucv96d5k988rfe19664b',-
  attribute_name  => 'enabled',-
  attribute_value => 'NO');
col sql_text for a35 word_wrapped
col enabled for a15
select sql_text, sql_handle, plan_name, enabled from dba_sql_plan_baselines where sql_text like 'select cust_name, prod_name%';

SQL_TEXT                     SQL_HANDLE            PLAN_NAME                       ENABLED
---------------------------- --------------------- ------------------------------- -------
select cust_name, prod_name  SQL_7d3369334b24a117  SQL_PLAN_7ucv96d5k988rfe19664b  NO

-- Now we hint the above SQL to generate the optimal plan, which uses the index on the PROD_ID column of the CUSTOMER table.

SQL> conn hr/hr
variable prod_id number
exec :prod_id := 1000
select /*+ index(c) */ cust_name, prod_name from customer c, product p where c.prod_id = p.prod_id and c.prod_id = :prod_id;
select * from table (dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------
SQL_ID 5x2y12dzacv7w, child number 0
-------------------------------------
select /*+ index(c)*/ cust_name, prod_name from customer c, product p where c.prod_id = p.prod_id and c.prod_id = :prod_id

Plan hash value: 4263155932
-----------------------------------------------------------------------------------------------------
| Id | Operation                            | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                     |              |       |       |  1618 (100)|          |
|  1 |  NESTED LOOPS                        |              | 88734 |   17M |  1618   (1)| 00:00:01 |
|  2 |   TABLE ACCESS BY INDEX ROWID        | PRODUCT      |     1 |   106 |     2   (0)| 00:00:01 |
|* 3 |    INDEX UNIQUE SCAN                 | SYS_C0010600 |     1 |       |     1   (0)| 00:00:01 |
|* 4 |   TABLE ACCESS BY INDEX ROWID BATCHED| CUSTOMER     | 88734 | 9098K |  1616   (1)| 00:00:01 |
|  5 |    INDEX FULL SCAN                   | SYS_C0010601 | 89769 |       |   169   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

-- Now we load the hinted plan into the baseline.
-- Note that we use the SQL_ID and PLAN_HASH_VALUE of the hinted statement and the SQL_HANDLE of the unhinted statement, i.e. we are associating the hinted plan with the unhinted statement.

SQL> variable cnt number
exec :cnt := dbms_spm.load_plans_from_cursor_cache(-
  sql_id          => '5x2y12dzacv7w', -
  plan_hash_value => 4263155932, -
  sql_handle      => 'SQL_7d3369334b24a117');

-- Verify that there are now two plans loaded for that SQL statement:
--   the unhinted, sub-optimal plan is disabled;
--   the hinted, optimal plan, which even though it is for a "different query" can work with the earlier unhinted query (the SQL_HANDLE is the same), is enabled.
SQL> col sql_text for a35 word_wrapped
col enabled for a15
select sql_text, sql_handle, plan_name, enabled from dba_sql_plan_baselines where sql_text like 'select cust_name, prod_name%';

SQL_TEXT                     SQL_HANDLE            PLAN_NAME                       ENABLED
---------------------------- --------------------- ------------------------------- -------
select cust_name, prod_name  SQL_7d3369334b24a117  SQL_PLAN_7ucv96d5k988rea320380  YES
select cust_name, prod_name  SQL_7d3369334b24a117  SQL_PLAN_7ucv96d5k988rfe19664b  NO

-- Verify that the hinted plan is used even though we do not use the hint in the query.
-- The Note section confirms that the baseline has been used for this statement.

SQL> variable prod_id number
exec :prod_id := 1000
select cust_name, prod_name from customer c, product p where c.prod_id = p.prod_id and c.prod_id = :prod_id;
select * from table (dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------
SQL_ID b257apghf1a8h, child number 0
-------------------------------------
select cust_name, prod_name from customer c, product p where c.prod_id = p.prod_id and c.prod_id = :prod_id

Plan hash value: 4263155932
-----------------------------------------------------------------------------------------------------
| Id | Operation                            | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                     |              |       |       |  1618 (100)|          |
|  1 |  NESTED LOOPS                        |              | 88734 |   17M |  1618   (1)| 00:00:01 |
|  2 |   TABLE ACCESS BY INDEX ROWID        | PRODUCT      |     1 |   106 |     2   (0)| 00:00:01 |
|* 3 |    INDEX UNIQUE SCAN                 | SYS_C0010600 |     1 |       |     1   (0)| 00:00:01 |
|* 4 |   TABLE ACCESS BY INDEX ROWID BATCHED| CUSTOMER     | 88734 | 9098K |  1616   (1)| 00:00:01 |
|  5 |    INDEX FULL SCAN                   | SYS_C0010601 | 89769 |       |   169   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("P"."PROD_ID"=:PROD_ID)
4 - filter("C"."PROD_ID"=:PROD_ID)

Note
-----
- SQL plan baseline SQL_PLAN_7ucv96d5k988rea320380 used for this statement

With this baseline solution you need not employ permanent hints in the production code, and hence there are no upgrade issues. Moreover, the plan will evolve over time as the optimizer discovers better ones.

Note: Using this method you can swap the plan only for a query that is fundamentally the same, i.e. you should first be able to get the desirable plan by adding hints, modifying an optimizer setting, playing around with statistics, etc., and then associate the sub-optimally performing statement with that optimal plan.

I hope this post was useful. Your comments and suggestions are always welcome!

References: http://www.oracle.com/technetwork/issue-archive/2014/14-jul/o44asktom-2196080.html

Copyright © ORACLE IN ACTION [ Influence execution plan without adding hints ], All Rights Reserved.
2014. The post Influence execution plan without adding hints appeared first on ORACLE IN ACTION .
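A follow-up on the point about plans evolving: plans captured after a baseline exists are held as unaccepted until they are verified. The snippet below is a minimal sketch of triggering that verification manually with DBMS_SPM, using the sql_handle from this demo; on 12c the automatic SPM Evolve Advisor task largely takes care of this for you.

SQL> set long 100000
SQL> variable report clob
-- Verify any unaccepted plans recorded for this statement and accept those that prove better
SQL> exec :report := dbms_spm.evolve_sql_plan_baseline(sql_handle => 'SQL_7d3369334b24a117');
SQL> print report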
↧
Blog Post: How to Solve Difficult Problems
We just had a mastermind meeting in the Danish Oracle User Group. The meeting was called by a member who was facing an interesting technical challenge that could be addressed in many different ways. But instead of just having two developers discuss the problem, they sent out an invitation to all members of the user group to participate in the discussion. Lured by the promise of an interesting technical challenge (and the offer of food and beer), many experienced people showed up. We had a great time and through the meeting found and refined several useful, new solutions. If you are faced with an important and difficult design decision, don't make it alone. Get five or ten people together, present the problem and let them come up with alternatives. Ideally, invite people from outside your project and organization to benefit from different viewpoints and different experiences. A modern technical infrastructure is so complex and has so many options that no one person, no matter how skilled, can evaluate all options.
↧
Forum Post: Error Log function and exception. PLS-00201
I'm trying to create an error log table that displays package name, function name, error message, and a timestamp.

EXCEPTION
WHEN OTHERS THEN
ERROR_MSG := SUBSTR(SQLERRM, 1, 100);
INSERT INTO ERROR_LOG (PACKAGE_NAME, FUNCTION_NAME, ERROR_MESSAGE, ERROR_TIMESTAMP, USER_ID)
VALUES ('USER_PKG', 'INSERT_FUNCT', ERROR_MSG, CURRENT_TIMESTAMP);

It's saying ERROR_MSG must be declared but I'm not sure how to do that.
↧
Forum Post: Re: Error Log function and exception. PLS-00201
Error_msg error_log.error_message%type;

should do the trick, in the declare section, or above the begin if it's a procedure or function.

Cheers, Norm [ TeamT ]
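Putting Norm's suggestion together with the original snippet, a complete handler might look like the sketch below. Note that the original INSERT names five columns but supplies only four values, which would raise ORA-00913 once the declaration problem is fixed; here the USER function is supplied for USER_ID as an assumption, and the surrounding function body is hypothetical.

CREATE OR REPLACE FUNCTION insert_funct RETURN NUMBER
IS
  -- %TYPE keeps the variable's size in sync with the column definition
  error_msg error_log.error_message%TYPE;
BEGIN
  -- ... the function's real work goes here ...
  RETURN 0;
EXCEPTION
  WHEN OTHERS THEN
    error_msg := SUBSTR(SQLERRM, 1, 100);
    INSERT INTO error_log (package_name, function_name, error_message, error_timestamp, user_id)
    VALUES ('USER_PKG', 'INSERT_FUNCT', error_msg, CURRENT_TIMESTAMP, USER);  -- USER supplied as an assumption
    RETURN -1;
END insert_funct;
/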
↧
Blog Post: Managing Oracle Database 12c with Enterprise Manager – Part XV
We are discussing the management of Oracle Database 12c in Oracle Enterprise Manager 12c. In our previous blog post on this topic, we started to look into the Performance Hub of Enterprise Manager Database Express 12c. Let us move to the Activity tab. The red line shows the CPU cores available to this VBox image, and at one point in time this line has clearly been exceeded. The total CPU wait class can be seen in green, the red portion shows the Concurrency wait class, and so on. We can drill down to the actual SQL ID and user session at that point in time. However, we can also change the top dimension of this graph to the actual wait event, as seen below. Notice that the graph has now changed to display the individual wait events such as db file sequential read, log file parallel write, row cache lock and so on. You can drill down further on any of these wait events and find the actual SQL and session causing them. You can also change the lower dimension: here we have drilled down on the wait event db file sequential read and changed the lower dimension to "object", which displays the objects causing this wait event, and also the SQL below. Pretty powerful. You can select a number of other dimensions such as SQL ID, Object, Instance, PDB, Service, User ID, and so on in this graph.
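The Activity tab is essentially a visualization of Active Session History. If you prefer SQL, a rough equivalent of the wait-event dimension can be sketched as below; this is a minimal illustrative query, not the exact statement EM Express runs, and reading ASH requires the Diagnostics Pack license.

-- Top wait events over the last 10 minutes, from Active Session History
select nvl(event, 'ON CPU') as event, count(*) as ash_samples
from v$active_session_history
where sample_time > systimestamp - interval '10' minute
group by nvl(event, 'ON CPU')
order by ash_samples desc;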
↧
Blog Post: Installation of Oracle Solaris 11 Repository Updates (SRU) - Offline/Online
Introduction: Oracle Solaris 11 uses the Image Packaging System (IPS). Packages can be installed online over the network, or from a local repository without an Internet connection. The introduction of IPS simplifies the installation of packages and their dependencies: before IPS, patch and package management was a tedious task, and system administrators had to check all dependencies manually before installing a package. IPS therefore makes system administrators' lives easier. Oracle releases periodic package updates for the repositories; these are made available to customers as Support Repository Updates (SRUs), which can be downloaded by customers who have a valid support contract for Oracle Solaris 11. These updates include updates to existing packages on the system and can introduce new packages with each SRU. Oracle Solaris 11 supports installing an SRU locally, without an Internet connection, or online from the Oracle Solaris support repository.

This article demonstrates installation of Support Repository Updates (SRU 11.2.2.8.0 and SRU 11.2.3.4.1) on an Oracle Solaris 11.2 host using a local repository, and online installation of SRU 11.2.3.4.1 using "https://pkg.oracle.com/solaris/support/". Many servers are not connected to the Internet for security reasons; in that case the SRU cannot be applied online and we need to configure it in a local repository.

Two separate My Oracle Support documents index the SRU updates for Oracle Solaris 11.1 and 11.2:
Oracle Solaris 11.1 Support Repository Updates (SRU) Index (Doc ID 1501435.1)
Oracle Solaris 11.2 Support Repository Updates (SRU) Index (Doc ID 1672221.1)

In this article we will be working on Oracle Solaris 11.2.

Oracle Solaris 11.2 Support Repository Updates
SRU        | Release Date | IPS Repository                                  | Readme | Version
11.2.3.4.1 | 14-OCT-14    | 19664167 (Repository) 19664174 (Install Guide)  | Readme | 0.5.11-0.175.2.3.0.4.1
11.2.2.8.0 | 26-SEP-14    | 19691311 (Repository) 19691315 (Install Guide)  | Readme | 0.5.11-0.175.2.2.0.8.0
11.2.2.7.0 | 26-SEP-14    | 19682824 (Repository) 19682832 (Install Guide)  | Readme | 0.5.11-0.175.2.2.0.7.0
11.2.2.5.0 | 26-SEP-14    | 19562447 (Repository) 19591381 (Install Guide)  | Readme | 0.5.11-0.175.2.2.0.5.0
11.2.1.5.0 | 15-AUG-14    | 19348159                                        | Readme | 0.5.11-0.175.2.1.0.5.0

1 - Installation of SRU using a LOCAL REPOSITORY

1.1 - Preparation:
1.1.1) Download and copy the patches to the server
1.1.2) Execute the install-repo.ksh script to create a local repository for the specific SRU
1.1.3) Check the versions of existing packages

1.2 - Installation:
1.2.1) Execute a dry run to check the changes available in the SRU
1.2.2) Update the packages
1.2.3) Reboot the system and verify the updated package versions

1.1.1 - Download and copy the patches to the server

Oracle Solaris SRU 11.2.2.8.0 is available for download via patches 19691311 and 19691315. The first patch contains the repository files and the second contains the installation instructions.
- Log in to https://support.oracle.com, Patches & Updates tab
- Search for the patches listed above and download them
- Upload them to the server using WinSCP or CD/DVD

1.1.2 - Execute the install-repo.ksh script to create a local repository for the specific SRU

Create two repositories for the two different SRUs (11.2.2.8.0 and 11.2.3.4.1). These repositories will be configured locally in two different directories.
SRU Version | Repository Location
11.2.3.4.1  | file:///IPS_SRU_11_2_3_4_1/
11.2.2.8.0  | file:///IPS_SRU_11_2_2_8_0/

- Verify the existing local repository configured on the server that will be updated with the SRU:

root@soltest:~# pkg publisher
PUBLISHER  TYPE    STATUS  P LOCATION
solaris    origin  online  F file:///IPS/
root@soltest:~#
root@soltest:/IPS# ls
COPYRIGHT  pkg5.repository  README-repo-iso.txt  NOTICES  publisher  readme.txt
root@soltest:/IPS#

If you want to know how to configure a local repository in Solaris 11.2, please refer to this blog post: http://appsdbaworkshop.blogspot.com/2014/09/configure-ips-repository-on-oracle.html

- Configure the repositories for the SRUs

SRU 11.2.2.8.0:
root@soltest2:/sw_home/sol_sru/sol_sru_11_2_2_8_0_files# ls
install-repo.ksh  p19691315_1100_SOLARIS64.zip  README-zipped-repo.txt  p19691311_1100_SOLARIS64_1of2.zip  readme_11_2_2_8_0.html  sol-11_2_2_8_0-incr-repo_md5sums.txt  p19691311_1100_SOLARIS64_2of2.zip  readme_11_2_2_8_0.txt
root@soltest2:/sw_home/sol_sru/sol_sru_11_2_2_8_0_files# ./install-repo.ksh -d /IPS_SRU_11_2_2_8_0
Using p19691311_1100_SOLARIS64 files for sol-11_2_2_8_0-incr-repo download.
Uncompressing p19691311_1100_SOLARIS64_1of2.zip...done.
Uncompressing p19691311_1100_SOLARIS64_2of2.zip...done.
Repository can be found in /IPS_SRU_11_2_2_8_0.
root@soltest2:/sw_home/sol_sru/sol_sru_11_2_2_8_0_files#

SRU 11.2.3.4.1:
root@soltest2:/sw_home/sol_sru/sol_sru_11_2_3_4_1_files# ls
install-repo.ksh  p19664174_1100_SOLARIS64.zip  README-zipped-repo.txt  p19664167_1100_SOLARIS64_1of2.zip  readme_11_2_3_4_1.html  sol-11_2_3_4_1-incr-repo_md5sums.txt  p19664167_1100_SOLARIS64_2of2.zip  readme_11_2_3_4_1.txt
root@soltest2:/sw_home/sol_sru/sol_sru_11_2_3_4_1_files# ./install-repo.ksh -d /IPS_SRU_11_2_3_4_1
Using p19664167_1100_SOLARIS64 files for sol-11_2_3_4_1-incr-repo download.
Uncompressing p19664167_1100_SOLARIS64_1of2.zip...done.
Uncompressing p19664167_1100_SOLARIS64_2of2.zip...done.
Repository can be found in /IPS_SRU_11_2_3_4_1.
root@soltest2:/sw_home/sol_sru/sol_sru_11_2_3_4_1_files#

1.1.3 - Check the versions of existing packages

root@soltest2:~# pkg publisher
PUBLISHER  TYPE    STATUS  P LOCATION
solaris    origin  online  F file:///IPS/
root@soltest2:~# pkg info entire
Name: entire
Summary: Incorporation to lock all system packages to the same build
Description: This package constrains system package versions to the same build. WARNING: Proper system update and correct package selection depend on the presence of this incorporation. Removing this package will result in an unsupported system.
Category: Meta Packages/Incorporations
State: Installed
Publisher: solaris
Version: 0.5.11
Build Release: 5.11
Branch: 0.175.2.0.0.42.0
Packaging Date: June 24, 2014 07:38:32 PM
Size: 5.46 kB
FMRI: pkg://solaris/entire@0.5.11,5.11-0.175.2.0.0.42.0:20140624T193832Z
root@soltest2:~# pkg update -nv
No updates available for this image.
root@soltest2:~#

- There are no updates available in the current repository (the default, configured from the installation media).

1.2 - Installation:
1.2.1) Execute a dry run to check the changes available in SRU 11.2.2.8.0
1.2.2) Install SRU 11.2.2.8.0
1.2.3) Reboot the system and verify the updated package versions
1.2.4) Execute a dry run to check the changes available in SRU 11.2.3.4.1
1.2.5) Install SRU 11.2.3.4.1
1.2.6) Reboot and verify the incremental SRU update

1.2.1 - Execute a dry run to check the changes available in the SRU

Configure the repositories for both SRU versions and check for the changes using a dry run.
root@soltest2:~# pkg publisher
PUBLISHER  TYPE    STATUS  P LOCATION
solaris    origin  online  F file:///IPS/
root@soltest2:~# pkg set-publisher -G '*' -g file:///IPS_SRU_11_2_2_8_0/ solaris
root@soltest2:~# pkg publisher
PUBLISHER  TYPE    STATUS  P LOCATION
solaris    origin  online  F file:///IPS_SRU_11_2_2_8_0/
root@soltest2:~#

- Execute a dry run to check how many packages will be updated and which actions will be performed by SRU 11.2.2.8.0.
- This SRU will update 94 packages and consume 1.17 GB of disk space; the update process creates a new boot environment, and a reboot is needed to activate the patched environment.

root@soltest2:~# pkg update -nv
Packages to remove: 1
Packages to update: 94
Estimated space available: 70.51 GB
Estimated space to be consumed: 1.17 GB
Create boot environment: Yes
Activate boot environment: Yes
Create backup boot environment: No
Rebuild boot archive: Yes
Changed packages: solaris
consolidation/install/install-incorporation 0.5.11,5.11-0.175.2.0.0.5.0:20130107T161003Z -> None
consolidation/SunVTS/SunVTS-incorporation
========== list of packages ==========
7.21.2,5.11-0.175.2.0.0.42.1:20140623T022054Z -> 7.21.2,5.11-0.175.2.1.0.5.0:20140801T185058Z
web/server/apache-22 2.2.27,5.11-0.175.2.0.0.42.1:20140623T022811Z -> 2.2.27,5.11-0.175.2.2.0.3.0:20140826T022414Z
Editable files to change: Update: etc/driver/drv/lmrc.conf etc/motd
root@soltest2:~#

- Similarly, configure the repository for SRU 11.2.3.4.1 and execute "pkg update" in dry-run mode to verify the changes available in that SRU.

root@soltest2:~# pkg publisher
PUBLISHER  TYPE    STATUS  P LOCATION
solaris    origin  online  F file:///IPS_SRU_11_2_2_8_0/
root@soltest2:~# pkg set-publisher -G '*' -g file:///IPS_SRU_11_2_3_4_1/ solaris
root@soltest2:~# pkg publisher
PUBLISHER  TYPE    STATUS  P LOCATION
solaris    origin  online  F file:///IPS_SRU_11_2_3_4_1/
root@soltest2:~#

Dry run, SRU 11.2.3.4.1: this SRU will update 104 packages, needs 1.18 GB of disk space, creates a new boot environment and requires a reboot to activate the patched environment.

root@soltest2:~# pkg update -nv
Packages to remove: 1
Packages to update: 104
Estimated space available: 66.03 GB
Estimated space to be consumed: 1.18 GB
Create boot environment: Yes
Activate boot environment: Yes
Create backup boot environment: No
Rebuild boot archive: Yes
Changed packages: solaris
consolidation/install/install-incorporation 0.5.11,5.11-0.175.2.0.0.5.0:20130107T161003Z -> None
compress/unzip
========== list of packages ==========
2.2.27,5.11-0.175.2.0.0.42.1:20140623T022811Z -> 2.2.27,5.11-0.175.2.2.0.3.0:20140826T022414Z
Editable files to change: Update: etc/driver/drv/lmrc.conf etc/motd
root@soltest2:~#

1.2.2 - Install the SRU

SRU 11.2.2.8.0 updates 94 packages, whereas SRU 11.2.3.4.1 updates 135 packages. In this demonstration SRU 11.2.2.8.0 is installed first and SRU 11.2.3.4.1 second, since the first SRU installs the smaller number of packages. Packages are updated on the system using the "pkg update" command.
root@soltest2:~# pkg publisher
PUBLISHER  TYPE    STATUS  P LOCATION
solaris    origin  online  F file:///IPS_SRU_11_2_2_8_0/
root@soltest2:~# beadm list
BE      Active Mountpoint Space  Policy Created
--      ------ ---------- -----  ------ -------
solaris NR     /          29.56G static 2014-10-18 00:15
root@soltest2:~#
root@soltest2:~# pkg update
Packages to remove: 1
Packages to update: 94
Create boot environment: Yes
Create backup boot environment: No
DOWNLOAD    PKGS   FILES      XFER (MB)    SPEED
Completed   95/95  4517/4517  225.3/225.3  0B/s
PHASE                        ITEMS
Removing old actions         693/693
Installing new actions       783/783
Updating modified actions    5201/5201
Updating package state database  Done
Updating package cache       95/95
Updating image state         Done
Creating fast lookup database  Done
Updating package cache       1/1
A clone of solaris exists and has been updated and activated. On the next boot the Boot Environment solaris-1 will be mounted on '/'. Reboot when ready to switch to this updated BE.
Updating package cache 1/1
---------------------------------------------------------------------------
NOTE: Please review release notes posted at: http://www.oracle.com/pls/topic/lookup?ctx=solaris11&id=SERNS
---------------------------------------------------------------------------

- Check the boot environments (BEs): the pkg update command created a new BE, solaris-1, which will be activated on the next reboot.

root@soltest2:~# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris   N      /          5.59M  static 2014-10-18 00:15
solaris-1 R      -          30.53G static 2014-10-19 17:03
root@soltest2:~#

- Select the solaris-1 BE. After the reboot:

root@soltest2:~# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris   -      -          5.59M  static 2014-10-18 00:15
solaris-1 NR     /          30.66G static 2014-10-19 17:03
root@soltest2:~#

- The solaris-1 boot environment is now active.

1.2.3 - Reboot the system and verify the updated package versions

As listed above, all 94 packages have been updated; we verify this using the version of the "entire" package.

root@soltest2:~# pkg info entire
Name: entire
Summary: entire incorporation including Support Repository Update (Oracle Solaris 11.2.2.8.0).
Description: This package constrains system package versions to the same build. WARNING: Proper system update and correct package selection depend on the presence of this incorporation. Removing this package will result in an unsupported system. For more information see https://support.oracle.com/rs?type=doc&id=1672221.1.
Category: Meta Packages/Incorporations
State: Installed
Publisher: solaris
Version: 0.5.11 (Oracle Solaris 11.2.2.8.0)
Build Release: 5.11
Branch: 0.175.2.2.0.8.0
Packaging Date: September 26, 2014 07:02:09 PM
Size: 5.46 kB
FMRI: pkg://solaris/entire@0.5.11,5.11-0.175.2.2.0.8.0:20140926T190209Z

1.2.4 - Execute a dry run to check the changes available in SRU 11.2.3.4.1

- Set the repository to SRU 11.2.3.4.1:

root@soltest2:~# pkg publisher
PUBLISHER  TYPE    STATUS  P LOCATION
solaris    origin  online  F file:///IPS_SRU_11_2_2_8_0/
root@soltest2:~# pkg set-publisher -G '*' -g file:///IPS_SRU_11_2_3_4_1/ solaris
root@soltest2:~# pkg publisher
PUBLISHER  TYPE    STATUS  P LOCATION
solaris    origin  online  F file:///IPS_SRU_11_2_3_4_1/
root@soltest2:~#

- Execute the dry run for SRU 11.2.3.4.1:

root@soltest2:~# pkg update -nv
Packages to update: 36
Estimated space available: 65.09 GB
Estimated space to be consumed: 335.24 MB
Create boot environment: Yes
Activate boot environment: Yes
Create backup boot environment: No
Rebuild boot archive: Yes
Changed packages: solaris
compress/unzip 6.0,5.11-0.175.2.0.0.42.1:20140623T010359Z -> 6.0,5.11-0.175.2.3.0.4.0:20141002T141542Z
consolidation/ips/ips-incorporation
================= list of packages =================
0.5.11,5.11-0.175.2.0.0.42.2:20140624T190214Z -> 0.5.11,5.11-0.175.2.3.0.1.2:20140905T183635Z
web/curl 7.21.2,5.11-0.175.2.1.0.5.0:20140801T185058Z -> 7.21.2,5.11-0.175.2.3.0.4.0:20141002T141741Z
Editable files to change: Update: etc/motd
root@soltest2:~#

This SRU will now update only 36 packages. When we checked for updates before installing SRU 11.2.2.8.0, 135 packages needed updating; after installing SRU 11.2.2.8.0, only 36 packages still need updating.

- Check the boot environments before installing the update:

root@soltest2:~# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris   -      -          5.59M  static 2014-10-18 00:15
solaris-1 NR     /          30.66G static 2014-10-19 17:03

1.2.5 - Install SRU 11.2.3.4.1

root@soltest2:~# pkg update
Packages to update: 36
Create boot environment: Yes
Create backup boot environment: No
DOWNLOAD    PKGS   FILES      XFER (MB)  SPEED
Completed   36/36  2472/2472  64.9/64.9  0B/s
PHASE                        ITEMS
Removing old actions         118/118
Installing new actions       133/133
Updating modified actions    3240/3240
Updating package state database  Done
Updating package cache       36/36
Updating image state         Done
Creating fast lookup database  Done
Updating package cache       1/1
A clone of solaris-1 exists and has been updated and activated. On the next boot the Boot Environment solaris-2 will be mounted on '/'. Reboot when ready to switch to this updated BE.
Updating package cache 1/1
---------------------------------------------------------------------------
NOTE: Please review release notes posted at: https://support.oracle.com/rs?type=doc&id=1672221.1
---------------------------------------------------------------------------
root@soltest2:~#

- A new boot environment is created for SRU 11.2.3.4.1:

root@soltest2:~# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris   -      -          5.59M  static 2014-10-18 00:15
solaris-1 N      /          150.0K static 2014-10-19 17:03
solaris-2 R      -          31.31G static 2014-10-19 17:47
root@soltest2:~#

1.2.6 - Reboot and verify the incremental SRU update

- Reboot the host and boot from the solaris-2 BE
- Select the "solaris-2" entry
- Check the boot environments:
root@soltest2:~# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris   -      -          5.59M  static 2014-10-18 00:15
solaris-1 -      -          151.0K static 2014-10-19 17:03
solaris-2 NR     /          31.43G static 2014-10-19 17:47

- Verify the package versions:

root@soltest2:~# pkg info entire
Name: entire
Summary: entire incorporation including Support Repository Update (Oracle Solaris 11.2.3.4.1).
Description: This package constrains system package versions to the same build. WARNING: Proper system update and correct package selection depend on the presence of this incorporation. Removing this package will result in an unsupported system. For more information see https://support.oracle.com/rs?type=doc&id=1672221.1.
Category: Meta Packages/Incorporations
State: Installed
Publisher: solaris
Version: 0.5.11 (Oracle Solaris 11.2.3.4.1)
Build Release: 5.11
Branch: 0.175.2.3.0.4.1
Packaging Date: October 2, 2014 10:39:23 PM
Size: 5.46 kB
FMRI: pkg://solaris/entire@0.5.11,5.11-0.175.2.3.0.4.1:20141002T223923Z

To roll back to a specific SRU, reboot the host into its respective boot environment.

2 - Installation of SRU using an ONLINE REPOSITORY

In Oracle Solaris 11 the repository is configured by default to "http://pkg.oracle.com/solaris/release/". This repository is publicly available, and anyone can use it to install packages. To access the Oracle support repositories you need a valid CSI (Customer Support Identifier) for Oracle Solaris 11. In this section the default repository will be repointed to the Oracle Solaris support repository, "https://pkg.oracle.com/solaris/support/".

2.1 Check eligibility
2.2 Download the key and certificate
2.3 Install the key and certificate, and configure the support repository
2.4 Execute a dry run to check for updates
2.5 Install the SRU
2.6 Reboot and verify package versions

2.1 - Check eligibility

Access https://pkg-register.oracle.com and log in with a valid CSI account; this allows you to download the personal certificates used to configure your support repository. Click on "Request Certificates".
- Based on your support contract, the page lists the products for which access has been granted. Here we can see "Access granted" for Oracle Solaris 11 Support.

2.2 - Download the key and certificate

This page provides the details for your product. Make sure "Oracle Solaris 11 Support" is listed, then click on the certificate page to download the key and the certificate.
- Download and save both the key and the certificate.

2.3 - Install the key and certificate, and configure the support repository

The certificate download page illustrates the instructions for installing the key and certificate.
Copy the certificate and key from the download location to the directory /var/pkg/ssl:

soladmin@soltest1:/var/pkg/ssl$ pwd
/var/pkg/ssl
soladmin@soltest1:/var/pkg/ssl$ ls
soladmin@soltest1:/var/pkg/ssl$ cd -
/export/home/soladmin/Downloads
soladmin@soltest1:~/Downloads$ ls
pkg.oracle.com.certificate.pem  pkg.oracle.com.key.pem
soladmin@soltest1:~/Downloads$ su -
Password:
Oracle Corporation SunOS 5.11 11.2 June 2014
root@soltest1:~# cd /export/home/soladmin/Downloads
root@soltest1:/export/home/soladmin/Downloads# cp * /var/pkg/ssl
root@soltest1:/export/home/soladmin/Downloads# cd /var/pkg/ssl
root@soltest1:/var/pkg/ssl# ls
pkg.oracle.com.certificate.pem  pkg.oracle.com.key.pem
root@soltest1:/var/pkg/ssl#

- Default package publisher:

root@soltest1:~# pkg publisher
PUBLISHER  TYPE    STATUS  P LOCATION
solaris    origin  online  F http://pkg.oracle.com/solaris/release/
root@soltest1:~#

- Install the certificate and configure the support repository:

root@soltest1:/var/pkg/ssl# pkg set-publisher -k /var/pkg/ssl/pkg.oracle.com.key.pem \
-c /var/pkg/ssl/pkg.oracle.com.certificate.pem \
-g https://pkg.oracle.com/solaris/support/ \
-G http://pkg.oracle.com/solaris/release/ solaris
root@soltest1:/var/pkg/ssl#

- The default publisher is now configured to use the Oracle support repository:

root@soltest1:~# pkg publisher
PUBLISHER  TYPE    STATUS  P LOCATION
solaris    origin  online  F https://pkg.oracle.com/solaris/support/
root@soltest1:~#

2.4 - Execute a dry run to check for updates

The dry run checks which packages need updating and lists the changes it will perform. All installation steps are the same as for the local repository installation.

root@soltest1:~# pkg update -nv
Packages to remove: 1
Packages to update: 104
Estimated space available: 89.91 GB
Estimated space to be consumed: 1.25 GB
Create boot environment: Yes
Activate boot environment: Yes
Create backup boot environment: No
Rebuild boot archive: Yes
Changed packages: solaris
consolidation/install/install-incorporation 0.5.11,5.11-0.175.2.0.0.5.0:20130107T161003Z -> None
compress/unzip
======================== LIST OF PACKAGES ========================
6.0,5.11-0.175.2.0.0.42.1:20140623T010359Z -> 6.0,5.11-0.175.2.3.0.4.0:20141002T141542Z
consolidation/SunVTS/SunVTS-incorporation 2.2.27,5.11-0.175.2.0.0.42.1:20140623T022811Z -> 2.2.27,5.11-0.175.2.2.0.3.0:20140826T022414Z
Editable files to change: Update: etc/driver/drv/lmrc.conf etc/motd
root@soltest1:~#

2.5 - Install the SRU

This installs the SRU, creates a new boot environment and requires a reboot to enable the patched environment.

root@soltest1:~# pkg update
Packages to remove: 1
Packages to update: 104
Create boot environment: Yes
Create backup boot environment: No
DOWNLOAD    PKGS     FILES      XFER (MB)    SPEED
Completed   105/105  4609/4609  227.1/227.1  126k/s
PHASE                        ITEMS
Removing old actions         746/746
Installing new actions       851/851
Updating modified actions    5285/5285
Updating package state database  Done
Updating package cache       105/105
Updating image state         Done
Creating fast lookup database  Done
Updating package cache       1/1
A clone of solaris exists and has been updated and activated. On the next boot the Boot Environment solaris-1 will be mounted on '/'. Reboot when ready to switch to this updated BE.
Updating package cache 1/1
---------------------------------------------------------------------------
NOTE: Please review release notes posted at: http://www.oracle.com/pls/topic/lookup?ctx=solaris11&id=SERNS
---------------------------------------------------------------------------
You have mail in /var/mail/root

2.6 - Reboot and verify package versions

Reboot and select the solaris-1 boot environment.

root@soltest1:~# pkg info entire
Name: entire
Summary: entire incorporation including Support Repository Update (Oracle Solaris 11.2.3.4.1).
Description: This package constrains system package versions to the same build. WARNING: Proper system update and correct package selection depend on the presence of this incorporation. Removing this package will result in an unsupported system. For more information see https://support.oracle.com/rs?type=doc&id=1672221.1.
Category: Meta Packages/Incorporations
State: Installed
Publisher: solaris
Version: 0.5.11 (Oracle Solaris 11.2.3.4.1)
Build Release: 5.11
Branch: 0.175.2.3.0.4.1
Packaging Date: October 2, 2014 10:39:23 PM
Size: 5.46 kB
FMRI: pkg://solaris/entire@0.5.11,5.11-0.175.2.3.0.4.1:20141002T223923Z
root@soltest1:~#
root@soltest1:~# beadm list
BE        Active Mountpoint Space Policy Created
--        ------ ---------- ----- ------ -------
solaris   -      -          9.29M static 2014-10-17 19:57
solaris-1 NR     /          7.56G static 2014-10-17 22:03
root@soltest1:~#

3 - Conclusion

The Image Packaging System is an advanced package management system that makes maintenance and upgrades of the system easier. A single IPS server can serve multiple x86/SPARC clients. IPS fully supports virtualization with Oracle VM Server for SPARC (LDoms) and zones, and it reduces downtime for complex patch installations. Moreover, IPS is fully integrated with ZFS and boot environments, so you can install and roll back packages easily compared to Oracle Solaris 10.
↧
Blog Post: ASM DiskGroup Performance Statistics
The V$ASM_DISKGROUP_STAT view provides 12c ASM disk group performance statistics for the current disk groups. The data is extracted from the X$KFGRP_STAT source table. This view, like all other ASM views, is available in both the ASM and the database instance. The data for this view, when accessed from the ASM instance, includes information for all databases connected to the […] The post ASM DiskGroup Performance Statistics appeared first on VitalSoftTech .
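Since the post is truncated in this feed, here is a minimal sketch of querying the view. V$ASM_DISKGROUP_STAT exposes the same columns as V$ASM_DISKGROUP but does not trigger a fresh disk discovery, which makes it cheaper for routine monitoring; the column selection below is illustrative.

-- Capacity snapshot per disk group, without the discovery overhead of V$ASM_DISKGROUP
select group_number, name, state, type, total_mb, free_mb
from v$asm_diskgroup_stat
order by group_number;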
↧
Blog Post: APEX 503 – Service Unavailable – And you don’t know the APEX_PUBLIC_USER Password
It’s probably Monday morning. The caffeine from your first cup of coffee has not quite worked its way into your system. The cold sweat running down the back of your neck provides an unpleasant contrast to the warm blast of panicked users as they call up to inform you that the Application is down. APEX, which has been behaving impeccably all this time, has suddenly decided to respond to all requests with: 503 – Service Unavailable. The database is up. The APEX Listener is up. But something else is up: APEX just doesn’t want to play. Better still, the person who set up APEX in the first place has long since departed the company. You have no idea how the APEX Listener was configured. Out of sympathy with your current predicament, what follows is: how to confirm that this problem is related to the APEX_PUBLIC_USER (the most likely cause); a quick and fairly dirty way of getting things back up and running again; and how to stop this happening again. Note: these steps were tested on an Oracle Developer Day VM with a 12c database running on Oracle Linux 6.5. In this environment, APEX is configured to run with the APEX Listener.

Confirming the APEX user name

First of all, we want to make sure that APEX is connecting to the database as APEX_PUBLIC_USER. To do this, we need to check the default.xml file. Assuming you’re on a Linux box:

cd /u01/oracle/apexListener/apex
cat default.xml

If you don’t see an entry for db.username, then APEX_PUBLIC_USER is the one being used. If there is an entry for db.username, then that is the name of the database user you need to check in the following steps. For now, I’ll assume it’s set to the default. Incidentally, there will also be an entry for db.password. This will almost certainly be encrypted, so it is unlikely to be of use to you here.

Confirming the status of the APEX_PUBLIC_USER

The most likely reason for your current troubles is that the APEX_PUBLIC_USER’s database password has expired. To verify this, and to get the information we’ll need to fix it, connect to the database and run the query:

select account_status, profile
from dba_users
where username = 'APEX_PUBLIC_USER'
/

If the account_status is EXPIRED, then the issue you are facing is that the APEX_PUBLIC_USER password has expired and therefore APEX can’t connect to the database. The other item of interest here is the PROFILE assigned to the user. We need to check this to make sure there is no PASSWORD_VERIFY_FUNCTION assigned to the profile: if there is, you need to supply the existing password in order to change it, which is a bit of a problem if you don’t know what it is. Whilst we’re at it, we need to check whether there is any restriction in place on the length of time, or the number of password changes, that must pass before a password can be reused. In my case, APEX_PUBLIC_USER has been assigned the DEFAULT profile.

select resource_name, limit
from dba_profiles
where profile = 'DEFAULT'
and resource_name in ( 'PASSWORD_REUSE_TIME', 'PASSWORD_REUSE_MAX', 'PASSWORD_VERIFY_FUNCTION' )
/

When I ran this, I was lucky and got:

RESOURCE_NAME                  LIMIT
------------------------------ --------------------
PASSWORD_REUSE_TIME            UNLIMITED
PASSWORD_REUSE_MAX             UNLIMITED
PASSWORD_VERIFY_FUNCTION       NULL

So there are no restrictions on password reuse for this profile, and neither is there any verify function. If your APEX_PUBLIC_USER is attached to a profile that has these restrictions, you’ll want to change this before resetting the password.
As we’re going to have to assign this user to another profile anyway, we may as well get it out of the way now.

The new profile for the APEX_PUBLIC_USER

Oracle’s advice for the APEX_PUBLIC_USER is to set PASSWORD_LIFE_TIME to UNLIMITED. Whilst it’s only the four password parameters we checked above (PASSWORD_LIFE_TIME, PASSWORD_REUSE_TIME, PASSWORD_REUSE_MAX and PASSWORD_VERIFY_FUNCTION) that we need to set in the profile to get out of our current predicament, it’s worth also including a limit on the maximum number of failed login attempts, if only to provide some limited protection against brute-forcing. In fact, I’ve just decided to use the settings from the DEFAULT profile for the attributes I don’t need to change:

create profile apex_public limit
failed_login_attempts 10
password_life_time unlimited
password_reuse_time unlimited
password_reuse_max unlimited
password_lock_time 1
composite_limit unlimited
sessions_per_user unlimited
cpu_per_session unlimited
cpu_per_call unlimited
logical_reads_per_session unlimited
logical_reads_per_call unlimited
idle_time unlimited
connect_time unlimited
private_sga unlimited
/

As we don’t specify a PASSWORD_VERIFY_FUNCTION, none is assigned to the new profile. NOTE: it’s best to check the settings in your own default profile, as they may well differ from those listed here. Next, we assign this profile to APEX_PUBLIC_USER:

alter user apex_public_user profile apex_public
/

The next step is to reset the APEX_PUBLIC_USER password, which is the only way to un-expire the user.

No password, no problem

Remember, in this scenario we don’t know the current password for APEX_PUBLIC_USER. We don’t want to reset the password to just anything, because we’re not sure how to set the password in the DAD used by the APEX Listener. First of all, we need to get the password hash for the current password. To do this:

select password
from sys.user$
where name = 'APEX_PUBLIC_USER'
/

You’ll get back a hex string – let’s say something like ‘DF37145AF23CCA4′. The next step is to reset the APEX_PUBLIC_USER password:

alter user apex_public_user identified by sometemporarypassword
/

We now immediately set it back to its original value using IDENTIFIED BY VALUES:

alter user apex_public_user identified by values 'DF37145AF23CCA4'
/

At this point, APEX should be back up and running.

Once the dust settles…

Whilst your APEX installation may now be back up and running, you now have a database user whose password never changes. Although the APEX_PUBLIC_USER has only limited system and table privileges, it also has access to any database objects that are available to PUBLIC. Whilst this is in line with Oracle’s currently documented recommendations, you may consider it a situation you want to address from a security perspective. If there is a sensible way of changing the APEX_PUBLIC_USER password without breaking anything, then you may consider it preferable to simply set up some kind of reminder mechanism so that you know when the password is due to expire and can change it ahead of time. You would then be able to let the password expire as normal. If you’re wondering why I’m being a bit vague here, it’s simply because I don’t currently know of a sensible way of doing this. If you do, it would be really helpful if you could let me know :)
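On the "reminder mechanism" idea: one low-tech option (a sketch of my own, not from the original post) is to periodically check the expiry date that the data dictionary already tracks, so you get a warning before the account expires again.

-- Warn when APEX_PUBLIC_USER's password is due to expire within 14 days
-- (expiry_date is NULL when the profile's PASSWORD_LIFE_TIME is UNLIMITED)
select username, account_status, expiry_date
from dba_users
where username = 'APEX_PUBLIC_USER'
and expiry_date < sysdate + 14
/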
↧
Comment on APEX 503 – Service Unavailable – And you don’t know the APEX_PUBLIC_USER Password
Why don't you just alter the APEX_PUBLIC_USER password and rerun the APEX Listener setup? Or am I missing something?
↧
Blog Post: You Can Get There Making All Right-Hand Turns But …
It would appear that some DBAs are still using the optimizer_index_cost_adj parameter to make index access paths more ‘desirable’ to the optimizer. In decades past this might have been a good strategy; however, with the improvement in statistics gathering in recent releases of Oracle, this may no longer be the case. Let’s look at an example to see why this might do more ‘harm’ than good. The optimizer_index_cost_adj parameter was first provided in Oracle 9i as a way to ‘gently’ influence the Cost-Based Optimizer to favor index scans over full table scans. It did that rather efficiently, and it still does, which brings us to the inherent problem of using it: it does its job all TOO well sometimes. For efficiency and reduced physical I/O, sometimes a full table scan is better than using an index, a fact that’s been known in Oracle circles for years now. Still, some sites are setting optimizer_index_cost_adj to a non-default value, possibly through migrations to newer releases that fail to modify the init.ora file configured for the older version. Using 11.2.0.4, here’s an example of what can occur when this parameter is altered to what might seem a ‘reasonable’ value. We start by disabling the automatic statistics gathering performed by the CREATE INDEX statement:

SQL> alter session set "_optimizer_compute_index_stats"=false;
Session altered.

Now, create the index on the EMP table:

SQL> create index emp_idx on emp(job);
Index created.

Let’s set optimizer_index_cost_adj to 10 and see what plans we get for two queries: one that should generate a full table scan and one that should use the index:

SQL> set autotrace on
SQL> alter session set optimizer_index_cost_adj=10;
Session altered.

SQL> select empno, ename, job, sal
  2  from emp
  3  where job = 'CLERK';

EMPNO      ENAME      JOB       SAL
---------- ---------- --------- ----------
7369       SMITH      CLERK     800
7876       ADAMS      CLERK     1100
7900       JAMES      CLERK     950
7934       MILLER     CLERK     1300
7369       SMITH      CLERK     800
...
7934       MILLER     CLERK     1300

1024 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 1472992808
---------------------------------------------------------------------------------------
| Id | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |         |  1024 | 39936 |     6   (0)| 00:00:01 |
|  1 |  TABLE ACCESS BY INDEX ROWID| EMP     |  1024 | 39936 |     6   (0)| 00:00:01 |
|* 2 |   INDEX RANGE SCAN          | EMP_IDX |  1024 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("JOB"='CLERK')

Note
-----
- dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
9 recursive calls
0 db block gets
204 consistent gets
4 physical reads
0 redo size
31902 bytes sent via SQL*Net to client
1247 bytes received via SQL*Net from client
70 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1024 rows processed

SQL> select empno, ename, job, sal
  2  from emp
  3  where job = 'PRESIDENT';

EMPNO      ENAME      JOB       SAL
---------- ---------- --------- ----------
7839       KING       PRESIDENT 5000
7869       JACK       PRESIDENT 5000

Execution Plan
----------------------------------------------------------
Plan hash value: 1472992808
---------------------------------------------------------------------------------------
| Id | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |         |     2 |    78 |     1   (0)| 00:00:01 |
|  1 |  TABLE ACCESS BY INDEX ROWID| EMP     |     2 |    78 |     1   (0)| 00:00:01 |
|* 2 |   INDEX RANGE SCAN          | EMP_IDX |     2 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("JOB"='PRESIDENT')

Note
-----
- dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
8 recursive calls
0 db block gets
40 consistent gets
1 physical reads
0 redo size
812 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed

Notice that both queries used the index. This is because the index cost was artificially lowered by the optimizer_index_cost_adj setting in force. This was a fairly small table (4098 rows), and 25% of the total row count was returned by the first query. Let’s now set the parameter back to its default and gather statistics on the schema:

SQL> alter session set optimizer_index_cost_adj=100;
Session altered.

SQL> exec dbms_stats.gather_schema_stats('BING')
PL/SQL procedure successfully completed.

SQL> select empno, ename, job, sal
  2  from emp
  3  where job = 'CLERK';

EMPNO      ENAME      JOB       SAL
---------- ---------- --------- ----------
7369       SMITH      CLERK     800
7876       ADAMS      CLERK     1100
7900       JAMES      CLERK     950
7934       MILLER     CLERK     1300
7369       SMITH      CLERK     800
...
7934       MILLER     CLERK     1300

1024 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3956160932
--------------------------------------------------------------------------
| Id | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |      |  1024 | 23552 |     9   (0)| 00:00:01 |
|* 1 |  TABLE ACCESS FULL| EMP  |  1024 | 23552 |     9   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("JOB"='CLERK')

Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
98 consistent gets
0 physical reads
0 redo size
31902 bytes sent via SQL*Net to client
1247 bytes received via SQL*Net from client
70 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1024 rows processed

SQL> select empno, ename, job, sal
  2  from emp
  3  where job = 'PRESIDENT';

EMPNO      ENAME      JOB       SAL
---------- ---------- --------- ----------
7839       KING       PRESIDENT 5000
7869       JACK       PRESIDENT 5000

Execution Plan
----------------------------------------------------------
Plan hash value: 1472992808
---------------------------------------------------------------------------------------
| Id | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |         |     2 |    46 |     2   (0)| 00:00:01 |
|  1 |  TABLE ACCESS BY INDEX ROWID| EMP     |     2 |    46 |     2   (0)| 00:00:01 |
|* 2 |   INDEX RANGE SCAN          | EMP_IDX |     2 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("JOB"='PRESIDENT')

Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
5 consistent gets
0 physical reads
0 redo size
812 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed

Now we get the ‘correct’ execution plans, as the query returning 25% of the total rows uses a full table scan to reduce I/O. This is because the index access cost has not been artificially modified to favor index scans. We still get an index scan for the second query, which returns 2 rows, as we should. What was good in earlier releases of Oracle may no longer provide benefit, since changes in the CBO and in statistics gathering can make such settings detrimental to performance, as the above example illustrates. This example was also executed against an EMP table containing around 1.5 million rows, with the same results; it isn’t difficult to see that arbitrarily doubling the I/O isn’t a good idea. Sometimes the status quo shouldn’t be maintained. It’s easy at times to carry forward settings that once provided benefit in older releases of Oracle. Such settings should be examined and tested before being passed on to production unaltered, as they may increase the work Oracle does to retrieve data. Testing is key; yes, it can prolong a migration to a newer release, but it could prove invaluable in reducing I/O and improving performance, as illustrated here. And it may prevent the DBA from hunting down a performance problem that could have been avoided.
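If you are auditing a system after a migration, a quick way to see whether this legacy setting is still in force is to check the parameter directly; a minimal sketch:

-- A non-default value here usually dates back to an older release's init.ora
select name, value, isdefault
from v$parameter
where name = 'optimizer_index_cost_adj';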
Sometimes it’s better to make a left-hand turn now and then.
↧