
Wiki Page: Streaming Kafka Messages to Oracle Database

Written by Deepak Vohra

A typical messaging system such as Apache Kafka consists of a message producer and a message consumer. Apache Kafka was introduced in an earlier tutorial. LinkedIn invented Apache Kafka and uses it to move data between systems; with its widespread use, LinkedIn needs to move large quantities of data quickly and reliably. Kafka provides resiliency and reliability while delivering high throughput. More than 800 billion messages, about 175 terabytes of data, are sent through LinkedIn's Kafka deployment each day, and more than 650 terabytes of data are consumed each day. While LinkedIn receives millions of messages per second, a relatively small-scale company may not require as much throughput; the number of Kafka brokers and clusters may be scaled according to the data volume requirements.

One common use case of Apache Kafka is to develop a Stream Data Platform, which serves two main purposes: data integration and stream processing. Data integration involves collecting streams of events and sending and storing them in a data store such as a relational database or HDFS. Stream processing is the continuous, real-time processing of data streams. A Stream Data Platform could be used for several different purposes. Consider the use case in which a user produces messages and the messages are streamed to Oracle Database. While Apache Kafka provides the publish/subscribe messaging system for such a use case, Apache Flume can be used to stream the messages to an Oracle sink.

The following sequence is used to stream Kafka messages to Oracle Database:

1. Start Oracle Database.
2. Create an Oracle Database table to receive Kafka messages.
3. Start the Kafka ZooKeeper.
4. Start the Kafka server.
5. Create a Kafka topic to which a Kafka producer sends messages.
6. Create another Kafka topic for a Kafka channel to be used by Apache Flume.
7. Start a Kafka producer.
8. Configure an Apache Flume agent with a source of type Kafka, a channel of type Kafka, and a sink of type JDBC (Oracle Database).
9. Start the Apache Flume agent.
10. Send messages from the Kafka producer.
11. The Kafka messages get streamed to the Oracle Database table.

The sequence of streaming messages from the Kafka producer to Oracle Database is shown in the following illustration. This tutorial has the following sections:

- Setting the Environment
- Creating an Oracle Database Table
- Starting Kafka
- Configuring an Apache Flume Agent
- Starting the Flume Agent
- Producing Messages at the Kafka Producer
- Querying the Oracle Database Table

Setting the Environment

The following software is required for this tutorial:

- Oracle Database
- Apache Flume 1.6
- Apache Kafka
- Stratio JDBC Sink
- Oracle JDBC Driver Jar
- Jooq
- Apache Maven
- Java 7

Create a directory to install the software (except Oracle Database) and set its permissions to global (777).

mkdir /flume
chmod -R 777 /flume
cd /flume

Download and extract the Apache Kafka tar file.

wget http://apache.mirror.iweb.ca/kafka/0.8.2.1/kafka_2.10-0.8.2.1.tgz
tar -xvf kafka_2.10-0.8.2.1.tgz

Download and extract the Apache Flume tar file. The Apache Flume version must be 1.6 for Kafka support.

wget http://archive.apache.org/dist/flume/stable/apache-flume-1.6.0-bin.tar.gz
tar -xvf apache-flume-1.6.0-bin.tar.gz

Copy the Kafka jars to the Flume classpath.

cp /flume/kafka_2.10-0.8.2.1/libs/* /flume/apache-flume-1.6.0-bin/lib

Set the environment variables for Oracle Database, Flume, Kafka, Maven, and Java.

vi ~/.bashrc
export MAVEN_HOME=/flume/apache-maven-3.3.3-bin
export FLUME_HOME=/flume/apache-flume-1.6.0-bin
export KAFKA_HOME=/flume/kafka_2.10-0.8.2.1
export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=ORCL
export FLUME_CONF=$FLUME_HOME/conf
export JAVA_HOME=/flume/jdk1.7.0_55
export PATH=/usr/lib/qt-3.3/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin:$FLUME_HOME/bin:$JAVA_HOME/bin:$MAVEN_HOME/bin:$ORACLE_HOME/bin:$KAFKA_HOME/bin
export CLASSPATH=$FLUME_HOME/lib/*

Download, compile, and package the Stratio JDBC sink, and copy the generated jar file to the Flume lib directory.

cp stratio-jdbc-sink-0.5.0-SNAPSHOT.jar $FLUME_HOME/lib

Copy the Oracle JDBC driver jar to the Flume lib directory.

cp ojdbc6.jar $FLUME_HOME/lib

Copy the Jooq jar to the Flume lib directory.

cp jooq-3.6.2 $FLUME_HOME/lib

Creating an Oracle Database Table

Start SQL*Plus and create the Oracle Database table that will store the Kafka messages streamed to it. Run the following SQL script to create a table called kafkamsg.

CREATE TABLE kafkamsg(msg VARCHAR(4000));

The Oracle Database table gets created.

Starting Kafka

Apache Kafka comprises the following main components:

- ZooKeeper server
- Kafka server
- Kafka topic(s)
- Kafka producer
- Kafka consumer

Start the Kafka ZooKeeper.

cd /flume/kafka_2.10-0.8.2.1
zookeeper-server-start.sh config/zookeeper.properties

The ZooKeeper server gets started. Start the Kafka server.

cd /flume/kafka_2.10-0.8.2.1
kafka-server-start.sh config/server.properties

The Kafka server gets started. We need to create two Kafka topics:

- Topic kafka-orcldb, to produce messages to be streamed to Oracle Database
- Topic kafkachannel, for the Flume channel of type Kafka

Run the following commands to create the two Kafka topics.

kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic kafka-orcldb
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic kafkachannel

The two Kafka topics get created. We only need to start the Kafka producer and not the Kafka consumer, as we shall be streaming the messages using Apache Flume rather than consuming them at a consumer. Run the producer with the following command to produce messages on topic kafka-orcldb.

kafka-console-producer.sh --broker-list localhost:9092 --topic kafka-orcldb

The Kafka producer gets started.

Configuring an Apache Flume Agent

Apache Flume makes use of a configuration file to configure the Flume source, channel, and sink. The Flume source, channel, and sink should be of the following types:

- Flume source of type Kafka
- Flume channel of type Kafka
- Flume sink of type JDBC

The Flume configuration properties are discussed in the following table.

Configuration Property | Description | Value
agent.sources | Sets the Flume source | kafkaSrc
agent.channels | Sets the Flume channel | channel1
agent.sinks | Sets the Flume sink | jdbcSink
agent.channels.channel1.type | Sets the channel type | org.apache.flume.channel.kafka.KafkaChannel
agent.channels.channel1.brokerList | Sets the channel broker list | localhost:9092
agent.channels.channel1.topic | Sets the Kafka channel topic | kafkachannel
agent.channels.channel1.zookeeperConnect | Sets the Kafka channel ZooKeeper host:port | localhost:2181
agent.channels.channel1.capacity | Sets the channel capacity | 10000
agent.channels.channel1.transactionCapacity | Sets the channel transaction capacity | 1000
agent.sources.kafkaSrc.type | Sets the source type | org.apache.flume.source.kafka.KafkaSource
agent.sources.kafkaSrc.channels | Sets the channel on the source | channel1
agent.sources.kafkaSrc.zookeeperConnect | Sets the source ZooKeeper host:port | localhost:2181
agent.sources.kafkaSrc.topic | Sets the Kafka source topic | kafka-orcldb
agent.sinks.jdbcSink.type | Sets the sink type | com.stratio.ingestion.sink.jdbc.JDBCSink
agent.sinks.jdbcSink.connectionString | Sets the connection URI for Oracle Database | jdbc:oracle:thin:@127.0.0.1:1521:ORCL
agent.sinks.jdbcSink.username | Sets the Oracle Database username | OE
agent.sinks.jdbcSink.password | Sets the Oracle Database password | OE
agent.sinks.jdbcSink.batchSize | Sets the batch size | 10
agent.sinks.jdbcSink.channel | Sets the channel on the sink | channel1
agent.sinks.jdbcSink.sqlDialect | Sets the SQL dialect; an Oracle-specific dialect is not provided, but the DERBY dialect can be used | DERBY
agent.sinks.jdbcSink.driver | Sets the Oracle Database JDBC driver class | oracle.jdbc.OracleDriver
agent.sinks.jdbcSink.sql | Sets the custom SQL to add data to Oracle Database | INSERT INTO kafkamsg(msg) VALUES(${body:varchar})

The flume.conf is listed:

agent.sources=kafkaSrc
agent.channels=channel1
agent.sinks=jdbcSink
agent.channels.channel1.type=org.apache.flume.channel.kafka.KafkaChannel
agent.channels.channel1.brokerList=localhost:9092
agent.channels.channel1.topic=kafkachannel
agent.channels.channel1.zookeeperConnect=localhost:2181
agent.channels.channel1.capacity=10000
agent.channels.channel1.transactionCapacity=1000
agent.sources.kafkaSrc.type = org.apache.flume.source.kafka.KafkaSource
agent.sources.kafkaSrc.channels = channel1
agent.sources.kafkaSrc.zookeeperConnect = localhost:2181
agent.sources.kafkaSrc.topic = kafka-orcldb
agent.sinks.jdbcSink.type = com.stratio.ingestion.sink.jdbc.JDBCSink
agent.sinks.jdbcSink.connectionString = jdbc:oracle:thin:@127.0.0.1:1521:ORCL
agent.sinks.jdbcSink.username=OE
agent.sinks.jdbcSink.password=OE
agent.sinks.jdbcSink.batchSize = 10
agent.sinks.jdbcSink.channel =channel1
agent.sinks.jdbcSink.sqlDialect=DERBY
agent.sinks.jdbcSink.driver=oracle.jdbc.OracleDriver
agent.sinks.jdbcSink.sql=INSERT INTO kafkamsg(msg) VALUES(${body:varchar})

Copy the Flume configuration file to the Flume conf directory.

cp flume.conf $FLUME_HOME/conf/flume.conf

Starting the Flume Agent

Next, run the Flume agent with the following command.

flume-ng agent --classpath --conf $FLUME_CONF/ -f $FLUME_CONF/flume.conf -n agent -Dflume.root.logger=INFO,console

The Flume agent gets started.

Producing Messages at the Kafka Producer

We already started a Kafka producer. Send messages at the Kafka producer: type a message and press Enter to send it. An empty line is also sent as a message. In the following illustration three messages have been produced, with the 2nd message being an empty message, which is also streamed.

Querying the Oracle Database Table

The messages produced at the Kafka producer are streamed to Oracle Database by the Flume agent. In SQL*Plus, run a SQL query to list the messages. The three messages, including the empty message, get listed. The messages produced at the Kafka producer are streamed as they are produced. Send more messages at the Kafka producer; they get streamed to Oracle Database and get listed with a SQL query.

In this tutorial we used Apache Flume to stream Kafka messages to Oracle Database.
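The query used to list the streamed messages is not shown above. A minimal sketch, assuming the kafkamsg table created earlier and the OE schema configured in the Flume sink, could look like this in SQL*Plus:

-- List the Kafka messages streamed into the Oracle Database table (sketch)
SET LINESIZE 200
COLUMN msg FORMAT A80
SELECT msg FROM kafkamsg;

-- Optionally, count how many messages have arrived so far
SELECT COUNT(*) FROM kafkamsg;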

Blog Post: Dynamic Actions Executing PL/SQL Code

In this example, drawn from the APEX dynamic actions demo, we will recreate a scenario in which a dynamic action executes PL/SQL code that updates the employees interactive report and increases their salaries by 10%.

Creating the Employees Interactive Report

The first thing we need to do is create an Interactive Report on the EMP table with the following SQL query:

SELECT e.EMPNO, e.ENAME, e.JOB, m.ename MGR, e.HIREDATE, e.SAL, e.COMM, d.dname DEPTNO
FROM EMP e, EMP m, DEPT d
WHERE e.mgr = m.empno
AND e.deptno = d.deptno

Creating the "Update Salary by 10%" Button

We create a button named "Actualizar Salario en un 10%" (Update Salary by 10%) inside the Employees interactive report region, and set its position to "Search Bar Right of the Interactive Report". Under Appearance we use the "Text with Icon" button template, set "Directa" (Hot) to "Yes", set the Icon CSS Classes to "fa-cog", and set the icon position to "Left". Under Behavior we set the Action to "Defined by Dynamic Action".

Creating the "Execute PL/SQL Code" Dynamic Action

From the Page Designer, we right-click the ACTUALIZAR_SALARIO button and create a dynamic action:

Identification
- Name: Actualizar Salario
When
- Event: Click
- Selection Type: Button
- Button: ACTUALIZAR_SALARIO

In the True Action:

Identification
- Action: Execute PL/SQL Code
PL/SQL Code
- update emp set sal = sal * 1.1;
Execution Options
- Fire When Event Result Is: True
- Fire on Page Load: No

Creating the "Refresh" True Action

Now we need to create a true action that refreshes the interactive report so it shows the employees' updated salaries. From the Page Designer, we right-click True and select Create True Action:

Identification
- Action: Refresh
Affected Elements
- Selection Type: Region
- Region: Empleados
Execution Options
- Event: Actualizar Salario
- Fire When Event Result Is: True
- Fire on Page Load: No

Creating the "Alert" True Action

From the Page Designer, we right-click True and select Create True Action:

Identification
- Action: Alert
Settings
- Text: Salario Actualizado! (Salary Updated!)
Execution Options
- Event: Actualizar Salario
- Fire When Event Result Is: True
- Fire on Page Load: No

When we run the page and click the "Update Salary by 10%" button, the employees' salaries are increased by 10% and a small modal alert window opens, showing the message that all the records in the interactive report have been updated. In this way we can implement different kinds of updates in our interactive reports using dynamic actions that execute PL/SQL code. Until next time!
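For reference, the PL/SQL that the dynamic action runs is the single statement from the demo. A slightly expanded sketch is shown below; the explicit block and the comment about the commit are assumptions for illustration (APEX normally commits on behalf of the session when the Execute PL/SQL Code action completes successfully), not part of the original demo.

-- PL/SQL executed by the "Actualizar Salario" dynamic action (sketch)
BEGIN
  UPDATE emp
     SET sal = sal * 1.1;  -- raise every salary by 10%, as in the demo
  -- COMMIT is normally issued by APEX after the action completes successfully
END;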

Blog Post: Oracle: Now Really in the Cloud

The "buying experience" for Oracle Cloud solutions has not been very cloud-like until now. Most services did not have the "Buy Now" button that other cloud vendors offer, but instead a list of phone numbers (yes, really!) for local Oracle sales offices. In effect, you could not buy an Oracle Cloud service until your local Oracle rep was in the office. That's set to change with what Oracle calls the "Accelerated Buying Experience." It seems that we will actually be able to buy Oracle cloud offerings without having to be subjected to a lengthy sales talk for whatever product pays the most bonus this month. I'm looking forward to it.

Blog Post: How to Drop a Tablespace with deleted datafiles

Sometimes we have to deal with a tablespace whose datafiles were deleted for some reason. In that case we have two options: either restore the deleted datafiles and run a recovery, or drop the tablespace if its data is not needed anymore. I won't explain how to restore and recover the datafiles, since that is not the approach I want to talk about. I will explain how to drop a tablespace when some of its datafiles (or all of them) were deleted and its data is not needed anymore.

First I will create a tablespace with 4 datafiles:

SQL> create tablespace tbs_test datafile '/home/oracle/datafile1.dbf' size 10M;
Tablespace created.
SQL> alter tablespace tbs_test add datafile '/home/oracle/datafile2.dbf' size 10M;
Tablespace altered.
SQL> alter tablespace tbs_test add datafile '/home/oracle/datafile3.dbf' size 10M;
Tablespace altered.
SQL> alter tablespace tbs_test add datafile '/home/oracle/datafile4.dbf' size 10M;
Tablespace altered.

So here are our 4 datafiles:

SQL> select file#, name, status, enabled from v$datafile where TS# = (select TS# from v$tablespace where name='TBS_TEST')

FILE# NAME                           STATUS  ENABLED
----- ------------------------------ ------- ----------
    6 /home/oracle/datafile1.dbf     ONLINE  READ WRITE
    7 /home/oracle/datafile2.dbf     ONLINE  READ WRITE
    8 /home/oracle/datafile3.dbf     ONLINE  READ WRITE
    9 /home/oracle/datafile4.dbf     ONLINE  READ WRITE
SQL>

Now we will delete 2 datafiles, simulating a failure:

[oracle@a1 ~]$ rm -rf /home/oracle/datafile2.dbf
[oracle@a1 ~]$ rm -rf /home/oracle/datafile3.dbf
[oracle@a1 ~]$

Here is where our topic starts. If we try to drop the tablespace directly, we get an error like the following:

SQL> drop tablespace tbs_test including contents and datafiles;
drop tablespace tbs_test including contents and datafiles
*
ERROR at line 1:
ORA-01116: error in opening database file 7
ORA-01110: data file 7: '/home/oracle/datafile2.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

Some people might think of taking the tablespace offline, perhaps using one of the following options:

OFFLINE NORMAL - Specify NORMAL to flush all blocks in all data files in the tablespace out of the system global area (SGA). You need not perform media recovery on this tablespace before bringing it back online. This is the default.

OFFLINE TEMPORARY - If you specify TEMPORARY, then Oracle Database performs a checkpoint for all online data files in the tablespace but does not ensure that all files can be written. Files that are offline when you issue this statement may require media recovery before you bring the tablespace back online.

OFFLINE IMMEDIATE - If you specify IMMEDIATE, then Oracle Database does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

Let's try the options:

SQL> alter tablespace tbs_test offline;
alter tablespace tbs_test offline
*
ERROR at line 1:
ORA-01116: error in opening database file 7
ORA-01110: data file 7: '/home/oracle/datafile2.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

SQL> alter tablespace tbs_test offline temporary;
alter tablespace tbs_test offline temporary
*
ERROR at line 1:
ORA-01116: error in opening database file 7
ORA-01110: data file 7: '/home/oracle/datafile2.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

Even the immediate option doesn't work:

SQL> alter tablespace tbs_test offline immediate;
alter tablespace tbs_test offline immediate
*
ERROR at line 1:
ORA-01145: offline immediate disallowed unless media recovery enabled

The right approach is to offline the datafile or datafiles, not the whole tablespace. That is why we have to use the following option for datafiles:

FOR DROP - If the database is in NOARCHIVELOG mode, then you must specify the FOR DROP clause to take a data file offline. However, this clause does not remove the data file from the database. To do that, you must use an operating system command or drop the tablespace in which the data file resides. Until you do so, the data file remains in the data dictionary with the status RECOVER or OFFLINE. If the database is in ARCHIVELOG mode, then Oracle Database ignores the FOR DROP clause.

It is very important to know that this option behaves differently depending on whether the database is in ARCHIVELOG or NOARCHIVELOG mode, and we will see why. If the database is in NOARCHIVELOG mode, this is the way to drop the tablespace:

SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /u01/app/oracle/product/11.2/db_1/dbs/arch
Oldest online log sequence     22
Current log sequence           24
SQL>
SQL> alter database datafile '/home/oracle/datafile2.dbf' offline for drop;
Database altered.
SQL> alter database datafile '/home/oracle/datafile3.dbf' offline for drop;
Database altered.
SQL>
SQL> select file#, name, status, enabled from v$datafile where TS# = (select TS# from v$tablespace where name='TBS_TEST')

FILE# NAME                           STATUS  ENABLED
----- ------------------------------ ------- ----------
    6 /home/oracle/datafile1.dbf     ONLINE  READ WRITE
    7 /home/oracle/datafile2.dbf     RECOVER READ WRITE
    8 /home/oracle/datafile3.dbf     RECOVER READ WRITE
    9 /home/oracle/datafile4.dbf     ONLINE  READ WRITE
SQL>

And now we are able to drop the tablespace:

SQL> drop tablespace tbs_test including contents and datafiles;
Tablespace dropped.
SQL>

Why is it important whether the database is in ARCHIVELOG mode or not? Well, things are easier if the database is in ARCHIVELOG mode, as we will see. Turn on ARCHIVELOG mode:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Total System Global Area 1870647296 bytes
Fixed Size                  2254304 bytes
Variable Size             503319072 bytes
Database Buffers         1358954496 bytes
Redo Buffers                6119424 bytes
Database mounted.
SQL> alter database archivelog;
Database altered.
SQL> alter database open;
Database altered.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/product/11.2/db_1/dbs/arch
Oldest online log sequence     22
Next log sequence to archive   24
Current log sequence           24
SQL>

Let's create our tablespace with 4 datafiles again:

SQL> create tablespace tbs_test datafile '/home/oracle/datafile1.dbf' size 10M;
Tablespace created.
SQL> alter tablespace tbs_test add datafile '/home/oracle/datafile2.dbf' size 10M;
Tablespace altered.
SQL> alter tablespace tbs_test add datafile '/home/oracle/datafile3.dbf' size 10M;
Tablespace altered.
SQL> alter tablespace tbs_test add datafile '/home/oracle/datafile4.dbf' size 10M;
Tablespace altered.

Deleting some datafiles:

SQL> !rm -rf /home/oracle/datafile2.dbf
SQL> !rm -rf /home/oracle/datafile3.dbf

Now we don't need FOR DROP; since the database is in ARCHIVELOG mode we can use just "offline":

SQL> alter database datafile '/home/oracle/datafile2.dbf' offline;
Database altered.
SQL> alter database datafile '/home/oracle/datafile3.dbf' offline;
Database altered.
SQL> drop tablespace tbs_test including contents and datafiles;
Tablespace dropped.

But the story doesn't end there... things are even easier with ARCHIVELOG mode. Let's create the problem again:

SQL> create tablespace tbs_test datafile '/home/oracle/datafile1.dbf' size 10M;
Tablespace created.
SQL> alter tablespace tbs_test add datafile '/home/oracle/datafile2.dbf' size 10M;
Tablespace altered.
SQL> alter tablespace tbs_test add datafile '/home/oracle/datafile3.dbf' size 10M;
Tablespace altered.
SQL> alter tablespace tbs_test add datafile '/home/oracle/datafile4.dbf' size 10M;
Tablespace altered.

Deleting 2 datafiles:

SQL> !rm -rf /home/oracle/datafile2.dbf
SQL> !rm -rf /home/oracle/datafile3.dbf

Look at this:

SQL> alter tablespace tbs_test offline immediate;
Tablespace altered.
SQL> drop tablespace tbs_test including contents and datafiles;
Tablespace dropped.

And... even easier:

SQL> create tablespace tbs_test datafile '/home/oracle/datafile1.dbf' size 10M;
Tablespace created.
SQL> alter tablespace tbs_test add datafile '/home/oracle/datafile2.dbf' size 10M;
Tablespace altered.
SQL> alter tablespace tbs_test add datafile '/home/oracle/datafile3.dbf' size 10M;
Tablespace altered.
SQL> alter tablespace tbs_test add datafile '/home/oracle/datafile4.dbf' size 10M;
Tablespace altered.
SQL> !rm -rf /home/oracle/datafile2.dbf
SQL> !rm -rf /home/oracle/datafile3.dbf

Straight to dropping the tablespace:

SQL> drop tablespace tbs_test including contents and datafiles;
Tablespace dropped.
SQL>
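Throughout the examples above, the choice between OFFLINE FOR DROP and a plain OFFLINE hinged on whether the database is in ARCHIVELOG mode. A quick check before picking the clause can save a round of errors; a minimal sketch:

-- Check the log mode to decide which clause applies (sketch):
--   NOARCHIVELOG -> alter database datafile '...' offline for drop;
--   ARCHIVELOG   -> alter database datafile '...' offline;
select log_mode from v$database;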

Wiki: Oracle Wiki

A useful collection of Oracle database development and administration resources that can be freely added to and updated by the community.

Wiki Page: Preparing for Oracle DB Cloud and Java Cloud

Oracle has gotten much better at provisioning their Platform-as-a-Service (PaaS) instances lately. If you have never tried the Oracle PaaS cloud, or have given up because of the various issues the service used to have, now is a good time to try it out. Oracle claims it is really easy to set up a PaaS instance. I do not agree that it is easy, and Oracle does not seem to have gotten around to writing meaningful error messages for their PaaS cloud yet. So if anything goes wrong, expect simply a message of "unexpected error" and no help. However, if I can manage to get a PaaS installation up and running, so can you.

The PaaS Cloud Process

The process of establishing a PaaS cloud with database and Java involves the following steps:

- Creating an SSH key pair
- Creating Oracle Storage
- Creating Oracle Database Cloud
- Creating Oracle Java Cloud

This article describes the first two steps. I'll get back to actually creating the database cloud and Java cloud in a later article. Once you are through these four steps, you are ready to deploy applications to the Java cloud instance.

Creating an SSH Key Pair

The first step is to create a public/private key pair for SSH communication. If you are on Linux or Mac, you are lucky: you can simply run ssh-keygen like this:

$ ssh-keygen -b 2048 -t rsa -f <key file name>

Enter a passphrase twice and ssh-keygen will generate a matching public/private key pair for you. The private key will be given the file name you provided, and the public key will be given the same name with the suffix .pub. You'll need the public key for many of the following operations.

If you are on Windows, you can install the open source PuTTY software package if you don't already have it. This package contains the PuTTYgen program that you can use to generate the key. You should choose a 2048-bit SSH-2 RSA key. This program generates a .ppk file with your private key and shows the public key. You'll need to copy all the characters from the public key into a text file and save it with a .pub extension. Refer to the section "Generating a Secure Shell (SSH) Public/Private Key Pair" in the Using Oracle Java Cloud Service manual ( https://docs.oracle.com/cloud/latest/jcs_gs/JSCUG/GUID-4285B8CF-A228-4B89-9552-FE6446B5A673.htm#JSCUG3297 ).

Creating Oracle Storage

The next step is to create an Oracle Storage Container. This is a place to store files and is needed by both the database service and the Java service as a backup destination. Unfortunately, this process is very manual and somewhat error-prone. Creating storage is done through a REST API, which is all well and good for applications needing storage. However, Oracle hasn't gotten around to automating this process yet, so you have to issue URL commands by hand using a tool like cUrl. Again, Linux and Mac users are in luck and already have this – Windows users will need to install it. Oracle provides instructions about this process at http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/objectstorage/creating_containers_REST_API/files/installing_curl_command_line_tool_on_windows.html

Once you have a cUrl tool, you first run it to get an authentication token and then run it again to actually create a storage container. But before that, you need to set the replication policy on your storage service.

Setting the Replication Policy

To see if you have a replication policy set for your storage service, go to cloud.oracle.com, choose your datacenter and click My Services. Enter your identity domain and log on with your username and password.
Find your Oracle Cloud Storage Service. If it says "Replication Policy Not Set", continue below. Otherwise, skip to creating a token. To set the replication policy, choose the action menu in the top right corner of the Cloud Storage Service and choose Set Replication Policy. Choose your primary data center and where your data will be replicated to, and confirm your choice.

Getting a Token

To get a token, issue a command like this:

$ curl -v -s -X GET -H "X-Storage-User: Storage-<identity domain>:<username>" -H "X-Storage-Pass: <password>" https://<identity domain>.storage.oraclecloud.com/auth/v1.0

Note that you need to provide your identity domain (this was set when you signed up for the cloud service and can be found in your activation email), as well as your cloud user and password. The output should be something like this:

* Trying 160.34.16.106...
* Connected to <identity domain>.storage.oraclecloud.com (160.34.16.106) port 443 (#0)
* TLS 1.2 connection using TLS_RSA_WITH_AES_128_CBC_SHA
* Server certificate: *.storage.oraclecloud.com
* Server certificate: VeriSign Class 3 Secure Server CA - G3
* Server certificate: VeriSign Class 3 Public Primary Certification Authority - G5
> GET /auth/v1.0 HTTP/1.1
> Host: <identity domain>.storage.oraclecloud.com
> User-Agent: curl/7.43.0
> Accept: */*
> X-Storage-User: Storage-<identity domain>:<username>
> X-Storage-Pass: <password>
>
* Connection #0 to host <identity domain>.storage.oraclecloud.com left intact

Hopefully, the output includes the line HTTP/1.1 200 OK, which indicates that the token request was successful. From the output, you can read and copy your storage URL and the Auth Token. Note that this token is time-limited and is only valid for 30 minutes. After that, you have to ask for a new token.

Creating a Container

To actually create a container, you need to call the storage web service again using cUrl, like this:

$ curl -v -s -X PUT -H "X-Auth-Token: <auth token>" <storage URL>/<container name>

The token and the storage URL come from the token request above, and the container name is one you provide. You should get a result like this:

* Trying 160.34.16.106...
* Connected to em2.storage.oraclecloud.com (160.34.16.106) port 443 (#0)
* TLS 1.2 connection using TLS_RSA_WITH_AES_128_CBC_SHA
* Server certificate: *.storage.oraclecloud.com
* Server certificate: VeriSign Class 3 Secure Server CA - G3
* Server certificate: VeriSign Class 3 Public Primary Certification Authority - G5
> PUT /v1/Storage-<identity domain>/<container name> HTTP/1.1
> Host: em2.storage.oraclecloud.com
> User-Agent: curl/7.43.0
> Accept: */*
> X-Auth-Token: <auth token>

Requesting the container again with a GET against the same URL should produce output like this:

* Trying 160.34.16.106...
* Connected to em2.storage.oraclecloud.com (160.34.16.106) port 443 (#0)
* TLS 1.2 connection using TLS_RSA_WITH_AES_128_CBC_SHA
* Server certificate: *.storage.oraclecloud.com
* Server certificate: VeriSign Class 3 Secure Server CA - G3
* Server certificate: VeriSign Class 3 Public Primary Certification Authority - G5
> GET /v1/Storage-<identity domain>/<container name> HTTP/1.1
> Host: em2.storage.oraclecloud.com
> User-Agent: curl/7.43.0
> Accept: */*
> X-Auth-Token: <auth token>
>
< X-Container-Write: <identity domain>.Storage.Storage_ReadWriteGroup
< X-Container-Read: <identity domain>.Storage.Storage_ReadOnlyGroup,vesterli.Storage.Storage_ReadWriteGroup
< X-Container-Bytes-Used: 0
< X-Trans-Id: txd04f0fab005344dfa6a05-00565c2ef5ga
< Date: Mon, 30 Nov 2015 11:11:49 GMT
< Connection: keep-alive
< X-Storage-Class: Standard
< X-Last-Modified-Timestamp: 1448881830.16499
< Content-Type: text/html;charset=UTF-8
< Server: Oracle-Storage-Cloud-Service
<
* Connection #0 to host em2.storage.oraclecloud.com left intact

Getting an HTTP/1.1 204 No Content message is correct, since you have not put anything into your storage container yet.
You can also see that there are no objects (X-Container-Object-Count: 0) and no bytes used (X-Container-Bytes-Used: 0).

Next Steps

These were the prerequisites you need to have in place before you can start creating your Oracle Database Cloud Service and Oracle Java Cloud Service, and they are the hardest part of getting started with the Oracle PaaS cloud. The next steps are:

- Creating a Database Cloud instance (see http://www.oracle.com/pls/topic/lookup?ctx=cloud&id=CSDBI3299 )
- Creating a Java Cloud instance (see for example http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/javaservice/JCS/JCS_Coherence_Create_Instance/create_jcs_coherence_instance.html#section1s5 )

I'll go through these steps in more detail in another article.

Blog Post: TFA - Upgrading your latest version to the latest version

In my previous article about TFA, I showed how to upgrade TFA with the binaries from Oracle Metalink, which allowed us to use many of the diagnostic tools directly from TFA. However, my friend Gleb Otochkin gave me a tip that can be useful for all of you: what if TFA is already at the latest version, but that TFA was installed via a PSU and not via the installer from Metalink? What happens is that when you try to upgrade TFA you receive the following message:

[root@rac1 grid]# ./installTFALite
TFA Installation Log will be written to File : /tmp/tfa_install_759_2016_05_27-03_43_19.log
Starting TFA installation
TFA Build Version: 121270 Build Date: 201603032146
Installed Build Version: 121270 Build Date: 201603032146
TFA is already running latest version. No need to patch.

So the installer that was downloaded from Metalink is telling us that we don't need to upgrade TFA because TFA is already at the latest version. It is indeed; however, having the latest version doesn't mean we have the TFA that includes all the diagnostic tools already deployed. So basically what we have to do is install that latest version in order to upgrade it to the latest version [:)] In other words, we have to uninstall the TFA that came via the PSU and then install the TFA that was downloaded from Metalink. Both are the same version (the latest), but they come from different sources. That is what we will do in the following steps.

If we look at the "tfactl" command, we can see that we can use the "toolstatus" option, which means this TFA can handle all the diagnostic tools, but no tools are deployed yet:

[root@rac1 grid]# tfactl toolstatus
.-----------------------------------.
|       External Support Tools      |
+------+--------------+-------------+
| Host | Tool         | Status      |
+------+--------------+-------------+
'------+--------------+-------------'
[root@rac1 grid]#

So we have to uninstall this TFA:

[root@rac1 bin]# ./tfactl uninstall
TFA will be Uninstalled on Node rac1:
Removing TFA from rac1 only
Please remove TFA locally on any other configured nodes
Notifying Other Nodes about TFA Uninstall...
TFA is not yet secured to run all commands
FAIL
Sleeping for 10 seconds...
Stopping TFA Support Tools...
Stopping TFA in rac1...
Shutting down TFA
oracle-tfa stop/waiting
. . . . .
Killing TFA running with pid 1476
. . .
Successfully shutdown TFA..
Deleting TFA support files on rac1:
Removing /u01/app/grid/tfa/rac1/database...
Removing /u01/app/grid/tfa/rac1/log...
Removing /u01/app/grid/tfa/rac1/output...
Removing /u01/app/grid/tfa/rac1...
Removing /u01/app/grid/tfa...
Removing /etc/rc.d/rc0.d/K17init.tfa
Removing /etc/rc.d/rc1.d/K17init.tfa
Removing /etc/rc.d/rc2.d/K17init.tfa
Removing /etc/rc.d/rc4.d/K17init.tfa
Removing /etc/rc.d/rc6.d/K17init.tfa
Removing /etc/init.d/init.tfa...
Removing /u01/app/12.1.0/grid/bin/tfactl...
Removing /u01/app/12.1.0/grid/tfa/bin...
Removing /u01/app/12.1.0/grid/tfa/rac1...
[root@rac1 bin]#

And then we proceed to install the same version of TFA, but using the binaries from Metalink:

[root@rac1 grid]# ./installTFALite -local -tfabase /u01/app/12.1.0/grid/tfa -javahome /u01/app/12.1.0/grid/jdk/jre
TFA Installation Log will be written to File : /tmp/tfa_install_26070_2016_05_27-03_33_10.log
Starting TFA installation
Using JAVA_HOME : /u01/app/12.1.0/grid/jdk/jre
Running Auto Setup for TFA as user root...
Installing TFA now...
Discovering Nodes and Oracle resources
Checking whether CRS is up and running
List of nodes in cluster
1. rac1
Searching for running databases . . . . .
Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS . . . . . . . . . . . . . . . . TFA Will be Installed on rac1... TFA will scan the following Directories ++++++++++++++++++++++++++++++++++++++++++++ .-------------------------------------------------------------------------. | rac1 | +--------------------------------------------------------------+----------+ | Trace Directory | Resource | +--------------------------------------------------------------+----------+ | /u01/app/12.1.0/grid/cfgtoollogs | CFGTOOLS | | /u01/app/12.1.0/grid/crf/db/rac1 | CRS | | /u01/app/12.1.0/grid/crs/log | CRS | | /u01/app/12.1.0/grid/css/log | CRS | | /u01/app/12.1.0/grid/cv/log | CRS | | /u01/app/12.1.0/grid/evm/admin/log | CRS | | /u01/app/12.1.0/grid/evm/admin/logger | CRS | | /u01/app/12.1.0/grid/evm/log | CRS | | /u01/app/12.1.0/grid/install | INSTALL | | /u01/app/12.1.0/grid/log | CRS | | /u01/app/12.1.0/grid/network/log | CRS | | /u01/app/12.1.0/grid/oc4j/j2ee/home/log | DBWLM | | /u01/app/12.1.0/grid/opmn/logs | CRS | | /u01/app/12.1.0/grid/racg/log | CRS | | /u01/app/12.1.0/grid/rdbms/log | ASM | | /u01/app/12.1.0/grid/rdbms/log | ASM | | /u01/app/12.1.0/grid/scheduler/log | CRS | | /u01/app/12.1.0/grid/srvm/log | CRS | | /u01/app/grid/cfgtoollogs | CFGTOOLS | | /u01/app/grid/crsdata/rac1/acfs | ACFS | | /u01/app/grid/crsdata/rac1/afd | ASM | | /u01/app/grid/crsdata/rac1/chad | CRS | | /u01/app/grid/crsdata/rac1/core | CRS | | /u01/app/grid/crsdata/rac1/crsconfig | CRS | | /u01/app/grid/crsdata/rac1/crsdiag | CRS | | /u01/app/grid/crsdata/rac1/cvu | CRS | | /u01/app/grid/crsdata/rac1/evm | CRS | | /u01/app/grid/crsdata/rac1/output | CRS | | /u01/app/grid/crsdata/rac1/trace | CRS | | /u01/app/grid/diag/asm/+asm/+ASM1/cdump | ASM | | /u01/app/grid/diag/clients/user_grid/host_2190098445_82/cdum | DBCLIENT | | /u01/app/grid/diag/clients/user_oracle/host_2190098445_82/cd | DBCLIENT | | /u01/app/grid/diag/crs/rac1/crs/cdump | CRS | | /u01/app/grid/diag/crs/rac1/crs/trace | CRS | | /u01/app/grid/diag/rdbms/_mgmtdb/-MGMTDB/cdump | RDBMS | | /u01/app/grid/diag/tnslsnr | TNS | | /u01/app/grid/diag/tnslsnr/rac1/listener/cdump | TNS | | /u01/app/grid/diag/tnslsnr/rac1/listener_scan1/cdump | TNS | | /u01/app/grid/diag/tnslsnr/rac1/listener_scan2/cdump | TNS | | /u01/app/grid/diag/tnslsnr/rac1/listener_scan3/cdump | TNS | | /u01/app/oraInventory/ContentsXML | INSTALL | | /u01/app/oraInventory/logs | INSTALL | | /usr/tmp | ZDLRA | '--------------------------------------------------------------+----------' Installing TFA on rac1: HOST: rac1 TFA_HOME: /u01/app/12.1.0/grid/tfa/rac1/tfa_home .-------------------------------------------------------------------------. | Host | Status of TFA | PID | Port | Version | Build ID | +------+---------------+-------+------+------------+----------------------+ | rac1 | RUNNING | 27225 | 5000 | 12.1.2.7.0 | 12127020160303214632 | '------+---------------+-------+------+------------+----------------------' Running Inventory in All Nodes... Enabling Access for Non-root Users on rac1... Adding default users to TFA Access list... Summary of TFA Installation: .--------------------------------------------------------------. 
| rac1 | +---------------------+----------------------------------------+ | Parameter | Value | +---------------------+----------------------------------------+ | Install location | /u01/app/12.1.0/grid/tfa/rac1/tfa_home | | Repository location | /u01/app/grid/tfa/repository | | Repository usage | 0 MB out of 1141 MB | '---------------------+----------------------------------------' TFA is successfully installed... Usage : /u01/app/12.1.0/grid/bin/tfactl [options] = start Starts TFA stop Stops TFA enable Enable TFA Auto restart disable Disable TFA Auto restart print Print requested details access Add or Remove or List TFA Users purge Delete collections from TFA repository directory Add or Remove or Modify directory in TFA host Add or Remove host in TFA diagcollect Collect logs from across nodes in cluster collection Manage TFA Collections analyze List events summary and search strings in alert logs. set Turn ON/OFF or Modify various TFA features toolstatus Prints the status of TFA Support Tools run Run the desired support tool start Starts the desired support tool stop Stops the desired support tool syncnodes Generate/Copy TFA Certificates diagnosetfa Collect TFA Diagnostics uninstall Uninstall TFA from this node For help with a command: /u01/app/12.1.0/grid/bin/tfactl -help And that's it, now we can see that all the tools are already deployed, they came inside the binaries that we downloaded from Metalink: [root@rac1 grid]# /u01/app/12.1.0/grid/bin/tfactl toolstatus .-----------------------------------. | External Support Tools | +------+--------------+-------------+ | Host | Tool | Status | +------+--------------+-------------+ | rac1 | alertsummary | DEPLOYED | | rac1 | exachk | DEPLOYED | | rac1 | ls | DEPLOYED | | rac1 | pstack | DEPLOYED | | rac1 | orachk | DEPLOYED | | rac1 | sqlt | DEPLOYED | | rac1 | grep | DEPLOYED | | rac1 | summary | DEPLOYED | | rac1 | prw | NOT RUNNING | | rac1 | vi | DEPLOYED | | rac1 | tail | DEPLOYED | | rac1 | param | DEPLOYED | | rac1 | dbglevel | DEPLOYED | | rac1 | darda | DEPLOYED | | rac1 | history | DEPLOYED | | rac1 | oratop | DEPLOYED | | rac1 | oswbb | RUNNING | | rac1 | changes | DEPLOYED | | rac1 | events | DEPLOYED | | rac1 | ps | DEPLOYED | | rac1 | srdc | DEPLOYED | '------+--------------+-------------' [root@rac1 grid]# Follow me:

Blog Post: The Hero Region Template

Generally, when we create our pages in APEX, the Breadcrumb region is displayed with the "Title Bar" template, as we can see in the image below. To make our pages look the way the packaged applications do, and to make this region more attractive, we can choose to use the "Hero" template. From the Page Designer, we select the Breadcrumb region and, in the Appearance section, choose the "Hero" template. We also select an Icon CSS Class, for example the Font Awesome icon "fa-area-chart", which represents a chart. Then we change the Region Title to the name of the application, for example "Demo". We save the changes and run the page. We can see how easy it is to change the styling of the region using the different alternatives that Universal Theme 42 provides. See you soon!

Blog Post: How to change the database name in 12c

Changing the database name is a common task for every Oracle Database Administrator. However, with every version some things change, and it is necessary to review the process we have been using to confirm whether it is still valid in the new version, in this case 12c. In this article I will show you exactly that: I have done some tests in my 12c environments, and I will show you what changes when you rename a database.

Pluggable Databases

First, in 12c Oracle has introduced a new concept: Oracle Pluggable Databases (PDBs). Every PDB is an isolated database. Based on that, if this is a database then we should be able to rename it. How can we do it? That's what we will see in the next steps. For these examples, I am using a Container Database (CDB) with 2 Pluggable Databases (PDBs), as you can see below:

SQL> select con_id, dbid, con_uid, guid, name, open_mode from v$containers

CON_ID DBID       CON_UID    GUID                             NAME
------ ---------- ---------- -------------------------------- ----------
     1 2029615230          1 FD9AC20F64D344D7E043B6A9E80A2F2F CDB$ROOT
     2 3232131500 3232131500 33E566D292696F62E0530401A8C0010D PDB$SEED
     3  878113373  878113373 33E573FD368272A9E0530401A8C0AC1C PDB1
     4  221610700  221610700 33E576C86F0F74FBE0530401A8C05574 PDB2

Now I will proceed to change the name of PDB1. To do so, I have to be connected to the PDB whose name I want to change, in this case PDB1. First I want to see where I am connected:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

Before changing the PDB name, we have to put the PDB in restricted mode:

SQL> alter pluggable database pdb1 close;
Pluggable database altered.
SQL> alter pluggable database pdb1 open restricted;
Pluggable database altered.

NOTE: If you try to change the PDB name while the PDB is not in restricted mode, you will receive the following error:
ORA-65045: pluggable database not in a restricted mode

Moving to the PDB:

SQL> alter session set container=pdb1;
Session altered.

Confirming that I am in fact connected to PDB1:

SQL> show con_name;

CON_NAME
------------------------------
PDB1

And finally, let's change the PDB name:

SQL> ALTER PLUGGABLE DATABASE RENAME GLOBAL_NAME TO pdb1new;
Pluggable database altered.

After changing the PDB name, let's verify whether any "ID" changed as well:

select con_id, dbid, con_uid, guid, name, open_mode, restricted from v$containers;

CON_ID DBID       CON_UID    GUID                             NAME     OPEN_MODE  RESTRICTED
------ ---------- ---------- -------------------------------- -------- ---------- ----------
     1 2029615230          1 FD9AC20F64D344D7E043B6A9E80A2F2F CDB$ROOT READ WRITE NO
     2 3232131500 3232131500 33E566D292696F62E0530401A8C0010D PDB$SEED READ ONLY  NO
     3  878113373  878113373 33E573FD368272A9E0530401A8C0AC1C PDB1NEW  READ WRITE YES
     4  221610700  221610700 33E576C86F0F74FBE0530401A8C05574 PDB2     MOUNTED

As you can see, CON_ID, DBID, CON_UID and GUID did not change; only the PDB name changed. After the rename, the PDB is still open in restricted mode, so we have to reopen it in normal mode.

SQL> alter pluggable database pdb1new close;
Pluggable database altered.
SQL> alter pluggable database pdb1new open;
Pluggable database altered.
Confirming: SQL> select con_id, dbid, con_uid, guid, name, open_mode,restricted from v$containers; CON_ID DBID CON_UID GUID NAME OPEN_MODE RES ------ ---------- ---------- -------------------------------- ---------- ---------- --- 1 2029615230 1 FD9AC20F64D344D7E043B6A9E80A2F2F CDB$ROOT READ WRITE NO 2 3232131500 3232131500 33E566D292696F62E0530401A8C0010D PDB$SEED READ ONLY NO 3 878113373 878113373 33E573FD368272A9E0530401A8C0AC1C PDB1NEW READ WRITE NO 4 221610700 221610700 33E576C86F0F74FBE0530401A8C05574 PDB2 MOUNTED Container Database: Now it's time to change the name for a Container Database where there are some pluggable databases. We will keep using the same environment that we have, 1 CDB and 2 PDBs. In order to change the CDB name, the first step is to put the CDB in mount state. we cannot change the CDB name while the database is in read-write. SQL> shutdown immediate; Database closed. Database dismounted. ORACLE instance shut down. SQL> startup mount; ORACLE instance started. Total System Global Area 767557632 bytes Fixed Size 2929112 bytes Variable Size 310382120 bytes Database Buffers 448790528 bytes Redo Buffers 5455872 bytes Database mounted. SQL> NOTE: If we try to change the CDB name and the database is not in mount state we will receive the following error: NID-00121: Database should not be open Now let's take a look at some of the options that we have with NID tool: [oracle@db12102 ~]$ nid -help DBNEWID: Release 12.1.0.2.0 - Production on Sat May 28 06:51:36 2016 Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved. Keyword Description (Default) ---------------------------------------------------- TARGET Username/Password (NONE) DBNAME New database name (NONE) LOGFILE Output Log (NONE) REVERT Revert failed change NO SETNAME Set a new database name only NO APPEND Append to output log NO HELP Displays these messages NO Now let's proceed to change it: [oracle@db12102 ~]$ nid target=/ dbname=cdbnew DBNEWID: Release 12.1.0.2.0 - Production on Sat May 28 06:55:33 2016 Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved. Connected to database CDB (DBID=2029615230) Connected to server version 12.1.0 Control Files in database: /data/cdb/CDB/controlfile/o1_mf_cnlw9wn8_.ctl /data/cdb/CDB/controlfile/o1_mf_cnlw9wpm_.ctl Change database ID and database name CDB to CDBNEW? 
(Y/[N]) => y Proceeding with operation Changing database ID from 2029615230 to 4097604774 Changing database name from CDB to CDBNEW Control File /data/cdb/CDB/controlfile/o1_mf_cnlw9wn8_.ctl - modified Control File /data/cdb/CDB/controlfile/o1_mf_cnlw9wpm_.ctl - modified Datafile /data/cdb/CDB/datafile/o1_mf_system_cnlw7bw3_.db - dbid changed, wrote new name Datafile /data/cdb/CDB/datafile/o1_mf_sysaux_cnlw5xrj_.db - dbid changed, wrote new name Datafile /data/cdb/CDB/datafile/o1_mf_undotbs1_cnlw934v_.db - dbid changed, wrote new name Datafile /data/cdb/CDB/datafile/o1_mf_system_cnlwb479_.db - dbid changed, wrote new name Datafile /data/cdb/CDB/datafile/o1_mf_users_cnlw922n_.db - dbid changed, wrote new name Datafile /data/cdb/CDB/datafile/o1_mf_sysaux_cnlwb478_.db - dbid changed, wrote new name Datafile /data/cdb/CDB/33E573FD368272A9E0530401A8C0AC1C/datafile/o1_mf_system_cnlwllgp_.db - dbid changed, wrote new name Datafile /data/cdb/CDB/33E573FD368272A9E0530401A8C0AC1C/datafile/o1_mf_sysaux_cnlwllh0_.db - dbid changed, wrote new name Datafile /data/cdb/CDB/33E573FD368272A9E0530401A8C0AC1C/datafile/o1_mf_users_cnlwlwgm_.db - dbid changed, wrote new name Datafile /data/cdb/CDB/33E576C86F0F74FBE0530401A8C05574/datafile/o1_mf_system_cnlwn242_.db - dbid changed, wrote new name Datafile /data/cdb/CDB/33E576C86F0F74FBE0530401A8C05574/datafile/o1_mf_sysaux_cnlwn249_.db - dbid changed, wrote new name Datafile /data/cdb/CDB/33E576C86F0F74FBE0530401A8C05574/datafile/o1_mf_users_cnlwn24b_.db - dbid changed, wrote new name Datafile /data/cdb/CDB/datafile/o1_mf_temp_cnlwb13f_.tm - dbid changed, wrote new name Datafile /data/cdb/CDB/datafile/pdbseed_temp012016-05-28_06-22-34-AM.db - dbid changed, wrote new name Datafile /data/cdb/CDB/33E573FD368272A9E0530401A8C0AC1C/datafile/o1_mf_temp_cnlwllh1_.db - dbid changed, wrote new name Datafile /data/cdb/CDB/33E576C86F0F74FBE0530401A8C05574/datafile/o1_mf_temp_cnlwn24b_.db - dbid changed, wrote new name Control File /data/cdb/CDB/controlfile/o1_mf_cnlw9wn8_.ctl - dbid changed, wrote new name Control File /data/cdb/CDB/controlfile/o1_mf_cnlw9wpm_.ctl - dbid changed, wrote new name Instance shut down Database name changed to CDBNEW. Modify parameter file and generate a new password file before restarting. Database ID for database CDBNEW changed to 4097604774. All previous backups and archived redo logs for this database are unusable. Database is not aware of previous backups and archived logs in Recovery Area. Database has been shutdown, open database with RESETLOGS option. Succesfully changed database name and ID. DBNEWID - Completed succesfully. [oracle@db12102 ~]$ NOTE: Right after the DB name changed, we have to change also the parameter "db_name", otherwise when you try to mount the database you will receive the following error: ORA-01103: database name 'CDBNEW' in control file is not 'CDB' Based on that let's change the parameter (not dynamic): SQL> startup nomount; ORACLE instance started. Total System Global Area 767557632 bytes Fixed Size 2929112 bytes Variable Size 310382120 bytes Database Buffers 448790528 bytes Redo Buffers 5455872 bytes SQL> SQL> show parameters db_name NAME TYPE VALUE --------- ----------- ------------------------------ db_name string CDB SQL> alter system set db_name='CDBNEW' scope=spfile; System altered. SQL> shutdown immediate; ORA-01507: database not mounted ORACLE instance shut down. Let's start the database, now it should be mount: SQL> startup mount; ORACLE instance started. 
Total System Global Area  767557632 bytes
Fixed Size                  2929112 bytes
Variable Size             310382120 bytes
Database Buffers          448790528 bytes
Redo Buffers                5455872 bytes
Database mounted.
SQL>

And finally, we have to open the database with resetlogs:

SQL> alter database open resetlogs;
Database altered.

NOTE: If we don't open the database with resetlogs and instead try to open it read-write normally, we will get the following error:
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

Now, as we did when we changed the PDB name, let's verify whether any "ID" changed after renaming the CDB:

SQL> select con_id, dbid, con_uid, guid, name, open_mode, restricted from v$containers

CON_ID DBID       CON_UID    GUID                             NAME     OPEN_MODE  RES
------ ---------- ---------- -------------------------------- -------- ---------- ---
     1 4097604774          1 FD9AC20F64D344D7E043B6A9E80A2F2F CDB$ROOT READ WRITE NO
     2 3232131500 3232131500 33E566D292696F62E0530401A8C0010D PDB$SEED READ ONLY  NO
     3  878113373  878113373 33E573FD368272A9E0530401A8C0AC1C PDB1NEW  MOUNTED
     4  221610700  221610700 33E576C86F0F74FBE0530401A8C05574 PDB2     MOUNTED

SQL>

The only value that changed is the DBID, and only for CDB$ROOT; all the other containers remain the same. This is because we used "RESETLOGS" when we opened the database.
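As a final sanity check after the rename and the RESETLOGS open, a quick query of v$database confirms the new name and DBID took effect; a minimal sketch:

-- Confirm the new database name and DBID after the rename (sketch)
select name, dbid, open_mode from v$database;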

Blog Post: PREDICTION_DETAILS function in Oracle

When building predictive models, the data scientist can spend a large amount of time examining the models produced and how they work and perform on their hold-out sample data sets. They do this to understand whether the model gives a good general representation of the data and can identify/predict many different scenarios. When the "best" model has been selected, it is typically deployed in some sort of reporting environment, where a list is produced. This is a typical deployment method, but it is far from ideal.

A better deployment method is to build the predictive models into the everyday applications that the company uses. For example, a model can be built into the call centre application, so that the staff have live, real-time feedback and predictions as they are talking to the customer. But what kind of live, real-time feedback and predictions are possible? If we look at what is traditionally done in these applications, they will get a predicted outcome (will they be a good customer or a bad customer) or some indication of their value (maybe lifetime value, possible claim payout value), etc. But can we get any more information? Information like the reason for the prediction. This is sometimes called prediction insight. Can we get some details of what the prediction model used to decide on the predicted value? In most predictive analytics products this is not possible, as all you are told is the final outcome.

What would be useful is to know some of the reasoning that the predictive model used to arrive at its prediction. The reasons why one customer may be a "bad customer" might be different from those of another customer. Knowing this kind of information can be very useful to the staff who are dealing with the customers, and those who design the workflows can then build more advanced workflows to support the staff when dealing with the customers.

Oracle has a unique feature that allows us to see some of the details that the prediction model used to make the prediction. This function (based on using the Oracle Advanced Analytics option and Oracle Data Mining to build your predictive model) is called PREDICTION_DETAILS. When you go to use PREDICTION_DETAILS you need to be careful, as it works differently in the 11.2 and 12c versions of the Oracle Database (Enterprise Edition). In Oracle Database 11.2 the PREDICTION_DETAILS function only works for Decision Tree models. But in 12c (and above) it has been opened up to include details for models created using all the classification algorithms, all the regression algorithms, and also anomaly detection.

The following gives an example of using the PREDICTION_DETAILS function.

select cust_id,
       prediction(clas_svm_1_27 using *) pred_value,
       prediction_probability(clas_svm_1_27 using *) pred_prob,
       prediction_details(clas_svm_1_27 using *) pred_details
from mining_data_apply_v;

The PREDICTION_DETAILS function produces its output in XML, which consists of the attributes used and their values that determined why a record had the predicted value. The following gives some examples of the XML produced for some of the records.

I've used this particular function in lots of my projects, and particularly when building the applications for a particular business unit. Oracle too has built this functionality into many of their applications. The images below are from the HCM application, where you can examine the details of why an employee may or may not leave/churn. You can then perform real-time what-if analysis by changing some of the attribute values to see if the predicted outcome changes.
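Since PREDICTION_DETAILS returns XML, it is often handy to shred the details into relational columns before showing them to staff. The sketch below assumes the usual Details/Attribute layout of that XML (attributes such as name, actualValue, weight and rank) and reuses the clas_svm_1_27 model and mining_data_apply_v view from the query above; adjust the paths to match the XML you actually get back.

-- Shred the PREDICTION_DETAILS XML into columns (sketch, assuming a Details/Attribute layout)
select v.cust_id,
       prediction(clas_svm_1_27 using *) pred_value,
       d.attr_name,
       d.attr_value,
       d.attr_weight,
       d.attr_rank
from   mining_data_apply_v v,
       xmltable('/Details/Attribute'
                passing prediction_details(clas_svm_1_27 using *)
                columns attr_name   varchar2(30) path '@name',
                        attr_value  varchar2(30) path '@actualValue',
                        attr_weight number       path '@weight',
                        attr_rank   number       path '@rank') d;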

Blog Post: adstpall.sh: Database connection could not be established. Either the database is down or the APPS credentials supplied are wrong.

While trying to start up EBS R12.2, I got the following error:

adstpall.sh: Database connection could not be established. Either the database is down or the APPS credentials supplied are wrong.

To solve this, do the following:

[oracle@tiperp tsterp]$ cd $APPL_TOP
[oracle@tiperp appl]$ pwd
/u01/app/product/tsterp/fs1/EBSapps/appl

After sourcing the environment, try to start up EBS again.

Thank you
Osama

Blog Post: NoSQL workshop at Oracle user group conference

Oracle NoSQL Database has been regularly featured at the conferences of the Northern California Oracle Users Group. But, at its most recent conference, the Northern California Oracle Users Group dared to play outside the Oracle sandbox with a whole-day NoSQL workshop featuring three Oracle competitors: MongoDB, Couchbase, and Cassandra.

First up was MongoDB with a session on NoSQL modeling. Oracle NoSQL Database seems to have confused the value in "key/value pair" with values in the justifiably infamous "Entity Attribute Value" anti-pattern. However, in the NoSQL world, the value in a "key/value pair" is either a document or a blob, not an atomic value. Click here to download the MongoDB presentation.

Next up was Couchbase, with its "Non-first-normal-form Query Language" or N1QL—pronounced Nickel to rhyme with Sequel (SQL). N1QL is a declarative query language that extends SQL for JSON. Click here to download the Couchbase presentation.

What's a whole-day workshop without a case study? The eBay application architects spoke about the Cassandra implementation at eBay and why they chose Cassandra. Click here to download the Cassandra presentation. Finally, there were closing remarks by the NoCOUG conference chair. Click here to download the closing remarks.

P.S. SQL is correctly pronounced "sequel," not "es-que-el." SQL was originally given the name SEQUEL (Structured English Query Language) by its creators Donald Chamberlin and Raymond Boyce, but the acronym was later shortened to SQL because—as recounted by Donald Chamberlin in The 1995 SQL Reunion: People, Projects, and Politics—SEQUEL was a trademarked name. This means that the correct pronunciation of "SQL" is "sequel," not "es-que-el."

Wiki Page: Hurdle - Reinstate database after failover performed by Fast Start Failover

Concept

The Fast-Start Failover feature automates failover by using an Observer, so no DBA intervention is required during a disaster recovery scenario. The Observer is a component of the Data Guard Broker, typically running outside the network of both the primary and standby databases, which continuously monitors the primary database. Whenever the Observer detects unavailability of the primary database it initiates failover after waiting for the number of seconds defined in the property 'FastStartFailoverThreshold'. When broker connectivity to the former primary database is re-established, the broker will reinstate that database as a standby, provided flashback is configured. One of the major enhancements in 11g and later releases is that we can implement Fast-Start Failover when the protection mode is Maximum Performance; in this protection mode we should set the property 'FastStartFailoverLagLimit' to the number of seconds of data that can be lost. Earlier releases required the protection mode to be either Maximum Protection or Maximum Availability. From 11g onwards we can also configure user-configurable failover conditions; upon detection of such conditions the broker ignores the property 'FastStartFailoverThreshold' and fails over immediately. Below are the user-configurable failover conditions.

Datafile Offline - Enabled by default; initiates failover if a datafile on the primary database experiences an I/O error resulting in the datafile being taken offline.
Corrupted Dictionary - Enabled by default; initiates failover if corruption of a critical database object is found.
Corrupted Controlfile - Enabled by default; initiates failover if corruption of the controlfile is detected.
Inaccessible Log File - Disabled by default; initiates failover if LGWR is unable to write to a member of a log group.
Stuck Archiver - Disabled by default; initiates failover if the archiver on the primary database is hung.
Application Induced Failover - Failover can be initiated by using the dbms_dg.initiate_fs_failover function, which gives the application the capability to trigger failover.

When these user-configurable conditions are detected, the broker will not bring up the failed primary database and thus leaves it in a down state. Automatic reinstatement of the database will also not be attempted by the broker. The two user-configurable failover conditions 'Inaccessible Log File' and 'Stuck Archiver' are instance specific in the case of RAC, so if any instance in a RAC cluster detects one of these two conditions then failover will occur regardless of the availability of the other instances in the cluster. Hence these conditions have to be considered carefully before enabling them in a RAC cluster. We can also register ORA errors so that failover takes place whenever one of the defined ORA errors is detected, but according to the Oracle manuals the only ORA error that is supported is ORA-00240. Usually you can set any user-defined ORA error condition as long as you reasonably believe that the occurrence of that ORA error impedes the business availability of your primary database. There is a diagnostic event, 16616, which can be set with different levels to simulate different error conditions, but in 12c not all of the levels seem to work. Level 284 attempts to simulate ORA-00240; however, the method it uses for simulation does not work on 12c. In 12c major changes have been made to the code that updates and reads records in the controlfile, so there is less need to take an exclusive enqueue on the controlfile, and this is the reason why this simulation does not work.
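Before moving on to the simulation levels, it is worth noting how the conditions described above are enabled or disabled in the first place. A minimal DGMGRL sketch (standard broker syntax, with ORA-00240 registered as an error condition as discussed above):

DGMGRL> ENABLE FAST_START FAILOVER CONDITION "Inaccessible Logfile";
DGMGRL> DISABLE FAST_START FAILOVER CONDITION "Stuck Archiver";
DGMGRL> ENABLE FAST_START FAILOVER CONDITION 240;
DGMGRL> SHOW FAST_START FAILOVER;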
Levels provided in broker diagnostic event 16616 for simulating various error conditions are 251 /* healthchk: datafile offline */ 252 /* healthchk: corrupted controlfil */ 253 /* healthchk: corrupted dictionary */ 254 /* healthchk: inaccessible logfile */ 255 /* healthchk: stuck archiver */ 256 /* healthchk: lost write */ 280 /* oracle error 27102 */ 281 /* oracle error 16506 */ 282 /* oracle error 16526 */ 283 /* oracle error 1578 */ 284 /* oracle error 240 */ Any of these broker diagnostic events can be set for simulation purpose by setting corresponding levels as shown below alter session set events '16616 trace name context forever, level '; Environment In this article we will go through one of the user-configurable failover condition 'Inaccessible Log File' by artificially simulating it and confirm the behavior is as expected. This demo has been conducted in environment having primary two node(prim-node1/prim-node2) RAC 12c database(primdb) and standby two node(stdby-node1/stdby-node2) RAC database(stdbydb) along with Observer(obsrvr-node) placed in different network from primary and standby database. Protection mode has been configured for Maximum Availability. Current broker configuration status is as shown below prim-node1 {/home/oracle}: dgmgrl sys/*****@primdb "show configuration verbose" DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production Copyright (c) 2000, 2013, Oracle. All rights reserved. Welcome to DGMGRL, type "help" for information. Connected as SYSDBA. Configuration - fsfo-demo Protection Mode: MaxAvailability Members: primdb - Primary database stdbydb - (*) Physical standby database (*) Fast-Start Failover target Properties: FastStartFailoverThreshold = '60' OperationTimeout = '30' TraceLevel = 'USER' FastStartFailoverLagLimit = '60' CommunicationTimeout = '180' ObserverReconnect = '0' FastStartFailoverAutoReinstate = 'TRUE' FastStartFailoverPmyShutdown = 'TRUE' BystandersFollowRoleChange = 'ALL' ObserverOverride = 'FALSE' ExternalDestination1 = '' ExternalDestination2 = '' PrimaryLostWriteAction = 'CONTINUE' Fast-Start Failover: ENABLED Threshold: 60 seconds Target: stdbydb Observer: obsrvr-node Lag Limit: 60 seconds (not in use) Shutdown Primary: TRUE Auto-reinstate: TRUE Observer Reconnect: (none) Observer Override: FALSE Configuration Status: SUCCESS Fast start failover has been configured and user-configurable condition 'Inaccessible Log File' has been enabled as shown below. prim-node1 {/home/oracle}: dgmgrl sys/*****@primdb "show fast_start failover" DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production Copyright (c) 2000, 2013, Oracle. All rights reserved. Welcome to DGMGRL, type "help" for information. Connected as SYSDBA. Fast-Start Failover: ENABLED Threshold: 30 seconds Target: stdbydb Observer: obsrvr-node Lag Limit: 30 seconds (not in use) Shutdown Primary: TRUE Auto-reinstate: TRUE Observer Reconnect: (none) Observer Override: FALSE Configurable Failover Conditions Health Conditions: Corrupted Controlfile NO Corrupted Dictionary NO Inaccessible Logfile YES Stuck Archiver NO Datafile Offline NO Oracle Error Conditions: Property 'Shutdown Primary' is set to TRUE, when failover happens old primary will be shutdown immediately to avoid any further modifications of data from already connected sessions. Property 'Auto-reinstate' is set to TRUE, when failover happens old primary will be automatically reinstate as standby database and will sync with new primary database. But both of these properties are not applicable for user-configurable conditions. 
When failover is triggered due to user-configurable condition then old primary will be shutdown and re-instate will not be attempted by the broker automatically. Simulation To simulate 'Inaccessible Logfile' condition we will remove/delete all the online redo log files related to thread 1 of first RAC instance. Before proceeding let's gather the status information of each redo log group. SQL> select thread#,group#, sequence#, members, ARCHIVED, status from v$log; THREAD# GROUP# SEQUENCE# MEMBERS ARC STATUS ---------- ---------- ---------- ---------- --- ---------------- 1 1 70 2 YES ACTIVE 1 2 71 2 NO CURRENT 2 3 69 2 YES ACTIVE 2 4 70 2 NO CURRENT Redo log group 1 of thread 1 status is active which means they contain data which is not yet written to data files by dbwr. Due to protection mode set as MaxAvailability with synchronous log transfer we can concur that there will be no data loss when failover occurs. Remove/delete all online log files related to thread 1 SQL> !rm /oradata1/PRIMDB/onlinelog/o1_mf_1__18637110189630_.log SQL> !rm /oradata2/PRIMDB/onlinelog/o1_mf_1__18637117200858_.log SQL> !rm /oradata1/PRIMDB/onlinelog/o1_mf_2__18637124324657_.log SQL> !rm /oradata2/PRIMDB/onlinelog/o1_mf_2__18637131414786_.log Immediately after removing online log files broker detects the 'Inaccessible Logfile' condition due to I/O errors caused by LGWR while it tries to write into online log files placed on NFS storage. In alert log file of primary database we could see all these details. Thu Apr 28 02:45:44 2016 Direct NFS: NFSERR 304 error encountered. Server lab-demo path lab-demo Thu Apr 28 02:45:44 2016 Errors in file /u01/app/oracle/diag/rdbms/PRIMDB/primdb1/trace/primdb1_lgwr_16386.trc: ORA-00345: redo log write error block 53 count 1 ORA-00312: online log 2 thread 1: '/oradata1/PRIMDB/onlinelog/o1_mf_2__18637124324657_.log' ORA-17500: ODM err:KGNFS WRITE FAIL:Stale File handle ORA-17500: ODM err:KGNFS WRITE FAIL:Stale File handle Thu Apr 28 02:45:44 2016 Errors in file /u01/app/oracle/diag/rdbms/PRIMDB/primdb1/trace/primdb1_lgwr_16386.trc: ORA-00346: log member marked as STALE and closed ORA-00312: online log 2 thread 1: '/oradata1/PRIMDB/onlinelog/o1_mf_2__18637124324657_.log' Thu Apr 28 02:45:49 2016 Direct NFS: write FAILED 70 Thu Apr 28 02:45:49 2016 Direct NFS: NFSERR 304 error encountered. Server lab-demo path lab-demo Thu Apr 28 02:45:51 2016 A user-configurable Fast-Start Failover condition was detected. The primary is shutting down due to Inaccessible Logfile. Database primdb will not be automatically reinstated. So old primary database has been shutdown and broker will not try to reinstate it due to detection of user-configurable condition. In Observer log file we could see that failover has taken place and it got completed in approx 2 minutes. 07:45:51.54 Thursday, April 28, 2016 Initiating Fast-Start Failover to database "stdbydb"... Performing failover NOW, please wait... Failover succeeded, new primary is "stdbydb" 07:47:30.82 Thursday, April 28, 2016 Status of broker configuration after failover is as shown below. prim-node1 {/home/oracle}: dgmgrl sys/****@primdb "show configuration verbose" DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production Copyright (c) 2000, 2013, Oracle. All rights reserved. Welcome to DGMGRL, type "help" for information. Connected as SYSDBA. 
Configuration - poc054.int.thomsonreuters.com Protection Mode: MaxAvailability Members: primdb - Primary database Warning: ORA-16817: unsynchronized fast-start failover configuration stdbydb - (*) Physical standby database (disabled) ORA-16661: the standby database needs to be reinstated (*) Fast-Start Failover target Properties: FastStartFailoverThreshold = '60' OperationTimeout = '30' TraceLevel = 'USER' FastStartFailoverLagLimit = '60' CommunicationTimeout = '180' ObserverReconnect = '0' FastStartFailoverAutoReinstate = 'TRUE' FastStartFailoverPmyShutdown = 'TRUE' BystandersFollowRoleChange = 'ALL' ObserverOverride = 'FALSE' ExternalDestination1 = '' ExternalDestination2 = '' PrimaryLostWriteAction = 'CONTINUE' Fast-Start Failover: ENABLED Threshold: 60 seconds Target: primdb Observer: obsrvr-node Lag Limit: 60 seconds (not in use) Shutdown Primary: TRUE Auto-reinstate: TRUE Observer Reconnect: (none) Observer Override: FALSE Configuration Status: WARNING As expected FSFO is in un-synchronized configuration due to unavailability of standby database. Broker configuration status is WARNING and states that standby database has to be reinstated. Let's reinstate standby database manually through broker as we had enabled flashback before simulating this scenario. prim-node1 {/home/oracle}: dgmgrl sys/****@stdbydb "reinstate database primdb" DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production Copyright (c) 2000, 2013, Oracle. All rights reserved. Welcome to DGMGRL, type "help" for information. Connected as SYSDBA. Reinstating database "primdb", please wait... Error: ORA-16653: failed to reinstate database Failed. Reinstatement of database "primdb" failed So reinstate of standby got failed and as per alert log file flashback database was failed due to unavailability of online log files which we deleted previously. Starting background process NSV0 Thu Apr 28 02:55:56 2016 NSV0 started with pid=53, OS id=12844 FLASHBACK DATABASE TO SCN 102068803 Thu Apr 28 02:56:01 2016 Errors in file /u01/app/oracle/diag/rdbms/primdb/primdb1/trace/primdb1_rsm0_23313.trc: ORA-00313: open failed for members of log group 1 of thread 1 ORA-00312: online log 1 thread 1: '/oradata2/PRIMDB/onlinelog/o1_mf_1__18637117200858_.log' ORA-17503: ksfdopn:4 Failed to open file /oradata2/PRIMDB/onlinelog/o1_mf_1__18637117200858_.log ORA-17500: ODM err:File does not exist ORA-00312: online log 1 thread 1: '/oradata1/PRIMDB/onlinelog/o1_mf_1__18637110189630_.log' ORA-17503: ksfdopn:4 Failed to open file /oradata1/PRIMDB/onlinelog/o1_mf_1__18637110189630_.log ORA-17500: ODM err:File does not exist Thu Apr 28 02:56:01 2016 Errors in file /u01/app/oracle/diag/rdbms/primdb/primdb1/trace/primdb1_rsm0_23313.trc: ORA-00313: open failed for members of log group 2 of thread 1 ORA-00312: online log 2 thread 1: '/oradata2/PRIMDB/onlinelog/o1_mf_2__18637131414786_.log' ORA-17503: ksfdopn:4 Failed to open file /oradata2/PRIMDB/onlinelog/o1_mf_2__18637131414786_.log ORA-17500: ODM err:File does not exist ORA-38754 signalled during: FLASHBACK DATABASE TO SCN 102068803... Thu Apr 28 02:56:04 2016 Since these online log files were deleted we can try to clear the online logfile groups 1 and 2. 
SQL> alter database clear logfile group 1; alter database clear logfile group 1 * ERROR at line 1: ORA-01624: log 1 needed for crash recovery of instance poc054a1 (thread 1) ORA-00312: online log 1 thread 1: '/oradata1/PRIMDB/onlinelog/o1_mf_1__18637110189630_.log' ORA-00312: online log 1 thread 1: '/oradata2/PRIMDB/onlinelog/o1_mf_1__18637117200858_.log' SQL> alter database clear logfile group 2; alter database clear logfile group 2 * ERROR at line 1: ORA-01624: log 2 needed for crash recovery of instance poc054a1 (thread 1) ORA-00312: online log 2 thread 1: '/oradata1/PRIMDB/onlinelog/o1_mf_2__18637124324657_.log' ORA-00312: online log 2 thread 1: '/oradata2/PRIMDB/onlinelog/o1_mf_2__18637131414786_.log' Its not allowing to clear the logfile groups as controlfile has the information about these logfiles which are required for crash recovery and thus flashback operation would also fail again. So without this redo information we cannot proceed further with standby reinstate. Resolution Since this testing is done in Maximum Availability mode with SYNC log transfer setting there is no loss in data while FSFO performed failover to standby database. So to reinstate the standby database we can get the redo information of these deleted online log files from our new primary database. Let's query and find what we need on old primary database for each thread. SQL> select group#,first_change# from v$log where thread#=1; GROUP# FIRST_CHANGE# ------ ------------- 1 102068615 2 102068695 We need log information from SCN 102068695, so query from primary to find the required log files. SQL> select name from v$archived_log where FIRST_CHANGE#>=102068695; NAME -------------------------------------------------------------------------------- /oraarch/stdbydb/2_1_910320360.dbf /oraarch/stdbydb/1_71_910144956.dbf /oraarch/stdbydb/2_70_910144956.dbf /oraarch/stdbydb/1_1_910320360.dbf /oraarch/stdbydb/2_2_910320360.dbf Transfer these log files from primary to standby database and register these files into standby controlfile by using RMAN catalog. Note that few of the logfiles are of different incarnation. scp /oraarch/stdbydb/2_1_910320360.dbf oracle@prim-node01:/oraarch/primdb/ scp /oraarch/stdbydb/1_71_910144956.dbf oracle@prim-node01:/oraarch/primdb/ scp /oraarch/stdbydb/2_70_910144956.dbf oracle@prim-node01:/oraarch/primdb/ scp /oraarch/stdbydb/1_1_910320360.dbf oracle@prim-node01:/oraarch/primdb/ scp /oraarch/stdbydb/2_2_910320360.dbf oracle@prim-node01:/oraarch/primdb/ prim-node01 {/home/oracle}: rman target / Recovery Manager: Release 12.1.0.2.0 - Production on Thu Apr 28 03:23:02 2016 Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved. 
connected to target database: POC054A (DBID=1278101452, not open) RMAN> catalog archivelog '/oraarch/primdb/2_1_910320360.dbf'; cataloged archived log archived log file name=/oraarch/primdb/2_1_910320360.dbf RECID=7646 STAMP=910322773 RMAN> catalog archivelog '/oraarch/primdb/1_71_910144956.dbf'; cataloged archived log archived log file name=/oraarch/primdb/1_71_910144956.dbf RECID=7647 STAMP=910322787 RMAN> catalog archivelog '/oraarch/primdb/2_70_910144956.dbf'; cataloged archived log archived log file name=/oraarch/primdb/2_70_910144956.dbf RECID=7648 STAMP=910322800 RMAN> catalog archivelog '/oraarch/primdb/1_1_910320360.dbf'; cataloged archived log archived log file name=/oraarch/primdb/1_1_910320360.dbf RECID=7649 STAMP=910322811 RMAN> catalog archivelog '/oraarch/primdb/2_2_910320360.dbf'; cataloged archived log archived log file name=/oraarch/primdb/2_2_910320360.dbf RECID=7650 STAMP=910322822 Now again try to reinstate old primary database primdb through broker. prim-node1 {/home/oracle}: dgmgrl sys/****@stdbydb "reinstate database primdb" DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production Copyright (c) 2000, 2013, Oracle. All rights reserved. Welcome to DGMGRL, type "help" for information. Connected as SYSDBA. Reinstating database "primdb", please wait... Reinstatement of database "primdb" succeeded In alert log file of new standby primdb it seems that reinstation was successfull but mrp errors out saying incarnation has been changed to proceed further for applying the archive log files of different incarnation. MRP0: Background Media Recovery applied all available redo. Recovery will be restarted once new redo branch is registered Thu Apr 28 03:35:27 2016 Errors in file /u01/app/oracle/diag/rdbms/primdb/primdb1/trace/primdb1_pr00_13044.trc: ORA-19906: recovery target incarnation changed during recovery Incarnations in new primary database stdbydb: RMAN> list incarnation using target database control file instead of recovery catalog List of Database Incarnations DB Key Inc Key DB Name DB ID STATUS Reset SCN Reset Time ------- ------- -------- ---------------- --- ---------- ---------- 44 44 POC054A 1278101452 PARENT 98432256 18-APR-16 45 45 POC054A 1278101452 PARENT 101420992 26-APR-16 46 46 POC054A 1278101452 CURRENT 102068825 28-APR-16 Incarnations in old primary database primdb: RMAN> list incarnation using target database control file instead of recovery catalog List of Database Incarnations DB Key Inc Key DB Name DB ID STATUS Reset SCN Reset Time ------- ------- -------- ---------------- --- ---------- ---------- 44 44 POC054A 1278101452 PARENT 98432256 18-APR-16 45 45 POC054A 1278101452 CURRENT 101420992 26-APR-16 Since standby is not aware of this new incarnation we could snap in new standby controlfile from primary to this standby database to make standby aware of both incarnations. Create standby controlfile on new primary stdbydb database SQL> alter database create standby controlfile as '/tmp/newstdbycntrl.ctl'; Database altered. SQL> !scp /tmp/newstdbycntrl.ctl oracle@prim-node1:/home/oracle Stop standby database and replace the standby controlfile by using the latest standby controlfile generated from new primary which is aware of both incarnations. 
prim-node1 {/home/oracle}: srvctl stop database -d primdb
prim-node1 {/home/oracle}: cp /home/oracle/newstdbycntrl.ctl /oradata1/primdb/PRIMDB/controlfile/o1_mf__102262659671_.ctl
prim-node1 {/home/oracle}: cp /home/oracle/newstdbycntrl.ctl /oradata2/primdb/PRIMDB/controlfile/o1_mf__102262723422_.ctl
prim-node1 {/home/oracle}: srvctl start database -d primdb

Now the standby detects the new incarnation from the controlfile and continues recovery to catch up with the primary. We were able to reinstate the standby with minimal effort and avoid a complete rebuild of the standby.

Conclusion

Fast-Start Failover provides a minimal RTO (Recovery Time Objective), but there are many other aspects that have to be considered before implementing it in production. One of these aspects is the time required to bring back the new standby, to avoid a prolonged single-point-of-failure window for the new primary database after the failover operation. By leveraging the flashback feature on both the primary and standby databases we can automate the process of standby reinstatement through the broker, but the success of reinstating the database depends on many other factors. In this article we considered the user-configurable condition "Inaccessible Logfile" and simulated the scenario to check whether we could bring back the new standby as soon as possible; we had to perform a few manual steps to sync the redo data from the new primary database because redo was lost on the old primary database. So RTO is directly proportional to the kind of database failure that caused the failover, and it is therefore highly recommended to develop a proof of concept of the RTO for each condition that can trigger failover before moving ahead in the production environment.
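Since both the automatic reinstate and the manual reinstate shown above depend on flashback being available, a quick pre-check on the primary and the standby is worthwhile. A minimal sketch:

SQL> select flashback_on from v$database;
SQL> show parameter db_flashback_retention_target
-- enable it if required (ensure db_recovery_file_dest is sized appropriately first)
SQL> alter database flashback on;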

Wiki Page: Configuring database role based Global services – 12c

Written by Nassyam Basha

Introduction

We have already seen how to manage Global Data Services in a Data Guard configuration, including failover tests. If a role transition occurs between the primary database and the physical standby database, then with normal (local) services we have to reconfigure the services according to the new role of each database; with global services, GDS takes care of everything, even after the role transition.

Role Based Global Services

As we know, Global Data Services is suited to replica-aware databases and provides great flexibility in load balancing, service failover and so on. Suppose we have added the Data Guard broker configuration to the GDS configuration and created a few services: read-write services that connect to the primary database and read-only services that connect to the physical standby database. Now consider that there is server maintenance on the primary database; to avoid downtime on production it is good practice to perform a role transition/switchover and later switch back. But here the question is: how do the read-write and read-only services behave? With local services we would have to reconfigure the services and we would need a maintenance window to do so; on RAC systems we would have to modify the services using the srvctl command line. With Global Data Services the job is simple and there is basically nothing to do beyond configuring the services in the preferred manner.

Demo on creating services

We will run a demo of these role-based services; to do that we will create one service for primary usage and one for the standby. Only the configuration of the services is shown here. Below we create the service cobol_process against the primary database role and nvision_report for the physical standby role.

GDSCTL>add service -service cobol_process -gdspool psfin -preferred_all -role PRIMARY
GDSCTL>add service -service nvision_report -gdspool psfin -preferred_all -role PHYSICAL_STANDBY

Start the created global data services.

GDSCTL>start service -service cobol_process -gdspool psfin
GDSCTL>start service -service nvision_report -gdspool psfin

Check the status of the services and databases.

GDSCTL>services Service "cobol_process.psfin.oradbcloud" has 1 instance(s). Affinity: ANYWHERE Instance "psfin%1", name: "ORC1", db: "CANADA", region: "westcan", status: ready. Service "nvision_report.psfin.oradbcloud" has 1 instance(s). Affinity: ANYWHERE Instance "psfin%11", name: "ORC1", db: "INDIA", region: "apac", status: ready. GDSCTL>databases Database: "canada" Registered: Y State: Ok ONS: N. Role: PRIMARY Instances: 1 Region: westcan Service: "cobol_process" Globally started: Y Started: Y Scan: N Enabled: Y Preferred: Y Service: "nvision_report" Globally started: Y Started: N Scan: Y Enabled: Y Preferred: Y Registered instances: psfin%1 Database: "india" Registered: Y State: Ok ONS: N. Role: PH_STNDBY Instances: 1 Region: apac Service: "cobol_process" Globally started: Y Started: N Scan: N Enabled: Y Preferred: Y Service: "nvision_report" Globally started: Y Started: Y Scan: Y Enabled: Y Preferred: Y Registered instances: psfin%11 GDSCTL>

The service cobol_process is running on the preferred instance of CANADA, whose database role is primary, while nvision_report is running on the instance of INDIA, whose database role is physical standby.
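For reference, clients typically resolve a global service through the global service manager (GSM) listener rather than a local database listener. A hypothetical tnsnames.ora entry for the cobol_process service is sketched below; the host, port and region values are placeholders for this demo environment, and the test connections used later in this article may resolve differently.

COBOL_PROCESS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = gsm-host)(PORT = 1571))
    (CONNECT_DATA =
      (SERVICE_NAME = cobol_process.psfin.oradbcloud)
      (REGION = westcan)
    )
  )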
Connectivity and the Listener status of Primary database [oracle@ORA-C2 ~]$ sqlplus sys/oracle@cobol_process as sysdba SQL*Plus: Release 12.1.0.2.0 Production on Wed Jun 1 04:27:41 2016 Copyright (c) 1982, 2014, Oracle. All rights reserved. Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options SQL> select database_role from v$database; DATABASE_ROLE ---------------- PRIMARY SQL> [oracle@ORA-C1 ~]$ lsnrctl status LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 01-JUN-2016 05:09:39 Copyright (c) 1991, 2014, Oracle. All rights reserved. Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ORA-C1.localdomain)(PORT=1521))) STATUS of the LISTENER ------------------------ Alias LISTENER Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production Start Date 28-MAY-2016 12:56:23 Uptime 3 days 16 hr. 13 min. 16 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora Listener Log File /u01/app/oracle/diag/tnslsnr/ORA-C1/listener/alert/log.xml Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ORA-C1.localdomain)(PORT=1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521))) Services Summary... Service "CANADA" has 1 instance(s). Instance "ORC1", status READY, has 1 handler(s) for this service... Service "CANADA_DGB" has 1 instance(s). Instance "ORC1", status READY, has 1 handler(s) for this service... Service "CANADA_DGMGRL" has 1 instance(s). Instance "ORC1", status UNKNOWN, has 1 handler(s) for this service... Service "ORC1XDB" has 1 instance(s). Instance "ORC1", status READY, has 0 handler(s) for this service... Service "cobol_process.psfin.oradbcloud" has 1 instance(s). Instance "ORC1", status READY, has 1 handler(s) for this service... The command completed successfully [oracle@ORA-C1 ~]$ Connectivity to the standby database and the Listener status [oracle@ORA-C2 ~]$ sqlplus sys/oracle@nvision_report as sysdba SQL*Plus: Release 12.1.0.2.0 Production on Wed Jun 1 04:27:55 2016 Copyright (c) 1982, 2014, Oracle. All rights reserved. Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options SQL> select database_role from v$database; DATABASE_ROLE ---------------- PHYSICAL STANDBY SQL> [oracle@ORA-C2 ~]$ lsnrctl status LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 01-JUN-2016 05:10:10 Copyright (c) 1991, 2014, Oracle. All rights reserved. Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ORA-C2.localdomain)(PORT=1521))) STATUS of the LISTENER ------------------------ Alias LISTENER Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production Start Date 28-MAY-2016 12:59:40 Uptime 3 days 16 hr. 10 min. 30 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora Listener Log File /u01/app/oracle/diag/tnslsnr/ORA-C2/listener/alert/log.xml Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ORA-C2.localdomain)(PORT=1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=ORA-C2.localdomain)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/admin/GCAT/xdb_wallet))(Presentation=HTTP)(Session=RAW)) Services Summary... 
Service "GCAT" has 1 instance(s). Instance "GCAT", status READY, has 1 handler(s) for this service... Service "GCATXDB" has 1 instance(s). Instance "GCAT", status READY, has 1 handler(s) for this service... Service "GDS$CATALOG.oradbcloud" has 1 instance(s). Instance "GCAT", status READY, has 1 handler(s) for this service... Service "INDIA" has 1 instance(s). Instance "ORC1", status READY, has 1 handler(s) for this service... Service "INDIA_DGB" has 1 instance(s). Instance "ORC1", status READY, has 1 handler(s) for this service... Service "INDIA_DGMGRL" has 1 instance(s). Instance "ORC1", status UNKNOWN, has 1 handler(s) for this service... Service "nvision_report.psfin.oradbcloud" has 1 instance(s). Instance "ORC1", status READY, has 1 handler(s) for this service... The command completed successfully [oracle@ORA-C2 ~]$ Broker configuration and Switchover Test [oracle@ORA-C1 ~]$ dgmgrl nassyam/oracle@canada DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production Copyright (c) 2000, 2013, Oracle. All rights reserved. Welcome to DGMGRL, type "help" for information. Connected as SYSDG. DGMGRL> show configuration Configuration - hadg Protection Mode: MaxPerformance Members: canada - Primary database india - Physical standby database Fast-Start Failover: DISABLED Configuration Status: SUCCESS (status updated 22 seconds ago) DGMGRL> DGMGRL> switchover to india Performing switchover NOW, please wait... Operation requires a connection to instance "ORC1" on database "india" Connecting to instance "ORC1"... Connected as SYSDBA. New primary database "india" is opening... Operation requires start up of instance "ORC1" on database "canada" Starting instance "ORC1"... ORACLE instance started. Database mounted. Database opened. Switchover succeeded, new primary is "india" DGMGRL> The configuration status is in SUCCESS state and the switchover is successful and now the new primary database is INDIA. Review of Databases and Services GDSCTL>databases Database: "canada" Registered: Y State: Ok ONS: N. Role: PH_STNDBY Instances: 1 Region: westcan Service: "cobol_process" Globally started: Y Started: N Scan: N Enabled: Y Preferred: Y Service: "nvision_report" Globally started: Y Started: Y Scan: N Enabled: Y Preferred: Y Registered instances: psfin%1 Database: "india" Registered: Y State: Ok ONS: N. Role: PRIMARY Instances: 1 Region: apac Service: "cobol_process" Globally started: Y Started: Y Scan: N Enabled: Y Preferred: Y Service: "nvision_report" Globally started: Y Started: N Scan: N Enabled: Y Preferred: Y Registered instances: psfin%11 GDSCTL>services Service "cobol_process.psfin.oradbcloud" has 1 instance(s). Affinity: ANYWHERE Instance "psfin%11", name: "ORC1", db: "INDIA", region: "apac", status: ready. Service "nvision_report.psfin.oradbcloud" has 1 instance(s). Affinity: ANYWHERE Instance "psfin%1", name: "ORC1", db: "CANADA", region: "westcan", status: ready. GDSCTL> After the successful switchover and above from the output of command “databases” we can see the Canada is physical standby database where it was the primary database prior to switchover and the india is now the primary database. When it comes to the switchover the cobol_process global service now is running on the India instance and the nvision_report global services is running on the Canada which is physical standby database. So we can see that the services role was changed automatically without human interaction, which is the great flexibility of the global data services if we configure role based. 
Crosschecking the services nature We have seen how the services role changed, now we will confirm by doing the manual connectivity to the databases using the global data services configured. Now we will connect to the nvision_report read only service, it was running earlier on the INDIA Instance and now it should run on CANADA instance after the role transitions. [oracle@ORA-C2 ~]$ sqlplus sys/oracle@nvision_report as sysdba SQL*Plus: Release 12.1.0.2.0 Production on Wed Jun 1 05:28:16 2016 Copyright (c) 1982, 2014, Oracle. All rights reserved. Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options SQL> select database_role from v$database; DATABASE_ROLE ---------------- PHYSICAL STANDBY SQL> [oracle@ORA-C1 ~]$ lsnrctl status LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 01-JUN-2016 05:29:20 Copyright (c) 1991, 2014, Oracle. All rights reserved. Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ORA-C1.localdomain)(PORT=1521))) STATUS of the LISTENER ------------------------ Alias LISTENER Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production Start Date 28-MAY-2016 12:56:23 Uptime 3 days 16 hr. 32 min. 57 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora Listener Log File /u01/app/oracle/diag/tnslsnr/ORA-C1/listener/alert/log.xml Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ORA-C1.localdomain)(PORT=1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521))) Services Summary... Service "CANADA" has 1 instance(s). Instance "ORC1", status READY, has 1 handler(s) for this service... Service "CANADA_DGB" has 1 instance(s). Instance "ORC1", status READY, has 1 handler(s) for this service... Service "CANADA_DGMGRL" has 1 instance(s). Instance "ORC1", status UNKNOWN, has 1 handler(s) for this service... Service "ORC1XDB" has 1 instance(s). Instance "ORC1", status READY, has 1 handler(s) for this service... Service "nvision_report.psfin.oradbcloud" has 1 instance(s). Instance "ORC1", status READY, has 1 handler(s) for this service... The command completed successfully [oracle@ORA-C1 ~]$ Now we will test connectivity of the read write cobol_process and also we can see that service is registered with the new primary database. [oracle@ORA-C2 ~]$ sqlplus sys/oracle@cobol_process as sysdba SQL*Plus: Release 12.1.0.2.0 Production on Wed Jun 1 05:28:44 2016 Copyright (c) 1982, 2014, Oracle. All rights reserved. Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options SQL> select database_role from v$database; DATABASE_ROLE ---------------- PRIMARY SQL> [oracle@ORA-C2 ~]$ lsnrctl status LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 01-JUN-2016 05:29:07 Copyright (c) 1991, 2014, Oracle. All rights reserved. Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ORA-C2.localdomain)(PORT=1521))) STATUS of the LISTENER ------------------------ Alias LISTENER Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production Start Date 28-MAY-2016 12:59:40 Uptime 3 days 16 hr. 29 min. 
27 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora Listener Log File /u01/app/oracle/diag/tnslsnr/ORA-C2/listener/alert/log.xml Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ORA-C2.localdomain)(PORT=1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=ORA-C2.localdomain)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/admin/GCAT/xdb_wallet))(Presentation=HTTP)(Session=RAW)) Services Summary... Service "GCAT" has 1 instance(s). Instance "GCAT", status READY, has 1 handler(s) for this service... Service "GCATXDB" has 1 instance(s). Instance "GCAT", status READY, has 1 handler(s) for this service... Service "GDS$CATALOG.oradbcloud" has 1 instance(s). Instance "GCAT", status READY, has 1 handler(s) for this service... Service "INDIA" has 1 instance(s). Instance "ORC1", status READY, has 1 handler(s) for this service... Service "INDIA_DGB" has 1 instance(s). Instance "ORC1", status READY, has 1 handler(s) for this service... Service "INDIA_DGMGRL" has 1 instance(s). Instance "ORC1", status UNKNOWN, has 1 handler(s) for this service... Service "cobol_process.psfin.oradbcloud" has 1 instance(s). Instance "ORC1", status READY, has 1 handler(s) for this service... The command completed successfully [oracle@ORA-C2 ~]$

Conclusion

This article gives an idea of how to configure role-based global services so that the services keep working without any maintenance after role transitions in the Data Guard configuration. With local services an additional DBA would need to be available to manage, relocate or reconfigure the services, but here everything is taken care of by GDS.

Wiki Page: Using Global and Session sequences - Oracle Active Data Guard 12c

Introduction

So far we have used sequences on the primary database so that users can generate unique integers. Even in 11gR2 Active Data Guard there are many limitations for applications that use global temporary tables. From 12c, applications are allowed to use already created global temporary tables on the standby and can use global or session sequences as needed. This article explains how to use global and session sequences and how they actually work.

Global Temporary tables with Active Data Guard

To work with global or session sequences we must first have temporary tables, because the standby (ADG) can perform DML on temporary tables whereas it cannot perform DML on regular tables. Below is a quick look at how to create a global temporary table and how easy it is to run DML against it on the standby; I have written a more detailed article on global temporary tables if you are looking for more information [http://www.toadworld.com/platforms/oracle/w/wiki/11084.12c-active-data-guard-dml-on-temporary-tables.aspx].

Primary

Note that DDL, such as the create global temporary table statement, should be issued on the primary database, and of course, as said above, DML is allowed on the standby database.

SQL> select * from seq_data; QID QNAME ---------- ---------- 1 AAA 2 BBB 3 CCC 4 DDD SQL> create global temporary table gtt_seq on commit preserve rows as select * from seq_data; Table created. SQL> select * from gtt_seq; QID QNAME ---------- ---------- 1 AAA 2 BBB 3 CCC 4 DDD SQL>

From the above output we can see the rows that were inserted, but if you run the same query on the standby it returns zero rows; from there you can start using the temporary table with inserts, updates or any other DML.

Standby:

SQL> select database_role from v$database; DATABASE_ROLE ---------------- PHYSICAL STANDBY SQL> select * from gtt_seq; no rows selected SQL> insert into gtt_seq select * from seq_data; 4 rows created. SQL> commit; Commit complete. SQL> select * from gtt_seq; QID QNAME ---------- ---------- 1 AAA 2 BBB 3 CCC 4 DDD SQL>

So the above example makes it clear that we are able to perform DML even on the standby database. Remember that once the session disconnects, the data no longer exists in the temporary table.

Global Sequences

From Active Data Guard 12c, sequences created using the default settings, i.e. CACHE and NOORDER, can be used from the standby database. When such a sequence is used by the standby, the standby allocates itself a unique range of sequence numbers; once the complete range is used, a new range is allocated to the standby database. Wherever a sequence range is assigned, a unique stream of sequence numbers is maintained across the whole Data Guard configuration. There are a few requirements for using these sequences: while creating them ensure they are CACHE and NOORDER, and the standby should have a remote destination (log_archive_dest_n) configured back to the primary database. Apart from that, Oracle recommends a large cache, because ranges have to be allocated and communicated across all the databases of the configuration, so performance benefits from fewer allocations. Sequences can be created with the default values or with your own settings as the requirements dictate, for example increment by, cache size, start with value and so on. As Active Data Guard accepts the default configuration, we have created the global sequence with default settings.
SQL> create sequence gseq global; Sequence created. SQL> Or we can create sequence with custom values such as create sequence gseq increment by 1 start with 1 nomaxvalue nocycle cache 100 global; We can check the sequence settings by using the view "user_sequences" SQL> select sequence_name,min_value,max_value,cache_size,order_flag from user_sequences where sequence_name='GSEQ'; SEQUENCE_NAME MIN_VALUE MAX_VALUE CACHE_SIZE O -------------- ---------- ------------------------------- ---------- - GSEQ 1 9999999999999999999999999999 20 N SQL> After having the global sequence created, Now we are very much to start playing with the global sequences. SQL> select * from gtt_seq; QID QNAME ---------- ---------- 1 AAA 2 BBB 3 CCC 4 DDD SQL> If you look carefully the rows in above table with QID column have the values from 1 to 4, but when we try to update the column QID with the nextval of the sequence and the QID values are changed based on the sequence settings. Of course before start using sequences you are always allowed to alter any settings. SQL> update gtt_seq set qid=gseq.nextval; 4 rows updated. SQL> commit; Commit complete. SQL> select * from gtt_seq; QID QNAME ---------- ---------- 21 AAA 22 BBB 23 CCC 24 DDD SQL> Now from the above output it clears that the values of QID are already updated, Next check how the values from standby database. Standby: From standby database of same table, the values are not updated with the sequence nextval because the sequence range was specific to primary and they are not going to visible to standby database. SQL> select * from gtt_seq; QID QNAME ---------- ---------- 1 AAA 2 BBB 3 CCC 4 DDD If we update the same table gtt_seq on standby with the same conditions, probably you are expecting with the cache size value? No. Because the sequence range was already assigned to primary. Hence the new set of sequence range will be allocated to standby i.e. from 40 SQL> update gtt_seq set qid=gseq.nextval; 4 rows updated. SQL> commit; Commit complete. SQL> select * from gtt_seq; QID QNAME ---------- ---------- 41 AAA 42 BBB 43 CCC 44 DDD SQL> Now the above output is clear enough to know how the global sequence is working. Finally we will check with one more test on primary database by updating the table. Primary: SQL> update gtt_seq set qid=gseq.nextval; 4 rows updated. SQL> commit; Commit complete. SQL> select * from gtt_seq; QID QNAME ---------- ---------- 25 AAA 26 BBB 27 CCC 28 DDD SQL> Probably you were expecting the QID value starting from 61? No... because the given sequence range is still available with primary database and it can be used. So this output illustrates how the Data Guard manages the global temporary tables. Session Sequences In regular sequences(global), it maintains the uniqueness of sequence range but when it comes to session sequences, it maintains unique range number of sequences with in a session. In global sequences there are limitations to configure cache and noorder but in Session sequences supports most of the combinations. The session sequences we should create them in primary database and later they can be accessed on standby databases. Now we will walk through with the test case with Session sequences. The practice is almost same as global sequences but the results vary, we will see how. The main prerequisite is to having the Global temporary table and having sequence with "session" attribute. Primary SQL> create sequence sseq session; Sequence created. 
SQL> SQL> select * from gtt_seq; no rows selected SQL> insert into gtt_seq select * from seq_data; 4 rows created. SQL> commit; Commit complete. SQL> select * from gtt_seq; QID QNAME ---------- ---------- 1 AAA 2 BBB 3 CCC 4 DDD SQL> After creating session sequence, we have inserted few rows from other table and now we will update the table with session sequence next value. SQL> update gtt_seq set qid=sseq.nextval; 4 rows updated. SQL> commit; Commit complete. SQL> select * from gtt_seq; QID QNAME ---------- ---------- 1 AAA 2 BBB 3 CCC 4 DDD SQL> Based on the cache size the unique range will be allocated to the session and the values remained same as it's an initial allocation. Now we will perform same transaction over global temporary table. Standby SQL> select * from gtt_seq; no rows selected SQL> insert into gtt_seq select * from seq_data; 4 rows created. SQL> commit; Commit complete. SQL> select * from gtt_seq; QID QNAME ---------- ---------- 1 AAA 2 BBB 3 CCC 4 DDD After inserting rows, when updating the column QID with the session sequence next value, in case of global temporary tables the series started from 21 but because with session sequence the unique range again started from 1. SQL> update gtt_seq set qid=sseq.nextval; 4 rows updated. SQL> commit; Commit complete. SQL> select * from gtt_seq; QID QNAME ---------- ---------- 1 AAA 2 BBB 3 CCC 4 DDD SQL> For the confirmation how the session sequence is working, below example should give clear picture after updating the same table with the session sequence next value. SQL> update gtt_seq set qid=sseq.nextval; 4 rows updated. SQL> commit; Commit complete. SQL> select * from gtt_seq; QID QNAME ---------- ---------- 5 AAA 6 BBB 7 CCC 8 DDD SQL> As we are performing from same session without exit, then the sequence allocated after 4 and used the values 5 to 8 as per the expectations of session sequence. Altering Sequences After creating sequences we can alter the session type any time, i.e. eiter from Global sequence to Session sequence or vice versa. SQL> select sequence_name,session_flag from user_sequences where sequence_name='GSEQ'; SEQUENCE_NAME S --------------- - GSEQ N To know the sequence type, we can describe the view "user_sequences" and for the column "session_flag". SQL> alter sequence gseq session; Sequence altered. SQL> select sequence_name,session_flag from user_sequences where sequence_name='GSEQ'; SEQUENCE_NAME S --------------- - GSEQ Y After performing the sequence type, now it shows that the sequence type is changed from global to session sequence. Likewise we can change from session sequence to global sequence as below. SQL> alter sequence gseq global; Sequence altered. SQL> select sequence_name,session_flag from user_sequences where sequence_name='GSEQ'; SEQUENCE_NAME S --------------- - GSEQ N SQL> Conclusion we've seen how to use global and session sequence as required to the application and great flexibility to use them with global temporary tables from the standby database of 12c Active Data Guard feature. References: https://docs.oracle.com/database/121/SBYDB/manage_ps.htm#SBYDB5164 https://docs.oracle.com/database/121/SQLRF/statements_6017.htm#SQLRF01314

Wiki Page: Oracle RAC on Solaris LDOM Shared Disk configuration

Introduction: These days certain words like "cloud", "virtualization" and "consolidation" are used very frequently in the IT industry, yet many organizations are still not convinced to run application and database workloads on virtualized platforms. Is it really a bad idea to run databases and applications on a virtualized platform? I believe the answer to this question is both "YES" and "NO". If the virtualized environments are poorly configured and deployed then the answer is YES, because the business may face bad system performance, business outages, and long delays finding the root cause of problems and fixing them. If the virtualized environment is properly deployed, taking all hardware considerations into account, then the answer is NO, and such environments are far better in terms of availability and hardware utilization. This article does not cover a step-by-step method for configuring and installing Real Application Clusters on Oracle Solaris SPARC based virtualization (LDOMs); rather, it covers the best practices you should follow for configuring the shared disk devices inside the LDOMs that will be used for the RAC installation. The diagram below shows a typical deployment of SPARC based virtualization: there are two servers, and Oracle RAC is installed on an LDOM on each physical server. The main purpose of this article is to help individuals who are planning to use LDOMs for their RAC deployments understand what to consider while configuring the shared storage devices on LDOM servers. If the shared devices are not configured properly then we may encounter node eviction issues now and then, so we must be very careful while configuring them. In this article I will demonstrate one issue which was encountered by at least three customers.

Environment Details:
Two Physical Servers - Oracle SPARC T5-4
RAC deployed on two LDOMs, one on each physical server
Oracle ZFS Storage was used for shared storage

S.No. Server Description
1 controlhost01 Controller host domain - server1
2 racnode1 Guest ldom server on server1
3 controlhost02 Controller host domain - server2
4 racnode2 Guest ldom server on server2

Oracle Grid Infrastructure 11.2.0.3 was running without issues, but for some maintenance one of the servers was rebooted, and after the node reboot that node kept getting evicted from the cluster.

Observations:

Log messages from the operating system:

Jan 14 03:52:10 racnode1 last message repeated 1 time Jan 14 04:26:32 racnode1 CLSD: [ID 770310 daemon.notice] The clock on host racnode1 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time. Jan 14 04:45:22 racnode1 vdc: [ID 795329 kern.notice] NOTICE: vdisk@1 disk access failed Jan 14 04:49:33 racnode1 last message repeated 5 times Jan 14 04:50:23 racnode1 vdc: [ID 795329 kern.notice] NOTICE: vdisk@1 disk access failed Jan 14 04:51:14 racnode1 last message repeated 1 time Jan 14 05:00:11 racnode1 CLSD: [ID 770310 daemon.notice] The clock on host racnode1 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
Jan 14 05:33:34 racnode1 last message repeated 1 time Jan 14 05:45:25 racnode1 vdc: [ID 795329 kern.notice] NOTICE: vdisk@1 disk access failed Jan 14 05:48:54 racnode1 last message repeated 4 times Jan 14 05:49:44 racnode1 vdc: [ID 795329 kern.notice] NOTICE: vdisk@1 disk access failed Jan 14 05:52:22 racnode1 last message repeated 3 times Jan 14 06:09:21 racnode1 CLSD: [ID 770310 daemon.notice] The clock on host racnode1 has been updated by the Cluster Time Synchronization Service to be synchronous with th e mean cluster time. Log message from the GI logs : NOTE: cache mounting group 3/0xF98788E3 (OCR) succeeded NOTE: cache ending mount (success) of group OCR number=3 incarn=0xf98788e3 GMON querying group 1 at 10 for pid 18, osid 8795 Thu Jan 28 02:15:18 2016 NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 1 SUCCESS: diskgroup ARCH was mounted GMON querying group 2 at 11 for pid 18, osid 8795 NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 2 SUCCESS: diskgroup DATA was mounted GMON querying group 3 at 12 for pid 18, osid 8795 NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 3 SUCCESS: diskgroup OCR was mounted SUCCESS: ALTER DISKGROUP ALL MOUNT /* asm agent call crs *//* {0:0:2} */ SQL> ALTER DISKGROUP ALL ENABLE VOLUME ALL /* asm agent *//* {0:0:2} */ SUCCESS: ALTER DISKGROUP ALL ENABLE VOLUME ALL /* asm agent *//* {0:0:2} */ Thu Jan 28 02:15:19 2016 WARNING: failed to online diskgroup resource ora.ARCH.dg (unable to communicate with CRSD/OHASD) WARNING: failed to online diskgroup resource ora.DATA.dg (unable to communicate with CRSD/OHASD) WARNING: failed to online diskgroup resource ora.OCR.dg (unable to communicate with CRSD/OHASD) Thu Jan 28 02:15:36 2016 NOTE: Attempting voting file refresh on diskgroup OCR NOTE: Voting file relocation is required in diskgroup OCR NOTE: Attempting voting file relocation on diskgroup OCR [/u01/grid/bin/oraagent.bin(9694)]CRS-5818:Aborted command 'check' for resource 'ora.ARCH.dg'. Details at (:CRSAGF00113:) {1:57521:2} in /u01/grid/log/racnode1/agent/crsd/oraagent_grid/oraagent_grid.log. 2016-01-28 02:18:42.213 [/u01/grid/bin/oraagent.bin(9694)]CRS-5818:Aborted command 'check' for resource 'ora.DATA.dg'. Details at (:CRSAGF00113:) {1:57521:2} in /u01/grid/log/racnode1/agent/crsd/oraagent_grid/oraagent_grid.log. 2016-01-28 02:18:42.213 [/u01/grid/bin/oraagent.bin(9694)]CRS-5818:Aborted command 'check' for resource 'ora.LISTENER_SCAN3.lsnr'. Details at (:CRSAGF00113:) {1:57521:2} in /u01/grid/log/racnode1/agent/crsd/oraagent_grid/oraagent_grid.log. 2016-01-28 02:18:42.410 [/u01/grid/bin/oraagent.bin(9694)]CRS-5016:Process "/u01/grid/opmn/bin/onsctli" spawned by agent "/u01/grid/bin/oraagent.bin" for action "check" failed: details at "(:CLSN00010:)" in "/u01/grid/log/racnode1/agent/crsd/oraagent_grid/oraagent_grid.log" 2016-01-28 02:18:50.897 [/u01/grid/bin/oraagent.bin(9694)]CRS-5818:Aborted command 'check' for resource 'ora.asm'. Details at (:CRSAGF00113:) {1:57521:2} in /u01/grid/log/racnode1/agent/crsd/oraagent_grid/oraagent_grid.log. Cause: On further Investigation we found the logical devices names used by ASM on both LDOM Cluster nodes are not same and this is the reason one of the Instance is always getting evicted. 
RACNODE1 root@controlhost01:~# ldm list -o disk racnode NAME racnode DISK NAME VOLUME TOUT ID DEVICE SERVER MPGROUP OS OS@racnode 0 disk@0 primary cdrom cdrom@racnode 1 disk@1 primary DATA DATA@RACDisk 5 disk@5 primary OCR1 OCR1@RACDisk 2 disk@2 primary OCR2 OCR2@RACDisk 3 disk@3 primary OCR3 OCR3@RACDisk 4 disk@4 primary ARCH ARCH@RACDisk 6 disk@6 primary root@controlhost01:~# RACNODE2 root@controlhost02:~# ldm list -o disk racnode NAME racnode DISK NAME VOLUME TOUT ID DEVICE SERVER MPGROUP OS OS@racnode 0 disk@0 primary OCR1 OCR1@RACDisk 2 disk@2 primary OCR2 OCR2@RACDisk 3 disk@3 primary OCR3 OCR3@RACDisk 4 disk@4 primary DATA DATA@RACDisk 5 disk@5 primary ARCH ARCH@RACDisk 1 disk@1 primary root@controlhost02:~# If we observe there is difference in the number of devices of racnode on controlhost01 and controlhost02. controlhost01 has seven devices in total and controlhost02 has six devices in total and if you observe there is difference in the logical name as well. controlhost01 ==> ARCH ARCH@RACDisk 1 disk@1 primary controlhost02 ==> ARCH ARCH@RACDisk 6 disk@6 primary On racnode1 the logical device name allocated is "1" and on racnode2 the logical device name allocated is "6". Lets see the device name allocated on each cluster node: RACNODE2: -bash-3.2# echo|format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c0d0 /virtual-devices@100/channel-devices@200/disk@0 1. c0d2 OCR1 /virtual-devices@100/channel-devices@200/disk@2 2. c0d3 OCR2 /virtual-devices@100/channel-devices@200/disk@3 3. c0d4 OCR3 /virtual-devices@100/channel-devices@200/disk@4 4. c0d5 DATA /virtual-devices@100/channel-devices@200/disk@5 5. c0d6 ARCH /virtual-devices@100/channel-devices@200/disk@6 Specify disk (enter its number): Specify disk (enter its number): -bash-3.2# RACNODE1: -bash-3.2# echo|format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c0d0 /virtual-devices@100/channel-devices@200/disk@0 1. c0d1 ARCH /virtual-devices@100/channel-devices@200/disk@1 2. c0d2 OCR1 /virtual-devices@100/channel-devices@200/disk@2 3. c0d3 OCR2 /virtual-devices@100/channel-devices@200/disk@3 4. c0d4 OCR3 /virtual-devices@100/channel-devices@200/disk@4 5. c0d5 DATA /virtual-devices@100/channel-devices@200/disk@5 Specify disk (enter its number): Specify disk (enter its number): -bash-3.2# Let's check the ASM disks from ASM instance's on both rac nodes: racnode2 SQL> select name, path from v$asm_disk; NAME PATH -------------------------------------------------- -------------------- OCR_0002 /dev/rdsk/c0d4s4 DATA_0000 /dev/rdsk/c0d5s4 OCR_0000 /dev/rdsk/c0d2s4 OCR_0001 /dev/rdsk/c0d3s4 ARCH_0000 /dev/rdsk/c0d1s4 racnode1 SQL> select name, path from v$asm_disk; NAME PATH -------------------------------------------------- -------------------- OCR_0002 /dev/rdsk/c0d4s4 DATA_0000 /dev/rdsk/c0d5s4 OCR_0000 /dev/rdsk/c0d2s4 OCR_0001 /dev/rdsk/c0d3s4 ARCH_0000 /dev/rdsk/c0d6s4 Here is the problem the ASM disks paths for ARCH Disk group is different and it creating problem for Grid Infrastructure to understand which path is a correct and valid path. So we must be very careful about the logical device names of shared disks. If we came across such situation what we should do to overcome this Issue. On racnode1 there is an additional device allocated - CDROM and it is the one which caused the problem for changing the device names. 
root@controlhost01:~# ldm list -o disk racnode
NAME
racnode

DISK
    NAME     VOLUME          TOUT  ID  DEVICE   SERVER   MPGROUP
    OS       OS@racnode            0   disk@0   primary
    cdrom    cdrom@racnode         1   disk@1   primary
    DATA     DATA@RACDisk          5   disk@5   primary
    OCR1     OCR1@RACDisk          2   disk@2   primary
    OCR2     OCR2@RACDisk          3   disk@3   primary
    OCR3     OCR3@RACDisk          4   disk@4   primary
    ARCH     ARCH@RACDisk          6   disk@6   primary
root@controlhost01:~#

On racnode2 the CDROM device doesn't even exist:

root@controlhost02:~# ldm list -o disk racnode
NAME
racnode

DISK
    NAME     VOLUME          TOUT  ID  DEVICE   SERVER   MPGROUP
    OS       OS@racnode            0   disk@0   primary
    OCR1     OCR1@RACDisk          2   disk@2   primary
    OCR2     OCR2@RACDisk          3   disk@3   primary
    OCR3     OCR3@RACDisk          4   disk@4   primary
    DATA     DATA@RACDisk          5   disk@5   primary
    ARCH     ARCH@RACDisk          1   disk@1   primary
root@controlhost02:~#

There is no CDROM available for racnode2. The solution for this issue is to remove the incorrect logical device from the guest domain using the control domain and assign it again with the correct logical name. We also need to delete the CDROM from the racnode1 guest domain.

Remove the CDROM:

root@controlhost01:~# ldm rm-vdisk cdrom racnode1

Check the status of the devices:

root@controlhost01:~# ldm list -o disk racnode
NAME
database

DISK
    NAME     VOLUME          TOUT  ID  DEVICE   SERVER   MPGROUP
    OS       OS@RACDisk            0   disk@0   primary
    DATA     DATA@RACDisk          5   disk@5   primary
    OCR1     OCR1@RACDisk          2   disk@2   primary
    OCR2     OCR2@RACDisk          3   disk@3   primary
    OCR3     OCR3@RACDisk          4   disk@4   primary
    ARCH     ARCH@RACDisk          6   disk@6   primary
root@controlhost01:~#

The CDROM is now removed from the guest domain. Now it is time to remove the ASM logical device, so we must ensure that a backup has been taken and that it can be restored. ASM will not recognize the device once it has been removed and reconnected, so we must follow the complete procedure for provisioning a new LUN to ASM.

Action Plan:
- Perform a backup of the data residing on the disk group
- Drop the disk group
- Remove the device from the guest domain
- Add the device using the correct logical name
- Label the device
- Change the ownership and permissions
- Create the ASM disk group
- Restore the data on the ASM disk group

I will not demonstrate the detailed steps for the above action plan, but I will list the steps that need to be performed at the control domain and the guest domain.

- Remove the incorrect logical device from the guest domain:

root@controlhost01:~# ldm rm-vdisk ARCH racnode
root@controlhost01:~# ldm list -o disk racnode
NAME
database

DISK
    NAME     VOLUME          TOUT  ID  DEVICE   SERVER   MPGROUP
    OS       OS@RACDisk            0   disk@0   primary
    DATA     DATA@RACDisk          5   disk@5   primary
    OCR1     OCR1@RACDisk          2   disk@2   primary
    OCR2     OCR2@RACDisk          3   disk@3   primary
    OCR3     OCR3@RACDisk          4   disk@4   primary
root@controlhost01:~#

There is no ARCH disk now, after the removal of the logical device.

- Add the logical device with the correct name:

root@controlhost01:~# ldm add-vdisk id=1 ARCH ARCH@RACDisk racnode
root@controlhost01:~# ldm list -o disk racnode
NAME
database

DISK
    NAME     VOLUME          TOUT  ID  DEVICE   SERVER   MPGROUP
    OS       OS@RACDisk            0   disk@0   primary
    DATA     DATA@RACDisk          5   disk@5   primary
    OCR1     OCR1@RACDisk          2   disk@2   primary
    OCR2     OCR2@RACDisk          3   disk@3   primary
    OCR3     OCR3@RACDisk          4   disk@4   primary
    ARCH     ARCH@RACDisk          1   disk@1   primary
root@host01:~#

The newly added device is now available with the correct logical device ID. Label the disk from the guest LDOM operating system and change the permissions and ownership of the newly added device; a minimal sketch of these guest-side commands follows below. After this step the device is ready to be used by the ASM disk group.
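As a rough sketch of those guest-side steps, assuming the re-added ARCH device shows up as c0d1 and that Grid Infrastructure runs as the grid user with group asmadmin (both are placeholder names, adjust them to your environment):

# On the guest domain (racnode1): label the new disk interactively with format,
# selecting the newly presented device (c0d1) and writing a label to it.
format

# Then hand the ARCH slice to the Grid Infrastructure owner so ASM can open it.
chown grid:asmadmin /dev/rdsk/c0d1s4
chmod 660 /dev/rdsk/c0d1s4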
Conclusion: The purpose of this article is to help individuals who are implementing Oracle RAC on Oracle Solaris LDOMs. It took three days for us and Oracle Support to do the root cause analysis for this problem. I strongly recommend verifying the logical device names across all cluster nodes before installing the cluster software, including after multiple hard and soft reboots, and verifying them again after the cluster installation; a quick way to compare the nodes is sketched below.
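For example, the comparison could be scripted from any host that can reach both control domains; the host names and the guest domain name below are simply the ones used in this article:

# Capture the virtual disk layout of the guest domain from each control domain
# and diff the two listings; any difference in the ID/DEVICE columns is a red flag.
ssh root@controlhost01 "ldm list -o disk racnode" > /tmp/racnode_disks_01.txt
ssh root@controlhost02 "ldm list -o disk racnode" > /tmp/racnode_disks_02.txt
diff /tmp/racnode_disks_01.txt /tmp/racnode_disks_02.txt

# On each guest node, also confirm the OS-level device names that ASM will use.
echo | format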

Blog Post: Oracle Cloud v/s Amazon Cloud

A few years ago, I taught an online class in Oracle Database administration for the University of Washington. Every student was given their own virtual machine in the Amazon cloud for the duration of the class, courtesy of Amazon. It was ridiculously simple to clone, start, stop, and destroy virtual machines using the Amazon CLI (command line interface). All students had full SSH and SQL*Net access to their virtual machines in the Amazon cloud. At that time, Oracle had a pitiful cloud offering: a single schema on Oracle Database 11g, a maximum of 50 GB of database storage, and no Oracle Net (SQL*Net) access. But Oracle has come a long way in the last couple of years, and its cloud offering now boasts a command line interface and full SSH and SQL*Net access. I checked out the Oracle CLI and it appears to have everything that the Amazon CLI offers. For example, the Oracle CLI functions iaas-start-vservers and iaas-stop-vservers for starting and stopping virtual machines correspond to the Amazon CLI functions ec2_start_instances and ec2_stop_instances. The future bodes well. Oracle showed off its stuff at the recent NoCOUG conference at PayPal. There were hands-on labs on In-Memory Option, Enterprise Manager 13c, and Oracle Database 12c. Each lab had 25 students, each of whom got their own virtual machine with full SSH and SQL*Net access.

Wiki Page: SQL Plan Management

The Oracle documentation does not reveal all the details of how SQL Plan Management works. According to the documentation:

When the database performs a hard parse of a SQL statement, the optimizer generates a best-cost plan. By default, the optimizer then attempts to find a matching plan in the SQL plan baseline for the statement. If no plan baseline exists, then the database runs the statement with the best-cost plan. If a plan baseline exists, then the optimizer behavior depends on whether the newly generated plan is in the plan baseline:
- If the new plan is in the baseline, then the database executes the statement using the found plan.
- If the new plan is not in the baseline, then the optimizer marks the newly generated plan as unaccepted and adds it to the plan history. Optimizer behavior then depends on the contents of the plan baseline:
- If fixed plans exist in the plan baseline, then the optimizer uses the fixed plan (see "Fixed Plans") with the lowest cost.
- If no fixed plans exist in the plan baseline, then the optimizer uses the baseline plan with the lowest cost.
- If no reproducible plans exist in the plan baseline, which could happen if every plan in the baseline referred to a dropped index, then the optimizer uses the newly generated cost-based plan.

Here is what actually happens: When the optimizer performs a hard parse of a SQL statement, it first generates a plan without regard to whether a plan baseline exists or not. If a plan baseline exists, then:
- If the generated plan matches a baseline plan that is both enabled and accepted (or if there is no baseline plan that is both enabled and accepted), then the database uses the generated plan. The phv2 (plan hash value 2; the predicate section of the plan is included in the computation) is used for matching.
- If there is at least one baseline plan that is both enabled and accepted but the generated plan does not match any of them, then:
- Beginning with any fixed baseline plans, the optimizer attempts to reproduce one of the enabled and accepted baseline plans. For each such plan, the optimizer uses the set of hints in the plan together with the optimizer_features_enable setting in the session. If that fails, the optimizer makes one more attempt, but this time it uses the optimizer_features_enable hint in the baseline plan but not the other hints. The process stops as soon as an enabled and accepted plan is reproduced.
- If none of the enabled and accepted plans can be reproduced, then, from among all the plans that have been generated so far, the optimizer picks the plan that has the lowest estimated cost. If there is no fixed baseline plan, the optimizer also adds this plan to the plan history with ENABLED=YES, ACCEPTED=NO, and FIXED=NO.

The following script can be used to see first-hand how SQL Plan Management really works. It uses the Employees table in the HR schema. Every time the query is executed, the script drops the index that was used, which forces the optimizer to consider alternative plans. The script was tested using Oracle Database 12.1.0.2.
connect hr/hr
-- -----------------------------------------------------------------------------
set linesize 200
set tab off
set pagesize 1000
set echo on
column sql_handle format a40
column plan_name format a40
-- -----------------------------------------------------------------------------
variable job_id varchar2(10);
variable manager_id number;
variable department_id number;

exec :job_id := 'AD_VP';
exec :manager_id := 100;
exec :department_id := 90;
-- -----------------------------------------------------------------------------
select /* SPM TEST */
  employee_id,
  job_id,
  manager_id,
  department_id
from employees
where job_id=:job_id
and manager_id=:manager_id
and department_id=:department_id;

select * from table(dbms_xplan.display_cursor(null, null));

select
  p.sql_id,
  p.plan_hash_value,
  p.child_number,
  t.phv2 as planid
from
  v$sql_plan p,
  xmltable('for $i in /other_xml/info
            where $i/@type eq "plan_hash_2"
            return $i'
           passing xmltype(p.other_xml)
           columns phv2 number path '/') t
where p.sql_id = 'cfkmj5v408ssr'
and p.other_xml is not null;

set serveroutput on
declare
  l_plans_loaded PLS_INTEGER;
begin
  l_plans_loaded := dbms_spm.load_plans_from_cursor_cache(
    sql_id => 'cfkmj5v408ssr');

  dbms_output.put_line('Plans Loaded: ' || l_plans_loaded);
end;
/
set serveroutput off

select sql_handle, plan_name, enabled, accepted, fixed
from dba_sql_plan_baselines
where sql_text like '%SPM TEST%'
and sql_text not like '%dba_sql_plan_baselines%';
-- -----------------------------------------------------------------------------
drop index emp_job_ix;
-- -----------------------------------------------------------------------------
select /* SPM TEST */
  employee_id,
  job_id,
  manager_id,
  department_id
from employees
where job_id=:job_id
and manager_id=:manager_id
and department_id=:department_id;
/
select * from table(dbms_xplan.display_cursor);

select
  p.sql_id,
  p.plan_hash_value,
  p.child_number,
  t.phv2 as planid
from
  v$sql_plan p,
  xmltable('for $i in /other_xml/info
            where $i/@type eq "plan_hash_2"
            return $i'
           passing xmltype(p.other_xml)
           columns phv2 number path '/') t
where p.sql_id = 'cfkmj5v408ssr'
and p.other_xml is not null;

select sql_handle, plan_name, enabled, accepted, fixed
from dba_sql_plan_baselines
where sql_text like '%SPM TEST%'
and sql_text not like '%dba_sql_plan_baselines%';
-- -----------------------------------------------------------------------------
drop index emp_department_ix;
-- -----------------------------------------------------------------------------
select /* SPM TEST */
  employee_id,
  job_id,
  manager_id,
  department_id
from employees
where job_id=:job_id
and manager_id=:manager_id
and department_id=:department_id;
/
select * from table(dbms_xplan.display_cursor);

select
  p.sql_id,
  p.plan_hash_value,
  p.child_number,
  t.phv2 as planid
from
  v$sql_plan p,
  xmltable('for $i in /other_xml/info
            where $i/@type eq "plan_hash_2"
            return $i'
           passing xmltype(p.other_xml)
           columns phv2 number path '/') t
where p.sql_id = 'cfkmj5v408ssr'
and p.other_xml is not null;

select sql_handle, plan_name, enabled, accepted, fixed
from dba_sql_plan_baselines
where sql_text like '%SPM TEST%'
and sql_text not like '%dba_sql_plan_baselines%';
-- -----------------------------------------------------------------------------
declare
  l_report clob;
begin
  l_report := dbms_spm.EVOLVE_SQL_PLAN_BASELINE(
    sql_handle => 'SQL_528708e78dcc00fd',
    verify => 'NO'
  );
end;
/

set serveroutput on
declare
  l_plans_altered PLS_INTEGER;
begin
  l_plans_altered := dbms_spm.alter_sql_plan_baseline(
    sql_handle => 'SQL_528708e78dcc00fd',
    plan_name => 'SQL_PLAN_551s8wy6ws07xa6e4155c',
    attribute_name => 'fixed',
    attribute_value => 'YES');

  dbms_output.put_line('Plans Altered: ' || l_plans_altered);
end;
/
declare
  l_plans_altered PLS_INTEGER;
begin
  l_plans_altered := dbms_spm.alter_sql_plan_baseline(
    sql_handle => 'SQL_528708e78dcc00fd',
    plan_name => 'SQL_PLAN_551s8wy6ws07xfa0c27e4',
    attribute_name => 'fixed',
    attribute_value => 'YES');

  dbms_output.put_line('Plans Altered: ' || l_plans_altered);
end;
/
set serveroutput off

select sql_handle, plan_name, enabled, accepted, fixed
from dba_sql_plan_baselines
where sql_text like '%SPM TEST%'
and sql_text not like '%dba_sql_plan_baselines%';
-- -----------------------------------------------------------------------------
drop index emp_manager_ix;
-- -----------------------------------------------------------------------------
alter session set tracefile_identifier='SPM_TEST';
alter session set max_dump_file_size=unlimited;
alter session set events 'trace[RDBMS.SQL_Optimizer.*][sql:cfkmj5v408ssr]';

select /* SPM TEST */
  employee_id,
  job_id,
  manager_id,
  department_id
from employees
where job_id=:job_id
and manager_id=:manager_id
and department_id=:department_id;
/
select * from table(dbms_xplan.display_cursor);

select
  p.sql_id,
  p.plan_hash_value,
  p.child_number,
  t.phv2 as planid
from
  v$sql_plan p,
  xmltable('for $i in /other_xml/info
            where $i/@type eq "plan_hash_2"
            return $i'
           passing xmltype(p.other_xml)
           columns phv2 number path '/') t
where p.sql_id = 'cfkmj5v408ssr'
and p.other_xml is not null;

alter session set events 'trace[RDBMS.SQL_Optimizer.*][sql:cfkmj5v408ssr] off';
-- -----------------------------------------------------------------------------
select sql_handle, plan_name, enabled, accepted
from dba_sql_plan_baselines
where sql_text like '%SPM TEST%'
and sql_text not like '%dba_sql_plan_baselines%';

Here is the output of the script:

SQL> connect hr/hr
Connected.
SQL> -- -----------------------------------------------------------------------------
SQL> set linesize 200
SQL> set tab off
SQL> set pagesize 1000
SQL> set echo on
SQL> column sql_handle format a40
SQL> column plan_name format a40
SQL> -- -----------------------------------------------------------------------------
SQL> variable job_id varchar2(10);
SQL> variable manager_id number;
SQL> variable department_id number;
SQL>
SQL> exec :job_id := 'AD_VP';

PL/SQL procedure successfully completed.

SQL> exec :manager_id := 100;

PL/SQL procedure successfully completed.

SQL> exec :department_id := 90;

PL/SQL procedure successfully completed.
SQL> -- ----------------------------------------------------------------------------- SQL> select /* SPM TEST */ 2 employee_id, 3 job_id, 4 manager_id, 5 department_id 6 from employees 7 where job_id=:job_id 8 and manager_id=:manager_id 9 and department_id=:department_id; EMPLOYEE_ID JOB_ID MANAGER_ID DEPARTMENT_ID ----------- ---------- ---------- ------------- 101 AD_VP 100 90 102 AD_VP 100 90 SQL> SQL> select * from table(dbms_xplan.display_cursor(null, null)); PLAN_TABLE_OUTPUT -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- SQL_ID cfkmj5v408ssr, child number 0 ------------------------------------- select /* SPM TEST */ employee_id, job_id, manager_id, department_id from employees where job_id=:job_id and manager_id=:manager_id and department_id=:department_id Plan hash value: 2096651594 -------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | -------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | | | 2 (100)| | |* 1 | TABLE ACCESS BY INDEX ROWID BATCHED| EMPLOYEES | 1 | 20 | 2 (0)| 00:00:01 | |* 2 | INDEX RANGE SCAN | EMP_JOB_IX | 2 | | 1 (0)| 00:00:01 | -------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(("DEPARTMENT_ID"=:DEPARTMENT_ID AND "MANAGER_ID"=:MANAGER_ID)) 2 - access("JOB_ID"=:JOB_ID) Note ----- - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold 26 rows selected. SQL> SQL> select 2 p.sql_id, 3 p.plan_hash_value, 4 p.child_number, 5 t.phv2 as planid 6 from 7 v$sql_plan p, 8 xmltable('for $i in /other_xml/info 9 where $i/@type eq "plan_hash_2" 10 return $i' 11 passing xmltype(p.other_xml) 12 columns phv2 number path '/') t 13 where p.sql_id = 'cfkmj5v408ssr' 14 and p.other_xml is not null; SQL_ID PLAN_HASH_VALUE CHILD_NUMBER PLANID ------------- --------------- ------------ ---------- cfkmj5v408ssr 2096651594 0 3830632964 SQL> SQL> set serveroutput on SQL> declare 2 l_plans_loaded PLS_INTEGER; 3 begin 4 l_plans_loaded := dbms_spm.load_plans_from_cursor_cache( 5 sql_id => 'cfkmj5v408ssr'); 6 7 dbms_output.put_line('Plans Loaded: ' || l_plans_loaded); 8 end; 9 / Plans Loaded: 1 PL/SQL procedure successfully completed. SQL> set serveroutput off SQL> SQL> select sql_handle, plan_name, enabled, accepted, fixed 2 from dba_sql_plan_baselines 3 where sql_text like '%SPM TEST%' 4 and sql_text not like '%dba_sql_plan_baselines%'; SQL_HANDLE PLAN_NAME ENA ACC FIX ---------------------------------------- ---------------------------------------- --- --- --- SQL_528708e78dcc00fd SQL_PLAN_551s8wy6ws07xe452d204 YES YES NO SQL> -- ----------------------------------------------------------------------------- SQL> drop index emp_job_ix; Index dropped. 
SQL> -- ----------------------------------------------------------------------------- SQL> select /* SPM TEST */ 2 employee_id, 3 job_id, 4 manager_id, 5 department_id 6 from employees 7 where job_id=:job_id 8 and manager_id=:manager_id 9 and department_id=:department_id; EMPLOYEE_ID JOB_ID MANAGER_ID DEPARTMENT_ID ----------- ---------- ---------- ------------- 101 AD_VP 100 90 102 AD_VP 100 90 SQL> SQL> / EMPLOYEE_ID JOB_ID MANAGER_ID DEPARTMENT_ID ----------- ---------- ---------- ------------- 101 AD_VP 100 90 102 AD_VP 100 90 SQL> SQL> select * from table(dbms_xplan.display_cursor); PLAN_TABLE_OUTPUT -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- SQL_ID cfkmj5v408ssr, child number 0 ------------------------------------- select /* SPM TEST */ employee_id, job_id, manager_id, department_id from employees where job_id=:job_id and manager_id=:manager_id and department_id=:department_id Plan hash value: 235881476 --------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | --------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | | | 2 (100)| | |* 1 | TABLE ACCESS BY INDEX ROWID BATCHED| EMPLOYEES | 1 | 20 | 2 (0)| 00:00:01 | |* 2 | INDEX RANGE SCAN | EMP_DEPARTMENT_IX | 3 | | 1 (0)| 00:00:01 | --------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(("JOB_ID"=:JOB_ID AND "MANAGER_ID"=:MANAGER_ID)) 2 - access("DEPARTMENT_ID"=:DEPARTMENT_ID) Note ----- - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold 26 rows selected. SQL> SQL> select 2 p.sql_id, 3 p.plan_hash_value, 4 p.child_number, 5 t.phv2 as planid 6 from 7 v$sql_plan p, 8 xmltable('for $i in /other_xml/info 9 where $i/@type eq "plan_hash_2" 10 return $i' 11 passing xmltype(p.other_xml) 12 columns phv2 number path '/') t 13 where p.sql_id = 'cfkmj5v408ssr' 14 and p.other_xml is not null; SQL_ID PLAN_HASH_VALUE CHILD_NUMBER PLANID ------------- --------------- ------------ ---------- cfkmj5v408ssr 235881476 0 2799965532 SQL> SQL> select sql_handle, plan_name, enabled, accepted, fixed 2 from dba_sql_plan_baselines 3 where sql_text like '%SPM TEST%' 4 and sql_text not like '%dba_sql_plan_baselines%'; SQL_HANDLE PLAN_NAME ENA ACC FIX ---------------------------------------- ---------------------------------------- --- --- --- SQL_528708e78dcc00fd SQL_PLAN_551s8wy6ws07xa6e4155c YES NO NO SQL_528708e78dcc00fd SQL_PLAN_551s8wy6ws07xe452d204 YES YES NO SQL> -- ----------------------------------------------------------------------------- SQL> drop index emp_department_ix; Index dropped. 
SQL> -- ----------------------------------------------------------------------------- SQL> select /* SPM TEST */ 2 employee_id, 3 job_id, 4 manager_id, 5 department_id 6 from employees 7 where job_id=:job_id 8 and manager_id=:manager_id 9 and department_id=:department_id; EMPLOYEE_ID JOB_ID MANAGER_ID DEPARTMENT_ID ----------- ---------- ---------- ------------- 101 AD_VP 100 90 102 AD_VP 100 90 SQL> SQL> / EMPLOYEE_ID JOB_ID MANAGER_ID DEPARTMENT_ID ----------- ---------- ---------- ------------- 101 AD_VP 100 90 102 AD_VP 100 90 SQL> SQL> select * from table(dbms_xplan.display_cursor); PLAN_TABLE_OUTPUT -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- SQL_ID cfkmj5v408ssr, child number 0 ------------------------------------- select /* SPM TEST */ employee_id, job_id, manager_id, department_id from employees where job_id=:job_id and manager_id=:manager_id and department_id=:department_id Plan hash value: 621391157 ------------------------------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | | | 3 (100)| | |* 1 | TABLE ACCESS BY INDEX ROWID BATCHED| EMPLOYEES | 1 | 20 | 3 (0)| 00:00:01 | |* 2 | INDEX RANGE SCAN | EMP_MANAGER_IX | 14 | | 1 (0)| 00:00:01 | ------------------------------------------------------------------------------------------------------ Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(("JOB_ID"=:JOB_ID AND "DEPARTMENT_ID"=:DEPARTMENT_ID)) 2 - access("MANAGER_ID"=:MANAGER_ID) Note ----- - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold 26 rows selected. SQL> SQL> select 2 p.sql_id, 3 p.plan_hash_value, 4 p.child_number, 5 t.phv2 as planid 6 from 7 v$sql_plan p, 8 xmltable('for $i in /other_xml/info 9 where $i/@type eq "plan_hash_2" 10 return $i' 11 passing xmltype(p.other_xml) 12 columns phv2 number path '/') t 13 where p.sql_id = 'cfkmj5v408ssr' 14 and p.other_xml is not null; SQL_ID PLAN_HASH_VALUE CHILD_NUMBER PLANID ------------- --------------- ------------ ---------- cfkmj5v408ssr 621391157 0 4195100644 SQL> SQL> select sql_handle, plan_name, enabled, accepted, fixed 2 from dba_sql_plan_baselines 3 where sql_text like '%SPM TEST%' 4 and sql_text not like '%dba_sql_plan_baselines%'; SQL_HANDLE PLAN_NAME ENA ACC FIX ---------------------------------------- ---------------------------------------- --- --- --- SQL_528708e78dcc00fd SQL_PLAN_551s8wy6ws07xa6e4155c YES NO NO SQL_528708e78dcc00fd SQL_PLAN_551s8wy6ws07xe452d204 YES YES NO SQL_528708e78dcc00fd SQL_PLAN_551s8wy6ws07xfa0c27e4 YES NO NO SQL> -- ----------------------------------------------------------------------------- SQL> declare 2 l_report clob; 3 begin 4 l_report := dbms_spm.EVOLVE_SQL_PLAN_BASELINE( 5 sql_handle => 'SQL_528708e78dcc00fd', 6 verify => 'NO' 7 ); 8 end; 9 / PL/SQL procedure successfully completed. 
SQL> SQL> set serveroutput on SQL> declare 2 l_plans_altered PLS_INTEGER; 3 begin 4 l_plans_altered := dbms_spm.alter_sql_plan_baseline( 5 sql_handle => 'SQL_528708e78dcc00fd', 6 plan_name => 'SQL_PLAN_551s8wy6ws07xa6e4155c', 7 attribute_name => 'fixed', 8 attribute_value => 'YES'); 9 10 dbms_output.put_line('Plans Altered: ' || l_plans_altered); 11 end; 12 / Plans Altered: 1 PL/SQL procedure successfully completed. SQL> declare 2 l_plans_altered PLS_INTEGER; 3 begin 4 l_plans_altered := dbms_spm.alter_sql_plan_baseline( 5 sql_handle => 'SQL_528708e78dcc00fd', 6 plan_name => 'SQL_PLAN_551s8wy6ws07xfa0c27e4', 7 attribute_name => 'fixed', 8 attribute_value => 'YES'); 9 10 dbms_output.put_line('Plans Altered: ' || l_plans_altered); 11 end; 12 / Plans Altered: 1 PL/SQL procedure successfully completed. SQL> set serveroutput off SQL> SQL> select sql_handle, plan_name, enabled, accepted, fixed 2 from dba_sql_plan_baselines 3 where sql_text like '%SPM TEST%' 4 and sql_text not like '%dba_sql_plan_baselines%'; SQL_HANDLE PLAN_NAME ENA ACC FIX ---------------------------------------- ---------------------------------------- --- --- --- SQL_528708e78dcc00fd SQL_PLAN_551s8wy6ws07xa6e4155c YES YES YES SQL_528708e78dcc00fd SQL_PLAN_551s8wy6ws07xe452d204 YES YES NO SQL_528708e78dcc00fd SQL_PLAN_551s8wy6ws07xfa0c27e4 YES YES YES SQL> -- ----------------------------------------------------------------------------- SQL> drop index emp_manager_ix; Index dropped. SQL> -- ----------------------------------------------------------------------------- SQL> alter session set tracefile_identifier='SPM_TEST'; Session altered. SQL> alter session set max_dump_file_size=unlimited; Session altered. SQL> alter session set events 'trace[RDBMS.SQL_Optimizer.*][sql:cfkmj5v408ssr]'; Session altered. SQL> SQL> select /* SPM TEST */ 2 employee_id, 3 job_id, 4 manager_id, 5 department_id 6 from employees 7 where job_id=:job_id 8 and manager_id=:manager_id 9 and department_id=:department_id; EMPLOYEE_ID JOB_ID MANAGER_ID DEPARTMENT_ID ----------- ---------- ---------- ------------- 101 AD_VP 100 90 102 AD_VP 100 90 SQL> SQL> / EMPLOYEE_ID JOB_ID MANAGER_ID DEPARTMENT_ID ----------- ---------- ---------- ------------- 101 AD_VP 100 90 102 AD_VP 100 90 SQL> SQL> select * from table(dbms_xplan.display_cursor); PLAN_TABLE_OUTPUT -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- SQL_ID cfkmj5v408ssr, child number 0 ------------------------------------- select /* SPM TEST */ employee_id, job_id, manager_id, department_id from employees where job_id=:job_id and manager_id=:manager_id and department_id=:department_id Plan hash value: 1445457117 ------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | | | 4 (100)| | |* 1 | TABLE ACCESS FULL| EMPLOYEES | 1 | 20 | 4 (0)| 00:00:01 | ------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(("JOB_ID"=:JOB_ID AND "DEPARTMENT_ID"=:DEPARTMENT_ID AND "MANAGER_ID"=:MANAGER_ID)) Note ----- - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold 25 rows selected. 
SQL> SQL> select 2 p.sql_id, 3 p.plan_hash_value, 4 p.child_number, 5 t.phv2 as planid 6 from 7 v$sql_plan p, 8 xmltable('for $i in /other_xml/info 9 where $i/@type eq "plan_hash_2" 10 return $i' 11 passing xmltype(p.other_xml) 12 columns phv2 number path '/') t 13 where p.sql_id = 'cfkmj5v408ssr' 14 and p.other_xml is not null; SQL_ID PLAN_HASH_VALUE CHILD_NUMBER PLANID ------------- --------------- ------------ ---------- cfkmj5v408ssr 1445457117 0 3476115102 SQL> SQL> alter session set events 'trace[RDBMS.SQL_Optimizer.*][sql:cfkmj5v408ssr] off'; Session altered. SQL> -- ----------------------------------------------------------------------------- SQL> select sql_handle, plan_name, enabled, accepted 2 from dba_sql_plan_baselines 3 where sql_text like '%SPM TEST%' 4 and sql_text not like '%dba_sql_plan_baselines%'; SQL_HANDLE PLAN_NAME ENA ACC ---------------------------------------- ---------------------------------------- --- --- SQL_528708e78dcc00fd SQL_PLAN_551s8wy6ws07xa6e4155c YES YES SQL_528708e78dcc00fd SQL_PLAN_551s8wy6ws07xe452d204 YES YES SQL_528708e78dcc00fd SQL_PLAN_551s8wy6ws07xfa0c27e4 YES YES SQL> -- ----------------------------------------------------------------------------- SQL> exit Here the relevant lines from the trace file generated in the last part of the script: SPM: statement found in SMB SPM: finding a match for the generated plan, planId = 3476115102 SPM: fixed planId's of plan baseline are: 2799965532 4195100644 SPM: using qksan to reproduce, cost and select accepted plan, sig = 5946731623575453949 SPM: plan reproducibility round 1 (plan outline + session OFE) SPM: using qksan to reproduce accepted plan, planId = 2799965532 SPM: planId in plan baseline = 2799965532, planId of reproduced plan = 3476115102 SPM: failed to reproduce the plan using the following info: SPM: generated non-matching plan: SPM: plan reproducibility round 2 (hinted OFE only) SPM: using qksan to reproduce accepted plan, planId = 2799965532 SPM: planId in plan baseline = 2799965532, planId of reproduced plan = 3476115102 SPM: failed to reproduce the plan using the following info: SPM: generated non-matching plan: SPM: change REPRODUCED status to NO, planName = SQL_PLAN_551s8wy6ws07xa6e4155c SPM: plan reproducibility round 1 (plan outline + session OFE) SPM: using qksan to reproduce accepted plan, planId = 4195100644 SPM: planId in plan baseline = 4195100644, planId of reproduced plan = 3476115102 SPM: failed to reproduce the plan using the following info: SPM: generated non-matching plan: SPM: plan reproducibility round 2 (hinted OFE only) SPM: using qksan to reproduce accepted plan, planId = 4195100644 SPM: planId in plan baseline = 4195100644, planId of reproduced plan = 3476115102 SPM: failed to reproduce the plan using the following info: SPM: generated non-matching plan: SPM: change REPRODUCED status to NO, planName = SQL_PLAN_551s8wy6ws07xfa0c27e4 SPM: REPRODUCED status changes: cntRepro = 2, bitvecRepro = 000 SPM: planId's of plan baseline are: 3830632964 SPM: using qksan to reproduce, cost and select accepted plan, sig = 5946731623575453949 SPM: plan reproducibility round 1 (plan outline + session OFE) SPM: using qksan to reproduce accepted plan, planId = 3830632964 SPM: planId in plan baseline = 3830632964, planId of reproduced plan = 3476115102 SPM: failed to reproduce the plan using the following info: SPM: generated non-matching plan: SPM: plan reproducibility round 2 (hinted OFE only) SPM: using qksan to reproduce accepted plan, planId = 3830632964 SPM: planId in plan 
baseline = 3830632964, planId of reproduced plan = 3476115102 SPM: failed to reproduce the plan using the following info: SPM: generated non-matching plan: SPM: change REPRODUCED status to NO, planName = SQL_PLAN_551s8wy6ws07xe452d204 SPM: REPRODUCED status changes: cntRepro = 3, bitvecRepro = 000 SPM: couldn't reproduce any enabled+accepted plan so using the cost-based plan, planId = 3476115102 SPM: kkopmCheckSmbUpdate (enter) xscP=0x7f1341dc49f8, pmExCtx=0x208bec820, ciP=0x276b5e068, dtCtx=0x7f1342c14f40, sig=5946731623575453949, planId=3476115102 SPM: kkopmCheckSmbUpdate (enter) xscP=0x7f1341dc49f8, pmExCtx=0x208bec820, ciP=0x276b5e068, dtCtx=0x7f1342c14f40, sig=5946731623575453949, planId=3476115102
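A simple way to confirm from the cursor cache whether an execution actually used a baseline plan, without enabling the optimizer trace, is to look at the SQL_PLAN_BASELINE column of V$SQL. A minimal sketch, reusing the sql_id from this article and wrapping the query in SQL*Plus so it can be run from the shell:

# The column shows the name of the baseline plan used to build the cursor;
# it is NULL when the cursor was compiled without a baseline plan.
sqlplus -s hr/hr <<'EOF'
column sql_plan_baseline format a35
select sql_id, child_number, plan_hash_value, sql_plan_baseline
from   v$sql
where  sql_id = 'cfkmj5v408ssr';
EOF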

Blog Post: Major and Minor keys in Oracle NoSQL Database

Oracle NoSQL Database uses Major and Minor key values to achieve user-controllable record co-location. Records are stored based on the hash of the major key, so all of the records with the same major key will be co-located on the same server. A JSON document could be divided into sub-parts, with each sub-part having a different minor key. For example, customer meta data could be stored in the main customer record, transactions in a different record, recommendations in a third, friends and associates in yet another, etc. Atomic multi-record operations are supported as long as all of the records share the same major key. For example, “Get all of the sales records for UserID 123” (where UserID is the Major Key) would return a transactionally consistent data set. You could also write multiple records and get transactional behavior. For example, “Write the following 6 records for UserID 123” or “Delete record type X, add record type Y, and update record type Z for UserID 123” could be done as an atomic transaction. No other NoSQL database vendor offers a similar feature. So, based on the examples in the September 2011 white paper that announced Oracle NoSQL Database, I believe that the Oracle NoSQL Database developers confused the “value” in “key-value” with the value in the infamous “Entity-Attribute-Value” model: “Applications can take advantage of subkey capabilities to achieve data locality. A key is the concatenation of a Major Key Path and a Minor Key Path, both of which are specified by the application. All records sharing a Major Key Path are co-located to achieve data locality. Within a co-located collection of Major Key Paths, the full key, comprised of both the Major and Minor Key Paths, provides fast, indexed lookup. For example, an application storing user profiles might use the profile-name as a Major Key Path and then have several Minor Key Paths for different components of that profile such as email address, name, phone number, etc. [emphasis added]” The disadvantage of Major/Minor keys is that, if you have a highly variable number of minor keys per major key, you can end up with an uneven record distribution. For example, a Major/Minor key of UserID/MessageID will potentially put all the records for a highly active user into a single partition. P.S. Oracle NoSQL Database has started using the terms “Primary Key” and “Sharding Key.” The sharding key is what Oracle NoSQL Database previously called the major key, while the primary key is the combination of the major and minor keys. Refer to http://docs.oracle.com/cd/NOSQL/html/GettingStartedGuideTables/primaryshardkeys.html .

Wiki Page: Creating Oracle Database Cloud

In an earlier article , I showed the steps to prepare for using the Oracle Database Cloud and Oracle Java Cloud. There are four steps:
- Creating an SSH key pair
- Creating Oracle Storage
- Creating Oracle Database Cloud
- Creating Oracle Java Cloud
The previous article described the first two necessary preparation steps. This article will show how to go about actually creating an Oracle Database Cloud instance. I'll get back to creating the Java Cloud instance in a later article. I've already written about how to deploy an ADF application to an existing Java Cloud instance in this article .

Invoking the Create Wizard

To create your database instance, you use the Create Database Cloud Service Instance wizard. Go to cloud.oracle.com and click Sign In. Select your data center and click My Service. Enter your identity domain and then log on with your cloud username and password. Find the database service on the service dashboard and click Service Console. Click Services and then the Create Service button to start the wizard. It is pretty self-explanatory, except for the following:
- Set Service Level to Oracle Database Cloud Service (not “Virtual Image”). This makes it easier to create a database and provides some extra tooling.
- You might have a choice of database versions. At the time of writing (May 2016), the options were 11.2.0.4 and 12.1.0.2.
- Choose the relevant software edition. You will have to read the documentation to find out exactly which combinations of the normal Enterprise Edition database options hide behind the various EE choices (“high” and “extreme”).
- For Cloud Storage Container, use the container you created during preparation.
When you are done with the wizard, the create process starts and runs for a while (often around 30 minutes).

Connecting to the Database

Once your database is created, you can click on the instance to get the connect information. For security reasons, the Oracle cloud sets up a number of access rules but does not automatically enable access. You need to activate these rules before you can connect to the database from your development workstation. To do this, click on the menu icon next to your database name and choose Access Rules. You will see the list of rules that have been created automatically for you. To access the database from SQL Developer through SQL*Net, you need to enable the ora_p2_db_listener rule (covering the default listener port, 1521). If you want to use the DBaaS monitor, you also need the ora_p2_httpssl rule (port 443), and if you want to use APEX applications, you need the ora_p2_http rule (port 80). For each rule, click the menu icon to the right and choose Enable. If you chose to create an 11g database, the IP number, port and SID are all you need to connect to your database. However, if you chose a 12c database, connecting to the SID will connect you to the container database, not the pluggable database (PDB) where your actual data will reside. To connect to the PDB, you need to know the service name. You find this in the DBaaS console that you can open from the menu icon for your cloud service. You will have to log in with the user dbaas_monitor and the password you provided when you created the database instance. In the DBaaS Monitor, click on the Database Status to see your PDBs. From the action menu icon in the top right corner of your PDB instance, you can choose Connection Details to see the service name that you need to use to connect to your PDB from SQL Developer or another tool.
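For example, once the listener access rule is enabled and you know the PDB service name, a plain SQL*Plus EZConnect string from your workstation is enough to verify connectivity; the IP address, password and service name below are placeholders:

# Connect to the pluggable database using the public IP of the cloud instance
# and the PDB service name shown in the DBaaS Monitor.
sqlplus system/<your-password>@//<public-ip>:1521/<pdb-service-name>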
With this information, you can create a connection from a development tool on your local machine to your Oracle Cloud database instance.

Working with Application Express

Of course, your Cloud database instance comes with Application Express (APEX) like every other Oracle database. Because you have enabled port 443, you can now access APEX by simply entering the IP number of your instance in a browser. Note that your browser is going to give you a security warning. This is because Oracle has only created a self-signed certificate for your cloud instance. Depending on your browser, you will have to create a security exception where you explicitly trust this self-signed certificate to avoid annoying security prompts. From your database main page, you can access the Database Monitor and Application Express. When you click on the Application Express icon, you get the normal APEX signin screen. You log in the first time with workspace internal, user admin, and the password you provided when you created the database. From the APEX Instance Administration screen, you can click Create Workspace to create a new workspace for application development.

Moving Data to the Cloud

If you don't want to use the normal APEX features to create tables and import data, you can use a Cloud Connection from an Oracle development tool like SQL Developer or JDeveloper. To use this feature, you need to know your SFTP username and password in addition to your connection information. You can find the SFTP users on the Users tab of your cloud overview page. You need to choose the action menu for one of these users and select Reset Password / Unlock Account. When you have provided a password, the account is unlocked. When you have unlocked an SFTP account, you can create a Cloud connection from your chosen development tool and then use the neat "Database Cart" feature to move your database objects from your local database to your cloud instance. Open the Database Cart from the Window menu and then drag and drop your database objects onto the cart. You will notice that each object has a DDL column that you can check to generate the DDL statement that creates the object, and tables also have a Data column. If you check the checkbox in the Data column, JDeveloper or SQL Developer will include insert statements to re-create your data in the cloud. When you have everything you need in the cart, you click the cloud button to create and run a batch job that moves everything to the cloud.