Introduction

In my previous article we reviewed the Oracle 10g Flashback features. We started with the Flashback Query enhancements (Flashback Version Query, Flashback Transaction Query, Flashback Table) and continued with additional flashback enhancements (Flashback Drop and Flashback Database). In this part, we will review the Oracle 11g and 12c Flashback features.

Oracle 11gR1 - Flashback Transaction

In the previous part, in the "Flashback Version Query" section, we demonstrated how to view the historical changes of records in a table and also how to find the associated transaction ID (XID) for each version of a record in the table. In the "Flashback Transaction Query" section we demonstrated how to view the undo statement for a selected transaction. Starting with Oracle 11g, the Flashback Transaction feature takes these two features one step further by rolling back the changes made by a transaction and, optionally, by its dependent transactions.

In the following example, a sample table is created. Afterwards, a first transaction inserts a single record and a second transaction updates that record. In the last step of this demo we will perform a rollback of the first transaction:

SQL> CREATE TABLE DEPARTMENTS ( DEPT_ID NUMBER, Name VARCHAR2 (20), CONSTRAINT id_pk PRIMARY KEY (DEPT_ID) );
Table created.
SQL> insert into DEPARTMENTS values (1, 'SALES');
1 row created.
SQL> commit;
Commit complete.
SQL> update DEPARTMENTS set name = 'HR' where dept_id = 1;
1 row updated.
SQL> commit;
Commit complete.
SQL> select * from DEPARTMENTS;
DEPT_ID NAME ---------- -------------------- 1 HR
SQL> SELECT versions_starttime, versions_endtime, versions_xid, versions_operation, dept_id, name FROM DEPARTMENTS VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE;
VERSIONS_STARTTIME VERSIONS_ENDTIME VERSIONS_XID V DEPT_ID NAME ------------------------- ------------------------- ---------------- - ---------- ------ 28-JAN-16 06.18.10 PM 28-JAN-16 06.19.01 PM 000A001A0000BF39 I 1 SALES 28-JAN-16 06.19.01 PM 000800110000648B U 1 HR
SQL> EXECUTE DBMS_FLASHBACK.transaction_backout(numtxns=>1, xids=>xid_array('000A001A0000BF39'));
ERROR at line 1: ORA-55504: Transaction conflicts in NOCASCADE mode ORA-06512: at "SYS.DBMS_FLASHBACK", line 37 ORA-06512: at "SYS.DBMS_FLASHBACK", line 70 ORA-06512: at line 2

The "ORA-55504: Transaction conflicts in NOCASCADE mode" error is raised because the DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure has a parameter named "options" which defaults to NOCASCADE, meaning that if a dependent transaction is found, Oracle will raise an error. In our demonstration, the second transaction that updates the row depends on the first transaction that inserts the row; therefore, Oracle raises an error. We can tell Oracle to roll back the dependent transactions as well by using the CASCADE value for the "options" parameter, as follows:

SQL> BEGIN DBMS_FLASHBACK.transaction_backout (numtxns=> 1, xids => xid_array('000A001A0000BF39'), options => DBMS_FLASHBACK.cascade); END;
PL/SQL procedure successfully completed.
SQL> select * from DEPARTMENTS;
no rows selected

Note that supplemental logging for primary key columns must be enabled in order to use Flashback Transaction:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
Database altered.
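If you are not sure whether minimal and primary-key supplemental logging are already enabled before attempting a transaction backout, a minimal sketch of the check (V$DATABASE exposes these flags) is:

SQL> SELECT supplemental_log_data_min, supplemental_log_data_pk FROM v$database;

Both columns should report YES (the minimal column may also report IMPLICIT) before DBMS_FLASHBACK.TRANSACTION_BACKOUT is used.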
Oracle 11gR1 - Flashback Data Archive

As mentioned, most of the Flashback features (including Flashback Version Query, Flashback Transaction Query, Flashback Table and Flashback Transaction) rely on the undo information that resides in the undo tablespace. If the information is no longer available in the undo tablespace, the flashback operation will fail. Whether the information is available or not depends on several factors, including:
Size of the undo tablespace
UNDO_RETENTION parameter
Auto extend property
Undo Guarantee property

In order to allow extended and even unlimited historical undo information for flashback purposes, Oracle introduced in version 11gR1 a new feature named Flashback Data Archive (also known as Total Recall). Flashback Data Archive is a repository for storing undo records and it is transparent to the application, i.e. once configured, the usage of the Oracle Flashback features remains the same in terms of syntax.

In order to configure this feature, an object named a Flashback Archive must be created. A Flashback Archive has the following properties:
Tablespace – where the information will be stored
Quota – the maximum amount of storage that can be allocated in the tablespace
Retention – the amount of time that information will be kept in the tablespace

Oracle introduced a new background process named "fbda" that takes care of all the Flashback Data Archive related tasks, such as writing the information into the Flashback Archive and purging it once the information is older than the retention period.

In the following example, a new default Flashback Archive object named "my_fda" with a 25GB quota on a tablespace named "my_fda_ts" and a retention period of 5 years is created:

SQL> CREATE TABLESPACE my_fda_ts DATAFILE SIZE 100M AUTOEXTEND ON NEXT 10M;
TABLESPACE created.
SQL> CREATE FLASHBACK ARCHIVE DEFAULT my_fda TABLESPACE my_fda_ts QUOTA 25G RETENTION 5 YEAR;
FLASHBACK archive created.

When enabling Flashback Archive for a table, unless a Flashback Archive name is explicitly specified, the default Flashback Archive will be used (in our case, "my_fda"). There can be only one default Flashback Archive. Let us create an additional non-default Flashback Archive with a 5GB quota and a retention period of 1 year:

SQL> CREATE FLASHBACK ARCHIVE one_year_fda TABLESPACE my_fda_ts QUOTA 5G RETENTION 1 YEAR;
Flashback archive created.

The retention information for the Flashback Archives is available in the DBA_FLASHBACK_ARCHIVE dictionary view:

SQL> SELECT flashback_archive_name, retention_in_days, status FROM DBA_FLASHBACK_ARCHIVE;
FLASHBACK_ARCHIVE_NAME RETENTION_IN_DAYS STATUS ------------------------ ----------------- ------- MY_FDA 1825 DEFAULT ONE_YEAR_FDA 365

The information about the quota of each Flashback Archive and the associated tablespace is available in the DBA_FLASHBACK_ARCHIVE_TS dictionary view:

SQL> SELECT flashback_archive_name, tablespace_name, quota_in_mb/1024 QUOTA_GB FROM DBA_FLASHBACK_ARCHIVE_TS;
FLASHBACK_ARCHIVE_NAME TABLESPACE_NAME QUOTA_GB ------------------------- ------------------------------ ---------- MY_FDA MY_FDA_TS 25 ONE_YEAR_FDA MY_FDA_TS 5

Once the Flashback Archive has been configured, the user can easily enable and disable this feature for the desired tables, as follows:

SQL> alter table departments flashback archive;
Table altered.
SQL> alter table departments no flashback archive;
Table altered.
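Because the feature is transparent to applications, historical rows in an FDA-enabled table are queried with the same Flashback Query syntax covered earlier in this series. A minimal sketch (the two-year interval is only illustrative and assumes the table has been tracked by the archive long enough to cover the requested point in time):

SQL> SELECT dept_id, name FROM departments AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '2' YEAR);

As long as the requested timestamp falls within the archive's retention period, the query can be satisfied from the Flashback Data Archive even after the corresponding undo has been aged out of the undo tablespace.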
To enable FDA for a table using a non-default Flashback Archive, the Flashback Archive must be explicitly named, as follows:

SQL> alter table departments flashback archive ONE_YEAR_FDA;
Table altered.

The DBA_FLASHBACK_ARCHIVE_TABLES dictionary view displays all of the tables that have a Flashback Archive configured:

SQL> SELECT table_name,owner_name,flashback_archive_name FROM DBA_FLASHBACK_ARCHIVE_TABLES;
TABLE_NAME OWNER_NAME FLASHBACK_ARCHIVE_NAME ----------- ----------- ------------------------- DEPARTMENTS SALES ONE_YEAR_FDA

Oracle 11gR2 Flashback enhancements

In Oracle 11gR2 the following enhancements were introduced:

Flashback Database – Prior to 11gR2, the DBA had to restart the database into a MOUNTED state and only then enable the Flashback Database feature. Now, the DBA can enable Flashback Database while the instance is open, with no downtime (see the sketch at the end of this post).

Flashback Data Archive – Prior to Oracle 11gR2, several DDL commands on a table with Flashback Archive enabled raised an error: "ERROR at line 1 ORA-55610: Invalid DDL statement on history-tracked table". This includes the following DDL operations:
TRUNCATE TABLE
ALTER TABLE [ADD | DROP | RENAME | MODIFY] Column
DROP | TRUNCATE Partition
ALTER TABLE RENAME
ALTER TABLE [ADD | DROP | RENAME | MODIFY] Constraint
Starting with Oracle 11gR2, these operations are supported on tables with Flashback Archive enabled.

Oracle 12c Flashback enhancements

12.1.0.1 New Features: In Oracle version 12.1.0.1, new enhancements were introduced to the Flashback Data Archive feature, including:
The user context information for transactions is now tracked as well. This allows us to understand not only what the changes were but also who is responsible for those changes.
Support for tables that use the Hybrid Columnar Compression (HCC) feature.
Support for exporting and importing the Flashback Data Archive tables.

12.1.0.2 New Features: In version 12.1.0.1, the Flashback Data Archive feature was supported only in non-CDB environments. Starting with version 12.1.0.2, the Flashback Data Archive feature is supported in a Container Database (also known as the Multitenant Architecture) in addition to the non-CDB option.

Summary

In this article we have reviewed the Oracle 11g and 12c Flashback new features and enhancements. In the next part (the last one in this article series) we will review Oracle Flashback licensing. In addition, we will summarize everything and see which Oracle Flashback feature should be used for various human-error use cases.
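As a small addendum to the 11gR2 Flashback Database enhancement mentioned above, here is a minimal sketch of enabling the feature while the database stays open. It assumes the database is already in ARCHIVELOG mode and that a fast recovery area (db_recovery_file_dest and db_recovery_file_dest_size) is configured:

SQL> SELECT flashback_on FROM v$database;
SQL> ALTER DATABASE FLASHBACK ON;

On 11gR2 and later this ALTER DATABASE can be issued while the database is OPEN; on earlier releases it required the database to be in MOUNT state.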
Blog Post: Ensuring Data Protection Using Oracle Flashback Features - Part 4
Blog Post: Oracle SQL and XML
Here is some useful information from the IAOUG Roundtable. I got this brilliant idea last fall when several people wanted to share ideas/issues/etc. after I gave my OOW update ppt. I've had problems getting headcount up in this group…so…the brilliant idea is to name the event (the Iowa Oracle Roundtable)…and it is an online forum. Troy did a great job finding the site and setting it all up. He uses 'Campfire' software. He loads the email addresses in…part of my brilliant idea is you have to attend a meeting to participate in the online forum! New email address for Dan: DanHotka@GMail.com . Thank you for keeping in touch…

From the forum:

This is more of a development question than a DBA question. Do any of you parse XML docs using SQL or PL/SQL? I have a developer who's trying to read in an XML doc from a vendor and we're getting hung up on the xmlns parameter. We're attempting to use the XMLTYPE function, but the root XML tag has this xmlns parameter attached to it. If you delete that xmlns part and just keep the plain root tag, we can get our insert to work just fine. With it in, no rows are inserted. Any ideas? Not sure how much it'll help, but here's an example of our insert statement:

INSERT INTO test_xml(invoice_num) WITH t AS (SELECT xmltype(bfilename('EXPORT_DIR','test_file.xml'), nls_charset_id('WE8ISO8859P1')) xmlcol FROM dual) SELECT extractValue(value(x),'/invoice/invoice_number') invoice_num FROM t, TABLE(XMLSequence(extract(t.xmlcol,'/invoice'))) x;

First response: Is it possible to attach a sample file so we know the exact format of the file? That would at least give us something to work with and test against.

Here's a simplified example: PO-1 Rod Library

Here's the query I came up with. You have to call out that namespace from what I found, otherwise it won't work. Some of the XML functions you are using have been deprecated, so I moved on to XMLTABLE. Hopefully this works for you; I tested it in my environment on 12c and it did.

INSERT INTO test_xml(invoice_num) WITH t AS (SELECT xmltype(bfilename('EXPORT_DIR','test_file.xml'), nls_charset_id('WE8ISO8859P1')) xmlcol FROM dual) select x.* from t, xmltable(xmlnamespaces(default 'http://com/exlibris/repository/acq/invoice/xmlbeans'), '//invoice' passing t.xmlcol columns invoice_num varchar2(30) path 'invoice_number') x /

Thank you from the originator: Awesome! Thanks! Now I haven't been able to find anything on this, but I've heard there is a way to upload your XML file's "definition file", or whatever it's called…and upload that into the database somewhere. Then you don't have to explicitly set your columns like we're doing in this example; you just reference the "definition file" and the database does it automatically.

Additional reading: Here is some documentation that talks about it: http://docs.oracle.com/cd/B28359_01/appdev.111/b28369/xdb05sto.htm (a rough sketch of registering an XML schema appears after this post).

Thank you Brian and Nevin.

Dan Hotka Oracle ACE Director Author/Instructor/CEO
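Regarding the follow-up question about uploading the XML "definition file": that file is the XML schema (XSD), and it can be registered in Oracle XML DB with the DBMS_XMLSCHEMA package, after which schema-based (structured) storage and validation become available. The following is only a rough sketch under assumed names – the EXPORT_DIR directory, the invoice.xsd file and the vendor namespace URL are placeholders – and the exact parameter list should be checked against the XML DB documentation linked above for your release:

BEGIN
  DBMS_XMLSCHEMA.registerSchema(
    schemaurl => 'http://com/exlibris/repository/acq/invoice/xmlbeans',  -- placeholder: the vendor's namespace URL
    schemadoc => XMLTYPE(bfilename('EXPORT_DIR','invoice.xsd'),          -- placeholder file name
                         nls_charset_id('WE8ISO8859P1')),
    local     => TRUE,    -- register as a local (per-user) schema
    gentypes  => TRUE,    -- generate object types for structured storage
    gentables => TRUE);   -- generate default XMLType tables
END;
/

Once the schema is registered, tables and columns can be bound to it so that the database, rather than the query, knows the document structure.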
Wiki Page: How to Display Background Images in Different Regions in Oracle APEX 5.0
Written by Clarisa Maman Orfali

The article I present today is in response to a question I was asked, and I thought it would be very worthwhile to share the answer with the whole community. Basically, the question was about how we can place different images in the different regions of a page in Oracle Application Express.

To build this example, I created a desktop application in APEX with two regions. One of the regions is a Classic Report that shows the records of the EMP table, whose source SQL query is: select * from EMP. The other region is a Static Content region with dummy text in it. In the following image we can see the two regions that were created:

Upload the Images to the Workspace (WORKSPACE)

To be able to reference the images in our CSS we need to upload them to our workspace. To do this we go to Shared Components, select Static Workspace Files and upload the images there; in my case the references are: #WORKSPACE_IMAGES#sky.jpg and #WORKSPACE_IMAGES#gotas.png

Create a Static ID for Each Region

To be able to assign CSS rules to a specific region, we need to create an identifier for each of the regions. In our example we will create an identifier name for the Classic Report region and another static identifier name for the Static Content region.

From the Page Designer of the Home page, we select the Classic Report region and, in the Properties panel, go to the Advanced section and enter the name fondo-region-ic (background of the classic report region) in Static ID.

In the same way we enter the static ID for the Static Content region, which we will call fondo-region-ce (background of the static content region).

Create CSS Rules for the Identifiers

We now need to create the CSS rules for each of our identifiers:

Identifier fondo-region-ic
#fondo-region-ic { background: #2B9AF3 url(#WORKSPACE_IMAGES#gotas.png) top center; background-repeat: repeat; background-size: cover; background-attachment: fixed; }

Identifier fondo-region-ce
#fondo-region-ce { background: #6694C6 url(#WORKSPACE_IMAGES#sky.jpg) top center; background-repeat: repeat; background-size: cover; background-attachment: fixed; }

We copy the CSS styles of each identifier and paste them into the page where the regions are located: we select the page title and, in the Properties panel, go to the Inline CSS area and place the CSS rules there.

If we run the page we can see a small, barely perceptible border around each region. That is because the Classic Report region has its own background and so does the Static Content region. So that the regions have a transparent body we need to add the following to our CSS, for both regions:

#fondo-region-ic .t-Region-body { background: transparent; } #fondo-region-ce .t-Region-body { background: transparent; }

As we can see, we now have the images displayed as the background of each of our regions. We only need to adjust a few CSS styles so that the Classic Report table is not transparent and the static content text is white, and we can also change the background color of each region's title.
Identify the Class Names Used in Each Region

While running the application we need to identify the name of the class that belongs to each region (the Classic Report region and the body of the Static Content region). Let's start by identifying the one that corresponds to the Classic Report region. Every browser has a utility for inspecting pages; in my case I use Firefox with the Firebug plugin, which installs Web Developer tools that let us inspect the page, among many other features.

We place the cursor over the Classic Report table and, with the right mouse button, select "Inspect Element (Q)".

In the Element Inspector we can see that the class that corresponds to the Classic Report table is: t-Report-report

We will add a CSS rule for this class so that the Classic Report table has a white background.

#fondo-region-ic .t-Report-report { background: #fff; }

We do the same to find the CSS class name of the body of the Static Content region; the class name is: t-Region-body

We add the following CSS rule to make the region's text white:

#fondo-region-ce .t-Region-body { color: #fff; }

Finally, we will change the background color of each region's header; the class name is: t-Region-header

We add the CSS for each region:

#fondo-region-ic .t-Region-header{ background-color: #246396; } #fondo-region-ce .t-Region-header{ background-color: #4DA74D; }

The titles are black and, since the background colors are dark, it is better to switch them to white. We identify the class that corresponds to the title, whose name is: t-Region-title, and enter the following CSS:

#fondo-region-ic .t-Region-title { color: #fff; } #fondo-region-ce .t-Region-title { color: #fff; }

Our result looks as shown below:

Add a Background Image to the Title of the "Home" Page

We can also change the background of our page title by referencing an image. For this example we will again use the gotas.png image. From the Page Designer we open the Home page and select the page title, then enter the class name "fondo-titulo-body" in the CSS Classes box in the panel on the right that holds the page properties:

We identify the class that corresponds to the page title, whose name is: t-Body-title, and then enter the following CSS rules:

.fondo-titulo-body .t-Body-title { background: #2B9AF3 url(#WORKSPACE_IMAGES#gotas.png) top center; background-repeat: repeat; background-size: cover; background-attachment: fixed; }

We run the page and can see the results of using the images as backgrounds in the different regions of our APEX page:

Note: it is important to understand that what we entered above is a page class and NOT a static identifier (id). Classes begin with a dot and identifiers begin with the hash sign #. The difference between them is that an identifier is meant to select a single, unique element, whereas classes are meant to apply the same style to several elements of a page.
If we did not use the static identifiers for each region, we would be modifying classes that are applied to all the elements of the application, and the style changes could not be scoped to a single element; that is why we need the static identifiers, so that the class whose CSS we are changing is matched only inside the region that the static identifier name belongs to.

Here are all the CSS rules together:

#fondo-region-ic { background: #2B9AF3 url(#WORKSPACE_IMAGES#gotas.png) top center; background-repeat: repeat; background-size: cover; background-attachment: fixed; } #fondo-region-ce { background: #6694C6 url(#WORKSPACE_IMAGES#sky.jpg) top center; background-repeat: repeat; background-size: cover; background-attachment: fixed; } #fondo-region-ic .t-Region-body { background: transparent; } #fondo-region-ce .t-Region-body { background: transparent; } #fondo-region-ic .t-Report-report { background: #fff; } #fondo-region-ce .t-Region-body { color: #fff; } #fondo-region-ic .t-Region-header{ background-color: #246396; } #fondo-region-ce .t-Region-header{ background-color: #4DA74D; } #fondo-region-ic .t-Region-title { color: #fff; } #fondo-region-ce .t-Region-title { color: #fff; } .fondo-titulo-body .t-Body-title { background: #2B9AF3 url(#WORKSPACE_IMAGES#gotas.png) top center; background-repeat: repeat; background-size: cover; background-attachment: fixed; }

It is worth noting that I used vivid colors so the changes are noticeable and we can clearly identify which class corresponds to each region. As we can see, customizing our regions is very simple when we work with inline CSS styles. For more complex customizations it is better to use a separate style sheet that is loaded by our application.

See you soon!
Blog Post: Oracle Optimizer Internals (Oracle11)
Review my blogs. A while back I described how Oracle11 (Oracle 11.2) does this cardinality feedback. The optimizer notices after the first execution of a SQL statement that the actual row counts are significantly different and that maybe a different explain plan could be used…it marks the statement for re-hard parsing and, upon the 2nd execution, it re-hard parses using the actual row counts. If you were to display the plan using the DBMS_XPLAN package, there would be a note at the bottom saying 'cardinality feedback used'… (a hedged example of this check appears after this post).

I've long been an advocate of getting the actual row counts when trying to tune a problem SQL statement. The cardinality column of the explain plan is useless…it obviously wasn't accurate enough for the CBO…and you need the actual row counts to do accurate SQL tuning. See another blog or use my SQL Tuner tool for ACTUAL row counts with explain plans…the 10046 trace shows actual row counts too.

Oracle 11gR2 implemented this cardinality feedback. I thought this was a really good idea until I recently received some feedback from a production user that when they notice this, the SQL actually runs more poorly. There are a couple of ways to turn off cardinality feedback:

Add this hint to the problem SQL (my preference): SELECT /*+ opt_param('_OPTIMIZER_USE_FEEDBACK' 'FALSE') */

Or…set the init.ora setting: optimizer_features_enable < 11.2.0.1

Thank you Trent.

Dan Hotka Oracle ACE Director Author/Instructor/CEO
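A quick way to see whether cardinality feedback was applied to the last statement run in the current session is DBMS_XPLAN.DISPLAY_CURSOR. A minimal sketch (the GATHER_PLAN_STATISTICS hint, or STATISTICS_LEVEL=ALL, is needed for the actual A-Rows column to be populated; "your problem SQL" is a placeholder):

SQL> SELECT /*+ GATHER_PLAN_STATISTICS */ ... your problem SQL ... ;
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

The E-Rows and A-Rows columns show estimated versus actual row counts side by side, and on 11.2 the Note section at the bottom flags when cardinality feedback was used for the statement.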
Blog Post: Datapump import failed with ORA-04091: table is mutating
Recently one of my application teams asked me to refresh a few tables in a lower-environment database using the production copy of the tables. As usual, I opted to perform a Datapump import (impdp) to refresh (data only) these tables. However, to my surprise the import failed with an unusual error as shown below.

##--- impdp failed with ORA-04091 table mutating error ---##
Import: Release 12.1.0.2.0 - Production on Tue May 17 17:28:18 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Password:
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
Master table "MYAPP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "MYAPP"."SYS_IMPORT_TABLE_01": myapp/********@mypdb1 directory=EXP dumpfile=myapp_exp.dmp logfile=pop_doc.log tables=MYAPP.POPULAR_DOCUMENT table_exists_action=TRUNCATE content=DATA_ONLY
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "MYAPP"."POPULAR_DOCUMENT" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-04091: table MYAPP.POPULAR_DOCUMENT is mutating, trigger/function may not see it
ORA-06512: at "MYAPP.TR_POPULAR_DOCUMENT", line 6
ORA-04088: error during execution of trigger 'MYAPP.TR_POPULAR_DOCUMENT'
Job "MYAPP"."SYS_IMPORT_TABLE_01" completed with 1 error(s) at Tue May 17 17:28:23 2016 elapsed 0 00:00:02

As a first step of the investigation, I quickly looked into the error description, and here is what it states.

##--- ORA-04091 error description ---##
[oracle@cloudserver1 ~]$ oerr ora 04091
04091, 00000, "table %s.%s is mutating, trigger/function may not see it"
// *Cause: A trigger (or a user defined plsql function that is referenced in
// this statement) attempted to look at (or modify) a table that was
// in the middle of being modified by the statement which fired it.
// *Action: Rewrite the trigger (or function) so it does not read that table.

As per the Oracle documentation: A mutating table is a table that is currently being modified by an update, delete, or insert statement. You will encounter the ORA-04091 error if you have a row trigger that reads or modifies the mutating table. For example, if your trigger contains a select statement or an update statement referencing the table it is triggering off of, you will receive the error. Another way that this error can occur is if the trigger has statements that change the primary, foreign or unique key columns of the table the trigger is triggering from.

Table mutation acts as a defence to prevent a transaction from accessing inconsistent data when the table is under modification within the same transaction. This way, Oracle guarantees the fundamental principle of data consistency.

As per the descriptions, it seems a trigger (MYAPP.TR_POPULAR_DOCUMENT in our case) attempted to query/modify a table (MYAPP.POPULAR_DOCUMENT in our case) which was already in the middle of being modified by the statement that fired it (in our case the INSERT from the Datapump import job). Let's check what the trigger is about to troubleshoot further.
---// row level (before event) trigger enabled for the concerned table //---
SQL> select owner,trigger_name,trigger_type,table_name,status 2 from dba_triggers where table_name='POPULAR_DOCUMENT';
OWNER TRIGGER_NAME TRIGGER_TYPE TABLE_NAME STATUS ---------- -------------------- -------------------- -------------------- -------- MYAPP TR_POPULAR_DOCUMENT BEFORE EACH ROW POPULAR_DOCUMENT ENABLED

---// trigger definition from the database //---
SQL> select dbms_metadata.get_ddl('TRIGGER','TR_POPULAR_DOCUMENT','MYAPP') ddl from dual;
DDL ----------------------------------------------------------------------------------------------------
CREATE OR REPLACE EDITIONABLE TRIGGER "MYAPP"."TR_POPULAR_DOCUMENT" BEFORE INSERT ON POPULAR_DOCUMENT FOR EACH ROW
BEGIN
  DECLARE
    lng_popular_document_id NUMBER;
    lng_count NUMBER;
  BEGIN
    SELECT Count( * ) INTO lng_count FROM POPULAR_DOCUMENT WHERE user_id = :NEW.user_id AND document_id = :NEW.document_id;
    IF lng_count = 0 THEN
      IF :NEW.popular_id IS NULL OR :NEW.popular_id = 0 THEN
        SELECT seq_pop_doc_id.NEXTVAL INTO lng_popular_document_id FROM DUAL;
        :NEW.popular_id := lng_popular_document_id;
      END IF;
    ELSE
      Raise_application_error( -20001, 'Popular already exists in the user''s My Popular' );
    END IF;
  END;
END;
ALTER TRIGGER "MYAPP"."TR_POPULAR_DOCUMENT" ENABLE

---// table definition //---
SQL> desc POPULAR_DOCUMENT
Name Null? Type ----------------------------- -------- -------------------- POPULAR_ID NOT NULL NUMBER(10) USER_ID NOT NULL NUMBER(10) DOCUMENT_ID NOT NULL VARCHAR2(10 CHAR) TITLE NOT NULL VARCHAR2(4000 CHAR) MODULE_TYPE NOT NULL NUMBER(5) CREATED_DATE NOT NULL DATE ACCESSED_DATE NOT NULL DATE CONTENTDATABASEID NUMBER(5)

This seems to be a simple trigger which runs before the INSERT of each row into the POPULAR_DOCUMENT table, to check whether a record already exists in the same POPULAR_DOCUMENT table with a matching USER_ID and DOCUMENT_ID. If the record doesn't exist, it gets the next sequence number from SEQ_POP_DOC_ID when the value of POPULAR_ID is passed as NULL or 0. This trigger should not be causing a mutation error, as there is a consistent view of the table data each time the trigger fires before a single-row insert statement. To validate this, I tried to insert a few dummy records into the POPULAR_DOCUMENT table and the records were inserted without any errors, as shown below.

---// records inserted without any mutating errors //---
SQL> insert into POPULAR_DOCUMENT (USER_ID,DOCUMENT_ID,TITLE,MODULE_TYPE,CREATED_DATE,ACCESSED_DATE) 2 values (1,'1','Sample Doc','3',sysdate,sysdate);
1 row created.
SQL> insert into POPULAR_DOCUMENT (USER_ID,DOCUMENT_ID,TITLE,MODULE_TYPE,CREATED_DATE,ACCESSED_DATE) 2 values (1,'4','Sample Doc','3',sysdate,sysdate);
1 row created.

As we can see, we are able to insert records without causing any table mutation. However, when we run a Datapump import (impdp) to load data, it fails with table mutation errors for this table. This clearly indicates that Datapump (impdp) is doing something different which generates an inconsistent view of the table data within the same transaction and in turn causes the table mutation. Let's find out more details about the Datapump import. To help in the diagnosis, I have enabled tracing for the Datapump import (TRACE=1FF0300) and here are some key findings from the trace file.
##--- ##--- Parallel DML is enabled for Datapump import session ---## ##--- SHDW:17:28:22.089: *** DATAPUMP_JOB call *** META:17:28:22.092: DEBUG set by number META:17:28:22.092: v_debug_enable set KUPP:17:28:22.092: Current trace/debug flags: 01FF0300 = 33489664 *** MODULE NAME:(Data Pump Worker) 2016-05-17 17:28:22.093 *** ACTION NAME:(SYS_IMPORT_TABLE_01) 2016-05-17 17:28:22.093 KUPW:17:28:22.093: 0: ALTER SESSION ENABLE PARALLEL DML called. KUPW:17:28:22.093: 0: ALTER SESSION ENABLE PARALLEL DML returned. KUPV:17:28:22.093: Attach request for job: MYAPP.SYS_IMPORT_TABLE_01 --- --- output trimmed for readability --- ##--- ##--- Datapump opted for a EXTERNAL table load ---## ##--- KUPD:17:28:23.345: KUPD$DATA_INT access method = 4 KUPD:17:28:23.345: Table load using External Table KUPD:17:28:23.345: Current context KUPD:17:28:23.345: index = 1 KUPD:17:28:23.345: counter = 2 KUPD:17:28:23.345: in_use = TRUE KUPD:17:28:23.345: handle = 20001 KUPD:17:28:23.345: schema name = MYAPP KUPD:17:28:23.345: table name = POPULAR_DOCUMENT KUPD:17:28:23.345: file size = 0 KUPD:17:28:23.345: process order = 5 KUPD:17:28:23.345: set param flags= 16896 KUPD:17:28:23.345: dp flags = 0 KUPD:17:28:23.345: scn = 0 KUPD:17:28:23.345: job name = SYS_IMPORT_TABLE_01 KUPD:17:28:23.345: job owner = MYAPP KUPD:17:28:23.345: data options = 0 KUPD:17:28:23.345: table_exists_action = TRUNCATE KUPD:17:28:23.345: alternate method = 0 KUPD:17:28:23.345: in EtConvLoad KUPD:17:28:23.345: in Get_Load_SQL KUPD:17:28:23.346: in Do_Modify with transform MODIFY KUPD:17:28:23.353: in Setup_metadata_environ META:17:28:23.353: set xform param: EXT_TABLE_NAME:ET$00123AEA0001 KUPD:17:28:23.353: External table name is ET$00123AEA0001 META:17:28:23.353: set xform param: EXT_TABLE_CLAUSE: ( TYPE ORACLE_DATAPUMP DEFAULT DIRECTORY "EXP" ACCESS PARAMETERS ( DEBUG = (3 , 33489664) DATAPUMP INTERNAL TABLE "MYAPP"."POPULAR_DOCUMENT" JOB ( "MYAPP","SYS_IMPORT_TABLE_01",5) WORKERID 1 PARALLEL 1 VERSION '12.1.0.2.0' ENCRYPTPASSWORDISNULL COMPRESSION DISABLED ENCRYPTION DISABLED TABLEEXISTS) LOCATION ('bogus.dat') ) PARALLEL 1 REJECT LIMIT UNLIMITED KUPD:17:28:23.353: Setup_metadata_environ: external table clause built META:17:28:23.353: set xform param: OPERATION_TYPE:IMPORT META:17:28:23.353: set xform param: TARGET_TABLE_SCHEMA:MYAPP META:17:28:23.353: set xform param: TARGET_TABLE_NAME:POPULAR_DOCUMENT META:17:28:23.353: set xform param: EXT_TABLE_SCHEMA:MYAPP If we scroll down to the trace file, we can find that Datapump import tried to perform a parallel direct path multi row insert from the external table (ET$00123AEA0001) to the target table (POPULAR_DOCUMENT) using INSERT INTO SELECT statement as shown below. 
##--- ##--- Datapump import tried to perform direct path multi row insert ---## ##--- KUPD:17:28:23.548: CREATE TABLE "MYAPP"."ET$00123AEA0001" ( "POPULAR_ID" NUMBER(10,0), "USER_ID" NUMBER(10,0), "DOCUMENT_ID" VARCHAR2(10 CHAR), "TITLE" VARCHAR2(4000 CHAR), "MODULE_TYPE" NUMBER(5,0), "CREATED_DATE" DATE, "ACCESSED_DATE" DATE, "CONTENTDATABASEID" NUMBER(5,0) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_DATAPUMP DEFAULT DIRECTORY "EXP" ACCESS PARAMETERS ( DEBUG = (3 , 33489664) DATAPUMP INTERNAL TABLE "MYAPP"."POPULAR_DOCUMENT" JOB ( "MYAPP","SYS_IMPORT_TABLE_01",5) WORKERID 1 PARALLEL 1 VERSION '12.1.0.2.0' ENCRYPTPASSWORDISNULL COMPRESSION DISABLED ENCRYPTION DISABLED TABLEEXISTS) LOCATION ('bogus.dat') ) PARALLEL 1 REJECT LIMIT UNLIMITED KUPD:17:28:23.548: INSERT /*+ APPEND PARALLEL("POPULAR_DOCUMENT",1)+*/ INTO RELATIONAL("MYAPP"."POPULAR_DOCUMENT" NOT XMLTYPE) ("POPULAR_ID", "USER_ID", "DOCUMENT_ID", "TITLE", "MODULE_TYPE", "CREATED_DATE", "ACCESSED_DATE", "CONTENTDATABASEID") SELECT "POPULAR_ID", "USER_ID", "DOCUMENT_ID", "TITLE", "MODULE_TYPE", "CREATED_DATE", "ACCESSED_DATE", "CONTENTDATABASEID" FROM "MYAPP"."ET$00123AEA0001" KU$ KUPD:17:28:23.548: in execute_sql KUPD:17:28:23.548: Verb item: DROP KUPD:17:28:23.552: Verb item: CREATE KUPD:17:28:23.552: Start to execute create table stmt KUPD:17:28:23.559: Just executed dbms_sql.parse KUPD:17:28:23.559: Verb item: INSERT KUPD:17:28:23.559: in explain_plan_to_tracefile KUPD:17:28:23.565: Sql plan statement is EXPLAIN PLAN INTO SYS.DATA_PUMP_XPL_TABLE$ FOR INSERT /*+ APPEND PARALLEL("POPULAR_DOCUMENT",1)+*/ INTO RELATIONAL("MYAPP"."POPULAR_DOCUMENT" NOT XMLTYPE) ("POPULAR_ID", "USER_ID", "DOCUMENT_ID", "TITLE", "MODULE_TYPE", "CREATED_DATE", "ACCESSED_DATE", "CONTENTDATABAS KUPD:17:28:23.565: Sql plan statement is EID") SELECT "POPULAR_ID", "USER_ID", "DOCUMENT_ID", "TITLE", "MODULE_TYPE", "CREATED_DATE", "ACCESSED_DATE", "CONTENTDATABASEID" FROM "MYAPP"."ET$00123AEA0001" KU$ KUPA:17:28:23.578: parse tree dump for ET$00123AEA0001 However, due to the presence of row level trigger (TR_POPULAR_DOCUMENT) on target table (POPULAR_DOCUMENT) direct path insert and parallel DML were disabled during the load and only INSERT AS SELECT (IAS) was executed to load the data from external table to the target table, which ultimately failed due to table mutation as shown below. 
KUPA: Total datastream length processed is 0
KUPD:17:28:23.594: Explain plan output
KUPD:17:28:23.701: Plan hash value: 4129170041
KUPD:17:28:23.701:
KUPD:17:28:23.701: ------------------------------------------------------------------------------------------------
KUPD:17:28:23.701: | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
KUPD:17:28:23.701: ------------------------------------------------------------------------------------------------
KUPD:17:28:23.701: | 0 | INSERT STATEMENT | | 8168 | 16M| 29 (0)| 00:00:01 |
KUPD:17:28:23.701: | 1 | LOAD TABLE CONVENTIONAL | POPULAR_DOCUMENT | | | | |
KUPD:17:28:23.701: | 2 | EXTERNAL TABLE ACCESS FULL| ET$00123AEA0001 | 8168 | 16M| 29 (0)| 00:00:01 |
KUPD:17:28:23.701: ------------------------------------------------------------------------------------------------
KUPD:17:28:23.701:
KUPD:17:28:23.701: Note
KUPD:17:28:23.701: -----
KUPD:17:28:23.701: - PDML disabled because triggers are defined
KUPD:17:28:23.701: - Direct Load disabled because triggers are defined
KUPD:17:28:23.702: - Direct Load disabled because triggers are defined
KUPD:17:28:23.702: executing IAS using DBMS_SQL
--- --- output trimmed for readability ---
KUPD:17:28:23.738: Exception raised
KUPD:17:28:23.738: Sqlcode is -29913
KUPD:17:28:23.738: Drop external table, ET$00123AEA0001
KUPD:17:28:23.751: Table ET$00123AEA0001 dropped
KUPD:17:28:23.751: Error stack is ORA-29913: error in executing ODCIEXTTABLEFETCH callout ORA-04091: table MYAPP.POPULAR_DOCUMENT is mutating, trigger/function may not see it ORA-06512: at "MYAPP.TR_POPULAR_DOCUMENT", line 6 ORA-04088: error during execution of trigger 'MYAPP.TR_POPULAR_DOCUMENT'
KUPD:17:28:23.751: Exception raised
KUPD:17:28:23.751: Sqlcode is -29913
KUPD:17:28:23.751: start_job: external table dropped
KUPD:17:28:23.751: in free_context_entry
KUPW:17:28:23.753: 1: KUPD$DATA.START_JOB returned. In procedure CREATE_MSG
KUPW:17:28:23.753: 1: ORA-31693: Table data object "MYAPP"."POPULAR_DOCUMENT" failed to load/unload and is being skipped due to error: ORA-29913: error in executing ODCIEXTTABLEFETCH callout ORA-04091: table MYAPP.POPULAR_DOCUMENT is mutating, trigger/function may not see it ORA-06512: at "MYAPP.TR_POPULAR_DOCUMENT", line 6 ORA-04088: error during execution of trigger 'MYAPP.TR_POPULAR_DOCUMENT'

To elaborate: with a row-level (before event) trigger, when a single-row insert is performed on the POPULAR_DOCUMENT table, Oracle has a consistent view of the table records within that transaction (as the insert is yet to be performed) and hence no mutation occurs for that table. However, with a row-level (before event) trigger, when a multi-row insert is performed on the POPULAR_DOCUMENT table, there are uncommitted records (caused by the very first inserted row) within the same transaction, resulting in inconsistent data, which in turn causes a table mutation error for that table within the trigger. The following example illustrates this behaviour of a multi-row insert with a row-level (before event) trigger (I am using the same POPULAR_DOCUMENT table on which the trigger is defined).

---// create temporary table t from POPULAR_DOCUMENT //---
SQL> create table t as select * from POPULAR_DOCUMENT;
Table created.

---// truncating table to avoid primary key violation //---
SQL> truncate table POPULAR_DOCUMENT;
Table truncated.
---// multi row insert failed with table mutation //---
SQL> insert into POPULAR_DOCUMENT select * from t;
insert into POPULAR_DOCUMENT select * from t * ERROR at line 1:
ORA-04091: table MYAPP.POPULAR_DOCUMENT is mutating, trigger/function may not see it
ORA-06512: at "MYAPP.TR_POPULAR_DOCUMENT", line 6
ORA-04088: error during execution of trigger 'MYAPP.TR_POPULAR_DOCUMENT'

As we can see, a multi-row insert can cause table mutation due to the existence of a row-level (before event) trigger referring to the same table. We can either choose to disable the row-level trigger and perform the data load (using impdp), or we can opt for a CONVENTIONAL data load while loading the data using Datapump, as shown below.

##--- import succeeds with CONVENTIONAL access method ---##
Import: Release 12.1.0.2.0 - Production on Tue May 17 17:54:19 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Password:
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
Master table "MYAPP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "MYAPP"."SYS_IMPORT_TABLE_01": myapp/********@mypdb1 directory=EXP dumpfile=myapp_exp.dmp logfile=pop_doc.log tables=MYAPP.POPULAR_DOCUMENT table_exists_action=TRUNCATE content=DATA_ONLY ACCESS_METHOD=CONVENTIONAL
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "MYAPP"."POPULAR_DOCUMENT" 28.22 KB 228 rows
Job "MYAPP"."SYS_IMPORT_TABLE_01" successfully completed at Tue May 17 17:54:23 2016 elapsed 0 00:00:02

We are now able to load the data using Datapump import (with the conventional access method). If we trace the Datapump import job, we can see that with the conventional access method Oracle performs single-row inserts (INSERT INTO … VALUES) rather than a multi-row insert using INSERT AS SELECT (IAS), as shown below.
##--- Datapump used single row insert with conventional load ---##
KUPD:17:54:23.330: In routine kupd$data.conventional_load
KUPD:17:54:23.330: Verb item: DROP
KUPD:17:54:23.334: Verb item: CREATE
KUPD:17:54:23.334: CREATE TABLE "MYAPP"."ET$00875BDA0001" ( "POPULAR_ID" NUMBER(10,0), "USER_ID" NUMBER(10,0), "DOCUMENT_ID" VARCHAR2(10 CHAR), "TITLE" VARCHAR2(4000 CHAR), "MODULE_TYPE" NUMBER(5,0), "CREATED_DATE" DATE, "ACCESSED_DATE" DATE, "CONTENTDATABASEID" NUMBER(5,0) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_DATAPUMP DEFAULT DIRECTORY "EXP" ACCESS PARAMETERS ( DEBUG = (3 , 33489664) DATAPUMP INTERNAL TABLE "MYAPP"."POPULAR_DOCUMENT" JOB ( "MYAPP","SYS_IMPORT_TABLE_01",5) WORKERID 1 PARALLEL 1 VERSION '12.1.0.2.0' ENCRYPTPASSWORDISNULL COMPRESSION DISABLED ENCRYPTION DISABLED TABLEEXISTS) LOCATION ('bogus.dat') ) PARALLEL 1 REJECT LIMIT UNLIMITED
KUPD:17:54:23.334: Start to execute create table stmt
KUPD:17:54:23.340: Just executed dbms_sql.parse
KUPD:17:54:23.340: Verb item: SELECT
KUPD:17:54:23.341: Long position is 0
KUPD:17:54:23.341: Verb item: INSERT
KUPD:17:54:23.341: Calling kupcls, conventional path load
KUPCL:17:54:23.341: Data Pump Conventional Path Import
KUPCL:17:54:23.341: Schema: MYAPP Table: POPULAR_DOCUMENT
KUPCL:17:54:23.341: Long Position: 0
KUPCL:17:54:23.343: Select Statement: SELECT "POPULAR_ID", "USER_ID", "DOCUMENT_ID", "TITLE", "MODULE_TYPE", "CREATED_DATE", "ACCESSED_DATE", "CONTENTDATABASEID" FROM "MYAPP"."ET$00875BDA0001" KU$
KUPCL:17:54:23.343: Insert Statement: INSERT INTO RELATIONAL("MYAPP"."POPULAR_DOCUMENT" NOT XMLTYPE) ("POPULAR_ID", "USER_ID", "DOCUMENT_ID", "TITLE", "MODULE_TYPE", "CREATED_DATE", "ACCESSED_DATE", "CONTENTDATABASEID") VALUES (:1, :2, :3, :4, :5, :6, :7, :8)

Footnote: When loading data using tools like Datapump or SQL*Loader, it is good practice to check for the presence of any row-level (before event) trigger defined on the table which queries/modifies the same table within the trigger, and to opt for a conventional load if required. In case a multi-row insert (IAS) needs to be performed on a table with a row-level (before event) trigger accessing the same table, the only option is to disable the row-level (before event) trigger prior to executing the IAS statement (a minimal sketch follows this post). Throughout this article, I have stressed the phrase "before event" whenever I mentioned a row-level trigger. This is because, if we have an AFTER EVENT row-level trigger on a table referring to (querying/modifying) the same table within the trigger, it will always cause a table mutation error (due to the inconsistent state of the data), irrespective of whether a single-row or multi-row insert is being performed on the table.
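If disabling the trigger for the duration of the load is acceptable, a minimal sketch (assuming sufficient privileges and that nothing else writes to the table during that window) would be:

SQL> ALTER TRIGGER MYAPP.TR_POPULAR_DOCUMENT DISABLE;
-- run the Datapump import (or other multi-row IAS load) here
SQL> ALTER TRIGGER MYAPP.TR_POPULAR_DOCUMENT ENABLE;

Keep in mind that any logic the trigger enforces (the duplicate check and the sequence-generated POPULAR_ID values) is skipped while it is disabled, so this is only appropriate when the incoming data already satisfies those rules.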
Blog Post: MySQL Database Replication Setup
MASTER - Omegha-erp
SLAVE - Omegha-bcp
Database Version - MySQL 5.5

In the MASTER server

1. Enable binary logging
root@Omegha-erp:~#sudo vi /etc/mysql/my.cnf
---
[mysqld]
#log-bin=mysql-bin
log-bin=/var/log/mysql/mysql-bin.log
server-id=1
innodb_flush_log_at_trx_commit=1
sync_binlog=1
---
root@Omegha-erp:~#sudo /etc/init.d/mysql restart

2. Create the replication user account on the source database
mysql> CREATE USER 'repl'@'%' IDENTIFIED BY '*****';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

3. Take a backup and note the binary log position for slave replication
mysql> flush tables with read lock;
mysql> show master status ;
+------------------+----------+--------------+------------------+ | File | Position | Binlog_Do_DB | Binlog_Ignore_DB | +------------------+----------+--------------+------------------+ | mysql-bin.000001 | 330 | | | +------------------+----------+--------------+------------------+
$ sudo mysqldump -uroot -p*** --all-databases --master-data > dbdump.db
mysql> unlock tables;

In the SLAVE server

4. Set a unique server id
root@Omegha-bcp:~#sudo vi /etc/mysql/my.cnf
---
server-id = 2
---
root@Omegha-bcp:~#sudo /etc/init.d/mysql restart

5. Transfer the backup file from the Master to the Slave server
root@Omegha-bcp:~#scp -i ~/.ssh/pub-key.pem ubuntu@xx.xx.xx.xx:dbdump.db .

6. Import the data, then point the slave at the master and start replication
root@Omegha-bcp:~#mysql -uroot -p*** < dbdump.db
root@Omegha-bcp:~#mysql -uroot -p***
mysql> change master to master_host='xx.xx.xx.xx', master_user='repl', master_password='****', master_log_file='mysql-bin.000001', master_log_pos=330;
mysql> start slave;
mysql> show slave status \G
*************************** 1. row *************************** Slave_IO_State: Waiting for master to send event Master_Host: xx.xx.xx.xx Master_User: repl Master_Port: 3306 Connect_Retry: 60 Master_Log_File: mysql-bin.000002 Read_Master_Log_Pos: 267 Relay_Log_File: mysqld-relay-bin.000004 Relay_Log_Pos: 413 Relay_Master_Log_File: mysql-bin.000002 Slave_IO_Running: Yes Slave_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 267 Relay_Log_Space: 716 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: 0 Master_SSL_Verify_Server_Cert: No Last_IO_Errno: 0 Last_IO_Error: Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Master_Server_Id: 1 1 row in set (0.01 sec)
mysql>

A quick way to verify that replication is working end to end is sketched below.

HAPPY LEARNING!
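A minimal verification sketch (test_repl is just a throwaway name used here for illustration): create an object on the master and confirm it appears on the slave.

-- On the MASTER
mysql> CREATE DATABASE test_repl;
mysql> CREATE TABLE test_repl.t1 (id INT PRIMARY KEY);
mysql> INSERT INTO test_repl.t1 VALUES (1);

-- On the SLAVE (should return the same row once the event is applied)
mysql> SELECT * FROM test_repl.t1;
mysql> SHOW SLAVE STATUS\G   -- Seconds_Behind_Master should be 0 or close to it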
Blog Post: Using the "Stripe Report" Dynamic Action Plugin in Oracle APEX 5.0
The plugin presented in this article lets us display alternating row colors in our Interactive Reports, allowing us to present our reports with a pleasant design.

Create the Application and Interactive Report

To show how to use the plugin, we will create a desktop-type application with an Interactive Report based on the following source query:

SELECT e.EMPNO , e.ENAME , e.JOB , m.ename MGR , e.HIREDATE , e.SAL , e.COMM , d.dname DEPTNO FROM #OWNER#.EMP e , #OWNER#.EMP m , #OWNER#.DEPT d WHERE e.mgr = m.empno (+) AND e.deptno = d.deptno (+)

To display alternating colored rows in our report we need to use the "Stripe Report" plugin and create a Dynamic Action on our Interactive Report.

Install the "Stripe Report" Plugin

From the application home page, click "Shared Components". In the "Other Components" section, click "Plug-ins". Click the "Import" button, then click the Browse… button and select the SQL file dynamic_action_plugin_com_oracle_apex_stripe_report.sql. Click "Next", click "Next" again, indicate that the plugin should be installed in the application created for this example, and finally click the "Install Plug-in" button.

Create the Dynamic Action

We return to the page containing our Employees Interactive Report. Right-click on the Interactive Report and select "Create Dynamic Action". In the Properties panel on the right:

In the Identification section
Name: Reporte Stripe

In the When section
Event: After Refresh
Selection Type: Region
Region: Empleados

In the Advanced section
Event Scope: Static

We click on the "Show" True Action in the panel on the left and, in the Properties panel on the right:

In the Identification section
Action: Stripe Report [Plug-in]

In the Settings section
Stripe Color: LemonChiffon

In the Execution Options section
Event: Reporte Stripe
Fire When Event Result Is: True
Fire on Page Load: Yes

We save the changes.

Now when we run the page we can see that the Interactive Report displays alternating rows colored yellow. The plugin is available in the "Sample Dynamic Actions" application from the Oracle Application Express packaged application gallery.

See you soon!
Blog Post: The Seven Stages of Oracle Announcements
I just listened to an Oracle webcast, where they "announced" a product. It sounded great and I got all excited, but when trying to actually use the product, I found that it was in an earlier stage of announcement. Experience shows that there are seven stages of Oracle announcements:

1. Larry has an idea for a product (first OpenWorld)
2. We have started working on a product
3. We are still working on the product (second OpenWorld)
4. The product is in beta testing
5. The product is available to one customer
6. The product is available to a select group of hand-held customers
7. The product is actually available to normal people (typically just before third OpenWorld)

Unfortunately, the product I heard "announced" had only reached Oracle Announcement Stage 3.
Blog Post: Remote Connect to MySQL Database Using 'MySQL Query Browser'
There was a recent requirement to give remote access to the MySQL database to our ERP consultant so he could connect from home and do the work. Our MySQL database was running on our cloud "Omegha" and was by default restricted from any remote access. Instead of asking the consultant to use remote SSH with PuTTY, we decided to go with a better GUI client for MySQL databases. Below are the steps used for enabling remote access to the MySQL database.

Note: Strictly not recommended for any production environment with critical data. This is a temporary requirement and the actual security settings will be restored once the work is done.

After writing my book on "Oracle SQL Developer 4.1" I've understood that non-DBAs are more comfortable with GUI clients like "Oracle SQL Developer" or "MySQL Query Browser". In fact, such beautiful free tools, which people normally don't care about, make the work quite a bit faster with their rich features.

Step 1. Download MySQL Query Browser using this link
Note: Please note that development of MySQL Query Browser has been discontinued. MySQL Workbench provides an integrated GUI environment for MySQL database design, SQL development, administration and migration. Download MySQL Workbench »
I was OK with "MySQL Query Browser" as it served my temporary purpose.

Step 2. Start "MySQL Query Browser" once it is installed, and create a new connection to your remote MySQL database. Fill in the details of your database like hostname, port, username, password, etc.

Step 3. Connect to the new connection using the login screen as shown below. Here comes my first error.
Note: MySQL Error Number 2003. But my ping to the server was working fine.

Step 4. Connected to my database using PuTTY and checked whether I was really able to connect to the database or not. It was perfect!!!
ubuntu@Omegha-bcp:~$ mysql -uroot -p***
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 37
Server version: 5.5.47-0ubuntu0.14.04.1 (Ubuntu)
Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+ | Database | +--------------------+ | information_schema | | mysql | | performance_schema | +--------------------+
3 rows in set (0.02 sec)
mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql>

Step 5. Opened the my.cnf file under the /etc/mysql directory as the root user.
ubuntu@Omegha-bcp:~$ sudo su -
sudo: unable to resolve host bcp-instance
root@bcp-instance:~# cd /etc/mysql
root@Omegha-bcp:/etc/mysql# vi my.cnf

Step 6. Changed the bind-address parameter to the public IP of the machine and commented out the default entry of 127.0.0.1
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
#bind-address = 127.0.0.1
bind-address = xx.xx.xx.xxx

Step 7. Restarted the mysql service
root@Omegha-bcp:/etc/mysql# service mysql restart
mysql stop/waiting
mysql start/running, process 1946
root@Omegha-bcp:/etc/mysql#

Step 8. Here comes my next error when I tried connecting.
Note: MySQL Error Number 1130

Step 9.
Again connected to my MySQL database and granted privileges for all hosts (this includes any remote host too, and is strongly not recommended for production environments) as shown below.
root@Omegha-bcp:/etc/mysql# mysql -uroot -p***
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 38
Server version: 5.5.47-0ubuntu0.14.04.1 (Ubuntu)
Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> grant all privileges on *.* to 'root'@'%' identified by '***';
Query OK, 0 rows affected (0.03 sec)
mysql>

Step 10. Tried to make the connection again using the "MySQL Query Browser" client; it worked like a charm. A slightly safer variation of the grant is sketched below.

HAPPY LEARNING!
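A minimal sketch of a less permissive alternative, assuming the consultant connects from a single known IP address (shown here as a placeholder) and using a dedicated account instead of root; the account name, IP and schema name are all illustrative:

mysql> CREATE USER 'consultant'@'203.0.113.10' IDENTIFIED BY '********';
mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON erpdb.* TO 'consultant'@'203.0.113.10';
mysql> FLUSH PRIVILEGES;

This limits remote access to one host and one schema, which is much easier to revoke cleanly once the temporary work is done.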
Wiki Page: Streaming Oracle Database Table Data to Couchbase Server
Written by Deepak Vohra

Oracle Database is a relational database and Couchbase is a NoSQL database. The data model of each is completely different. While Oracle Database is fixed-schema based, Couchbase is schema-free. While Oracle Database stores data in tables with a fixed number of columns, Couchbase stores data in JSON documents, which do not have a fixed structure, and different JSON documents may have different fields. Apache Flume does not provide a built-in source or sink for either of the databases. We shall use the third-party Flume SQL Source for Oracle Database and the third-party Flume Couchbase Sink to stream Oracle Database table data to Couchbase Server.

Setting the Environment
Installing Flume Sink for Couchbase
Configuring Flume SQL Source
Creating Oracle Database Table
Configuring Flume Agent
Running the Flume Agent
Displaying Streamed Data in Couchbase server Admin Console

Setting the Environment

The following software is required for this tutorial.
Oracle Database
Couchbase Server
Apache Flume
Maven
Java 7

Create a directory to install the software and set its permissions to global (777).
mkdir /flume
chmod -R 777 /flume
cd /flume

Download and extract the Maven 3.3.x tar file.
wget http://mirror.its.dal.ca/apache/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz
tar xvf apache-maven-3.3.3-bin.tar.gz

Download and extract the Apache Flume 1.6 tar file.
wget http://archive.apache.org/dist/flume/stable/apache-flume-1.6.0-bin.tar.gz
tar -xvf apache-flume-1.6.0-bin.tar.gz

Download the Couchbase Server rpm from http://www.couchbase.com/nosql-databases/downloads. Install Couchbase Server using the rpm. The rpm version could be different.
rpm -i couchbase-server-community_2.2.0_x86.rpm

Set environment variables for Apache Flume, Oracle Database, Maven and Java.
vi ~/.bashrc
export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=ORCL
export MAVEN_HOME=/flume/apache-maven-3.3.3-bin
export FLUME_HOME=/flume/apache-flume-1.6.0-bin
export FLUME_CONF=/flume/apache-flume-1.6.0-bin/conf
export JAVA_HOME=/flume/jdk1.7.0_55
export PATH=$PATH:$FLUME_HOME/bin:$MAVEN_HOME/bin:$ORACLE_HOME/bin
export CLASSPATH=$FLUME_HOME/lib/*

Download the Couchbase Client jar and copy the jar to the Flume lib directory.
wget http://central.maven.org/maven2/com/couchbase/client/couchbase-client/1.4.10/couchbase-client-1.4.10.jar
cp couchbase-client-1.4.10.jar $FLUME_HOME/lib

We also need to download the following jars for Couchbase Server from the Maven repository and copy the jars into the Flume classpath (the Flume lib directory).

Jar File | File Name | Description
The Apache Commons Codec package | commons-codec-1.10.jar | Encoder and decoder for various formats.
Apache HttpComponents Core | httpcore-4.4.2.jar | Apache HttpComponents for the classic (blocking) I/O model.
Apache HttpComponents Core (NIO) | httpcore-nio-4.4.2.jar | Apache HttpComponents for the non-blocking I/O model.
Jettison | jettison-1.3.7.jar | A StAX implementation for JSON
Netty/All In One | netty-all-4.0.31.Final.jar | A framework for network applications.
Spymemcached | spymemcached-2.12.0.jar | A client library for memcached
Commons Lang | commons-lang-2.3.jar | Java utility classes for classes in the java.lang package.

Copy the jar files to the Flume lib directory, which adds the jars to the Flume classpath.
cp commons-codec-1.10.jar $FLUME_HOME/lib
cp httpcore-4.4.2.jar $FLUME_HOME/lib
cp httpcore-nio-4.4.2.jar $FLUME_HOME/lib
cp jettison-1.3.7.jar $FLUME_HOME/lib
cp netty-all-4.0.31.Final.jar $FLUME_HOME/lib
cp spymemcached-2.12.0.jar $FLUME_HOME/lib
cp commons-lang-2.3.jar $FLUME_HOME/lib

Create and configure a Couchbase Server cluster from the URL http://localhost:8091 . If a cluster has already been created, log in to the Couchbase Console from the same URL. The Cluster Overview tab displays the RAM Overview and the Disk Overview. The Data Buckets tab displays the data buckets. Initially the "default" bucket is the only bucket in the Couchbase Server and the bucket is empty, as indicated by an Item Count of 0. Click on the Documents button to list the documents in the default bucket. No documents get listed.

Installing Flume Sink for Couchbase

Download the source code for the Flume Sink for Couchbase, then compile and install the Flume sink jar file.
git clone https://github.com/voidd/flume-couchbase-sink.git
cd flume-couchbase-sink
mvn compile
mvn install

Copy the generated jar file to the Flume lib directory.
cp flume-couchbase-sink-1.0.0.jar $FLUME_HOME/lib

Configuring Flume SQL Source

The Flume SQL source for Oracle Database is not packaged with the Apache Flume distribution. Download the keedio/flume-ng-sql-source project from GitHub, cd to the flume-ng-sql-source directory, and compile and package the Flume SQL source into a jar file.
git clone https://github.com/keedio/flume-ng-sql-source.git
cd flume-ng-sql-source
mvn package

Create a plugins.d/sql-source/lib directory for the SQL Source plugin, set its permissions to global (777) and copy the flume-ng-sql-source-1.3-SNAPSHOT.jar file to the lib directory.
mkdir -p $FLUME_HOME/plugins.d/sql-source/lib
chmod -R 777 $FLUME_HOME/plugins.d/sql-source/lib
cp flume-ng-sql-source-1.3-SNAPSHOT.jar $FLUME_HOME/plugins.d/sql-source/lib

Similarly, create a libext directory plugins.d/sql-source/libext for the Oracle Database JDBC jar file ojdbc6.jar, set its permissions to global (777) and copy the ojdbc6.jar to the libext directory.
mkdir $FLUME_HOME/plugins.d/sql-source/libext
chmod -R 777 $FLUME_HOME/plugins.d/sql-source/libext
cp ojdbc6.jar $FLUME_HOME/plugins.d/sql-source/libext

We also need to copy the ojdbc6.jar and flume-ng-sql-source-1.3-SNAPSHOT.jar to the Flume lib directory, which adds the jars to the runtime classpath of Flume.
cp flume-ng-sql-source-1.3-SNAPSHOT.jar $FLUME_HOME/lib
cp ojdbc6.jar $FLUME_HOME/lib

Creating Oracle Database Table

Drop the OE.WLSLOG table, as it may have been created for another application, and create the table with the following SQL script in SQL*Plus.
DROP TABLE OE.WLSLOG;
CREATE TABLE OE.WLSLOG (id INTEGER PRIMARY KEY, time_stamp VARCHAR2(4000), category VARCHAR2(4000), type VARCHAR2(4000), servername VARCHAR2(4000), code VARCHAR2(4000), msg VARCHAR2(4000));

Table OE.WLSLOG gets created. Add data to the table with the following SQL script.
INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(1,'Apr-8-2014-7:06:16-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to STANDBY'); INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(2,'Apr-8-2014-7:06:17-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to STARTING'); INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(3,'Apr-8-2014-7:06:18-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to ADMIN'); INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(4,'Apr-8-2014-7:06:19-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to RESUMING'); INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(5,'Apr-8-2014-7:06:20-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000361','Started WebLogic AdminServer'); INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(6,'Apr-8-2014-7:06:21-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to RUNNING'); INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(7,'Apr-8-2014-7:06:22-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000360','Server started in RUNNING mode'); 7 rows of data get added. An SQL query may be used to list the data added. Exit SQL*Plus after adding the data, as the Flume agent may not be able to connect to Oracle Database while SQL*Plus is also connected. Configuring Flume Agent Create a Flume configuration file ( flume.conf ; name is arbitrary) in the FLUME_CONF directory and set the configuration for a Flume agent with a Flume SQL source, a Flume channel and a Flume sink for the Couchbase database. Configuration Property Description Value agent.sources Sets the Flume Source. sql-source agent.sinks Sets the Flume sink. couchbaseSink agent.channels Sets the Flume channel. ch1 agent.sources.sql-source.channels Sets the channel on the source. ch1 agent.channels.ch1.capacity Sets the channel capacity. 1000000 agent.channels.ch1.type Sets the channel type. memory agent.sources.sql-source.type Sets the SQL Source type class. org.keedio.flume.source.SQLSource agent.sources.sql-source.connection.url Sets the connection URL for Oracle Database. jdbc:oracle:thin:@127.0.0.1:1521:ORCL agent.sources.sql-source.user Sets the username for Oracle Database. OE agent.sources.sql-source.password Sets the password for Oracle Database. OE agent.sources.sql-source.table Sets the Oracle Database table. WLSLOG agent.sources.sql-source.columns.to.select Sets the columns to select. The * setting selects all columns. * agent.sources.sql-source.incremental.column.name Sets the incremental column name. id agent.sources.sql-source.incremental.value Sets the incremental column value to start streaming from. 0 agent.sources.sql-source.run.query.delay Sets the frequency in milliseconds at which to poll the SQL source. 10000 agent.sources.sql-source.status.file.path Sets the directory path for the SQL source status file. /var/lib/flume agent.sources.sql-source.status.file.name Sets the status file. sql-source.status agent.sinks.couchbaseSink.channel Sets the channel on the Couchbase sink. ch1 agent.sinks.couchbaseSink.type Sets the sink type for Couchbase. 
org.apache.flume.sink.couchbase.CouchBaseSink agent.sinks.couchbaseSink.hostNames Sets the Host name/s for the Couchbase cluster http://127.0.0.1:8091/pools agent.sinks.couchbaseSink.bucketName Sets the Couchbase bucket default The flume.conf is listed: agent.channels = ch1 agent.sinks = couchbaseSink agent.sources = sql-source agent.channels.ch1.type = memory agent.channels.ch1.capacity = 1000000 agent.sources.sql-source.channels = ch1 agent.sources.sql-source.type = org.keedio.flume.source.SQLSource # URL to connect to database agent.sources.sql-source.connection.url = jdbc:oracle:thin:@127.0.0.1:1521:ORCL # Database connection properties agent.sources.sql-source.user = OE agent.sources.sql-source.password = OE agent.sources.sql-source.table = OE.WLSLOG agent.sources.sql-source.columns.to.select = * # Increment column properties agent.sources.sql-source.incremental.column.name = id # Increment value is from you want to start taking data from tables (0 will import entire table) agent.sources.sql-source.incremental.value = 0 # Query delay, each configured milisecond the query will be sent agent.sources.sql-source.run.query.delay=10000 # Status file is used to save last readed row agent.sources.sql-source.status.file.path = /var/lib/flume agent.sources.sql-source.status.file.name = sql-source.status agent.sinks.couchbaseSink.channel = ch1 agent.sinks.couchbaseSink.type =org.apache.flume.sink.couchbase.CouchBaseSink agent.sinks.couchbaseSink.hostNames =http://127.0.0.1:8091/pools agent.sinks.couchbaseSink.bucketName =default Create the directory for the status file, which contains the incremental column value that was last streamed. Any row with incremental value greater than status file value gets streamed. Set the directory permissions to global (777). Remove the status file from any previous run of Flume. sudo mkdir -p /var/lib/flume sudo chmod -R 777 /var/lib/flume cd /var/lib/flume rm sql-source.status We also need to copy the template flume-env.sh.template to flume-env.sh . cp $FLUME_HOME/conf/flume-env.sh.template $FLUME_HOME/conf/flume-env.sh Running the Flume Agent To stream data from Oracle Database to Couchbase server run the Flume agent with the following command. flume-ng agent --conf $FLUME_CONF -f $FLUME_CONF/flume.conf -n agent -Dflume.root.logger=INFO,console Flume agent gets started. Oracle Database data gets streamed to Couchbase server and the Flume agent continues to run. A more detailed output from the Flume agent is as follows. 
[root@localhost flume]# flume-ng agent --conf $FLUME_CONF -f $FLUME_CONF/flume.conf -n agent -Dflume.root.logger=INFO,console Info: Including Hive libraries found via () for Hive access + exec /flume/jdk1.7.0_55/bin/java -Xmx20m -Dflume.root.logger=INFO,console -cp '/flume/apache-flume-1.6.0-bin/conf:/flume/apache-flume-1.6.0-bin/lib/*:/flume/apache-flume-1.6.0-bin/plugins.d/sql-source/lib/*:/flume/apache-flume-1.6.0-bin/plugins.d/sql-source/libext/*:/lib/*' -Djava.library.path= org.apache.flume.node.Application -f /flume/apache-flume-1.6.0-bin/conf/flume.conf -n agent o.a.f.n.PollingPropertiesFileConfigurationProvider - Checking file:/flume/apache-flume-1.6.0-bin/conf/flume.conf for changes 13:38:23.976 [conf-file-poller-0] INFO o.a.f.n.PollingPropertiesFileConfigurationProvider - Reloading configuration file:/flume/apache-flume-1.6.0-bin/conf/flume.conf 13:38:24.022 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:couchbaseSink 13:38:24.028 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Created context for couchbaseSink: channel 13:38:24.046 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Added sinks: couchbaseSink Agent: agent 13:38:24.054 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:couchbaseSink 13:38:24.058 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:couchbaseSink 13:38:24.062 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:couchbaseSink 13:38:24.073 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Starting validation of configuration for agent: agent, initial-configuration: AgentConfiguration[agent] SOURCES: {sql-source={ parameters:{run.query.delay=10000, columns.to.select=*, connection.url=jdbc:oracle:thin:@127.0.0.1:1521:ORCL, incremental.value=0, channels=ch1, table=OE.WLSLOG, status.file.name=sql-source.status, type=org.keedio.flume.source.SQLSource, password=OE, user=OE, incremental.column.name=id, status.file.path=/var/lib/flume} }} CHANNELS: {ch1={ parameters:{capacity=1000000, type=memory} }} SINKS: {couchbaseSink={ parameters:{hostNames=http://127.0.0.1:8091/pools, type=org.apache.flume.sink.couchbase.CouchBaseSink, bucketName=default, channel=ch1} }} 13:38:24.145 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Created channel ch1 13:38:24.223 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Creating sink: couchbaseSink using OTHER 13:38:24.243 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Post validation configuration for agent AgentConfiguration created without Configuration stubs for which only basic syntactical validation was performed[agent] SOURCES: {sql-source={ parameters:{run.query.delay=10000, columns.to.select=*, connection.url=jdbc:oracle:thin:@127.0.0.1:1521:ORCL, incremental.value=0, channels=ch1, table=OE.WLSLOG, status.file.name=sql-source.status, type=org.keedio.flume.source.SQLSource, password=OE, user=OE, incremental.column.name=id, status.file.path=/var/lib/flume} }} CHANNELS: {ch1={ parameters:{capacity=1000000, type=memory} }} SINKS: {couchbaseSink={ parameters:{hostNames=http://127.0.0.1:8091/pools, type=org.apache.flume.sink.couchbase.CouchBaseSink, bucketName=default, channel=ch1} }} 13:38:24.265 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Channels:ch1 13:38:24.266 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Sinks couchbaseSink 13:38:24.268 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Sources 
sql-source 13:38:24.271 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Post-validation flume configuration contains configuration for agents: [agent] 13:38:24.277 [conf-file-poller-0] INFO o.a.f.n.AbstractConfigurationProvider - Creating channels 13:38:24.378 [conf-file-poller-0] INFO o.a.f.channel.DefaultChannelFactory - Creating instance of channel ch1 type memory 13:38:24.427 [conf-file-poller-0] INFO o.a.f.n.AbstractConfigurationProvider - Created channel ch1 13:38:24.431 [conf-file-poller-0] INFO o.a.f.source.DefaultSourceFactory - Creating instance of source sql-source, type org.keedio.flume.source.SQLSource 13:38:24.437 [conf-file-poller-0] DEBUG o.a.f.source.DefaultSourceFactory - Source type org.keedio.flume.source.SQLSource is a custom type 13:38:24.488 [conf-file-poller-0] INFO org.keedio.flume.source.SQLSource - Reading and processing configuration values for source sql-source 13:38:24.510 [conf-file-poller-0] INFO o.k.flume.source.SQLSourceHelper - Status file not created, using start value from config file 2015-11-12 13:38:26,038 (conf-file-poller-0) [INFO - org.hibernate.annotations.common.reflection.java.JavaReflectionManager. (JavaReflectionManager.java:66)] HCANN000001: Hibernate Commons Annotations {4.0.5.Final} 2015-11-12 13:38:26,148 (conf-file-poller-0) [INFO - org.hibernate.Version.logVersion(Version.java:54)] HHH000412: Hibernate Core {4.3.10.Final} 2015-11-12 13:38:26,174 (conf-file-poller-0) [INFO - org.hibernate.cfg.Environment. (Environment.java:239)] HHH000206: hibernate.properties not found 2015-11-12 13:38:26,202 (conf-file-poller-0) [INFO - org.hibernate.cfg.Environment.buildBytecodeProvider(Environment.java:346)] HHH000021: Bytecode provider name : javassist 13:38:26.383 [conf-file-poller-0] INFO o.k.flume.source.HibernateHelper - Opening hibernate session 13:38:26.961 [lifecycleSupervisor-1-0] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:{ file:/flume/apache-flume-1.6.0-bin/conf/flume.conf counterGroup:{ name:null counters:{file.checks=1, file.loads=1} } provider:org.apache.flume.node.PollingPropertiesFileConfigurationProvider agentName:agent } supervisoree:{ status:{ lastSeen:1447353503922 lastSeenState:IDLE desiredState:START firstSeen:1447353503922 failures:0 discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@26a4ce } 13:38:26.961 [lifecycleSupervisor-1-0] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Status check complete 2015-11-12 13:38:27,103 (conf-file-poller-0) [WARN - org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl.configure(DriverManagerConnectionProviderImpl.java:93)] HHH000402: Using Hibernate built-in connection pool (not for production use!) 
2015-11-12 13:38:27,113 (conf-file-poller-0) [INFO - org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl.buildCreator(DriverManagerConnectionProviderImpl.java:166)] HHH000401: using driver [null] at URL [jdbc:oracle:thin:@127.0.0.1:1521:ORCL] 2015-11-12 13:38:27,131 (conf-file-poller-0) [INFO - org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl.buildCreator(DriverManagerConnectionProviderImpl.java:175)] HHH000046: Connection properties: {user=OE, password=****} 2015-11-12 13:38:27,139 (conf-file-poller-0) [INFO - org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl.buildCreator(DriverManagerConnectionProviderImpl.java:180)] HHH000006: Autocommit mode: false 2015-11-12 13:38:27,153 (conf-file-poller-0) [INFO - org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl.configure(DriverManagerConnectionProviderImpl.java:102)] HHH000115: Hibernate connection pool size: 20 (min=1) 13:38:29.962 [lifecycleSupervisor-1-1] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:{ file:/flume/apache-flume-1.6.0-bin/conf/flume.conf counterGroup:{ name:null counters:{file.checks=1, file.loads=1} } provider:org.apache.flume.node.PollingPropertiesFileConfigurationProvider agentName:agent } supervisoree:{ status:{ lastSeen:1447353506961 lastSeenState:START desiredState:START firstSeen:1447353503922 failures:0 discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@26a4ce } 13:38:29.962 [lifecycleSupervisor-1-1] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Status check complete 2015-11-12 13:38:32,387 (conf-file-poller-0) [INFO - org.hibernate.dialect.Dialect. (Dialect.java:145)] HHH000400: Using dialect: org.hibernate.dialect.Oracle10gDialect 2015-11-12 13:38:32,798 (conf-file-poller-0) [INFO - org.hibernate.engine.transaction.internal.TransactionFactoryInitiator.initiateService(TransactionFactoryInitiator.java:62)] HHH000399: Using default transaction strategy (direct JDBC transactions) 2015-11-12 13:38:32,845 (conf-file-poller-0) [INFO - org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory. 
(ASTQueryTranslatorFactory.java:47)] HHH000397: Using ASTQueryTranslatorFactory 13:38:32.963 [lifecycleSupervisor-1-0] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:{ file:/flume/apache-flume-1.6.0-bin/conf/flume.conf counterGroup:{ name:null counters:{file.checks=1, file.loads=1} } provider:org.apache.flume.node.PollingPropertiesFileConfigurationProvider agentName:agent } supervisoree:{ status:{ lastSeen:1447353509962 lastSeenState:START desiredState:START firstSeen:1447353503922 failures:0 discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@26a4ce } 13:38:32.965 [lifecycleSupervisor-1-0] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Status check complete 13:38:33.882 [conf-file-poller-0] INFO o.a.flume.sink.DefaultSinkFactory - Creating instance of sink: couchbaseSink, type: org.apache.flume.sink.couchbase.CouchBaseSink 13:38:33.890 [conf-file-poller-0] DEBUG o.a.flume.sink.DefaultSinkFactory - Sink type org.apache.flume.sink.couchbase.CouchBaseSink is a custom type 13:38:34.024 [conf-file-poller-0] INFO o.a.f.n.AbstractConfigurationProvider - Channel ch1 connected to [sql-source, couchbaseSink] 13:38:34.048 [conf-file-poller-0] INFO org.apache.flume.node.Application - Starting new configuration:{ sourceRunners:{sql-source=PollableSourceRunner: { source:org.keedio.flume.source.SQLSource{name:sql-source,state:IDLE} counterGroup:{ name:null counters:{} } }} sinkRunners:{couchbaseSink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@849039 counterGroup:{ name:null counters:{} } }} channels:{ch1=org.apache.flume.channel.MemoryChannel{name: ch1}} } 13:38:34.066 [conf-file-poller-0] INFO org.apache.flume.node.Application - Starting Channel ch1 13:38:34.077 [conf-file-poller-0] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Supervising service:org.apache.flume.channel.MemoryChannel{name: ch1} policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@f8f6d0 desiredState:START 13:38:34.083 [lifecycleSupervisor-1-1] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:org.apache.flume.channel.MemoryChannel{name: ch1} supervisoree:{ status:{ lastSeen:null lastSeenState:null desiredState:START firstSeen:null failures:0 discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@f8f6d0 } 13:38:34.083 [lifecycleSupervisor-1-1] DEBUG o.a.f.lifecycle.LifecycleSupervisor - first time seeing org.apache.flume.channel.MemoryChannel{name: ch1} 13:38:34.083 [lifecycleSupervisor-1-1] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Want to transition org.apache.flume.channel.MemoryChannel{name: ch1} from IDLE to START (failures:0) 13:38:34.117 [lifecycleSupervisor-1-1] INFO o.a.f.i.MonitoredCounterGroup - Monitored counter group for type: CHANNEL, name: ch1: Successfully registered new MBean. 
13:38:34.118 [lifecycleSupervisor-1-1] INFO o.a.f.i.MonitoredCounterGroup - Component type: CHANNEL, name: ch1 started 13:38:34.119 [conf-file-poller-0] INFO org.apache.flume.node.Application - Starting Sink couchbaseSink 13:38:34.119 [conf-file-poller-0] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Supervising service:SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@849039 counterGroup:{ name:null counters:{} } } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@354005 desiredState:START 13:38:34.120 [lifecycleSupervisor-1-2] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@849039 counterGroup:{ name:null counters:{} } } supervisoree:{ status:{ lastSeen:null lastSeenState:null desiredState:START firstSeen:null failures:0 discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@354005 } 13:38:34.130 [lifecycleSupervisor-1-2] DEBUG o.a.f.lifecycle.LifecycleSupervisor - first time seeing SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@849039 counterGroup:{ name:null counters:{} } } 13:38:34.130 [lifecycleSupervisor-1-2] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Want to transition SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@849039 counterGroup:{ name:null counters:{} } } from IDLE to START (failures:0) 13:38:34.142 [conf-file-poller-0] INFO org.apache.flume.node.Application - Starting Source sql-source 13:38:34.143 [conf-file-poller-0] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Supervising service:PollableSourceRunner: { source:org.keedio.flume.source.SQLSource{name:sql-source,state:IDLE} counterGroup:{ name:null counters:{} } } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@119ddb8 desiredState:START 13:38:34.144 [lifecycleSupervisor-1-1] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Status check complete 13:38:34.174 [lifecycleSupervisor-1-2] INFO o.a.f.sink.couchbase.CouchBaseSink - Starting couchbaseSink... 13:38:34.144 [lifecycleSupervisor-1-4] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:PollableSourceRunner: { source:org.keedio.flume.source.SQLSource{name:sql-source,state:IDLE} counterGroup:{ name:null counters:{} } } supervisoree:{ status:{ lastSeen:null lastSeenState:null desiredState:START firstSeen:null failures:0 discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@119ddb8 } 13:38:34.174 [lifecycleSupervisor-1-4] DEBUG o.a.f.lifecycle.LifecycleSupervisor - first time seeing PollableSourceRunner: { source:org.keedio.flume.source.SQLSource{name:sql-source,state:IDLE} counterGroup:{ name:null counters:{} } } 13:38:34.175 [lifecycleSupervisor-1-4] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Want to transition PollableSourceRunner: { source:org.keedio.flume.source.SQLSource{name:sql-source,state:IDLE} counterGroup:{ name:null counters:{} } } from IDLE to START (failures:0) 13:38:34.178 [lifecycleSupervisor-1-4] INFO org.keedio.flume.source.SQLSource - Starting sql source sql-source ... 13:38:34.188 [lifecycleSupervisor-1-4] INFO o.a.f.i.MonitoredCounterGroup - Monitored counter group for type: SOURCE, name: SOURCESQL.sql-source: Successfully registered new MBean. 
13:38:34.189 [lifecycleSupervisor-1-4] INFO o.a.f.i.MonitoredCounterGroup - Component type: SOURCE, name: SOURCESQL.sql-source started 13:38:34.190 [lifecycleSupervisor-1-2] INFO o.a.f.i.MonitoredCounterGroup - Monitored counter group for type: SINK, name: couchbaseSink: Successfully registered new MBean. 13:38:34.190 [lifecycleSupervisor-1-2] INFO o.a.f.i.MonitoredCounterGroup - Component type: SINK, name: couchbaseSink started 13:38:34.220 [lifecycleSupervisor-1-4] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Status check complete 13:40:28.427 [agent-shutdown-hook] INFO o.a.f.sink.couchbase.CouchBaseSink - CouchBase sink {} stopping 2015-11-12 13:40:28.462 INFO com.couchbase.client.CouchbaseConnection: Shut down Couchbase client 2015-11-12 13:40:28.552 INFO com.couchbase.client.ViewConnection: I/O reactor terminated 13:40:28.567 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Component type: SINK, name: couchbaseSink stopped 13:40:28.571 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: couchbaseSink. sink.start.time == 1447353514190 13:40:28.573 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: couchbaseSink. sink.stop.time == 1447353628567 13:40:28.574 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: couchbaseSink. sink.batch.complete == 0 13:40:28.577 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: couchbaseSink. sink.batch.empty == 12 13:40:28.579 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: couchbaseSink. sink.batch.underflow == 1 13:40:28.582 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: couchbaseSink. sink.connection.closed.count == 1 13:40:28.583 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: couchbaseSink. sink.connection.creation.count == 1 13:40:28.589 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: couchbaseSink. sink.connection.failed.count == 0 13:40:28.590 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: couchbaseSink. sink.event.drain.attempt == 7 13:40:28.590 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: SINK, name: couchbaseSink. sink.event.drain.sucess == 7 The row index for the incremental column gets updated to 7, which is the number of rows streamed. Displaying Streamed Data in Couchbase server Admin Console The data streamed to Couchbase server may be displayed in Couchbase Console at URL http://localhost:8091 . The Item Count is listed as 8 for the “default” bucket. Click on the Documents button to display the documents. The eight documents get listed. Only seven documents were streamed and the eighth document is docCounter for the maximum index in the document name. The document names start with log and are suffixed with _i in which i is the index from 0 to 6 for the 7 documents. In this tutorial we streamed Oracle Database table data to Couchbase server using Apache Flume.
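Because the SQL source records the last streamed value of the incremental column (id) in the status file, only rows with a higher id are streamed on subsequent polls. As a minimal sketch of this behaviour (assuming the Flume agent is still running and the last streamed id is 7; the timestamp value below is illustrative), inserting and committing one more row in SQL*Plus should result in one additional document appearing in the default bucket after the next poll (run.query.delay is 10000 ms):
INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(8,'Apr-8-2014-7:06:23-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000360','Server started in RUNNING mode');
COMMIT;
-- Any committed row with id greater than the last streamed value (7) is picked up on the next poll.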
Wiki Page: Oracle Database 12c – Partition Table Enhancements (Interval Reference, Cascading Truncate and Exchange Operations) for Pluggable Database (PDB) in Container Database (CDB)
By Y V Ravi Kumar , Oracle ACE and Oracle Certified Master Introduction: Oracle Database 12c addresses key issues in supporting large tables and indexes by allowing them to be divided into smaller, more manageable parts called partitions. Here we can observe interval reference partitioning and exchanging partitions between a non-partitioned table and a partitioned table in a pluggable database (cdb2_pdb1) in a container database (cdb2) Environment Setup: Database version : Oracle 12c (12.1.0.2.0) Operating System : Oracle Enterprise Linux 7 Container database : cdb2 with pluggable database (cdb2_pdb1) Log in to the container database (cdb2) and open the pluggable database (cdb2_pdb1) [oracle@oralinux7 ~]$ . oraenv ORACLE_SID = [cdb1] ? cdb2 The Oracle base remains unchanged with value /u01/app/oracle [oracle@oralinux7 ~]$ sqlplus /nolog SQL*Plus: Release 12.1.0.2.0 Production on Mon May 16 09:00:40 2016 Copyright (c) 1982, 2014, Oracle. All rights reserved. SQL> connect sys/oracle@cdb2 as sysdba Connected. SQL> alter pluggable database all open; Pluggable database altered. Log in to the pluggable database (cdb2_pdb1), create a user, log in as the ‘scott’ user and create partitioned objects with reference partitioning SQL> create user scott identified by oracle; User created. SQL> grant connect,resource to scott; Grant succeeded. SQL> connect scott/oracle@192.168.2.25:1521/cdb2_pdb1 Connected. SQL> Create table master_table ( No number not null, Name varchar2(200), constraint No_Pk primary key (No)) partition by range (No) interval (10) ( partition Part_p1 values less than (10) ); Table created. SQL> create table Child_table ( No number not null, Name varchar2(200), SecNo number not null, constraint No_Pk2 primary key (No), constraint SecNo_Fk foreign key (secNo) references master_table (No) on delete cascade) partition by reference (SecNo_Fk); Table created. SQL> select table_name, partition_name from user_tab_partitions; TABLE_NAME PARTITION_NAME ------------------- --------------------------- CHILD_TABLE PART_P1 MASTER_TABLE PART_P1 Insert data into the range-partitioned table master_table and the reference-partitioned table child_table created above, and verify the partitions SQL> insert into MASTER_TABLE values (1,'ORACLE'); 1 row created. SQL> insert into MASTER_TABLE values (15,'MYSQL'); 1 row created. SQL> COMMIT; Commit complete. SQL> select table_name, partition_name from user_tab_partitions where table_name like '%TABLE%' TABLE_NAME PARTITION_NAME --------------------- ------------------------- CHILD_TABLE PART_P1 MASTER_TABLE PART_P1 MASTER_TABLE SYS_P201 Note: An additional partition named ‘SYS_P201’ has been automatically created for master_table SQL> insert into MASTER_TABLE values (100,'EXADATA'); 1 row created. SQL> commit; Commit complete. SQL> select table_name, partition_name from user_tab_partitions where table_name like '%TABLE%' TABLE_NAME PARTITION_NAME ---------------------- ------------------------- CHILD_TABLE PART_P1 MASTER_TABLE PART_P1 MASTER_TABLE SYS_P201 MASTER_TABLE SYS_P202 Note: One more partition, named ‘SYS_P202’, has been automatically created for master_table Insert data into child_table with the reference key and check the partitions SQL> insert into CHILD_TABLE values (1,'EXADATA',100); 1 row created. SQL> COMMIT; Commit complete. 
SQL> select table_name, partition_name from user_tab_partitions where table_name like '%TABLE%' TABLE_NAME PARTITION_NAME ---------------------- ------------------------- CHILD_TABLE PART_P1 CHILD_TABLE SYS_P202 MASTER_TABLE PART_P1 MASTER_TABLE SYS_P201 MASTER_TABLE SYS_P202 SQL> insert into CHILD_TABLE values (2,'MYSQL',15); 1 row created. SQL> COMMIT; Commit complete. SQL> select table_name, partition_name from user_tab_partitions where table_name like '%TABLE%' TABLE_NAME PARTITION_NAME --------------------- ------------------------- CHILD_TABLE PART_P1 CHILD_TABLE SYS_P201 CHILD_TABLE SYS_P202 MASTER_TABLE PART_P1 MASTER_TABLE SYS_P201 MASTER_TABLE SYS_P202 6 rows selected. Note: Additional partitions with the names ‘SYS_P201’ and ‘SYS_P202’ have been automatically created for child_table SQL> select * from master_table; NO NAME ---------- ------------ 1 ORACLE 15 MYSQL 100 EXADATA SQL> select * from child_table; NO NAME SECNO ---------- -------------------- ---------- 2 MYSQL 15 1 EXADATA 100 Rename the partition of the table master_table and check the effect on the table child_table SQL> alter table MASTER_TABLE rename partition for (100) to PART_100; Table altered. SQL> select table_name, partition_name from user_tab_partitions where table_name like '%TABLE%' TABLE_NAME PARTITION_NAME ------------------ ------------------------- CHILD_TABLE PART_P1 CHILD_TABLE SYS_P201 CHILD_TABLE SYS_P202 MASTER_TABLE PART_100 MASTER_TABLE PART_P1 MASTER_TABLE SYS_P201 6 rows selected. Note: If you change the partition name for master_table it won’t affect child_table, and you can rename the partition for child_table manually SQL> alter table CHILD_TABLE rename partition for (100) to PART_CHILD_100; Table altered. SQL> select table_name, partition_name from user_tab_partitions where table_name like '%TABLE%' TABLE_NAME PARTITION_NAME ---------------------- -------------------------- CHILD_TABLE PART_CHILD_100 CHILD_TABLE PART_P1 CHILD_TABLE SYS_P201 MASTER_TABLE PART_100 MASTER_TABLE PART_P1 MASTER_TABLE SYS_P201 6 rows selected. Check the above partitions after renaming for the tables master_table and child_table, and then truncate the master_table partition containing the value 100 with the cascade option SQL> alter table MASTER_TABLE truncate partition for (100) cascade update indexes; Table truncated. SQL> select * from master_table; NO NAME ---------- ------------- 1 ORACLE 15 MYSQL SQL> select * from child_table; NO NAME SECNO ---------- ----------- ---------- 2 MYSQL 15 If you truncate the partition containing the value ‘100’ in master_table with the cascade option, the corresponding partition in child_table is automatically truncated as well. 
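Oracle 12c also supports cascading truncate at the table level, not only for individual partitions. As a minimal sketch (not part of the demo above; it removes all rows from both tables), truncating the parent table with the CASCADE option also truncates the reference-partitioned child table, because the foreign key was declared with ON DELETE CASCADE:
TRUNCATE TABLE master_table CASCADE;
-- Truncates master_table and, through the ON DELETE CASCADE foreign key, the dependent child_table as well.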
Exchanging Partitions between a Non-Partitioned Table and a Partitioned Table with Child Tables: Check the data in master_table and child_table (repopulated with new sample rows for this example) before executing the exchange partition functionality SQL> select * from master_table; NO NAME ---------- ---------------------- 1 Part_1 - Partition 2 Part_2 - Partition SQL> select * from child_table; NO NAME SECNO ---------- -------------------- ---------- 100 Part_100 - Partition 1 200 Part_200 - Partition 2 Create non-partitioned tables master_table_ref and child_table_ref with a referential (foreign key) relation SQL> create table master_table_ref ( No number not null, Name varchar2(200), constraint No_ref_Pk primary key (No)); create table child_table_ref ( No number not null, Name varchar2(200), SecNo number not null, constraint No_pk3 primary key (No), constraint secNo_Fk1 foreign key (secNo) references master_table_ref (No) on delete cascade); Insert the data for both tables, preserving the reference SQL>insert into master_table_ref values (3, 'Part_3 - Non Partitioned data'); 1 row created. SQL>insert into child_table_ref values (3, 'Part_3 - Non Partitioned data',3); 1 row created. SQL>commit; Commit complete. Check all the data for the tables before exchanging the partitions: master_table, child_table, master_table_ref and child_table_ref SQL> select * from master_table; NO NAME ---------- ---------------------- 1 Part_1 - Partition 2 Part_2 - Partition SQL> select * from child_table; NO NAME SECNO ---------- -------------------------- ---------- 100 Part_100 - Partition 1 200 Part_200 - Partition 2 SQL> select * from master_table_ref; NO NAME ---------- --------------------------------------- 3 Part_3 - Non Partitioned data SQL> select * from child_table_ref; NO NAME SECNO ---------- ---------------------------------------- ---------- 3 Part_3 - Non Partitioned data 3 Now exchange the partitions and check the data SQL> alter table master_table exchange partition for (3) with table master_table_ref cascade update indexes; Table altered. SQL> select * from master_table; NO NAME ---------- --------------------------- 3 Part_3 - Non Partitioned data SQL> select * from child_table; NO NAME SECNO ---------- --------------------------------------- ---------- 3 Part_3 - Non Partitioned data 3 SQL> select * from master_table_ref; NO NAME ---------- ----------------------- 1 Part_1 - Partition 2 Part_2 - Partition SQL> select * from child_table_ref; NO NAME SECNO ---------- -------------------- ---------- 100 Part_100 - Partition 1 200 Part_200 - Partition 2 Summary: If you exchange a partition of the partitioned table (master_table) with the non-partitioned table (master_table_ref) using the cascade option, the data in the dependent child tables (child_table and child_table_ref) is exchanged as well.
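After an exchange it can be useful to confirm which partition now holds a given key value without knowing its system-generated name. A minimal sketch using the PARTITION FOR clause against the tables above (the value 3 is the interval key used in the exchange):
SELECT * FROM master_table PARTITION FOR (3);
-- List the partition names and upper bounds, including any system-generated interval partitions
SELECT partition_name, high_value FROM user_tab_partitions WHERE table_name = 'MASTER_TABLE';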
Wiki Page: Streaming Oracle Database Table Data to MongoDB
Written by Deepak Vohra While Oracle Database is the leading relational database (RDBMS), MongoDB is the leading NoSQL database. Consider the use case in which data is stored in Oracle Database and a copy of the data is to be kept in MongoDB as a backup or for another user. Several options are available, including exporting Oracle Database table data to a CSV file and subsequently importing the CSV file into MongoDB using the MongoDB import tool mongoimport . Another option is to use Apache Sqoop to bulk transfer data from Oracle Database to Apache Hive and subsequently create a Hive external table using the MongoDB Hive storage handler. But the most direct option is to use Apache Flume to stream data from Oracle Database to MongoDB as and when data is added to an Oracle Database table. In this tutorial we shall stream data from an Oracle Database table to a MongoDB collection using a third-party Flume SQL source for Oracle Database and a third-party Flume sink for MongoDB. The data stream path is from the Oracle Database table, through the Flume SQL source and a Flume channel, to the Flume MongoDB sink and on to the MongoDB server. Setting the Environment Creating Oracle Database Table Installing the Flume Sink for MongoDB Configuring Flume SQL Source Configuring Flume Agent Starting MongoDB Server Starting Mongo Shell Creating a MongoDB Collection Running the Flume Agent Querying MongoDB Streaming, not just Bulk Transferring Data Setting the Environment The following software is used in this tutorial. -Oracle Database -MongoDB server -Apache Flume -Java 7 Oracle Linux 6.6 is used for installing the software. Create a directory to install the software and set the directory permissions to global (777). mkdir /flume chmod -R 777 /flume cd /flume Download and extract the Apache Flume tar.gz file. wget http://archive.apache.org/dist/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz tar -xvf apache-flume-1.6.0-bin.tar.gz Download and extract the MongoDB tgz file. curl -O http://downloads.mongodb.org/linux/mongodb-linux-i686-3.0.6.tgz tar -zxvf mongodb-linux-i686-3.0.6.tgz Set environment variables for Oracle Database, MongoDB Server, Apache Flume and Java in the bash shell script ~/.bashrc. vi ~/.bashrc export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1 export ORACLE_SID=ORCL export MONGODB_HOME=/flume/mongodb-linux-i686-3.0.6 export FLUME_HOME=/flume/apache-flume-1.6.0-bin export FLUME_CONF=/flume/apache-flume-1.6.0-bin/conf export JAVA_HOME=/flume/jdk1.7.0_55 export PATH=$PATH:$FLUME_HOME/bin:$MONGODB_HOME/bin:$ORACLE_HOME/bin export CLASSPATH=$FLUME_HOME/lib/* Creating Oracle Database Table Next, drop the OE.WLSLOG table and create the table using the following SQL script in SQL*Plus. DROP TABLE OE.WLSLOG; CREATE TABLE OE.WLSLOG (id INTEGER PRIMARY KEY, time_stamp VARCHAR2(4000), category VARCHAR2(4000), type VARCHAR2(4000), servername VARCHAR2(4000), code VARCHAR2(4000), msg VARCHAR2(4000)); Database table OE.WLSLOG gets created. Add data to the table using the following SQL script. 
INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(1,'Apr-8-2014-7:06:16-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to STANDBY'); INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(2,'Apr-8-2014-7:06:17-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to STARTING'); INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(3,'Apr-8-2014-7:06:18-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to ADMIN'); INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(4,'Apr-8-2014-7:06:19-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to RESUMING'); INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(5,'Apr-8-2014-7:06:20-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000361','Started WebLogic AdminServer'); INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(6,'Apr-8-2014-7:06:21-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to RUNNING'); INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(7,'Apr-8-2014-7:06:22-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000360','Server started in RUNNING mode'); Data gets added to the OE.WLSLOG table. Exit SQL*Plus after adding the data, as the Flume agent may not be able to connect to Oracle Database while SQL*Plus is also connected. Installing the Flume Sink for MongoDB To install the third-party Flume sink for MongoDB, download, compile and package the source code for the sink. git clone https://github.com/leonlee/flume-ng-mongodb-sink.git cd flume-ng-mongodb-sink mvn package Copy the jar file generated to the Flume lib directory. Also download and copy the MongoDB Java driver to the Flume lib directory, which is in the runtime classpath of Flume. cp flume-ng-mongodb-sink-1.0.0.jar $FLUME_HOME/lib cp mongodb-driver-3.0.0.jar $FLUME_HOME/lib Configuring Flume SQL Source A Flume source for Oracle Database is not packaged with the Apache Flume distribution; we use the third-party keedio/flume-ng-sql-source instead. Download keedio/flume-ng-sql-source from GitHub, change (cd) to the flume-ng-sql-source directory and compile and package the Flume SQL source into a jar file. git clone https://github.com/keedio/flume-ng-sql-source.git cd flume-ng-sql-source mvn package Create a plugins.d/sql-source/lib directory for the SQL Source plugin, set its permissions to global and copy the flume-ng-sql-source-1.3-SNAPSHOT.jar file to the lib directory. mkdir -p $FLUME_HOME/plugins.d/sql-source/lib chmod -R 777 $FLUME_HOME/plugins.d/sql-source/lib cp flume-ng-sql-source-1.3-SNAPSHOT.jar $FLUME_HOME/plugins.d/sql-source/lib Similarly, create a libext directory plugins.d/sql-source/libext for the Oracle Database JDBC jar file ojdbc6.jar , set its permissions to global (777) and copy the ojdbc6.jar to the libext directory. mkdir $FLUME_HOME/plugins.d/sql-source/libext chmod -R 777 $FLUME_HOME/plugins.d/sql-source/libext cp ojdbc6.jar $FLUME_HOME/plugins.d/sql-source/libext We also need to copy the ojdbc6.jar and flume-ng-sql-source-1.3-SNAPSHOT.jar to the Flume lib directory, which adds the jars to the runtime classpath of Flume. 
cp flume-ng-sql-source-1.3-SNAPSHOT.jar $FLUME_HOME/lib cp ojdbc6.jar $FLUME_HOME/lib Configuring Flume Agent Create a configuration file flume.conf in the $FLUME_CONF directory and set the following configuration properties for the Flume source, Flume channel and Flume sink on the Flume agent. Configuration Property Description Value agent.sources Sets the Flume Source. sql-source agent.sinks Sets the Flume sink. mongoSink agent.channels Sets the Flume channel. ch1 agent.sources.sql-source.channels Sets the channel on the source. ch1 agent.channels.ch1.capacity Sets the channel capacity. 1000000 agent.channels.ch1.type Sets the channel type. memory agent.sources.sql-source.type Sets the SQL Source type class. org.keedio.flume.source.SQLSource agent.sources.sql-source.connection.url Sets the connection URL for Oracle Database. jdbc:oracle:thin:@127.0.0.1:1521:ORCL agent.sources.sql-source.user Sets the username for Oracle Database. OE agent.sources.sql-source.password Sets the password for Oracle Database. OE agent.sources.sql-source.table Sets the Oracle Database table. WLSLOG agent.sources.sql-source.columns.to.select Sets the columns to select. The * setting selects all columns. * agent.sources.sql-source.incremental.column.name Sets the incremental column name. id agent.sources.sql-source.incremental.value Sets the incremental column value to start streaming from. 0 agent.sources.sql-source.run.query.delay Sets the frequency in milliseconds at which to poll the SQL source. 10000 agent.sources.sql-source.status.file.path Sets the directory path for the SQL source status file. /var/lib/flume agent.sources.sql-source.status.file.name Sets the status file. sql-source.status agent.sinks.mongoSink.channel Sets the channel on the MongoDB sink. ch1 agent.sinks.mongoSink.type Sets the sink type for MongoDB. org.riderzen.flume.sink.MongoSink agent.sinks.mongoSink.autoWrap Sets whether to automatically wrap the event body in a MongoDB document field. true agent.sinks.mongoSink.db Sets the MongoDB database. flume agent.sinks.mongoSink.collection Sets the MongoDB collection. wlslog The flume.conf is listed: agent.channels = ch1 agent.sinks = mongoSink agent.sources = sql-source agent.channels.ch1.type = memory agent.channels.ch1.capacity = 1000000 agent.sources.sql-source.channels = ch1 agent.sources.sql-source.type = org.keedio.flume.source.SQLSource # URL to connect to database agent.sources.sql-source.connection.url = jdbc:oracle:thin:@127.0.0.1:1521:ORCL # Database connection properties agent.sources.sql-source.user = OE agent.sources.sql-source.password = OE agent.sources.sql-source.table = OE.WLSLOG agent.sources.sql-source.columns.to.select = * # Increment column properties agent.sources.sql-source.incremental.column.name = id # Increment value is from you want to start taking data from tables (0 will import entire table) agent.sources.sql-source.incremental.value = 0 # Query delay, each configured milisecond the query will be sent agent.sources.sql-source.run.query.delay=10000 # Status file is used to save last readed row agent.sources.sql-source.status.file.path = /var/lib/flume agent.sources.sql-source.status.file.name = sql-source.status agent.sinks.mongoSink.channel = ch1 agent.sinks.mongoSink.type = org.riderzen.flume.sink.MongoSink agent.sinks.mongoSink.autoWrap = true agent.sinks.mongoSink.db = flume agent.sinks.mongoSink.collection = wlslog Copy the flume.conf to the Flume configuration directory with one of the following commands (they are equivalent, since $FLUME_CONF points to $FLUME_HOME/conf). cp flume.conf $FLUME_HOME/conf/flume.conf cp flume.conf $FLUME_CONF/flume.conf Also create the Flume environment file from the template. 
cp $FLUME_HOME/conf/flume-env.sh.template $FLUME_HOME/conf/flume-env.sh Starting MongoDB Server Start the MongoDB server with the following command. >mongod MongoDB server gets started and listens on port 27017. Starting Mongo Shell Start the MongoDB shell with the following command. >mongo MongoDB shell gets started. Creating a MongoDB Collection In flume.conf we set the MongoDB database as ‘flume’ and the MongoDB collection as ‘wlslog’. To create a MongoDB collection called ‘wlslog’ in the MongoDB database ‘flume’, first run the following command, which switches to (and implicitly creates) the ‘flume’ database. >use flume The ‘flume’ database does not actually get initialized until it is used. Run the following Mongo shell command to create the ‘wlslog’ collection. >db.createCollection("wlslog") After logging into the Mongo shell, check whether the flume database and wlslog collection already exist. The use flume command sets the database to flume, and if the wlslog collection already exists the db.createCollection("wlslog") command generates a “collection already exists” error. The documents in the wlslog collection may be listed with the db.wlslog.find() command. Drop the wlslog collection with the following command. >db.wlslog.drop() The wlslog collection gets dropped. Create the collection again with the db.createCollection("wlslog") command. An output of {“ok”: 1} indicates that the wlslog collection got created. Running the Flume Agent To stream Oracle Database table data to MongoDB, run the Flume agent with the following command, in which the configuration directory is specified with the --conf option, the configuration file with the -f option and the agent name with the -n option. flume-ng agent --conf $FLUME_CONF -f $FLUME_CONF/flume.conf -n agent -Dflume.root.logger=INFO,console Flume agent “agent” gets started. Oracle Database table data gets streamed to MongoDB server and the Flume agent continues to run. 
A more detailed output from the Flume agent is listed: [root@localhost flume]# flume-ng agent --conf $FLUME_CONF -f $FLUME_CONF/flume.conf -n agent -Dflume.root.logger=INFO,console Info: Including Hive libraries found via () for Hive access + exec /flume/jdk1.7.0_55/bin/java -Xmx20m -Dflume.root.logger=INFO,console -cp '/flume/apache-flume-1.6.0-bin/conf:/flume/apache-flume-1.6.0-bin/lib/*:/flume/apache-flume-1.6.0-bin/plugins.d/sql-source/lib/*:/flume/apache-flume-1.6.0-bin/plugins.d/sql-source/libext/*:/lib/*' -Djava.library.path= org.apache.flume.node.Application -f /flume/apache-flume-1.6.0-bin/conf/flume.conf -n agent o.a.f.n.PollingPropertiesFileConfigurationProvider - Checking file:/flume/apache-flume-1.6.0-bin/conf/flume.conf for changes 14:55:18.045 [conf-file-poller-0] INFO o.a.f.n.PollingPropertiesFileConfigurationProvider - Reloading configuration file:/flume/apache-flume-1.6.0-bin/conf/flume.conf 14:55:18.580 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:mongoSink 14:55:18.586 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Created context for mongoSink: collection 14:55:18.614 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:mongoSink 14:55:18.632 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Added sinks: mongoSink Agent: agent 14:55:18.638 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:mongoSink 14:55:18.641 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:mongoSink 14:55:18.675 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Processing:mongoSink 14:55:18.683 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Starting validation of configuration for agent: agent, initial-configuration: AgentConfiguration[agent] SOURCES: {sql-source={ parameters:{run.query.delay=10000, columns.to.select=*, connection.url=jdbc:oracle:thin:@127.0.0.1:1521:ORCL, incremental.value=0, channels=ch1, table=OE.WLSLOG, status.file.name=sql-source.status, type=org.keedio.flume.source.SQLSource, password=OE, user=OE, incremental.column.name=id, status.file.path=/var/lib/flume} }} CHANNELS: {ch1={ parameters:{capacity=1000000, type=memory} }} SINKS: {mongoSink={ parameters:{db=flume, autoWrap=true, collection=wlslog, type=org.riderzen.flume.sink.MongoSink, channel=ch1} }} 14:55:19.041 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Created channel ch1 14:55:19.709 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Creating sink: mongoSink using OTHER 14:55:19.761 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Post validation configuration for agent AgentConfiguration created without Configuration stubs for which only basic syntactical validation was performed[agent] SOURCES: {sql-source={ parameters:{run.query.delay=10000, columns.to.select=*, connection.url=jdbc:oracle:thin:@127.0.0.1:1521:ORCL, incremental.value=0, channels=ch1, table=OE.WLSLOG, status.file.name=sql-source.status, type=org.keedio.flume.source.SQLSource, password=OE, user=OE, incremental.column.name=id, status.file.path=/var/lib/flume} }} CHANNELS: {ch1={ parameters:{capacity=1000000, type=memory} }} SINKS: {mongoSink={ parameters:{db=flume, autoWrap=true, collection=wlslog, type=org.riderzen.flume.sink.MongoSink, channel=ch1} }} 14:55:19.809 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Channels:ch1 14:55:19.823 [conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Sinks mongoSink 14:55:19.835 
[conf-file-poller-0] DEBUG o.a.flume.conf.FlumeConfiguration - Sources sql-source 14:55:19.862 [conf-file-poller-0] INFO o.a.flume.conf.FlumeConfiguration - Post-validation flume configuration contains configuration for agents: [agent] 14:55:19.870 [conf-file-poller-0] INFO o.a.f.n.AbstractConfigurationProvider - Creating channels 14:55:20.410 [conf-file-poller-0] INFO o.a.f.channel.DefaultChannelFactory - Creating instance of channel ch1 type memory 14:55:20.660 [conf-file-poller-0] INFO o.a.f.n.AbstractConfigurationProvider - Created channel ch1 14:55:20.743 [conf-file-poller-0] INFO o.a.f.source.DefaultSourceFactory - Creating instance of source sql-source, type org.keedio.flume.source.SQLSource 14:55:20.759 [conf-file-poller-0] DEBUG o.a.f.source.DefaultSourceFactory - Source type org.keedio.flume.source.SQLSource is a custom type 14:55:21.039 [lifecycleSupervisor-1-0] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:{ file:/flume/apache-flume-1.6.0-bin/conf/flume.conf counterGroup:{ name:null counters:{file.checks=1, file.loads=1} } provider:org.apache.flume.node.PollingPropertiesFileConfigurationProvider agentName:agent } supervisoree:{ status:{ lastSeen:1447358117507 lastSeenState:IDLE desiredState:START firstSeen:1447358117507 failures:0 discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@4c98e7 } 14:55:21.042 [lifecycleSupervisor-1-0] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Status check complete 14:55:21.208 [conf-file-poller-0] INFO org.keedio.flume.source.SQLSource - Reading and processing configuration values for source sql-source 14:55:21.305 [conf-file-poller-0] INFO o.k.flume.source.SQLSourceHelper - Status file not created, using start value from config file 14:55:24.045 [lifecycleSupervisor-1-1] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:{ file:/flume/apache-flume-1.6.0-bin/conf/flume.conf counterGroup:{ name:null counters:{file.checks=1, file.loads=1} } provider:org.apache.flume.node.PollingPropertiesFileConfigurationProvider agentName:agent } supervisoree:{ status:{ lastSeen:1447358121042 lastSeenState:START desiredState:START firstSeen:1447358117507 failures:0 discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@4c98e7 } 14:55:24.049 [lifecycleSupervisor-1-1] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Status check complete 14:55:27.053 [lifecycleSupervisor-1-0] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:{ file:/flume/apache-flume-1.6.0-bin/conf/flume.conf counterGroup:{ name:null counters:{file.checks=1, file.loads=1} } provider:org.apache.flume.node.PollingPropertiesFileConfigurationProvider agentName:agent } supervisoree:{ status:{ lastSeen:1447358124049 lastSeenState:START desiredState:START firstSeen:1447358117507 failures:0 discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@4c98e7 } 14:55:27.059 [lifecycleSupervisor-1-0] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Status check complete 14:55:30.066 [lifecycleSupervisor-1-2] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:{ file:/flume/apache-flume-1.6.0-bin/conf/flume.conf counterGroup:{ name:null counters:{file.checks=1, file.loads=1} } provider:org.apache.flume.node.PollingPropertiesFileConfigurationProvider agentName:agent } supervisoree:{ status:{ lastSeen:1447358127059 lastSeenState:START desiredState:START firstSeen:1447358117507 failures:0 
discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@4c98e7 } 14:55:30.071 [lifecycleSupervisor-1-2] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Status check complete 2015-11-12 14:55:31,776 (conf-file-poller-0) [INFO - org.hibernate.annotations.common.reflection.java.JavaReflectionManager. (JavaReflectionManager.java:66)] HCANN000001: Hibernate Commons Annotations {4.0.5.Final} 2015-11-12 14:55:32,531 (conf-file-poller-0) [INFO - org.hibernate.Version.logVersion(Version.java:54)] HHH000412: Hibernate Core {4.3.10.Final} 2015-11-12 14:55:32,770 (conf-file-poller-0) [INFO - org.hibernate.cfg.Environment. (Environment.java:239)] HHH000206: hibernate.properties not found 2015-11-12 14:55:32,873 (conf-file-poller-0) [INFO - org.hibernate.cfg.Environment.buildBytecodeProvider(Environment.java:346)] HHH000021: Bytecode provider name : javassist 14:55:33.086 [lifecycleSupervisor-1-1] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:{ file:/flume/apache-flume-1.6.0-bin/conf/flume.conf counterGroup:{ name:null counters:{file.checks=1, file.loads=1} } provider:org.apache.flume.node.PollingPropertiesFileConfigurationProvider agentName:agent } supervisoree:{ status:{ lastSeen:1447358130071 lastSeenState:START desiredState:START firstSeen:1447358117507 failures:0 discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@4c98e7 } 14:55:33.101 [lifecycleSupervisor-1-1] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Status check complete 14:55:35.066 [conf-file-poller-0] INFO o.k.flume.source.HibernateHelper - Opening hibernate session 14:55:36.107 [lifecycleSupervisor-1-3] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:{ file:/flume/apache-flume-1.6.0-bin/conf/flume.conf counterGroup:{ name:null counters:{file.checks=1, file.loads=1} } provider:org.apache.flume.node.PollingPropertiesFileConfigurationProvider agentName:agent } supervisoree:{ status:{ lastSeen:1447358133101 lastSeenState:START desiredState:START firstSeen:1447358117507 failures:0 discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@4c98e7 } 14:55:36.108 [lifecycleSupervisor-1-3] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Status check complete 14:55:39.109 [lifecycleSupervisor-1-0] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:{ file:/flume/apache-flume-1.6.0-bin/conf/flume.conf counterGroup:{ name:null counters:{file.checks=1, file.loads=1} } provider:org.apache.flume.node.PollingPropertiesFileConfigurationProvider agentName:agent } supervisoree:{ status:{ lastSeen:1447358136108 lastSeenState:START desiredState:START firstSeen:1447358117507 failures:0 discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@4c98e7 } 14:55:39.109 [lifecycleSupervisor-1-0] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Status check complete 2015-11-12 14:55:41,977 (conf-file-poller-0) [WARN - org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl.configure(DriverManagerConnectionProviderImpl.java:93)] HHH000402: Using Hibernate built-in connection pool (not for production use!) 
2015-11-12 14:55:42,062 (conf-file-poller-0) [INFO - org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl.buildCreator(DriverManagerConnectionProviderImpl.java:166)] HHH000401: using driver [null] at URL [jdbc:oracle:thin:@127.0.0.1:1521:ORCL] 2015-11-12 14:55:42,091 (conf-file-poller-0) [INFO - org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl.buildCreator(DriverManagerConnectionProviderImpl.java:175)] HHH000046: Connection properties: {user=OE, password=****} 14:55:42.110 [lifecycleSupervisor-1-4] DEBUG o.a.f.lifecycle.LifecycleSupervisor - checking process:{ file:/flume/apache-flume-1.6.0-bin/conf/flume.conf counterGroup:{ name:null counters:{file.checks=1, file.loads=1} } provider:org.apache.flume.node.PollingPropertiesFileConfigurationProvider agentName:agent } supervisoree:{ status:{ lastSeen:1447358139109 lastSeenState:START desiredState:START firstSeen:1447358117507 failures:0 discard:false error:false } policy:org.apache.flume.lifecycle.LifecycleSupervisor$SupervisorPolicy$AlwaysRestartPolicy@4c98e7 } 14:55:42.111 [lifecycleSupervisor-1-4] DEBUG o.a.f.lifecycle.LifecycleSupervisor - Status check complete Querying MongoDB From the Mongo shell run the following commands to list the documents in the wlslog collection in the flume database. >use flume >db.wlslog.find() The 7 documents streamed get listed. Streaming, not just Bulk Transferring Data Flume is the preferred tool if data is to streamed from a source to a sink rather than being just bulk transferred. To demonstrate, add 3 more rows of data to the Oracle Database table OE.WSLLOG . INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(8,'Apr-8-2014-7:06:20-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000361','Started WebLogic AdminServer'); INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(9,'Apr-8-2014-7:06:21-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to RUNNING'); INSERT INTO OE.WLSLOG(id,time_stamp,category,type,servername,code,msg) VALUES(10,'Apr-8-2014-7:06:22-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000360','Server started in RUNNING mode'); Exit the SQL*Plus after adding the 3 rows. The 3 new rows added also get streamed to MongoDB server. Run the db.wlslog.find() command in the MongoDB shell again and 10 documents get listed. Each document has a unique _id even though other fields could be the same value. The Apache Flume agent may be stopped if no further data is to be streamed. As the shutdown metrics indicate 10 events get put into the Flume channel from the Flume source Oracle Database. While the event take attempts are 41 because the sink polls the channel for new data periodically only 10 events are taken from the channel. 15:00:32.724 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Component type: CHANNEL, name: ch1 stopped 15:00:32.726 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: CHANNEL, name: ch1. channel.start.time == 1447358196693 15:00:32.728 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: CHANNEL, name: ch1. channel.stop.time == 1447358432724 15:00:32.757 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: CHANNEL, name: ch1. channel.capacity == 1000000 15:00:32.758 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: CHANNEL, name: ch1. 
channel.current.size == 0 15:00:32.763 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: CHANNEL, name: ch1. channel.event.put.attempt == 10 15:00:32.782 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: CHANNEL, name: ch1. channel.event.put.success == 10 15:00:32.784 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: CHANNEL, name: ch1. channel.event.take.attempt == 41 15:00:32.785 [agent-shutdown-hook] INFO o.a.f.i.MonitoredCounterGroup - Shutdown Metric for type: CHANNEL, name: ch1. channel.event.take.success == 10 In this tutorial we streamed Oracle Database table data to MongoDB server using Apache Flume.
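If you prefer to double-check the totals rather than scan the find() output, a count from the Mongo shell works too. This is a small verification step that is not part of the original walkthrough; the database and collection names are the ones configured earlier in the tutorial:

>use flume
>db.wlslog.count()

At this point the count should be 10 (the 7 documents from the initial load plus the 3 rows added afterwards).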
Blog Post: The best idea since 1992: Putting the C into ACID (We need your vote)
Oracle Corporation is asking for community feedback on the SQL-92 CREATE ASSERTION feature. If you would like it implemented in Oracle Database, please vote at https://community.oracle.com/ideas/13028 (click on the up-arrow). You need to log in to your OTN account in order to vote. If you do not have an OTN account, please consider registering for one. Backgrounder The creator of the relational model, Dr. Codd, touted its simplicity and consequent appeal to users—especially casual users—who have little or no training in programming. He singles out this advantage in the opening sentence of his first paper on relational theory, A Relational Model of Data for Large Shared Data Banks : “Future users of large data banks must be protected from having to know how the data is organized in the machine (the internal representation).” He made the point more forcefully in a subsequent paper, Normalized Data Base Structure: A Brief Tutorial , in which he says: “In the choice of logical data structures that a system is to support, there is one consideration of absolutely paramount importance—and that is the convenience of the majority of users. … To make formatted data bases readily accessible to users (especially casual users) who have little or no training in programming we must provide the simplest possible data structures and almost natural language. … What could be a simpler, more universally needed, and more universally understood data structure than a table? Why not permit such users to view all the data in a data base in a tabular way?” But the true importance of the relational model is highlighted by the title Derivability, Redundancy, and Consistency of Relations Stored in Large Data Banks of the unpublished original—and shorter—version of Dr. Codd’s paper, which predated the published version by a year. That title hints that the chief advantage of the relational model is the ability it gives us to assert arbitrarily complex consistency constraints that must be satisfied by the data within the database; that is, the ability to put the “C” into “ACID.” An example of a complex constraint is: A pilot may fly a certain type of aircraft only if (1) he has flown this type of aircraft previously or (2a) he has attended an appropriate classroom training course and (2b) his instructor is one of the co-pilots. Oracle Rdb for the OpenVMS operating system already provides the SQL-92 CREATE ASSERTION specification ( http://community.hpe.com/hpeb/attachments/hpeb/itrc-149/22979/1/15667.doc ), so why not Oracle Database? Let’s put the “C” into “ACID.” We’ve waited 25 years but better late than never. Postscript The relational model shields both programmers and casual users from the physical representation of data. Should programmers be protected from having to know how the data is organized in the machine? Will they develop high-performance applications if they are ignorant of the physical representation?
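For readers who have never seen the feature, here is a minimal sketch of what SQL-92 CREATE ASSERTION syntax looks like. The table and column names are purely illustrative, and the statement is not accepted by Oracle Database today, which is precisely what the vote is about:

CREATE ASSERTION dept_headcount_ok
  CHECK (NOT EXISTS
          (SELECT deptno
             FROM emp
            GROUP BY deptno
           HAVING COUNT(*) > 100));

Unlike a column or table CHECK constraint, an assertion is a schema-level constraint, so its search condition can span several tables, which is what would let constraints such as the pilot example above be expressed declaratively rather than procedurally.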
Blog Post: Tnsnames.ora, IFILE and Network Drives on Windows
I’ve recently begun a new contract migrating a Solaris 9i database to Oracle 11gR2 on Windows, in the Azure cloud. I hate Windows with a vengeance and this hasn’t made me change my opinion! One of the planned improvements is to have everyone using a standard, central tnsnames.ora file for alias resolution. A good plan, and the company has incorporated my own tnsnames checker utility to ensure that any edits are valid and don’t break anything. I found that the tnsnames.ora in my local Oracle Client install was not working. Here’s what I had to do to fix it. In my local tnsnames.ora, I had something like the following: IFILE="\\servername\share_name\central_tnsnames\tnsnames.ora" (Server names etc have been obfuscated to protect the innocent!) However, using the above caused tnsping commands or connection attempts to time out or simply fail: tnsping barney TNS Ping Utility for 64-bit Windows: Version 11.2.0.1.0 - Production on 19-MAY-2016 12:16:42 ... TNS-03505: Failed to resolve name If the standard tnsnames.ora file was copied locally, and IFILE’d, then it all just worked as expected. The problem is simple: Oracle isn’t fond of IFILE-ing files from networked drives. So, to get around this, I needed to map a network drive instead, and use the drive specifier in my IFILE. First, map a persistent network drive to be my (new) Y: drive; it will be reconnected at logon until further notice. Note that this mapping uses my current credentials to make the connection. net use Y: \\servername\share_name /PERSISTENT:YES And in my tnsnames.ora, I now have this: IFILE="Y:\central_tnsnames\tnsnames.ora" And now, it all just works! C:\Users\ndunbar\Downloads>tnsping barney TNS Ping Utility for 64-bit Windows: Version 11.2.0.1.0 - Production on 19-MAY-2016 12:21:23 ... Used TNSNAMES adapter to resolve the alias Attempting to contact (DESCRIPTION= (ADDRESS= (PROTOCOL=TCP) (HOST=bedrock) (PORT=1521)) (CONNECT_DATA= (SERVER=dedicated) (SERVICE_NAME=barney))) OK (160 msec) HTH
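For completeness, the entry in the central tnsnames.ora that resolves barney would look roughly like the following. This is reconstructed from the tnsping output above, so treat it as illustrative rather than a copy of the real central file:

BARNEY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = bedrock)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = barney)
    )
  )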
Wiki Page: How to Create Dynamic PL/SQL Web Pages
There are many developers who use Oracle APEX to develop robust web applications. There also are others who don’t like to use tools, like Oracle APEX. Those developers like to build web applications by writing native PL/SQL stored procedures. This article shows them how to create dynamic web pages. It explains how you pass parameters as sets, ranges, and specific name-value pairs. The Configure Oracle’s XDB Server for Web Applications article shows you how to configure the XDB Server. It also shows you how to deploy a virtually static web page. The page is static except for the use of the USER built-in. The built-in is a variable that maps to the user name that owns an active connection to the Oracle database. This article shows you how to build three types of dynamic web pages: Setup grants and synonyms Using an array for a parameter range Using an array for a series of parameters Using a set of specific name-value parameters The example programs rely on a new web video store model, which you can download from https://github.com/maclochlainn/web-video-store . If you download the code, the create_web_video_store.sql script runs all the code to create and seed the data model. These stored procedure examples are complete working code, which you also can run after downloading them from github.com. You can read more about the XDB in the Oracle XML DB Developer’s Guide . The examples in this article use a stream of parameters, which are the equivalent to a URI (Uniform Resource Identifier). A URI is a more complete form of what many call a URL (Uniform Resource Locator). A URL is simply the hierarchical part of the address minus the user credentials, as shown below (as URI coverage on Wikipedia ): The URI contains the query and a fragment. The fragment contains the call parameters to the server-side program unit. Your program units are stored procedures in the student schema. Setup grants and synonyms You need to setup grants to the anonymous schema for any stored procedures in the student schema. You also need to setup synonyms that point to the student schema procedures. The examples use the virtual /db/ path. You can run the following script as the system user: SQL> GRANT EXECUTE ON student.html_table_range TO anonymous; SQL> CREATE SYNONYM anonymous.html_table_range FOR student.html_table_range; SQL> GRANT EXECUTE ON student.html_table_ids TO anonymous; SQL> CREATE SYNONYM anonymous.html_table_ids FOR student.html_table_ids; SQL> GRANT EXECUTE ON student.html_table_values TO anonymous; SQL> CREATE SYNONYM anonymous.html_table_values FOR student.html_table_values; You need to create the objects before you grant access to them. However, you can drop and replace objects without revoking the grant. Grants give permission to execute the procedures, but you need synonyms to avoid disclosing ownership of the procedures. Synonyms can exist without the referenced object. The synonym fails when the referenced procedure doesn’t exist. Using an array for a parameter range The first example takes two parameters. The parameters are name-value pairs. They use the same name with different values. “ids” is the name, and the values are 1006 and 1014. It is possible to have more than two name-value pairs in the list, as you will see in the Using an array for a series of parameter s section later. You call the stored program with the following URI: http://localhost:8080/db/html_table_range?ids=1006&ids=1014 The html_table_range is the name of the stored procedure. 
The two name-value pairs are sent to a single parameter that is a PL/SQL collection. You can find the IDENT_ARR collection in the owa_util package. The IDENT_ARR is a collection of 30-character variable length strings. You create the html_table_range procedure in the student schema. You grant the execute privilege on the procedure to the anonymous user-schema. Then, you create a synonym for the anonymous user, which hides the ownership of the procedure. The html_table_range procedure code is: SQL> CREATE OR REPLACE 2 PROCEDURE html_table_range 3 ( ids OWA_UTIL.IDENT_ARR ) IS 4 /* Declare css file. */ 5 css CLOB := ''; 14 15 /* Declare a range determined list of film items. */ 16 CURSOR get_items 17 ( start_id NUMBER 18 , end_id NUMBER ) IS 19 SELECT item_id AS item_id 20 , item_title 21 || CASE 22 WHEN item_subtitle IS NOT NULL THEN 23 ': '|| item_subtitle 24 END AS item_title 25 , release_date AS release_date 26 FROM item 27 WHERE item_id BETWEEN start_id AND end_id 28 ORDER BY item_id; 29 30 BEGIN 31 /* Open HTML page with the PL/SQL toolkit. */ 32 htp.print(' '); 33 htp.print(css); 34 htp.htmlopen; 35 htp.headopen; 36 htp.htitle('Element Range List'); 37 htp.headclose; 38 htp.bodyopen; 39 htp.line; 40 41 /* Build HTML table with the PL/SQL toolkit. */ 42 htp.tableopen; 43 htp.tablerowopen; 44 htp.tableheader( cvalue => '#' 45 , cattributes => 'class="c1"' ); 46 htp.tableheader( cvalue => 'Film Title' 47 , cattributes => 'class="c2"' ); 48 htp.tableheader( cvalue => 'Release Date' 49 , cattributes => 'class="c3"' ); 50 htp.tablerowclose; 51 52 /* Read the cursor values into the HTML table. */ 53 FOR i IN get_items(ids(1),ids(2)) LOOP 54 htp.tablerowopen; 55 htp.tabledata( cvalue => i.item_id 56 , cattributes => 'class="c1"'); 57 htp.tabledata( cvalue => i.item_title 58 , cattributes => 'class="c2"'); 59 htp.tabledata( cvalue => i.release_date 60 , cattributes => 'class="c3"'); 61 htp.tablerowclose; 62 END LOOP; 63 64 /* Close HTML table. */ 65 htp.tableclose; 66 67 /* Close HTML page. */ 68 htp.line; 69 htp.bodyclose; 70 htp.htmlclose; 71 END; 72 / This procedure embeds a cascading style sheet (CSS) inside a local variable on lines 5 through 13. A better solution stores the CSS in a table. The subsequent examples show you how to get and use a stored CSS file. The get_items cursor on lines 16 through 28 uses a BETWEEN operator to create a range between starting and ending surrogate key column values. Line 32 provides an HTML 5 Document Type Declaration (DTD). The DTD uses the htp package’s print procedure because there isn’t a named procedure to render the document type. That same trick works to include the CSS file on line 33. The next calls to the htp package procedures open a page, describe a header, and body. Inside the body elements, the procedure calls the get_items cursor to create rows in a table. The calls to the tableheader and tabledata procedures pass the text value and a CSS class attribute. There is a difference between the class values for the row headers, which is their width. The row data elements really could use the same CSS style because the columns inherit their width from the table headers. It displays the following form when the URI contains two parameters. At least, it does so when the first value is lower than the second values. This section has shown you how to develop and deploy a dynamic range web page with the PL/SQL Web Toolkit. You can test it with different combinations to see how it works. 
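For instance, assuming item IDs 1008 and 1012 exist in your copy of the sample data (the values here are hypothetical), another range could be requested with:

http://localhost:8080/db/html_table_range?ids=1008&ids=1012

Keep in mind that the procedure passes ids(1) and ids(2) straight into the BETWEEN clause of the cursor, so the first value in the URI should always be the lower bound.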
Using an array for a series of parameters The second example builds on the first one. It takes any number of name-value pairs. Like the range example, they use the same name with different values. “ids” is also the name for the name-value pairs in this example. The values are 1002, 1007, 1015 and 1021. You may provide them in any order inside the URI because the cursor leverages one of Oracle’s adapter patterns and an ORDER BY clause to put them in ascending sequential order. You call the stored program with the following URI: http://localhost:8080/db/html_table_ids?ids=1002&ids=1007&ids=1015&ids=1021 The html_table_ids is the name of the stored procedure for this example. The name-value pairs are sent to a single parameter that is a PL/SQL collection. You can find the VC_ARR collection in the owa_util package. The VC_ARR is a collection of 32,000-character variable length strings. The html_table_ids procedure code follows below. You should note that the CSS file is now stored in a table and read from the table at runtime. This creates an external dependency on data you previously put in the database (check the code on github.com for the insert statement). SQL> CREATE OR REPLACE 2 PROCEDURE html_table_ids 3 ( ids OWA_UTIL.VC_ARR ) IS 4 5 /* Declare a variable of the local ADT collection. */ 6 lv_list LIST_IDS := list_ids(); 7 8 /* Declare a local Cascading Style Sheet. */ 9 lv_css VARCHAR2(4000); 10 11 /* Declare a range determined list of film items. */ 12 CURSOR get_items 13 ( cv_ids LIST_IDS ) IS 14 SELECT item_id AS item_id 15 , item_title 16 || CASE 17 WHEN item_subtitle IS NOT NULL THEN 18 ': '|| item_subtitle 19 END AS item_title 20 , release_date AS release_date 21 FROM item 22 WHERE item_id IN (SELECT * 23 FROM TABLE(cv_ids)) 24 ORDER BY item_id; 25 26 BEGIN 27 /* Convert OWA_UTIL PL/SQL collection to SQL collection. */ 28 FOR i IN 1..ids.COUNT LOOP 29 lv_list.EXTEND; 30 lv_list(lv_list.COUNT) := ids(i); 31 END LOOP; 32 33 /* Assign the css to a local variable. */ 34 FOR i IN (SELECT css_text 35 FROM css 36 WHERE css_name = 'blue-gray') LOOP 37 lv_css := i.css_text; 38 END LOOP; 39 40 /* Open HTML page with the PL/SQL toolkit. */ 41 htp.print(' '); 42 htp.print(lv_css); 43 htp.htmlopen; 44 htp.headopen; 45 htp.htitle('Element Series List'); 46 htp.headclose; 47 htp.bodyopen; 48 htp.line; 49 50 /* Build HTML table with the PL/SQL toolkit. */ 51 htp.tableopen; 52 htp.tablerowopen; 53 htp.tableheader( cvalue => '#' 54 , cattributes => 'class="c1"' ); 55 htp.tableheader( cvalue => 'Film Title' 56 , cattributes => 'class="c2"' ); 57 htp.tableheader( cvalue => 'Release Date' 58 , cattributes => 'class="c3"' ); 59 htp.tablerowclose; 60 61 /* Read the cursor values into the HTML table. */ 62 FOR i IN get_items(lv_list) LOOP 63 htp.tablerowopen; 64 htp.tabledata( cvalue => i.item_id 65 , cattributes => 'class="c1"'); 66 htp.tabledata( cvalue => i.item_title 67 , cattributes => 'class="c2"'); 68 htp.tabledata( cvalue => i.release_date 69 , cattributes => 'class="c3"'); 70 htp.tablerowclose; 71 END LOOP; 72 73 /* Close HTML table. */ 74 htp.tableclose; 75 76 /* Close HTML page. */ 77 htp.line; 78 htp.bodyclose; 79 htp.htmlclose; 80 END; 81 / Much of the code follows the prior range example logic. The trick in the code is the use of the adapter pattern on lines 22 and 23. The TABLE function converts the results of the collection into a SQL result set, which acts like a subquery or list of values. 
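Incidentally, the procedure above depends on two supporting objects that the downloadable scripts create rather than this article: the LIST_IDS collection type used for the IN-list and the css table that stores the style sheet. Minimal definitions would look something like the following; the names come from the code, but the element type and column sizes are assumptions:

CREATE OR REPLACE TYPE list_ids AS TABLE OF NUMBER;
/

CREATE TABLE css
( css_name  VARCHAR2(30)
, css_text  VARCHAR2(4000));

-- Seed the style sheet the procedures look up by name (the actual CSS
-- text comes from the github.com scripts and is omitted here).
INSERT INTO css (css_name, css_text)
VALUES ('blue-gray', '<style> /* ... */ </style>');
COMMIT;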
You would re-write lines 22 and 23 like this if you were writing an ad hoc query: 22 WHERE item_id IN (1002,1007,1015,1021) The difference between the two approaches is that incoming array is dynamic and the ad hoc query is static. It display the following: Like the prior section, you can test this with different values to see how it works. This section has shown you how to develop and deploy a dynamic range web page with a series of non-sequential values. Using a set of name-value parameters The third and final example builds on the first two. It takes specific name-value pairs. Unlike the prior range or series examples, it uses different name for each of the name-value pairs. The names in the URI must match the parameter names of the html_table_values procedure. The name-value parameter values differ from the earlier examples too. You can submit them as case insensitive and partial strings. That’s because the internal cursor leverages Oracle’s REGEXP_LIKE and UPPER functions in the WHERE clause. You call the stored program with the following URI: http://localhost:8080/db/html_table_ids?film_title=star&film_rating=pg&film_media=blu-ray The html_table_values is the name of the stored procedure for this example. The name-value pairs all use variable length strings, which may be up to 32,768 bytes in length. The html_table_values procedure is: SQL> CREATE OR REPLACE 2 PROCEDURE html_table_values 3 ( film_title VARCHAR2 4 , film_rating VARCHAR2 5 , film_media VARCHAR2 ) IS 6 7 /* Declare a local CSS variable. */ 8 lv_css VARCHAR2(4000); 9 10 /* Declare a range determined list of film items. */ 11 CURSOR get_items 12 ( cv_title VARCHAR2 13 , cv_rating VARCHAR2 14 , cv_media VARCHAR2 ) IS 15 SELECT i.item_id AS item_id 16 , i.item_title 17 || CASE 18 WHEN i.item_subtitle IS NOT NULL THEN 19 ': '|| i.item_subtitle 20 END AS item_title 21 , ra.rating 22 , i.release_date AS release_date 23 FROM item i INNER JOIN rating_agency ra 24 ON i.item_rating_id = ra.rating_agency_id INNER JOIN common_lookup cl 25 ON i.item_type = cl.common_lookup_id 26 WHERE REGEXP_LIKE(UPPER(i.item_title),UPPER(cv_title)) 27 AND REGEXP_LIKE(UPPER(ra.rating),UPPER(cv_rating)) 28 AND REGEXP_LIKE(UPPER(cl.common_lookup_type),UPPER(cv_media)) 29 ORDER BY item_title; 30 31 BEGIN 32 33 /* Assign the css to a local variable. */ 34 FOR i IN (SELECT css_text 35 FROM css 36 WHERE css_name = 'blue-gray') LOOP 37 lv_css := i.css_text; 38 END LOOP; 39 40 /* Open HTML page with the PL/SQL toolkit. */ 41 htp.print(' '); 42 htp.print(lv_css); 43 htp.htmlopen; 44 htp.headopen; 45 htp.htitle('Element Value List'); 46 htp.headclose; 47 htp.bodyopen; 48 htp.line; 49 50 /* Open the HTML table. */ 51 htp.tableopen; 52 htp.tablerowopen; 53 htp.tableheader( cvalue => '#' 54 , cattributes => 'class="c1"' ); 55 htp.tableheader( cvalue => 'Film Title' 56 , cattributes => 'class="c2"' ); 57 htp.tableheader( cvalue => 'Release Date' 58 , cattributes => 'class="c3"' ); 59 htp.tablerowclose; 60 61 /* Read the cursor values into the HTML table. */ 62 FOR i IN get_items(film_title, film_rating, film_media) LOOP 63 htp.tablerowopen; 64 htp.tabledata( cvalue => i.item_id 65 , cattributes => 'class="c1"'); 66 htp.tabledata( cvalue => i.item_title 67 , cattributes => 'class="c2"'); 68 htp.tabledata( cvalue => i.release_date 69 , cattributes => 'class="c3"'); 70 htp.tablerowclose; 71 END LOOP; 72 73 /* Close HTML table. */ 74 htp.tableclose; 75 76 /* Close HTML page. 
*/ 77 htp.line; 78 htp.bodyclose; 79 htp.htmlclose; 80 END; 81 / Lines 26 through 28 hold parameter comparisons that use the REGEXP_LIKE function to validate partial strings and the UPPER function to make the comparisons case insensitive. This technique allows for partial lookup strings. Other than the joins and nuances of the cursor, the code is more or less the same as the prior examples. It does show you how to leverage elements from different tables to discover a set of unique rows in the ITEM table. More importantly, it shows you how to use specific names in the name-value pairs. It displays the following: This section has shown you how to use specific names in name-value pairs and how to leverage information from a set of tables to find a set of rows in one table. You can find all the code at this URL. As always, I hope this helps you learn another facet of writing PL/SQL.
Wiki Page: SharePlex: All about understanding and fixing the LOG_WRAP issue
Introduction In today's article, we are going to discuss one of the important topics related to SharePlex replication for Oracle databases. This article assumes that you already have knowledge about SharePlex replication architecture. If you are familiar with SharePlex and work with it on a daily basis, I am sure that at some point you have encountered the "LOG_WRAP" issue (you may already recall the message "LOG_WRAP detected" from the SharePlex event_log). In case you are not familiar with SharePlex replication architecture, you can refer to the following articles, which discuss SharePlex in detail. Shareplex for Oracle Data Replication: Part 1 (Architecture) Shareplex for Oracle Data Replication: Part 2 (Installation) Shareplex for Oracle Data Replication: Part 3 (Configuration) Effectively Monitor Shareplex Data Replication Shareplex Replication: How to detect and How Shareplex detects Out Of Sync data Shareplex Replication: How to fix and How Shareplex fixes Out Of Sync data Today's article is focused on tackling the "LOG_WRAP" issue in a SharePlex replication setup. Before we proceed with the subject at hand, let's first try to understand what a LOG_WRAP situation is all about. This article will cover the following topics. What is a LOG_WRAP and why it happens How to resolve a LOG_WRAP issue Best practices to avoid LOG_WRAP issue Let's go through these topics one by one. What is a LOG_WRAP and why it happens In a SharePlex replication setup, the Capture process mines the online redo logs or archive logs from the source Oracle database to fetch the committed transactions. The Capture process first attempts to mine the online redo logs, and if the redo logs are no longer available (already archived) it looks for the archive logs to mine the redo data. A LOG_WRAP is a situation where the SharePlex Capture process is not able to mine or parse the required redo data. A LOG_WRAP can occur due to the online redo logs being overwritten (when the source database is in NOARCHIVELOG mode) before they are processed by SharePlex, due to the archive logs being deleted before being processed by SharePlex, or due to corruption of the archive log files. In a LOG_WRAP situation, the Capture process keeps trying to read the redo data, restarting itself after each failed parse. When a LOG_WRAP situation occurs, you will notice errors in the SharePlex event_log file similar to the following. Warning 2016-05-21 19:28:31.854158 5498 1104476480 Capture: A portion of the redo log could not be parsed (capturing from cdb1_pdb_1) [module oct] Notice 2016-05-21 19:28:31.854350 5498 1104476480 Capture: Begin record skip at seqno=367 offset=43997200 rc=6 (capturing from cdb1_pdb_1) [module oct] Error 2016-05-21 19:28:31.854469 5498 1104476480 Capture: LOG_WRAP detected 367 43997200 0, ...exiting...(../src/olog/olog.c:1990) (capturing from cdb1_pdb_1) [module oct] Error 2016-05-21 19:28:31.854549 5498 1104476480 Capture stopped: Internal error encountered; cannot continue (capturing from cdb1_pdb_1) Info 2016-05-21 19:28:32.861913 5072 3859883856 Capture exited with code=1, pid = 5498 (capturing from cdb1_pdb_1) Demonstration Let's quickly go through a demonstration to understand how a LOG_WRAP occurs in SharePlex replication. In the following example, I am simulating the LOG_WRAP issue between source database ( cdb1_pdb_1 ) and target database ( cdb2_pdb_1 ), which are running on servers labserver1 and labserver2 respectively.
##--- ##--- SharePlex source system ---## ##--- [oracle@labserver1 ~]$ sp_ctrl ******************************************************* * SharePlex for Oracle Command Utility * Copyright 2014 Dell, Inc. * ALL RIGHTS RESERVED. * Protected by U.S. Patents: 7,461,103 and 7,065,538 ******************************************************* sp_ctrl (labserver1:2015) > show Process Source Target State PID ---------- ------------------------------------ ---------------------- -------------------- ------ Capture o.cdb1_pdb_1 Running 5073 Export labserver1 labserver2 Running 5158 Read o.cdb1_pdb_1 Running 5075 Import labserver2 labserver1 Running 5153 Post o.cdb2_pdb_1-exp_queue_generic o.cdb1_pdb_1 Running 5074 ##--- ##--- SharePlex target system ---## ##--- [oracle@labserver2 ~]$ sp_ctrl ******************************************************* * SharePlex for Oracle Command Utility * Copyright 2014 Dell, Inc. * ALL RIGHTS RESERVED. * Protected by U.S. Patents: 7,461,103 and 7,065,538 ******************************************************* sp_ctrl (labserver2:2015) > show Process Source Target State PID ---------- ------------------------------------ ---------------------- -------------------- ------ Capture o.cdb2_pdb_1 Running 5212 Read o.cdb2_pdb_1 Running 5215 Export labserver2 labserver1 Running 5214 Import labserver1 labserver2 Running 5290 Post o.cdb1_pdb_1-exp_queue_generic o.cdb2_pdb_1 Running 5213 I have the following tables participating in replication between source (cdb1_pdb_1) and target (cdb2_pdb_1) databases. ##--- ##--- tables participating in replication ---## ##--- sp_ctrl (labserver1:2015)> show config Tables Replicating with Key: "MYAPP"."T_MYAPP_USERS" KEY: USER_ID "MYAPP"."T_MYAPP_PRODUCTS" KEY: PROD_ID "MYAPP"."T_MYAPP_ORDERS" KEY: ORDER_ID File Name :cdb1_pdb1_sp.conf Datasource :cdb1_pdb_1 Activated :21-May-16 14:23:04 Actid :4 Total Objects :3 Total Objects Replicating :3 Total Objects Not Replicating :0 View config summary in /app/shareplex/var/cdb1_2015/log/cdb1_pdb_1_config_log Now, to simulate an artificial LOG_WRAP situation, I am going to keep the source database in NOARCHIVELOG mode and will throw a moderately heavy load on the source system (to cause huge redo generation and frequent log switches). Further, I will manually stop the Capture process on source system to ensure that it is not processing the redo stream and it lags behind in processing the generated redo stream. Since the database is in NOARCHIVELOG mode the redo logs would be overwritten once these are filled up. After a brief stop, when I start the Capture process, it will try to resume the data mining from the point (log sequence offset) where it stopped. However, since the redo logs are already overwritten and are not archived (due to NOARCHIVELOG mode), it will go into the LOG_WRAP situation as simulated in the following example. I am simulating a moderately heavy data load for source table T_MYAPP_PRODUCTS (participating in replication) ---// ---// initiate a heavy data load on source database //-- ---// SQL> show con_name CON_NAME ------------------------------ CDB1_PDB_1 SQL> show user USER is "MYAPP" SQL> begin 2 for i in 1..99999999 3 loop 4 insert into T_MYAPP_PRODUCTS values (i,'XXXXX'||i,'2000','100'); 5 end loop; 6 end; 7 / ---- Leave it to run for completion While the load is in progress, let's stop the source Capture process to ensure it doesn't mine the redo stream and in turn lags behind in processing the generated redo data. 
##--- ##--- stop Capture to prevent mining redo stream ---## ##--- sp_ctrl (labserver1:2015)> stop capture sp_ctrl (labserver1:2015)> show Process Source Target State PID ---------- ------------------------------------ ---------------------- -------------------- ------ Capture o.cdb1_pdb_1 Stopped by user Export labserver1 labserver2 Running 5158 Read o.cdb1_pdb_1 Running 5075 Import labserver2 labserver1 Running 5153 Post o.cdb2_pdb_1-exp_queue_generic o.cdb1_pdb_1 Running 5074 We have stopped the Capture process in source system to prevent it from mining the redo stream. After a brief period (few log switches), let's start the Capture process in source system and observe it's behaviour. ##--- ##--- start Capture after few log switches ---## ##--- sp_ctrl (labserver1:2015)> start capture ##--- ##--- Capture has stopped due to errors ---## ##--- sp_ctrl (labserver1:2015)> show Process Source Target State PID ---------- ------------------------------------ ---------------------- -------------------- ------ Capture o.cdb1_pdb_1 Stopped - due to error Export labserver1 labserver2 Running 5158 Read o.cdb1_pdb_1 Running 5075 Import labserver2 labserver1 Running 5153 Post o.cdb2_pdb_1-exp_queue_generic o.cdb1_pdb_1 Running 5074 As we can see, Capture is not able to start up due to some errors. Let's look in to the event_log file and find out as to why the Capture process erred out. ##--- ##--- Error details from event_log file ---## ##--- Warning 2016-05-21 19:28:31.854158 5498 1104476480 Capture: A portion of the redo log could not be parsed (capturing from cdb1_pdb_1) [module oct] Notice 2016-05-21 19:28:31.854350 5498 1104476480 Capture: Begin record skip at seqno=367 offset=43997200 rc=6 (capturing from cdb1_pdb_1) [module oct] Error 2016-05-21 19:28:31.854469 5498 1104476480 Capture: LOG_WRAP detected 367 43997200 0, ...exiting...(../src/olog/olog.c:1990) (capturing from cdb1_pdb_1) [module oct] Error 2016-05-21 19:28:31.854549 5498 1104476480 Capture stopped: Internal error encountered; cannot continue (capturing from cdb1_pdb_1) Info 2016-05-21 19:28:32.861913 5072 3859883856 Capture exited with code=1, pid = 5498 (capturing from cdb1_pdb_1) Here it is. A "LOG_WRAP" situation has occurred in the source SharePlex system. Capture process is trying to mine the online redo log with sequence# 367 at offset 43997200 and is not able to parse the redo logs. This is because online sequence# 367 is no longer available as it was overwritten by the incoming redo streams (from the data load). Further, since the source database is in NOARCHIVELOG mode, the online logs were not archived at the event of log switch. Due to this fact SharePlex is not able to parse the redo data for log sequence# 367 and is complaining about a LOG_WRAP situation. If the logs were archived, SharePlex could have mine the redo data from the archived logs (based on the availability of the archive log in the archive destination) We can also query the internal SharePlex table SHAREPLEX_ACTID in the source database to check, from which redo log sequence# it is trying to mine the redo data and whether the log sequence is available or not. 
---// ---// query shareplex_actid to check which log sequence# is under parse //--- ---// SQL> show con_name CON_NAME ------------------------------ CDB1_PDB_1 SQL> select ACTID,SEQNO,OFFSET,LOG_START_OFFSET,INSTANCE_NAME,HOST_NAME,CDB_CON_ID from splex.shareplex_actid; ACTID SEQNO OFFSET LOG_START_OFFSET INSTANCE_NAME HOST_NAME CDB_CON_ID ---------- ---------- ---------- ---------------- ---------------- ------------------------------ ---------- 4 367 43997200 43997200 orpcdb1 labserver1.oraclebuffer.com 3 ----// ----// log sequence# is not available (overwritten and not archived) //--- ----// SQL> select thread#,sequence#,archived from v$log where sequence#=367; no rows selected SQL> select thread#,sequence#,archived from v$archived_log where sequence#=367; no rows selected At this point, we understood the concept behind the LOG_WRAP situation and went through a simple demonstration to understand when a LOG_WRAP situation can occur. In brief, we can say LOG_WRAP is a situation in SharePlex replication where SharePlex is not able to parse Oracle redo stream, which can be caused either due to redo logs being overwritten (not archived when source database is in NOARCHIVELOG mode) before being processed by SharePlex and or due to archive logs being deleted before it is processed by SharePlex. In the next section of this article, we will discuss about the different methods that can be used to resolve a LOG_WRAP situation in SharePlex replication. How to resolve a LOG_WRAP issue There are different methods available to address the LOG_WRAP situation depending on what caused the issue as explained below. If the source database is in ARCHIVELOG mode and the LOG_WRAP is caused due to the unavailability of redo log file (overwritten by log switches) or due to the unavailability of archive log file (backed up and deleted), then we can resolve the LOG_WRAP by simply restoring (using RMAN or manual method) all the archive log files to the archive destination, which are not yet processed by SharePlex. We can query the SHAREPLEX_ACTID table (as shown earlier) in source database to find out which sequence# SharePlex is currently processing and restore all the archive log files starting from that sequence. This is the simplest method of resolving LOG_WRAP issue and doesn't require any additional steps. When the unprocessed archive log files are restored to the archive destination, SharePlex will automatically start processing the log files from the point (offset) where it had stopped. The only prerequisite for this method to work is that, we have valid archive log backup available for restoration. In the event of LOG_WRAP situation, if the source database is in NOARCHIVELOG mode or if the archive log backup is not available; we need to instruct SharePlex to skip through all the log sequences, which are overwritten (by log switches) or for which the archive log is not available (can't be restored due to unavailability of backup). We can do this by updating the internal SharePlex table SHAREPLEX_ACTID . In SHAREPLEX_ACTID table, we have a SEQNO field which indicates the log sequence# currently being processed by SharePlex. We can set this SEQNO field to the next available log sequence#, which will resume the SharePlex capture process and capture will start mining the redo stream from the available sequence#. However, this method of addressing a LOG_WRAP issue, will leave the source and target system in Out Of Sync state as SharePlex could not process the missing redo logs in source system. 
We must repair the data between the source and target systems using the repair command if we decide to skip through the missing log sequences. Demonstration Let's continue our demonstration from the previous section. As part of our demonstration, the source system cdb1_pdb_1 on server labserver1 went into a LOG_WRAP situation due to the redo logs being overwritten and the logs not being archived because of the NOARCHIVELOG mode of the source database. In this situation, there is no way that we can restore the overwritten logs, as they were not archived. The only option that we have is to instruct SharePlex to skip through the missing sequences. As we have seen earlier, SharePlex is trying to process the redo log with sequence# 367. ---// ---// query shareplex_actid to check which log sequence# is under parse //--- ---// SQL> show con_name CON_NAME ------------------------------ CDB1_PDB_1 SQL> select ACTID,SEQNO,OFFSET,LOG_START_OFFSET,INSTANCE_NAME,HOST_NAME,CDB_CON_ID from splex.shareplex_actid; ACTID SEQNO OFFSET LOG_START_OFFSET INSTANCE_NAME HOST_NAME CDB_CON_ID ---------- ---------- ---------- ---------------- ---------------- ------------------------------ ---------- 4 367 43997200 43997200 orpcdb1 labserver1.oraclebuffer.com 3 Since the log has been overwritten, we need to skip the missing sequences. Let's find out at which sequence# we can point SharePlex to start processing the redo data. ---// ---// find oldest available log sequence# //--- ---// SQL> show con_name CON_NAME ------------------------------ CDB1_PDB_1 SQL> select thread#,min(sequence#) from v$log group by thread#; THREAD# MIN(SEQUENCE#) ---------- -------------- 1 745 As we can see, the oldest log sequence available in the database is sequence# 745. This means all the log files starting from sequence# 367 to sequence# 744 are overwritten by log switches (and are not archived). We need to skip through all these missing sequences and point SharePlex to start processing redo data from sequence# 745 (at offset 0) by updating the internal SharePlex table SHAREPLEX_ACTID in the source database as shown below. ---// ---// point SharePlex to oldest available sequence# //--- ---// SQL> show con_name CON_NAME ------------------------------ CDB1_PDB_1 SQL> update splex.shareplex_actid set SEQNO=745 , OFFSET=0 , LOG_START_OFFSET=0 where ACTID=4; 1 row updated. SQL> commit; Commit complete. We have now instructed SharePlex to start processing the redo stream from the oldest available sequence# (745). We can now restart the failed Capture process in the source system to resume redo streaming as shown below. ##--- ##--- resume capture after skipping through missing log sequences ---## ##--- sp_ctrl (labserver1:2015)> start capture sp_ctrl (labserver1:2015)> show Process Source Target State PID ---------- ------------------------------------ ---------------------- -------------------- ------ Capture o.cdb1_pdb_1 Running 5973 Export labserver1 labserver2 Running 5158 Read o.cdb1_pdb_1 Running 5075 Import labserver2 labserver1 Running 5153 Post o.cdb2_pdb_1-exp_queue_generic o.cdb1_pdb_1 Running 5074 The Capture process is now resumed. We can validate whether Capture is processing the current redo stream using the 'show capture detail' command from the SharePlex command line utility, as shown below.
##--- ##--- validate capture processing ---## ##--- sp_ctrl (labserver1:2015)> show capture detail Host: labserver1.oraclebuffer.com Operations Source Status Captured Since ---------- --------------- ---------- ------------------ o.cdb1_pdb Running 472455 21-May-16 19:53:40 Oracle current redo log : 747 Capture current redo log : 747 Capture log offset : 72848400 Last redo record processed: Operation on SHAREPLEX internal table at 05/21/16 19:53:40 Capture state : Processing Activation id : 4 Error count : 0 Operations captured : 472455 Transactions captured : 0 Concurrent sessions : 1 HWM concurrent sessions : 2 Checkpoints performed : 11 Total operations processed : 472459 Total transactions completed : 5 Total Kbytes read : 268912 Redo records in progress : 1 Redo records processed : 952719 Redo records ignored : 480259 Redo records - last HRID : AAAVLTAAbAAATMyADR SharePlex is now processing the latest redo stream, as indicated by the output lines "Oracle current redo log : 747" and "Capture current redo log : 747". The LOG_WRAP issue is now fixed. However, we have skipped through all redo logs starting from sequence# 367 to sequence# 744, so we must run the repair command to sync the data between the source and target systems. Note: Although we have resolved the LOG_WRAP issue and can sync the data by running the repair command, we are likely to hit the same issue again in this environment because our source database is in NOARCHIVELOG mode. We have explored the options for resolving a LOG_WRAP issue in SharePlex replication. Now, let's quickly go through a list of best practices that can be followed to avoid a LOG_WRAP situation. Best practices to avoid LOG_WRAP issue LOG_WRAP can lead to data inconsistency between the source and target systems, and hence it should be avoided in order to keep them in a consistent state. Following are the best practices that can be followed to avoid a LOG_WRAP situation in SharePlex replication. It is recommended to always keep the source database in ARCHIVELOG mode. This will ensure that the Oracle redo logs are always archived during log switches. This way, when SharePlex lags behind in processing the redo stream, it can still read it from the archive logs, which prevents a LOG_WRAP situation. While deleting archive logs (for space management), ensure that we are not deleting archive logs that are not yet processed by SharePlex. It would be a good idea to enforce an additional check in the archive deletion script to query the internal SharePlex table SHAREPLEX_ACTID and find out the current log sequence# being processed by SharePlex; we can then delete all the archive logs up to the sequence# found in the SHAREPLEX_ACTID table (a minimal sketch of such a check appears after the conclusion of this article). If you plan to compress the archive logs, do not compress them until they have been processed by SharePlex, because SharePlex can't read from compressed archive logs. Again, while compressing archive logs, you can query the SHAREPLEX_ACTID table to find out the current log sequence# being processed by SharePlex and compress the archive logs up to that sequence. While reading archive logs, SharePlex looks in Oracle's default archive location. If the archive logs are kept in a non-default location, we must point SharePlex to read from that location.
We can set the SharePlex parameter SP_OCT_ARCH_LOC to make it read archive logs from a non-default archive location. Conclusion In this article, we have explored the LOG_WRAP situation in SharePlex replication in detail and discussed the different methods that can be followed to resolve a LOG_WRAP issue. We have also discussed the best practices that can help in preventing a LOG_WRAP situation in SharePlex replication.
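As a closing illustration of the archive-deletion check recommended in the best practices above, a minimal sketch follows. The SEQNO lookup uses the same SPLEX.SHAREPLEX_ACTID table queried in the demonstration; the RMAN command and the idea of wiring the value into a deletion job are assumptions for illustration, so verify the syntax against your own environment before using it:

-- Step 1: find the oldest redo log sequence# SharePlex still needs.
SELECT MIN(seqno) AS oldest_seq_needed
  FROM splex.shareplex_actid;

-- Step 2: have the deletion job remove only archive logs with a lower
-- sequence#, substituting the value returned above, for example:
-- RMAN> DELETE NOPROMPT ARCHIVELOG SEQUENCE BETWEEN 1 AND <oldest_seq_needed - 1> THREAD 1;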
Blog Post: Ensuring Data Protection Using Oracle Flashback Features - Part 5
Introduction In the previous article we reviewed the Oracle 11g and 12c Flashback new features and enhancements. In this part (the last one in this article series) we will review Oracle Flashback licensing. In addition, we will summarize everything and see which Oracle Flashback feature should be used in various human error use cases. Oracle Flashback - Licensing Most of the powerful Flashback features require having a license for the Enterprise Edition. This includes the following features: Flashback Table Flashback Drop Flashback Transaction Query Flashback Transaction Flashback Database Flashback Query and Flashback Version Query are available in all Oracle Editions. As for Flashback Data Archive, starting from 11.2.0.4 it is supported in all Oracle Editions, but for versions earlier than 11.2.0.4 it requires a license for the Enterprise Edition plus the Oracle Advanced Compression option (an extra-cost option). Figure 4: Oracle Flashback Features Licensing Summary Oracle Flashback technology provides a set of tools which can be used by the DBA in order to recover from various human errors that cause undesired data loss or data modifications. The following table summarizes this article by explaining which Oracle Flashback feature is most suitable for each scenario: Figure 5: Various Use Cases for using Oracle Flashback Features The great thing about Oracle Flashback technology is that it allows recovering from various human errors with minimal effort and time, which results in a reduced RTO (Recovery Time Objective). Having said that, Oracle Flashback technology should never be used as a replacement for traditional backups, but rather as a complementary solution that provides a very effective way to recover from various human errors without restoring and recovering data from backups (either RMAN or user-managed backups). For example, Oracle Flashback technology will not protect against the following scenarios: Loss of a data file or control file Block corruptions (either physical or logical) Site disaster This article series reviewed Oracle Flashback technology from Oracle version 9i, where the first flashback feature, named “Flashback Query”, was introduced, up to the most recent Oracle Database version (12.1.0.2). Oracle Flashback technology can definitely empower DBAs by making their lives easier when it comes to protecting the data from various human errors.
Wiki Page: Striping Tables with DBMS_APPLICATION_INFO
Oracle supports Virtual Private Databases (VPDs) by combining session-level metadata from connections and security policies. You have two options when you configure a VPD. One option lets you build the VPD on a schema and the other option lets you build the VPD based on striped views. This article will show you how to stripe a view and manage access to rows and row sets. A striped view may be a subset of a single table or a subset of a set of joined tables. You stripe a table by including one or more columns that act as a filter. For example, you can filter rows on a group identifier, a cost center, or a reporting currency. A striped view lets multiple users see only certain rows or sets of rows in the view, which are like slices of a pie. Striped View The article shows you how to: Set and read the client_info column Stripe a view based on the client_info column Leverage the client_info column value for authentication You typically set up VPDs with the security_admin and dbms_rls packages. It’s also possible to mimic VPD behavior by striping views and setting up Oracle’s metadata. Oracle manages its connection metadata in the v$session view. While there are several columns that you can set, this article focuses on only the client_info column of the v$session view. You can set metadata when you authenticate against an Access Control List (ACL). In December 2015, I wrote an article about how you can write and deploy a PL/SQL authentication function . This article uses the authorize function from that article. The dbms_application_info package lets you set and read the client_info column’s value. The client_info column is 64 characters in length, and you can set multiple values inside the string. The Oracle E-Business Suite actually stores values for organization, currency, and language in the client_info column for each connection. They use the fnd_profile package to set and read the client_info and other information specific to the Oracle E-Business Suite. Set and Read client_info Column You set the client_info value with the dbms_application_info package’s set_client_info procedure. After setting the value, you can read the client_info value with the read_client_info procedure. The following set_login function sets a user_id and user_group_id value in the client_info column of the v$session view: SQL> CREATE OR REPLACE FUNCTION set_login 2 ( pv_login_name VARCHAR2 ) RETURN VARCHAR2 IS 3 4 /* Declare a success flag to false. */ 5 lv_success_flag NUMBER := 0; 6 7 /* Declare a common name for a return variable. */ 8 client_info VARCHAR2(64) := NULL; 9 10 /* Declare variables to hold cursor return values. */ 11 lv_login_id NUMBER; 12 lv_group_id NUMBER; 13 14 /* Declare a cursor to return an authorized user id. */ 15 CURSOR authorize_cursor 16 ( cv_login_name VARCHAR2 ) IS 17 SELECT a.user_id 18 , a.user_group_id 19 FROM application_user a 20 WHERE a.user_name = cv_login_name; 21 22 BEGIN 23 24 /* Check for not null login name. */ 25 IF pv_login_name IS NOT NULL THEN 26 /* Open, fetch, and close cursor. */ 27 OPEN authorize_cursor(pv_login_name); 28 FETCH authorize_cursor INTO lv_login_id, lv_group_id; 29 CLOSE authorize_cursor; 30 31 /* Set the CLIENT_INFO flag. */ 32 dbms_application_info.set_client_info( 33 LPAD(lv_login_id,5,' ') 34 || LPAD(lv_group_id,5,' ')); 35 dbms_application_info.read_client_info(client_info); 36 37 /* Set success flag to true. */ 38 IF client_info IS NOT NULL THEN 39 lv_success_flag := 1; 40 END IF; 41 END IF; 42 43 /* Return the success flag. 
*/ 44 RETURN lv_success_flag; 45 END; 46 / The authorize_cursor cursor on lines 15 through 20 queries the application_user table for a valid user’s user_id and user_group_id values. The call to the set_client_info procedure breaks across lines 32 through 34 to ensure the display doesn’t wrap. The login_id value is a number and the LPAD converts it to a five-character string by left padding whitespace. The LPAD function also converts the group_id value to a five-character string. This example allows only five digit values for both the login_id and group_id columns, but a real model should probably allocate space for up to ten digits. Line 35 calls the read_client_info procedure, which is a call-by-reference procedure. The read_client_info returns the client_info variable into the local scope of the function. The IF -block verifies the client_info column value is not null and assigns a one to the success_flag variable. The function returns 1 when the client_info column is set and 0 when it isn’t set. You can call the set_login function like this: SQL> SELECT set_login('potterhj') AS success FROM dual; It should return 1 as successful. You can then query the values of the client_info column, like this: SQL> SELECT SYS_CONTEXT('userenv','client_info') AS client_info 2 FROM dual; The sys_context function returns the entire client_info string, like: CLIENT_INFO -------------- 1 2 You also can implement an anonymous PL/SQL block, like: SQL> SET SERVEROUTPUT ON SIZE UNLIMITED SQL> DECLARE 2 /* Declare local variable. */ 3 client_info VARCHAR2(64); 4 BEGIN 5 /* Read client info. */ 6 dbms_application_info.read_client_info(client_info); 7 8 /* Print output. */ 9 dbms_output.put_line( 10 'User ID [ '|| SUBSTR(client_info,1,5) ||']'); 11 dbms_output.put_line( 12 'Group ID ['|| SUBSTR(client_info,6,5) ||']'); 13 END; 14 / Line 6 reads the client_info value, and lines 10 and 12 parse the text into two values. It prints: User ID [ 1] Group ID [ 2] This part of the article taught you how to set and read the client_info column value from the v$session view. Stripe a View based on the client_info Column You can stripe the application_user table because we have two columns that let us filter the data. They are the user_id and user_group_id columns, as you can see below: SQL> desc application_user Name Null? Type ----------------- -------- -------------- USER_ID NOT NULL NUMBER USER_NAME NOT NULL VARCHAR2(20) USER_PASSWORD NOT NULL VARCHAR2(40) USER_ROLE NOT NULL VARCHAR2(20) USER_GROUP_ID NOT NULL NUMBER USER_TYPE NOT NULL NUMBER START_DATE NOT NULL DATE END_DATE DATE FIRST_NAME NOT NULL VARCHAR2(20) MIDDLE_NAME VARCHAR2(20) LAST_NAME NOT NULL VARCHAR2(20) CREATED_BY NOT NULL NUMBER CREATION_DATE NOT NULL DATE LAST_UPDATED_BY NOT NULL NUMBER LAST_UPDATE_DATE NOT NULL DATE There are many ways to stripe tables because there are many types of use cases. The use case for this example is to grant three types of privilege. They are: the privilege to view all users in the authorized_user table, the privilege to view all users in the same group, and the privilege to view only your user record. While hardcoding values for a simple use case is tempting, it’s always a bad choice. Creating an authorized_group table appears as the best way to keep the design flexible to add new groups and simple for use. 
The table is defined as follows: SQL> CREATE TABLE authorized_group 2 ( authorized_group_id NUMBER 3 , authorized_group_type VARCHAR2(20) 4 , authorized_group_desc VARCHAR2(20)); There is an inherent sequencing dependency for the data in the authorized_group table. The dependency means the administrator privilege is the first row, the individual privilege is the second row, and all groups are rows three and above. This approach recognizes that the use case only has one administrator and one individual privilege. It also recognizes that you need to define an indefinite number of new groups. The following creates the striped authorized_user view: SQL> CREATE OR REPLACE VIEW authorized_user AS 2 SELECT au.user_id 3 , au.user_name 4 , au.user_role 5 , au.last_name || ', ' || au.first_name || ' ' 6 || NVL(au.middle_name,'') AS full_name 7 FROM application_user au CROSS JOIN 8 (SELECT 9 TO_NUMBER( 10 SUBSTR( 11 SYS_CONTEXT('USERENV','CLIENT_INFO'),1,5)) AS login_id 12 , TO_NUMBER( 13 SUBSTR( 14 SYS_CONTEXT('USERENV','CLIENT_INFO'),6,5)) AS group_id 15 FROM dual) fq INNER JOIN authorized_group ag 16 ON (fq.group_id = ag.authorized_group_id 17 AND ag.authorized_group_type = 'ADMINISTRATOR') 18 OR (au.user_group_id = ag.authorized_group_id 19 AND ag.authorized_group_type = 'INDIVIDUAL' 20 AND au.user_group_id = fq.group_id 21 AND au.user_id = fq.login_id) 22 OR (fq.group_id > ag.authorized_group_id 23 AND ag.authorized_group_type = 'INDIVIDUAL' 24 AND au.user_group_id = fq.group_id) ; Lines 8 through 15 define a runtime view that queries the session metadata from the client_info column of the v$session view. A cross join lets you add these two session metadata elements as columns to each row in the application_user table because they’re the SELECT -list of a runtime view with only one row. There are three alternative joins in this view, which is clearly an advanced SQL solution. The first join condition on lines 16 and 17 resolves for users that are administrators. The second join condition on lines 18 through 21 resolves for those individuals that have restricted access to their own account. The third join condition on lines 22 through 24 resolves for those individuals with group access privileges. All of the conditional joins use the authorized_group_type to identify the authorized_group_id column, as you can see on lines 17, 19, and 23. You copy the authorized_group_id column into the user_group_id column when inserting a new user into the application_user table or changing an existing user from one privilege to another. 
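The article does not show the seed rows, but based on the sequencing dependency described above and the test data displayed later, the authorized_group table would be populated along these lines (the descriptions are illustrative):

INSERT INTO authorized_group VALUES (1, 'ADMINISTRATOR', 'All users');
INSERT INTO authorized_group VALUES (2, 'INDIVIDUAL', 'Own record only');
INSERT INTO authorized_group VALUES (3, 'GROUP', 'Own group');
COMMIT;

Any additional groups you add later simply take identifiers above 2, which is what the third join condition in the view relies on.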
You can also rewrite the authorized_user view into a more modern syntax with the WITH clause on lines 2 through 10, like this: SQL> CREATE OR REPLACE VIEW authorized_user AS 2 WITH fq AS 3 (SELECT 4 TO_NUMBER( 5 SUBSTR( 6 SYS_CONTEXT('USERENV','CLIENT_INFO'),1,5)) AS login_id 7 , TO_NUMBER( 8 SUBSTR( 9 SYS_CONTEXT('USERENV','CLIENT_INFO'),6,5)) AS group_id 10 FROM dual) 11 SELECT au.user_id 12 , au.user_name 13 , au.user_role 14 , au.last_name || ', ' || au.first_name || ' ' 15 || NVL(au.middle_name,'') AS full_name 16 FROM application_user au CROSS JOIN fq INNER JOIN 17 authorized_group ag 18 ON (fq.group_id = ag.authorized_group_id 19 AND ag.authorized_group_type = 'ADMINISTRATOR') 20 OR (au.user_group_id = ag.authorized_group_id 21 AND ag.authorized_group_type = 'INDIVIDUAL' 22 AND au.user_group_id = fq.group_id 23 AND au.user_id = fq.login_id) 24 OR (fq.group_id > ag.authorized_group_id 25 AND ag.authorized_group_type = 'INDIVIDUAL' 26 AND au.user_group_id = fq.group_id); Every use case requires a test case to confirm the value of the solution. The test case for this solution requires that you examine the unfiltered test data in the application_user table. The sample data contains five users, which you can find with the following query: SQL> COLUMN user_name FORMAT A14 SQL> COLUMN authorized_group_id FORMAT A21 SQL> SELECT au.user_name 2 , au.user_id 3 , au.user_group_id 4 , ag.authorized_group_type 5 FROM application_user au INNER JOIN authorized_group ag 6 ON au.user_group_id = ag.authorized_group_id; It displays the following data: USER_NAME USER_ID USER_GROUP_ID AUTHORIZED_GROUP_TYPE -------------- ---------- ------------- --------------------- potterhj 1 1 ADMINISTRATOR weasilyr 2 2 INDIVIDUAL longbottomn 3 2 INDIVIDUAL holmess 4 3 GROUP watsonj 5 3 GROUP Now, you can test the striped view with the following three test cases. The first examines whether the user holds the administrator’s responsibility. The second examines whether the user holds the individual’s responsibility. The third examines whether the user holds the group’s responsibility. This test the administrator’s privilege and returns all users: SQL> SELECT set_login('potterhj') AS output FROM dual; This test the individual’s privilege and returns only those user with the individual’s privilege: SQL> SELECT set_login('longtbottomn') AS output FROM dual; This test the group’s privilege and returns all user in the same group: SQL> SELECT set_login('holmess') AS output FROM dual; The following query tests the results from the authorized_user table: SQL> SELECT au.user_name 2 , au.user_id 3 , au.user_role 4 FROM authorized_user au; It returns the following from the sample data: USER_NAME USER_ID USER_ROLE -------------- ---------- -------------------- holmess 4 DBA watsonj 5 DBA This segment of the article has shown that the striping view works for three different types of privileges. The next segment shows you how to incorporate it in an authentication function. Leverage the client_info column value for authentication It only requires adding a single line of code to the authorize authentication function my earlier Creating a PL/SQL Authentication Function article. The complete modified function follows for the convenience of the reader. SQL> CREATE OR REPLACE FUNCTION authorize 2 ( pv_username VARCHAR2 3 , pv_password VARCHAR2 4 , pv_session VARCHAR2 5 , pv_raddress VARCHAR2 ) RETURN authentication_t IS 6 7 /* Declare session variable. 
*/ 8 lv_match BOOLEAN := FALSE; 9 lv_session VARCHAR2(30); 10 lv_user_id NUMBER := -1; 11 12 /* Declare authentication_t instance. */ 13 lv_authentication_t AUTHENTICATION_T := authentication_t(null,null,null); 14 15 /* Define an authentication cursor. */ 16 CURSOR authenticate 17 ( cv_username VARCHAR2 18 , cv_password VARCHAR2 ) IS 19 SELECT user_id 20 , user_group_id 21 FROM application_user 22 WHERE user_name = cv_username 23 AND user_password = cv_password 24 AND SYSDATE BETWEEN start_date AND NVL(end_date,SYSDATE); 25 26 /* Declare a cursor for existing sessions. */ 27 CURSOR valid_session 28 ( cv_session VARCHAR2 29 , cv_raddress VARCHAR2 ) IS 30 SELECT ss.session_id 31 , ss.session_number 32 FROM system_session ss 33 WHERE ss.session_number = cv_session 34 AND ss.remote_address = cv_raddress 35 AND (SYSDATE - ss.last_update_date) pv_username 105 , password => pv_password 106 , sessionid => pv_session ); 107 END IF; 108 109 /* Return the authentication set. */ 110 RETURN lv_authentication_t; 111 EXCEPTION 112 WHEN OTHERS THEN 113 RETURN lv_authentication_t; 114 END; 115 / Line 38 declares an exception variable and line 39 issues a precompiler instruction that maps the user-defined error number to the exception variable. Lines 52 through 54 are an IF block that sets the login striping, or raises an exception when it fails to set the login striping. Lines 111 through 113 handle the exception and return an empty set of values in the authentication_t type. The test case is straightforward. You call the authorize function from a query, like this: SQL> COLUMN username FORMAT A10 SQL> COLUMN password FORMAT A10 SQL> COLUMN sessionid FORMAT A12 SQL> SELECT username 2 , SUBSTR(password,1,3)||'...'||SUBSTR(password,-3,3) AS password 3 , sessionid 4 FROM TABLE( 5 SELECT CAST( 6 COLLECT( 7 authorize( pv_username => 'watsonj' 8 , pv_password => 'c0b137fe2d792459f26ff763cce44574a5b5ab03' 9 , pv_session => 'session_test' 10 , pv_raddress => '127.0.0.1')) AS authentication_tab) 11 FROM dual); It prints the following: USERNAME PASSWORD SESSIONID ---------- ---------- ------------ watsonj c0b...b03 session_test After having set the striping values inside the authorize function, you can check whether they're correct with the following query: SQL> SELECT au.user_name 2 , au.user_id 3 , au.user_role 4 FROM authorized_user au; It returns the following values: USER_NAME USER_ID USER_ROLE -------------- ---------- -------------------- holmess 4 DBA watsonj 5 DBA This article has shown you how to stripe views and how to set the striping values inside an authentication function.
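The test cases above call a set_login function that is not reproduced here. As a rough sketch only, such a helper might look like the following, assuming it packs the user's user_id and user_group_id into CLIENT_INFO as two zero-padded, five-character fields to match the SUBSTR offsets used in the striped view; the actual implementation may differ:

CREATE OR REPLACE FUNCTION set_login (pv_user_name VARCHAR2) RETURN VARCHAR2 IS
  lv_user_id        NUMBER;
  lv_user_group_id  NUMBER;
BEGIN
  /* Look up the IDs for the named user. */
  SELECT user_id
  ,      user_group_id
  INTO   lv_user_id
  ,      lv_user_group_id
  FROM   application_user
  WHERE  user_name = pv_user_name;

  /* Write both IDs into the session's CLIENT_INFO as two zero-padded,
     five-character fields (positions 1-5 and 6-10). */
  DBMS_APPLICATION_INFO.SET_CLIENT_INFO(
    LPAD(lv_user_id,5,'0') || LPAD(lv_user_group_id,5,'0'));

  RETURN 'Login values set.';
END;
/

With something like this in place, SYS_CONTEXT('USERENV','CLIENT_INFO') returns the packed string that the striped authorized_user view decodes.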
Blog Post: Exporting an Org Chart from HCM Cloud
Introduction Recently we were asked about the Directory within Oracle HCM Cloud. The online view within the system is fine, but in order to get approval that the hierarchy is correct, they wanted to print out sections and get executive signatures on the printouts. Built-in Functionality / Workaround HCM Cloud does not (as of May 2016) contain the functionality to export Organisation Charts. This is reported in a number of My Oracle Support documents, including 2110956.1, which states that there is an enhancement request (ER 20844508) open to address this need. In the interim, Oracle suggests the following workaround: export the data from an OTBI report to an MS Excel file. Steps to export from an OTBI report to MS Excel Create an OTBI report containing at least the worker and manager names from the Worker dimension Export the OTBI analysis to Excel Import the Excel into Visio by following these instructions Given the above instructions, you'll obviously need access to HCM Cloud, Excel and Visio to complete this task. Detailed Walkthrough The above instructions are a little sparse. Here follows a step-by-step guide: Exporting the Data from HCM Cloud First we need to export the data from HCM Cloud: Within Reports & Analytics, go to the BI Catalogue Create a new Analysis using the Worker Assignment Real-Time Subject Area From the Worker dimension add Name and Manager Name attributes, and from the Job dimension add the Name (i.e. Job Title) attribute Save the analysis and run it Export the results to an Excel file Open the produced Excel file, delete the header line and amend the column headings to 'name' for the worker name column, 'title' for the job title column and 'reports-to' for the manager name column (a sample layout is shown at the end of this post). Save the Excel file somewhere that you'll be able to locate it in the next section Importing the Data into Visio The following has been performed with Visio Professional 2013. More recent (or older) versions may behave slightly differently. Open Visio and choose New > Organization Chart. This starts the Organization Chart Wizard. Choose 'Information that's already stored in a file or database' and click Next Select 'A text, Org Plus (*.txt), or Excel file' and click Next On the next screen, browse to the location that you saved your Excel file to, and click Next Ensure that the top two fields are correct (Name = name and Reports to = reports-to) and click Next The next two dialogues control what data is visible on the Org Chart itself. For both dialogues ensure that reports-to is on the left, and the columns that you want displayed are on the right. Repeat for the 2nd dialogue On the next dialogue select 'Don't include pictures in my organization chart' and click Next. Adding pictures is quite fiddly at this stage and introduces a lot more complexity into the process. On the next dialogue select 'I want to specify how much of my organization to display on each page' and click Next The following page controls the data that is displayed, who is at the top of the Org Chart and how many levels down are shown. Visio attempts to suggest examples; however, the best approach is to delete all of the pages and start afresh. Highlight each page and click 'Delete Page' Click 'Add Page' and in the 'Name at top of page' field select the executive to appear at the top of your Org Chart Select the number of additional levels to show.
Typically 2 is a reasonable number, as this will show the executive, their direct reports, and the direct reports below them. A greater number can result in a very large chart. Click Finish. After a short pause the Org Chart is displayed. This is your data, but with the default formatting. We can improve on this significantly. Formatting the Org Chart This is an illustration of the process of formatting the Org Chart, and is just a suggestion. Company colour palettes and personal preferences may mean that other options are chosen. The following has been performed with Visio Professional 2013. More recent (or older) versions may behave slightly differently. By default a place-holder is present for the photo, even if photos are not being used. This takes up valuable space and adds little to the result. Hit Ctrl + A to highlight the entire Org Chart, and from the Org Chart ribbon menu select Delete in the Picture area. You can then reduce the height of the boxes, and the spacing between them. Finally, the default style is a little heavy, especially if it is going to be printed, where it will use up a lot of ink. A style such as 'Clouds' gives a lighter (and kinder on the toner cartridges) appearance. You'll now want to zoom out slightly and get a feel for how many pages the result spans. You can often reduce the number of pages used by rearranging the layout slightly. The finished result is a tidy, visually appealing export of data from HCM Cloud that's compact without being overly information-dense, and which doesn't use up a lot of printer toner. To share the output electronically, Visio Org Charts can be cut'n'pasted into email, Word or Powerpoint, or can be exported to PDF or a range of image formats. It's more likely, however, that you'll want to print it, where again, exporting to PDF or image formats makes this easier. To repeat the process with another executive at the head, the same data file can be used, meaning that you're not starting again from scratch. The Org Chart Wizard will retain the settings chosen last time, so it's mostly a case of clicking Next, Next, Next through the dialogues.
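For reference, after the column-renaming step described in the export section above, the Excel file might look something like the following; the names and titles are purely hypothetical and only the column layout matters, with the person at the top of the hierarchy having an empty reports-to cell:

name           title             reports-to
Smith, Jane    Sales Director
Jones, Robert  Sales Manager     Smith, Jane
Brown, Alice   Account Manager   Jones, Robert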
Blog Post: Oracle Optimizer Internals (Oracle12)
Last week I indicated a problem of poor performance with Oracle11.2's cardinality feedback. Oracle12 expands upon this cardinality feedback to do 'adaptive plan management', or the ability to change the execution plan on the fly while executing. Oracle12 also saves the real row counts from the first execution (cardinality feedback) in the form of SQL Plan Directives, to be used in place of dynamic sampling and in other cases where missing stats come into play… I've always been suspicious of how this adaptive plan management would play out, because what Oracle does is create 2 plans…that it can dynamically switch between. Yes, my SQL Tuner (and Active SQL) programs are Oracle12 ready! Notice the DBMS_XPLAN display above…Oracle12 is set to use Hash Joins, but the Nested Loops are ready to go, commented out with a '-'. Notice line 7 of the Explain Plan with its 'Statistics Collector'. This part is what does the monitoring. When Oracle12 sees that a different plan might work better (i.e. Nested Loops work better when you join Larger to Smaller (driving table first is the larger of the 2…) and Hash Joins work better going smaller to larger…see prior blogs on SQL hard parse processing…or my book: Oracle SQL Tuning: A Close Look at the Cost-based Optimizer, available today at Amazon.com), it will switch to the commented-out plan. The reason I was waiting to see how this played out is that in my 30 years of Oracle SQL tuning, I've not once seen a case where changing a Hash Join to a Nested Loop, or vice versa, produced a better-executing explain plan. The optimizer group did not ask me what I thought… [:D] So, it's not working out well for some folks. The feedback I'm getting is that when this feature is used, performance is often worse, so people have been turning it off. You can turn the Adaptive Plan Management feature off in Oracle12 with one of these init.ora settings: optimizer_adaptive_features=FALSE; optimizer_features_enable set to a value below 12.1.0.1…; or optimizer_adaptive_reporting_only set to TRUE (a sketch of these commands follows at the end of this post). Thank you for the syntax, Robert. Dan Hotka Oracle ACE Director Author/Instructor/CEO
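For reference, here is a minimal sketch of how those settings might be applied as ALTER SYSTEM commands; parameter availability differs between 12c releases and the optimizer_features_enable value shown is only an example, so verify against your own version (and test) before changing anything:

-- Option 1: disable the 12.1 adaptive features entirely.
ALTER SYSTEM SET optimizer_adaptive_features = FALSE SCOPE=BOTH;

-- Option 2: leave adaptive plans in reporting-only mode, so the optimizer
-- records what it would have done without actually switching plans.
ALTER SYSTEM SET optimizer_adaptive_reporting_only = TRUE SCOPE=BOTH;

-- Option 3: fall back to pre-12.1 optimizer behaviour altogether
-- (the release value here is just an example).
ALTER SYSTEM SET optimizer_features_enable = '11.2.0.4' SCOPE=BOTH;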