Introduction Maintaining global indexes has always been an expensive task in Oracle, particularly in the context of partitioned tables, where dropping or truncating a table partition could leave a global index UNUSABLE/INVALID unless the indexes were updated as part of the drop/truncate operation. However, updating the index entries during a partition drop/truncate slows down the actual drop/truncate operation. With Oracle 12c, this is now a thing of the past. In Oracle 12c, Oracle has optimized many existing database functionalities, and one of these improvements is the maintenance of global indexes while dropping or truncating a table partition. With Oracle 12c, a drop or truncate table partition (with the update indexes clause) is optimized by deferring the maintenance of the associated global indexes, while still leaving the indexes in a VALID state. Prior to 12c, a drop/truncate table partition (with the update indexes clause) caused synchronous maintenance of the associated global indexes and thereby delayed the actual drop/truncate operation. With 12c, however, this global index maintenance is performed asynchronously, which optimizes the drop or truncate partition operation. In today's article, we are going to explore this new enhancement introduced with Oracle Database 12c. How it works? Before starting with the exploration and demonstration, let me give a brief idea of how this enhancement works within an Oracle 12c database. Starting with 12c, Oracle defers (postpones) the deletion of global index entries associated with a dropped or truncated table partition. Oracle internally maintains the list of global index entries whose associated table partitions have been dropped and marks those index entries as ORPHAN entries. These ORPHAN index entries are cleaned up by default on a regular basis through a scheduler job named SYS.PMO_DEFERRED_GIDX_MAINT_JOB. The ORPHAN index entries can also be cleaned up manually on demand, either by running the scheduler job, by executing the DBMS_PART.CLEANUP_GIDX procedure, by rebuilding the global index/index partitions, or by coalescing the index/index partitions. Additionally, when we query data from the table, Oracle scans through the orphaned index entries and ignores any record which points to an orphan entry. Since Oracle no longer needs to update the global index entries synchronously during the drop or truncate table partition operation, the actual drop or truncate table partition operation becomes considerably faster. Demonstration Let's go through a quick demonstration to understand this functionality. In the following example, I am creating a partitioned table for our demonstration. I will demonstrate the functionality with a truncate operation; you can expect similar results for drop operations too. ---// ---// create a partitioned table for demo //--- ---// SQL> create table T_PART_AYSNC 2 ( 3 id number, 4 name varchar(15), 5 join_date date 6 ) 7 partition by range (join_date) 8 ( 9 PARTITION P1 values less than (TO_DATE('01-FEB-2016','DD-MON-YYYY')) tablespace MYAPP_TS, 10 PARTITION P2 values less than (TO_DATE('01-MAR-2016','DD-MON-YYYY')) tablespace MYAPP_TS, 11 PARTITION P3 values less than (TO_DATE('01-APR-2016','DD-MON-YYYY')) tablespace MYAPP_TS 12 ); Table created.
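Before populating and indexing the table, here is a compact sketch of the workflow the rest of this demonstration walks through in detail: truncate a partition while keeping the global indexes usable, check the new ORPHANED_ENTRIES flag, and clean the orphaned entries up on demand. It simply strings together the demo table above with the cleanup options described in the previous section; each step is shown with real output later in the article.

-- 12c: truncate a partition and defer the global index maintenance
alter table T_PART_AYSNC truncate partition P1 update global indexes;

-- check which indexes on the table now carry orphaned entries
select index_name, status, orphaned_entries from dba_indexes where table_name = 'T_PART_AYSNC';

-- clean up the orphaned entries on demand (schema/table as used in this demo)
exec dbms_part.cleanup_gidx('MYAPP','T_PART_AYSNC');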
Now, let's populate the partitioned table with some data. ---// ---// populating partitioned table //--- ---// SQL> insert /*+ APPEND */ into T_PART_AYSNC 2 select rownum, rpad('X',15,'X'), '13-JAN-2016' 3 from dual connect by rownum <= 1e5; 100000 rows created. SQL> commit; Commit complete. SQL> insert /*+ APPEND */ into T_PART_AYSNC 2 select rownum+1e5, rpad('X',15,'X'), '13-FEB-2016' 3 from dual connect by rownum <= 1e5; 100000 rows created. SQL> commit; Commit complete. SQL> insert /*+ APPEND */ into T_PART_AYSNC 2 select rownum+2e5, rpad('X',15,'X'), '13-MAR-2016' 3 from dual connect by rownum <= 1e5; 100000 rows created. SQL> commit; Commit complete. Now, let's create a couple of global indexes on the partitioned table. ---// ---// create global partitioned index on the partitioned table //--- ---// SQL> create unique index T_PART_AYSNC_PK on T_PART_AYSNC (id) global 2 partition by range (id) 3 ( 4 partition id_p1 values less than (100001) tablespace MYAPP_TS, 5 partition id_p2 values less than (200001) tablespace MYAPP_TS, 6 partition id_p3 values less than (maxvalue) tablespace MYAPP_TS 7 ); Index created. ---// ---// create global index on the partitioned table //--- ---// SQL> create index idx_T_PART_AYSNC_G on T_PART_AYSNC (join_date); Index created. At this point, we have a partitioned table with three partitions holding 100000 records each. We also have two global indexes on this partitioned table, one of which is a partitioned global index and the other a normal (non-partitioned) global index. Let's validate the status of the global indexes before we actually truncate a table partition. ---// ---// global index status //--- ---// SQL> select i.index_name,null partition_name,i.num_rows,s.blocks, 2 i.leaf_blocks,s.bytes/1024/1024 Size_MB,i.status,i.orphaned_entries 3 from dba_indexes i, dba_segments s 4 where i.index_name=s.segment_name and i.index_name='IDX_T_PART_AYSNC_G' 5 union 6 select i.index_name,i.partition_name,i.num_rows,s.blocks, 7 i.leaf_blocks,s.bytes/1024/1024 Size_MB,i.status,i.orphaned_entries 8 from dba_ind_partitions i, dba_segments s 9 where i.partition_name=s.partition_name and i.index_name='T_PART_AYSNC_PK' 10 ; INDEX_NAME PARTITION_ NUM_ROWS BLOCKS LEAF_BLOCKS SIZE_MB STATUS ORP -------------------- ---------- ---------- ---------- ----------- ---------- -------- --- IDX_T_PART_AYSNC_G 300000 1024 962 8 VALID NO T_PART_AYSNC_PK ID_P1 100000 1024 264 8 USABLE NO T_PART_AYSNC_PK ID_P2 100000 1024 265 8 USABLE NO T_PART_AYSNC_PK ID_P3 100000 1024 265 8 USABLE NO There is a new column, ORPHANED_ENTRIES, in this output. This column indicates whether an index has any ORPHAN entries associated with it as a result of a drop/truncate partition operation. Now, let us examine how expensive the truncate table partition operation is with index maintenance. To measure the cost, we will take note of the "db block gets" and "redo size" statistics for the current database session before and after the truncate table partition operation, and compare that cost with a truncate table partition operation performed without index maintenance. Let's first analyze the cost involved in a truncate table partition operation without index maintenance.
---// ---// noting db block gets and redo size for current session (before truncate partition) //--- ---// SQL> select sn.name,ss.value from v$statname sn, v$sesstat ss 2 where sn.statistic#=ss.statistic# 3 and sn.name in ('redo size','db block gets') 4 and ss.sid = (select sys_context('userenv','sid') from dual) 5 ; NAME VALUE -------------------- ---------- db block gets 3 redo size 992 ---// ---// truncate table partition without index maintenance //--- ---// SQL> alter table T_PART_AYSNC truncate partition P1; Table truncated. ---// ---// noting db block gets and redo size for current session (after truncate partition) //--- ---// SQL> select sn.name,ss.value from v$statname sn, v$sesstat ss 2 where sn.statistic#=ss.statistic# 3 and sn.name in ('redo size','db block gets') 4 and ss.sid = (select sys_context('userenv','sid') from dual) 5 ; NAME VALUE -------------------- ---------- db block gets 106 redo size 22112 The truncate table partition (without index maintenance) operation took 103 block gets and 21120 bytes of redo to complete the truncate operation. However, it left the global indexes in INVALID/UNUSABLE state as found below. ---// ---// Index status after drop partition without index maintenance //--- ---// SQL> select i.index_name,null partition_name,i.num_rows,s.blocks, 2 i.leaf_blocks,s.bytes/1024/1024 Size_MB,i.status,i.orphaned_entries 3 from dba_indexes i, dba_segments s 4 where i.index_name=s.segment_name and i.index_name='IDX_T_PART_AYSNC_G' 5 union 6 select i.index_name,i.partition_name,i.num_rows,s.blocks, 7 i.leaf_blocks,s.bytes/1024/1024 Size_MB,i.status,i.orphaned_entries 8 from dba_ind_partitions i, dba_segments s 9 where i.partition_name=s.partition_name and i.index_name='T_PART_AYSNC_PK' 10 ; INDEX_NAME PARTITION_ NUM_ROWS BLOCKS LEAF_BLOCKS SIZE_MB STATUS ORP -------------------- ---------- ---------- ---------- ----------- ---------- -------- --- IDX_T_PART_AYSNC_G 300000 1024 962 8 UNUSABLE NO T_PART_AYSNC_PK ID_P1 100000 1024 264 8 UNUSABLE NO T_PART_AYSNC_PK ID_P2 100000 1024 265 8 UNUSABLE NO T_PART_AYSNC_PK ID_P3 100000 1024 265 8 UNUSABLE NO Lets rebuild the indexes and evaluate the cost of truncating the table partition with index maintenance in place. ---// ---// noting db block gets and redo size for current session (before truncate partition) //--- ---// SQL> select sn.name,ss.value from v$statname sn, v$sesstat ss 2 where sn.statistic#=ss.statistic# 3 and sn.name in ('redo size','db block gets') 4 and ss.sid = (select sys_context('userenv','sid') from dual) 5 ; NAME VALUE -------------------- ---------- db block gets 3 redo size 992 ---// ---// truncate table partition with index maintenance //--- ---// SQL> alter table T_PART_AYSNC truncate partition P2 update global indexes; Table truncated. ---// ---// noting db block gets and redo size for current session (after truncate partition) //--- ---// SQL> select sn.name,ss.value from v$statname sn, v$sesstat ss 2 where sn.statistic#=ss.statistic# 3 and sn.name in ('redo size','db block gets') 4 and ss.sid = (select sys_context('userenv','sid') from dual) 5 ; NAME VALUE -------------------- ---------- db block gets 122 redo size 25284 As we can observe, it took just 119 block gets and 24292 bytes of redo to complete the truncate partition operation with index maintenance. This is almost similar to the cost of truncating the table partition without updating the global indexes. Now, let's find out the status of the global indexes after this truncate operation. 
---// ---// global index status after truncate partition with index maintenance //--- ---// SQL> exec dbms_stats.gather_table_stats('MYAPP','T_PART_AYSNC'); PL/SQL procedure successfully completed. SQL> select i.index_name,null partition_name,i.num_rows,s.blocks, 2 i.leaf_blocks,s.bytes/1024/1024 Size_MB,i.status,i.orphaned_entries 3 from dba_indexes i, dba_segments s 4 where i.index_name=s.segment_name and i.index_name='IDX_T_PART_AYSNC_G' 5 union 6 select i.index_name,i.partition_name,i.num_rows,s.blocks, 7 i.leaf_blocks,s.bytes/1024/1024 Size_MB,i.status,i.orphaned_entries 8 from dba_ind_partitions i, dba_segments s 9 where i.partition_name=s.partition_name and i.index_name='T_PART_AYSNC_PK' 10 ; INDEX_NAME PARTITION_ NUM_ROWS BLOCKS LEAF_BLOCKS SIZE_MB STATUS ORP -------------------- ---------- ---------- ---------- ----------- ---------- -------- --- IDX_T_PART_AYSNC_G 100000 768 322 6 VALID YES T_PART_AYSNC_PK ID_P1 0 1024 0 8 USABLE YES T_PART_AYSNC_PK ID_P2 0 1024 0 8 USABLE YES T_PART_AYSNC_PK ID_P3 100000 1024 265 8 USABLE YES As we can observe, the global indexes are still in VALID/USABLE state. However, this time the new column ORPHANED_ENTRIES is showing a value YES. This indicates that the associated global indexes are now having orphan index entries as a result of the truncate table partition operation. Now the question comes, how Oracle ensures that these indexes will return correct result as they have orphaned entries associated with them? Well, as I mentioned earlier; Oracle internally maintains the list of all orphaned entries associated with a global index. We can view the orphaned entries by querying SYS.INDEX_ORPHANED_ENTRY$ or SYS.INDEX_ORPHANED_ENTRY_V$ views as shown below. ---// ---// viewing orphaned index entries //--- ---// SQL> select * from sys.index_orphaned_entry$ order by 1; INDEXOBJ# TABPARTDOBJ# H ---------- ------------ - 86586 86583 O 86587 86583 O 86588 86583 O 86589 86583 O SQL> select * from SYS.INDEX_ORPHANED_ENTRY_V$; INDEX_OWNER INDEX_NAME INDEX_SUBNAME INDEX_OBJECT_ID TABLE_OWNER TABLE_NAME TABLE_SUBNAME TABLE_OBJECT_ID T --------------- -------------------- ------------- --------------- --------------- --------------- --------------- --------------- - MYAPP T_PART_AYSNC_PK ID_P1 86586 MYAPP T_PART_AYSNC 86581 O MYAPP T_PART_AYSNC_PK ID_P2 86587 MYAPP T_PART_AYSNC 86581 O MYAPP T_PART_AYSNC_PK ID_P3 86588 MYAPP T_PART_AYSNC 86581 O MYAPP IDX_T_PART_AYSNC_G 86589 MYAPP T_PART_AYSNC 86581 O Entries from this view point to the list of indexes (INDEXOBJ#) which are having orphaned index entries and the table partition (TABPARTDOBJ#) to which all these orphaned entries belong to. Here, we can see all the orphaned index entries are linked to table partition object 86583, which is the partition P2 that we have truncated earlier. When we query data (using index) from a truncated/dropped table partition, Oracle scans through this list of orphaned index entries to avoid/ignore querying data from the respective table partitions (which is/are truncated or dropped). 
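If you want to confirm which table partition a TABPARTDOBJ# value points to, the data object ID can be resolved through DBA_OBJECTS. A minimal sketch, using the ID 86583 reported above (object IDs will differ on your system):

select owner, object_name, subobject_name, object_type
from dba_objects
where data_object_id = 86583;

In this demonstration it resolves to partition P2 of T_PART_AYSNC, i.e. the partition we truncated.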
---// ---// query data from table with dropped/truncated partition //--- ---// SQL> select /*+ gather_plan_statistics */ * from T_PART_AYSNC where id>10 and id select * from table(dbms_xplan.display_cursor(null,null,'allstats last')); PLAN_TABLE_OUTPUT ------------------------------------------------------------------------------------------------------------------------ SQL_ID 6md0spb139jqj, child number 0 ------------------------------------- select /*+ gather_plan_statistics */ * from T_PART_AYSNC where id>10 and id 10 AND "ID" ,0,8,0,"T_PART_AYSNC".ROWID)=1) 22 rows selected. Here is another example, where we query data from both dropped and existing partitions. ---// ---// query data from table with dropped/truncated partition //--- ---// SQL> select /*+ gather_plan_statistics */ * from T_PART_AYSNC where id>199999 and id select * from table(dbms_xplan.display_cursor(null,null,'allstats last')); PLAN_TABLE_OUTPUT ------------------------------------------------------------------------------------------------------------------------ SQL_ID 7ph2mc3j5p58w, child number 0 ------------------------------------- select /*+ gather_plan_statistics */ * from T_PART_AYSNC where id>199999 and id 199999 AND "ID" ,0,8,0,"T_PART_AYSNC".ROWID)=1) 22 rows selected. As we can see, in all cases Oracle is able to return the correct result from the existing table partitions, even though the index has orphaned entries pointing to the dropped/truncated partition, by simply ignoring the index entries that are marked as orphan by the drop or truncate table partition operation. Oracle uses the undocumented function TBL$OR$IDX$PART$NUM for partition pruning and thereby avoids reading the truncated or dropped table partition. Oracle periodically cleans up the orphan index entries by means of the scheduler job SYS.PMO_DEFERRED_GIDX_MAINT_JOB, which is scheduled to run at 2:00 AM every day as shown below. SQL> select owner,job_name,program_name,next_run_date,state,enabled from dba_scheduler_jobs where job_name='PMO_DEFERRED_GIDX_MAINT_JOB'; OWNER JOB_NAME PROGRAM_NAME NEXT_RUN_DATE STATE ENABL ---------- --------------------------- ------------------------- ------------------------------------------ --------------- ----- SYS PMO_DEFERRED_GIDX_MAINT_JOB PMO_DEFERRED_GIDX_MAINT 08-MAY-16 02.00.00.823524 AM ASIA/CALCUTTA SCHEDULED TRUE We can also clean up the orphan index entries on demand, either by running the scheduler job SYS.PMO_DEFERRED_GIDX_MAINT_JOB or by calling the procedure DBMS_PART.CLEANUP_GIDX as shown below. ---// ---// clean up orphan index entries by running maintenance job //--- ---// SQL> exec dbms_scheduler.run_job('PMO_DEFERRED_GIDX_MAINT_JOB'); PL/SQL procedure successfully completed. ---// ---// clean up orphan index entries by calling cleanup_gidx procedure //--- ---// SQL> exec dbms_part.cleanup_gidx('MYAPP','T_PART_AYSNC'); PL/SQL procedure successfully completed. We can also rebuild or coalesce individual indexes or index partitions to get rid of the orphaned index entries, using the following syntax (index and partition names are placeholders). ---// ---// rebuild index to cleanup orphan entries //--- ---// ALTER INDEX index_name REBUILD [PARTITION partition_name]; ---// ---// coalesce index to cleanup orphan entries //--- ---// ALTER INDEX index_name [MODIFY PARTITION partition_name] COALESCE CLEANUP;
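For the demo objects used in this article, those generic forms would translate into statements along the following lines; this is a sketch of the syntax rather than output captured from the test session.

-- rebuild one partition of the partitioned global index
ALTER INDEX T_PART_AYSNC_PK REBUILD PARTITION ID_P1;

-- coalesce and clean up the non-partitioned global index
ALTER INDEX IDX_T_PART_AYSNC_G COALESCE CLEANUP;

-- coalesce and clean up a single partition of the partitioned global index
ALTER INDEX T_PART_AYSNC_PK MODIFY PARTITION ID_P2 COALESCE CLEANUP;

Once we perform the cleanup through any of these methods, the respective index entries are removed from the orphaned entries list, as found below.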
---// ---// no orphaned index entries after clean up //--- ---// SQL> select * from sys.index_orphaned_entry$; no rows selected ---// ---// orphaned_entries flag is cleared after clean up //--- ---// SQL> select i.index_name,null partition_name,i.num_rows,s.blocks, 2 i.leaf_blocks,s.bytes/1024/1024 Size_MB,i.status,i.orphaned_entries 3 from dba_indexes i, dba_segments s 4 where i.index_name=s.segment_name and i.index_name='IDX_T_PART_AYSNC_G' 5 union 6 select i.index_name,i.partition_name,i.num_rows,s.blocks, 7 i.leaf_blocks,s.bytes/1024/1024 Size_MB,i.status,i.orphaned_entries 8 from dba_ind_partitions i, dba_segments s 9 where i.partition_name=s.partition_name and i.index_name='T_PART_AYSNC_PK' 10 ; INDEX_NAME PARTITION_ NUM_ROWS BLOCKS LEAF_BLOCKS SIZE_MB STATUS ORP -------------------- ---------- ---------- ---------- ----------- ---------- -------- --- IDX_T_PART_AYSNC_G 100000 768 322 6 VALID NO T_PART_AYSNC_PK ID_P1 0 1024 0 8 USABLE NO T_PART_AYSNC_PK ID_P2 0 1024 0 8 USABLE NO T_PART_AYSNC_PK ID_P3 100000 1024 265 8 USABLE NO We can also see that, the queries are now not applying additional filters for pruning the orphan records ---// ---// no filter applied after cleanup of orphan index entries //--- ---// SQL> select /*+ gather_plan_statistics */ * from T_PART_AYSNC where id=13; no rows selected Elapsed: 00:00:00.00 SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last')); PLAN_TABLE_OUTPUT ---------------------------------------------------------------------------------------------------------------- SQL_ID 7wtduvyjzbxwt, child number 0 ------------------------------------- select /*+ gather_plan_statistics */ * from T_PART_AYSNC where id=13 Plan hash value: 2879828060 ----------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | ----------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 0 |00:00:00.01 | 1 | | 1 | PARTITION RANGE SINGLE | | 1 | 1 | 0 |00:00:00.01 | 1 | | 2 | TABLE ACCESS BY GLOBAL INDEX ROWID| T_PART_AYSNC | 1 | 1 | 0 |00:00:00.01 | 1 | |* 3 | INDEX UNIQUE SCAN | T_PART_AYSNC_PK | 1 | 1 | 0 |00:00:00.01 | 1 | ----------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 3 - access("ID"=13) 20 rows selected. Conclusion Asynchronous global index maintenance is a great enhancement to the existing index maintenance functionalities. It enables us to efficiently drop/truncate table partitions without disturbing the associated global indexes. We also have a choice to perform the global index maintenance during the default maintenance window (2:00 AM every day) or on demand based on our convenience, which gives a great flexibility in terms of deciding the maintenance window. Even though the index maintenance is asynchronous, it doesn't impact the accuracy and efficiency of the index search as the index remains USABLE as well as Oracle internally handles the orphan index entries during query processing.
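As a closing note on the maintenance window mentioned above: the schedule of the cleanup job can be changed like that of any other scheduler job. A minimal sketch, assuming you are connected with sufficient privileges to modify the SYS-owned job (the repeat interval shown is only an example):

BEGIN
  dbms_scheduler.set_attribute(
    name      => 'SYS.PMO_DEFERRED_GIDX_MAINT_JOB',
    attribute => 'repeat_interval',
    value     => 'FREQ=DAILY;BYHOUR=3;BYMINUTE=0');
END;
/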
Blog Post: Oracle 12c: Global Index maintenance is now asynchronous
Blog Post: Snap Clone Via Self Service Portal – Part 2
A friend wrote: “One question regarding PDBaaS. Is it possible with OEM 13c (and same question for 12c) to provision through Self Service Portal a PDB with disk space reduction using the snapshot copy feature of Multitenant ? If so, how? (documentation, MOS...) “I'm currently struggling to fine how this could be done.” The answer was: Yes, it should be possible to set this up for PDBs as well. Previously I had tested separate databases for clonedb. But it should work for PDBs. I did test it out for admin flow about a week or so ago and it has worked. This is for the admin flow, I followed these steps. Steps in bold are the important steps. I have 12.1.0.2 as the Database version, this is a test CDB database Set the CLONEDB parameter to true, and bounced the database Started Enterprise Manager and tried to create a test master from the SALES PDB by right clicking, this failed at the initialization step with the error "Reference GUID is not populated" (the error can be worked around by creating new preferred credentials if none present). Instead, did the following: Create a full clone from Sales pdb. Once this was done, for the new created PDB , enable it as a Test Master PDB. This puts the new PDB in read only mode. It can now be used for the snapshot clone. Now, right click on the Test Master PDB. This shows the menu with “Create Snapshot Clone”. Start this procedure. Inside the procedure, when the file location is asked for, enter the dNFS file location /u02/copy-on-write (this is a dNFS location already set up) The procedure completes and the snap clone PDB is created. You can see the results. The snap clone PDB files are only using 2.2 MB. $ pwd /u02/copy-on-write/ /2A71B8A858004AF0E055000000000001/datafile $ ls -altr total 2192 drwxr-----. 3 oracle oinstall 4096 Jan 29 03:35 .. drwxr-----. 2 oracle oinstall 4096 Jan 29 03:35 . -rw-r-----. 1 oracle oinstall 1304174592 Jan 29 03:35 o1_mf_example_cboqhbrk_.dbf -rw-r-----. 1 oracle oinstall 5251072 Jan 29 03:35 o1_mf_users_cboqhbrj_.dbf -rw-r-----. 1 oracle oinstall 20979712 Jan 29 03:54 o1_mf_temp_cboqhbrh_.dbf -rw-r-----. 1 oracle oinstall 817897472 Jan 29 03:59 o1_mf_sysaux_cboqhbrf_.dbf -rw-r-----. 1 oracle oinstall 304095232 Jan 29 04:00 o1_mf_system_cboqhbr5_.dbf $ du -hs . 2.2M . On the other hand, the test master pdb is using 2.3 GB of space: $ pwd /u02/oradata/ / SALE_CL1 / /2A719D7185083BD6E055000000000001/datafile $ ls -altr total 2374508 drwxr-----. 3 oracle oinstall 4096 Jan 29 03:27 .. drwxr-----. 2 oracle oinstall 4096 Jan 29 03:28 . -rw-r-----. 1 oracle oinstall 20979712 Jan 29 03:28 o1_mf_temp_cboq1227_.dbf -rw-r-----. 1 oracle oinstall 1304174592 Jan 29 03:29 o1_mf_example_cboq1228_.dbf -rw-r-----. 1 oracle oinstall 5251072 Jan 29 03:29 o1_mf_users_cboq1227_.dbf -rw-r-----. 1 oracle oinstall 817897472 Jan 29 03:29 o1_mf_sysaux_cboq1226_.dbf -rw-r-----. 1 oracle oinstall 304095232 Jan 29 03:29 o1_mf_system_cboq121s_.dbf $ du -hs . 2.3G . Then, when you go to the Test Master PDB home page, and select Oracle Database.. Cloning.. Clone Management, you can see the Snapshot clone in the table of clones. And for deletion, disable the test master first as a test master, then only you can delete both the test master and the snap clone using provisioning from the CDB menu. A read-only clone of the PDB (known as a test master) works in the same way as an RMAN database file image. The dNFS file system is required for the copy on write technology. 
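For reference, the CLONEDB step mentioned above is a static parameter change followed by an instance restart. A minimal sketch for a single-instance test CDB like the one described, run from SQL*Plus as a privileged user:

-- CLONEDB is not dynamically modifiable, so set it in the spfile and bounce the instance
alter system set clonedb=TRUE scope=spfile;
shutdown immediate
startup
-- verify the setting
show parameter clonedb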
For more on snapshot clones of databases, and on database as a service in general, please see my upcoming book here.
Blog Post: Ensuring Data Protection Using Oracle Flashback Features - Part 3
Introduction In my previous article we have reviewed the first Oracle Flashback feature which was introduced in Oracle 9i, named "Flashback Query" and we saw how this feature works "behind the scenes" using undo tablespace' contents. In this article, we will review Oracle 10g Flashback features. Oracle 10g Flashback Features Oracle Database version 10g introduced some great flashback-related enhancements and new features. We can categorize these features into two main categories, Flashback query enhancements and additional flashback features. Oracle Flashback Query Enhancements This category contains all of the Oracle 10g flashback query enhancements including the following: Flashback Version Query, Flashback Transaction Query, and Flashback Table. The reason these features are categorized as enhancements to the 9i Flashback query feature is because they all rely on the undo records in the Undo Tablespace, where the “Additional Flashback Features” are flashback capabilities that do not rely on the undo tablespace, but rather on other Oracle Database components and features. Flashback Version Query The Flashback Version Query allows viewing the historical versions of a specific row or set of rows. Let us continue with the previous example of table EMP. In the previous example, there was an update of employee with ID #1 to be named ROBERT instead of DAVID, and then using by the flashback query, it was updated to be DAVID (as it was originally). By using Flashback Version Query, we can see the history of the row modifications: SQL> select versions_starttime, versions_endtime, versions_xid, versions_operation, name from EMP VERSIONS BETWEEN TIMESTAMP TO_TIMESTAMP('2016-01-10 18:02:28', 'YYYY-MM-DD HH24:MI:SS') AND TO_TIMESTAMP('2016-01-10 18:03:00', 'YYYY-MM-DD HH24:MI:SS') where id = 1 order by VERSIONS_STARTTIME; VERSIONS_STARTTIME VERSIONS_ENDTIME VERSIONS_XID V NAME ---------------------- ---------------------- ---------------- - -------------------- 10-JAN-16 06.02.47 PM 10-JAN-16 06.04.08 PM 060016000A2A0100 U ROBERT 10-JAN-16 06.04.08 PM 01000200EBFA0000 U DAVID The VERSIONS_XID column represents the ID of the transaction that is associated with the row. The transaction ID is useful for retrieving the undo SQL statement for the transaction using the Flashback Transaction Query (more details to follow in the next section). The VERSIONS_OPERATION column value is ‘U’ which indicates that an update has occurred. Other possible values are ‘I’ (which indicates an INSERT) and ‘D’ (which indicates a DELETE). The “VERSIONS BETWEEN” clause allows the DBA to specify SCN MINVALUE AND MAXVALUE, which takes all of the undo information that is available in the undo tablespace as follows: SQL> select versions_starttime, versions_endtime, versions_xid, versions_operation, name from EMP VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE WHERE id = 1 order by VERSIONS_STARTTIME; VERSIONS_STARTTIME VERSIONS_ENDTIME VERSIONS_XID V NAME ---------------------- ---------------------- ---------------- - -------------------- 10-JAN-16 06.02.47 PM 10-JAN-16 06.04.08 PM 060016000A2A0100 U ROBERT 10-JAN-16 06.04.08 PM 01000200EBFA0000 U DAVID Let us see an example of the output of the query after an insertion and deletion of a row. SQL> insert into EMP values (2, 'STEVE'); 1 row created. SQL> commit; Commit complete. SQL> delete EMP where id=2; 1 row deleted. SQL> commit; Commit complete. 
SQL> select versions_starttime, versions_endtime, versions_xid, versions_operation, name from EMP VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE order by VERSIONS_STARTTIME; VERSIONS_STARTTIME VERSIONS_ENDTIME VERSIONS_XID V NAME ---------------------- ---------------------- ---------------- - -------------------- 10-JAN-16 06.02.47 PM 10-JAN-16 06.04.08 PM 060016000A2A0100 U ROBERT 10-JAN-16 06.04.08 PM 01000200EBFA0000 U DAVID 10-JAN-16 06.27.55 PM 10-JAN-16 06.28.01 PM 08000E007A330100 I STEVE 10-JAN-16 06.28.01 PM 0A0000009C120100 D STEVE Flashback Transaction Query The flashback transaction query feature allows the DBA to view the transaction information including the start and the end of the transaction as well as the undo SQL statements for rolling back the transaction. In order to use this feature Oracle introduced a new dictionary view in version 10g named FLASHBACK_TRANSACTION_QUERY which requires having the SELECT ANY TRANSACTION system privilege. In the example above, we found out that transaction ID 0A0000009C120100 has deleted the row of employee named “STEVE” by using the flashback version query. The flashback transaction query can assist in rolling back the transaction by using the UNDO_SQL column, as follows: SQL> select xid, start_scn, commit_scn, operation OP, undo_sql FROM flashback_transaction_query WHERE xid = HEXTORAW('0A0000009C120100'); XID START_SCN COMMIT_SCN UNDO_SQL ----------------- ---------- ---------- ----------------------------------------------------------- 090017001B380100 483051290 483051291 insert into "PINI"."EMP"("ID","NAME") values ('2','STEVE'); Note that in order to have the UNDO_SQL column populated with data, a minimal database supplemental logging must be enabled, which will add additional information to the redo logs. Verify whether minimal database supplemental logging is enabled or not by querying SUPPLEMENTAL_LOG_DATA_MIN from V$DATABASE. If minimal database supplemental logging is disabled (the output of the query is “NO”), you can enable it by executing the following command: SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA; Flashback Table The flashback table feature allows restoring the entire table’s data into an historical point in time in a very simple and straight-forward way- by specifying either SCN or TIMESTAMP. This feature is the best way for the DBA to recover the entire table’s data from human errors or undesired application changes (e.g. inserts, updates, deletes). In order to use this feature, make sure to be aligned with following prerequisites: Have either FLASHBACK object privilege on the table or FLASHBACK ANY TABLE system privilege. In addition, have the following privileges on the table: ALTER, SELECT, INSERT, UPDATE, and DELETE. Enable row movement for the table using ALTER TABLE … ENABLE ROW MOVEMENT. The reason for this requirement is because rows will be moved (inserted, updated, and deleted) during the FLASHBACK TABLE, which is the reason that the user must be granted with the INSERT, UPDATE, DELETE privileges on the table. The following is a demonstration of this feature: SQL> CREATE TABLE EMP (ID NUMBER, NAME VARCHAR2(20)); Table created. SQL> insert into EMP values (1, 'DAVID'); 1 row created. SQL> insert into EMP values (2, 'ROBERT'); 1 row created. SQL> insert into EMP values (3, 'DANIEL'); 1 row created SQL> commit; Commit complete. SQL> select current_scn from v$database; CURRENT_SCN ----------- 483077247 SQL> delete EMP; 3 rows deleted. SQL> commit; Commit complete. 
SQL> select * from EMP; no rows selected SQL> flashback table EMP to scn 483077247; flashback table emp to scn 483077247 * ERROR at line 1: ORA-08189: cannot flashback the table because row movement is not enabled The ORA-08189 is expected because as previously mentioned, one of the prerequisites for using this feature is to enable row movement, and then it would be possible to execute the flashback table command, as follows: SQL> alter table EMP enable row movement; Table altered. SQL> flashback table EMP to scn 483077247; Flashback complete. SQL> select * from EMP; ID NAME ---------- -------------------- 1 DAVID 2 ROBERT 3 DANIEL Note that this feature is using the information in the undo tablespace in order to recover the table so it can only be used to recover the data and not the structure of the table. If there was a change in the structure of the table, for example, by adding a column, Oracle would not be able to recover the table prior to the execution of DDL command. Also, if a table has been dropped, Oracle would not be able to recover it using this feature. Additional Flashback Features So far we have reviewed the Flashback features that rely on the contents of the undo tablespace. In this section we will explore the other 10g Flashback features: Flashback Drop and Flashback Database. These features do not rely on the contents of the undo tablespace. Flashback Drop Starting with Oracle version 10g, Oracle introduced a new parameter named “RECYCLEBIN” (defaults to “ON”): SQL> show parameter RECYCLEBIN NAME TYPE VALUE ------------------------------------ ----------- -------- recyclebin string on Assuming that RECYCLEBIN parameter is set to ON, then once the object is dropped, it will remain in the tablespace and Oracle will keep the information about the dropped table and its associated objects in a dictionary view named USER_RECYCLEBIN (it has a synonym named RECYCLEBIN), which shows per each schema its objects in the recycle bin, as follows: SQL> SELECT object_name, original_name, droptime FROM RECYCLEBIN; OBJECT_NAME ORIGINAL_NANE TYPE DROPTIME ------------------------------ ------------- ----- -------------- BIN$ZW5M6bSsRKe6PiqynWR9Xw==$0 EMP TABLE 2016-01-21:17:24:26 BIN$tWgtlRlzTZ2lCoZd0Ex7Rg==$0 ID_PK INDEX 2016-01-21:17:24:25 Note : that it is possible to query the recycle bin of the entire instance using DBA_RECYCLEBIN, and CDB_RECYCLEBIN in version 12c to query the recycle bin of all the schemas across all the containers. As seen in the demonstration above, the names of the table and its associated objects in the RECYCLEBIN have a system-generated name (starts with BIN$). It is possible to query directly the recycle bin system-generated names: SQL> SELECT object_name, original_name, droptime FROM RECYCLEBIN; SQL> select * from "BIN$ZW5M6bSsRKe6PiqynWR9Xw==$0"; ID NAME ---------- -------------------- 1 DAVID 2 ROBERT However, it is not possible to execute DML or DDL commands against the tables in the recycle bin. Once the table is being restored from the recycle bin, it will be restored with its original name, but the associated objects will be restored with system-generated names so it is possible to rename these objects later as an optional step. The following is an example that demonstrates how simple it is to restore a dropped table using this feature: SQL> SELECT object_name, original_name, droptime FROM RECYCLEBIN; SQL> FLASHBACK TABLE EMP TO BEFORE DROP; Flashback complete. 
SQL> select * from EMP; ID NAME ---------- -------------------- 1 DAVID 2 ROBERT It is also possible that the object will be restored with a different name (for example, when another object with the same name already exists), using a very simple syntax, as follows: SQL> SELECT object_name, original_name, droptime FROM RECYCLEBIN; SQL> FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_OLD; Flashback complete. Note that it is not guaranteed to have the dropped objects in the recycle bin. In the following scenarios the objects will not be available in the recycle bin: Execution of a DROP TABLE command with the PURGE clause Manual execution of PURGE RECYCLEBIN or PURGE DBA_RECYCLEBIN commands Drop of the entire tablespace will not leave its objects in the recycle bin When dropping a user, all its objects are not placed in the recycle bin Space-pressure in the tablespace on which the objects reside Another thing to keep in mind is that Oracle restores the objects from the recycle bin in a LIFO (Last In First Out) order, so if there are several objects with the same name in the recycle bin and the DBA restores that object using the Flashback Drop feature, then the last one that was dropped will be restored. Flashback Database The Flashback Database feature is the most simple and straightforward way to rewind the entire database to a historical point in time. The reason it is so fast and simple is because it does not require restoring database backups using either RMAN or user-managed backup. However, its biggest disadvantage is that it can only recover “logical failures”, i.e. unnecessary data modifications due to human error. It cannot be used to recover from block corruptions or a loss of a file (e.g. Data Files, Control File). In order to undo changes in the database, this feature uses flashback logs. Flashback logs are being generated once the Flashback Database is enabled. The flashback logs contain before-images of data blocks prior to their change. The Flashback Database operates at a physical level and revert the current data files to their contents at a past time using the flashback logs. The prerequisites for enabling this feature are: The database must be running in ARCHIVELOG mode Enable the FRA (Flash Recovery Area). The FRA is a storage location that contains recovery-related files such as archived logs, RMAN backups, and of course, flashback logs. Once configured, the FRA will simplify the DBA’s daily tasks by retaining the recovery-related files as long as they are needed, and delete them once they are no longer needed (based on the retention policies that are defined by the DBA). Once the FRA prerequisites have been configured properly, it is possible to enable the Flashback Database feature by executing the following command: SQL> ALTER DATABASE FLASHBACK ON; Prior to 11gR2, in order to execute this command, the database had to be restarted to a mounted state and only then it was possible to execute the above command. Starting with 11gR2, it is possible to enable the Flashback Database with no downtime by executing this command when the instance is in OPEN status. It is possible to set the DB_FLASHBACK_RETENTION_TARGET parameter which specifies the upper limit (in minutes) on how far back in time the database may be flashed back. Its default value is 1440 minutes (=one day) of retention for the flashback logs. 
The reason that it is only an upper limit and not a guaranteed retention period is because if the FRA is full (reached the maximum FRA size limit defined by the DB_RECOVERY_FILE_DEST_SIZE parameter) or if there is not enough disk space, then Oracle will reuse the oldest flashback logs which might be within the retention period. It is possible to monitor the oldest possible flashback time via the V$FLASHBACK_DATABASE_LOG dictionary view. In the following demonstration, the FRA is set with a 100GB size limit and flashback retention target of 1 week (=10080 minutes): SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=100g; System altered. SQL> ALTER SYSTEM SET db_recovery_file_dest='/oravl01/oracle/FRA'; System altered. SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=10080; System altered. SQL> ALTER DATABASE FLASHBACK ON; Database altered. In order to rewind the database, restart the database in a MOUNT mode, and then execute the FLASHBACK DATABASE command. It is possible to rewind the database to an exact SCN or Time. It is also possible to rewind the database prior to the SCN or Time. Another simple way is to create a restore point in which a name represents a specific point in time, and then the flashback will restore the database to the time that the restore point had been created. In the following example, a sample table is created, populated with a few records and truncated right after a restore point name “before_truncate” has been created. In this demo, a Flashback Database is used to rewind the enitre database prior to the truncate command that was executed using the “before_truncate” restore point. SQL> create table test_flashback (id number, name varchar2(20)); Table created. SQL> insert into test_flashback values (1, 'DAVID'); 1 row created. SQL> insert into test_flashback values (2, 'JOHN'); 1 row created. SQL> commit; Commit complete. SQL> create restore point before_truncate; Restore point created. SQL> truncate table test_flashback; Table truncated. SQL> select name,scn,time from V$RESTORE_POINT; NAME SCN TIME --------------- ---------- ------------------------------- BEFORE_TRUNCATE 119193304 24-JAN-16 09.14.06.000000000 PM SQL> shutdown immediate; Database closed. Database dismounted. ORACLE instance shut down. SQL> startup mount; ORACLE instance started. Total System Global Area 809500672 bytes Fixed Size 2929600 bytes Variable Size 318770240 bytes Database Buffers 482344960 bytes Redo Buffers 5455872 bytes Database mounted. SQL> FLASHBACK DATABASE TO RESTORE POINT before_truncate; Flashback complete. SQL> alter database open resetlogs; Database altered. SQL> select * from test_flashback; ID NAME ---------- --------------- 1 DAVID 2 JOHN Summary In this article, have reviewed Oracle 10g Flashback features. We started with the Flashback Query enhancements (Flashback Version Query, Flashback Transaction Query, Flashback Table) and continued with additional flashback enhancements (Flashback Drop and Flashback Database). In the next part, we will review Oracle 11g Flashback features. Stay tuned :)
Blog Post: Oracle 12c: DDL Logging.. will it serve the purpose?
Oracle had introduced a cool feature in version 11g, where we were able to log or track DDL statements executed in the database by means of a parameter called ENABLE_DDL_LOGGING without setting up database auditing. Setting ENABLE_DDL_LOGGING to TRUE results in logging DDL statements in to the instance alert log (s). Since the DDL statements are logged into alert log file, it becomes a hard task to scan through the alert log and find the DDL logs. Oracle has made a significant change in terms of DDL logging with version 12c. In Oracle 12c, the DDL logs are maintained in dedicated DDL log file (s) unlike the instance alert log file which was the case with Oracle 11g. This makes it easier to track DDL statements executed in a database. In 12c, Oracle maintains the DDL logs in two files (XML and plain text) under the ADR_HOME as listed below. XML Version: $ADR_BASE/diag/rdbms/${DBNAME}/${ORACLE_SID}/log/ddl/log.xml Text Version: $ADR_BASE/diag/rdbms/${DBNAME}/${ORACLE_SID}/log/ddl_${ORACLE_SID}.log As per the Oracle documentation, setting ENABLE_DDL_LOGGING to TRUE will log following DDL statements executed in a database. ALTER/CREATE/DROP/TRUNCATE CLUSTER ALTER/CREATE/DROP FUNCTION ALTER/CREATE/DROP INDEX ALTER/CREATE/DROP OUTLINE ALTER/CREATE/DROP PACKAGE ALTER/CREATE/DROP PACKAGE BODY ALTER/CREATE/DROP PROCEDURE ALTER/CREATE/DROP PROFILE ALTER/CREATE/DROP SEQUENCE CREATE/DROP SYNONYM ALTER/CREATE/DROP/RENAME/TRUNCATE TABLE ALTER/CREATE/DROP TRIGGER ALTER/CREATE/DROP TYPE ALTER/CREATE/DROP TYPE BODY DROP USER ALTER/CREATE/DROP VIEW There are some discrepancies while using ENABLE_DDL_LOGGING for tracking DDL statements particularly in the context of multi tenant architecture. In this post, I will go through a quick demonstration to explore how this feature works and what are the discrepancies associated with this feature. Let’s start with our demonstration. By default, DDL logging is not enabled as we can find by querying the v$parameter view. ---// ---// DDL logging is disabled by default //--- ---// SQL> show con_name CON_NAME ------------------------------ CDB$ROOT SQL> select name,value,default_value from v$parameter where name='enable_ddl_logging'; NAME VALUE DEFAULT_VALUE ------------------------- ---------- --------------- enable_ddl_logging FALSE FALSE Let’s enable DDL logging in our database by setting ENABLE_DDL_LOGGING to TRUE as shown below. My demonstration is targeted against a Oracle 12c (12.1.0.2) container database to understand how the feature works in a multi tenant environment. ---// ---// Enable DDL logging //--- ---// SQL> show con_name CON_NAME ------------------------------ CDB$ROOT SQL> alter system set enable_ddl_logging=TRUE; System altered. ---// ---// Validate DDL logging is enabled in the database //--- ---// SQL> select name,value,default_value from v$parameter where name='enable_ddl_logging'; NAME VALUE DEFAULT_VALUE ------------------------- ---------- --------------- enable_ddl_logging TRUE FALSE At this point, we have enabled DDL logging for our container database. However, there is no log file created yet as we haven’t performed any DDL after enabling the DDL logging. This can be confirmed by looking to the DDL log locations as shown below. 
##--- ##--- DDL log files are not created yet ---## ##--- [oracle@labserver1 ~]$ cd /app/oracle/diag/rdbms/orpcdb1/orpcdb1/log/ddl/ [oracle@labserver1 ddl]$ ls -lrt total 0 Let’s perform few DDL statements on both container (CDB$ROOT) and pluggable (CDB1_PDB_1) databases and observe how these statements are logged by Oracle. ---// ---// executing DDL in root(CDB$ROOT) container //--- ---// 16:24:02 SQL> show con_name CON_NAME ------------------------------ CDB$ROOT 16:24:16 SQL> create user c##test identified by c##test; User created. 16:24:29 SQL> drop user c##test cascade; User dropped. ---// ---// connecting to PDB CDB1_PDB_1 //--- ---// 16:24:36 SQL> conn myapp@cdb1_pdb_1 Enter password: Connected. ---// ---// executing DDL in pluggable database CDB1_PDB_1 //--- ---// 16:24:49 SQL> show con_name CON_NAME ------------------------------ CDB1_PDB_1 16:25:34 SQL> create table test as select * from all_users; Table created. 16:25:48 SQL> create table test1 as select * from test; Table created. 16:26:04 SQL> truncate table test1; Table truncated. 16:26:13 SQL> drop table test purge; Table dropped. We have performed a number of DDL statements in both root (CDB$ROOT) and pluggable (CDB1_PDB_1) databases. We can now see the existence of respective DDL log files as shown below. ##--- ##--- DDL logs are created after DDL execution ---## ##--- [oracle@labserver1 ddl]$ pwd /app/oracle/diag/rdbms/orpcdb1/orpcdb1/log/ddl [oracle@labserver1 ddl]$ ls -lrt total 4 -rw-r----- 1 oracle dba 1650 May 8 16:26 log.xml [oracle@labserver1 log]$ pwd /app/oracle/diag/rdbms/orpcdb1/orpcdb1/log [oracle@labserver1 log]$ ls -lrt ddl_${ORACLE_SID}.log -rw-r----- 1 oracle dba 225 May 8 16:26 ddl_orpcdb1.log Let’s see what is logged in the DDL log files for these DDL statements. First, let’s take a look into the text version of the logs ##--- ##--- DDL logs from text version of log file ---## ##--- [oracle@labserver1 log]$ cat ddl_orpcdb1.log diag_adl:drop user c##test cascade Sun May 08 16:25:48 2016 diag_adl:create table test as select * from all_users diag_adl:create table test1 as select * from test diag_adl:truncate table test1 diag_adl:drop table test purge We can see that all the DDL statements are logged (except the “create user” statement as that is not supported) into the log file. However, looks like the log file is just a sequence of DDL statements without any context of those statements. There are some vital information missing from this log file. If you observe it closely, you will find that TIMESTAMP information is missing for most of the statements. I was also expecting the container information to be associated with each of the logged DDL statement. However, that information is not there in the log file. Since, we are dealing with multi tenant database; the container information is very much required to be able to determine in which container a particularly DDL statement was executed. Without the container details, these DDL logging could not server any purpose. Now let’s take a look into the XML version of the DDL log file to see if we can find any missing information there. ##--- ##--- DDL logs from XML version of log file ---## ##--- [oracle@labserver1 ddl]$ cat log.xml drop user c##test cascade create table test as select * from all_users create table test1 as select * from test truncate table test1 drop table test purge The XML version of the log file looks to be more informative than the text version. 
We now have additional details available for each of the DDL statements executed, such as the TIMESTAMP of the DDL along with some session-specific details like host_id and host_addr. However, the logs are still not sufficient for a multi tenant container database. The container information is still missing from the logs, so we can't rely on them to track in which container a particular DDL statement was executed. Footnote: The ENABLE_DDL_LOGGING feature can be considered for use in Oracle 11g or in an Oracle 12c non-CDB database. We should query the XML version of the log file for detailed information related to a DDL statement rather than the text version, as the text version seems to miss the TIMESTAMP and session-specific information for most DDL statements. Moreover, ENABLE_DDL_LOGGING does not appear to be an ideal option for DDL tracking in a multi tenant container database, considering that it doesn't log the container details of DDL statements and in turn serves little purpose for tracking there. The last thing to mention is that you need an additional license to use the DDL logging feature. I would recommend evaluating it thoroughly before actually implementing it, especially if you are dealing with container databases.
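If you do decide to enable it, the ADR home under which the log/ddl/log.xml and log/ddl_${ORACLE_SID}.log files are written (the paths shown earlier) can be located from inside the database. A small sketch:

select name, value
from v$diag_info
where name in ('ADR Base', 'ADR Home');

The DDL logs then sit under the log subdirectory of the reported ADR Home.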
Wiki Page: Creating a PL/SQL Authentication Function
Many developers prefer to write APEX applications today, but there are developers who build standalone web applications. This article describes how you can create an Access Control List (ACL), and a PL/SQL authentication function. An ACL lists your user names, passwords, and system privileges. Along with the ACL you need to create tables to track valid and invalid sessions. This article describes sample ACL tables and teaches you how to build an authentication function. It’s divided into two parts defining ACL tables and developing a PL/SQL authentication function. Defining ACL Tables There are four basic ACL tables. They store the user names and passwords, qualify user privileges, and store valid and invalid sessions. This paper implements the following tables: The application_user table, which stores user names and passwords. The system_session table, which stores valid sessions or records of user logins. The invalid_session table, which stores invalid attempts to connect to the database by unauthorized users. This article doesn’t cover how to manage user privileges because its focus is how you authenticate or repudiate access to a multiple user database. An application_privilege table would be a subordinate table to the application_user table. You define the application_user table and application_user_s sequence with the following code: SQL> CREATE TABLE application_user 2 ( user_id NUMBER CONSTRAINT pk_application_user1 PRIMARY KEY 3 , user_name VARCHAR2(20) CONSTRAINT nn_application_user1 NOT NULL 4 , user_password VARCHAR2(40) CONSTRAINT nn_application_user2 NOT NULL 5 , user_role VARCHAR2(20) CONSTRAINT nn_application_user3 NOT NULL 6 , user_group_id NUMBER CONSTRAINT nn_application_user4 NOT NULL 7 , user_type NUMBER CONSTRAINT nn_application_user5 NOT NULL 8 , start_date DATE CONSTRAINT nn_application_user6 NOT NULL 9 , end_date DATE 10 , first_name VARCHAR2(20) CONSTRAINT nn_application_user7 NOT NULL 11 , middle_name VARCHAR2(20) 12 , last_name VARCHAR2(20) CONSTRAINT nn_application_user8 NOT NULL 13 , created_by NUMBER CONSTRAINT nn_application_user9 NOT NULL 14 , creation_date DATE CONSTRAINT nn_application_user10 NOT NULL 15 , last_updated_by NUMBER CONSTRAINT nn_application_user11 NOT NULL 16 , last_update_date DATE CONSTRAINT nn_application_user12 NOT NULL 17 , CONSTRAINT un_application_user1 UNIQUE(user_name)); There are seven columns in the natural key for the application_user table. The natural key uses the user_name , user_password , user_role , user_group_id , user_type , start_date , and end_date columns. The user_id column on line 2 is the surrogate key. The end_date column is always a null value for currently authorized users. It should be noted that there’s always a one-to-one match between surrogate and natural keys. The created_by , creation_date , last_update_date , and last_updated_by columns are often called the “who done it” or “who-audit” columns. That's because they document who creates the row and who last updates the row. You need to create an application_user_s sequence for the application_user table: SQL> CREATE SEQUENCE application_user_s; At this point, lets insert some test records into the application_user table. The data will help us test the PL/SQL authentication function in the next section. 
The following inserts five rows in the application_user table: SQL> INSERT INTO application_user VALUES 2 ( application_user_s.nextval 3 ,'potterhj','c0b137fe2d792459f26ff763cce44574a5b5ab03' 4 ,'System Admin', 2, 1, SYSDATE, null, 'Harry', 'James', 'Potter' 5 , 1, SYSDATE, 1, SYSDATE); SQL> INSERT INTO application_user VALUES 2 ( application_user_s.nextval 3 ,'weasilyr','35675e68f4b5af7b995d9205ad0fc43842f16450' 4 ,'Guest', 1, 1, SYSDATE, null, 'Ronald', null, 'Weasily' 5 , 1, SYSDATE, 1, SYSDATE); SQL> INSERT INTO application_user VALUES 2 ( application_user_s.nextval 3 ,'longbottomn','35675e68f4b5af7b995d9205ad0fc43842f16450' 4 ,'Guest', 1, 1, SYSDATE, null, 'Neville', null, 'Longbottom' 5 , 1, SYSDATE, 1, SYSDATE); SQL> INSERT INTO application_user VALUES 2 ( application_user_s.nextval 3 ,'holmess','c0b137fe2d792459f26ff763cce44574a5b5ab03' 4 ,'DBA', 3, 1, SYSDATE, null, 'Sherlock', null, 'Holmes' 5 , 1, SYSDATE, 1, SYSDATE); SQL> INSERT INTO application_user VALUES 2 ( application_user_s.nextval 3 ,'watsonj','c0b137fe2d792459f26ff763cce44574a5b5ab03' 4 ,'DBA', 3, 1, SYSDATE, null, 'John', 'H', 'Watson' 5 , 1, SYSDATE, 1, SYSDATE); The INSERT statements use encrypted values for the user_password value but you can encrypt them with the encrypt function discussed in this other article of mine . It did not seem necessary here to show encryption. The system_session table holds the valid session information. You define the system_session table with the following code: SQL> CREATE TABLE system_session 2 ( session_id NUMBER CONSTRAINT pk_ss1 PRIMARY KEY 3 , session_number VARCHAR2(30) CONSTRAINT nn_ss1 NOT NULL 4 , remote_address VARCHAR2(15) CONSTRAINT nn_ss2 NOT NULL 5 , user_id NUMBER CONSTRAINT nn_ss3 NOT NULL 6 , created_by NUMBER CONSTRAINT nn_ss4 NOT NULL 7 , creation_date DATE CONSTRAINT nn_ss5 NOT NULL 8 , last_updated_by NUMBER CONSTRAINT nn_ss6 NOT NULL 9 , last_update_date DATE CONSTRAINT nn_ss7 NOT NULL); You create the system_session_s sequence for the system_session table, like this: SQL> CREATE SEQUENCE system_session_s; There are three columns in the natural key of the system_session table. The natural key uses the session_number , remote_address , and user_id columns. The external programming language, like Perl, PHP, or Ruby, establish the session number. Like the prior table, the session_id column is a surrogate key and the primary key for the table. Like, the earlier table, the other columns are the who-audit columns. A fully qualified session table would include additional columns. For example, you would define columns to store all of the HTML headers. The following defines the invalid_session table: SQL> CREATE TABLE invalid_session 2 ( session_id NUMBER CONSTRAINT pk_invalid_session1 PRIMARY KEY 3 , session_number VARCHAR2(30) CONSTRAINT nn_invalid_session1 NOT NULL 4 , remote_address VARCHAR2(15) CONSTRAINT nn_invalid_session2 NOT NULL 5 , created_by NUMBER CONSTRAINT nn_invalid_session3 NOT NULL 6 , creation_date DATE CONSTRAINT nn_invalid_session4 NOT NULL 7 , last_updated_by NUMBER CONSTRAINT nn_invalid_session5 NOT NULL 8 , last_update_date DATE CONSTRAINT nn_invalid_session6 NOT NULL); You create the invalid_session_s sequence for the invalid_session table, like this: SQL> CREATE SEQUENCE invalid_session_s; The invalid_session table mirrors all but one of the columns from the system_session table. The missing column in the invalid_session table is the user_id column. That’s because you only write to the invalid_session table when the user isn’t valid. 
The natural key of the invalid_session table has only two columns. They are the session_number and remote_address columns. The session_id is the surrogate key for the invalid_session table. After defining the tables, you can now develop the PL/SQL authentication function in the next section. Developing a PL/SQL Authentication Function This section shows you how to write a PL/SQL authentication function. The function will take four key arguments. The arguments are the user name, password, session number, and remote address. The function also returns three values, and they are the user name, password, and session number. PL/SQL functions can return one thing, like most programming languages. The one thing a PL/SQL function can return may be a scalar variable or a composite data type. A composite data type can be a data structure or a collection type. A data structure is a set of elements organized by their data type in position order. They’re defined as object types in SQL and record types in PL/SQL. A collection type holds a set of a scalar data type or a set of a data structure. The PL/SQL authentication function requires you to define an object type before you can implement an authentication function. The next two subsections show you how to create the object type and function. The third subsection shows you how to perform a unit test. Define a PL/SQL Object Type A PL/SQL object type defines a data structure with an implicit constructor. However, PL/SQL supports positional and named notation. It also supports a default constructor, which can be three arguments in positional order or by named order. The definition of the authentication_t type is: SQL> CREATE OR REPLACE 2 TYPE authentication_t IS OBJECT 3 ( username VARCHAR2(20) 4 , password VARCHAR2(40) 5 , sessionid VARCHAR2(30)); 6 / While the scope of the PL/SQL authentication function is capable of running in SQL or PL/SQL scope, PL/SQL requires a collection of the object type when you want to call it from a query in SQL scope. The following defines a collection of the authentication_t type: SQL> CREATE OR REPLACE 2 TYPE authentication_tab IS TABLE OF authentication_t; 3 / After creating the function in the schema, you can create the PL/SQL authentication function. Define a PL/SQL Authentication Function The PL/SQL authentication function lets you check whether the user name and password match before creating a connection to the database. After verifying a user’s credentials, the user name and password, the function writes a record to the system_session table. A failure to update the user’s credentials writes to the invalid_session table. The function returns an empty authentication_t instance when the credentials don’t validate, and a populated instance when they do validate. The function also lets a user preserve a connection when the user posts a reply within five minutes. It does that by updating the last_update_date column of the system_session table. The following provides the authorize function code: SQL> CREATE OR REPLACE FUNCTION authorize 2 ( pv_username VARCHAR2 3 , pv_password VARCHAR2 4 , pv_session VARCHAR2 5 , pv_raddress VARCHAR2 ) RETURN authentication_t IS 6 7 /* Declare session variable. */ 8 lv_session VARCHAR2(30); 9 10 /* Declare authentication_t instance. */ 11 lv_authentication_t AUTHENTICATION_T := authentication_t(null,null,null); 12 13 /* Define an authentication cursor. 
*/ 14 CURSOR authenticate 15 ( cv_username VARCHAR2 16 , cv_password VARCHAR2 ) IS 17 SELECT user_id 18 , user_group_id 19 FROM application_user 20 WHERE user_name = cv_username 21 AND user_password = cv_password 22 AND SYSDATE BETWEEN start_date AND NVL(end_date,SYSDATE); 23 24 /* Declare a cursor for existing sessions. */ 25 CURSOR valid_session 26 ( cv_session VARCHAR2 27 , cv_raddress VARCHAR2 ) IS 28 SELECT ss.session_id 29 , ss.session_number 30 FROM system_session ss 31 WHERE ss.session_number = cv_session 32 AND ss.remote_address = cv_raddress 33 AND (SYSDATE - ss.last_update_date) pv_username 72 , password => pv_password 73 , sessionid => pv_session ); 74 /* Commit the records. */ 75 COMMIT; 76 /* Return a record structure. */ 77 RETURN lv_authentication_t; 78 END; 79 / Lines 2 through 5 qualify the four formal parameters, which are the user name, password, session ID, and remote address. Line 11 declares and instantiates an empty instance of the authentication_t type. The function returns this empty instance when the user credentials aren’t validated. Line 36 declares a precompiler instruction in the declaration block, which makes the authorize function an autonomous function. Autonomous functions may include DML statements. You can also call an autonomous function from a query because the transaction scope is independent of the query. However, you must put a COMMIT statement before the end of the execution block, which is on line 75. After the cursor is opened successfully on line 40, the loop on lines 42 through 44 checks whether an existing session exists, and line 47 validates whether the loop found one. Line 53 is the ELSE block that inserts a row into the system_session table. Lines 71 through 73 populate the local lv_authentication_t instance. Line 75 commits the insert or update, and line 77 returns either the empty instance or a populated instance of the lv_authentication_t structure. Unit Test a PL/SQL Authentication Function There are two unit test cases in this section. One tests a call to the authorize function in a PL/SQL context and the other a call to the authorize function in a SQL context. The following calls the authorize function in a PL/SQL context: SQL> DECLARE 2 /* Declare authentication_t instance. */ 3 lv_authentication_t AUTHENTICATION_T; 4 BEGIN 5 /* Create instance of authentication_t type. */ 6 lv_authentication_t := authorize('potterhj' 7 ,'c0b137fe2d792459f26ff763cce44574a5b5ab03' 8 ,'session_test' 9 ,'127.0.0.1'); 10 /* Print object instance. */ 11 dbms_output.put_line('Username [' || lv_authentication_t.username || ']'); 12 dbms_output.put_line('Password [' || lv_authentication_t.password || ']'); 13 dbms_output.put_line('SessionID [' || lv_authentication_t.sessionid || ']'); 14 END; 15 / It prints: Username [potterhj] Password [c0b137fe2d792459f26ff763cce44574a5b5ab03] SessionID [session_test] The following calls the authorize function in a SQL context: SQL> SELECT * 2 FROM TABLE( 3 SELECT CAST( 4 COLLECT( 5 authorize( 6 pv_username => 'potterhj' 7 , pv_password => 'c0b137fe2d792459f26ff763cce44574a5b5ab03' 8 , pv_session => 'session_test' 9 , pv_raddress => '127.0.0.1') ) AS authentication_tab) 10 FROM dual); It prints: USERNAME PASSWORD SESSIONID ---------- -------------------- ------------ potterhj c0b137fe2d792459 ... session_test The choice of which constructor you use is important.
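For comparison, the same authentication_t instance can be constructed with either notation; this is only an illustration, with the variable name invented and the values borrowed from the test case above:

DECLARE
  lv_auth AUTHENTICATION_T;
BEGIN
  /* Positional notation: depends entirely on the attribute order in the type. */
  lv_auth := authentication_t('potterhj'
                             ,'c0b137fe2d792459f26ff763cce44574a5b5ab03'
                             ,'session_test');

  /* Named notation: self-documenting, and safer when attributes share a data type. */
  lv_auth := authentication_t(username  => 'potterhj'
                             ,password  => 'c0b137fe2d792459f26ff763cce44574a5b5ab03'
                             ,sessionid => 'session_test');
END;
/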
You should avoid positional notation where possible, and that’s especially true when the elements of the object type share the same scalar data type. Lines 6 through 9 in the anonymous PL/SQL block and lines 5 through 9 in the query show you how to use named notation. This article has shown you how to create and test a PL/SQL authentication function for your standalone web applications. If you want to learn how to configure a secured or unsecured DAD in the XDB Server, please check this other post of mine . You can find the setup code for the authentication function with test cases at this URL .
Wiki Page: Loading MySQL Table Data into Oracle Database
Written by Deepak Vohra If data is to be loaded from MySQL to Oracle Database, two of the most commonly used relational databases, some of the options available are: Use Sqoop Use SQL Developer Use Oracle Loader for Hadoop We discussed the Sqoop option in an earlier tutorial . In this tutorial we discuss the Oracle Loader for Hadoop option. Oracle Loader for Hadoop (OLH) is a tool to load data from different data sources into Oracle Database. OLH supports the Avro, delimited text file, Hive and Oracle NoSQL Database input formats, but does not support a JDBC input format. Though Oracle Loader for Hadoop does not support the JDBC input format directly, we shall discuss in this tutorial how a Hive external table can be created over a MySQL database and Oracle Loader for Hadoop used to load the Hive table data into Oracle Database. Setting the Environment Creating Database Tables Starting HDFS Creating a Hive External Table Configuring Oracle Loader for Hadoop Running the Oracle Loader for Hadoop Querying Oracle Database Table Setting the Environment We require the following software to be installed on Oracle Linux 6.6. -MySQL 5.x Database -Hive JDBC Storage Handler -Oracle Loader for Hadoop 3.0.0 -CDH 4.6 Hadoop 2.0.0 -CDH 4.6 Hive 0.10.0 -Java 7 Create a directory called /hive and set its permissions to global (777). mkdir /hive chmod -R 777 /hive cd /hive Download, install and configure MySQL Database, the Hive JDBC Storage Handler, CDH 4.6 Hadoop 2.0.0, CDH 4.6 Hive 0.10.0 and Java 7 as discussed in an earlier tutorial on creating a Hive external table over MySQL Database. Download Oracle Loader for Hadoop Release 3.0.0 oraloader-3.0.0.x86_64.zip from http://www.oracle.com/technetwork/database/database-technologies/bdc/big-data-connectors/downloads/index.html . Unzip the file to a directory. Two files get extracted: oraloader-3.0.0-h1.x86_64.zip and oraloader-3.0.0-h2.x86_64.zip . The oraloader-3.0.0-h2.x86_64.zip file is for CDH4 and CDH5. As we are using CDH 4.6, extract oraloader-3.0.0-h2.x86_64.zip . root>unzip oraloader-3.0.0-h2.x86_64.zip Copy the Hive JDBC Storage Handler jar to the Oracle Loader for Hadoop jlib directory. cp /hive/hive-0.10.0-cdh4.6.0/hive-jdbc-storage-handler/hive-jdbc-storage-handler-1.1.1-cdh4.3.0-SNAPSHOT-dist.jar /hive/oraloader-3.0.0-h2/jlib Set the environment variables for MySQL Database, Oracle Database, Oracle Loader for Hadoop, Hadoop, Hive and Java in the bash shell. vi ~/.bashrc export HADOOP_PREFIX=/hive/hadoop-2.0.0-cdh4.6.0 export HADOOP_CONF=$HADOOP_PREFIX/etc/hadoop export OLH_HOME=/hive/oraloader-3.0.0-h2 export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1 export ORACLE_SID=ORCL export HIVE_HOME=/hive/hive-0.10.0-cdh4.6.0 export HIVE_CONF=$HIVE_HOME/conf export JAVA_HOME=/hive/jdk1.7.0_55 export MYSQL_HOME=/mysql/mysql-5.6.19-linux-glibc2.5-i686 export HADOOP_MAPRED_HOME=/hive/hadoop-2.0.0-cdh4.6.0/bin export HADOOP_HOME=/hive/hadoop-2.0.0-cdh4.6.0/share/hadoop/mapreduce2 export HADOOP_CLASSPATH=$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$HIVE_HOME/lib/*:$HIVE_CONF:$HADOOP_CONF:$OLH_HOME/jlib/* export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_MAPRED_HOME:$HIVE_HOME/bin:$MYSQL_HOME/bin:$ORACLE_HOME/bin export CLASSPATH=$HADOOP_CLASSPATH Creating Database Tables Create a MySQL Database table called wlslog and add table data, as discussed in an earlier tutorial on creating a Hive external table over MySQL Database. A select query lists the wlslog table data. Create an Oracle Database table called OE.wlslog to load data into.
CREATE TABLE OE.wlslog (time_stamp VARCHAR2(255), category VARCHAR2(255), type VARCHAR2(255), servername VARCHAR2(255), code VARCHAR2(255), msg VARCHAR2(255)); The OE.WLSLOG table may have to be dropped first if already created for another application. Starting HDFS Format the NameNode and start the NameNode. Also start the DataNode. hadoop namenode -format hadoop namenode hadoop datanode Create the Hive warehouse directory in HDFS in which the Hive tables are to be stored. hadoop dfs -mkdir hdfs://10.0.2.15:8020/user/warehouse hadoop dfs -chmod -R g+w hdfs://10.0.2.15:8020/user/warehouse Create a directory structure in HDFS to put the Hive lib jars in, set permissions for the directory, and put the Hive lib jars in HDFS. hadoop dfs -mkdir hdfs://10.0.2.15:8020/hive/hive-0.10.0-cdh4.6.0/lib hadoop dfs -chmod -R g+w hdfs://10.0.2.15:8020/hive/hive-0.10.0-cdh4.6.0/lib hadoop dfs -put /hive/hive-0.10.0-cdh4.6.0/lib/* hdfs://10.0.2.15:8020/hive/hive-0.10.0-cdh4.6.0/lib Also, create a directory structure for Oracle Loader for Hadoop in HDFS and put the OLH jars into HDFS. /hive>hadoop dfs -mkdir hdfs://10.0.2.15:8020/hive/oraloader-3.0.0-h2/jlib /hive>hadoop dfs -chmod -R g+w hdfs://10.0.2.15:8020/hive/oraloader-3.0.0-h2/jlib /hive>hadoop dfs -put /hive/oraloader-3.0.0-h2/jlib/* hdfs://10.0.2.15:8020/hive/oraloader-3.0.0-h2/jlib Creating a Hive External Table Create a Hive external table using the CREATE EXTERNAL TABLE command. First, start the Hive Thrift server with the following command. hive --service hiveserver Start the Hive shell. hive Add the Hive JDBC Storage Handler jar file to the Hive shell classpath with the ADD JAR command. hive>ADD JAR /hive/hive-0.10.0-cdh4.6.0/hive-jdbc-storage-handler/hive-jdbc-storage-handler-1.1.1-cdh4.3.0-SNAPSHOT-dist.jar; Create the Hive external table called wlslog in the default database with the CREATE EXTERNAL TABLE command in the Hive shell. hive>CREATE EXTERNAL TABLE wlslog(time_stamp STRING, category STRING, type STRING, servername STRING, code STRING, msg STRING) STORED BY 'com.qubitproducts.hive.storage.jdbc.JdbcStorageHandler' TBLPROPERTIES ( "qubit.sql.database.type" = "MySQL", "qubit.sql.jdbc.url" = "jdbc:mysql://localhost:3306/test?user=root&password=", "qubit.sql.jdbc.driver" = "com.mysql.jdbc.Driver", "qubit.sql.query" = "SELECT time_stamp,category,type, servername, code, msg FROM wlslog", "qubit.sql.column.mapping" = "time_stamp=time_stamp,category=category,type=type,servername=servername,code=code,msg=msg"); Query the Hive external table using a SELECT query in the Hive shell to list the Hive table data. Configuring Oracle Loader for Hadoop Oracle Loader for Hadoop makes use of a configuration file to get the parameters for the loader. Create a configuration file ( OraLoadConf.xml ). The input format is specified using the mapreduce.inputformat.class property, which is set to oracle.hadoop.loader.lib.input.HiveToAvroInputFormat for Hive input. If Hive is the input format, the following properties are also required to be specified: oracle.hadoop.loader.input.hive.databaseName, the Hive database name (default), and oracle.hadoop.loader.input.hive.tableName, the Hive table name (wlslog). The output format class, set with the mapreduce.job.outputformat.class property, is oracle.hadoop.loader.lib.output.JDBCOutputFormat, as data is to be loaded into an Oracle Database table. The target database table is specified with the oracle.hadoop.loader.loaderMap.targetTable property.
In this tutorial the oracle.hadoop.loader.loaderMap.targetTable property is set to OE.WLSLOG. The following properties are provided for specifying the connection parameters (property, description, Oracle Database setting): oracle.hadoop.loader.connection.url, the connection URL used to connect to Oracle Database, jdbc:oracle:thin:@${HOST}:${TCPPORT}:${SID}; TCPPORT, the port number to connect to, 1521; HOST, the host name, localhost; SID, the Oracle Database service name, ORCL; oracle.hadoop.loader.connection.user, the user name or schema name, OE; oracle.hadoop.loader.connection.password, the password, OE. The OraLoadConf.xml configuration file is listed below.

<configuration>
  <property>
    <name>mapreduce.inputformat.class</name>
    <value>oracle.hadoop.loader.lib.input.HiveToAvroInputFormat</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.input.hive.databaseName</name>
    <value>default</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.input.hive.tableName</name>
    <value>wlslog</value>
  </property>
  <property>
    <name>mapreduce.job.outputformat.class</name>
    <value>oracle.hadoop.loader.lib.output.JDBCOutputFormat</value>
  </property>
  <property>
    <name>mapreduce.output.fileoutputformat.outputdir</name>
    <value>oraloadout</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.loaderMap.targetTable</name>
    <value>OE.WLSLOG</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.connection.url</name>
    <value>jdbc:oracle:thin:@${HOST}:${TCPPORT}:${SID}</value>
  </property>
  <property>
    <name>TCPPORT</name>
    <value>1521</value>
  </property>
  <property>
    <name>HOST</name>
    <value>localhost</value>
  </property>
  <property>
    <name>SID</name>
    <value>ORCL</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.connection.user</name>
    <value>OE</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.connection.password</name>
    <value>OE</value>
  </property>
</configuration>

Running the Oracle Loader for Hadoop Run the Oracle Loader for Hadoop with the following command, in which the configuration file is specified with the -conf option. hadoop jar $OLH_HOME/jlib/oraloader.jar oracle.hadoop.loader.OraLoader -conf OraLoadConf.xml -libjars $OLH_HOME/jlib/oraloader.jar A MapReduce job runs to load 7 rows of data from the MySQL database into Oracle Database. A more detailed output from OLH is as follows.
And is in the process of committing 15/09/02 19:49:56 INFO mapred.LocalJobRunner: map 15/09/02 19:49:56 INFO mapred.Task: Task attempt_local1709085000_0001_m_000000_0 is allowed to commit now 15/09/02 19:49:57 INFO output.JDBCOutputFormat: Committed work for task attempt attempt_local1709085000_0001_m_000000_0 15/09/02 19:49:57 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1709085000_0001_m_000000_0' to hdfs://10.0.2.15:8020/user/root/oraloadout/_temporary/0/task_local1709085000_0001_m_000000 15/09/02 19:49:57 INFO mapred.LocalJobRunner: map 15/09/02 19:49:57 INFO mapred.Task: Task 'attempt_local1709085000_0001_m_000000_0' done. 15/09/02 19:49:57 INFO mapred.LocalJobRunner: Finishing task: attempt_local1709085000_0001_m_000000_0 15/09/02 19:49:57 INFO mapred.LocalJobRunner: Map task executor complete. 15/09/02 19:49:59 INFO loader.OraLoader: Job complete: OraLoader (job_local1709085000_0001) 15/09/02 19:49:59 INFO loader.OraLoader: Counters: 23 File System Counters FILE: Number of bytes read=10414783 FILE: Number of bytes written=11377856 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number of bytes read=10422607 HDFS: Number of bytes written=9769395 HDFS: Number of read operations=234 HDFS: Number of large read operations=0 HDFS: Number of write operations=36 Map-Reduce Framework Map input records=7 Map output records=7 Input split bytes=3270 Spilled Records=0 Failed Shuffles=0 Merged Map outputs=0 GC time elapsed (ms)=2218 CPU time spent (ms)=0 Physical memory (bytes) snapshot=0 Virtual memory (bytes) snapshot=0 Total committed heap usage (bytes)=23531520 File Input Format Counters Bytes Read=0 File Output Format Counters Bytes Written=1622 [root@localhost hive]# Querying Oracle Database Table Subsequently run a SELECT query in Oracle Database SQL*Plus shell. SELECT * FROM OE.WLSLOG; The SELECT command is shown in SQL*Plus shell. The 7 rows of data loaded into Oracle Database get listed. The complete output from the SELECT query is listed: SQL> select * from OE.WLSLOG; TIME_STAMP -------------------------------------------------------------------------------- CATEGORY -------------------------------------------------------------------------------- TYPE -------------------------------------------------------------------------------- SERVERNAME -------------------------------------------------------------------------------- CODE -------------------------------------------------------------------------------- MSG -------------------------------------------------------------------------------- Apr-8-2014-7:06:16-PM-PDT Notice WebLogicServer AdminServer BEA-000365 Server state changed to STANDBY Apr-8-2014-7:06:17-PM-PDT Notice WebLogicServer AdminServer BEA-000365 Server state changed to STARTING Apr-8-2014-7:06:18-PM-PDT Notice WebLogicServer AdminServer BEA-000365 Server state changed to ADMIN Apr-8-2014-7:06:19-PM-PDT Notice WebLogicServer AdminServer BEA-000365 Server state changed to RESUMING Apr-8-2014-7:06:20-PM-PDT Notice WebLogicServer AdminServer BEA-000361 Started WebLogic AdminServer Apr-8-2014-7:06:21-PM-PDT Notice WebLogicServer AdminServer BEA-000365 Server state changed to RUNNING Apr-8-2014-7:06:22-PM-PDT Notice WebLogicServer AdminServer BEA-000360 Server started in RUNNING mode 7 rows selected. SQL> In this tutorial we loaded MySQL database table data into Oracle Database with Oracle Loader for Hadoop.
Wiki Page: Agile Development / DevOps
This section will contain articles on DevOps and agile development.
Wiki Page: Oracle - Wiki
If you were looking for Knowledge Xpert or OraDBpedia, you're in the right place! We've moved to make your community experience even better by leveraging the scale of Toad World's 3 million users. The Oracle Database (commonly referred to as Oracle RDBMS or simply as Oracle) is an object-relational database management system (ORDBMS) produced and marketed by Oracle Corporation. This wiki is the collective experience of many Oracle developers and DBAs. Like Wikipedia, this wiki is an open resource for you to share your knowledge of the Oracle database. So, join in and create/update the articles! Learn more about how to write and edit Wiki articles , review Wiki articles , and get your blog syndicated on Toad World .
Wiki Page: In-Memory Column Store in Oracle 12c
Written by: Juan Carlos Olamendy Turruellas Introduction In this article, I want to talk about a nice feature that comes with Oracle Database 12c Release 1 . This feature enables storing columns, tables, partitions and materialized views in memory in a columnar format rather than the typical row-based format. Today it is very common to find OLTP systems being adapted to run analytical workloads for supporting real-time decision-making, instead of having separate data warehouses and data marts. Designing a database to support both transactional and analytical workloads without degrading performance is a really hard challenge. There is a typical read/write performance dichotomy and trade-off: if we want to support a high-speed read workload, we need to create indexes (which slow down the write workload) and de-normalize (which creates inconsistencies in the transactional data), and the inverse applies if we want to support a high-speed write workload. At the end of the day, OLTP systems and RDBMSs in general are optimized for a consistent and efficient write workload, such as recording transactions and serving row-oriented data. A row-based format enables quick access to all of the columns in a record, since all of the data for a given row is kept together, either in memory in the database buffer cache or on the storage medium. In an analytical workload, when we work with aggregations of data, we need a different data model approach to support few columns that span a huge number of rows. That’s why a columnar format is a better approach for dealing with analytical workloads. The great advantage of the In-Memory Column Store in Oracle 12c is that the same database has the ability to run analytical workloads together with transactional workloads without any database schema or application change or re-design. So, both workloads can be served by the same Oracle database instance. So, what’s the In-Memory Column Store feature? It is a new memory area in the System Global Area ( SGA ) keeping a copy of the data in columnar format. It doesn’t replace the buffer cache; instead it complements it, so the data is represented in memory in both row- and columnar-based formats. The relation between the in-memory area and the SGA is depicted in figure 01. Figure 01 The logic of the in-memory area is as follows: When the data is requested for a transactional workload (row-oriented read/write operation), it is loaded from storage into the buffer cache and then served. When the data is requested for an analytical workload (read-only aggregation operation), it is loaded from storage into the in-memory area and then served. When a transaction (a write: INSERT, UPDATE, DELETE ) is committed, the changes are synchronized in both the buffer cache and the in-memory area. Demo Time We can control the size of the in-memory area by using the initialization parameter INMEMORY_SIZE ( default 0 ), regardless of whether we're using AMM (setting MEMORY_TARGET ) or ASMM (setting SGA_TARGET ). The current size of the in-memory area is visible in the V$SGA view. As a static pool, any changes to the INMEMORY_SIZE parameter will not take effect until the database instance is restarted. Let’s suppose we want to target 3GB for the SGA and leave 2GB for the in-memory area inside the SGA ; we can do it as shown in listing 01. SQL> ALTER SYSTEM SET SGA_TARGET=3G SCOPE=SPFILE; SQL> ALTER SYSTEM SET INMEMORY_SIZE=2G SCOPE=SPFILE; SQL> SHUTDOWN IMMEDIATE; SQL> STARTUP; ORACLE instance started.
Total System Global Area 3221225472 bytes Fixed Size 2929552 bytes Variable Size 419433584 bytes Database Buffers 637534208 bytes Redo Buffers 13844480 bytes In-Memory Area 2147483648 bytes Database mounted. Database opened. Listing 01 In this example, the bulk of memory is assigned to the in-memory area, leaving just around 600MB for the buffer cache. It's worth noting that one trade-off when using the In-Memory feature is balancing the use of the memory available in the system. Transactional workloads perform best with row-based storage with data stored in the buffer cache, while analytics workloads perform best with columnar format with data stored in the in-memory area . We can see the current setting for the in-memory area as shown below in listing 02. SQL> SHOW PARAMETER INMEMORY NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ inmemory_clause_default string inmemory_force string DEFAULT inmemory_max_populate_servers integer 1 inmemory_query string ENABLE inmemory_size big integer 2G inmemory_trickle_repopulate_servers_ integer 1 percent optimizer_inmemory_aware boolean TRUE Listing 02 Once the in-memory area is set up, we can indicate which database objects are to be placed in it. We can mark a table/materialized view to be placed inside the in-memory area after its first full-table scan, as shown in listing 03. SQL> ALTER TABLE my_aggregation_table INMEMORY; SQL> SELECT segment_name, populate_status FROM v$im_segments; no rows selected Listing 03 We can also mark a table/materialized view to be populated into the in-memory area right after the instance starts up, as shown in listing 04. SQL> ALTER TABLE my_aggregation_table INMEMORY PRIORITY CRITICAL; SQL> SELECT segment_name, populate_status FROM v$im_segments; SEGMENT_NAME POPULATE_ ------------------------------------ ----------- MY_AGGREGATION_TABLE COMPLETED Listing 04 Objects are populated into the in-memory area either in a prioritized list immediately after the database is opened or after they are scanned (queried) for the first time. The order in which objects are populated is controlled by the keyword PRIORITY (as shown in listing 04) in five levels: CRITICAL , HIGH , MEDIUM , LOW and NONE . The default PRIORITY is NONE , which means an object is populated only after it is scanned for the first time, as shown in listing 03. When populating the in-memory area , the IMCO background process initiates population of in-memory enabled objects with priority CRITICAL , HIGH , MEDIUM , LOW. Then slave processes ( ORA_WXXX_ORASID ) are dynamically spawned to execute these tasks. The default number of slave processes is 1/2*CPU-cores. While the population is in progress, the database remains available for running queries, but it's not possible to read data from the in-memory area until the population is completed. We can also indicate several compression options as shown in listing 05. SQL> ALTER TABLE my_aggregation_table INMEMORY MEMCOMPRESS FOR QUERY; -- Default compression SQL> ALTER TABLE my_aggregation_table INMEMORY MEMCOMPRESS FOR CAPACITY HIGH; -- Capacity high compression SQL> ALTER TABLE my_aggregation_table INMEMORY MEMCOMPRESS FOR CAPACITY LOW; -- Capacity low compression Listing 05 There are six levels of compression: No Memcompress. Data is populated to the in-memory area without compression. MEMCOMPRESS FOR DML . Mainly for DML performance and minimal compression. MEMCOMPRESS FOR QUERY LOW (Default). Optimized for query performance. MEMCOMPRESS FOR QUERY HIGH .
Optimized for query performance and space saving. MEMCOMPRESS FOR CAPACITY LOW. Optimized for space saving compared to query performance. MEMCOMPRESS FOR CAPACITY HIGH . Optimized for space saving, with a little less query performance. It's worth noting that the compression rate strongly depends on the nature of the data, although it's generally possible to store more data in memory using the in-memory area than using the buffer cache . We can set different in-memory options per column in a table, as shown below in listing 06. SQL> CREATE TABLE sales ( sales_amount NUMBER, items_amount NUMBER, region VARCHAR2(16), description VARCHAR2(64) ) INMEMORY INMEMORY MEMCOMPRESS FOR QUERY HIGH (sales_amount, items_amount) INMEMORY MEMCOMPRESS FOR CAPACITY HIGH (region) NO INMEMORY (description); SQL> SELECT segment_column_id, column_name, inmemory_compression FROM v$im_column_level WHERE table_name = 'sales' ORDER BY segment_column_id; SEGMENT_COLUMN_ID COLUMN_NAME INMEMORY_COMPRESSION -------------------- ----------------- ------------------------------ 1 SALES_AMOUNT FOR QUERY HIGH 2 ITEMS_AMOUNT FOR QUERY HIGH 3 REGION FOR CAPACITY HIGH 4 DESCRIPTION NO INMEMORY 4 rows selected. Listing 06 One of the best use cases of the In-Memory option is to store a materialized view in memory in columnar format. Remember that materialized views are mainly used in BI solutions to store physical aggregated data from transactional data tables. Let's suppose we want to visualize the sales done by region from the table in listing 06, so we create a materialized view using the aggregated data and mark it to use the in-memory area for complex queries, as shown in listing 07. SQL> CREATE MATERIALIZED VIEW mv_total_sales_by_region INMEMORY MEMCOMPRESS FOR CAPACITY HIGH PRIORITY HIGH AS SELECT count(sales_amount) total_sales, region FROM sales GROUP BY region; SQL> SELECT total_sales, region FROM mv_total_sales_by_region; -- A high-speed query when reading directly from the in-memory area compared to reading from the buffer cache Listing 07 It's worth noting that we can monitor the in-memory area using the following views: V$IM_SEGMENTS , V$IM_USER_SEGMENTS and V$IM_COLUMN_LEVEL . Conclusion In this article, I've shown a nice feature that comes with Oracle 12c Release 1 that enables managing both transactional and analytical workloads from the same database instance without redesigning your database schema or deploying an independent data warehouse or data mart. Now you can apply these concepts and real-world scripts to your own BI scenarios with Oracle database environments.
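As a quick illustration of the monitoring views mentioned just before the conclusion, a minimal check of what has been populated into the in-memory area might look like this (a sketch only, using documented 12.1 columns):

SELECT segment_name
     , inmemory_priority
     , inmemory_compression
     , populate_status
  FROM v$im_segments;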
Wiki Page: Enterprise Manager General Monitoring - Part II
Written by Porus Homi Havewala We are looking at the General Monitoring capabilities of Enterprise Manager, as experienced by Enterprise Manager Administrators or Cloud Administrators. Part I was here . In Part I, we had selected Host.. Monitoring.. Program Resource Utilization from the Host Target menu, and had started to set thresholds for each of the Program Names and Owner combinations. You are placed on the Metric and Collection Settings page. On this page, you can choose to view "All Metrics" or "Metrics with thresholds". This is where you specify the warning and critical thresholds we discussed earlier. The thresholds are supplied out-of-the-box for a number of metrics, but they can be changed depending on your company requirements. For example, a company may decide that 5% is too risky as a critical threshold for Filesystem Space available, and instead set it to 10%. The collection schedule can also be changed to be more frequent, but this is generally not recommended, since the Enterprise Manager Agent on the Target Server will increase its CPU Utilization by collecting the required metrics more often. The repository size may also increase as a result. If you want to set the Program Name and Owner combination thresholds, select "All Metrics". Then scroll down to the "Program Resource Utilization" entries. Click on the Multiple Edit icon - this icon, showing a bunch of pencils, means that multiple values can be specified. On the "Edit Advanced Settings" Page that appears, you can set your Program name and owner combinations, and specify the Warning and Critical thresholds. When you are back on the Metric and Collection Settings page, click on the "Other Collected Items" tab. We can see that a number of items are being collected such as CPUs, File Systems, Hardware, Installed OS Patches, Network Interface Cards, Open Ports, Operating System Properties, Software and so on. The collection schedule is set to every 24 hours. Click on one of the links. On this page, the Collection Schedule can be Enabled or Disabled. The default frequency is every 24 hours, but this can be changed. You can also specify if the Metric Data that is collected is to be used just for Alerting, or for Historical Trending as well. The upload interval can be changed - this affects which collection is uploaded, whether every one (1) or after a certain number of collections. The metrics that are affected by this collection are also seen. Select Host.. Monitoring.. All Metrics from the Host Target Menu. This displays the Metrics page in a new screen format that makes it easier to search for particular Metrics, and to disable/enable/modify collection schedules and warning/critical thresholds. Select the Buffer Activity Metric. You can click on any of the individual metrics in this group to modify the Thresholds, if they are not set. At the top of the screen, click on the Modify button to change the Collection schedule for this entire group of Metrics. As before, the Data Collection can be enabled or disabled. The Collection Frequency can be changed. The use of the Metric Data for Alerting and/or Historical Trending can also be modified, along with the Upload Interval that we have seen before. You are placed back on the All Metrics page. Various metrics can be modified on this page, such as Process, Inode, and File Tables Statistics including Number of Used File Handles, Switch/Swap Activity including System Swapins/Swapouts per second, Process Context Switches per second, Number of Logins for users, and so on. 
Incident Manager The Incident Manager is a new feature in Enterprise Manager Cloud Control 12c. You can centrally manage all the incidents that are grouped together in Problems, so that the administrative burden of looking at each incident separately is reduced. For example, there may be multiple Internal Errors ( ORA-600 ) of a certain type, such as [4136] , in the database. These multiple incidents are grouped together into a single Problem. You can acknowledge the problem, escalate it, raise the priority, assign to an owner, and so on. Let us take a brief look at this. Select Enterprise.. Monitoring.. Incident Manager to open the Incident Management system. We can see that there is a Problem to do with the ORA 600 [4136]. The number of incidents is shown as 2, meaning that the ORA 600 error has occurred twice. Notice there is no Owner of the problem. The first step is to acknowledge the problem by clicking on the Acknowledge link. After the problem is acknowledged, the person who acknowledged it becomes the Owner of the Problem. The next step is to click on the "Manage" button. This brings up the following screen. The Status can be changed to New, Work in Progress or Resolved. The Problem can be assigned to a different owner at this stage. We have selected DB3RDPARTY . The Priority can also be changed on this screen to Urgent, Very High, High, Medium or Low. The Escalated field can be changed to either of Levels 1 to 5. A comment can also be added. We have added "Please investigate this issue". Click on OK. This updates the tracking attributes of this problem. You are placed back on the Incident Manager page. The Problem now appears as Escalated to Level 2, with Priority as Urgent, Status as Work in Progress, and a new Owner who has not yet acknowledged the problem. We will continue looking at the Incident Manager and other general monitoring capabilities of Enterprise Manager in Part III of this article.
Wiki Page: Enterprise Manager General Monitoring - Part III
Written by Porus Homi Havewala We are looking at the General Monitoring capabilities of Enterprise Manager, as experienced by Enterprise Manager Administrators or Cloud Administrators. Part II was here . In Part II, we were on the Incident Manager page, and had escalated a problem. In the Guided Resolution section, click on "Support Workbench: Problem Details" under Diagnostics. This brings up the Enterprise Manager Support Workbench for this problem. This is a good feature that allows you to package problems and send them to My Oracle Support (MOS) for further analysis. The Enterprise Manager Support Workbench Problem Details screen is seen below. We can see that the Incident has actually occurred 8 times, out of which 6 incidents occurred much earlier. Click on the "Incident Packaging Configuration" link at the bottom of the screen. On this page you can edit the packaging settings for the Incidents. This affects the selection of Incidents and related files from a problem at the time of packaging. In our case, the Incident Cutoff Date is set to 90 days, so the older 6 Incidents in this problem will not be included. The Incident Metadata Retention Period and Files Retention Period can also be specified. Click on OK to continue. You are back on the Incident Manager page for this problem. Now, in the "Investigate and Resolve" section, click on "Package the Problem" under the heading "Collect and Send Diagnostic Data". It is possible to select either Quick Packaging, or Custom Packaging. In the latter case, you can perform additional activities such as editing the package contents, generating additional dumps and so on. Click on Continue. Here, we create a new package or add to an existing package. Click on the OK button. This creates the package successfully. The package can now be customized to add further Incidents or Files, or even additional Problems. You can also scrub user data by copying out Files, editing the contents, and then copying the Files back in. You can add Additional Dumps or External Files as Additional Diagnostic Data. Then, finish contents preparation. You can generate the Upload file and send it to My Oracle Support (MOS). This push-button automation of collecting files pertaining to an issue and sending to My Oracle Support is certainly helpful to administrators. It saves time and effort. Searching for Logs You can search for different types of messages (such as Incident Error, Error, Warning, Notification or Trace) in the Logs of various Middleware targets (such as Oracle WebLogic Server, Oracle BI Server etc.) This is very useful in trouble shooting. Select Enterprise..Monitoring..Logs from the Enterprise Manager menu. Go through the steps to perform a search. The results appear as follows. We have completed the search for Notification type messages (informational messages) in a WebLogic server target. The messages are displayed appropriately. Note that error messages require Administrator intervention. You can export the messages to a file if you wish. Blackouts and Corrective Actions Select Enterprise..Monitoring..Blackouts, if you want to suspend monitoring on any target in Enterprise Manager. This is important for the sake of maintenance operations. For example, if you were shutting a database down to apply a patch, you would not want Critical Alerts to be sent saying that the database is down. This is because you know it is down for the sake of applying the patch. When a target is under blackout, no Alert will be raised. 
Remember to take the target out of Blackout once the maintenance operation is completed. Select Enterprise..Monitoring..Corrective Actions. These are actions you can specify to resolve Critical Alerts automatically. There is a Corrective Action library in Enterprise Manager. Suppose you specify a corrective action to start up a database; it would appear as follows. The Corrective Action is of type "Startup Database". The parameters specify that the database is to be opened in normal mode, using the default initialization file. In the Access Tab, you can also specify the Administrators and Roles that have access to this Corrective Action to resolve any of their Critical Alerts. The different types of corrective actions possible are OS command, SQL Script, Startup Database, Shutdown Database, OPatch Update, Log Rotation, RMAN script, Statspack Purge and so on, including Multi-Task actions. As the next step, we will see how to apply a Corrective Action to a typical database error that can be alerted by Enterprise Manager. We will also look at Metric Extensions and Monitoring Templates, and then other general monitoring capabilities of Enterprise Manager in Part IV of this article.
Wiki Page: Oracle Enterprise Manager Cloud Control 13c:- Oracle Exadata Database Machine Discovery
Written by Syed Jaffar Hussain Summary Managing, monitoring and maintaining the Oracle Exadata Database Machine full stack is not an easy task; it demands a high level of technical skill and comprehensive knowledge of the various layers of Exadata, such as networking, database, RAC, storage cells, switches, etc. Oracle Enterprise Manager (OEM) Cloud Control 13c delivers easy management, monitoring and maintenance capabilities for Exadata Database Machine and all its components. The purpose of this paper is to summarize the procedures to achieve the following actions: Creating an OEM repository database Installation and Configuration of OEM Cloud Control 13c Deploying OEM Agent software to the source (compute nodes) Exadata Database Machine discovery Discovering additional sources, such as Grid Infrastructure, Listeners, Databases Creating OEM repository database with DBCA Prior to installing and configuring OEM Cloud Control 13c, as a best practice, it is recommended to set up a repository database beforehand. This is best accomplished using a predefined database template. It is available as a complimentary resource labeled “Database Template (with EM 13.1.0.0 repository pre-configured) for Installing Oracle Enterprise Manager Cloud Control 13c Release 1 (13.1.0.0)”. OEM then uses this repository database to store all its EM-related data. Although the database can be created during EM installation, it is best advised to create a database in advance using the EM database template, which can be downloaded from the Oracle site. Download DB template To download the database template, go to the Oracle site, choose Oracle Enterprise Manager Downloads and click on Download DB Templates , as shown below: After downloading the zip file, unzip it under the $ORACLE_HOME/assistants/dbca/templates location on the host where you want to have the database. The source can be the host where you want to have your OEM configured or it can be a separate host. Create DB with preconfigured database template To create a repository database, use the easy and most popularly used tool, DBCA. Use the EM13 template that fits your business requirements as the Database Template . Select the right-sized database template as per the need: Large deployments, Medium deployments, Small deployments , as shown below: You can input any Database name as per your standard naming convention. Go ahead and complete the rest of the DB creation process, which is typically the same as how a traditional DB is created using the DBCA tool. Installation and Configuration of OEM Cloud Control 13c After database creation, it's time to start the OEM Cloud Control 13c installation/configuration procedure. Launch the installation procedure by executing ./em13100_linux64.bin (the file name depends on the platform you choose) and continue with the installation. Ensure all prerequisite checks pass. If any fail, fix them and proceed further. Choose the Advanced option from the Installation Types screen, as shown below: Input the Installation Details for the Middleware Home Location and Agent Base directory, as shown below: If you want to install any non-default plug-ins, as displayed in the screenshot below, select them and hit the Next button. The installation software will warn when you select "Oracle Engineered System Healthchecks", saying it has become obsolete; however, it will be required later while installing agents on the Exadata.
If the goal is to manage Exadata Machines, you might want to select this plug-in and ignore the warning at this stage. Provide the EM repository database connection details (the database created earlier). See the image below: Ensure the following ports are accessible to the DB nodes. Review the details below and hit the Install button to start the OEM installation. After a successful OEM installation, launch the main screen through your browser and enter the user credentials, as shown below: Exadata Database Machine Discovery Before you start the Exadata Database Machine discovery process, it is highly recommended to run through the following prerequisites thoroughly for a smooth and successful discovery. exadataDiscoveryPreCheck Script To avoid running into various configuration mismatch issues while discovering an Exadata machine, Oracle developed the exadataDiscoveryPreCheck.pl script, which helps customers capture the most common configuration problems. Download the script from Prerequisite script for Exadata Discovery in Oracle Enterprise Manager Cloud Control 12c and 13c (Doc ID 1473912.1) . Once you have downloaded the script on the source system, run it as the oracle user, as demonstrated hereunder: $ export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_home1 $ $ORACLE_HOME/perl/bin/perl exadataDiscoveryPreCheck.pl For the complete usage, look at the syntax below: Review the output and ensure you address all the concerns that have been reported in it. Setup ILOM Service Processor User (oemuser) An ILOM service user ( oemuser ) must be created prior to Exadata Database Machine discovery. This user is essential for OEM agent communication with an ILOM service processor. The oemuser must be created on all DB nodes, as explained below: On the first DB node, log in as root and connect to ILOM as the root user, as in the example below: $ ssh db01-ilom -l root Go to the ILOM users and perform the following set of actions: -> cd /SP/users -> create oemuser (enter the password at the prompt) -> cd /oemuser Define the role (cro or aucro) as shown below -> set role = 'aucro' -> exit Repeat the above steps to create the oemuser and set the role on the rest of the DB nodes. After the oemuser has been successfully set up on all Exadata DB nodes, run the following command to test the ILOM user setup: $ ipmitool -I lanplus -H db01-ilom -U oemuser -P <password> -L USER sel list last 10 Monitoring credentials As part of the Exadata discovery process, you will be required to enter the host credentials for various Exadata components, so have them beforehand: Cell and Compute node host user names and passwords Cell root password ILOM (oemuser) InfiniBand Switch (nm2user password) Deploy Management Agent on compute nodes Deploying the Management Agent to the respective DB nodes is another important step that needs to be completed prior to discovering the Exadata Database Machine. This segment will demonstrate how to push/install the Agent Software across all compute (DB) nodes of an Exadata database machine. From Setup -> Add Target -> Add Targets Manually , hit the Install Agent on Host button, as shown in the screenshot below: Add all compute (DB) node hostnames on the page below. The hostnames must be fully qualified; also select the right Platform for the Agent Software. Click Next to proceed further. Review the installation details and input values in the next screen and start the Management Agent deployment across all DB nodes.
Discovery of Exadata Database Machine This segment will focus on the procedure required for the Exadata Database Machine discovery process with OEM Cloud Control 13c. Ensure the Management Agent is successfully deployed across all DB nodes of the Exadata, as it is critical for the Exadata discovery process. Launching discovery procedure Log in to OEM Cloud Control 13c and, from Setup, choose the Add Targets Manually option. Then click on Add Using Guided Process , as shown below: Select the Oracle Exadata Database Machine option from the list shown on the page, as shown below, and click on the Add button: Select the options highlighted in the picture below and hit the Discover Targets button to initiate the discovery process: Note : The Discover newly added hardware components in an existing Database Machine as targets option is typically used to discover any newly added hardware of an existing Exadata Database Machine, or can be used to rediscover the components of an already discovered Exadata Database Machine. Pick the Agent URL on the screen below by clicking the option: Add the Schematic File Location for all agent URLs, set the oracle user credentials and hit the Test Connection button to test the connectivity. Optionally, check the Save As option to save the credentials in EM for future usage. Note: Possibly, you might encounter Invalid username and/or password LOG. Local Authentication Failed… Attempt PAM authentication… PAM failed with error… Authentication failure (as shown in the picture below). One of the following workarounds could be used to resolve the issue: As the root user on the DB node, go to /etc/pam.d and copy the sshd file as emagent under the same location. Or use one of the solutions mentioned in How to Configure the Enterprise Management Agent Host Credentials for PAM and LDAP (Doc ID 422073.1) , related to the OS you are running. Once the above problem is resolved, hit the Next button to proceed further. Enter the user credentials for InfiniBand discovery, as shown in the example and screenshot below: InfiniBand Switch Host Name : Username : nm2user Password : welcome1 Hit the Test Connection button to test the user credentials, and optionally save the credentials in EM for future usage. Once the test is successful, hit the Next button to continue. You should see the picture below when there are no issues with the prerequisite check. Hit the Next button to proceed further. The Database Machine Discovery Components page should display all the components' hostnames and possibly the Management IPs of the components. Hit the Next button to proceed further. In the subsequent screens, OEM lists all the monitoring agents' information, primary and secondary, and shows the Review information for all components. When the Next button is hit on the Review page, OEM starts the discovery process, as shown below: For the Management Agent on the DB nodes, set the credentials and test the connectivity, as shown below: Enter the credentials for the Storage Server and InfiniBand Switch, as shown below: For the PDU and Cisco Switch, use the following (admin) user credentials. The community string for the SNMP credentials (PDU & Cisco) is public . Test the connection and click on Next to proceed. After completing the discovery process, you will have a summary page of all the components (System targets, compute nodes, cell storage, InfiniBand switch, Ethernet switch, PDU, etc.) that have been discovered. Hit the Launch DB Machine Home button to launch the Exadata Database Machine home in your OEM.
Discover Grid Infrastructure, Listeners, ASM & Databases Upon successful discovery of the Exadata Database Machine, the subsequent job is to discover the Grid Infrastructure (Clusterware & ASM instances), Listeners and Databases configured on the DB nodes. As a first step, log on to OEM, if not already logged in. Then, select the Add Target Manually option and click on the Add Using Guided Process button, as shown in the picture below: Choose Oracle Cluster and High Availability Service from the list displayed on the Add Using Guided Process screen and hit the Add button, as shown below: Specify the compute host name, probably the first DB node name, and hit the Discover Target button, as shown below: OEM then discovers the Cluster details, as shown below. Hit the Save button to store the Cluster configuration in EM. The following confirmation will appear in a popup window; click on the Close button to close the window. After adding the cluster and high availability details, let's now discover the Listeners, ASM instances and Databases registered against the cluster. Choose the Oracle Database, Listener, ASM option from the list and hit the Add button, as shown below: Specify the cluster name that was discovered earlier, or choose the cluster from the list, and hit the Next button: OEM then starts discovering the targets and brings you to the following screen, where you will have to input the ASM instance connection credentials. Input the Password under Monitoring Credentials against each ASM instance and then click the Test Connection button. Click the Next button, review the results in the next screen and save the information, as shown below. Conclusion At this point in time, you have successfully discovered the Exadata Database Machine and all its major components. Now you can manage, monitor and maintain your Exadata through OEM Cloud Control 13c. You may now configure alerts to receive notifications on any particular action, either through email, SMS or a combination of both. The following example shows the successful discovery of the Exadata boxes. References Prerequisite steps before discovering Exadata DB machine within Oracle Enterprise Manager 12c (Doc ID 1437434.1) Prerequisite script for Exadata Discovery in Oracle Enterprise Manager Cloud Control 12c and 13c (Doc ID 1473912.1) http://www.toadworld.com/platforms/oracle/w/wiki/11418.discover-exadata-database-machine-in-oracle-enterprise-manager-12c Oracle Enterprise Manager 12c: Oracle Exadata Discovery Cookbook https://oracle-base.com/articles/13c/cloud-control-13cr1-installation-on-oracle-linux-6-and-7
Blog Post: Generate Multiple AWR Reports Quickly
Occasionally there is a need to generate multiple AWR reports for database analysis. In my case, a storage vendor will use a tool to extract data from all time periods from the AWR reports to find specific IO-related information. Here is how I generated these reports.
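This is not necessarily the exact method the author used; purely as a rough sketch of one way to do it, the following PL/SQL block writes one HTML AWR report per consecutive snapshot pair from the last day. It assumes a directory object named AWR_DIR exists and that you are licensed to use AWR.

DECLARE
  l_dbid   v$database.dbid%TYPE;
  l_inst   v$instance.instance_number%TYPE;
  l_prev   dba_hist_snapshot.snap_id%TYPE;
  l_file   UTL_FILE.FILE_TYPE;
BEGIN
  SELECT dbid INTO l_dbid FROM v$database;
  SELECT instance_number INTO l_inst FROM v$instance;
  /* One report per consecutive snapshot pair taken in the last 24 hours. */
  FOR s IN (SELECT snap_id
              FROM dba_hist_snapshot
             WHERE dbid = l_dbid
               AND instance_number = l_inst
               AND begin_interval_time > SYSTIMESTAMP - INTERVAL '1' DAY
             ORDER BY snap_id)
  LOOP
    IF l_prev IS NOT NULL THEN
      l_file := UTL_FILE.FOPEN('AWR_DIR',
                               'awr_' || l_prev || '_' || s.snap_id || '.html',
                               'w', 32767);
      /* AWR_REPORT_HTML returns the report as rows of text in a column named OUTPUT. */
      FOR r IN (SELECT output
                  FROM TABLE(DBMS_WORKLOAD_REPOSITORY.awr_report_html
                               (l_dbid, l_inst, l_prev, s.snap_id)))
      LOOP
        UTL_FILE.PUT_LINE(l_file, r.output);
      END LOOP;
      UTL_FILE.FCLOSE(l_file);
    END IF;
    l_prev := s.snap_id;
  END LOOP;
END;
/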
Blog Post: Oracle REST Data Services 3.0.5 is available
Friends, Oracle REST Data Services 3.0.5 is available for download. The link is http://www.oracle.com/technetwork/developer-tools/rest-data-services/downloads/index.html A recap of what's new in Oracle REST Data Services 3.0.x onwards: The previous version 2.0.x required Oracle Application Express (APEX) (4.2 onwards) to be installed if you wanted to define and use RESTful Services. However, in the new versions, APEX is no longer required. Oracle REST Data Services will install a specialized schema for itself when you run it the first time. Oracle Relational Database (RDBMS) tables and views can be exposed as REST API endpoints, if you use SQL Developer (4.1 onwards). So you can insert, delete, update and query table data using REST. Plus, Oracle NoSQL Database tables can now be exposed as REST API endpoints. You can insert, delete, update and query NoSQL table data using REST. You can store JSON documents in document collections managed by the database, and retrieve them as need be. The interface to the Oracle Database Document Store is provided by SODA for REST. Documentation can be found at https://docs.oracle.com/cd/E56351_01/index.htm
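For context, when you REST-enable a schema and a table from SQL Developer, it drives the ORDS PL/SQL API under the covers; a hand-written equivalent looks roughly like the sketch below. The schema, alias and table names are purely illustrative, and the block assumes the ORDS package is installed in the database.

BEGIN
  /* Enable the schema for ORDS, mapped under the /hr URL path. */
  ORDS.ENABLE_SCHEMA(p_enabled             => TRUE,
                     p_schema              => 'HR',
                     p_url_mapping_type    => 'BASE_PATH',
                     p_url_mapping_pattern => 'hr',
                     p_auto_rest_auth      => FALSE);

  /* AutoREST-enable a single table so it can be queried and modified over REST. */
  ORDS.ENABLE_OBJECT(p_enabled      => TRUE,
                     p_schema       => 'HR',
                     p_object       => 'EMPLOYEES',
                     p_object_type  => 'TABLE',
                     p_object_alias => 'employees');

  COMMIT;
END;
/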
Blog Post: Why you need SQL more than PL/SQL
I am often asked variations of the question "which programming language should I learn?" I often refer to lists like the TIOBE index which ranks programming languages by usage. However, their definition excludes SQL, and SQL is a very useful skill. Don't get me wrong, I love PL/SQL and have been a PL/SQL programmer myself for 20 years. But PL/SQL is only used in the Oracle database, while SQL is the lingua franca for data analysis. At codeacademy.com , they want to teach everyone to program. They have thought carefully about what you need to know, and out of their 11 web programming lessons, there are: one on Java, one on AngularJS, two on Ruby, and three on SQL. If you want to build a career in IT, you need to know SQL.
Blog Post: Why You Should ALWAYS Use Packages
Received this request via email today: Hi Steven, In our shop we have an ongoing debate regarding when to use packages and when to create a collection of procedures/functions. We are not aligned at all; some people here tend to ignore packages for some strange reasons, and I would like to nudge them in the right direction, so that's where you come into play. One of my co-workers tried to look into a couple of your books for your argumentation for the use of packages, and could not, to her astonishment, see any recommendations from you regarding this. Can you point us in the right direction? Surely you must have debated this issue somewhere. This came as a bit of a surprise to me (inability to find recommendations from me on packages). So I checked, and quickly found and reported back: In my humongous 6th edition Oracle PL/SQL Programming , there is a whole chapter on packages (18) and on page 651 I offer “Why Packages?”. In my Best Practices book I have a whole chapter on Packages and on page 207 I make the argument to ALWAYS use packages, not schema level subprograms. In case, however, you don't have the books handy, here are my high-level thoughts on packages. First, what are some of the key benefits of using packages? Enhance and maintain applications more easily As more and more of the production PL/SQL code base moves into maintenance mode, the quality of PL/SQL applications will be measured as much by the ease of maintenance as by overall performance. Packages can make a substantial difference in this regard. From data encapsulation (hiding all calls to SQL statements behind a procedural interface to avoid repetition), to enumerating constants for literal or “magic” values, to grouping together logically related functionality, package-driven design and implementation lead to reduced points of failure in an application. Improve overall application performance By using packages, you can improve the performance of your code in a number of ways. Persistent package data can dramatically improve the response time of queries by caching static data, thereby avoiding repeated queries of the same information (warning: this technique is not to be relied upon in stateless applications). Oracle’s memory management also optimizes access to code defined in packages (see OPP6 Chapter 24 for more details). In addition, when you invoke one subprogram in a package, the entire package is loaded into memory. Assuming you have designed your packages well (see "Keep your packages small and narrowly focused" below), this will mean that the related subprograms you call next are already in memory. Shore up application or built-in weaknesses It is quite straightforward to construct a package on top of existing functionality where there are drawbacks. (Consider, for example, the UTL_FILE and DBMS_OUTPUT built-in packages in which crucial functionality is badly or partially implemented.) You don’t have to accept these weaknesses; instead, you can build your own package on top of Oracle’s to correct as many of the problems as possible. For example, the do.pkg script I described in OPP6 Chapter 17 offers a substitute for the DBMS_OUTPUT.PUT_LINE built-in that adds an overloading for the XMLType datatype. Sure, you can get some of the same effect with standalone procedures or functions, but overloading and other package features make this approach vastly preferable. Minimize the need to recompile code As you will read below, a package usually consists of two pieces of code: the specification and body.
External programs (that is, programs not defined in the package) can only call programs listed in the specification. If you change and recompile the package body, those external programs are not invalidated. Minimizing the need to recompile code is a critical factor in administering large bodies of application logic.

And I finish up with a few recommendations for writing PL/SQL code:

Avoid writing schema-level procedures and functions. Always start with a package. Even if there is just one program in the package at the moment, it is very likely that there will be more in the future. So "put in the dot at the start," and you won't have to add it later.

Use packages to group together related functionality. A package gives a name to a set of program elements: procedures, functions, user-defined types, variable and constant declarations, cursors, and so on. By creating a package for each distinct area of functionality, you create intuitive containers for that functionality. Programs will be easier to find, and therefore less likely to be reinvented in different places in your application.

Keep your packages small and narrowly focused. It doesn't do much good to have just three packages, each of which has hundreds of programs; it will still be hard to find anything inside that bunch of code. Instead, keep your packages small and focused: all subprograms in a package should be related in some fashion that is reflected by the package name, and should be commonly used together or in close proximity. And if you do have a monster package that you need to break up into more manageable pieces, be sure to check out the new-to-12.1 ACCESSIBLE_BY feature.

Related links: LiveSQL Script, ACCESSIBLE_BY Documentation, ORACLE-BASE Article.
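To make the specification/body split concrete, here is a minimal sketch. The emp_api package, its constant and its function are made up for illustration (they are not from the post or the books mentioned above); the point is simply that callers depend only on the specification, so the body can be changed and recompiled without invalidating them.

---// ---// hypothetical package: callers see only the specification //--- ---//
create or replace package emp_api as
   c_default_dept constant number := 10;   -- a "magic value" given a name, per the encapsulation argument above
   function full_name (p_first in varchar2, p_last in varchar2) return varchar2;
end emp_api;
/

create or replace package body emp_api as
   function full_name (p_first in varchar2, p_last in varchar2) return varchar2
   is
   begin
      return p_first || ' ' || p_last;     -- implementation detail hidden from callers
   end full_name;
end emp_api;
/

---// ---// callers reference only what the specification exposes //--- ---//
select emp_api.full_name('Ada', 'Lovelace') from dual;

Recompiling just the package body (for example, to change the formatting inside full_name) leaves objects that call emp_api valid; changing the specification, on the other hand, can invalidate dependent code, which is exactly why the advice is to keep the specification stable and put the churn in the body.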
↧
Forum Post: Export data to excel 2007 (xlsx)
Hi. For the past two days, when I export the result of a query to Excel 2007, I receive this error message: "toad error tag stack not empty worksheet sheetdata row c". There are about 120,000 rows. Sometimes it works perfectly, sometimes it doesn't. Can anybody help? TOAD FOR ORACLE 12.1.0.22. Thank you.
↧
Blog Post: Regular expressions: literals and metacharacters.
We continue the series of articles on using regular expressions in Oracle. As a brief recap: in a regular expression we combine literals and metacharacters. Literals are taken "as they are"; metacharacters, on the other hand, have a special meaning. For example, the metacharacter "." matches any character. What happens if I want to search for a string that contains the literal "."? If I use the dot symbol, Oracle will treat it as a metacharacter, and therefore the result will not be what I expect:

select texto, 'a.c' regexp, case when regexp_like(texto, 'a.c') then 'Hay coincidencia' else 'No hay coincidencia' end "COINCIDENCIA?" from prueba_regexp

TEXTO REG COINCIDENCIA? ----------------------------------- --- ---------------- abc a.c Hay coincidencia a.c a.c Hay coincidencia aXc a.c Hay coincidencia

(In the output, 'Hay coincidencia' means "there is a match" and 'No hay coincidencia' means "no match".)

How do we resolve this situation? We need a tool that lets us indicate that a metacharacter should not be treated as such, and should instead be interpreted as a literal. As it happens, that tool is itself a metacharacter. It is known as the "escape" metacharacter and is represented by the backslash ("\"). By using the escape metacharacter we indicate that the metacharacter that follows it must be interpreted as a literal and not as a metacharacter. Returning to our example, we "rebuild" our regular expression as follows: "a\.c"

select texto, 'a\.c' regexp, case when regexp_like(texto, 'a\.c') then 'Hay coincidencia' else 'No hay coincidencia' end "COINCIDENCIA?" from prueba_regexp

TEXTO REGE COINCIDENCIA? ----------------------------------- ---- ------------------- abc a\.c No hay coincidencia a.c a\.c Hay coincidencia aXc a\.c No hay coincidencia

A particular situation arises when the backslash itself appears in our character strings and we want to treat it as a literal. Using two consecutive backslashes lets me "escape the escape":

select texto, 'a\\c' regexp, case when regexp_like(texto, 'a\\c') then 'Hay coincidencia' else 'No hay coincidencia' end "COINCIDENCIA?" from prueba_regexp

TEXTO REGE COINCIDENCIA? ----------------------------------- ---- ------------------- abc a\\c No hay coincidencia a.c a\\c No hay coincidencia aXc a\\c No hay coincidencia a\c a\\c Hay coincidencia

See you next time!
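As a small addition to the examples above (the sample string and the [dot] marker below are made up, not from the original article), the same escaping rule matters with REGEXP_REPLACE: an unescaped dot matches any character, so far more gets replaced than intended, while the escaped pattern touches only the literal "a.c".

select regexp_replace('abc a.c aXc', 'a.c', '[dot]') dot_as_metacharacter,   -- replaces abc, a.c and aXc
       regexp_replace('abc a.c aXc', 'a\.c', '[dot]') dot_escaped            -- replaces only the literal a.c
from dual;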
↧
Blog Post: What’s new in Oracle Compute Cloud Service
What's new for the Oracle Compute Cloud Service? First, both the metered and non-metered options of Oracle Compute Cloud Service are now generally available. Second, you no longer need to subscribe to 50 or 100 OCPU configurations; you can now subscribe to as little as 1 OCPU. Third, bursting is now available for non-metered subscriptions. This means you can provision resources up to twice the subscribed capacity if your business demands require it; any additional usage is charged per hour and billed each month. Pricing information can be found at https://cloud.oracle.com/compute (click on the Pricing tab). Another new feature is that Oracle now provides images for Microsoft Windows Server 2012 R2 Standard Edition and Oracle Solaris 11.3. From now on, you can clone storage volumes by taking a snapshot of a storage volume and using it to create new storage volumes. You can also clone an instance by taking a snapshot and then using the resulting image to launch new instances. You can now also increase the size of a storage volume, even while it is attached to an instance. Lastly, the command-line tool for uploading custom images to Oracle Storage Cloud Service (renamed to uploadcli, previously called upload-img) now supports various operating systems. For more information, there is a tutorial, "Uploading a Machine Image to Oracle Storage Cloud Service", that you can access at this link.
↧
Blog Post: Oracle 12c: TRUNCATE can be cascaded down to the lowest level of hierarchy
Do you recall the following error?

---// ---// ORA-02266 when truncating table //--- ---// SQL> truncate table parent; truncate table parent * ERROR at line 1: ORA-02266: unique/primary keys in table referenced by enabled foreign keys

Well, as you know, this error occurs when we try to truncate a table whose unique/primary key is referenced by an active foreign key from another table. The solution to this issue is to disable the active foreign keys in the referencing tables before we can actually truncate the parent table. This is a time-consuming task, as we first need to identify all the tables and their foreign keys that reference the primary key of the table to be truncated. However, with Oracle 12c, we don't need to do that extra work of identifying and disabling foreign keys to allow a truncate on the parent table. Oracle 12c has introduced an additional CASCADE clause for the TRUNCATE TABLE statement, which allows us to truncate a parent table by recursively truncating all the associated child tables. This new clause requires that the referential constraints be defined with the ON DELETE CASCADE option. Let's walk through a quick demonstration to gain insight into this feature. In this demonstration, I am creating three tables, PARENT, CHILD and GRAND_CHILD, with a parent-->child-->grand_child relationship as shown below.

---// ---// parent table //--- ---// SQL> create table parent 2 ( 3 p_id number, 4 p_name varchar(20), 5 constraint parent_pk primary key (p_id) 6 ); Table created.

---// ---// child table (references parent table) //--- ---// SQL> create table child 2 ( 3 c_id number, 4 p_id number, 5 c_name varchar(20), 6 constraint child_pk primary key (c_id), 7 constraint parent_child_fk foreign key (p_id) 8 references parent (p_id) on delete cascade 9 ); Table created.

---// ---// grand_child table (references child table) //--- ---// SQL> create table grand_child 2 ( 3 gc_id number, 4 c_id number, 5 gc_name varchar(20), 6 constraint grand_child_pk primary key (gc_id), 7 constraint gc_child_fk foreign key (c_id) 8 references child (c_id) on delete cascade 9 ); Table created.

As you can observe, I have defined the referential constraints (parent_child_fk and gc_child_fk) with the ON DELETE CASCADE option. This is a primary requirement for the TRUNCATE..CASCADE command to work. Now, let's populate these tables with some data as shown below.

---// ---// populating parent table //--- ---// SQL> insert into parent values (&p_id,'&p_name'); Enter value for p_id: 1 Enter value for p_name: Amar old 1: insert into parent values (&p_id,'&p_name') new 1: insert into parent values (1,'Amar') 1 row created. SQL> / Enter value for p_id: 2 Enter value for p_name: Akbar old 1: insert into parent values (&p_id,'&p_name') new 1: insert into parent values (2,'Akbar') 1 row created. SQL> / Enter value for p_id: 3 Enter value for p_name: Anthony old 1: insert into parent values (&p_id,'&p_name') new 1: insert into parent values (3,'Anthony') 1 row created. SQL> commit; Commit complete.

---// ---// populating child table //--- ---// SQL> insert into child values (&c_id,&p_id,'&c_name'); Enter value for c_id: 1 Enter value for p_id: 1 Enter value for c_name: Arun old 1: insert into child values (&c_id,&p_id,'&c_name') new 1: insert into child values (1,1,'Arun') 1 row created. SQL> / Enter value for c_id: 2 Enter value for p_id: 2 Enter value for c_name: Sahid old 1: insert into child values (&c_id,&p_id,'&c_name') new 1: insert into child values (2,2,'Sahid') 1 row created.
SQL> / Enter value for c_id: 3 Enter value for p_id: 3 Enter value for c_name: John old 1: insert into child values (&c_id,&p_id,'&c_name') new 1: insert into child values (3,3,'John') 1 row created. SQL> commit; Commit complete.

---// ---// populating grand_child table //--- ---// SQL> insert into grand_child values (&gc_id,&c_id,'&gc_name'); Enter value for gc_id: 1 Enter value for c_id: 1 Enter value for gc_name: Rahul old 1: insert into grand_child values (&gc_id,&c_id,'&gc_name') new 1: insert into grand_child values (1,1,'Rahul') 1 row created. SQL> / Enter value for gc_id: 2 Enter value for c_id: 2 Enter value for gc_name: Irfan old 1: insert into grand_child values (&gc_id,&c_id,'&gc_name') new 1: insert into grand_child values (2,2,'Irfan') 1 row created. SQL> / Enter value for gc_id: 3 Enter value for c_id: 3 Enter value for gc_name: Remo old 1: insert into grand_child values (&gc_id,&c_id,'&gc_name') new 1: insert into grand_child values (3,3,'Remo') 1 row created. SQL> commit; Commit complete.

We have populated the tables with the following data:

---// ---// data from parent table //--- ---// SQL> select * from parent; P_ID P_NAME ---------- -------------------- 1 Amar 2 Akbar 3 Anthony

---// ---// data from child table //--- ---// SQL> select * from child; C_ID P_ID C_NAME ---------- ---------- -------------------- 1 1 Arun 2 2 Sahid 3 3 John

---// ---// data from grand child table //--- ---// SQL> select * from grand_child; GC_ID C_ID GC_NAME ---------- ---------- -------------------- 1 1 Rahul 2 2 Irfan 3 3 Remo

---// ---// related data from all tables //--- ---// SQL> select p_name Parent, c_name Child, gc_name "Grand Child" 2 from parent p, child c, grand_child g 3 where p.p_id=c.p_id and c.c_id=g.c_id; PARENT CHILD Grand Child -------------------- -------------------- -------------------- Amar Arun Rahul Akbar Sahid Irfan Anthony John Remo

Now, if we try to truncate the PARENT table, we are not allowed to, as it has records that are referenced by records in the CHILD table. Likewise, we are not allowed to truncate the CHILD table, as it has records that are referenced by records in the GRAND_CHILD table.

---// ---// truncate is not allowed on tables with active referenced keys //--- ---// SQL> truncate table parent; truncate table parent * ERROR at line 1: ORA-02266: unique/primary keys in table referenced by enabled foreign keys SQL> truncate table child; truncate table child * ERROR at line 1: ORA-02266: unique/primary keys in table referenced by enabled foreign keys

Here comes the killer feature of 12c: the TRUNCATE command with the CASCADE clause. When we truncate a table with the CASCADE option, Oracle recursively truncates the table and all tables below it in the referential hierarchy (children, grandchildren, and so on), as shown below.

---// ---// truncate table with cascade option //--- ---// SQL> truncate table parent cascade; Table truncated.

---// ---// records after truncate //--- ---// SQL> select * from parent; no rows selected SQL> select * from child; no rows selected SQL> select * from grand_child; no rows selected

As we can observe, although we truncated only the PARENT table, Oracle internally truncated the CHILD and GRAND_CHILD tables as well. This is because the PARENT table is referenced by the CHILD table, which in turn is referenced by the GRAND_CHILD table. With this new cool feature, there is a definite reason for DBAs to rejoice :)
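As a small addition not shown in the original demonstration, the same schema can illustrate that the cascade works strictly downward from the table being truncated. The sketch below assumes the three tables have been repopulated after the previous truncate; truncating CHILD with CASCADE empties CHILD and GRAND_CHILD but leaves PARENT untouched.

---// ---// hypothetical follow-up: cascade goes down the hierarchy, never up //--- ---//
truncate table child cascade;         -- truncates CHILD and, via gc_child_fk, GRAND_CHILD

select count(*) from parent;          -- the repopulated PARENT rows are still there
select count(*) from child;           -- expected: 0
select count(*) from grand_child;     -- expected: 0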
↧