
Blog Post: Upgrading TFA to use the RAC and DB Support Tools

If you have installed Grid Infrastructure 12.1.0.2 you will notice that TFA is already included there; in fact, TFA has been bundled with Oracle GI since 11.2.0.4. The version that comes with GI 12.1.0.2 is 12.1.2.0.0, as you can see below:

[root@rac1 ~]# /u01/app/12.1.0/grid/tfa/bin/tfactl print config
.------------------------------------------------------. | rac1 | +-----------------------------------------+------------+ | Configuration Parameter | Value | +-----------------------------------------+------------+ | TFA version | 12.1.2.0.0 | | Automatic diagnostic collection | OFF | | Trimming of files during diagcollection | ON | | Repository current size (MB) in rac1 | 0 | | Repository maximum size (MB) in rac1 | 1194 | | Inventory Trace level | 1 | | Collection Trace level | 1 | | Scan Trace level | 1 | | Other Trace level | 1 | | Max Size of TFA Log (MB) | 50 | | Max Number of TFA Logs | 10 | '-----------------------------------------+------------'

However, this version doesn't include options to manage tools like OSWatcher, ORAchk, etc. A simple "-h" shows that basically we can start it, stop it and print its status; nothing related to the diagnostic tools:

[root@rac1 ~]# /u01/app/12.1.0/grid/tfa/bin/tfactl -h Usage : /u01/app/12.1.0/grid/bin/tfactl [options] = start Starts TFA stop Stops TFA enable Enable TFA Auto restart disable Disable TFA Auto restart print Print requested details access Add or Remove or List TFA Users and Groups purge Delete collections from TFA repository directory Add or Remove or Modify directory in TFA host Add or Remove host in TFA diagcollect Collect logs from across nodes in cluster analyze List events summary and search strings in alert logs. set Turn ON/OFF or Modify various TFA features uninstall Uninstall TFA from this node For help with a command: /u01/app/12.1.0/grid/bin/tfactl -help [root@rac1 ~]#

Starting with TFA version 12.1.2.3.0, however, TFA includes the RAC and DB Support Tools, and instead of working with every tool separately I recommend managing all of these tools from TFA; it is always easier to work from only one place.

ANNOUNCEMENT: Starting with TFA 12.1.2.3.0, the RAC and DB Support Tools Bundle (Document 1594347.1) has been integrated into TFA. As part of the integration of the Support Tools Bundle in TFA, command line execution of each of the tools has been added into TFA Shell. This integration into TFA Shell greatly simplifies the execution of the tools by providing a common interface to each of the tools while ensuring that the output of the tools is collected by TFA Diagnostic collections.

The tools that are included starting in TFA 12.1.2.3.0 are the following:

ORAchk (formerly RACcheck) - Proactive, self-service tool to prevent rediscovery of known issues. See Doc ID 1268927.1 for additional details. ORAchk is intended for use with non-engineered systems.
EXAchk - Proactive, self-service tool to prevent rediscovery of known issues. See Doc ID 1070954.1 for additional details. EXAchk is intended for use with engineered systems.
OSWatcher (formerly OSWatcher Black Box) - Script to collect and archive OS metrics. OSWatcher is required for many reactive types of issues including instance/node evictions and performance issues. See Doc ID 301137.1 for additional details.
Procwatcher - A script used to automate and capture diagnostic output for severe database performance issues and session-level hangs. See Doc ID 459694.1 for additional details.
ORATOP - A utility allowing near real-time monitoring of databases (RAC and single instance). It is built for the Linux platform but can remotely monitor databases on ANY platform from a Linux client. See Doc ID 1500864.1 for additional details.
SQLT - A tool designed to assist in the tuning of a given SQL statement. See Doc ID 215187.1 for additional details.
DARDA - The Diagnostic Assistant (DA) tool provides a common, light-weight interface to multiple diagnostic collection tools (ADR, RDA, OCM, Explorer, and others). See Doc ID 210804.1 for additional details.
alertsummary – Generates a summary of events for one or more database or ASM alert files from all nodes.
ls – Lists all files TFA knows about for a given file name pattern across all nodes.
pstack – Takes a process stack for specified processes across all nodes.
grep – Searches alert or trace files, for a given database and file name pattern, for a search string.
summary – Gives a high-level summary of the configuration.
vi – Opens alert or trace files for a given database and file name pattern in the vi editor.
tail – Runs a tail on alert or trace files for a given database and file name pattern.
param – Shows all database and OS parameters that match a specified pattern.
dbglevel – A tool to set and unset multiple CRS trace levels with one command.
history – Shows the shell history for the tfactl shell.
changes – Reports any noted changes in the system setup over a given time period. This includes database parameters, OS parameters, patches applied, etc.

From TFA you will be able to start every tool, stop it, run it, and so on. That is what we are going to see in this article. Since TFA is most likely already included in your GI installation, all we are going to do is upgrade it and start using it.

First, download the latest TFA version from the following note: TFA Collector - Tool for Enhanced Diagnostic Gathering (Doc ID 1513912.2). At the time this article is being written, the latest version is 12.1.2.7.0.
After downloading it, the file "TFALite_v12.1.2.7.0.zip" will be on our machine and we have to transfer it to the server we are planning to upgrade:

MacBook-Pro:Downloads HDeiby$ pwd /Users/HDeiby/Downloads MacBook-Pro:Downloads HDeiby$ ls TFALite_v12.1.2.7.0.zip TFALite_v12.1.2.7.0.zip MacBook-Pro:Downloads HDeiby$ scp TFALite_v12.1.2.7.0.zip root@192.168.1.110:/u01/app/grid root@192.168.1.110's password: TFALite_v12.1.2.7.0.zip 100% 42MB 8.5MB/s 00:05 Deibys-MacBook-Pro:Downloads HDeiby$

Now let's connect to the server; the next step is to unzip the file:

[root@rac1 grid]# cd /u01/app/grid [root@rac1 grid]# [root@rac1 grid]# ls -ltr *TFA* -rw-r--r-- 1 root root 44534701 Jan 13 11:25 TFALite_v12.1.2.7.0.zip [root@rac1 grid]# [root@rac1 grid]# unzip TFALite_v12.1.2.7.0.zip Archive: TFALite_v12.1.2.7.0.zip inflating: installTFALite inflating: TFACollectorDocV121270.pdf [root@rac1 grid]# [root@rac1 grid]# ls -ltr *TFA* -rw-r--r-- 1 root root 44534701 Jan 13 11:25 TFALite_v12.1.2.7.0.zip -rw-r--r-- 1 root root 583593 Mar 4 2016 TFACollectorDocV121270.pdf -r-xr-xr-x 1 root root 44414287 Mar 4 2016 installTFALite [root@rac1 grid]#

Since TFA is already there, we don't have to specify the home directory or the JRE directory; the installer will discover the TFA that is already installed in the GI home and will ask us whether we want to upgrade it. I will use the "-local" flag, which means that in a RAC configuration TFA will be upgraded only on the node where I am connected; if you remove this flag and your environment is RAC, TFA will try to upgrade every TFA installation in your environment.

[root@rac1 grid]# ./installTFALite -local TFA Installation Log will be written to File : /tmp/tfa_install_15366_2016_04_30-11_25_07.log Starting TFA installation TFA Build Version: 121270 Build Date: 201603032146 Installed Build Version: 121200 Build Date: 201406190949 TFA is already installed. Patching /u01/app/12.1.0/grid/tfa/rac1/tfa_home... TFA patching typical install from zipfile is written to /u01/app/12.1.0/grid/tfa/rac1/tfapatch.log TFA will be Patched on Node rac1: Do you want to continue with patching TFA? [Y|N] [ Y ]: Y Applying Patch on rac1: Stopping TFA Support Tools... Shutting down TFA for Patching... Shutting down TFA oracle-tfa stop/waiting . . . . . Killing TFA running with pid 11909 . . . Successfully shutdown TFA.. Current version of Berkeley DB is 5.0.84, so no upgrade required Copying TFA Certificates... Moving config.properties.bkp to config.properties Running commands to fix init.tfa and tfactl in localhost Starting TFA in rac1... Creating Sym Link /etc/rc.d/rc0.d/K17init.tfa to /etc/init.d/init.tfa Creating Sym Link /etc/rc.d/rc1.d/K17init.tfa to /etc/init.d/init.tfa Creating Sym Link /etc/rc.d/rc2.d/K17init.tfa to /etc/init.d/init.tfa Creating Sym Link /etc/rc.d/rc4.d/K17init.tfa to /etc/init.d/init.tfa Creating Sym Link /etc/rc.d/rc6.d/K17init.tfa to /etc/init.d/init.tfa Starting TFA.. oracle-tfa start/running, process 15903 Waiting up to 100 seconds for TFA to be started.. . . . . . Successfully started TFA Process.. . . . . . TFA Started and listening for commands Enabling Access for Non-root Users on rac1... .------------------------------------------------------------.
| Host | TFA Version | TFA Build ID | Upgrade Status | +------+-------------+----------------------+----------------+ | rac1 | 12.1.2.7.0 | 12127020160303214632 | UPGRADED | '------+-------------+----------------------+----------------' [root@rac1 grid]#

If we run "-h" again, you will see that there are now more options; most of the new ones are related to managing the support tools:

[root@rac1 grid]# /u01/app/12.1.0/grid/bin/tfactl -h Usage : /u01/app/12.1.0/grid/bin/tfactl [options] = start Starts TFA stop Stops TFA enable Enable TFA Auto restart disable Disable TFA Auto restart print Print requested details access Add or Remove or List TFA Users purge Delete collections from TFA repository directory Add or Remove or Modify directory in TFA host Add or Remove host in TFA diagcollect Collect logs from across nodes in cluster collection Manage TFA Collections analyze List events summary and search strings in alert logs. set Turn ON/OFF or Modify various TFA features toolstatus Prints the status of TFA Support Tools run Run the desired support tool start Starts the desired support tool stop Stops the desired support tool syncnodes Generate/Copy TFA Certificates diagnosetfa Collect TFA Diagnostics uninstall Uninstall TFA from this node For help with a command: /u01/app/12.1.0/grid/bin/tfactl -help [root@rac1 grid]#

Now let's print the status of our TFA:

[root@rac1 grid]# /u01/app/12.1.0/grid/bin/tfactl print status .--------------------------------------------------------------------------------------------. | Host | Status of TFA | PID | Port | Version | Build ID | Inventory Status | +------+---------------+-------+------+------------+----------------------+------------------+ | rac1 | RUNNING | 16028 | 5000 | 12.1.2.7.0 | 12127020160303214632 | COMPLETE | '------+---------------+-------+------+------------+----------------------+------------------' [root@rac1 grid]#

And the status of our tools. Note that all the tools are already deployed; you don't have to "install" each tool separately, they are all there and ready to be used:

[root@rac1 grid]# /u01/app/12.1.0/grid/bin/tfactl toolstatus .-----------------------------------. | External Support Tools | +------+--------------+-------------+ | Host | Tool | Status | +------+--------------+-------------+ | rac1 | alertsummary | DEPLOYED | | rac1 | exachk | DEPLOYED | | rac1 | ls | DEPLOYED | | rac1 | pstack | DEPLOYED | | rac1 | orachk | DEPLOYED | | rac1 | sqlt | DEPLOYED | | rac1 | grep | DEPLOYED | | rac1 | summary | DEPLOYED | | rac1 | prw | NOT RUNNING | | rac1 | vi | DEPLOYED | | rac1 | tail | DEPLOYED | | rac1 | param | DEPLOYED | | rac1 | dbglevel | DEPLOYED | | rac1 | darda | DEPLOYED | | rac1 | history | DEPLOYED | | rac1 | oratop | DEPLOYED | | rac1 | oswbb | RUNNING | | rac1 | changes | DEPLOYED | | rac1 | events | DEPLOYED | | rac1 | ps | DEPLOYED | | rac1 | srdc | DEPLOYED | '------+--------------+-------------'

Only OSWatcher is running so far, so let's start another tool, Procwatcher:

[root@rac1 grid]# /u01/app/12.1.0/grid/bin/tfactl start prw Sat Apr 30 11:36:29 EDT 2016: Building default prwinit.ora at /u01/app/grid/tfa/repository/suptools/prw/root/prwinit.ora Sat Apr 30 11:36:32 EDT 2016: Starting Procwatcher as user root Sat Apr 30 11:36:32 EDT 2016: Thank you for using Procwatcher. :-) Sat Apr 30 11:36:32 EDT 2016: Please add a comment to Oracle Support Note 459694.1 Sat Apr 30 11:36:32 EDT 2016: if you have any comments, suggestions, or issues with this tool.
Procwatcher files will be written to: /u01/app/grid/tfa/repository/suptools/prw/root Sat Apr 30 11:36:32 EDT 2016 : Started Procwatcher [root@rac1 grid]#

Whoever developed this tool was clearly in a good mood and wanted us to know it, judging by the ":-)" in the output. Now let's confirm that Procwatcher was indeed started:

[root@rac1 grid]# /u01/app/12.1.0/grid/bin/tfactl toolstatus .--------------------------------. | External Support Tools | +------+--------------+----------+ | Host | Tool | Status | +------+--------------+----------+ | rac1 | alertsummary | DEPLOYED | | rac1 | exachk | DEPLOYED | | rac1 | ls | DEPLOYED | | rac1 | pstack | DEPLOYED | | rac1 | orachk | DEPLOYED | | rac1 | sqlt | DEPLOYED | | rac1 | grep | DEPLOYED | | rac1 | summary | DEPLOYED | | rac1 | prw | RUNNING | | rac1 | vi | DEPLOYED | | rac1 | tail | DEPLOYED | | rac1 | param | DEPLOYED | | rac1 | dbglevel | DEPLOYED | | rac1 | darda | DEPLOYED | | rac1 | history | DEPLOYED | | rac1 | oratop | DEPLOYED | | rac1 | oswbb | RUNNING | | rac1 | changes | DEPLOYED | | rac1 | events | DEPLOYED | | rac1 | ps | DEPLOYED | | rac1 | srdc | DEPLOYED | '------+--------------+----------' [root@rac1 grid]#
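Beyond starting and stopping tools, the same interface can run a tool on demand and gather everything into a diagnostic collection. The following is only a minimal sketch built from the sub-commands listed in the help output above (run, stop, diagcollect); the exact options and output vary by TFA version, so check "tfactl <command> -help" on your own system first:

# Run the deployed ORAchk on demand; its output lands in the TFA repository
[root@rac1 grid]# /u01/app/12.1.0/grid/bin/tfactl run orachk

# Stop Procwatcher again once you no longer need it
[root@rac1 grid]# /u01/app/12.1.0/grid/bin/tfactl stop prw

# Gather a diagnostic collection; support tool output is picked up as well
[root@rac1 grid]# /u01/app/12.1.0/grid/bin/tfactl diagcollect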

Wiki Page: Configure Oracle's XDB Server for Web Applications

There are many developers who use Oracle APEX to develop robust web applications because it's simply a great tool. While APEX gets better with each new release, there are developers who want to know how to build native PL/SQL web applications. Oracle APEX uses the Oracle XML Database (XDB) Server as its engine. The XDB Server is a multiprotocol web server that you configure with the DBMS_XDB and DBMS_EPG built-in PL/SQL packages. You can also configure the XDB Server to concurrently run standalone native PL/SQL web applications.

This article shows you how to configure the XDB Server to run native PL/SQL web applications. It shows you two approaches. The first uses a standard schema, which requires a schema name and password at login. The second uses the anonymous schema, which lets you embed your Access Control List and authentication functions inside the database. The article has two parts that involve setting up Data Access Descriptors (DADs). They are:

Setting Up a Secured DAD
Setting Up an Unsecured DAD

You should read through how to set up a secured DAD before you read about setting up an unsecured DAD. Robust native PL/SQL web applications require the unsecured DAD approach. The approach taken here is direct and limits the discussion of the many facets of XDB. You can read more about XDB in the Oracle XML DB Developer's Guide.

Setting Up a Secured DAD

A secured DAD uses basic HTTP authentication, which means the schema name and password must be disclosed to everyone who logs into the application. Basic HTTP authentication starts when you authenticate and ends when you close the browser. You must set up a port for your native PL/SQL web application. Since the APEX application uses port 8080, you should use the same port; if you change the port number, it also changes for the APEX application. You set the port number to the default with the SETHTTPPORT procedure of the DBMS_XDB package, like this:

SQL> DECLARE
  2    lv_port NUMBER;
  3  BEGIN
  4    SELECT dbms_xdb.gethttpport()
  5    INTO lv_port
  6    FROM dual;
  7
  8    /* Check for default port and reset. */
  9    IF NOT lv_port = 8080 THEN
 10      dbms_xdb.sethttpport(8080);
 11    END IF;
 12  END;
 13  /

After you set up the HTTP port number, you need to create and authorize the secured DAD. You use the CREATE_DAD procedure to create the DAD, and the AUTHORIZE_DAD procedure to authorize it. The following command creates the STUDENT_DAD DAD:

SQL> BEGIN
  2    dbms_epg.create_dad(
  3      dad_name => 'STUDENT_DAD'
  4    , path => '/studentdb/*');
  5  END;
  6  /

This command creates the DAD and points it to a /studentdb/* URL component. The /studentdb/ component identifies a path element. The trailing asterisk identifies all executable PL/SQL procedures inside the DAD. The following command authorizes the STUDENT_DAD DAD:

SQL> BEGIN
  2    dbms_epg.authorize_dad(
  3      dad_name => 'STUDENT_DAD'
  4    , user => 'STUDENT');
  5  END;
  6  /

The DAD name is the same, but this time the user parameter points to the Oracle STUDENT schema. Effectively, the CREATE_DAD procedure maps the DAD to the URL component, and the AUTHORIZE_DAD procedure maps the DAD to the user's schema where you deploy the native PL/SQL procedures.
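If you want to double-check what the Embedded PL/SQL Gateway now knows about, you can list the registered DADs. The block below is only a sketch based on the DBMS_EPG.GET_DAD_LIST procedure and its VARCHAR2_TABLE collection type; confirm the package signature on your release before relying on it:

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2    lv_dads dbms_epg.varchar2_table;   -- collection type provided by DBMS_EPG
  3  BEGIN
  4    dbms_epg.get_dad_list(lv_dads);    -- returns the names of all configured DADs
  5    FOR i IN 1 .. lv_dads.COUNT LOOP
  6      dbms_output.put_line(lv_dads(i));
  7    END LOOP;
  8  END;
  9  /

On the test system described in this article you would expect to see APEX and STUDENT_DAD in the output.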
After you create and authorize the DAD, you can create a helloworld procedure to test your configuration. You should create it in the STUDENT user's schema, which is a STUDENT container schema. The following uses Oracle's built-in PL/SQL web toolkit to create a native PL/SQL web page. Only line 11 is dynamic; the USER keyword on line 11 returns the owning schema name at runtime.

SQL> CREATE OR REPLACE PROCEDURE student.helloworld AS
  2  BEGIN
  3    -- Set an HTML meta tag and render page.
  4    owa_util.mime_header('text/html'); -- Content-Type: text/html
  5    htp.htmlopen;                      -- <HTML>
  6    htp.headopen;                      -- <HEAD>
  7    htp.htitle('Hello World!');        -- page title and heading: HelloWorld!
  8    htp.headclose;                     -- </HEAD>
  9    htp.bodyopen;                      -- <BODY>
 10    htp.line;                          -- <HR>
 11    htp.print('Hello ['||USER||']!');  -- Hello [dynamic user_name]!
 12    htp.line;                          -- <HR>
 13    htp.bodyclose;                     -- </BODY>
 14    htp.htmlclose;                     -- </HTML>
 15  END HelloWorld;
 16  /

After creating the helloworld procedure, you call it from a URL in a browser. The server hostname is oracle12c, which means you would call the native PL/SQL procedure with the following syntax:

http://oracle12c/studentdb/helloworld

The browser prompts for basic HTTP credentials. You need to provide the STUDENT user name or a CDB Oracle 12c C##PLSQL user name, plus the password, in the dialog box. After entering the correct credentials (here, the Oracle 12c C##PLSQL user name), the page renders: the helloworld procedure produces a web page with the title element "Hello World!", a "Hello World!" heading, and a content body with the message "Hello [C##PLSQL]!".

There's only one downside when you use Oracle's PL/SQL web toolkit: it renders the HTML tags in uppercase text, as you can see when you view the page source around the "Hello World!" title, the "Hello World!" heading, and the "Hello [C##PLSQL]!" body text. If you want the browser to receive lowercase HTML tags, you need to emit them yourself as string literals in the procedure (a small sketch of that approach follows below). Alternatively, you can write your own replacement for the HTF and HTP PL/SQL built-in packages to render lowercase HTML tags.
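As an illustration of the string-literal approach, here is a minimal sketch; the procedure name helloworld_lc is made up for this example and is not part of the article's original code:

SQL> CREATE OR REPLACE PROCEDURE student.helloworld_lc AS
  2  BEGIN
  3    -- Emit the markup directly so the tags stay lowercase.
  4    owa_util.mime_header('text/html');
  5    htp.print('<html><head><title>Hello World!</title></head><body>');
  6    htp.print('<hr>Hello ['||USER||']!<hr>');
  7    htp.print('</body></html>');
  8  END helloworld_lc;
  9  /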
This section has shown you how to render a native PL/SQL web application with a secured DAD. The next section builds on this discussion and shows you how to build one with an unsecured DAD.

Setting Up an Unsecured DAD

The benefit of setting up a native PL/SQL web application with an unsecured DAD is that you have much more control: you can set up your own encryption, authentication, and application metadata. You need to determine the current configuration of the XDB server before you can configure it to meet your needs. Oracle Database 12c, like prior releases, provides the epgstat.sql script in the Oracle home; this script lets you check the XDB server's current configuration. The following query displays the raw configuration file for the XDB server:

SQL> SELECT dbms_xdb.cfg_get() FROM dual;

You can run the epgstat.sql script as the SYSTEM user, like so:

SQL> ?/rdbms/admin/epgstat.sql

It should return the following initial configuration, which includes your secured STUDENT_DAD DAD setup:

+--------------------------------------+ | XDB protocol ports: | | XDB is listening for the protocol | | when the protocol port is non-zero. | +--------------------------------------+ HTTP Port FTP Port --------- -------- 8080 0 1 row selected. +---------------------------+ | DAD virtual-path mappings | +---------------------------+ Virtual Path DAD Name -------------------------------- -------------------------------- /apex/* APEX /studentdb/* STUDENT_DAD 2 rows selected. +----------------+ | DAD attributes | +----------------+ DAD Name DAD Param DAD Value ------------ --------------------------- ---------------------------------------- APEX database-username ANONYMOUS default-page apex document-table-name wwv_flow_file_objects$ request-validation-function wwv_flow_epg_include_modules.authorize document-procedure wwv_flow_file_mgr.process_download nls-language american_america.al32utf8 document-path docs 7 rows selected. +---------------------------------------------------+ | DAD authorization: | | To use static authentication of a user in a DAD, | | the DAD must be authorized for the user. | +---------------------------------------------------+ DAD Name User Name -------------------------------- -------------------------------- STUDENT_DAD STUDENT 1 row selected. +----------------------------+ | DAD authentication schemes | +---------------------------- DAD Name User Name Auth Scheme -------------------- -------------------------------- ------------------ APEX ANONYMOUS Anonymous STUDENT_DAD Dynamic 2 rows selected. +--------------------------------------------------------+ | ANONYMOUS user status: | | To use static or anonymous authentication in any DAD, | | the ANONYMOUS account must be unlocked. | +--------------------------------------------------------+ Database User Status --------------- -------------------- ANONYMOUS EXPIRED & LOCKED 1 row selected. +-------------------------------------------------------------------+ | ANONYMOUS access to XDB repository: | | To allow public access to XDB repository without authentication, | | ANONYMOUS access to the repository must be allowed. | +-------------------------------------------------------------------+ Allow repository anonymous access? ---------------------------------- false 1 row selected.

The key values from the diagnostic script (highlighted in bold in the original report) are the STUDENT_DAD entries, the ANONYMOUS account status, and the anonymous repository access flag. There are two things you need to change in this configuration: you need to unlock the ANONYMOUS schema and open access to the ANONYMOUS repository. You can unlock the ANONYMOUS schema with the following syntax as the SYSTEM user:

SQL> ALTER USER anonymous ACCOUNT UNLOCK;
SQL> ALTER USER anonymous IDENTIFIED BY null;

Unlocking access to the ANONYMOUS repository is a bit more complex and requires an anonymous PL/SQL block. The following opens the ANONYMOUS repository:

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2    lv_configxml XMLTYPE;
  3    lv_value     VARCHAR2(5) := 'true'; -- (true/false)
  4  BEGIN
  5    lv_configxml := DBMS_XDB.cfg_get();
  6
  7    -- Check for the element.
  8    IF lv_configxml.existsNode('/xdbconfig/sysconfig/protocolconfig/httpconfig/allow-repository-anonymous-access') = 0 THEN
  9      -- Add missing element.
 10      SELECT insertChildXML
 11             ( lv_configxml
 12             ,'/xdbconfig/sysconfig/protocolconfig/httpconfig'
 13             ,'allow-repository-anonymous-access'
 14             , XMLType('<allow-repository-anonymous-access xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd">'
 15               || lv_value
 16               || '</allow-repository-anonymous-access>')
 17             ,'xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"')
 18      INTO lv_configxml
 19      FROM dual;
 20
 21      dbms_output.put_line('Element inserted.');
 22    ELSE
 23      -- Update existing element.
 24      SELECT updateXML
 25             ( DBMS_XDB.cfg_get()
 26             ,'/xdbconfig/sysconfig/protocolconfig/httpconfig/allow-repository-anonymous-access/text()'
 27             , lv_value
 28             ,'xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"')
 29      INTO lv_configxml
 30      FROM dual;
 31
 32      dbms_output.put_line('Element updated.');
 33    END IF;
 34
 35    -- Configure the element.
 36    dbms_xdb.cfg_update(lv_configxml);
 37    dbms_xdb.cfg_refresh;
 38  END;
 39  /

It should print the following when run successfully:

Element inserted.

You should see the following differences when you rerun the epgstat.sql script:
+--------------------------------------------------------+ | ANONYMOUS user status: | | To use static or anonymous authentication in any DAD, | | the ANONYMOUS account must be unlocked. | +--------------------------------------------------------+ Database User Status --------------- -------------------- ANONYMOUS OPEN 1 row selected. +-------------------------------------------------------------------+ | ANONYMOUS access to XDB repository: | | To allow public access to XDB repository without authentication, | | ANONYMOUS access to the repository must be allowed. | +-------------------------------------------------------------------+ Allow repository anonymous access? ---------------------------------- true 1 row selected.

After you have opened the ANONYMOUS schema and allowed access to the ANONYMOUS repository, you need to grant the execute privilege on the STUDENT user's procedure to the ANONYMOUS user and create a synonym for the ANONYMOUS user. The ANONYMOUS user needs a synonym for the helloworld procedure owned by the STUDENT user. You grant the privilege and create the synonym as the SYSTEM user with the following syntax:

SQL> GRANT EXECUTE ON student.helloworld2 TO anonymous;
SQL> CREATE SYNONYM anonymous.helloworld2 FOR student.helloworld2;

You need to create and authorize a new GENERIC_DAD DAD for the next example. The following command creates the GENERIC_DAD DAD:

SQL> BEGIN
  2    dbms_epg.create_dad(
  3      dad_name => 'GENERIC_DAD'
  4    , path => '/db/*');
  5  END;
  6  /

Note that the URL subdirectory for this approach is db. The next command authorizes the GENERIC_DAD DAD:

SQL> BEGIN
  2    dbms_epg.authorize_dad(
  3      dad_name => 'GENERIC_DAD'
  4    , user => 'ANONYMOUS');
  5  END;
  6  /

The anonymous configuration requires a new step. You set the GENERIC_DAD DAD's database-username attribute to ANONYMOUS with the following call to the SET_DAD_ATTRIBUTE procedure of the DBMS_EPG package:

SQL> BEGIN
  2    dbms_epg.set_dad_attribute(
  3      dad_name => 'GENERIC_DAD'
  4    , attr_name => 'database-username'
  5    , attr_value => 'ANONYMOUS');
  6  END;
  7  /

At this point, you should close the browser to end the scope of the basic HTTP authentication from the secured DAD example. Launch the browser and enter the following URL:

http://oracle12c/db/helloworld

Note that the virtual directory in the URL has changed from studentdb to db. The studentdb virtual directory maps to the secured DAD; the db virtual directory maps to the unsecured DAD. The browser should render the native PL/SQL web page. Notice that the connected user is now the ANONYMOUS user, while the actual stored procedure is the helloworld procedure stored in the C##PLSQL container database user's schema. The connection through the ANONYMOUS schema doesn't require credentials. You should only deploy synonyms to stored procedures that you deploy in other schemas, and those stored procedures should verify access against your internal authentication functions and procedures (a small sketch of such a guard appears at the end of this article). You can read how to create a PL/SQL authentication function in this other ToadWorld post.

This article has shown you how to write, deploy, and test native PL/SQL web applications with secured and unsecured DADs. You can find complete re-runnable scripts for this article on github.com at the following URL.
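As a closing illustration of the access checks mentioned above, here is a minimal sketch of a guarded procedure. The function my_auth.is_valid_session is a made-up placeholder for whatever authentication logic you implement (for example, one driven by a cookie or a signed token):

SQL> CREATE OR REPLACE PROCEDURE student.secure_page AS
  2  BEGIN
  3    -- Hypothetical application-level check; replace with your own function.
  4    IF NOT my_auth.is_valid_session THEN
  5      owa_util.status_line(403, 'Forbidden');  -- refuse the request
  6      RETURN;
  7    END IF;
  8    owa_util.mime_header('text/html');
  9    htp.print('<p>Only authenticated users see this.</p>');
 10  END secure_page;
 11  /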

Wiki Page: Backup and Restore in Oracle 12c - quick guide

Starting with Oracle 12c we have a new "layer": some new concepts were introduced, like Container Database, Pluggable Database, PDB SEED, root, etc. Before 12c, in order to do backups and restores, all we had to do was open a connection to the database we wanted to work on and start running our RMAN commands. But now we have the new layer: the Pluggable Database (PDB). There are now two ways to work with backups and restores: we can work from CDB$ROOT and use the new RMAN clauses to identify the PDB and its elements (for example, datafile no. 3 of the PDB "GUATEMALAPDB"), or we can connect directly to the PDB we want to work on. In this article we will go through all these tasks and perform backups and restores from both places, CDB$ROOT and a specific PDB. This is the overview of our work:

Backup from CDB Root
- Backup of a datafile
- Backup of a tablespace
- Backup of a PDB
- Backup of the CDB$ROOT
Backup from PDB
- Backup of a datafile
- Backup of a tablespace
- Backup of the PDB
Restore from CDB Root
- Restore of a datafile
- Restore of a tablespace
- Restore of a PDB
- Restore of the CDB$ROOT
Restore from PDB
- Restore of a datafile
- Restore of a tablespace
- Restore of the PDB

The environment that I will use has 2 Pluggable Databases (PDB1 and PDB2):

SQL> select con_id, name from v$pdbs;
CON_ID NAME ---------- ------------------------------ 2 PDB$SEED 3 PDB1 4 PDB2

The location and file number of every datafile are the following:

SQL> select con_id, file#, name from v$datafile order by 1,2
CON_ID FILE NAME -------- ------- ------------------------------------------------------------ 1 1 /data/cdb/system01.dbf 1 3 /data/cdb/sysaux01.dbf 1 4 /data/cdb/undotbs01.dbf 1 6 /data/cdb/users01.dbf 2 5 /data/cdb/pdbseed/system01.dbf 2 7 /data/cdb/pdbseed/sysaux01.dbf 3 8 /data/seed/cdb/pdbseed/system01.dbf 3 9 /data/seed/cdb/pdbseed/sysaux01.dbf 4 10 /data/system01.dbf 4 11 /data/sysaux01.dbf 10 rows selected.

Backup from CDB Root

Backup of a datafile

When you back up a datafile while connected to CDB$ROOT, you don't have to specify any clause naming the PDB the datafile belongs to: every datafile in the Container Database (CDB) has a "FILE_ID" (or "FILE#") that is unique across the whole CDB.
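If you don't have the file numbers at hand, a query along these lines (a small sketch using the PDB1 name from this environment) maps a PDB to its datafiles:

SQL> SELECT d.file#, d.name
  2  FROM   v$datafile d
  3  JOIN   v$pdbs p ON p.con_id = d.con_id
  4  WHERE  p.name = 'PDB1'
  5  ORDER  BY d.file#;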
So all you have to know in order to back up a datafile of a PDB is its file identifier.

Example: back up datafile 9, which is part of the SYSAUX tablespace in the PDB "PDB1":

RMAN> backup datafile 9;
Starting backup at 01-MAY-16 using channel ORA_DISK_1 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00009 name=/data/seed/cdb/pdbseed/sysaux01.dbf channel ORA_DISK_1: starting piece 1 at 01-MAY-16 channel ORA_DISK_1: finished piece 1 at 01-MAY-16 piece handle=/data/CDB/30D4BFFA4CFE3ED0E053047111ACC580/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T160415_cldr9z85_.bkp tag=TAG20160501T160415 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07 Finished backup at 01-MAY-16 Starting Control File and SPFILE Autobackup at 01-MAY-16 piece handle=/data/CDB/autobackup/2016_05_01/o1_mf_s_910713862_cldrb6hs_.bkp comment=NONE Finished Control File and SPFILE Autobackup at 01-MAY-16 RMAN>

Backup of a tablespace

To back up a tablespace, you name the PDB the tablespace belongs to, followed by ":" and the name of the tablespace: [PDB_NAME]:[TABLESPACE_NAME].

Example: back up the SYSAUX tablespace of PDB "PDB1":

RMAN> backup tablespace PDB1:SYSAUX;
Starting backup at 01-MAY-16 using channel ORA_DISK_1 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00009 name=/data/seed/cdb/pdbseed/sysaux01.dbf channel ORA_DISK_1: starting piece 1 at 01-MAY-16 channel ORA_DISK_1: finished piece 1 at 01-MAY-16 piece handle=/data/CDB/30D4BFFA4CFE3ED0E053047111ACC580/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T160601_cldrf9l1_.bkp tag=TAG20160501T160601 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07 Finished backup at 01-MAY-16 Starting Control File and SPFILE Autobackup at 01-MAY-16 piece handle=/data/CDB/autobackup/2016_05_01/o1_mf_s_910713968_cldrfjs0_.bkp comment=NONE Finished Control File and SPFILE Autobackup at 01-MAY-16 RMAN>

Backup of a PDB

To back up a Pluggable Database, you can use the new RMAN clause "backup pluggable database" followed by the name of the PDB.

Example: back up the PDB "PDB1":

RMAN> backup pluggable database PDB1;
Starting backup at 01-MAY-16 using channel ORA_DISK_1 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00009 name=/data/seed/cdb/pdbseed/sysaux01.dbf input datafile file number=00008 name=/data/seed/cdb/pdbseed/system01.dbf channel ORA_DISK_1: starting piece 1 at 01-MAY-16 channel ORA_DISK_1: finished piece 1 at 01-MAY-16 piece handle=/data/CDB/30D4BFFA4CFE3ED0E053047111ACC580/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T160758_cldrjypv_.bkp tag=TAG20160501T160758 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15 Finished backup at 01-MAY-16 Starting Control File and SPFILE Autobackup at 01-MAY-16 piece handle=/data/CDB/autobackup/2016_05_01/o1_mf_s_910714093_cldrkg04_.bkp comment=NONE Finished Control File and SPFILE Autobackup at 01-MAY-16

You can also use the same "backup database" clause you have been using before 12c, followed by the PDB name:

RMAN> backup database pdb2;
Starting backup at 01-MAY-16 using channel ORA_DISK_1 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00011 name=/data/sysaux01.dbf
input datafile file number=00010 name=/data/system01.dbf channel ORA_DISK_1: starting piece 1 at 01-MAY-16 channel ORA_DISK_1: finished piece 1 at 01-MAY-16 piece handle=/data/CDB/30D4E481E7224175E053047111ACB0BA/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173357_cldxl5vb_.bkp tag=TAG20160501T173357 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07 Finished backup at 01-MAY-16 Starting Control File and SPFILE Autobackup at 01-MAY-16 piece handle=/data/CDB/autobackup/2016_05_01/o1_mf_s_910719245_cldxlf37_.bkp comment=NONE Finished Control File and SPFILE Autobackup at 01-MAY-16

Backup of the CDB$ROOT

The CDB$ROOT is itself a container, so you use the same "backup database" clause followed by the name "root":

RMAN> backup database root;
Starting backup at 01-MAY-16 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=78 device type=DISK channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00001 name=/data/cdb/system01.dbf input datafile file number=00003 name=/data/cdb/sysaux01.dbf input datafile file number=00004 name=/data/cdb/undotbs01.dbf input datafile file number=00006 name=/data/cdb/users01.dbf channel ORA_DISK_1: starting piece 1 at 01-MAY-16 channel ORA_DISK_1: finished piece 1 at 01-MAY-16 piece handle=/data/CDB/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173316_cldxjwgz_.bkp tag=TAG20160501T173316 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15 Finished backup at 01-MAY-16 Starting Control File and SPFILE Autobackup at 01-MAY-16 piece handle=/data/CDB/autobackup/2016_05_01/o1_mf_s_910719211_cldxkcnx_.bkp comment=NONE Finished Control File and SPFILE Autobackup at 01-MAY-16

Backup from PDB

When you are connected directly to the PDB you want to back up, it is exactly the same as if you were connected to an 11g database, so we won't add many comments here; the procedure is the same as for any other database.

Backup of a datafile

RMAN> backup datafile 10;
Starting backup at 01-MAY-16 allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=42 device type=DISK channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00010 name=/data/system01.dbf channel ORA_DISK_1: starting piece 1 at 01-MAY-16 channel ORA_DISK_1: finished piece 1 at 01-MAY-16 piece handle=/data/CDB/30D4E481E7224175E053047111ACB0BA/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T172849_cldx8k7n_.bkp tag=TAG20160501T172849 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03 Finished backup at 01-MAY-16 Starting Control File and SPFILE Autobackup at 01-MAY-16 piece handle=/data/CDB/autobackup/2016_05_01/o1_mf_s_910718932_cldx8njt_.bkp comment=NONE Finished Control File and SPFILE Autobackup at 01-MAY-16

Using the datafile name:

RMAN> backup datafile '/data/system01.dbf';
Starting backup at 01-MAY-16 using channel ORA_DISK_1 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00010 name=/data/system01.dbf channel ORA_DISK_1: starting piece 1 at 01-MAY-16 channel ORA_DISK_1: finished piece 1 at 01-MAY-16 piece handle=/data/CDB/30D4E481E7224175E053047111ACB0BA/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T172926_cldx9pmx_.bkp tag=TAG20160501T172926 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03 Finished backup at 01-MAY-16 Starting Control File and SPFILE Autobackup at 01-MAY-16 piece handle=/data/CDB/autobackup/2016_05_01/o1_mf_s_910718969_cldx9srg_.bkp comment=NONE Finished Control File and SPFILE Autobackup at 01-MAY-16
Backup of a tablespace

As you can see below, you do not have to specify the name of the pluggable database the tablespace belongs to:

RMAN> backup tablespace sysaux;
Starting backup at 01-MAY-16 using channel ORA_DISK_1 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00011 name=/data/sysaux01.dbf channel ORA_DISK_1: starting piece 1 at 01-MAY-16 channel ORA_DISK_1: finished piece 1 at 01-MAY-16 piece handle=/data/CDB/30D4E481E7224175E053047111ACB0BA/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173000_cldxbrtl_.bkp tag=TAG20160501T173000 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07 Finished backup at 01-MAY-16 Starting Control File and SPFILE Autobackup at 01-MAY-16 piece handle=/data/CDB/autobackup/2016_05_01/o1_mf_s_910719007_cldxc009_.bkp comment=NONE Finished Control File and SPFILE Autobackup at 01-MAY-16 RMAN>

Backup of the PDB

RMAN> backup database;
Starting backup at 01-MAY-16 using channel ORA_DISK_1 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00011 name=/data/sysaux01.dbf input datafile file number=00010 name=/data/system01.dbf channel ORA_DISK_1: starting piece 1 at 01-MAY-16 channel ORA_DISK_1: finished piece 1 at 01-MAY-16 piece handle=/data/CDB/30D4E481E7224175E053047111ACB0BA/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173019_cldxccqd_.bkp tag=TAG20160501T173019 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15 Finished backup at 01-MAY-16 Starting Control File and SPFILE Autobackup at 01-MAY-16 piece handle=/data/CDB/autobackup/2016_05_01/o1_mf_s_910719034_cldxctvm_.bkp comment=NONE Finished Control File and SPFILE Autobackup at 01-MAY-16 RMAN>

Restore from CDB Root

As we discussed in the backup section, a specific datafile is identified by its ID or by its name; the name of the pluggable database is not part of the identifier.

Restore of a datafile

[oracle@db12102 ~]$ rman target /
RMAN> restore datafile 10;
Starting restore at 01-MAY-16 allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=23 device type=DISK channel ORA_DISK_1: starting datafile backup set restore channel ORA_DISK_1: specifying datafile(s) to restore from backup set channel ORA_DISK_1: restoring datafile 00010 to /data/system01.dbf channel ORA_DISK_1: reading from backup piece /data/CDB/30D4E481E7224175E053047111ACB0BA/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173357_cldxl5vb_.bkp channel ORA_DISK_1: piece handle=/data/CDB/30D4E481E7224175E053047111ACB0BA/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173357_cldxl5vb_.bkp tag=TAG20160501T173357 channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:00:03 Finished restore at 01-MAY-16

You can also use the datafile name:

RMAN> restore datafile '/data/system01.dbf';
Starting restore at 01-MAY-16 using channel ORA_DISK_1 channel ORA_DISK_1: starting datafile backup set restore channel ORA_DISK_1: specifying datafile(s) to restore from backup set channel ORA_DISK_1: restoring datafile 00010 to /data/system01.dbf channel ORA_DISK_1: reading from backup piece /data/CDB/30D4E481E7224175E053047111ACB0BA/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173357_cldxl5vb_.bkp channel ORA_DISK_1: piece handle=/data/CDB/30D4E481E7224175E053047111ACB0BA/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173357_cldxl5vb_.bkp tag=TAG20160501T173357 channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:00:03 Finished restore at 01-MAY-16 RMAN>
Restore of a tablespace

Restoring a tablespace is different: here you do have to specify which pluggable database the tablespace belongs to:

[oracle@db12102 ~]$ rman target /
RMAN> restore tablespace pdb2:sysaux;
Starting restore at 01-MAY-16 using channel ORA_DISK_1 channel ORA_DISK_1: starting datafile backup set restore channel ORA_DISK_1: specifying datafile(s) to restore from backup set channel ORA_DISK_1: restoring datafile 00011 to /data/sysaux01.dbf channel ORA_DISK_1: reading from backup piece /data/CDB/30D4E481E7224175E053047111ACB0BA/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173357_cldxl5vb_.bkp channel ORA_DISK_1: piece handle=/data/CDB/30D4E481E7224175E053047111ACB0BA/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173357_cldxl5vb_.bkp tag=TAG20160501T173357 channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:00:07 Finished restore at 01-MAY-16

Restore of a PDB

You can use the clause "restore pluggable database" followed by the PDB name:

[oracle@db12102 ~]$ rman target /
RMAN> restore pluggable database pdb1;
Starting restore at 01-MAY-16 using channel ORA_DISK_1 channel ORA_DISK_1: starting datafile backup set restore channel ORA_DISK_1: specifying datafile(s) to restore from backup set channel ORA_DISK_1: restoring datafile 00008 to /data/seed/cdb/pdbseed/system01.dbf channel ORA_DISK_1: restoring datafile 00009 to /data/seed/cdb/pdbseed/sysaux01.dbf channel ORA_DISK_1: reading from backup piece /data/CDB/30D4BFFA4CFE3ED0E053047111ACC580/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173344_cldxks19_.bkp channel ORA_DISK_1: piece handle=/data/CDB/30D4BFFA4CFE3ED0E053047111ACC580/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173344_cldxks19_.bkp tag=TAG20160501T173344 channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:00:07 Finished restore at 01-MAY-16

You can also use the clause "restore database" followed by the database name:

RMAN> restore database pdb1;
Starting restore at 01-MAY-16 using channel ORA_DISK_1 channel ORA_DISK_1: starting datafile backup set restore channel ORA_DISK_1: specifying datafile(s) to restore from backup set channel ORA_DISK_1: restoring datafile 00008 to /data/seed/cdb/pdbseed/system01.dbf channel ORA_DISK_1: restoring datafile 00009 to /data/seed/cdb/pdbseed/sysaux01.dbf channel ORA_DISK_1: reading from backup piece /data/CDB/30D4BFFA4CFE3ED0E053047111ACC580/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173344_cldxks19_.bkp channel ORA_DISK_1: piece handle=/data/CDB/30D4BFFA4CFE3ED0E053047111ACC580/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173344_cldxks19_.bkp tag=TAG20160501T173344 channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:00:08 Finished restore at 01-MAY-16 RMAN>

Restore of the CDB$ROOT

RMAN> restore database root;
Starting restore at 01-MAY-16 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=7 device type=DISK channel ORA_DISK_1: starting datafile backup set restore channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /data/cdb/system01.dbf channel ORA_DISK_1: restoring datafile 00003 to /data/cdb/sysaux01.dbf channel ORA_DISK_1: restoring datafile 00004 to /data/cdb/undotbs01.dbf channel ORA_DISK_1: restoring datafile 00006 to /data/cdb/users01.dbf channel ORA_DISK_1: reading from backup piece /data/CDB/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173316_cldxjwgz_.bkp channel ORA_DISK_1: piece handle=/data/CDB/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173316_cldxjwgz_.bkp tag=TAG20160501T173316 channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:00:25 Finished restore at 01-MAY-16

Restore from PDB

Restoring datafiles, tablespaces or a whole PDB works exactly the same as before 12c, since you are connected directly to the PDB:

Restore of a datafile

[oracle@db12102 ~]$ rman target sys/Manager1@192.168.1.4:1521/pdb2
RMAN> restore datafile 10;
Starting restore at 01-MAY-16 using channel ORA_DISK_1 channel ORA_DISK_1: starting datafile backup set restore channel ORA_DISK_1: specifying datafile(s) to restore from backup set channel ORA_DISK_1: restoring datafile 00010 to /data/system01.dbf channel ORA_DISK_1: reading from backup piece /data/CDB/30D4E481E7224175E053047111ACB0BA/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173357_cldxl5vb_.bkp channel ORA_DISK_1: piece handle=/data/CDB/30D4E481E7224175E053047111ACB0BA/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173357_cldxl5vb_.bkp tag=TAG20160501T173357 channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:00:03 Finished restore at 01-MAY-16 RMAN>

Restore of a tablespace

[oracle@db12102 ~]$ rman target sys/Manager1@192.168.1.4:1521/pdb2
RMAN> restore tablespace sysaux;
Starting restore at 01-MAY-16 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=39 device type=DISK channel ORA_DISK_1: starting datafile backup set restore channel ORA_DISK_1: specifying datafile(s) to restore from backup set channel ORA_DISK_1: restoring datafile 00011 to /data/sysaux01.dbf channel ORA_DISK_1: reading from backup piece /data/CDB/30D4E481E7224175E053047111ACB0BA/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173357_cldxl5vb_.bkp channel ORA_DISK_1: piece handle=/data/CDB/30D4E481E7224175E053047111ACB0BA/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173357_cldxl5vb_.bkp tag=TAG20160501T173357 channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:00:07 Finished restore at 01-MAY-16

Restore of the PDB

[oracle@db12102 ~]$ rman target sys/Manager1@192.168.1.4:1521/pdb2
RMAN> restore database;
Starting restore at 01-MAY-16 using channel ORA_DISK_1 channel ORA_DISK_1: starting datafile backup set restore channel ORA_DISK_1: specifying datafile(s) to restore from backup set channel ORA_DISK_1: restoring datafile 00010 to /data/system01.dbf channel ORA_DISK_1: restoring datafile 00011 to /data/sysaux01.dbf channel ORA_DISK_1: reading from backup piece /data/CDB/30D4E481E7224175E053047111ACB0BA/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173357_cldxl5vb_.bkp channel ORA_DISK_1: piece handle=/data/CDB/30D4E481E7224175E053047111ACB0BA/backupset/2016_05_01/o1_mf_nnndf_TAG20160501T173357_cldxl5vb_.bkp tag=TAG20160501T173357 channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:00:07 Finished restore at 01-MAY-16
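The examples above stop at the restore step. In practice a restore is normally followed by a recover and by reopening the PDB. A minimal sketch for PDB1 follows; on 12c, RMAN accepts the ALTER PLUGGABLE DATABASE statements directly, and on older clients you would wrap them in the sql command:

RMAN> alter pluggable database pdb1 close;
RMAN> restore pluggable database pdb1;
RMAN> recover pluggable database pdb1;
RMAN> alter pluggable database pdb1 open;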

Blog Post: Flashback Functions – One Name, different Techniques

Part 1: Flashback Database

The term "Flashback" has been around in Oracle technology since version 9i. However, it is often hard to match the different terms with their corresponding technologies. Let's start with the definitions: Flashback means, at least in Oracle terminology, that a past state shall be restored. Maybe you want to see the state of a data set one hour ago, a table before it was deleted, or the changes from last month. And that brings us to the actual question: "What do I have to do to meet requirements like these?"

Let's start with the feature that probably comes to your mind first: Flashback Database.

What do we need it for? With Flashback Database it is possible to roll back the entire database, that is, to set the entire data set back to a former state.

What do we need for that? Flashback Database is only available in the Enterprise Edition, even though some claim otherwise. At first you don't need to do anything, except that the database has to be in archivelog mode. With this, to be honest, pretty limited function you can create a Guaranteed Restore Point and reset the database to this point after changes:

SQL> CREATE RESTORE POINT before_batch GUARANTEE FLASHBACK DATABASE;

This is helpful, e.g., if a fallback shall be available for a batch run or a new application release. That means the restore point is set prior to the batch run, and if you notice any failures after the run, the database is simply set back to it. If the batch executed properly, the restore point is deleted – and you should by no means forget that, because otherwise the Fast Recovery Area would fill up in a short period of time and the database would stop.

Why is that? With Flashback Database, flashback logs are written: when a database block changes for the first time, a copy of it is written into the Fast Recovery Area. When setting back the database, the blocks are simply exchanged, followed where applicable by a short recovery run using information from the archived redo log files. This is necessary because you can set the Guaranteed Restore Point while the business is running.

This is, as I said, a limited use, as you need to know in advance that you might want to roll back. If you want to be able to set your database back to a former state at any time, you have to enable Flashback Database permanently. To do so you have to stop the database and restart it in mount mode. Now Flashback Database can be activated:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE FLASHBACK ON;
SQL> ALTER DATABASE OPEN;

Afterwards the current state can be queried from the view v$database:

SQL> SELECT flashback_on, log_mode FROM v$database;

FLASHBACK_ON LOG_MODE ------------------ ------------ YES ARCHIVELOG

Now the database can be set back to any point at any time. Really any point? With the parameter db_flashback_retention_target you indicate in minutes (!) how far into the past you want to be able to roll back. Realistic here would be between one and three days, depending on how much storage you allocate to the Fast Recovery Area. The database can be set back as far into the past as the oldest flashback log reaches; every newer point is reached via a newer flashback log plus the archived redo log files generated in the meantime.
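Whichever way you get there – a guaranteed restore point or permanently enabled flashback logging – the rollback itself looks roughly like the following sketch, shown here for the before_batch restore point created above; rehearse it on a test system first:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> -- Rewind the whole database to the guaranteed restore point.
SQL> FLASHBACK DATABASE TO RESTORE POINT before_batch;
SQL> ALTER DATABASE OPEN RESETLOGS;
SQL> -- Once the fallback is no longer needed, drop the restore point
SQL> -- so the Fast Recovery Area space can be reclaimed.
SQL> DROP RESTORE POINT before_batch;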
Flashback Logs or Storage Snapshots?

Many storage vendors nowadays offer snapshot functions: the changed blocks (no matter whether they belong to a database or to other files) are stored temporarily in a snapshot area and saved where necessary. With this it is also possible to roll back the database to a former state. In contrast to Flashback Database, however, it is often not possible to roll the database back to a former point, open it read-only, and afterwards roll forward again to the previous state. This is interesting, for example, when using Data Guard, as I can roll back the standby database, have a look at a table or copy it, and then roll forward again; availability is maintained during this time.

In connection with Data Guard, please note that a "reinstate" of the primary database after a failover is only possible when Flashback Database is enabled. And one more hint: caution with Data Guard! When duplicating with RMAN (duplicate for standby), the Flashback Database settings are not transferred, which means you have to enable them on the standby side once again.

Conclusion

Even though the flashback logs need disk storage, enabling the feature pays off in any case (two queries for keeping an eye on that storage follow at the end of this post). And if you don't want to do that, you can at least save yourself a bunch of exports by using guaranteed restore points! In my next blog I will go into the topic of Flashback Table, for which three (!) different procedures are used.
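As a small operational footnote to the conclusion above, the flashback window and the space held by guaranteed restore points are easy to keep an eye on with two standard views; a sketch:

SQL> -- How far back could the database currently be flashed back?
SQL> SELECT oldest_flashback_scn, oldest_flashback_time, retention_target, flashback_size
  2  FROM   v$flashback_database_log;

SQL> -- Which restore points exist, and how much flashback storage do they pin?
SQL> SELECT name, guarantee_flashback_database, time, storage_size
  2  FROM   v$restore_point;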

Blog Post: PC Support Call – a scam of the worst kind…

Scam focusing on home computer users

I'd heard of this scam but had not received the call. Last night, I received the call. It came in as an 'out of area' caller. He presented himself as "Peter from PC Support…". I was pretty sure this was 'the call' from the scam people. He spoke quickly and with quite an accent. I asked him for his name and company again; he seemed frustrated and repeated the "I'm Peter from PC Support…". I said I had not logged a support call…he said that they have special software and had been monitoring my PC for me.

It works like this…they call and present themselves as a call from Windows Support or, in my case, PC Support, saying that my computer might have a potentially bad virus on it. They sound overseas; I would think the calls originate outside the USA since these calls cannot be easily tracked. They say they have been monitoring my PC and I might have a situation that they can help with. They want you to give them control of your PC. When you let them take control of your PC, NOW you have a bad virus. They download something that takes control of your PC…and the whole conversation changes. Now they want hundreds of dollars to 'release' your computer.

I didn't fall for it. I played along for a moment. I asked if he was monitoring my PC right now. He said 'yes'…which was a lie because my computer wasn't even turned on…I've been refinishing my home's floors…and was in the middle of this project when the call came. I didn't answer the first time, but when it rang right back, I thought it might be one of my kids calling in… I caught him in a lie…I told him he was nothing but a thief and that he should be ashamed of himself for conducting himself in such a way. He hung up on me.

IF you know of someone who has fallen victim to this scam…my suggestions are:

- Unplug the PC from the internet.
- Take it to a professional PC shop, tell them what happened and see if they can fix it; reloading the OS would be the worst case but would guarantee the problem is completely gone.
- Destroy, or have checked, ANY external device that was hooked up to the computer when the call came in or afterwards.
- DO NOT put any documents/files from the affected PC onto another PC…hopefully you have a good backup that you can restore…

Go with the thought that any support organization…be it Microsoft or about anyone…will not call you out of the clear blue sky. If you didn't initiate a support call, then the call you are receiving is probably bogus. Do confirm whom you are speaking to and be VERY leery of giving control or running a program from an attachment without knowing exactly who it is from…

I use Clonezilla to make image copies after installing a lot of software…you get a duplicate drive and clone to that. You can then reverse the clone process or, if the hard drive went bad, simply put in the new one. I've also enjoyed the EasyGig software…it allowed you to clone to a larger drive…making it easier to put a larger hard drive into your computer…

My wife had a friend who was expecting a return call from support. It just so happens (poor timing on her part) that this bogus call came in…she thought it was her return call. She paid $400 to get her computer to boot. I didn't hear about it till later, and I'd still take the PC in to be reviewed to see if any of the virus or bogus program is left…I'd probably still go to a backup or reload the OS.
They did this to her Apple computer…so it's not just a PC thing…

Another scam that comes from phishing is 'download and run this'…these emails come from 'malware'…something left behind that tells someone else where you have been…I think…somehow they know I was just on PayPal because I'll get a very official-looking PayPal email asking me to do something through a link… With these emails…review the return address carefully. Sometimes they will add 'support.usbank.com'…or it's usbank.uk, or something similar…when in doubt…call the bank or log in to your account as you normally would…if there appears to be no problem…then the email is a scam of some kind. You can clean up these monitoring code snippets by getting 'Malwarebytes'…they have a free version…and running it from time to time.

You can slow down viruses and those annoying ads by downloading ad-blocker and script-blocker programs for your browser. These are browser plugins. I have my Firefox set up with these. I do my email online using this Firefox. It comes right up if a script tries to execute…in fact…it's a bit annoying as you have to accept it all from known good sites…so…when you first start using a script blocker, you need to accept the ones from your email service just so it runs correctly. I use Chrome (without any of these plugins) when I'm ordering online from a known site (such as Amazon.com…one of my favorite places to shop…). The script blocker makes it difficult at times to complete the ordering process.

Hope this helps…hope you don't get the call from PC Support, but if you do…shame them and hang up.

Dan Hotka
Oracle ACE Director
Author/Instructor/CEO

Blog Post: Ensuring Data Protection Using Oracle Flashback Features - Introduction (Part 1)

Introduction This article is the first part of my article series regarding data protection using Oracle Flashback Features. Ensuring data protection for Oracle Databases is one of the most important tasks every Oracle DBA is faced with. Without protecting the data, the DBA will never be able to ensure a high level of SLA (Service Level Agreement) for the business. Data protection is a broad term that refers to protection from many different potential issues in Oracle Databases, such as: Data Corruptions – Block corruption could be either physical or logical: Physical Corruption (also called media corruption) - The block has an invalid checksum, so it cannot even be recognized as an Oracle block. Logical Corruption - The block checksum is valid but its content is logically inconsistent; for example, when there is a missing index entry or a row piece. Disaster Recovery – Ranges from large-scale natural disasters such as floods, earthquakes, and fires, to small/medium disasters like power outages and viruses. Human Errors - A user operation that causes data to become unavailable (e.g. dropping/truncating a table, deleting rows) or logically wrong; for example, modifying the contents of existing rows to wrong values. Data Protection Objectives Every DBA should have a clear and tested recovery strategy in order to enforce the organization's data protection policy, which is usually defined by two important objectives, RPO and RTO: RPO (Recovery Point Objective) - The maximum amount of data that a business can allow itself to lose; for example, if the RPO of a company is 5 hours, then the DBA must be able to restore and recover the database to a point in time within the last 5 hours. In some companies the RPO is 0, i.e. the business can’t afford any data loss. RTO (Recovery Time Objective) - The maximum amount of downtime that a business can incur until the system is up and available again for the users. For example, if a database crashed due to physical corruption of a data file that belongs to the SYSTEM tablespace, then, assuming the RTO is 1 hour, the DBA must restore and recover the data file to ensure the database is up and running within 1 hour of the crash. Figure 1 : the RPO and RTO data protection objectives Oracle provides a set of tools and features that can be used by the DBA for protecting Oracle Databases from various scenarios: Data Corruptions - The most common way to detect and recover from data corruption is by using RMAN and user-managed backup/recovery methods. Disaster Recovery - There are various ways of ensuring server protection, e.g. RAC, RAC One Node and failover clusters. For storage-level protection Oracle provides ASM 2-way or 3-way mirroring, and for site protection Oracle provides replication solutions such as Oracle Data Guard and Oracle GoldenGate. Human Errors - There are various ways to handle human errors, including using backups (either RMAN or user-managed backups); however, by using flashback features the DBA can recover from human errors in a much faster and simpler way. Figure 2 : Oracle High Availability and Disaster Recovery Solutions This article series will focus on the last item - how to recover from various human error scenarios, with minimum RTO, using Oracle Flashback features. Summary In the first part of the series I reviewed the basics of Oracle Data Protection.
In the next part I will review the first Oracle Flashback feature, named "Flashback Query", which was introduced in Oracle 9i, and we will see how this feature works "behind the scenes".
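As a quick illustration of the corruption-detection point above (an addition of mine, not part of the original article), here is a minimal sketch of how a DBA might check for block corruption with RMAN and the data dictionary:
RMAN> VALIDATE DATABASE;
SQL> SELECT file#, block#, blocks, corruption_type FROM v$database_block_corruption;
Any rows returned by the view describe blocks that the validation found to be physically or logically corrupt.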

Wiki Page: Cloning Container Database (CDB) with Pluggable Databases (PDBs) Using Enterprise Manager Cloud Control 13c

Written by RavKumar YV Introduction Oracle Enterprise Manager Cloud Control 13c offers a complete cloud solution, including cloning functionalities such as Create Full Clone Database, Create Test Master Database, Create CloneDB, Create Snapshot Test Master and Clone Management. Log in to Enterprise Manager Cloud Control 13c with ‘sysman’ user privileges and select the Cloning option under Oracle Database. From our list of databases in Cloud Control, we can right-click the database we want to clone, select Create Full Clone Database, and specify the following values for Source and Destination. The wizard will take us through the steps needed to create the full clone of the container database (contdb). Source Database: Global Database Name: contdb Type: Single Instance Database Version: 12.1.0.2.0 Select the credentials for the following options: SYSDBA Database Credentials Database Host Credentials SYSASM ASM Credentials Destination Database: Global Database Name: clonedb Type: Single Instance Database SID: clonedb Select the credentials for the following options: Oracle Home Location Host Database Host Credentials Specify the following options on the Configuration tab: Database Files Location: File System / Automatic Storage Management (ASM) Recovery Files Location: Use Fast Recovery Area with location and size Database Credentials and Parallelism The default location for storing data files is already in place; we can use either OFA or ASM based on the environment, and we can also set the Fast Recovery Area (FRA). Set up the passwords in the Database Credentials section. Based on the database size, we can use the parallel threads functionality; before using parallel threads we have to consider a number of factors. Specify any changes to initialization parameters and any scripts to execute; the Pre Script, Post Script and SQL Script run as the ‘sys’ user, so specify their locations. We can also create a Test Master Database. Specify the schedule for the clone database and check the summary for the Source and Destination databases. The following will create a Test Master Pluggable Database from the source Container Database. Container Database: Clonedb Pluggable Database: Clonedb_CDBROOT On the Destination tab, specify the following: Pluggable Database Name: CLON_TM1 PDB Administrator Credentials: PDBADMIN Specify the storage location for the destination Pluggable Database. Before creating the Test Master Pluggable Database, check the review screen and the series of steps. Enable it as a Test Master Database, specify the Container Database with its Oracle Home directory, and specify the Parent Pluggable Database Name. Now it is time to log in and check the newly created clone database with ‘sysdba’ privileges: SQL> connect sys/oracle@clonedb as sysdba Connected.
SQL> select file_name from dba_data_files; FILE_NAME ----------------------------------------------------------------------------------------------------------------- /u01/app/oracle/oradata/clonedb/CLONEDB/datafile/o1_mf_system_14qvi8o9_.dbf /u01/app/oracle/oradata/clonedb/CLONEDB/datafile/o1_mf_sysaux_15qvi8o9_.dbf /u01/app/oracle/oradata/clonedb/CLONEDB/datafile/o1_mf_undotbs1_1cqvi8pn_.dbf /u01/app/oracle/oradata/clonedb/CLONEDB/datafile/o1_mf_users_1eqvi8po_.dbf /u01/app/oracle/oradata/clonedb/CLONEDB/datafile/o1_mf_example_1dqvi8pn_.dbf SQL> archive log list; Database log mode Archive Mode Automatic archival Enabled Archive destination USE_DB_RECOVERY_FILE_DEST Oldest online log sequence 1 Next log sequence to archive 2 Current log sequence 2 Check the name, status of Pluggable Databases and Open all the Pluggable Databases SQL> select name,open_mode from v$pdbs; NAME OPEN_MODE ------------------------------ ----------------- PDB$SEED READ ONLY PDB1 MOUNTED PDB2 MOUNTED SQL> alter pluggable database all open; Pluggable database altered. SQL> select name,open_mode from v$pdbs; NAME OPEN_MODE ------------------------------ ------------------- PDB$SEED READ ONLY PDB1 READ WRITE PDB2 READ WRITE Check the Control Files, Redo Log Files Locations of Clone Database SQL> select name from v$controlfile; NAME ------------------------------------------------------------------------------------------------------ /u01/app/oracle/oradata/clonedb/CLONEDB/controlfile/o1_mf_cfhtzxl9_.ctl /u01/app/oracle/fast_recovery_area/CLONEDB/controlfile/o1_mf_cfhtzxlh_.ctl SQL> select member from v$logfile; MEMBER --------------------------------------------------------------------------------------------------------------- /u01/app/oracle/oradata/clonedb/CLONEDB/onlinelog/o1_mf_3_cfhv4q47_.log /u01/app/oracle/fast_recovery_area/CLONEDB/onlinelog/o1_mf_3_cfhv4q8m_.log /u01/app/oracle/oradata/clonedb/CLONEDB/onlinelog/o1_mf_2_cfhv4ojk_.log /u01/app/oracle/fast_recovery_area/CLONEDB/onlinelog/o1_mf_2_cfhv4on2_.log /u01/app/oracle/oradata/clonedb/CLONEDB/onlinelog/o1_mf_1_cfhv4mfx_.log /u01/app/oracle/fast_recovery_area/CLONEDB/onlinelog/o1_mf_1_cfhv4mld_.log 6 rows selected. SQL> archive log list; Database log mode Archive Mode Automatic archival Enabled Archive destination USE_DB_RECOVERY_FILE_DEST Oldest online log sequence 1 Next log sequence to archive 2 Current log sequence 2 SQL> connect sys/oracle@192.168.56.100:1521/pdb1 as sysdba Connected. SQL> connect sys/oracle@clonedb as sysdba Connected. SQL> select name,open_mode from v$pdbs; NAME OPEN_MODE ------------------------------ ----------------- PDB$SEED READ ONLY PDB1 READ ONLY PDB2 READ WRITE PDB1_TM1 READ ONLY Login to Test Master Database of Pluggable Database (pdb1_tm1) and check the container ID and Container Name. SQL> alter session set container=pdb1_tm1; Session altered. SQL> connect sys/oracle@192.168.56.100:1521/pdb1_tm1 as sysdba Connected. SQL> show con_name CON_NAME ----------------- PDB1_TM1 SQL> show con_id CON_ID ----------------- 5 To preserve a PDB’s open mode across CDB restarts set the option: SAVE STATE SQL> alter pluggable database pdb1_tm1 save state; Pluggable database altered. SQL> connect sys/oracle@192.168.56.100:1521/clonedb as sysdba Connected. SQL> alter pluggable database pdb1 save state; Pluggable database altered. SQL> alter pluggable database pdb2 save state; Pluggable database altered. SQL> alter pluggable database pdb1_tm1 save state; Pluggable database altered. SQL> shutdown immediate; Database closed. 
Database dismounted. ORACLE instance shut down. SQL> connect sys/oracle as sysdba Connected to an idle instance. SQL> startup; ORACLE instance started. Total System Global Area 3154116608 bytes Fixed Size 2929352 bytes Variable Size 671092024 bytes Database Buffers 2466250752 bytes Redo Buffers 13844480 bytes Database mounted. Database opened. Check the status of all pluggable database including Test Master Pluggable Database (PDB1_TM1) SQL> select name,open_mode from v$pdbs; NAME OPEN_MODE ------------------------------ ------------------- PDB$SEED READ ONLY PDB1 READ ONLY PDB2 READ WRITE PDB1_TM1 READ ONLY Summary Oracle Enterprise Manager Cloud Control 13c also provides Increasing quality of Service, Enabling faster deployments in critical environments, Providing Resource Elasticity and Rapid Provisioning for mission critical applications.
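As a follow-up to the SAVE STATE commands shown above (my own addition to the walkthrough), here is a minimal sketch showing how to confirm which PDB open modes have been saved; the DBA_PDB_SAVED_STATES view is available in 12.1.0.2:
SQL> SELECT con_name, state FROM dba_pdb_saved_states;
Any PDB listed here will have its saved open mode restored automatically the next time the CDB is restarted.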

Blog Post: Setting up OBIEE Sample App VM to use Oracle R Enterprise

Oracle semi-regularly releases a new version of their OBIEE Sample App virtual machine. The most recent versions have had examples of using Oracle R Enterprise built into the demo dashboards. Unfortunately, Oracle R Enterprise (ORE) is not enabled in the downloaded version of the OBIEE VM, but with a few small steps you can enable ORE and have the examples in the OBIEE dashboards working for you. The purpose of this article is to walk you through the steps to get ORE set up and running. The first thing you need to do, if you haven’t already, is to download the OBIEE Sample App VM. You can download it here. http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html. The latest version (v511) of the OBIEE Sample App VM comes with Oracle Business Intelligence 12c. What this means is that there is now lots of cool integration with the R language and with Oracle R Enterprise (ORE). Out of the box, ORE (version 1.4.1) is installed and you can start using it by opening an R session or by using the ORE embedded features in SQL. I’m sure the next release of the VM will have ORE version 1.5, which has some additional cool stuff. But when you start to use OBIEE you will be using the standard R language. In this case it will be the Oracle R Distribution. If you want to use Oracle R Enterprise (ORE) then you need to make a configuration change. IMPORTANT : Many people refer to a product called R Oracle or Oracle R. Technically there is no product called this. Maybe these people are referring to Oracle R Distribution. So whenever you hear someone say Oracle R, please ask them what actual product they are talking about and watch them stutter and stammer and fumble some sort of answer. If they do this then be careful of the advice they are giving you! In addition to the configuration changes you will also need to install a number of R packages into the ORE installation. This is a manual step you need to perform due to some licensing issues/restrictions on using these R packages. These R packages will be used in the ORE demonstrations in the Sample App. Step 1 - Install the additional R packages Open a terminal window, go to /home/oracle/scripts/R and run the following command ./install_sampleapp_R_packages.sh This will install lots and lots of R packages onto the VM. Depending on your internet connection speed, this can take 8-20 minutes. For me it was about 8 minutes over wifi at home. Step 2 - Configure OBIEE to use ORE instead of R By default OBIEE is configured to use R in the mid-tier. If you want to switch to using ORE for the in-database features you need to define a connection pool within the RPD. You can do this in the NQSConfig.INI file vi /app/oracle/biee/user_projects/domains/bi/config/fmwconfig/biconfig/OBIS/NQSConfig.INI Scroll down to the Advanced Analytics Script Section. Change Target to use ORE TARGET = "ORE" Then set the name for the Connection Pool. This may already be set, but check it to make sure. After making the changes, you can save and close the file. The final part is to restart the BI Server so that the changes to the NQSConfig.INI file can take effect. The Oracle schema you are connecting to in the Oracle Database needs to have the ORE role called RQADMIN. This is needed to allow the embedded execution of R scripts in the database.
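To make that last step concrete, here is a minimal sketch (my own addition, using a placeholder schema name) of the grant a privileged user would run so that the schema behind the OBIEE connection pool can run embedded R scripts:
SQL> GRANT RQADMIN TO bisample;
The schema name bisample is only an example; grant the role to whichever schema your RPD connection pool actually connects as.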
Step 3 - Install some ORE embedded R scripts in the Oracle Database To demonstrate the embedded in-database capabilities of Oracle R Enterprise, the OBIEE Sample App VM comes with a number of examples of embedded R scripts. These are defined as XML files on the OBIEE Sample App VM. To access these embedded ORE scripts you need to go to the following directory. cd /app/oracle/biee/user_projects/domains/bi/bidata/components/OBIS/advanced_analytics/script_repository When you list out the ORE xml files you will get the following list. AAScripts.xsd obiee.AirlineArrDelayPrediction.xml obiee.BalloonPlot.xml obiee.Clustering.xml obiee.Outliers.xml obiee.RegressionCorrelation.xml obiee.RegressionCreateModel.xml obiee.RegressionScoreModel.xml obiee.Regression.xml obiee.RImageEncSplit.xml obiee.RIMage.xml obiee.TimeSeriesForecast.xml obiee.VariableWidthBar.xml Sometimes you see references to 'registering' these scripts. What that actually means is that these R scripts get created in the Oracle Database. You can also create R scripts in the database using SQL and PL/SQL, or using some of the ORE R API functions. The next step is to load each of these scripts into the Oracle Database. OBIEE gives us a screen that can be used for this. To access this screen open the OBIEE Sample App. Then select the Administration menu option at the top of the screen. When the next screen appears, scroll down to the bottom and you will see 'Issue SQL'. Click on this. You can now use this Issue SQL screen to load these ORE xml files. In the SQL box you will need to type in the following command to load each of the files listed above. call NQSRegisterScript('flipper://obiee.AirlineArrDelayPrediction.xml') call NQSRegisterScript('flipper://obiee.BalloonPlot.xml') call NQSRegisterScript('flipper://obiee.Clustering.xml') call NQSRegisterScript('flipper://obiee.Outliers.xml') call NQSRegisterScript('flipper://obiee.RegressionCorrelation.xml') call NQSRegisterScript('flipper://obiee.RegressionCreateModel.xml') call NQSRegisterScript('flipper://obiee.RegressionScoreModel.xml') call NQSRegisterScript('flipper://obiee.Regression.xml') call NQSRegisterScript('flipper://obiee.RImageEncSplit.xml') call NQSRegisterScript('flipper://obiee.RImage.xml') call NQSRegisterScript('flipper://obiee.TimeSeriesForecast.xml') call NQSRegisterScript('flipper://obiee.VariableWidthBar.xml') The following screenshot shows you the call to the obiee.Outliers.xml file and the result that is displayed after the file has been successfully loaded. Step 4 - Start up OBIEE and try out the demos Now, after completing all of the above steps, when you run the Advanced Analytics examples in the Sample App Dashboard you will be running ORE.

Wiki Page: How to use DBMS_CRYPTO for Web Apps

As discussed in other articles, there are many developers who use Oracle APEX to develop robust web applications. They choose Oracle APEX because it’s simply a great tool. While APEX gets better with each new release, there are developers who want more control. They would like to know how to build native PL/SQL web applications. Oracle APEX uses Oracle XML Database (XDB) Server as its engine. The XDB Server is a multiprotocol web server. It lets you configure settings with the DBMS_XDB and DBMS_EPG PL/SQL Built-in packages. You also have the ability to configure the XDB Server to concurrently run standalone native PL/SQL web applications. This article builds on my earlier Configure Oracle’s XDB Server for Web Applications by demonstrating how you can encrypt a user’s password and verify a user’s password with the dbms_crypto built-in package. You learn how to create your own password function in this article and how to create a verification function. It would be convenient to call the encryption function the password function, but you can’t call it password. That’s because SQL*Plus already uses password to reset a user’s password. This article calls the password function the encrypt function. Fortunately, there’s no verification function in SQL*Plus. The verify function shows you how to verify an encrypted password. This article has three parts. They are: • Configuring dbms_crypto built-in package • Writing a custom encrypt function • Writing a custom verify function You should read through how to setup dbms_crypto before you try developing the custom encrypt and verify functions. The encrypt and verify functions should not be stored in plain text. That means you need to wrap them or deploy them in Java. A Java solution isn’t available in the Oracle Database 11g Express Edition, so this article presents a clear text and wrapped solution in PL/SQL Configuring DBMS_CRYPTO built-in function The internal Oracle schema should own the dbms_crypto built-in package. You connect as the system user to check ownership with the following query: SQL> COL package FORMAT A25 SQL> SELECT DISTINCT owner || '.' || name AS package 2 FROM dba_source 3 WHERE name = 'DBMS_CRYPTO'; The query should return the following result: PACKAGE ------------------------- SYS.DBMS_CRYPTO If the query doesn’t return the preceding value, you need to install the dbms_crypto built-in package. You can run the following two scripts from the $ORACLE_HOME/rdbms/admin directory by using the ?. The ? at the SQL*Plus prompt substitutes for the $ORACLE_HOME environment variable. SQL> @?/rdbms/admin/dbmsobtk.sql SQL> @?/rdbms/admin/prvtobtk.sql After you verify the installation of the dbms_crypto built-in package, you need to make sure your schema has access to the package. You can grant execute privilege with this syntax: SQL> GRANT EXECUTE ON dbms_crypto to student; You should now be able to call the dbms_crypto built-in package from the student schema. The next section shows you how to create and wrap the encrypt function. Writing a custom encrypt function A custom password function should take a plain text variable length string, and the function should return an encrypted variable length string. The following encrypt password function does exactly that: SQL> CREATE OR REPLACE 2 FUNCTION encrypt( password VARCHAR2 ) RETURN RAW IS 3 /* Declare local variables for encryption. */ 4 lv_key_string VARCHAR2(40) := 'EncryptKey'; 5 lv_key RAW(64); 6 lv_raw RAW(64); 7 lv_encrypted_data RAW(64); 8 BEGIN 9 /* Dynamic assignment. 
*/ 10 IF password IS NOT NULL THEN 11 /* Cast the password to a raw type. */ 12 lv_raw := utl_raw.cast_to_raw(password); 13 14 /* Convert to a RAW 64-character key. */ 15 lv_key := utl_raw.cast_to_raw(lv_key_string); 16 lv_key := RPAD(lv_key,64,'0'); 17 18 /* Encrypt the salary before assigning it to the object type attribute */ 19 lv_encrypted_data := dbms_crypto.encrypt( lv_raw 20 , dbms_crypto.encrypt_aes256 21 + dbms_crypto.chain_cbc 22 + dbms_crypto.pad_pkcs5 23 , lv_key); 24 ELSE 25 /* Raise an application error. */ 26 RAISE_APPLICATION_ERROR(-20001,'An empty string does not encrypt.'); 27 END IF; 28 29 /* Return a value from the function. */ 30 RETURN lv_encrypted_data; 31 END encrypt; 32 / Line 15 uses the utl_raw package to cast the string into a RAW data type. Line 16 pads the key to a length of 64 digits. The padding is required for the key value when you submit it as the fifth call parameter to the dbms_crypto.encrypt function. Line 19 through 23 demonstrates a call to the encrypt function of the dbms_crypto package. The encrypt function returns a RAW data type, which you can later store in a VARCHAR2 column. Line 4 discloses the seed value, which shouldn’t happen. You should actually deploy a function like this with the dbms_ddl package’s create_wrapped procedure. It ensures that the seed value isn’t visible inside the data dictionary by querying the dba_source view. It is possible to decrypt the encrypted values, but you don’t really need to do that. You should avoid providing a decrypting function unless you want to audit user passwords. You can test the basic password function with a simple query like this: SQL> COL encrypted_data FORMAT A25 SQL> SELECT encrypt('Johann Schmidt') AS encrypted_data 2 FROM dual It should display an unreadable string: ENCRYPTED_DATA ENCRYPTED_SIZE -------------------------------------- -------------- FEA2D3A25B … CD3E115619 64 This test verifies that the encrypt function works. Writing a custom verify function The verify function takes the user’s name and an unencrypted password. The verify function returns a 1 if the unencrypted password encrypts to the same value found in the application’s Access Control List (ACL) table. The verify function returns a 0 when the unencrypted password fails to encrypt to a matching value in the ACL table. You need to create a ACL table before you write the verify function. You can create a simple UAC app_user table with the following syntax: SQL> CREATE TABLE app_user 2 ( app_user_id NUMBER CONSTRAINT pk_app_user PRIMARY KEY 3 , app_user_name VARCHAR2(30) 4 , app_password VARCHAR2(64)); Then, you create the app_user_s sequence for the app_user_id surrogate key column with the following sequence: CREATE SEQUENCE app_user_s START WITH 1001; After creating the app_user table, you insert a row for a user. The row has a plain text user name and an encrypted password. You use the following INSERT statement to add the row: INSERT INTO app_user VALUES ( app_user_s.NEXTVAL ,'Johann Schmidt' , encrypt('Kitty@Spencer!1234')); With an ACL table, you can create the verify function with the following code: SQL> CREATE OR REPLACE 2 FUNCTION verify 3 ( user_name VARCHAR2 4 , password VARCHAR2 ) RETURN NUMBER IS 5 6 /* Default return value. */ 7 lv_result NUMBER := 0; 8 9 /* Application user cursor. */ 10 CURSOR c (cv_user_name VARCHAR2) IS 11 SELECT app_password 12 FROM app_user 13 WHERE app_user_name = cv_user_name; 14 BEGIN 15 /* Compare encrypted password. 
*/ 16 FOR i IN c(user_name) LOOP 17 IF encrypt(password) = i.app_password THEN 18 lv_result := 1; 19 END IF; 20 END LOOP; 21 22 /* Return the value. */ 23 RETURN lv_result; 24 END; 25 / The verify function sets a default return value of 0 for false. It uses a parameterized cursor on lines 10 through 13 to find the encrypted password. The for loop on lines 16 through 20 open the cursor and on line 17 the if statement compares the encrypted password against the stored encrypted password in the app_user table. You can test the verify function with the following anonymous block: SQL> DECLARE 2 /* Declare print variable. */ 3 lv_output VARCHAR2(64); 4 BEGIN 5 /* Test function returns: 6 || ====================== 7 || - True returns 1 8 || - False returns 0 9 */ 10 IF verify('Johann Schmidt','Kitty@Spencer!1234') = 1 THEN 11 dbms_output.put_line('Result [It worked!]'); 12 ELSE 13 dbms_output.put_line('Result [It failed!]'); 14 END IF; 15 END; 16 / It prints Result [It worked!] This article has shown you how to write, deploy, and test native PL/SQL encryption and verification functions. You can deploy these to support a PL/SQL web applications when using an unsecured DAD, as qualified in my prior Configure Oracle’s XDB Server for Web Applications.
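Since the article recommends deploying the encrypt function with the dbms_ddl package's create_wrapped procedure so that the key seed is not readable, here is a minimal sketch of that call (it uses a trivial stand-in function rather than the article's encrypt function, so treat it as an illustration only):
SQL> BEGIN
  2    dbms_ddl.create_wrapped(
  3      'CREATE OR REPLACE FUNCTION demo_secret RETURN VARCHAR2 IS '
  4      || 'BEGIN RETURN ''hidden''; END demo_secret;');
  5  END;
  6  /
SQL> SELECT text FROM user_source WHERE name = 'DEMO_SECRET';
The second query should return wrapped, unreadable source rather than the plain text body; you would pass the full CREATE OR REPLACE FUNCTION encrypt statement, key string included, through the same call.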

Blog Post: Oracle12 New PL/SQL Role

This technical tip came up at a recent user group meeting where I was doing my Oracle12 New Features presentation. One of the many features I like is the fact that you can now grant a role to PL/SQL routines (I would think mostly functions), giving them access to the tables while only allowing users to access the PL/SQL routine. I.e., you grant execute privileges on the PL/SQL as you used to, but it was my understanding that the user would also need at least read access to the data the PL/SQL routine was accessing. Several people had told me that they are currently doing something similar in Oracle 11. Robert Freeman chimed in with this: “So this feature allows you to assign roles to Pl/SQL procedures for authentication rather than the procedure depending on using an existing users rights, or the callers rights. So, you can have PL/SQL security independent of a user. Great way to create a data access layer, for example. This way, you grant users execute on the pl/sql or a role, and no user needs access to the actual objects.” So yes, you can do something similar in Oracle 11, but it takes more grants, whereas in Oracle 12 it is one role granted to the PL/SQL routine! Thanks for clarifying, Robert. Dan Hotka Oracle ACE Director Author/Instructor/CEO
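Here is a minimal sketch of the Oracle 12c syntax being described, using made-up names (hr_reader, emp_count, app_user) purely for illustration:
SQL> CREATE ROLE hr_reader;
SQL> GRANT SELECT ON hr.employees TO hr_reader;
SQL> CREATE OR REPLACE FUNCTION emp_count RETURN NUMBER AUTHID CURRENT_USER IS
  2    n NUMBER;
  3  BEGIN
  4    SELECT COUNT(*) INTO n FROM hr.employees;
  5    RETURN n;
  6  END;
  7  /
SQL> GRANT hr_reader TO FUNCTION emp_count;
SQL> GRANT EXECUTE ON emp_count TO app_user;
The role is attached to the invoker's rights function, so app_user can call emp_count without holding any direct privilege on hr.employees.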

Comment on Backup and Restore in Oracle 12c - quick guide

Nice overview indeed, thank you. Foued

Wiki Page: External Tables with Preprocessing

A question that comes up now and again is whether there is a way in Oracle Database 11g Express Edition to mimic some behavior in the Oracle Standard or Enterprise editions. Many of these questions arise because developers want to migrate a behavior they’ve implemented in Java to the Express Edition. Sometimes the answer is no but many times the answer is yes. The yes answers come with a how. This article answers the question: “How can I read an operating system’s file directory without an embedded Java Virtual Machine (JVM)?” These developers have read or implemented logic like that found in my earlier “ Using DBMS_JAVA to Read External Files ” article. The answer is simple. You need to use a preprocessing script inside an external table. That’s what you will learn in this article, but if you’re not familiar with external tables you should read this other “ External Tables ” article. External tables let you access plain text files with SQL*Loader or Oracle’s proprietary Data Pump files. You typically create external tables with Oracle Data Pump when you’re moving large data sets between database instances. External tables use Oracle’s virtual directories. An Oracle virtual directory is an internal reference in the data dictionary. A virtual directory maps a unique directory name to a physical directory on the local operating system. Virtual directories were simple before Oracle Database 12c gave us the multitenant architecture. In a multitenant database there are two types of virtual directories. One services the schemas of the Container Database (CDB) and it’s in the CDB’s SYS schema. The other services the schemas of a Pluggable Database (PDB) and it’s in the ADMIN schema for the PDB. You can create a CDB virtual directory as the SYSTEM user with the following syntax in Windows: SQL> CREATE DIRECTORY upload AS 'C:\Data\Upload'; or, like this in Linux or Unix: SQL> CREATE DIRECTORY upload AS '/u01/app/oracle'; There are some subtle differences between these two statements. Windows directories or folders start with a logical drive letter, like C:\, D:\, and so forth. Linux and Unix directories start with a mount point like /u01. As you can read in the “ External Tables ” article, you need to change the ownership of external files and directories to the oracle user and, by default, the oracle user’s default dba group. Likewise, you should change the privilege of the containing directory to 755 (owner has read, write, and execute privileges; group and others have read and execute privileges). The balance of this article is broken into two pieces: configuring a working external table with preprocessing, and troubleshooting data cartridge errors. External tables with preprocessing example There are four database steps to creating this example. The first database step requires you to create three virtual directories. The syntax for the three statements is: SQL> CREATE DIRECTORY upload AS '/u01/app/oracle/upload'; SQL> CREATE DIRECTORY log AS '/u01/app/oracle/log'; SQL> CREATE DIRECTORY preproc AS '/u01/app/oracle/preproc'; The upload directory hosts the files you want to discover for upload. The log directory hosts the log files for the external tables. The preproc directory hosts the executable program, which generates a list of files currently in the upload directory. After creating the virtual directories or before creating them, you should create the physical directories in the Linux operating system. The virtual directories can only point to something when it actually exists.
Moreover, they work like Oracle’s synonyms that point to other objects in the database. The physical files need to be in a directory tree that is navigable by the oracle user, and the oracle user and its default primary dba group need to own them. You can use the following command to change ownership when you’re the root user: # chown -R oracle:dba /u01/app/oracle The second database step requires that you grant privileges on the virtual directories to the student user. You can do that with the following syntax: SQL> GRANT read ON DIRECTORY upload TO student; SQL> GRANT read, write ON DIRECTORY log TO student; SQL> GRANT read, execute ON DIRECTORY preproc TO student; The upload directory requires read-only privileges. The log directory requires read and write privileges. The read privilege lets it find files and the write privilege lets it append to log files when they already exist. The preproc directory requires read and execute privileges. The read privilege is the same as that explained earlier. The execute privilege lets you run the preprocessing program file. The third database step requires creating an external table with preprocessing. The following script creates the sample table: SQL> CREATE TABLE directory_list 2 ( file_name VARCHAR2(60)) 3 ORGANIZATION EXTERNAL 4 ( TYPE oracle_loader 5 DEFAULT DIRECTORY preproc 6 ACCESS PARAMETERS 7 ( RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII 8 PREPROCESSOR preproc:'list2dir.sh' 9 BADFILE 'LOG':'dir.bad' 10 DISCARDFILE 'LOG':'dir.dis' 11 LOGFILE 'LOG':'dir.log' 12 FIELDS TERMINATED BY ',' 13 OPTIONALLY ENCLOSED BY "'" 14 MISSING FIELD VALUES ARE NULL) 15 LOCATION ('list2dir.sh') ) 16 REJECT LIMIT UNLIMITED; Line 5 designates the default directory as preproc because the location of the executable file should be in the preproc directory. Line 8 designates that there is a preprocessing step, and it identifies the virtual directory and physical file name inside single quotes. Line 15 identifies the source file for the external table, which is an executable program. Next, you need to create the bash file to get and return a directory list. Before you write that file, you need to understand that preprocessing script files don’t inherit a $PATH environment variable from Oracle. That means a simple command like this ls /u01/app/oracle/upload | cat becomes a bit more complex, like this: /usr/bin/ls /u01/app/oracle/upload | /usr/bin/cat Create a list2dir.sh file in the /u01/app/oracle/preproc directory with the preceding command line. Then, make sure oracle is the owner with a primary dba group and the privileges are 755 on the file. The command to set the privileges is: # chmod 755 /u01/app/oracle/preproc/list2dir.sh Having completed that Linux operating system step, you should probably put some files in the upload directory. You can create empty files with the touch command at the Linux command line for this example. The fourth database step lets you query the external table, which runs the preprocessing program and returns its results as values in the table: SQL> SELECT * FROM directory_list; It should return something like this: FILE_NAME ------------------------------ character.csv transaction_upload2.csv transaction_upload.csv This example shows you how to implement external tables with preprocessing directives. Troubleshooting external tables with preprocessing There are several common errors that you run into when creating these types of external tables.
The top three are: You failed to qualify the path element of executables: select * from directory_list * ERROR at line 1: ORA-29913: error in executing ODCIEXTTABLEFETCH callout ORA-29400: data cartridge error KUP-04095: preprocessor command /u01/app/oracle/preprocess/list2dir.sh encountered error "/u01/app/oracle/preprocess/list2dir.sh: line 1: ls: No such file or directory Forgetting or removing the fully qualified path from before the ls command causes this error because preprocessor scripts inherit an empty $PATH environment variable. You can fix this error by putting the fully qualified path in front of the ls command. You neglected to make the Linux script file executable: SQL> select * from directory_list * ERROR at line 1: ORA-29913: error in executing ODCIEXTTABLEFETCH callout ORA-29400: data cartridge error KUP-04095: preprocessor command /u01/app/oracle/preproc/list2dir.sh encountered error "error during exec: errno is 13 Forgetting to change the list2dir.sh shell script’s file privileges causes this error. You can fix this error by using the chmod command to change the file’s privileges to 755. The first value is seven and it sets the owner’s file privilege to read, write, and execute. The second and third values are a five, and they respectively set the privileges of the primary group and all others. A five sets the file privileges to read and execute. You neglected to change the ownership on the preprocessing file: SQL> select * from directory_list * ERROR at line 1: ORA-29913: error in executing ODCIEXTTABLEFETCH callout ORA-29400: data cartridge error KUP-04095: preprocessor command /u01/app/oracle/preprocess/list2dir.sh encountered error "/u01/app/oracle/preprocess/list2dir.sh: line 3: rm: No such file or directory /u01/app/oracle/preprocess/list2dir.sh: line 7: ls: No such file or directory Forgetting to change the list2dir.sh shell script’s file ownership causes this error. You can fix this error by using the chown command to change the file’s ownership. You now know how to create and troubleshoot common errors with Oracle’s external tables when you add preprocessing. You can find the setup code at this URL .
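If the external table still raises errors after these fixes, one more check worth making (my own suggestion, not part of the original article) is that the three virtual directories really point where you think they do:
SQL> SELECT directory_name, directory_path
  2  FROM all_directories
  3  WHERE directory_name IN ('UPLOAD','LOG','PREPROC');
The paths returned must match the physical directories whose ownership and permissions you changed above.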

Blog Post: Which Oracle Development Tool?

I was just in Istanbul last week for the Turkish Oracle User Group TROUG Days 2016, where I gave my popular Oracle development tools overview presentation covering Forms, APEX, ADF, JET and MAF. One interesting thing from discussions with Oracle developers is that everybody tends towards wanting one tool for all purposes. It is a natural and understandable response for Oracle developers with an Oracle Forms background, because for many years we only had one tool. However, with the tools available today, there is no need to limit ourselves to one tool. Almost everyone with an Oracle database should be able to use Oracle APEX. It comes for free with the database and is a very capable tool. If you are using Oracle Forms, you should also be using APEX. If you are using Oracle ADF, you should also be using APEX. Some applications are better built in Forms than APEX. Some applications are better built in ADF than APEX.

Blog Post: Ensuring Data Protection Using Oracle Flashback Features - Part 2

Introduction In the previous article we reviewed the basics of Oracle Data Protection and explored Oracle Data Protection solutions. We also covered the data protection objectives, which are measured by the RPO (Recovery Point Objective) and RTO (Recovery Time Objective). In this article we will review the first Oracle Flashback feature, named "Flashback Query", which was introduced in Oracle 9i, and we will see how this feature works "behind the scenes". Oracle Flashback Query allows querying a table's data as of a specific point in the past by providing either a TIMESTAMP or an SCN. Demonstration In the first step below, we will create a sample table named “EMP” with a single row: SQL> CREATE TABLE EMP (ID NUMBER, NAME VARCHAR2(20)); Table created. SQL> insert into EMP values (1, 'DAVID'); 1 row created. SQL> select * from EMP; ID NAME ---------- -------------------- 1 DAVID SQL> commit; Commit complete. Now, we will determine the current SCN and time: SQL> select current_scn from v$database; CURRENT_SCN ----------- 476372816 SQL> select to_char(systimestamp, 'YYYY-MM-DD HH24:MI:SS') CURRENT_TIME from dual; CURRENT_TIME ------------------- 2016-01-04 14:37:12 In the final step, we will update the row, and by using Flashback Query we will be able to view the contents of the table prior to the data modifications: SQL> update emp set name = 'ROBERT'; 1 row updated. SQL> select * from EMP; ID NAME ---------- -------------------- 1 ROBERT SQL> commit; Commit complete. SQL> select * from EMP as of scn 476372816; ID NAME ---------- -------------------- 1 DAVID SQL> select * from EMP as of TIMESTAMP TO_TIMESTAMP('2016-01-04 14:37:12', 'YYYY-MM-DD HH24:MI:SS'); ID NAME ---------- -------------------- 1 DAVID This feature can also be very useful for investigating the contents of a table at a specific point in time in the past. It can also be used to restore the value of a row, a set of rows, or even the entire table. In the following example, an update sets the name of the employee with ID #1 back to its value as of a specific point in time in the past: SQL> update EMP set name = (select name from EMP as of timestamp TO_TIMESTAMP('2016-01-04 14:37:12', 'YYYY-MM-DD HH24:MI:SS') WHERE ID=1) WHERE ID=1; 1 row updated. SQL> select * from EMP; ID NAME ---------- -------------------- 1 DAVID It is possible to specify a relative time by subtracting an interval from the current timestamp using the INTERVAL clause, as follows: select * from emp as of timestamp (SYSTIMESTAMP - INTERVAL '60' MINUTE); In the following example, an INSERT AS SELECT command uses flashback query in order to insert all the rows that existed in the “emp” table 2 hours ago: insert into emp (select * from emp as of timestamp (SYSTIMESTAMP - INTERVAL '2' HOUR)); Note : It is possible to convert an SCN to a TIMESTAMP using the SCN_TO_TIMESTAMP function. How does the Flashback Query feature work? The Flashback Query feature uses the contents of the undo tablespace. The undo tablespace is a key component in Oracle Databases. It consists of undo segments which hold the "before" images of the data that has been changed by users running transactions. The undo is essential for rollback operations, data concurrency and read consistency. In order to use Flashback Query, the instance must use automatic undo management, by setting the UNDO_MANAGEMENT initialization parameter to AUTO (the default since version 11gR1). It is also important to set a proper value for the UNDO_RETENTION parameter.
The UNDO_RETENTION parameter specifies the low threshold (in seconds) for the undo retention period (it defaults to 900, i.e. 15 minutes). It is important to bear in mind the different behaviors of this parameter in a fixed-size undo tablespace vs. an autoextend undo tablespace. Fixed-Size Undo Tablespace For fixed-size undo tablespaces, the UNDO_RETENTION parameter is ignored and Oracle automatically tunes for the maximum possible undo retention period based on the undo tablespace size and undo usage history. Autoextend Undo Tablespace For an autoextend undo tablespace, the UNDO_RETENTION parameter specifies the minimum retention period Oracle will attempt to honor. When space in the undo tablespace becomes low (due to running transactions which generate undo records), Oracle will increase the tablespace size (up to the MAXSIZE limit). Once it reaches the MAXSIZE upper limit, it will begin to overwrite unexpired undo information; therefore, the retention period defined by the UNDO_RETENTION parameter is not guaranteed. This is why the actual undo retention period might be lower or even higher than the one defined in the UNDO_RETENTION parameter. The actual undo retention period can be obtained by querying the TUNED_UNDORETENTION column in the V$UNDOSTAT dynamic performance view. Note : It is possible to specify the RETENTION GUARANTEE clause in the CREATE UNDO TABLESPACE or ALTER TABLESPACE commands, and then Oracle will never overwrite unexpired undo data even if it means that transactions will fail due to lack of space in the undo tablespace.
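To round off these notes, here is a minimal sketch (my own addition) showing the SCN-to-timestamp conversion and how to check the undo retention Oracle has actually tuned; the SCN is the one captured earlier in the demonstration:
SQL> SELECT SCN_TO_TIMESTAMP(476372816) AS ts FROM dual;
SQL> SELECT MAX(tuned_undoretention) AS tuned_seconds FROM v$undostat;
If the tuned retention is shorter than the age of the point in time you are flashing back to, the Flashback Query will fail with ORA-01555 (snapshot too old).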

Blog Post: OBIA 11.1.1.10.1 Installation

The following link describes how to install OBIA 11.1.1.10.1 on Linux 6.7: Here . Thank you, Osama Mustafa

Blog Post: EBS Clone R12.2.4 Guide

In this document I will show how to clone EBS R12.2.4 step by step: Link Here . Thanks, Osama

Forum Post: RE: Mapping of US States with the two character State ID

You can find the official USPS abbreviations at about.usps.com/.../state-abbreviations.htm HTH -- Mark D Powell --
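If you need the mapping inside the database rather than just the reference page, a minimal sketch of a lookup table (table and column names are only an example) would be:
SQL> CREATE TABLE us_state
  2  ( state_code CHAR(2) CONSTRAINT pk_us_state PRIMARY KEY
  3  , state_name VARCHAR2(30) NOT NULL );
SQL> INSERT INTO us_state VALUES ('IA','Iowa');
SQL> INSERT INTO us_state VALUES ('TX','Texas');
You would load the remaining rows from the USPS list linked above and join on state_code wherever the two-character ID appears.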

Blog Post: Regular Expressions and Matching Character Lists

It is not easy for me to explain how matching character lists work, so let's go straight to an example. Unlike the “.”, which matches any character, lists let us “limit” the match to a specific set of characters. With the following expression: x[abc]z I am saying that only those strings that contain an “a”, a “b” or a “c” between the “x” and the “z” will match. Let's see (in the query output, 'Hay coincidencia' means “there is a match” and 'No hay coincidencia' means “there is no match”): select texto, 'x[abc]z' regexp, case when regexp_like(texto, 'x[abc]z') then 'Hay coincidencia' else 'No hay coincidencia' end "COINCIDENCIA?" from prueba_regexp TEXTO REGEXP COINCIDENCIA? ----------------------------------- ------- ------------------- xz x[abc]z No hay coincidencia xaz x[abc]z Hay coincidencia xabz x[abc]z No hay coincidencia xbz x[abc]z Hay coincidencia xabcz x[abc]z No hay coincidencia xcz x[abc]z Hay coincidencia 6 rows selected. Note that the matching list must be “enclosed” in square brackets (“ [] ”). Something interesting to point out is that the “list” can be turned into a “range”: select texto, 'x[a-c]z' regexp, case when regexp_like(texto, 'x[abc]z') then 'Hay coincidencia' else 'No hay coincidencia' end "COINCIDENCIA?" from prueba_regexp TEXTO REGEXP COINCIDENCIA? ----------------------------------- ------- ------------------- xz x[a-c]z No hay coincidencia xaz x[a-c]z Hay coincidencia xabz x[a-c]z No hay coincidencia xbz x[a-c]z Hay coincidencia xabcz x[a-c]z No hay coincidencia xcz x[a-c]z Hay coincidencia 6 rows selected. Using ranges we can build expressions such as: There must be a letter (from “a” to “z”) There must be a number (from zero to nine) select texto, 'El numero [a-z]' regexp, case when regexp_like(texto, 'El numero [a-z]') then 'Hay coincidencia' else 'No hay coincidencia' end "COINCIDENCIA?" from prueba_regexp TEXTO REGEXP COINCIDENCIA? ------------------------------------- ------------------- El numero 2 El numero [a-z] No hay coincidencia El numero tres El numero [a-z] Hay coincidencia El numero 4 El numero [a-z] No hay coincidencia El numero cinco El numero [a-z] Hay coincidencia El numero 0 El numero [a-z] No hay coincidencia select texto, 'El numero [0-9]' regexp, case when regexp_like(texto, 'El numero [0-9]') then 'Hay coincidencia' else 'No hay coincidencia' end "COINCIDENCIA?" from prueba_regexp TEXTO REGEXP COINCIDENCIA? ---------------- ---------------- ------------------- El numero 2 El numero [0-9] Hay coincidencia El numero tres El numero [0-9] No hay coincidencia El numero 4 El numero [0-9] Hay coincidencia El numero cinco El numero [0-9] No hay coincidencia El numero 0 El numero [0-9] Hay coincidencia Another interesting metacharacter we can use in matching character lists is the non-matching metacharacter ( ^ ). By combining lists, ranges and the non-matching metacharacter we can build expressions such as: There must not be a letter (from “a” to “z”) There must not be a number (from zero to nine) select texto, 'El numero [^a-z]' regexp, case when regexp_like(texto, 'El numero [^a-z]') then 'Hay coincidencia' else 'No hay coincidencia' end "COINCIDENCIA?" from prueba_regexp TEXTO REGEXP COINCIDENCIA?
--------------------- ---------------- ------------------- El numero 2 El numero [^a-z] Hay coincidencia El numero tres El numero [^a-z] No hay coincidencia El numero 4 El numero [^a-z] Hay coincidencia El numero cinco El numero [^a-z] No hay coincidencia El numero 0 El numero [^a-z] Hay coincidencia select texto, 'El numero [^0-9]' regexp, case when regexp_like(texto, 'El numero [^0-9]') then 'Hay coincidencia' else 'No hay coincidencia' end "COINCIDENCIA?" from prueba_regexp TEXTO REGEXP COINCIDENCIA? --------------------- ---------------- ------------------- El numero 2 El numero [^0-9] No hay coincidencia El numero tres El numero [^0-9] Hay coincidencia El numero 4 El numero [^0-9] No hay coincidencia El numero cinco El numero [^0-9] Hay coincidencia El numero 0 El numero [^0-9] No hay coincidencia Regular expressions really are a powerful tool, aren't they? See you soon!
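A natural next step after ranges and the non-matching metacharacter (added here as an extra example, not part of the original post) is the POSIX character classes, which Oracle's regular expression functions also support: select texto, 'El numero [[:digit:]]' regexp, case when regexp_like(texto, 'El numero [[:digit:]]') then 'Hay coincidencia' else 'No hay coincidencia' end "COINCIDENCIA?" from prueba_regexp [[:digit:]] behaves like [0-9] and [[:alpha:]] behaves like [a-zA-Z], so this query returns the same matches as the 'El numero [0-9]' example above.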

Blog Post: Advanced Analytics in Oracle Data Visualization Desktop

Oracle Data Visualization Desktop includes a number of advanced analytics features. In a previous blog post I showed you how to go about installing Oracle R Distribution on your desktop/client machine. This will allow you to make use of some of the advanced analytics features of Oracle Data Visualization Desktop. The best way to get started with the advanced analytics features of Oracle Data Visualization Desktop is to ignore that these features exist. Start by creating your typical analytics, charts, etc. Only then should you really look at adding some of the advanced analytics features. To access the advanced analytics features you can select the advanced analytics icon from the menu bar. It is the icon with the magnifying glass. When you have clicked on this icon, the advanced analytics menu opens, displaying the 5 advanced analytics options available to you. With your chart/graphic already created, you can then click on one of the advanced analytics options and drag it onto your chart or onto the palette for the chart. For example, in the following diagram the Outlier option was selected and dragged into the Color section. This will then mark outlier data on your chart with a different color. You can follow a similar approach with all the other advanced analytics options. Click and drag. It is that simple. As you add each advanced analytics option, the chart will be updated automatically for you. As an alternative to clicking and dragging from the chart options palette, you can right-click on the chart (or click on the wheel in the top right-hand corner of the chart window) and then select the advanced analytics feature you want from the menu, or, what I prefer doing, select Properties from that menu. When you do this a new window opens, and when you click on the icon with the magnifying glass you get to add and customize the advanced analytics features. WARNING I would urge caution when you are reading other demonstrations about Oracle Data Visualization Desktop that show examples of predictive analytics. There are a few blog posts out there and also some videos too. What they are actually showing you is the embedded R execution feature of Oracle R Enterprise. Oracle R Enterprise is part of the Oracle Advanced Analytics Option, which is a licensed option. So if you follow these blog posts and videos, thinking that you can do this kind of advanced analytics, you could be getting into license issues. This confusion is not helped by comments like the following on the Oracle website. " Predictive Analytics: Analytics has progressed from providing oversight to offering insight, and now to enabling foresight. Oracle Data Visualization supports that progression, delivering embedded predictive capabilities that enable anyone to see trend lines and other visuals with a click, and extend their analysis using a free R download. " Personally I find this a bit confusing. Yes, you can perform some advanced and predictive analytics with Oracle Data Visualization, but you need to ensure that you are using the client-side R installation for your analytics. As with all licensing questions, you should discuss them with your Oracle sales representative.

Blog Post: Locking and Unlocking a Page in Oracle APEX 5.0

When a team of developers is working on an application, it is very important to have a feature that allows a developer to lock the pages he or she is working on from the rest of the team, so that the other developers cannot edit the page or pages that developer is currently working on until the pages are released for access by all developers. To meet this need, Oracle APEX provides "Page Locks". Determining whether a page is locked In the Report view of our application we can see the list of all the pages and a column that indicates whether each page is locked or not. If the little padlock is open, the page is unlocked; if the padlock is closed, the page is locked. Locking and unlocking a page from the application home page From the Workspace home page, click on the Application Builder. Select the application. In the search bar, click on the "View Report" icon or the "Detail View" icon if it is available. Click on the Lock icon (the padlock). Enter a comment in the comment field. Click on the "Lock Pages" button. To unlock, simply click on the closed padlock and then click on the "Unlock" button in the pop-up window. Locking and unlocking pages from the Page Locks page in the application utilities From the Workspace home page, click on the Application Builder. Select the application. Click on the Utilities icon. Under "Page Specific Utilities" on the right-hand side of the page, click on "Cross Page Utilities". Select "Page Locks". Enter a comment. Click on the "Lock Selection" button. To unlock, simply select the pages we want to unlock and click on the "Unlock Selection" button, and the padlock shows as open again. Locking and unlocking a page from the Page Designer Open the Page Designer for the page we want to lock and click on the open padlock icon in the icon bar, then enter a comment in the pop-up window that opens. The icon then turns green with the padlock closed; to unlock the page, click on the icon and then click on the Unlock button in the pop-up window. In this very simple way we can keep control of the pages we are editing and not worry about our development team modifying something we are currently working on. See you soon!