
Blog Post: Latest Video Upload Part #1

Recently I was working on some installation and configuration for Fusion, and the resulting videos are now uploaded to my channel:
1- ODI 11.1.1.9 Installation here
2- Oracle Enterprise Manager 13c Installation here
3- Oracle BI 11.1.1.9 Installation On Linux here
Thanks, Osama

Blog Post: EM13c- BI Publisher Reports

How much do you know about the big push from Information Publisher reporting to BI Publisher reports in Enterprise Manager 13c? Be honest now, Pete Sharman is watching... I promise there won't be a quiz at the end of this post, but it's important for everyone to start recognizing the power behind the new reporting strategy. Pete was the PM over the big push in EM13c and has a great blog post with numerous resource links, so I'll leave the quizzing to him!

IP Reports are incredibly powerful and I don't see them going away soon, but they have a lot of limitations, too. With the "harder" push to BI Publisher in EM13c, users receive a more robust reporting platform that is able to support the functionality required of an IT infrastructure tool.

BI Publisher

You can access BI Publisher in EM13c from the Enterprise drop-down menu. There's a plethora of reports already built out for you to utilize! These reports access only the OMR (Oracle Management Repository) and cover numerous categories:

Target information and status
Cloud
Security
Resource and consolidation planning
Metrics, incidents and alerting

Note: Please be aware that the license for BI Publisher included with Enterprise Manager only covers reporting against the OMR and not any other targets DIRECTLY. If you decide to build reports against data residing in targets outside the repository, each of those targets will need to be licensed.

Many of the original reports converted over from IP Reports were converted by a wonderful Oracle partner, Blue Medora, who are well known for their VMware plugins for Enterprise Manager.

BI Publisher Interface

Once you click on one of the reports, you'll be taken from the EM13c interface to the BI Publisher one. Don't panic when that screen changes - it's supposed to do that. You'll be brought to the Home page, where you have access to your catalog of reports (it mirrors the reports in the EM13c reporting interface), the ability to create new reports, the option to open reports that you have drafts of or that are local to your machine (not uploaded to the repository), and authentication information. In the left-hand sidebar, you'll find menu options that duplicate some of what is in the top menu, plus access to tips to help you get more acquainted with BI Publisher. This is where you'll most likely access the catalog, create reports, and download local BIP tools to use on your desktop.

Running Standard Reports

Running a standard, pre-created report is pretty easy. This is a report that's already had the template format created for you and the data sources linked. Oracle has tried to create a number of reports in the categories it thought most IT departments would need, but let's just run two to demonstrate.

Let's say you want to know about Database Group Health. There's not a lot connected to my small development environment (four databases: three in the Oracle Public Cloud and one on-premises), and this report is currently aimed at my EM repository. This limits the results, but as you can see, it shows the current availability, the current number of incidents, and compliance violations.

We could also take a look at what kinds of targets exist in the Enterprise Manager environment, or at who has powerful privileges in the environment. These are just a couple of the dozens of reports available to you that can be run, copied, edited, and sourced for your own environment's reporting needs out of BI Publisher.
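To give a taste of what a custom report's data model against the OMR can look like, here is a minimal SQL sketch (assuming the standard SYSMAN.MGMT$TARGET repository view; the column list is illustrative):

-- List every target registered in the EM repository, grouped by type;
-- this is the kind of query a BI Publisher data set against the OMR runs.
SELECT target_name,
       target_type,
       host_name
  FROM sysman.mgmt$target
 ORDER BY target_type, target_name;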
I'd definitely recommend that if you haven't checked out BI Publisher, you spend a little time on it and see how much it can do!

Tags: BI Publisher, em13c, Enterprise Manager

Copyright © DBA Kevlar [EM13c- BI Publisher Reports], All Rights Reserved. 2016.

Blog Post: Oracle 12c: Lost your PDB’s XML manifest file? Here’s how you can recover it

In this post, I will demonstrate how we can recover a pluggable database's XML manifest file in the event of the XML manifest file being lost or corrupted.

When we UNPLUG a pluggable database from a container database (CDB), an XML manifest file stores the metadata (description) of the UNPLUGGED pluggable database. We can use this XML manifest file to PLUG the UNPLUGGED pluggable database into any compatible container database (CDB) later. In the following example, I am going to demonstrate the method to recover an XML manifest file that has been lost or corrupted.

To start with my demonstration, I have a container database with a number of pluggable databases, as shown below.

---//
---// list of pluggable databases //---
---//
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CDB1_PDB_1                     READ WRITE NO
         4 CDB1_PDB_2                     READ WRITE NO
         5 CDB1_PDB_4                     READ WRITE NO

Let's UNPLUG one of these pluggable databases.

---//
---// UNPLUG a pluggable database //---
---//
SQL> alter pluggable database CDB1_PDB_4 close;

Pluggable database altered.

SQL> alter pluggable database CDB1_PDB_4 unplug into '/data/oracle/orpcdb1/template/cdb1_pdb4.xml';

Pluggable database altered.

We have UNPLUGGED the pluggable database named CDB1_PDB_4, keeping the PDB metadata (description) in the XML manifest file '/data/oracle/orpcdb1/template/cdb1_pdb4.xml'. We can later use this XML manifest file to PLUG the pluggable database back into any compatible container database (CDB).

Let's drop the UNPLUGGED pluggable database from the current container. I am going to keep the datafiles, so that we can later PLUG the pluggable database back into the container.

---//
---// drop UNPLUGGED pluggable database from container //---
---//
SQL> drop pluggable database CDB1_PDB_4 keep datafiles;

Pluggable database dropped.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CDB1_PDB_1                     READ WRITE NO
         4 CDB1_PDB_2                     READ WRITE NO

We have dropped the UNPLUGGED pluggable database CDB1_PDB_4 from the container. For the purpose of this demonstration, I have (artificially) corrupted the XML manifest file '/data/oracle/orpcdb1/template/cdb1_pdb4.xml' which I had used to UNPLUG the pluggable database. Now, I am going to try PLUGGING the UNPLUGGED pluggable database back into the container, as shown below.

---//
---// PLUG the UNPLUGGED pluggable database back to container //---
---//
SQL> create pluggable database CDB1_PDB_4 using '/data/oracle/orpcdb1/template/cdb1_pdb4.xml'
  2  NOCOPY
  3  TEMPFILE REUSE;
create pluggable database CDB1_PDB_4 using '/data/oracle/orpcdb1/template/cdb1_pdb4.xml'
*
ERROR at line 1:
ORA-65026: XML metadata file error : LPX-00007: unexpected end-of-file encountered

Since I had corrupted the XML manifest file, I see an error (ORA-65026) while trying to PLUG the UNPLUGGED pluggable database using that file. We will not be able to PLUG the UNPLUGGED database until we have a VALID XML manifest file representing the metadata (description) of the UNPLUGGED pluggable database.

Here, the DBMS_PDB package comes to our rescue. Oracle provides the DBMS_PDB.RECOVER procedure, which can be used to regenerate (recover) the XML manifest file for an UNPLUGGED pluggable database. This procedure has the following syntax.
---//
---// Syntax for DBMS_PDB.RECOVER procedure //---
---//
DBMS_PDB.RECOVER (
  pdb_descr_file IN VARCHAR2,
  pdb_name       IN VARCHAR2,
  filenames      IN VARCHAR2);

Where:
pdb_descr_file - path (name) of the XML manifest file in which to store the pluggable database metadata (description)
pdb_name - name of the pluggable database
filenames - comma-separated list of paths/directories containing the datafiles of the pluggable database

The DBMS_PDB.RECOVER procedure takes three arguments, as shown above. We can pass any path/name for the XML manifest file, as well as any name for the pluggable database. However, we must know and pass the location of all the datafiles of the pluggable database for which we want to recover the manifest file.

Let's recover the XML manifest file for the UNPLUGGED pluggable database CDB1_PDB_4. In my case, all the datafiles for CDB1_PDB_4 are located under /data/oracle/orpcdb1/cdb1_pdb_4/. I will use this datafile location (filenames) to recover the XML file, as shown below.

---//
---// recovering XML manifest file (using exact PDB name) //---
---//
SQL> BEGIN
  2    DBMS_PDB.RECOVER (
  3      pdb_descr_file => '/data/oracle/orpcdb1/template/cdb1_pdb4_recover.xml',
  4      pdb_name       => 'CDB1_PDB_4',
  5      filenames      => '/data/oracle/orpcdb1/cdb1_pdb_4/'
  6    );
  7  END;
  8  /

PL/SQL procedure successfully completed.

SQL> !ls -lrt /data/oracle/orpcdb1/template/cdb1_pdb4_recover.xml
-rw-r--r-- 1 oracle dba 2451 Apr 19 00:38 /data/oracle/orpcdb1/template/cdb1_pdb4_recover.xml

---//
---// recovering XML manifest file (using different PDB name) //---
---//
SQL> BEGIN
  2    DBMS_PDB.RECOVER (
  3      pdb_descr_file => '/data/oracle/orpcdb1/template/cdb1_pdb4_recover1.xml',
  4      pdb_name       => 'CDB1_PDB_5',
  5      filenames      => '/data/oracle/orpcdb1/cdb1_pdb_4/'
  6    );
  7  END;
  8  /

PL/SQL procedure successfully completed.

SQL> !ls -lrt /data/oracle/orpcdb1/template/cdb1_pdb4_recover1.xml
-rw-r--r-- 1 oracle dba 2451 Apr 19 00:42 /data/oracle/orpcdb1/template/cdb1_pdb4_recover1.xml

As I mentioned earlier, we need not pass the original pluggable database name to be able to recover the XML manifest file. In the examples above, I have recovered the XML manifest file using both the EXACT and a DIFFERENT pluggable database name. The only parameter that needs the exact (original) information is filenames, the datafile location of the UNPLUGGED pluggable database.

Once the XML manifest file is recovered, we should be able to plug the pluggable database back into any compatible container using the recovered XML manifest file, as shown below.

---//
---// PLUG the UNPLUGGED pluggable database using recovered XML manifest file //---
---//
SQL> create pluggable database CDB1_PDB_5 using '/data/oracle/orpcdb1/template/cdb1_pdb4_recover1.xml'
  2  NOCOPY
  3  TEMPFILE REUSE;

Pluggable database created.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CDB1_PDB_1                     READ WRITE NO
         4 CDB1_PDB_2                     READ WRITE NO
         5 CDB1_PDB_5                     MOUNTED

SQL> alter pluggable database CDB1_PDB_5 open;

Pluggable database altered.

SQL> alter session set container=CDB1_PDB_5;

Session altered.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/data/oracle/orpcdb1/undotbs01.dbf
/data/oracle/orpcdb1/cdb1_pdb_4/sysaux01.dbf
/data/oracle/orpcdb1/cdb1_pdb_4/users01.dbf
/data/oracle/orpcdb1/cdb1_pdb_4/system01.dbf

As expected, we are now able to PLUG the UNPLUGGED pluggable database back into the container, thanks to the DBMS_PDB.RECOVER procedure coming to our rescue.
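As an optional extra check before plugging in (not part of the demo above), the recovered manifest can be validated against the target container with DBMS_PDB.CHECK_PLUG_COMPATIBILITY. A minimal sketch, reusing the recovered file path from the example:

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2    compatible BOOLEAN;
  3  BEGIN
  4    -- Returns TRUE when the PDB described by the manifest can be plugged
  5    -- into the current CDB; details of any mismatch are recorded in
  6    -- PDB_PLUG_IN_VIOLATIONS.
  7    compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
  8      pdb_descr_file => '/data/oracle/orpcdb1/template/cdb1_pdb4_recover1.xml',
  9      pdb_name       => 'CDB1_PDB_5');
 10    DBMS_OUTPUT.PUT_LINE(
 11      CASE WHEN compatible THEN 'Compatible'
 12           ELSE 'Check PDB_PLUG_IN_VIOLATIONS' END);
 13  END;
 14  /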

Wiki Page: Managing Global Data Services and Failover Test - GDS 12c

Introduction

This article explains how to create and manage services, and takes a close look at service failover to the preferred databases, along with various other service-related options. The GDS introduction and the installation were covered earlier; this is the third article in the GSM/GDS 12c series.

Prerequisites

In order to work with services, we must have the setup ready and functioning:

Catalog creation
Adding GSM
Start GSM
Creating Regions
Creating Pools
Adding regions to GSM
Adding Broker configuration
Assigning pools to the primary and standby databases, or to GoldenGate replicated databases.

If we have all of the above, then we are ready to create services in the specific pools.

Creating Services

We've already seen how to create a service, but we will repeat a few steps here so that we do not miss anything related to service management. We will have two services: one service runs only on the production database, and the other (read-only) service runs on the standby database and can fail over to the primary database. Before that, ensure we are connected to the GSM and that the connection to the catalog is established.

1) Read-write service for production. The GREP net service connects to the catalog database we created, and we set the GSM environment to the GSM we created, i.e. SOUTHGSM.

GDSCTL>connect gsmadm/oracle@grep
Catalog connection is established
GDSCTL>set gsm -gsm southgsm
GDSCTL>add service -gdspool psfin -service cobol_process -preferred uk
GDSCTL>

2) Read-only service to run on the standby, which can fail over to the primary database if the standby is unreachable.

GDSCTL>add service -service psfin_nvision -gdspool psfin -preferred_all -role PHYSICAL_STANDBY -FAILOVER_PRIMARY
GDSCTL>

Starting the services and overview of the services

Services added to the GDS pool are not started until we start them manually, so we will start both newly created services and then check the complete status of each.

GDSCTL>start service -service psfin_nvision -gdspool psfin
GDSCTL>start service -service cobol_process -gdspool psfin
GDSCTL>

After starting the services, we check the configuration of each service in detail, which shows the pool the service was created in, connection balancing details, preferred databases, and much more.

GDSCTL>config service -service psfin_nvision
Name: psfin_nvision
Network name: psfin_nvision.psfin.oradbcloud
Pool: psfin
Started: Yes
Preferred all: Yes
Locality: ANYWHERE
Region Failover: No
Role: PHYSICAL_STANDBY
Primary Failover: Yes
Lag: ANY
Runtime Balance: SERVICE_TIME
Connection Balance: LONG
Notification: Yes
TAF Policy: NONE
Policy: AUTOMATIC
DTP: No
Failover Method: NONE
Failover Type: NONE
Failover Retries:
Failover Delay:
Edition:
PDB:
Commit Outcome:
Retention Timeout:
Replay Initiation Timeout:
Session State Consistency:
SQL Translation Profile:

Databases
------------------------
Database Preferred Status
-------- --------- -------
uk       Yes       Enabled
india    Yes       Enabled

GDSCTL>services
Service "cobol_process.psfin.oradbcloud" has 1 instance(s).
  Affinity: ANYWHERE
  Instance "psfin%1", name: "ORC1", db: "UK", region: "europe", status: ready.
Service "psfin_nvision.psfin.oradbcloud" has 1 instance(s).
  Affinity: ANYWHERE
  Instance "psfin%11", name: "ORC1", db: "INDIA", region: "apac", status: ready.

GDSCTL>databases
Database: "uk" Registered: Y State: Ok ONS: N. Role: PRIMARY Instances: 1 Region: europe
  Service: "cobol_process" Globally started: Y Started: Y Scan: N Enabled: Y Preferred: Y
  Service: "psfin_nvision" Globally started: Y Started: N Scan: N Enabled: Y Preferred: Y
  Registered instances:
    psfin%1
Database: "india" Registered: Y State: Ok ONS: N. Role: PH_STNDBY Instances: 1 Region: apac
  Service: "psfin_nvision" Globally started: Y Started: Y Scan: N Enabled: Y Preferred: Y
  Registered instances:
    psfin%11
GDSCTL>

Now both services are started and enabled on the preferred databases. We can check these details at the database level using the DBA_SERVICES view.

SQL> select name,global_service from dba_services;

NAME                GLO
------------------- ---
SYS$BACKGROUND      NO
SYS$USERS           NO
UK                  NO
UK_DGB              NO
ORC1XDB             NO
ORC1                NO
INDIA_DGB           NO
INDIA               NO
HR                  NO
NVISION             NO
HR                  NO
psfin_nvision       YES
cobol_process       YES

13 rows selected.

[oracle@ORA-C1 ~]$ lsnrctl status |grep cobol
Service "cobol_process.psfin.oradbcloud" has 1 instance(s).
[oracle@ORA-C1 ~]$

[oracle@ORA-C2 ~]$ lsnrctl status |grep nvision
Service "psfin_nvision.psfin.oradbcloud" has 1 instance(s).
[oracle@ORA-C2 ~]$

Prepare the TNS entries for the services to connect

After enabling the services, we can hand the service details to the COBOL process job holders or the reporting (nVision) users, so that their database connectivity goes through the GDS service. Note below that failover is enabled in the entry for the read-only NVISION service. We do not use any database server IPs or hostnames here; we specify ONLY the GSM server, because the service is created as GLOBAL by default and can be accessed from anywhere. When we use the specific service name, Oracle/GDS knows where the service connection should be established.

PSFIN_NVISION =
  (DESCRIPTION =
    (FAILOVER = ON)
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = ORA-C2.localdomain)(PORT = 1555))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = psfin_nvision.psfin.oradbcloud)
    )
  )

COBOL_PROCESS =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ORA-C2.localdomain)(PORT = 1555))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = cobol_process.psfin.oradbcloud)
    )
  )

Here the GSM runs on ORA-C2, so connectivity points to the GSM server and port, not to the database server.

Connectivity Test

We can test the connectivity using the above TNS entries, which should connect to the primary database even though we connect from the GSM server.

[oracle@ORA-C2 admin]$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0/gsmhome_1
[oracle@ORA-C2 admin]$ tnsping cobol_process

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 18-APR-2016 06:08:31
Copyright (c) 1997, 2014, Oracle. All rights reserved.
Used parameter files:
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ORA-C2.localdomain)(PORT = 1555))) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = cobol_process.psfin.oradbcloud)))
OK (10 msec)
[oracle@ORA-C2 admin]$ sqlplus sys@cobol_process as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Apr 18 06:08:40 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.

Enter password:
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select database_role,db_unique_name from v$database;

DATABASE_ROLE    DB_UNIQUE_NAME
---------------- ------------------------------
PRIMARY          UK

SQL> set pages 100
SQL> col name for a20
SQL> select name,global_service from dba_services;

NAME                 GLO
-------------------- ---
SYS$BACKGROUND       NO
SYS$USERS            NO
UK                   NO
UK_DGB               NO
ORC1XDB              NO
ORC1                 NO
INDIA_DGB            NO
INDIA                NO
HR                   NO
NVISION              NO
HR                   NO
psfin_nvision        YES
cobol_process        YES

13 rows selected.

SQL> col network_name for a40
SQL> select name,network_name,global from v$active_services;

NAME                 NETWORK_NAME                             GLO
-------------------- ---------------------------------------- ---
cobol_process        cobol_process.psfin.oradbcloud           YES
HR                   CONHR                                    NO
UK_DGB               UK_DGB                                   NO
ORC1XDB              ORC1XDB                                  NO
UK                   UK                                       NO
SYS$BACKGROUND                                                NO
SYS$USERS                                                     NO

7 rows selected.

Enabling and Disabling Services

A service can be disabled when it is no longer in use, or temporarily during maintenance to prevent connections, and enabled again after the maintenance.

GDSCTL>disable service -gdspool psfin -service psfin_nvision -database india
GDSCTL>enable service -gdspool psfin -service psfin_nvision -database india
GDSCTL>
GDSCTL>status service -service psfin_nvision
Service "psfin_nvision.psfin.oradbcloud" has 1 instance(s).
  Affinity: ANYWHERE
  Instance "psfin%11", name: "ORC1", db: "india", region: "apac", status: ready.
GDSCTL>

Changes in services

In case of maintenance on a server, we either disable the service (if downtime is acceptable) or relocate it to run on another database. In order to relocate a service from one database to another, ensure the target database is in the service's preferred list (see the relocation sketch at the end of this article). Now we will change the runtime load-balancing goal for the service; below we can see the change after modifying the service.

GDSCTL>modify service -gdspool psfin -service psfin_nvision -rlbgoal throughput
GDSCTL>
GDSCTL>config service -service psfin_nvision
Name: psfin_nvision
Network name: psfin_nvision.psfin.oradbcloud
Pool: psfin
Started: Yes
Preferred all: Yes
Locality: ANYWHERE
Region Failover: No
Role: PHYSICAL_STANDBY
Primary Failover: Yes
Lag: ANY
Runtime Balance: THROUGHPUT
Connection Balance: LONG
Notification: Yes
TAF Policy: NONE
Policy: AUTOMATIC
DTP: No
Failover Method: NONE
Failover Type: NONE
Failover Retries:
Failover Delay:
Edition:
PDB:
Commit Outcome:
Retention Timeout:
Replay Initiation Timeout:
Session State Consistency:
SQL Translation Profile:

Databases
------------------------
Database Preferred Status
-------- --------- -------
uk       Yes       Enabled
india    Yes       Enabled

GDSCTL>

Service Failover Test

We have covered almost all of the service management tasks, and now we will see how a service fails over to an available database. In this example we take the read-only service, which runs on the standby database, and watch how it relocates to the primary database after a standby database failure. As noted earlier, for the service to relocate, the primary database must be in the service's preferred list.

[oracle@ORA-C2 ~]$ ps -ef|grep pmon
oracle  4611      1 0 18:19 ?     00:00:02 ora_pmon_GREP
oracle 20553      1 0 Apr16 ?     00:00:09 ora_pmon_ORC1
oracle 27890  27847 0 22:59 pts/2 00:00:00 grep pmon
[oracle@ORA-C2 ~]$ kill -9 20553
[oracle@ORA-C2 ~]$

We have killed the mandatory background process of the database on the standby server ORA-C2. Now we check the latest status of the databases and services.

GDSCTL>databases
Database: "uk" Registered: Y State: Ok ONS: N. Role: PRIMARY Instances: 1 Region: europe
  Service: "psfin_nvision" Globally started: Y Started: Y Scan: Y Enabled: Y Preferred: Y
  Registered instances:
    psfin%1
Database: "india" Registered: N State: Ok ONS: N. Role: N/A Instances: 0 Region: apac
  Service: "psfin_nvision" Globally started: Y Started: N Scan: Y Enabled: Y Preferred: Y
GDSCTL>services
Service "psfin_nvision.psfin.oradbcloud" has 1 instance(s).
  Affinity: ANYWHERE
  Instance "psfin%1", name: "ORC1", db: "uk", region: "europe", status: ready.
GDSCTL>

We can see above that the service is now running on the primary database UK, and we will test this manually as well.

[oracle@ORA-C2 ~]$ sqlplus sys/oracle@PSFIN_NVISION as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sun Apr 17 23:00:33 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select db_unique_name,database_role from v$database;

DB_UNIQUE_NAME                 DATABASE_ROLE
------------------------------ ----------------
uk                             PRIMARY

SQL>

The service will keep running on the primary until the standby is up and running again. Now we start the standby database and check where the service is running.

SQL> startup mount
ORACLE instance started.

Total System Global Area  473956352 bytes
Fixed Size                  2925744 bytes
Variable Size             197135184 bytes
Database Buffers          268435456 bytes
Redo Buffers                5459968 bytes
Database mounted.
SQL> alter database open;

Database altered.

SQL>

GDSCTL>databases
Database: "uk" Registered: Y State: Ok ONS: N. Role: PRIMARY Instances: 1 Region: europe
  Service: "psfin_nvision" Globally started: Y Started: Y Scan: Y Enabled: Y Preferred: Y
  Registered instances:
    psfin%1
Database: "india" Registered: Y State: Ok ONS: N. Role: PH_STNDBY Instances: 1 Region: apac
  Service: "psfin_nvision" Globally started: Y Started: Y Scan: Y Enabled: Y Preferred: Y
  Registered instances:
    psfin%11
GDSCTL>services
Service "psfin_nvision.psfin.oradbcloud" has 2 instance(s).
  Affinity: ANYWHERE
  Instance "psfin%1", name: "ORC1", db: "uk", region: "europe", status: ready.
  Instance "psfin%11", name: "ORC1", db: "india", region: "apac", status: ready.
GDSCTL>

Conclusion

We've had a 360-degree overview of Global Data Services (services):

Adding a service
Starting a service
Stopping a service
Enabling a service
Disabling a service

Apart from that, we have seen a demo of how a service fails over to an available instance when its database becomes unavailable.

References
https://docs.oracle.com/database/121/GSMUG/cloud.htm#GSMUG141
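One command worth adding as a footnote to the "Changes in services" section above: the manual counterpart of the automatic failover we just tested is GDSCTL's relocate service command. A minimal sketch using this article's pool, service, and database names:

GDSCTL>relocate service -gdspool psfin -service psfin_nvision -old_db india -new_db uk

This moves the running service from the standby to the primary on demand (the target database must already be in the service's preferred list, as discussed earlier).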

Blog Post: Collaborate 2016

Hi! I attended the annual Collaborate Oracle event this past week at the Mandalay Bay Convention Center in Las Vegas. Collaborate is a joint event of IOUG, OAUG, and Quest. I suppose there were about 5,000 attendees and quite a number of the usual vendors present. The best swag was the drones being given away :) I got a small one from the OAUG booth and have been playing with it some. The thing is more difficult to navigate than you would think.

I had two presentations: one on Toad SQL Tuning Tips and another on Oracle 12c New Features for Release 2. I've blogged on these topics; review them if you are interested. The Oracle 12c new features talk centered on the newer partitioning features being offered in Release 2 (no release date yet...). I also discussed the pluggable database environment, how to set up your tnsnames entries, and how to navigate between the container database and the pluggables. The Toad presentation centered on showing how to use Toad to run DBMS_XPLAN output and how to run SQL Trace from both an editor window tab and from the Session Browser. I showed how to solve problems using the Trace File Browser. It was only a half-hour session, so not tons of time. I gave away Toad Unleashed books at both events.

Oracle Corp was there with the campgrounds. These are always a great place to get specific one-on-one discussions with the Oracle folks who are either the product managers or sometimes even part of the programming staff that codes the products! Since I've been a regular at both IOUG and OAUG events, I was able to network with many folks I've known for a long time. The Dell party hired an Elvis impersonator...

Next year's Collaborate event will again be at this same location, April 2 through 7, 2017. ...where does the time go?

Dan Hotka
Oracle ACE Director
Instructor/Author/CEO

Wiki Page: How to back up Oracle databases - Part IV

Written by: Juan Carlos Olamendy Turruellas

Introduction

This is the fourth article in a series where we've been learning about the principles, concepts, and real-world scripts for backing up Oracle databases. In the first article, I talked about the most important terms related to backups in Oracle databases. In the second article, I talked about doing low-level manual backups in order to illustrate the principles and concepts of the first article. In the third article, I talked about the architecture, key concepts, and terms related to Recovery Manager (also known as RMAN). And in this last, fourth article, I'm applying the concepts learned about RMAN with practical examples in real-world scenarios.

Initialization parameters

The first step is to check that the initialization parameters important for controlling the backup process using RMAN are set. These parameters are:

DB_RECOVERY_FILE_DEST. Specifies the location of the Fast Recovery Area. This is a location on disk to store all RMAN backups. In order to mitigate the risk of losing important data and to avoid impacting overall performance, we need to locate it on a file system area separate from any database files, control files, or redo log files (online or archived).

DB_RECOVERY_FILE_DEST_SIZE. Specifies the upper limit on the amount of space used by the Fast Recovery Area.

The listing 01 shows an example excerpt of an initialization parameters file.

#other init parameters are above or below
db_recovery_file_dest='/u05/oracle/db_test/fast_recovery_area'
db_recovery_file_dest_size=1G

Listing 01

Connecting using RMAN

We can connect to an Oracle instance using RMAN by calling the rman command. This displays the RMAN> prompt, at which point we can type in the various commands. Use the CONNECT TARGET command to connect to a target database instance. RMAN connections to a database instance are specified and authenticated the same way as in SQL*Plus, and they require the SYSDBA privilege. For example, let's connect to our current instance (identified by the ORACLE_SID environment variable) using OS authentication, as shown below in the listing 02.

RMAN> connect target /

Listing 02

Configuration parameters

RMAN has several configuration parameters, which are set to their default values when you first use RMAN. We can display the default settings using the SHOW ALL command as shown in the listing 03.

RMAN> show all;

Listing 03

If we want to change some default settings, we need to execute the CONFIGURE command. For example, we can change the default retention policy. This policy specifies when to consider a backup obsolete. Note that when we tell RMAN to consider a backup file obsolete after a certain time period, RMAN only marks the file as obsolete; it doesn't delete it explicitly. We need to delete the obsolete files manually. Backups can be retained by:

Recovery window. RMAN retains as many backups as necessary to bring the database to any point in time within the recovery window. For example, a recovery window of 7 days causes RMAN to maintain the image copies, incremental backups, and archived redo log files needed to restore/recover the database to any point in the last 7 days.

Redundancy. RMAN retains a specified number of backups.

Let's configure a retention policy with a recovery window of 28 days (4 weeks of retention) as shown below in the listing 04.

RMAN> configure retention policy to recovery window of 28 days;

Listing 04

Another important item to configure is the backup device. The default device is the disk. If no pathname is specified, RMAN uses the Fast Recovery Area for all backups (see listing 01). Keeping all backups on disk is very risky. If we want to configure a tape device as the default backup device, we need to configure sbt (this is the device type for any vendor's tape backup system) as shown below in the listing 05.

RMAN> configure channel device type sbt parms ='ENV=( )';

Listing 05

If we want to switch the default device back to disk, we can do so as shown in the listing 06.

RMAN> configure default device type to disk;

Listing 06

Full Database Backup - ARCHIVELOG Mode

In ARCHIVELOG mode, we can back up a database while it is open. Remember that archivelog mode is required for this. Using RMAN, the full backup is very simple compared to its manual counterpart scripts, as shown in the listing 07.

RMAN> backup database plus archivelog;

Listing 07

The backup set is stored in the Fast Recovery Area under a directory named after the database ([database_name]). For example, we can see the backups using the ls OS command as shown in the listing 08.

$ ls -al /u05/oracle/db_test/fast_recovery_area/DBTEST

Listing 08

The next statement, shown in the listing 09, ensures that all transactions are represented in an archived log, including any that occurred during the backup. This enables media recovery after restoring this backup.

RMAN> SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';

Listing 09

Full Database Backup - NOARCHIVELOG Mode

Remember that in NOARCHIVELOG mode, only a consistent backup is valid. That is, the database must be in MOUNT mode after a consistent shutdown, and no recovery is required after restoring the backup. The first step is to shut down and restart the instance by just mounting the database, as shown in the listing 10.

RMAN> shutdown immediate;
RMAN> connect target /
RMAN> startup force pfile=init.ora
RMAN> shutdown immediate;
RMAN> startup mount pfile=init.ora

Listing 10

The next step is to back up the database as shown in the listing 11.

RMAN> backup database;

Listing 11

And finally, we need to open the database and resume normal operations as shown in the listing 12.

RMAN> alter database open;

Listing 12

Partial backups

We can back up a particular tablespace and/or particular database files without backing up the entire database, as shown in the listing 13.

RMAN> backup tablespace sales_data;
RMAN> backup datafile '/u04/oradata/DBTEST/sales_data01.dbf';

Listing 13

Listing backups

The LIST BACKUP command (shown in the listing 14) shows all the completed backups registered by RMAN. The command shows all backup sets and image copies, as well as the individual database files, control files, archived redo log files, and SPFILEs in the backups. We can also list all backups by querying the V$BACKUP_FILES and RC_BACKUP_FILES views.

RMAN> list backup;

Listing 14

Validating backups

The VALIDATE command confirms that all database files exist, are in their correct locations, and are free of physical corruption. The CHECK LOGICAL option also checks for logical block corruption. We can validate the entire database, including the archived redo log files, as shown below in the listing 15.

RMAN> backup validate check logical database archivelog all;

Listing 15

We can validate a particular backup set as shown below in the listing 16.

RMAN> validate backupset 5;

Listing 16

Deleting obsolete backups

The DELETE command removes physical backups, updates control file records to indicate that the backups are deleted, and also removes their records from the recovery catalog (if you use one). We can delete backup sets, archived redo logs, and database file copies. We can use the DELETE OBSOLETE command to remove all backups that are no longer needed, and we should run it periodically. A backup is obsolete if it's no longer needed for database recovery according to the current retention policy. We can use the DELETE EXPIRED command to remove the recovery catalog records for expired backups and mark them as DELETED. An expired backup is one that cannot be found by RMAN; the DELETE EXPIRED command removes the records for these files from the control file and the recovery catalog.

Incremental backups

All the BACKUP commands in the preceding examples are full backup commands. We can also perform incremental backups using RMAN, and in fact, this is one of the great advantages of this tool. Incremental backups are much faster than backing up the entire database, because only those data blocks that have changed since a previous backup are copied. Incremental backups can be:

Level 0. Copies all data blocks, just like a full backup, and acts as the base for subsequent incremental backups.

Level 1. A true incremental backup. In order to perform a level 1 incremental backup, we must first have a base level 0 backup. A differential level 1 backup (the default) contains only the blocks changed since the most recent level 1 or level 0 backup, while a cumulative level 1 backup records all blocks changed since the most recent level 0 backup.

We can execute a level 0 backup of the entire database as shown below in the listing 17.

RMAN> backup incremental level 0 database;

Listing 17

Once we have a level 0 backup, we're ready to execute level 1 backups as shown in the listing 18.

RMAN> backup incremental level 1 database;

Listing 18

The size of your incremental backup file depends on the number of changed data blocks and the incremental level. Cumulative backups will, in general, be larger than differential backups, since they duplicate the data copied by previous backups at the same level. However, cumulative backups have the advantage that they reduce recovery time, because you apply only one level 1 backup.

Control file auto backup

If we set the CONTROLFILE AUTOBACKUP setting to ON, then each time we perform a backup, the control file is automatically backed up along with the SPFILE, as shown in the listing 19.

RMAN> configure controlfile autobackup on;

Listing 19

Conclusion

In this fourth part, I've shown how to use RMAN with real-world examples. Now you can use these commands as the starting point to back up your own Oracle databases.
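As a closing example, here is a minimal sketch of a periodic maintenance session that combines the REPORT/DELETE commands discussed above (the NOPROMPT keyword, which suppresses the confirmation prompt, is the only option not covered in the article; use it with care):

RMAN> crosscheck backup;               # mark backups missing from media as EXPIRED
RMAN> delete noprompt expired backup;  # purge the records for those missing pieces
RMAN> report obsolete;                 # list backups the retention policy no longer needs
RMAN> delete noprompt obsolete;        # physically remove them and update the records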

Wiki Page: Creating a Hive External Table over MySQL Database

Written by Deepak Vohra

Apache Hive, distributed data warehouse software for querying and managing large datasets in HDFS with support for the SQL-like HiveQL query language, provides two types of tables: managed tables and external tables. The difference between a managed table and an external table is that Hive manages both the data and the metadata for a managed table, but only the metadata for an external table. When a Hive managed table is dropped, both the data and the metadata are dropped. When a Hive external table is dropped, only the metadata is dropped, and the data is kept in the data source.

Consider that data is required to be stored in a MySQL database and a Hive table is required to be created over the MySQL database table. The Qubit Hive JDBC Storage Handler may be used to create an external Hive table over the MySQL database table. The Hive external table may then be dropped and the MySQL table data will still be preserved. In this tutorial we shall discuss creating a Hive external table over a MySQL database.

Installing the Hive JDBC Storage Handler
Setting the Environment
Creating a MySQL Database Table
Configuring Hive
Starting HDFS
Creating a Hive External Table
Querying the Hive External Table

Installing the Hive JDBC Storage Handler

In this section we shall download and build the Hive JDBC Storage Handler. We shall be using Oracle Linux as the OS and Maven as the build tool. Download and extract the Maven apache-maven-3.3.3-bin.tar.gz file.

wget http://apache.mirror.rafal.ca/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz
tar -xvf apache-maven-3.3.3-bin.tar.gz

Create a variable called MAVEN_HOME and add the Maven bin directory to the PATH environment variable.

vi ~/.bashrc
export MAVEN_HOME=/apache-maven-3.3.3-bin
export PATH=$PATH:$MAVEN_HOME/bin

Next, download the Hive JDBC Storage Handler source code and build it using Maven to create a jar file.

git clone https://github.com/QubitProducts/hive-jdbc-storage-handler.git
cd hive-jdbc-storage-handler
mvn clean package

Maven starts the build process for the Hive JDBC Storage Handler. The hive-jdbc-storage-handler-1.1.1-cdh4.3.0-SNAPSHOT-dist.jar jar file gets created in the target directory.

Setting the Environment

In addition to the Hive storage handler for JDBC, we require the following software installed on Oracle Linux 6.6:

MySQL 5.x Database
CDH 4.6 Hadoop 2.0.0
CDH 4.6 Hive 0.10.0
Java 7

Download and extract the Java 7 .gz file.

tar zxvf jdk-7u55-linux-i586.gz

Download and install CDH 4.6 Hadoop 2.0.0.

wget http://archive.cloudera.com/cdh4/cdh/4/hadoop-2.0.0-cdh4.6.0.tar.gz
tar -xvf hadoop-2.0.0-cdh4.6.0.tar.gz

Create symlinks for the Hadoop bin and conf directories.

ln -s /hadoop-2.0.0-cdh4.6.0/bin /hadoop-2.0.0-cdh4.6.0/share/hadoop/mapreduce2/bin
ln -s /hadoop-2.0.0-cdh4.6.0/etc/hadoop /hadoop-2.0.0-cdh4.6.0/share/hadoop/mapreduce2/conf

Download and install CDH 4.6 Hive 0.10.0.

wget http://archive.cloudera.com/cdh4/cdh/4/hive-0.10.0-cdh4.6.0.tar.gz
tar -xvf hive-0.10.0-cdh4.6.0.tar.gz

Set the core configuration properties for Hadoop in the core-site.xml configuration file.

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.0.2.15:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:///var/lib/hadoop-0.20/cache</value>
  </property>
</configuration>

Create the directory specified in the hadoop.tmp.dir property and set its permissions to global (777).

mkdir -p /var/lib/hadoop-0.20/cache
chmod -R 777 /var/lib/hadoop-0.20/cache

Set the HDFS configuration properties dfs.permissions.superusergroup, dfs.namenode.name.dir, dfs.replication, and dfs.permissions in the hdfs-site.xml configuration file.

<configuration>
  <property>
    <name>dfs.permissions.superusergroup</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/1/dfs/nn</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

Create the NameNode storage directory specified in the dfs.namenode.name.dir configuration property and set its permissions.

mkdir -p /data/1/dfs/nn
chmod -R 777 /data/1/dfs/nn

Create the hive-site.xml configuration file for Hive by copying the template hive-default.xml.template.

cp hive-0.10.0-cdh4.6.0/conf/hive-default.xml.template hive-0.10.0-cdh4.6.0/conf/hive-site.xml

Set the environment variables for MySQL Database, Hadoop, Hive and Java in the bash shell.

vi ~/.bashrc
export HADOOP_PREFIX=/hadoop-2.0.0-cdh4.6.0
export HADOOP_CONF=$HADOOP_PREFIX/etc/hadoop
export HIVE_HOME=/hive-0.10.0-cdh4.6.0
export HIVE_CONF=$HIVE_HOME/conf
export JAVA_HOME=/jdk1.7.0_55
export MYSQL_HOME=/mysql/mysql-5.6.19-linux-glibc2.5-i686
export HADOOP_MAPRED_HOME=/hadoop-2.0.0-cdh4.6.0/bin
export HADOOP_HOME=/hadoop-2.0.0-cdh4.6.0/share/hadoop/mapreduce2
export HADOOP_CLASSPATH=$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$HIVE_HOME/lib/*:$HIVE_CONF:$HADOOP_CONF
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_MAPRED_HOME:$HIVE_HOME/bin:$MYSQL_HOME/bin
export CLASSPATH=$HADOOP_CLASSPATH

Creating a MySQL Database Table

Start the MySQL server with the following command.

bin/mysqld_safe --user=mysql &

Log in to the MySQL shell with the following command and set the database to 'test'.

bin/mysql -u root -p
use test

The MySQL shell gets started. Run the following SQL script to create a database table called wlslog and add data to the table.

CREATE TABLE wlslog(time_stamp VARCHAR(255) PRIMARY KEY, category VARCHAR(255), type VARCHAR(255), servername VARCHAR(255), code VARCHAR(255), msg VARCHAR(255));
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:16-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to STANDBY');
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:17-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to STARTING');
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:18-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to ADMIN');
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:19-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to RESUMING');
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:20-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000361','Started WebLogic AdminServer');
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:21-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to RUNNING');
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:22-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000360','Server started in RUNNING mode');

If the SQL script runs without error, a Query OK message is returned and the data gets added to the wlslog table.

Configuring Hive

Next, we shall configure the Hive configuration file hive-site.xml. To modify hive-site.xml in the vi editor, run the following command.

vi hive-0.10.0-cdh4.6.0/conf/hive-site.xml

Set the following configuration properties in hive-site.xml.

hive.metastore.uris - the URIs to connect to in order to make metadata requests to a remote metastore - thrift://localhost:10000
hive.metastore.warehouse.dir - the directory for Hive tables - hdfs://10.0.2.15:8020/user/warehouse

The hive-site.xml properties to be modified are listed below; the other properties should be kept at their defaults.

<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>hdfs://10.0.2.15:8020/user/warehouse</value>
</property>
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://localhost:10000</value>
</property>

With Hadoop and Hive configured, next we shall start HDFS (NameNode and DataNode).

Starting HDFS

First, format the NameNode.

hadoop namenode -format

Start the NameNode.

hadoop namenode

Start the DataNode.

hadoop datanode

Create the Hive warehouse directory in HDFS, in which the Hive tables are to be stored. The directory is set in the hive.metastore.warehouse.dir property. Run the following commands to create the Hive warehouse directory and set its permissions.

hadoop dfs -mkdir hdfs://10.0.2.15:8020/user/warehouse
hadoop dfs -chmod -R g+w hdfs://10.0.2.15:8020/user/warehouse

Create a directory structure in HDFS in which to put the Hive lib jars, set permissions on the directory, and put the Hive lib jars into HDFS.

hadoop dfs -mkdir hdfs://10.0.2.15:8020/hive-0.10.0-cdh4.6.0/lib
hadoop dfs -chmod -R g+w hdfs://10.0.2.15:8020/hive-0.10.0-cdh4.6.0/lib
hadoop dfs -put /hive-0.10.0-cdh4.6.0/lib/* hdfs://10.0.2.15:8020/hive-0.10.0-cdh4.6.0/lib

Creating a Hive External Table

In this section we shall create a Hive external table over the MySQL table using the CREATE EXTERNAL TABLE command. Start the Hive Thrift server with the following command.

hive --service hiveserver

The Hive Thrift server gets started. Start the Hive shell with the following command.

hive

The Hive shell gets started. Add the Hive JDBC Storage Handler jar file to the Hive shell classpath with the ADD JAR command.

hive> ADD JAR /hive/hive-0.10.0-cdh4.6.0/hive-jdbc-storage-handler/hive-jdbc-storage-handler-1.1.1-cdh4.3.0-SNAPSHOT-dist.jar;

As indicated by the message output, the jar file gets added to the classpath. Next, we shall create the Hive external table called wlslog in the default database with the CREATE EXTERNAL TABLE command in the Hive shell. The Hive JDBC storage handler class is com.qubitproducts.hive.storage.jdbc.JdbcStorageHandler. The following table properties are required; the values for the MySQL database are also listed.

qubit.sql.database.type - database type - MYSQL
qubit.sql.jdbc.url - JDBC URL - jdbc:mysql://localhost:3306/test?user=root&password=
qubit.sql.jdbc.driver - JDBC driver - com.mysql.jdbc.Driver
qubit.sql.query - SQL query - SELECT time_stamp,category,type, servername, code, msg FROM wlslog

The following table properties may also be set optionally.

qubit.sql.column.mapping - Hive column name to database column name mapping, specified in the format hiveColumnName=dbColumnName[:type]. Only the date type is supported.
qubit.sql.jdbc.fetch.size - result set fetch size; the default is 1000.
DBCP options - database connection pool options (http://commons.apache.org/proper/commons-dbcp/configuration.html), prefixed with qubit.sql.dbcp.

Run the following command in the Hive shell to create the Hive external table 'wlslog'. If the wlslog table was created previously, it must be dropped first with the drop table wlslog; command.

hive> CREATE EXTERNAL TABLE wlslog(time_stamp STRING, category STRING, type STRING, servername STRING, code STRING, msg STRING)
STORED BY 'com.qubitproducts.hive.storage.jdbc.JdbcStorageHandler'
TBLPROPERTIES (
  "qubit.sql.database.type" = "MySQL",
  "qubit.sql.jdbc.url" = "jdbc:mysql://localhost:3306/test?user=root&password=",
  "qubit.sql.jdbc.driver" = "com.mysql.jdbc.Driver",
  "qubit.sql.query" = "SELECT time_stamp,category,type, servername, code, msg FROM wlslog",
  "qubit.sql.column.mapping" = "time_stamp=time_stamp,category=category,type=type,servername=servername,code=code,msg=msg");

A Hive external table 'wlslog' gets created in the default database. To describe the table and list its columns, run the DESC command.

Querying the Hive External Table

Having created the Hive external table over the MySQL database table, select the data in the Hive external table with the following command in the Hive shell.

hive> select * from wlslog;

The data in the Hive external table wlslog gets listed. The data is actually stored in the MySQL database table wlslog. In this tutorial we discussed creating a Hive external table over a MySQL database table.
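As a quick follow-up to the final query, a couple more HiveQL statements against the same table (a sketch; the filter value comes from the sample rows inserted earlier, and the predicate is applied by Hive after the storage handler fetches the rows):

hive> DESC wlslog;
hive> SELECT servername, code, msg FROM wlslog WHERE code = 'BEA-000365';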

Blog Post: Añadiendo y Editando una Cuadrícula de Datos en una Aplicación de Hoja de Cálculo Web en Oracle APEX 5

This time I want to show you how you can add a data grid to your websheet application and manipulate its data. In the previous article we created a demo Websheet application. Let's log in to the application and, from the "Data Grid" menu, select "New Data Grid". We have two options: we can create the Data Grid from scratch, entering the grid columns ourselves, as seen in the image below, or we can "copy and paste" data from an Excel spreadsheet. We open the sample Excel spreadsheet tasks_2015.xlsx (which I will make available for download) and copy all the data in the sheet. We return to the application, paste all the data into the "Paste Spreadsheet Data" box, enter a name and alias, and check the box indicating that the first row contains column names. We click the Upload button, and we can see that the Data Grid was created correctly.

Manipulating Data in the Data Grid

We can sort columns in ascending or descending order, hide columns, and add a control break, just as we do with Interactive Reports. We can also create filters on the Data Grid; for example, in the STATUS column we select the On-Hold filter. We can see that a filter is created and the data grid shows all records whose status is On-Hold; we can remove the filter by clicking the x. Something very interesting we can also do is modify the data of several rows at once. For example, from the Manage button we select Rows --> Replace. A modal window opens where, for example, we can change all records whose Status says "On-Hold" so that they become "Pending". We apply the changes and see that our filter is left with no records; we can remove it by clicking the x to show all the records in the Data Grid. There are many other features under the Manage button that we will look at in future articles, but the interesting thing is that its use is very intuitive for end users with no programming experience. We also have the Actions button, which gives us every facility to customize the Data Grid to our liking, just as we do when working with Interactive Reports; here is an article where you can see the different ways to customize an Interactive Report. Until next time!

Blog Post: How To Create Local YUM Repository On RHEL 6

In this post, I will be showing how to create a local yum repository on RHEL 6.

1) You must have a RHEL CD with you for creating the yum repository.

-- If it is a physical server, load the physical CD into your CD-ROM drive and mount the CD on a mount point (/mnt/cdrom).
-- In my case it is a VM, so I have an ISO image (virtual CD), which is loaded and mounted on a mount point (/mnt/cdrom).

[root@ajithpathiyil_MT1 rhel64]# cd /mnt/cdrom
[root@ajithpathiyil_MT1 cdrom]# ls -ltr
total 723
-rw-r--r-- 1 root root   1397 Jan 20 2011 RPM-GPG-KEY-oracle
-rw-r--r-- 1 root root   7897 Jan 20 2011 README-en.html
-rw-r--r-- 1 root root  18390 Jan 20 2011 GPL
-rw-r--r-- 1 root root   1397 Jan 20 2011 RPM-GPG-KEY
-rw-r--r-- 1 root root  22343 Jan 20 2011 RELEASE-NOTES-en
-rw-r--r-- 1 root root   3547 Jan 20 2011 README-en
-rw-r--r-- 1 root root   3334 Jan 20 2011 eula.py
-rw-r--r-- 1 root root   7041 Jan 20 2011 eula.en_US
-rw-r--r-- 1 root root   6830 Jan 20 2011 EULA
-rw-r--r-- 1 root root   5165 Jan 20 2011 blafdoc.css
-rw-r--r-- 1 root root    105 Jan 20 2011 supportinfo
-rw-r--r-- 1 root root  53377 Jan 20 2011 RELEASE-NOTES-en.html
drwxr-xr-x 4 root root 583680 Jan 20 2011 Server
drwxr-xr-x 3 root root   2048 Jan 20 2011 Cluster
drwxr-xr-x 3 root root   4096 Jan 20 2011 ClusterStorage
drwxr-xr-x 3 root root   8192 Jan 20 2011 VT
drwxr-xr-x 2 root root   2048 Jan 20 2011 isolinux
drwxr-xr-x 4 root root   2048 Jan 20 2011 images
-r--r--r-- 1 root root   4436 Jan 20 2011 TRANS.TBL
[root@ajithpathiyil_MT1 cdrom]# cd Server
[root@ajithpathiyil_MT1 Server]# ls -ltr
total 3359313
-rw-r--r-- 1 root root  168842 Apr  9 2009 unzip-5.52-3.0.1.el5.x86_64.rpm
-rw-r--r-- 1 root root   21468 Aug 26 2010 irqbalance-0.55-16.el5.x86_64.rpm
-rw-r--r-- 1 root root   31791 Aug 28 2010 mcelog-0.9pre-1.30.el5.x86_64.rpm
-rw-r--r-- 1 root root 2017333 Aug 28 2010 net-snmp-devel-5.3.2.2-9.0.1.el5_5.1.i386.rpm
-rw-r--r-- 1 root root 1336489 Aug 28 2010 net-snmp-libs-5.3.2.2-9.0.1.el5_5.1.i386.rpm
-rw-r--r-- 1 root root  735231 Aug 28 2010 net-snmp-5.3.2.2-9.0.1.el5_5.1.x86_64.rpm
.. .. .. .. .. .. ..
-rw-r--r-- 1 root root  312714 Nov 17 2010 jdom-javadoc-1.0-4jpp.1.x86_64.rpm
-rw-r--r-- 1 root root   14449 Nov 17 2010 finger-server-0.17-32.2.1.1.x86_64.rpm
-rw-r--r-- 1 root root   74271 Nov 17 2010 xorg-x11-xfs-1.0.2-4.x86_64.rpm
-rw-r--r-- 1 root root    9901 Nov 17 2010 xorg-x11-drv-dmc-1.1.0-2.x86_64.rpm
-r--r--r-- 1 root root  805397 Jan 20 2011 TRANS.TBL
[root@ajithpathiyil_MT1 Server]#

2) Make a local directory and copy the contents of /mnt/cdrom/Server to this newly created local directory.

[root@ajithpathiyil_MT1 cdrom]# mkdir -p /opt/yum/rhel64
[root@ajithpathiyil_MT1 cdrom]# cd /opt/yum/rhel64
[root@ajithpathiyil_MT1 Server]# cp -a *.rpm /opt/yum/rhel64/
[root@ajithpathiyil_MT1 Server]# uname -a
Linux ajithpathiyil_MT1.lab.com 2.6.32-100.26.2.el5 #1 SMP Tue Jan 18 20:11:49 EST 2011 x86_64 x86_64 x86_64 GNU/Linux

3) Copy the comps-rhel5-server-core.xml file to your local directory as /opt/yum/rhel64/repodata/comps.xml.

[root@ajithpathiyil_MT1 Server]# cd ..
[root@ajithpathiyil_MT1 Server]# cd repodata
[root@ajithpathiyil_MT1 repodata]# mkdir -p /opt/yum/rhel64/repodata
[root@ajithpathiyil_MT1 repodata]# ls -ltr *comps*xml
-rw-r--r-- 1 root root 1067627 Jan 20 2011 comps-rhel5-server-core.xml
[root@ajithpathiyil_MT1 repodata]# cp *comps*xml /opt/yum/rhel64/repodata/comps.xml

4) Install the createrepo-0.4.11-3.el5.noarch rpm if it is not installed already, and run createrepo against the local directory.

[root@ajithpathiyil_MT1 repodata]# cd /opt/yum/rhel64/
[root@ajithpathiyil_MT1 rhel64]# rpm -ivh create*
warning: createrepo-0.4.11-3.el5.noarch.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ################################################################# [100%]
package createrepo-0.4.11-3.el5.noarch is already installed
[root@ajithpathiyil_MT1 rhel64]# createrepo -g repodata/comps.xml .
1/3247 - cdparanoia-alpha9.8-27.2.x86_64.rpm
2/3247 - libXfixes-4.0.1-2.1.i386.rpm
3/3247 - libdmx-1.0.2-3.1.i386.rpm
4/3247 - smartmontools-5.38-2.el5.x86_64.rpm
5/3247 - nss_ldap-253-37.el5.i386.rpm
.. .. .. .. .. .. .. ..
3242/3247 - libsoup-devel-2.2.98-2.el5_3.1.x86_64.rpm
3243/3247 - perl-5.8.8-32.0.1.el5_5.2.x86_64.rpm
3244/3247 - mesa-libGL-devel-6.5.1-7.8.el5.x86_64.rpm
3245/3247 - eel2-devel-2.16.1-1.el5.x86_64.rpm
3246/3247 - adaptx-javadoc-0.9.13-3jpp.1.x86_64.rpm
3247/3247 - ipa-pmincho-fonts-003.02-2.1.el5.noarch.rpm
Saving Primary metadata
Saving file lists metadata
Saving other metadata

5) Create a local repository file.

[root@ajithpathiyil_MT1 rhel64]# cat /etc/yum.repos.d/rhel-local.repo
[rhel6.4-local]
name=RHEL 6.4 local repository
baseurl=file:///opt/yum/rhel64/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[root@ajithpathiyil_MT1 rhel64]#
[root@ajithpathiyil_MT1 rhel64]# yum clean all
Loaded plugins: rhnplugin, security
Cleaning up Everything
[root@ajithpathiyil_MT1 rhel64]#

6) Load the local repository with all the RPMs. The local repository loads cleanly.

[root@ajithpathiyil_MT1 rhel64]# yum repolist
Loaded plugins: rhnplugin, security
This system is not registered with ULN. ULN support will be disabled.
rhel6.4-local         | 1.1 kB 00:00
rhel6.4-local/primary | 1.4 MB 00:00
rhel6.4-local: [                                                          ]    1/3247
rhel6.4-local: [#                                                         ]   33/3247
rhel6.4-local: [###                                                       ]   99/3247
rhel6.4-local: [#####                                                     ]  166/3247
rhel6.4-local: [########                                                  ]  266/3247
rhel6.4-local: [##########                                                ]  332/3247
rhel6.4-local: [############                                              ]  398/3247
rhel6.4-local: [##############                                            ]  464/3247
rhel6.4-local: [################                                          ]  531/3247
....................................................................................
rhel6.4-local: [######################################################    ] 3114/3247
rhel6.4-local: [########################################################  ] 3181/3247
rhel6.4-local: [##########################################################] 3246/3247
rhel6.4-local                                                               3247/3247
repo id        repo name                  status
rhel6.4-local  RHEL 6.4 local repository  enabled: 3,247
repolist: 3,247
[root@ajithpathiyil_MT1 rhel64]#
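With the repository loaded, packages can be installed entirely from the local media without any network access. A quick sketch (the package name is just an example):

[root@ajithpathiyil_MT1 rhel64]# yum --disablerepo="*" --enablerepo="rhel6.4-local" install -y gcc

The --disablerepo="*" option turns off every other configured repository for the run, so yum resolves dependencies only from the local repository we just built.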

Forum Post: Mapping of US States with the two character State ID

Is there any pre-existing table for mapping US states to their two-character state IDs? I am parsing user entries for US states and would like to get the two-character state ID. Are there any recommendations?

Wiki Page: External Tables with Preprocessing

A question that comes up now and again is whether there is a way in Oracle Database 11g Express Edition to mimic behavior found in the Oracle Standard or Enterprise editions. Many of these questions arise because developers want to migrate a behavior they’ve implemented in Java to the Express Edition. Sometimes the answer is no, but many times the answer is yes. The yes answers come with a how. This article answers the question: “How can I read an operating system’s file directory without an embedded Java Virtual Machine (JVM)?” These developers have read or implemented logic like that found in my earlier “ Using DBMS_JAVA to Read External Files ” article. The answer is simple: you need to use a preprocessing script inside an external table. That’s what you will learn in this article, but if you’re not familiar with external tables you should first read this other “ External Tables ” article.

External tables let you access plain text files with SQL*Loader or Oracle’s proprietary Data Pump files. You typically create external tables with Oracle Data Pump when you’re moving large data sets between database instances. External tables use Oracle’s virtual directories. An Oracle virtual directory is an internal reference in the data dictionary. A virtual directory maps a unique directory name to a physical directory on the local operating system. Virtual directories were simple before Oracle Database 12c gave us the multitenant architecture. In a multitenant database there are two types of virtual directories. One services the schemas of the Container Database (CDB) and it’s in the CDB’s SYS schema. The other services the schemas of a Pluggable Database (PDB) and it’s in the ADMIN schema for the PDB. You can create a CDB virtual directory as the SYSTEM user with the following syntax in Windows:

SQL> CREATE DIRECTORY upload AS 'C:\Data\Upload';

or, like this in Linux or Unix:

SQL> CREATE DIRECTORY upload AS '/u01/app/oracle';

There are some subtle differences between these two statements. Windows directories or folders start with a logical drive letter, like C:\, D:\, and so forth. Linux and Unix directories start with a mount point like /u01. As you can read in the “ External Tables ” article, you need to change the ownership of external files and directories to the oracle user and the oracle user’s default dba group. Likewise, you should change the privilege of the containing directory to 755 (the owner has read, write, and execute privileges; the group and others have read and execute privileges). The balance of this article is broken into two pieces: configuring a working external table with preprocessing, and troubleshooting data cartridge errors.

External tables with preprocessing example

There are four database steps to creating this example. The first database step requires you to create three virtual directories. The syntax for the three statements is:

SQL> CREATE DIRECTORY upload AS '/u01/app/oracle/upload';
SQL> CREATE DIRECTORY log AS '/u01/app/oracle/log';
SQL> CREATE DIRECTORY preproc AS '/u01/app/oracle/preproc';

The upload directory hosts the files you want to discover for upload. The log directory hosts the log files for the external tables. The preproc directory hosts the executable program, which generates a list of files currently in the upload directory. Either before or after creating the virtual directories, you should create the physical directories in the Linux operating system. The virtual directories can only point to something when it actually exists.
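As a quick sketch of that operating system step, using the same three paths as above (run as root):

# mkdir -p /u01/app/oracle/upload
# mkdir -p /u01/app/oracle/log
# mkdir -p /u01/app/oracle/preproc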
Moreover, virtual directories work like Oracle’s synonyms: they point to other objects, in this case a physical location in the file system. The physical files need to be in a directory tree that is navigable by the oracle user, and the oracle user and its default primary dba group need to own them. You can use the following command to change ownership when you’re the root user:

# chown -R oracle:dba /u01/app/oracle

The second database step requires that you grant privileges on the virtual directories to the student user. You can do that with the following syntax:

SQL> GRANT read ON DIRECTORY upload TO student;
SQL> GRANT read, write ON DIRECTORY log TO student;
SQL> GRANT read, execute ON DIRECTORY preproc TO student;

The upload directory requires read-only privileges. The log directory requires read and write privileges. The read privilege lets it find files and the write privilege lets it append to log files when they already exist. The preproc directory requires read and execute privileges. The read privilege is the same as that explained earlier. The execute privilege lets you run the preprocessing program file. The third database step requires creating an external table with preprocessing. The following script creates the sample table:

SQL> CREATE TABLE directory_list
  2  ( file_name VARCHAR2(60))
  3  ORGANIZATION EXTERNAL
  4  ( TYPE oracle_loader
  5    DEFAULT DIRECTORY preproc
  6    ACCESS PARAMETERS
  7    ( RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
  8      PREPROCESSOR preproc:'list2dir.sh'
  9      BADFILE 'LOG':'dir.bad'
 10      DISCARDFILE 'LOG':'dir.dis'
 11      LOGFILE 'LOG':'dir.log'
 12      FIELDS TERMINATED BY ','
 13      OPTIONALLY ENCLOSED BY "'"
 14      MISSING FIELD VALUES ARE NULL)
 15    LOCATION ('list2dir.sh') )
 16  REJECT LIMIT UNLIMITED;

Line 5 designates the default directory as preproc because the location of the executable file is the preproc directory. Line 8 designates the preprocessing step; it identifies the virtual directory and, inside single quotes, the physical file name. Line 15 identifies the source file for the external table, which is the executable program. Next, you need to create the bash file that gets and returns a directory list. Before you write that file, you need to understand that preprocessing script files don’t inherit a $PATH environment variable from Oracle. That means a simple command like this:

ls /u01/app/oracle/upload | cat

becomes a bit more complex, like this:

/usr/bin/ls /u01/app/oracle/upload | /usr/bin/cat

Create a list2dir.sh file in the /u01/app/oracle/preproc directory with the preceding command line. Then make sure oracle is the owner with a primary dba group, and that the privileges on the file are 755. The command to set the privileges is:

# chmod -R 755 /u01/app/oracle/preproc

Having completed that Linux operating system step, you should put some files in the upload directory. For this example you can create empty files with the touch command at the Linux command line. The fourth database step lets you query the external table, which runs the preprocessing program and returns its results as rows in the table:

SQL> SELECT * FROM directory_list;

It should return something like this:

FILE_NAME
------------------------------
character.csv
transaction_upload2.csv
transaction_upload.csv

This example shows you how to implement external tables with preprocessing directives.

Troubleshooting external tables with preprocessing

There are several common errors that you run into when creating these types of external tables.
The top three are:

1. You failed to qualify the path element of executables:

select * from directory_list
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
KUP-04095: preprocessor command /u01/app/oracle/preprocess/list2dir.sh encountered error "/u01/app/oracle/preprocess/list2dir.sh: line 1: ls: No such file or directory

Forgetting or removing the fully qualified path from before the ls command causes this error, because preprocessor scripts inherit an empty $PATH environment variable. You can fix this error by putting the fully qualified path in front of the ls command.

2. You neglected to make the Linux script file executable:

SQL> select * from directory_list
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
KUP-04095: preprocessor command /u01/app/oracle/preproc/list2dir.sh encountered error "error during exec: errno is 13

Forgetting to change the list2dir.sh shell script’s file privileges causes this error. You can fix it by using the chmod command to change the file’s privileges to 755. The first value, seven, sets the owner’s file privileges to read, write, and execute. The second and third values, five, respectively set the privileges of the primary group and all others to read and execute.

3. You neglected to change the ownership of the preprocessing file:

SQL> select * from directory_list
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
KUP-04095: preprocessor command /u01/app/oracle/preprocess/list2dir.sh encountered error "/u01/app/oracle/preprocess/list2dir.sh: line 3: rm: No such file or directory
/u01/app/oracle/preprocess/list2dir.sh: line 7: ls: No such file or directory

Forgetting to change the list2dir.sh shell script’s file ownership causes this error. You can fix it by using the chown command to change the file’s ownership.

You now know how to create and troubleshoot common errors with Oracle’s external tables when you add preprocessing.
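For reference, a minimal sketch of the list2dir.sh script used throughout this article, assuming the article’s paths (remember the requirements above: fully qualified commands, oracle:dba ownership, 755 privileges):

#!/bin/bash
# List the contents of the upload directory, one file name per line.
# Commands are fully qualified because no $PATH is inherited from Oracle.
/usr/bin/ls /u01/app/oracle/upload | /usr/bin/cat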

Blog Post: My memories from Collaborate16

This year was my second time attending Collaborate, and I had the opportunity to meet many friends from other countries, like Gleb Otochkin, Heli Helskyaho, Tim Gorman, Kellyn Pot'Vin-Gorman and Nelson Calero. Collaborate is a good event for learning a lot of technical material. Personally, I attended sessions about Oracle Public Cloud, Oracle Multitenant, Amazon AWS, Oracle RAC, and several others. Below are some pictures I was able to take in the sessions I attended: Oracle Multitenant Internals by Vit Spinka, and What is Next in RAC (Some new features in RAC 12.2) by Markus Michalewicz. But this year Collaborate16 had a different flavor for me, because the article I wrote for the IOUG SELECT Journal in April 2015 was selected as the journal's best article of the year, which made me the winner of the "SELECT Journal Editor's Choice Award 2016". The award was presented at the IOUG Welcome Reception held on Sunday, April 10, 2016. Here is an image of my award [:D] If you are interested in reading the winning article, you can download it from here . I would like to thank everyone who reads my articles and sends me feedback; thanks to all of you, this award was possible. I will keep writing articles because this is my lifestyle. I love to share what I have learned and to help people, and I would like to invite you to do the same. As I have always said: everyone has something to teach and everyone has something to learn, so join us and keep sharing your knowledge! See you in my next article or at the next Oracle event. Follow me:

Blog Post: Find Data Guard Enabled targets in your Enterprise

A friend wrote: “As part of our Cloud environment, we wanted to know what databases have Data Guard enabled and the respective physical standby server details. We have thousands of databases running in 12c Cloud Control (Enterprise Manager), so we would like to know a better way to query each database and get the database_role and switchover status along with the DGMGRL configuration status. One option that comes to my mind is to create a SQL script job in 12c Enterprise Manager (EM) to run against all databases, but I am not sure whether this is the best approach. So, could you please shed some light on this? It would be greatly appreciated.”

My reply: For this you can run a script as per your thinking, but there may be a better way. (For running a script against target databases to query data in those databases, you need the BI Publisher license for each target database, so this will not be free.) This information may already be captured in the configuration data collected by the EM agent, so all you may need to do is run a query against the EM repository. Have a look at this Enterprise Manager repository view, described in detail here . The MGMT$DB_CONTROLFILES view displays the configuration settings for database control files. It has the columns you need:

HOST_NAME: Name of the target where the metrics will be collected
TARGET_NAME: Name of the database containing the data files
TARGET_TYPE: The type of target, for example, oracle_database
TARGET_GUID: The unique ID for the database target
COLLECTION_TIMESTAMP: The date and time when the metrics were collected
FILE_NAME: Name of the database control file
STATUS: The type of control file:
STANDBY - indicates the database is in standby mode
LOGICAL - indicates the database is a logical standby database (not a physical standby)
CLONE - indicates a clone database
BACKUP | CREATED - indicates the database is being recovered using a backup or created control file
CURRENT - the control file changes to this type following a standby database activate or a database open after recovery
CREATION_DATE: Control file creation date

You can query the view yourself, directly against the repository. The other option is to build a configuration search in Enterprise Manager to get the data in the Enterprise Manager console - and also to see the SQL query that is used for getting the data. As a matter of fact, the data available to the configuration search facility is what you see on the Configuration->Last Collected page off the target home page. If you can see it there, you can also get it using the configuration search facility. This viewlet demonstrates how to create a configuration search.
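As a starting point, here is a hedged sketch of such a repository query, built only from the columns described above (run it as a repository user, and verify the status values against your repository version before relying on it):

SELECT host_name,
       target_name,
       file_name,
       status,
       collection_timestamp
  FROM mgmt$db_controlfiles
 WHERE status IN ('STANDBY', 'LOGICAL')
 ORDER BY target_name;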

Comment on How To Create Local YUM Repository On RHEL 6

Thanks Ajith, these are crystal clear instructions... Regards, www.fita.in/.../

Blog Post: Critical Patch Update - patch your database, WebLogic, Java, MySQL

The Oracle Critical Patch Update (CPU) for April is out, and as usual there are some vulnerabilities on the list that you definitely want to patch. The bad ones (those scoring highest on the Common Vulnerability Scoring System (CVSS) Version 3.0) are those that can be exploited remotely, without authentication and without user interaction - in effect, anybody who can connect to your system can exploit them and affect confidentiality, integrity, availability, or a combination thereof. The April CPU contains problems of this kind across database, middleware, applications, Java, Sun hardware and MySQL. Some of the affected components include:

Oracle Database JVM
GlassFish
WebCenter Sites
WebLogic JMS
Java SE
MySQL Server

Please make sure you patch all vulnerable systems.

Blog Post: Minimize context switches and unnecessary PL/SQL code: an example from the PL/SQL Challenge

On the PL/SQL Challenge , when you click on a link to play a quiz, you are taken to the "launch" page. We give you an opportunity to review assumptions and instructions, and then press the Start button when you are ready (your score is based in part on the time it takes you to answer). However, if you've taken that particular quiz before, and there have been no changes to assumptions or instructions, the launch page just gets in the way. So I decided to streamline the flow on our site as follows:

1. If a person has never taken this quiz before, go to the launch page.
2. Otherwise, if assumptions or instructions have changed since the last playing of the quiz, go to the launch page.
3. Otherwise, go straight to the Play Quiz page.

I figured the way to do this is to build a function that will be invoked from Oracle Application Express. Here is a first pass, using the top-down design technique, at implementing the function.

CREATE OR REPLACE PACKAGE qdb_player_mgr
IS
   FUNCTION can_skip_launch_page (comp_event_id_in IN INTEGER,
                                  user_id_in       IN INTEGER)
      RETURN BOOLEAN;
END qdb_player_mgr;
/

CREATE OR REPLACE PACKAGE BODY qdb_player_mgr
IS
   FUNCTION can_skip_launch_page (comp_event_id_in IN INTEGER,
                                  user_id_in       IN INTEGER)
      RETURN BOOLEAN
   IS
      l_can_skip          BOOLEAN;
      l_last_changed_on   DATE;
   BEGIN
      /* Top-down design: the helper functions called below are written
         later. competition_answered is assumed to return TRUE when this
         player has answered this quiz before. */
      l_can_skip := competition_answered (comp_event_id_in, user_id_in);

      IF l_can_skip
      THEN
         l_last_changed_on :=
            GREATEST (last_assumption_change (comp_event_id_in),
                      last_instruction_change (comp_event_id_in));

         l_can_skip :=
            last_taken_on (comp_event_id_in, user_id_in) >= l_last_changed_on;
      END IF;

      /* If no answer, then last_taken_on is NULL, so "can skip" might be
         NULL at this point, and that means "cannot skip." */
      RETURN NVL (l_can_skip, FALSE);
   END;
END qdb_player_mgr;
/

I could have used a SELECT-INTO for this single-row query, but then I would have had to declare a record type to match the cursor's SELECT list, or declare individual variables for each expression returned. With an explicit cursor, I can simply declare a record using %ROWTYPE, as in the sketch below. This second solution is not as easy to read as the first implementation - unless you are comfortable with SQL and the data model of the application at hand (which obviously you would be, if it was your code!). But if you are going to write applications on top of a relational database, you should be comfortable - very comfortable - with SQL.
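A hedged sketch of the shape of that explicit-cursor approach (the table and column names here are hypothetical placeholders, not the actual PL/SQL Challenge schema):

CREATE OR REPLACE FUNCTION can_skip_launch_page_c (
   comp_event_id_in IN INTEGER,
   user_id_in       IN INTEGER)
   RETURN BOOLEAN
IS
   /* A single round-trip: one cursor returns everything needed to decide.
      qdb_comp_events and qdb_answers are hypothetical tables. */
   CURSOR answer_info_cur
   IS
      SELECT MAX (a.created_on)         AS last_taken_on,
             MAX (ce.instr_changed_on)  AS instr_last_changed_on,
             MAX (ce.assump_changed_on) AS assump_last_changed_on
        FROM qdb_comp_events ce
        LEFT JOIN qdb_answers a
               ON a.comp_event_id = ce.comp_event_id
              AND a.user_id = user_id_in
       WHERE ce.comp_event_id = comp_event_id_in;

   answer_info_r   answer_info_cur%ROWTYPE;   -- record declared with %ROWTYPE
   l_can_skip      BOOLEAN;
BEGIN
   OPEN answer_info_cur;
   FETCH answer_info_cur INTO answer_info_r;
   CLOSE answer_info_cur;

   /* A NULL last_taken_on (never played) makes the comparison NULL,
      which NVL turns into FALSE: "cannot skip." */
   l_can_skip :=
      answer_info_r.last_taken_on >=
         GREATEST (answer_info_r.instr_last_changed_on,
                   answer_info_r.assump_last_changed_on);

   RETURN NVL (l_can_skip, FALSE);
END can_skip_launch_page_c;
/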

Blog Post: How to Use Application Items in Oracle APEX 5.0

In our APEX applications we generally use items at the page level rather than at the application level. Application-level items are used to maintain session state. These items can be set through computations or processes, or by passing values in a URL. The difference between the two scopes is that a page item is tied to a specific page, whereas an application item is not associated with any particular page and is used throughout the application. For example, we can use application items to show, in the navigation menu, the number of Employees and Departments loaded in our database. To do this, we will create a desktop demo application with an Interactive Report on the EMP table and another page with an Interactive Report on the DEPT table. To count the records in each table we need to create an Application Computation for each one, since we need to count how many employees and how many departments there are in order to display the totals in the navigation menu. First we create two application items, one to hold the total number of Employees and another to hold the total number of Departments. Go to the application's Shared Components and, in the Application Logic section, select "Application Items". Click the Create button:

Name: EMP
Scope: Application

Scope offers two options: Global and Application. Specify Global if the Application Express session is shared across more than one application and the item's value must be the same in all of them. Otherwise, specify Application (the default), which is what we use in our case.

Note: Applications can share the same session if their authentication schemes have the same session cookie attributes. The Scope attribute of the application items must be the same in those applications.

Accept the default values for the remaining attributes. Then create the second application item, which we will call DEPT. Now let's create the computations for these application items. Go to the application's Shared Components and, in the Application Logic section, select "Application Computations".

Create the computation for the EMP application item. Click the Create button.
Item --- Computation Item: EMP
Frequency --- Computation Point: Before Header
Computation --- Computation Type: SQL Query (return single value)
Computation: select count(*) from emp
Click the Create Computation button.

Create the computation for the DEPT application item. Click the Create button.
Item --- Computation Item: DEPT
Frequency --- Computation Point: Before Header
Computation --- Computation Type: SQL Query (return single value)
Computation: select count(*) from dept
Click the Create Computation button.

We now have the application items and the computations for each one. Next we go to the Navigation Menu to display the values produced by the computations.

Customize the Navigation Menu. Go to the application's Shared Components and, in the Navigation section, select "Navigation Menu". Select "Desktop Navigation Menu"; the navigation menu entries are displayed: Home, Employees and Departments. Click the Employees link to edit it.
In Image/Class enter a "Font Awesome" icon; for example, we can use fa-users. In List Entry Label: Employees [&EMP.]. In Clear Cache: RIR. Save the changes. In the same way, select the Departments navigation menu entry and click the Departments link to edit it. In Image/Class: fa-building-o. In List Entry Label: Departments [&DEPT.]. In Clear Cache: RIR. Return to the application and run it: as we can now see, the navigation menu displays the number of records we have in the EMP and DEPT tables. In this way, using application items, we can very easily run computations and/or processes whose results are displayed across our entire application.
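Once an application item exists, its session-state value can be referenced anywhere in the application, not only in menu labels. Here is a small hedged sketch (the item name EMP comes from this example; the bind-variable, substitution-string and v() syntax is standard APEX):

-- As a bind variable in SQL or PL/SQL parsed by APEX:  :EMP
-- As a substitution string in templates and labels (as used above):  &EMP.
-- From PL/SQL, via the v() function:
begin
   if to_number (v ('EMP')) = 0 then
      htp.p ('No employees loaded yet');
   end if;
end;

Until the next article!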

Blog Post: Oracle Linux 5 to 6

Hi! I just discovered this migration tip…Oracle Linux users…take note… Migrating to the latest Oracle Database Appliance Patch Bundle also means migrating from Oracle Linux 5 to Oracle Linux 6. Input from a member of the IAOUG user group: “I have not done the change on an ODA, but we've done it across standalone servers. You will notice more between Linux 6 and 7. In 7, tuned profiles become automatic, there is a change from initd to systemd, and XFS is the default filesystem. Those things need to be addressed before you go, not after, as there are performance implications.” Thanks Brian.
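If you want to check where a given server stands on those three points, a few quick commands help (a hedged sketch; command availability varies by release):

# Which init system is PID 1? (prints "init" on OL5/6, "systemd" on OL7)
ps -p 1 -o comm=

# Active tuned profile, if the tuned package is installed
tuned-adm active

# Filesystem types currently mounted (XFS is the OL7 default)
df -T

Hope this helps keep you informed.

Dan Hotka
Oracle ACE Director
Instructor/Author/CEO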

Wiki Page: Developing a Mobile Application in Oracle APEX 5.0 - Part II

Written by Clarisa Maman Orfali

In the previous article we created a mobile application in Oracle APEX using the wizard, quickly and easily. In this section we will see how we can present information in different ways using the List View region type.

Create a Basic List View. To learn the different ways of presenting information with the List View, we will create a basic list view that takes its data from the DEMO_PRODUCT_INFO table. Create a new Report page with a "List View" region, which we will call Products. Create a new navigation menu entry named Products. For the Source, select the DEMO_PRODUCT_INFO table. Select "Inset List" as the list style and PRODUCT_NAME as the Text Column. On the confirmation page, click the Create button. Run the application: the Products page displays the List View region in the inset style.

Enable the search feature. Edit the Products page and, from Page Designer, select the Attributes of the "Products" List View region; uncheck the "Inset List" option and check "Enable Search". For the search type there are several options:
- Client Side search --- this is the one we select. (This setting is friendlier and more intuitive for the user, but it has a drawback: all the records in the list are loaded into the web browser.)
- Server Side search:
- Server: Exact & Case Sensitive
- Server: Exact & Ignore Case
- Server: Like & Case Sensitive
- Server: Like & Ignore Case

In Search Column, select "PRODUCT_NAME", and in Search Box Placeholder enter "Search for a product…". Save the changes and run the application.

If we want all placeholders to show a default text different from the standard "Search…", we can do so from Shared Components --- Globalization --- Text Messages. Click the Create Text Message button:
- Name: APEX.REGION.JQM_LIST_VIEW.SEARCH
- Text: whatever we want the default to be, for example: Search this list...

Return to the Products page, remove the text we had entered in the placeholder ("Search for a product...") and run the application. We can see that the search box now shows the default text "Search this list…". This feature only applies when the placeholder is empty; if we enter a text in the region's placeholder attribute, the default value is not shown, since it is replaced by the text entered in that attribute.

Divide the list contents. Another feature the List View offers is dividing the displayed records by means of the "Show List Divider" function. In the Attributes of the "Products" List View region, check "Show List Divider" and select the CATEGORY column as the List Divider Column. In the Link Target, for now, just enter the following URL: 'javascript:void(0);' so that the arrows for viewing the selected product are displayed. For now this link leads nowhere; later we will see how to set the target URL.
Save the changes and run the application.

Counter Column property. Another property we can use on our Products page is the "Counter Column". To use this feature we need to modify the list's source SQL query by adding the following subquery:

(Select Count(1) from demo_order_items i where i.product_id = p.product_id) as items_ordered

Our query will then look like this:

select PRODUCT_ID, PRODUCT_NAME, PRODUCT_DESCRIPTION, CATEGORY, PRODUCT_AVAIL, LIST_PRICE, PRODUCT_IMAGE, MIMETYPE, FILENAME, IMAGE_LAST_UPDATE, TAGS, (Select Count(1) from demo_order_items i where i.product_id = p.product_id) as items_ordered from DEMO_PRODUCT_INFO p

Return to the region attributes and, for Counter Column, select the new ITEMS_ORDERED column. Save and run the application.

Show the product image. To show the product image we need to tell the List View to display it. In the Attributes of the "Products" List View, check "Show Image"; the following attribute columns will appear:

Image BLOB Column: PRODUCT_IMAGE
Image Primary Key Column 1: ROWID

Save and run the application. It is important to note that we must take special care with the images we use in our application. If we need both large images and thumbnails, it is better to upload both versions; if we only use the large images, the app will take much longer to load the page because of the megabytes it must download for each image.

Modify the link target. To view the product selected from our list, we need to create a form that shows the product's data so the user can work with it. From the application's home page click the Create Page button, select Form, then Form Based on a Table or View. Select the DEMO_PRODUCT_INFO table with PRODUCT_ID as the primary key, accept all the default values, and on the branching page select the Products List View page. Return to the Products page and, in the List View attributes, replace the Link Target URL with the following URL:

f?p=&APP_ID.:7:&APP_SESSION.::&DEBUG.::P7_PRODUCT_ID:&PRODUCT_ID.

Note: Replace the page number 7 with the page number of your data form. When we run the application and click the arrow of, for example, the Jacket product, we see the form for editing the product data.

Adding Advanced Formatting. Another feature we can use in our List View is "Advanced Formatting". In the attributes of the Products List View, check "Advanced Formatting"; the following attributes will appear, and we assign them these values:
- List Attributes: data-divider-theme="a" (we can change the theme styles used for each category divider: if we enter "a" the divider is shown as a red stripe; if we enter "d", for example, it is shown in white with a black bottom border).
- List Entry Attributes: data-icon="gear" (we can change the icon shown: instead of the arrow we can show a gear; here you can find a jQuery Mobile demo with different icons to choose from).
- Text Formatting: (in this section we place the product name and product description using substitution variables) &PRODUCT_NAME. &PRODUCT_DESCRIPTION.
- Supplemental Information Formatting: (in this section we place the product price using a substitution variable) $&LIST_PRICE.

We add some inline CSS styles to the Products page.

CSS styles:

/* Reduce the thumbnail size */
.ui-li-thumb { width: 50px; height: 50px; }
/* Reduce the height of the li element containing the thumbnail */
.ui-li-has-thumb { height: 75px; }
/* Position of the counter and the icon */
.ui-li-has-alt.ui-li-has-count .ui-li-count, .ui-btn-icon-right > .ui-btn-inner > .ui-icon { top: 20px; right: 45px; }
/* Move the text to the left */
.ui-li-has-thumb .ui-btn-inner a.ui-link-inherit { padding-left: 55px; }
/* Styles for the price and the description */
.price { position: absolute; right: 40px; top: 50px; font-size: 11px; color: blue; }
.description { font-size: 10px; margin: 0px -1px; }

Modify the number of records per page. We can change the number of records per page from the List View attributes: in the Layout section, enter a value for Row Count, for example 5, and save the changes. When we run the page we will see no change, because we selected the "Client Side" option for the search, which loads all the records anyway; but if we change the search type to one of the server-side options and run it again, we can see the difference. Just as explained earlier, if we want to change the default "Load More…" text to something else, we go to Shared Components --- Globalization --- Text Messages and create a new text message named APEX.REGION.JQM_LIST_VIEW.LOAD_MORE, entering the desired text.

Throughout this article we have seen how easy it really is to create a page in APEX using the List View region type, and we have learned about the different properties available for displaying information on our page. Until next time!

Blog Post: Using Regular Expressions in Oracle

Oracle supports the use of regular expressions in the database. We can use regular expressions in SQL. But what is a regular expression? What is it for? A regular expression specifies a search pattern. With regular expressions we can combine metacharacters and literals, build complex search patterns, and process text in a very powerful way. Other languages, such as Java or Perl, also offer support for processing regular expressions. The advantage of having regular expression support in SQL is that we can process the data directly inside the database, without having to transfer data from a server to a client only to process it there and discard what is not needed. In short, using regular expressions directly in SQL lets us build more efficient solutions.

In Oracle there are five functions oriented to the use of regular expressions. They are easy to recognize by the prefix used in their names, REGEXP (REGular EXPressions):

REGEXP_LIKE
REGEXP_COUNT
REGEXP_INSTR
REGEXP_REPLACE
REGEXP_SUBSTR

And here we find ourselves in a very particular scenario. To use the functions just mentioned, we need to know something about regular expressions. At the same time, to test the regular expressions we build, we need to know the functions oriented to processing them. Let's get out of this vicious circle by exploring at least one function: REGEXP_LIKE.

The REGEXP_LIKE function can be used in the WHERE clause of a query to obtain the rows that match a given pattern. We said earlier that regular expressions combine metacharacters and literals; let's build a first regular expression that is simply a literal, and test the REGEXP_LIKE function:

SQL> select * from prueba_regexp;

TEXTO
--------------------------------------------------
El escarabajo de oro
El gato negro

SQL> select * from prueba_regexp where regexp_like(texto, 'gato');

TEXTO
--------------------------------------------------
El gato negro

In this example we can see that, thanks to the REGEXP_LIKE function, I was able to obtain the rows whose TEXTO column contains the word "gato". We can also use the REGEXP_LIKE function in the SELECT clause. Let's take the opportunity to build a template statement that will serve for our future regular expression tests (the CASE expression returns 'Hay coincidencia', meaning there is a match, or 'No hay coincidencia', no match):

select texto,
       'gato' regexp,
       case when regexp_like(texto, 'gato') then 'Hay coincidencia'
            else 'No hay coincidencia'
       end "COINCIDENCIA?"
  from prueba_regexp

TEXTO                       REGEXP        COINCIDENCIA?
--------------------------- ------------- -------------------
El escarabajo de oro        gato          No hay coincidencia
El gato negro               gato          Hay coincidencia

That is the end of our elementary test: a regular expression limited to a literal. Let's make the test more interesting by adding a metacharacter to our regular expression. Unlike a literal, a metacharacter has a special meaning. For example, the metacharacter "." represents a pattern that matches any character.
Let's build our first complex regular expression with literals and metacharacters: a.o ("a" dot "o"), where "a" and "o" are literals and "." is the metacharacter that matches any character. Then:

select texto,
       'a.o' regexp,
       case when regexp_like(texto, 'a.o') then 'Hay coincidencia'
            else 'No hay coincidencia'
       end "COINCIDENCIA?"
  from prueba_regexp
/

TEXTO                               REGEXP       COINCIDENCIA?
----------------------------------- ------------ -------------------
El escarabajo de oro                a.o          Hay coincidencia
El gato negro                       a.o          Hay coincidencia

"ajo" (from escarabajo) and "ato" (from gato) match the regular expression "a.o". That is why both rows show a match. And so we reach the end of this first article dedicated to regular expressions in Oracle.
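To see exactly which text matched, we can pair REGEXP_LIKE with REGEXP_SUBSTR, one of the other functions listed earlier. A quick companion sketch against the same prueba_regexp table, which should return something like this:

select texto, regexp_substr(texto, 'a.o') coincidencia
  from prueba_regexp;

TEXTO                        COINCIDENCIA
---------------------------- ------------
El escarabajo de oro         ajo
El gato negro                ato

We'll continue in the next article. See you!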