
Wiki Page: External Tables

by Michael McLaughlin

Oracle Database 9i introduced external tables. You can create external tables to load plain text files by using Oracle SQL*Loader. Alternatively, you can create external tables that load and unload files by using Oracle Data Pump. This article demonstrates both techniques.

You choose external tables that use Oracle SQL*Loader when you want to import plain text files. There are three types of plain text files: comma-separated value (CSV), tab-separated value (TSV), and position-specific text files.

External tables that use Oracle Data Pump don't work with plain text files. They work with an Oracle proprietary format, which means you load source files previously created by an Oracle Data Pump export. You typically create external tables with Oracle Data Pump when you're moving large data sets between database instances.

External tables use Oracle's virtual directories. An Oracle virtual directory is an internal reference in the data dictionary that maps a unique directory name to a physical directory on the local operating system. Virtual directories were simple before Oracle Database 12c gave us the multitenant architecture. In a multitenant database there are two types of virtual directories. One services the schemas of the Container Database (CDB) and lives in the CDB's SYS schema. The other services the schemas of a Pluggable Database (PDB) and lives in the ADMIN schema for the PDB.

You can create a CDB virtual directory as the SYSTEM user with the following syntax in Windows:

SQL> CREATE DIRECTORY upload AS 'C:\Data\Upload';

or like this in Linux or Unix:

SQL> CREATE DIRECTORY upload AS '/u01/app/oracle';

There are some subtle differences between these two statements. Windows directories or folders start with a logical drive letter, like C:\, D:\, and so forth. Linux and Unix directories start with a mount point like /u01.

One of the subtle differences is directory and file ownership. You can change ownership for a directory in Windows as the Administrator account. The change makes the directory publicly accessible, and that's probably fine for a test database. After such a change, the Oracle user can find the external file even when parent directories aren't navigable. However, a production database on Windows would require more skill at setting and restricting file permissions.

Linux and Unix directories require that the oracle user can navigate the tree from the mount point to the target physical directory. Also, you must designate the ownership of external files as the same as the Oracle Database user. Assuming a standard install of an Oracle Database 11g XE instance, you would issue the following shell commands as the root user to change file ownership and access privileges:

# chown -R oracle:dba /u01/app/oracle/upload
# chmod -R 755 /u01/app/oracle/upload

After you create the virtual directory, you must grant privileges or a role to the user that defines the external table. While data and log files should be separated, this example assumes they co-exist in the same directory. The following statement grants read privilege for the data file and write privileges for the log files to a CDB user. You should run this statement as the SYSTEM user:

GRANT read, write ON DIRECTORY upload TO c##importer;

or like this for a non-multitenant database or a PDB user:

GRANT read, write ON DIRECTORY upload TO importer;

The last preparation step requires a plain text file in the physical directory.
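Before creating that file, a quick sanity check against the data dictionary confirms that the virtual directory exists and points where you expect. This is a minimal sketch; ALL_DIRECTORIES is a standard view, and the query assumes you are connected as a user who can see the directory (SYSTEM, for example):

SQL> SELECT directory_name
  2  ,      directory_path
  3  FROM   all_directories
  4  WHERE  directory_name = 'UPLOAD';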
Let's create a CSV file of key Avenger characters and name it avenger.csv. The avenger.csv file holds the following values:

1,'Anthony','Stark','Iron Man'
2,'Thor','Odinson','God of Thunder'
3,'Steven','Rogers','Captain America'
4,'Bruce','Banner','Hulk'
5,'Clinton','Barton','Hawkeye'
6,'Natasha','Romanoff','Black Widow'

You create the external table after creating the virtual directory, granting read and write privileges on the virtual directory, and creating an external physical file. The syntax for the CREATE TABLE statement of an external table is very similar to the syntax of an ordinary table. The difference between the two types of tables is a clause: an internal table has a STORAGE clause, while an external table has an ORGANIZATION EXTERNAL clause. The following creates the avenger table as an external table:

SQL> CREATE TABLE avenger
  2  ( avenger_id     NUMBER
  3  , first_name     VARCHAR2(20)
  4  , last_name      VARCHAR2(20)
  5  , character_name VARCHAR2(20))
  6  ORGANIZATION EXTERNAL
  7  ( TYPE oracle_loader
  8    DEFAULT DIRECTORY upload
  9    ACCESS PARAMETERS
 10    ( RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
 11      BADFILE     'UPLOAD':'avenger.bad'
 12      DISCARDFILE 'UPLOAD':'avenger.dis'
 13      LOGFILE     'UPLOAD':'avenger.log'
 14      FIELDS TERMINATED BY ','
 15      OPTIONALLY ENCLOSED BY "'"
 16      MISSING FIELD VALUES ARE NULL)
 17    LOCATION ('avenger.csv'))
 18  REJECT LIMIT UNLIMITED;

Lines 1 through 5 create the columns of the avenger table. Lines 6 through 17 contain the ORGANIZATION EXTERNAL clause. Line 7 designates the external table as managed by the Oracle SQL*Loader utility. Line 8 sets the default virtual directory. Lines 11 through 13 set the bad, discard, and log file locations. The bad and discard files keep the rows that can't be read, while the log file records the rows read by queries against the avenger table.

You also have the option of making all reads automatically parallel. You simply add a PARALLEL clause, like this:

 19  PARALLEL;

A simple query with SQL*Plus formatting lets us test whether the avenger table works. The query to display all columns of all rows is:

SQL> COLUMN first_name     FORMAT A10
SQL> COLUMN last_name      FORMAT A10
SQL> COLUMN character_name FORMAT A15
SQL> SELECT * FROM avenger;

It yields the following formatted output:

AVENGER_ID FIRST_NAME LAST_NAME  CHARACTER_NAME
---------- ---------- ---------- ---------------
         1 Anthony    Stark      Iron Man
         2 Thor       Odinson    God of Thunder
         3 Steven     Rogers     Captain America
         4 Bruce      Banner     Hulk
         5 Clinton    Barton     Hawkeye
         6 Natasha    Romanoff   Black Widow

6 rows selected.

It's possible to redefine the avenger table to use either relative or fixed positional columns. You change the ACCESS PARAMETERS clause on lines 9 through 16 to make this change. The following ACCESS PARAMETERS clause runs across lines 9 through 19 and creates a relative position definition:

  9    ACCESS PARAMETERS
 10    ( RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
 11      BADFILE     'UPLOAD':'avenger.bad'
 12      DISCARDFILE 'UPLOAD':'avenger.dis'
 13      LOGFILE     'UPLOAD':'avenger.log'
 14      FIELDS
 15      MISSING FIELD VALUES ARE NULL
 16      ( avenger_id     CHAR(4)
 17      , first_name     CHAR(20)
 18      , last_name      CHAR(20)
 19      , character_name CHAR(4)))

You can change from the relative position to a fixed position by changing lines 16 through 19.
The change for fixed-length strings is:

 16      ( avenger_id     POSITION (1:4)
 17      , first_name     POSITION (5:24)
 18      , last_name      POSITION (25:44)
 19      , character_name POSITION (45:64)))

If you fail to enclose the position references in parentheses, a query against the table raises the following exception:

SELECT * FROM avenger
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "number": expecting one of: "("
KUP-01007: at line 8 column 33

Having worked with the Oracle SQL*Loader version of external tables, let's create one that uses Oracle Data Pump. Assuming we keep the same data structure, drop the avenger table and create a catalog-managed avenger_internal table. This statement creates the avenger_internal table:

SQL> CREATE TABLE avenger_internal
  2  ( avenger_id     NUMBER
  3  , first_name     VARCHAR2(20)
  4  , last_name      VARCHAR2(20)
  5  , character_name VARCHAR2(20));

To avoid writing six INSERT statements, you can write one INSERT statement with a query against the SQL*Loader avenger table. The syntax for that INSERT statement is:

SQL> INSERT INTO avenger_internal
  2  SELECT * FROM avenger;

With an internally managed table, you create an avenger_export table that uses Oracle Data Pump like this:

SQL> CREATE TABLE avenger_export
  2  ORGANIZATION EXTERNAL
  3  ( TYPE oracle_datapump
  4    DEFAULT DIRECTORY upload
  5    LOCATION ('avenger_export.dmp')) AS
  6  SELECT avenger_id
  7  ,      first_name
  8  ,      last_name
  9  ,      character_name
 10  FROM   avenger_internal;

The CREATE TABLE statement exports data to the avenger_export.dmp file immediately. You must drop and recreate the avenger_export table to get a fresh extract of the avenger_internal table's data. You must also remove the previous avenger_export.dmp file before you try to recreate the avenger_export table. You raise the following error when you fail to remove the previous export file:

CREATE TABLE avenger_export
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-11012: file avenger_export.dmp in /u01/... already exists

This is a simple example with only four columns. You might think you can use SELECT * as the SELECT-list of the query on lines 6 through 10. If you're running Oracle Database 12c, you can use the shorter syntax, but if you're running Oracle Database 11g you can't. If you attempt it in an Oracle Database 11g instance, the CREATE TABLE statement returns the following error:

ERROR at line 6:
ORA-30656: COLUMN TYPE NOT supported ON external organized TABLE

You create an avenger_import table with another twist on this now familiar Oracle SQL syntax. The CREATE TABLE statement is:

SQL> CREATE TABLE avenger_import
  2  ( avenger_id     NUMBER
  3  , first_name     VARCHAR2(20)
  4  , last_name      VARCHAR2(20)
  5  , character_name VARCHAR2(20))
  6  ORGANIZATION EXTERNAL
  7  ( TYPE oracle_datapump
  8    DEFAULT DIRECTORY up2load
  9    LOCATION ('avenger_export.dmp'));

Like the export process, the import process happens immediately when the CREATE TABLE statement runs. A query against the avenger_import table shows the original six rows we started with in the plain text file. This article has introduced Oracle external tables. It has shown you how to import plain text files with SQL*Loader, and how to export files from tables. You can find the SQL files that create the various forms of the avenger, avenger_export and avenger_import tables.
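Once the external tables are in place, the data dictionary gives a quick way to confirm how each one is wired up. This is a minimal sketch; USER_EXTERNAL_TABLES is a standard view, and the query assumes you are connected as the owning schema. You should see ORACLE_LOADER for the avenger table and ORACLE_DATAPUMP for avenger_export and avenger_import:

SQL> COLUMN table_name             FORMAT A20
SQL> COLUMN type_name              FORMAT A16
SQL> COLUMN default_directory_name FORMAT A10
SQL> SELECT table_name
  2  ,      type_name
  3  ,      default_directory_name
  4  FROM   user_external_tables;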
Oracle 11g added a PREPROCESSOR element, which you can read about in this subsequent article on External Tables with Preprocessing files.

Blog Post: Explicit Semantic Analysis in Oracle 12.2c Database

A new Oracle Data Mining algorithm in the Oracle 12.2 Database is called Explicit Semantic Analysis (ESA).

[ The following examples are built using Oracle Data Miner 4.2 (SQL Developer 4.2) and the Oracle 12.2 Database cloud service (extreme edition) ]

The Explicit Semantic Analysis algorithm is an unsupervised algorithm used for feature extraction. ESA does not discover latent features; instead it uses explicit features based on an existing knowledge base. There is no setup or install necessary to use this algorithm. All you need is a licence for the Advanced Analytics Option for the database. The output from the algorithm is a distance measure that indicates how similar or dissimilar the input texts are, using the ESA model (and the training data set used). Let us look at an example.

Setup training data for the ESA Algorithm

Oracle Data Miner 4.2 (which comes with SQL Developer 4.2) has a Wiki data set from 2005. This contains over 200,000 features. To locate the file go to:

...\sqldeveloper\dataminer\scripts\instWikiSampleData.sql

This file contains the DDL and the insert statements for the Wiki data set. After you run this script a new table called WIKISAMPLE exists and contains the records. This gives us the base/seed data set to feed into the ESA algorithm.

Create the ESA Model using ODMr

There are two ways to create the ESA model. In this blog post I'll show you the easiest way, by using the Oracle Data Miner (ODMr) tool. I'll have another blog post that will show you the SQL needed to create the model.

In an ODMr workflow create a new Data Source node. Then set this node to have the WIKISAMPLE table as its data source. Next you need to create the ESA node on the workflow. This node can be found in the Models section of the Workflow Editor. The node is called Explicit Feature Extraction. Click on this node in the Models section, then move your mouse to your workflow and click again. The ESA node will be created. Join the Data node to the ESA node by right-clicking on the data node and then clicking on the ESA node. Double-click on the ESA node to edit the properties of the node and the algorithm.

Explore the ESA Model and ESA Model Features

After the model node has finished you can explore the results generated by the ESA model. Right-click on the model node and select 'View Model'. The model properties window opens and it has two main tabs. The first of these is the Coefficients tab. Here you can select a particular topic (click on the search icon beside the Feature ID) and select it from the list. The attributes and their coefficient values will be displayed. Next you can examine the second tab, labeled Features. In this tab we can select a particular record and have a tag cloud and coefficients displayed. The tag cloud is a great way to see visually which words are important.

How to use the ESA model to compare new data using SQL

Now that we have the ESA model created, we can use the model to compare other similar sets of documents. You will need to use the FEATURE_COMPARE SQL function to evaluate the input texts, using the ESA model to compare them for similarity. For example:

SELECT FEATURE_COMPARE(feat_esa_1_1
         USING 'Oracle Database is the best available for managing your data' text
         AND USING 'The SQL language is the one language that all databases have in common' text) similarity
FROM   DUAL;

The result we get is 0.7629. The result generated by the query is a distance measure. The FEATURE_COMPARE function returns a comparison number in the range 0 to 1.
A value of 0 indicates that the texts are not similar or related, while a value of 1 indicates that the texts are very similar or very related. You can use the returned value to make a decision on what happens next. For example, it can be used to decide what the next step should be in your workflow, and you can easily write application logic to manage this. The examples given here are for general text. In the real world you would probably need a bigger data set. But if you were to use this approach in other domains, such as legal, banking, insurance, etc., then you would need to create a training data set based on the typical language that is used in each of those domains. This would then allow you to compare documents within each domain with greater accuracy.

[ The above examples are built using Oracle Data Miner 4.2 (SQL Developer 4.2) and the Oracle 12.2 Database cloud service (extreme edition) ]
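As a postscript, here is a minimal sketch of how the same function could drive the "what happens next" decision over stored documents. The doc_library table, its doc_id and doc_text columns, and the reference phrase are hypothetical illustrations, not part of the original post; only the feat_esa_1_1 model name and the FEATURE_COMPARE syntax come from the example above:

SELECT d.doc_id
,      FEATURE_COMPARE(feat_esa_1_1
         USING d.doc_text text
         AND USING 'Oracle Database is the best available for managing your data' text) similarity
FROM   doc_library d
ORDER BY similarity DESC;

Rows near the top of this ordering are the documents the ESA model considers closest to the reference phrase, so application logic could, for example, route anything above a chosen threshold to one workflow branch and the rest to another.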

Blog Post: 12.2: Avoid hard-coding maximum length of VARCHAR2 (and more)

Starting with Oracle Database 12c Release 2 (12.2), we can now use static expressions* where previously only literal constants were allowed. Here are some examples (also available in this LiveSQL script):

CREATE OR REPLACE PACKAGE pkg AUTHID DEFINER
IS
   c_max_length CONSTANT INTEGER := 32767;
   SUBTYPE maxvarchar2 IS VARCHAR2 (c_max_length);
END;
/

DECLARE
   l_big_string1 VARCHAR2 (pkg.c_max_length) := 'So big....';
   l_big_string2 pkg.maxvarchar2 := 'So big via packaged subtype....';
   l_half_big VARCHAR2 (pkg.c_max_length / 2) := 'So big....';
BEGIN
   DBMS_OUTPUT.PUT_LINE (l_big_string1);
   DBMS_OUTPUT.PUT_LINE (l_big_string2);
END;
/

As you can see from this code, static expressions can now be used in subtype declarations. The definition of static expressions is expanded to include all the PL/SQL scalar types and a much wider range of operators. Character operands are restricted to a safe subset of the ASCII character set. Operators whose results depend on any implicit NLS parameter are disallowed.

Expanded and generalized expressions have two primary benefits for PL/SQL developers:

Programs are much more adaptable to changes in their environment.
Programs are more compact, clearer, and substantially easier to understand and maintain.

* A static expression is an expression whose value can be determined at compile time. This means the expression cannot include character comparisons, variables, or function invocations. An expression is static if it is any of the following:

the NULL literal
a character, numeric, or boolean literal
a reference to a static constant
a reference to a conditional compilation variable beginning with $$
an operator applied to static operands, provided the operator does not raise an exception when it is evaluated on those operands

Read more in the doc.
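As a small illustration of the "adaptable to change" benefit, the sketch below derives a new bound from the packaged constant shown above, so a single edit to pkg.c_max_length ripples through every dependent declaration on recompilation. This is a minimal sketch under the same 12.2 assumptions; the str_pkg package, the c_wrapper_length constant, and the payload_buffer subtype are hypothetical names, not part of the original post:

CREATE OR REPLACE PACKAGE str_pkg AUTHID DEFINER
IS
   -- hypothetical constant: reserve room for a pair of delimiter characters
   c_wrapper_length CONSTANT INTEGER := 2;
   -- static expression combining two packaged constants (legal in 12.2 and later)
   SUBTYPE payload_buffer IS VARCHAR2 (pkg.c_max_length - c_wrapper_length);
END;
/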

Blog Post: OBIEE Oracle Support Notes - Useful One.

A list of Information Centers: Note 1378677.2 - Information Center: Enterprise Performance Management and BI Index (EPM/BI) Note 1349989.2 - Information Center: Installing and Configuring Oracle Business Intelligence Enterprise Edition Release 10g and Later Note 1349996.2 - Information Center: Optimizing Performance for Oracle Business Intelligence Enterprise Edition Release 10g and Later Note 1349983.2 - Information Center: Oracle Business Intelligence Enterprise Edition (OBIEE) Release 10g and Later Note 1350005.2 - Information Center: Security Information for Oracle Business Intelligence Enterprise Edition Release 10g and Later List of Notes that may be useful :- Note 1210310.1 - Master Note for Answers and Dashboards Issues in OBIEE Note 1292894.1 - Master Note for BI Publisher Issues in OBIEE Note 1292859.1 - Master Note for Briefing Book Issues in OBIEE Note 1292904.1 - Master Note for Cache Issues in OBIEE Note 1292936.1 - Master Note for Clustering Issues in OBIEE Note 1293348.1 - Master Note for Crash/Hang Issues in OBIEE Note 1293505.1 - Master Note for Data Warehouse Issues in OBIEE Note 1293334.1 - Master Note for Disconnected Analytics Issues in OBIEE Note 1265441.1 - Master Note for OBIEE 10g and 11g Essbase Integration issues (Doc ID ) Note 1293329.1 - Master Note for Integrated Security Issues in OBI Applications Note 1248939.1 - Master Note for OBIEE 10g Integration with EBS, Siebel, SSO, Portal Server, Peoplesoft Note 1301946.1 - Master Note for Internationalization and Globalization Issues in OBIEE Note 1293391.1 - Master Note for iPhone BI Apps Issues in OBIEE Note 1293337.1 - Master Note for Mapviewer Issues in OBIEE Note 1293391.1 - Master Note For Oracle Business Intelligence Mobile Applications (iPhone/iPad) Issues in OBIEE Note 1293344.1 - Master Note for Multi-user Development Issues in OBIEE Note 1293301.1 - Master Note for Office Integration Issues in OBIEE Note 1364889.1 - Master Note For OBIEE use with OPatch Note 1293374.1 - Master Note for Performance Issues in OBIEE Note 1293435.1 - Master Note for Presentation Server Administration Issues in OBIEE Note 1293384.1 - Master Note for Repository Design Issues in OBIEE Note 1293351.1 - Master Note for Scorecard & KPI Issues in OBIEE Note 1293407.1 - Master Note for Security/Access Control Issues in OBIEE Note 1293411.1 - Master Note for Server Execution Issues in OBIEE Note 1293394.1 - Master Note for SOAP API Issues in OBIEE Note 1293424.1 - Master Note for System Configuration EM/JMX Issues in OBIEE Note 1293415.1 - Master Note for Usage Tracking Issues in OBIEE Note 1293477.1 - Master Note for Webcat Replication Issues in OBIEE Note 1293490.1 - Master Note for Write Back Issues in OBIEE Note 1307975.1 - Summary Note About OBIEE 10.1.3.4.1 Patch 9492821: Information Applicable Prior To, Or After, Install Note 1391648.1 - OBIEE11g: Installation, Migration and Upgrade Hints and Tips Note 1589028.1 - Master Note For Oracle Hyperion Smart View For Office Issues in OBIEE NOTE:1293490.1 - Master Note for Write Back Issues in OBIEE NOTE:1293505.1 - Master Note for Data Warehouse Issues in OBIEE NOTE:1301946.1 - Master Note for Internationalization and Globalization Issues in OBIEE NOTE:1349983.2 - Information Center: Oracle Business Intelligence Enterprise Edition (OBIEE) Release 10g and Later NOTE:1349989.2 - Information Center: Installing and Configuring Oracle Business Intelligence Enterprise Edition Release 10g and Later NOTE:1349996.2 - Information Center: Optimizing Performance for Oracle Business Intelligence 
Enterprise Edition Release 10g and Later NOTE:1350005.2 - Information Center: Security Information for Oracle Business Intelligence Enterprise Edition Release 10g and Later NOTE:1364889.1 - Master Note For OBIEE use with OPatch NOTE:1378677.2 - Information Center: Business Analytics Index (EPM/BI) NOTE:1293391.1 - Master Note For Oracle Business Intelligence Mobile Applications (iPhone/iPad) Issues in OBIEE NOTE:1293394.1 - Master Note for SOAP API Issues in OBIEE NOTE:1293407.1 - OBIEE 11g: Master Note for Security/Access Control Issues NOTE:1293411.1 - Master Note for Server Execution Issues in OBIEE NOTE:1293415.1 - Master Note for Usage Tracking Issues in OBIEE NOTE:1293424.1 - Master Note for System Configuration EM/JMX Issues in OBIEE NOTE:1293435.1 - Master Note for Presentation Server Administration Issues in OBIEE NOTE:1293477.1 - Master Note for Webcat Replication Issues in OBIEE NOTE:1210310.1 - Master Note for Answers and Dashboards Issues in OBIEE NOTE:1248939.1 - Master Note for OBIEE 10g Integration with EBS, Siebel, SSO, Portal Server, Peoplesoft NOTE:1265441.1 - Master Note for OBIEE 10g and 11g Essbase Integration issues NOTE:1267009.1 - Oracle Business Intelligence Enterprise Edition (OBIEE) Product Information Center (PIC) NOTE:1292859.1 - Master Note for Briefing Book Issues in OBIEE NOTE:1292894.1 - Master Note for BI Publisher Issues in OBIEE NOTE:1292904.1 - Master Note for Cache Issues in OBIEE 10g and 11g NOTE:1292936.1 - Master Note for Clustering Issues in OBIEE 10g and 11g NOTE:1293301.1 - Master Note for Office Integration Issues in OBIEE NOTE:1293329.1 - Master Note for Integrated Security Issues in OBI Applications NOTE:1293334.1 - Master Note for Disconnected Analytics Issues in OBIEE NOTE:1293337.1 - OBIEE: Master Note for Mapviewer Issues NOTE:1293344.1 - Master Note for Multi-user Development Issues in OBIEE NOTE:1293348.1 - Master Note for Crash/Hang Issues in OBIEE 10g and 11g NOTE:1293351.1 - Master Note for Scorecard & KPI Issues in OBIEE NOTE:1293374.1 - Master Note for Performance Issues in OBIEE NOTE:1293384.1 - Master Note for Repository Design Issues in OBIEE NOTE:1589028.1 - Master Note For Oracle Hyperion SmartView (Smart View) For Office Issues in OBIEE Cheers And Enjoy Reading Osama mustafa

Blog Post: Licensing Data Recovery Environments

Oracle provides a number of Specialty information documents on licensing at this link: http://www.oracle.com/us/corporate/pricing/specialty-topics/index.html Out of these, the Licensing Data Recovery Environments document here says: “Standby and Remote Mirroring are commonly used terms to describe these methods of deploying Data Recovery environments. In these Data Recovery deployments, the data, and optionally the Oracle binaries, are copied to another storage device. “In these Data Recovery deployments all Oracle programs that are installed and/or running must be licensed per standard policies documented in the Oracle Licensing and Services Agreement (OLSA). This includes installing Oracle programs on the DR server(s) to test the DR scenario. “Licensing metrics and program options on Production and Data Recovery/Secondary servers must match.” An interested party raised a question. In his case, the standby database was not using the RAC option, whereas the primary database was a RAC database. Would his company still have to pay the license for RAC on the standby as well, even if it was not set up? The answer: No. The license is payable only if RAC is set up at the standby database level. However, other database options do need to be licensed for the standby if they are being licensed on the production database, such as the in-memory option, the partitioning option, and so on. (Note: The information above is provided as a guideline. For formal licensing requirements, it is always a good idea to confirm with your local Oracle sales representative.)

Wiki Page: Consolidation Planning for the Cloud - Part VI

by Porus Homi Havewala

We introduced the concept of Consolidation Planning in the previous parts of this article series. Part V was here. This is Part VI.

Consolidating to Virtual Machines

In the same way, we can create a new project for a P2V (Physical to Virtual) consolidation. This allows you to consolidate your existing physical machines to one or more virtual machines. The consolidation project only applies to Oracle Virtual Machines. Consolidating to non-Oracle virtual machines, such as VMware, is not supported unless you treat the VMware machines as physical machines and use the earlier P2P consolidation method.

When creating the project, simply select the Consolidation Type to be "From physical servers to Oracle Virtual Servers (P2V)". The other screens are mostly the same. You can select source candidates, then add existing virtual servers (current Enterprise Manager targets) as destination candidates. If you do not specify the destination candidates, only phantom (new) virtual servers will be used. In the Pre-configured Scenarios step of the process, select New (Phantom) servers. This is seen in Figure 1-34. Note that there is no engineered systems option at this stage; however, engineered systems can be chosen later on when creating the scenario.

Figure 1-34. Phantom Servers in the case of P2V Projects.

If you do not intend to use engineered systems as the destination, you can manually specify the CPU capacity, Memory, and Disk Storage of your phantom virtualized servers, along with optional entries for reserved CPU and reserved memory. Finish creating and submitting the P2V Project. Once the project is ready, you can create a scenario. In the case of a P2V scenario, a list of Exalogic configurations is provided, and you can select from that. We have selected the Exalogic Elastic Cloud X5-2 (Eight Rack) as shown in Figure 1-35.

Figure 1-35. Selecting an Exalogic X5-2 (Eight Rack) for the P2V Scenario.

The rest of the scenario calculations and mapping work in the same way, and the physical source servers are mapped to the destination phantom Eight Rack. In this manner, the Consolidation Planner allows you to play a number of what-if scenarios, for P2P as well as P2V consolidations, on the basis of sound mathematical calculations. You can specify which metrics will be analysed, and this results in the calculation of the resource requirements for every source server. Each resource is aggregated to a 24-hour pattern based on a different formula, depending on whether Conservative, Medium or Aggressive has been selected as the algorithm. Constraints can also be specified as to which server workloads can co-exist, and which workloads should be placed on different target servers for business or technical reasons.

Updating the Benchmark Rates

Now for a look at the benchmark rates. The SPECint®_base_rate2006 benchmark is used by the Consolidation Planner for database or application hosts, or hosts with a mixed workload. The SPECjbb®2005 benchmark is used for middleware platforms. The benchmarks are used as a representation of the processing power of CPUs, including Intel Itanium, Intel Xeon, SPARC T3, SPARC64, AMD Opteron, as well as IBM POWER. The SPEC rates in Enterprise Manager can also be updated by users in the following manner. Go to the Host Consolidation Planner home page via Enterprise | Consolidation | Host Consolidation Planner, and select Actions | View Data Collection. On this page, see the section titled "Load Benchmark CSV File".
The following information is seen on the page: “The built-in benchmark rates can be updated with a downloaded Comma Separated Values (CSV) file. To download the latest data, go to http://www.spec.org/cgi-bin/osgresults and choose either SPECint2006 Rates or SPEC JBB2005 option for "Available Configurations" and "Advanced" option for "Search Form Request", then click the "Go!" button. “In the Configurable Request section, keep all default settings and make sure the following columns are set to "Display": For SPECint2006: Hardware Vendor, System, # Cores, # Chips, # Cores Per Chip, # Threads Per Core, Processor, Processor MHz, Processor Characteristics, 1st Level Cache, 2nd Level Cache, 3rd Level Cache, Memory, Base Copies, Result, Baseline, Published; for SPEC JBB2005: Company, System, BOPS, BOPS per JVM, JVM, JVM Instances, # cores, # chips, # cores per chip, Processor, CPU Speed, 1st Cache, 2nd Cache, Memory, Published. Choose "Comma Separated Values" option for "Output Format", then click "Fetch Results" button to show retrieved results in CSV format and save the content via the "Download" link in a CSV file, which can be loaded to update the built-in rates”. As per the above instructions, the latest spec rates can be downloaded from SPEC.org in the form of CSV files, and then uploaded to the repository. We will continue this article series in Part VII. (This article is an excerpt from Chapter I of the new book by the author titled “Oracle Database Cloud Cookbook with Oracle Enterprise Manager 13c Cloud Control” published by Oracle Press in August 2016. For more information on the book, see here .)

Wiki Page: Consolidation Planning for the Cloud - Part V

By Porus Homi Havewala We introduced the concept of Consolidation Planning in the previous parts of this article series. Part IV was here . This is Part V. Click on the Destinations tab. This is displayed in Figure 1-21. Figure 1-21: Destinations Tab. In this tab, we see that the Destination Server is the phantom Exadata Database machine with four rack nodes (since it is a quarter-rack). The SPECint rate is shown to be 391 (estimated) for each node. The CPU and Memory utilization that would result after the consolidation on the two nodes, are also shown. We can see that the one of the rack nodes would reach 64.2% of CPU utilization, which is more than the other nodes. Click on the Ratio tab. This is displayed in Figure 1-22. Figure 1-22: Consolidation Ratio. In this tab, we can see the Consolidation Ratio of Source servers to the Target servers calculated as 1.3. This is because only 5 Source servers have been consolidated to the 4 rack nodes of the database machine. Click on the Mapping tab, as seen in Figure 1-23. Figure 1-23: Mapping Tab. On the Mapping tab, the destination and source servers are shown together along with their SPECint rates and the resource utilizations. This allows you to get an idea of how the CPU and Memory utilization of the destination servers has been calculated. The Destination Mapping is displayed as Automatic. Click on the Confidence tab. This is now displayed in Figure 1-24. Figure 1-24: Confidence Tab. On this tab, the Confidence is shown as 100%. Out of the data collected from the source servers, 199 total data points were evaluated and all of them met the requirements. The Heat Map graph shows the Hourly Resource Utilization of all resources (including CPU and Memory) for all destination servers, and is color-coded to show the hours and days when there is maximum and least resource utilization. For example, in this particular scenario, there are blue colored blocks showing 0-20% utilization on certain days and hours. This seems fine. You can click on any of the blocks in the graph to see the value. Also, using the drop down box for the destination server, it is possible to choose any one of the rack nodes, and see the utilization for that node only. Move to the Violations tab. There are no Violations to be seen in this scenario. Next, move to the Exclusions tab, shown in Figure 1-25. Figure 1-25: Exclusions Tab. There are 4 server exclusions visible on this tab, these servers could not be consolidated, since their memory requirement at a certain hour was higher than what was available. So, we obviously require a more powerful Exadata machine. Go back and create a new what-if scenario. In this case, select an X5-2 (Eighth Rack). This is a more powerful machine. Also remove the constraints. The scenario analysis completes as seen in Figure 1-26. Figure 1-26. The New Custom Scenario for an Eight Rack. Click on the Ratio tab. We can see that all 9 servers have now been consolidated on the two nodes, 3 servers on the first node and 6 servers on the second node. (Figure 1-27). Figure 1-27. The Ratio Tab Move to the Mapping tab (Figure 1-28). The Rack nodes in this case have a higher CPU Spec metric of 700 (Estimated), and the memory is also higher at 256GB per node. So all of the 9 servers could be consolidated. Figure 1-28. The Mapping Tab The confidence tab also shows 100% confidence (Figure 1-29), although there are hot spots on some hours approaching 90% usage. Figure 1-29. The Confidence Tab There are no violations and exceptions. 
In this way, we have seen the first P2P (Physical to Physical) project and scenario results, with their what-if scenarios. (1)Consolidating to Oracle Public Cloud Servers Enterprise Manager 13c’s Host Consolidation Planner also allows you to select the Oracle Compute Cloud, as one of the possible phantom destination server choices. In Figure 1-30, we have created a new scenario and selected to use the Oracle Compute Cloud. Figure 1-30. Consolidating to the Oracle Compute Cloud The list of configurations available for the Oracle Compute Cloud is seen in Figure 1-31. We have selected the OC3M configuration. This is a “compute shape” available in the Oracle Public Cloud with 4 OCPUs and 60 GB RAM. Refer to the Oracle public cloud link https://cloud.oracle.com/compute?tabname=PricingInfo Figure 1-31. Selecting the Oracle Compute Cloud Configuration. Click on Ok, and go through the remaining steps and submit the new OPC scenario. The analysis of the scenario completes successfully in some time. Select the completed OPC scenario. In the lower half of the screen, move to the Ratio tab. Figure 1-32. Ratio Tab of OPC scenario. In this Ratio, we see that 5 servers have been consolidated on to 2 target servers, with a consolidation ratio of 2.5. Move to the Exclusions tab. Figure 1-33. Exclusions Tab of OPC scenario. In the Exclusions Tab, we can see that 4 of the source servers have been excluded from the consolidation. The reasons are shown. For these servers, either the CPU requirement or the Memory requirement at certain hours were more than could be handled by the 2 target servers. So, they were excluded. We continue this article series in Part VI which is here . (This article is an excerpt from Chapter I of the new book by the author titled “Oracle Database Cloud Cookbook with Oracle Enterprise Manager 13c Cloud Control” published by Oracle Press in August 2016. For more information on the book, see here .)

Blog Post: Oracle Enterprise Manager 13c R2 configuration on Win 2008 R2 server stops at 78%

Oracle Enterprise Manager (OEM) 13cR2 configuration on a Windows 2008 R2 server was stopping at 78% completion while performing the BI Publisher configuration. Apparently, the problem exists pre-13cR2 as well. All OMS components (WebTier, Oracle Management Server, JVMD Engine) stop and start during the BI Publisher configuration operations. Unfortunately, the Windows service for OMS was taking too long to complete the start operation of the WebTier (HTTP), and the installation stopped at 78% and didn't move forward.

Initially, I started looking at WebTier startup issues; in the process, I tried disabling the firewall and also excluded the installation directory from the antivirus on the Windows server, but the result remained the same. I cleaned up the previous installation and started the OEM 13cR2 installation on the server over again, but this time I didn't check the BI Publisher configuration option, as I wanted to exclude the BI Publisher configuration and make sure I could complete the OEM installation without any issues. Despite the fact that I didn't check the option, OEM started configuring BI and stopped exactly at 78%; the issue remained.

The error messages in the sysman and other OMS logs didn't provide any useful hints; in fact, they were misleading and took me in the wrong direction. I came across MOS note 1943820.1, and after applying the solution, the OEM configuration completed successfully. Here is the excerpt from the MOS note:

On some occasions, httpd.exe will fail to start if you are missing or have a damaged Microsoft Visual C++ Redistributable 64-bit package. It may report the error above, or give nothing at all, with 0 bytes of details in the OHS log. Install the Microsoft Visual C++ Redistributable Package (x64) anyhow.
1. You can obtain this file at: http://www.microsoft.com/en-us/download/details.aspx?id=2092
2. Download the Microsoft Visual C++ Redistributable Package (x64).
3. You should have a file called vcredist_x64.exe. Run the installation.
4. Try starting OHS again.

Note: I now understand why Oracle still performs the BI Publisher configuration even though I didn't select the option. When you don't select the option, BI Publisher is configured but disabled, so that in the future you can easily enable it.

References
OHS 12c Fails To Start On Windows Server 2008 X64, with no detailed errors. (Doc ID 1943820.1)
https://community.oracle.com/thread/3889503

Blog Post: Oracle Data Integrator repository is not accessible.

When trying to access http://hostname:9704/biacm, the following warning appeared on the Dashboard:

Oracle Data Integrator repository is not accessible. Possible causes: Bad user credentials and/or type of Authentication. Incorrect Database connection configuration from WLS to ODI DB Repositories.

OBIA 11g: BIACM - Unable To Access Oracle Data Integrator Repository. You Will Not Be Able Generate Or Execute (Doc ID 2115096.1)

Solution:

Login to ODI Studio with a Supervisor user and, in ODI Security, open the user used to login to BIACM (the one having the problem accessing ODI). Select the Retrieve GUID button and select the Supervisor check box, if not already done.

If this 'Retrieve GUID' button is not present, then the ODI Repository Authentication has been changed to "internal", meaning that it is no longer configured the way the OBIA out-of-the-box installation configures it, where the users are managed by the WLS LDAP. This is not to be confused with external authentication with OID. Every OBIA installation has what ODI calls "External Authentication", as the users are not managed internally by ODI. If this is the case, disconnect from the Repository and switch back to External Authentication from the ODI Studio menu ODI > Change Authentication Mode; in other words, you have to change the authentication mode in ODI Studio.

Thanks
Osama

Wiki Page: GROUP BY ISSUE

GROUP BY ISSUE Author JP Vijaykumar Oracle DBA Date Jan 4th 2017 --Recently I worked on an issue using group by clause. --To simulate the issue, created a table and populated it with data. drop table temp_jp; create table temp_jp(run_date date, name varchar2(20),comm number); truncate table temp_jp; set serverout on size 1000000 timing on declare v_dt date:=sysdate +28; begin for i in 1..4 loop for j in 1..3 loop insert into temp_jp values(v_dt,'VINEELA',1000*i +j*50); insert into temp_jp values(v_dt,'VEEKSHA',1000*i +j*2*100); insert into temp_jp values(v_dt,'SAKETH',1000*i+j*150); end loop; v_dt:=v_dt -1; end loop; end; / set linesize 60 pagesize 60 select * from temp_jp order by 1,2; RUN_DATE NAME COMM --------- -------------------- ---------- 01-JAN-17 SAKETH 4300 01-JAN-17 SAKETH 4450 01-JAN-17 SAKETH 4150 01-JAN-17 VEEKSHA 4600 01-JAN-17 VEEKSHA 4200 01-JAN-17 VEEKSHA 4400 01-JAN-17 VINEELA 4050 01-JAN-17 VINEELA 4150 01-JAN-17 VINEELA 4100 02-JAN-17 SAKETH 3450 02-JAN-17 SAKETH 3150 02-JAN-17 SAKETH 3300 02-JAN-17 VEEKSHA 3400 02-JAN-17 VEEKSHA 3200 02-JAN-17 VEEKSHA 3600 02-JAN-17 VINEELA 3100 02-JAN-17 VINEELA 3050 02-JAN-17 VINEELA 3150 03-JAN-17 SAKETH 2300 03-JAN-17 SAKETH 2150 03-JAN-17 SAKETH 2450 03-JAN-17 VEEKSHA 2600 03-JAN-17 VEEKSHA 2200 03-JAN-17 VEEKSHA 2400 03-JAN-17 VINEELA 2150 03-JAN-17 VINEELA 2100 03-JAN-17 VINEELA 2050 04-JAN-17 SAKETH 1150 04-JAN-17 SAKETH 1450 04-JAN-17 SAKETH 1300 04-JAN-17 VEEKSHA 1400 04-JAN-17 VEEKSHA 1200 04-JAN-17 VEEKSHA 1600 04-JAN-17 VINEELA 1050 04-JAN-17 VINEELA 1100 04-JAN-17 VINEELA 1150 36 rows selected. --The user wants sum of commission of each member on the available dates. select run_date, sum(decode(substr(name,1,6),'VINEEL',comm,0)) vine_comm, sum(decode(substr(name,1,5),'VEEKS',comm,0)) veek_comm, sum(decode(substr(name,1,4),'SAKE',comm,0)) sake_comm from temp_jp group by run_date,name order by run_date; RUN_DATE VINE_COMM VEEK_COMM SAKE_COMM --------- ---------- ---------- ---------- 01-JAN-17 0 0 12900 01-JAN-17 0 13200 0 01-JAN-17 12300 0 0 02-JAN-17 0 0 9900 02-JAN-17 0 10200 0 02-JAN-17 9300 0 0 03-JAN-17 0 0 6900 03-JAN-17 0 7200 0 03-JAN-17 6300 0 0 04-JAN-17 0 0 3900 04-JAN-17 0 4200 0 04-JAN-17 3300 0 0 12 rows selected. --Expected the above query returns sum(comm) values for each member for the 4 days. --Instead, it displayed 12 rows. select trunc(run_date) run_date, decode(substr(name,1,6),'VINEEL',sum(comm),0) vine_comm, decode(substr(name,1,5),'VEEKS',sum(comm),0) veek_comm, decode(substr(name,1,4),'SAKE',sum(comm),0) sake_comm from temp_jp group by trunc(run_date),name order by 1; RUN_DATE VINE_COMM VEEK_COMM SAKE_COMM --------- ---------- ---------- ---------- 01-JAN-17 0 0 12900 01-JAN-17 0 13200 0 01-JAN-17 12300 0 0 02-JAN-17 0 0 9900 02-JAN-17 9300 0 0 02-JAN-17 0 10200 0 03-JAN-17 0 0 6900 03-JAN-17 0 7200 0 03-JAN-17 6300 0 0 04-JAN-17 3300 0 0 04-JAN-17 0 0 3900 04-JAN-17 0 4200 0 12 rows selected. --No luck with, with the above modification. select run_date run_date, case when substr(name,1,5) = 'VINEE' then sum(comm) else 0 end vine_comm, case when substr(name,1,5) = 'VEEKS' then sum(comm) else 0 end veek_comm, case when substr(name,1,5) = 'SAKET' then sum(comm) else 0 end sake_comm from temp_jp group by run_date; --This above query failed with the following error: ORA-00979: not a GROUP BY expression, on the name column. 
select run_date run_date, case when substr(name,1,5) = 'VINEE' then sum(comm) else 0 end vine_comm, case when substr(name,1,5) = 'VEEKS' then sum(comm) else 0 end veek_comm, case when substr(name,1,5) = 'SAKET' then sum(comm) else 0 end sake_comm from temp_jp group by run_date,name order by 1; RUN_DATE VINE_COMM VEEK_COMM SAKE_COMM --------- ---------- ---------- ---------- 01-JAN-17 0 0 12900 01-JAN-17 0 13200 0 01-JAN-17 12300 0 0 02-JAN-17 0 0 9900 02-JAN-17 0 10200 0 02-JAN-17 9300 0 0 03-JAN-17 0 0 6900 03-JAN-17 0 7200 0 03-JAN-17 6300 0 0 04-JAN-17 0 0 3900 04-JAN-17 0 4200 0 04-JAN-17 3300 0 0 12 rows selected. --Query did not work. --To generate the output in the required format, I used a work around. with t as ( select run_date, decode(substr(name,1,6),'VINEEL',comm,0) vine_comm, decode(substr(name,1,5),'VEEKS',comm,0) veek_comm, decode(substr(name,1,4),'SAKE',comm,0) sake_comm from temp_jp ) select run_date,sum(vine_comm),sum(veek_comm),sum(sake_comm) from t group by run_date; RUN_DATE SUM(VINE_COMM) SUM(VEEK_COMM) SUM(SAKE_COMM) --------- -------------- -------------- -------------- 02-JAN-17 9300 10200 9900 03-JAN-17 6300 7200 6900 04-JAN-17 3300 4200 3900 01-JAN-17 12300 13200 12900 --The above work around fetched the output in the required format. with t as (select trunc(run_date) run_date, case when substr(name,1,6) = 'VINEEL' then sum(comm) else 0 end vine_comm, case when substr(name,1,5) = 'VEEKS' then sum(comm) else 0 end veek_comm, case when substr(name,1,4) = 'SAKE' then sum(comm) else 0 end sake_comm from temp_jp group by trunc(run_date),name) select run_date,sum(vine_comm),sum(veek_comm),sum(sake_comm) from t group by run_date; RUN_DATE SUM(VINE_COMM) SUM(VEEK_COMM) SUM(SAKE_COMM) --------- -------------- -------------- -------------- 03-JAN-17 6300 7200 6900 02-JAN-17 9300 10200 9900 04-JAN-17 3300 4200 3900 01-JAN-17 12300 13200 12900 --This variation also worked. select run_date,sum(vine_comm),sum(veek_comm),sum(sake_comm) from ( select trunc(run_date) run_date, decode(substr(name,1,6),'VINEEL',sum(comm),0) vine_comm, decode(substr(name,1,5),'VEEKS',sum(comm),0) veek_comm, decode(substr(name,1,4),'SAKE',sum(comm),0) sake_comm from temp_jp group by trunc(run_date),name) group by run_date order by 1; RUN_DATE SUM(VINE_COMM) SUM(VEEK_COMM) SUM(SAKE_COMM) --------- -------------- -------------- -------------- 01-JAN-17 12300 13200 12900 02-JAN-17 9300 10200 9900 03-JAN-17 6300 7200 6900 04-JAN-17 3300 4200 3900 --So also this variation. select * from temp_jp pivot (sum(comm) for (name) in ( 'VINEELA' as vine_comm, 'VEEKSHA' as veek_comm, 'SAKETH' as sake_comm)) order by run_date; RUN_DATE VINE_COMM VEEK_COMM SAKE_COMM --------- ---------- ---------- ---------- 01-JAN-17 12300 13200 12900 02-JAN-17 9300 10200 9900 03-JAN-17 6300 7200 6900 04-JAN-17 3300 4200 3900 --Pivot option worked for me too. --I was curious as to why the query did not work as expected. --Further debugging helped me. select run_date, sum(decode(substr(name,1,6),'VINEEL',comm,0)) vine_comm, sum(decode(substr(name,1,6),'VEEKSH',comm,0)) veek_comm, sum(decode(substr(name,1,6),'SAKETH',comm,0)) sake_comm from temp_jp group by run_date order by 1; RUN_DATE VINE_COMM VEEK_COMM SAKE_COMM --------- ---------- ---------- ---------- 01-JAN-17 12300 13200 12900 02-JAN-17 9300 10200 9900 03-JAN-17 6300 7200 6900 04-JAN-17 3300 4200 3900 --The above construct worked. --I leave the detailed explanation to the readers' imagination. --Happy scripting.
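--Postscript: the same result can be produced without an inline view or pivot.
--The trick is to keep the conditional logic inside the aggregate and group only by run_date;
--grouping by run_date and name creates one group per (date, name) pair, which is why the
--earlier queries returned 12 rows. A minimal sketch using CASE instead of DECODE:
select run_date,
       sum(case when name = 'VINEELA' then comm else 0 end) vine_comm,
       sum(case when name = 'VEEKSHA' then comm else 0 end) veek_comm,
       sum(case when name = 'SAKETH'  then comm else 0 end) sake_comm
  from temp_jp
 group by run_date
 order by 1;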

Blog Post: next OUG Ireland Meet-up on 12th January

Our next OUG Ireland Meet-up will be on Thursday 12th January, 2017. The theme for this meet-up is DevOps and How to Migrate to the Cloud. Come along on the night to hear about these topics and how companies in Ireland are doing these things.

Venue: Bank of Ireland, Grand Canal Dock, Dublin.

The agenda for the meet-up is:

18:00-18:20 : Sign-in, meet and greet, networking, grab some refreshments, etc.

18:20-18:30 : Introductions & Welcome, Agenda, what is OUG Ireland, etc.

18:30-19:00 : DevOps and Oracle PL/SQL development - Alan McClean

Abstract: In recent years the need to deliver changes to production as soon as possible has led to the rise of continuous delivery, continuous integration and continuous deployment. These practices have become standard in application development, particularly for code developed in languages such as Java. However, database development has lagged behind in supporting this paradigm. There are a number of steps that can be taken to address this. This presentation examines how database changes can be delivered in a similar manner to other languages. The presentation will look at unit testing frameworks, code reviews and code quality, as well as tools for managing database deployment.

19:00-19:30 : Simplifying the journey to Oracle Cloud: for decision makers across managers, DBAs and cloud architects who need to progress an Oracle Cloud engagement in the organization - Ken MacMahon, Head of Oracle Cloud Services at Version 1

Abstract: The presentation will cover the 5 steps that Version 1 use to help customers with Oracle Cloud adoption in the organisation. By attending you will hear how to deal with cloud adoption concerns, choose candidates for cloud migration, design the cloud architecture, use automation and agility in your Cloud adoption plans, and finally how to manage your Cloud environment.

This event is open to all; you don't have to be a member of the user group and, best of all, it is a free event. So spread the word with all your Oracle developer, DBA, architect, data warehousing and data visualisation people. We hope you can make it! And don't forget to register for the event.

Blog Post: How to solve user errors with Oracle Flashback 12cR2 and its enhancements

How to solve user errors with Oracle Flashback 12cR2 and its enhancements

By Deiby Gómez

Introduction:

Flashback is a technology introduced in Oracle Database 10g to provide fixes for user errors. For example, one of the most common issues it can solve is when a DELETE operation was executed without a proper WHERE clause. Another case: a user has dropped a table, but after some time that table is required. And the worst-case error: the data of a complete database has been logically corrupted. There are several use cases for Flashback technology, all of them focused on recovering objects and data or simply reverting data from the past.

Flashback technology is not a replacement for other recovery methods such as RMAN hot backups, cold backups or Data Pump export/import; Flashback technology is a complement. While RMAN is the main tool to recover and restore physical data, Flashback technology is used for logical corruptions. For instance, it cannot be used to restore a datafile, while RMAN is the perfect tool for that purpose. Also, be careful when NOLOGGING operations are used; Flashback Database cannot restore changes made through NOLOGGING.

Flashback technology includes several "Flashback Operations", among them Flashback Drop, Flashback Table, Flashback Query, Flashback Version, Flashback Transaction and Flashback Database. They use different data sources to restore/revert user data from the past. The following table shows which data source is used for which Flashback operation:

Flashback Operation     Data Source
---------------------   --------------
Flashback Database      Flashback Logs
Flashback Drop          Recycle bin
Flashback Table         Undo Data
Flashback Query         Undo Data
Flashback Version       Undo Data
Flashback Transaction   Undo Data

In this article, we will focus on Flashback Database, a feature that is able to "flash back" a complete database to a point in the past. Flashback Database has the following use cases:

Taking a database to an earlier SCN: This is really useful when a new version of an application needs to be tested and all the changes made for the testing discarded afterwards. In this case, a new environment (for testing or dev) must be created that contains the data in the production database at a specific time in the past.

Recovery through resetlogs: Flashback Database can revert (logically) a database to a specific date in the past, even if that specific date precedes that of a RESETLOGS operation.

Activating a Physical Standby Database: With Oracle Database 10g, Flashback Database can be used in a Physical Standby. The Physical Standby can be opened in read-write for testing purposes and, when the activity completes, the database can be reverted to the time before the Physical Standby was activated.

Creating a Snapshot Standby: In 11g, Snapshot Standby was introduced. The concept is basically to automate all the steps involved in activating (opening in read-write) a Physical Standby in version 10g, then later making it a Physical Standby again (with recovery). This "automated" conversion of a Physical Standby into a "Snapshot Standby" uses Flashback Database transparently to the DBA.

Configuring Fast Start Failover: To configure Fast Start Failover in Data Guard Broker, Flashback Database is required.

Reinstating a Physical Standby: Data Guard Broker uses Flashback Database to reinstate a former primary after Failover operations.
Read more about reinstating a database in the following articles: Role Operations with Snapshot Standby 12c , Role Operations involving two 12c Standby Databases Upgrade testing: A Physical Standby can be used to test an upgrade; in this case, the Physical Standby is opened in read-write and upgraded. Applications can be tested with the upgraded database and when the activity completes the Physical Standby can be reverted to the former version using Flashback Database. The Transient Logical Standby method for upgrades also involves Flashback Database. How Flashback Database works: When blocks are modified in the Buffer Cache, some of the before-the-change block images are stored in the Flashback Buffer and subsequently stored physically in the Flashback Logs by the RVWR process. All blocks are captured: index blocks, table blocks, undo blocks, segment headers, etc. When a Flashback Database operation is performed, Oracle uses the target time and checks out its Flashback Logs to find which Flashback Logs have the required block images with the SCN right before the target time. Then Oracle restores the appropriate data blocks from Flashback Logs to the Datafiles, applies redo records to reach the exact target time, and when the Database is opened with resetlogs, the changes that were not committed are rolled back using undo data to finally have a consistent database ready to be used. Flashback Database Enhancements: Flashback Database has had several enhancements since it was introduced, with the biggest enhancements in 12.1 and 12.2. In Oracle Database 12.1 Flashback Database supported Container Databases (CDBs) supporting the Multitenant Architecture, however Flashback Database at the PDB Level was not possible. In Oracle Database 12cR2 Flashback Database added support at the PDB level. This was enabled thanks to another good feature introduced in Oracle Database 12.2 called "Local Undo". Local Undo allows you to create an undo tablespace in each Pluggable Database and use it to store locally undo data for that PDB specifically. Local Undo must be enabled at the CDB level. However, if the CDB is not running in Local Undo mode, Flashback Pluggable Database can also be used, but the mechanism used is totally different. In a Shared Undo mode, Flashback Pluggable Database needs an auxiliary instance in which the required tablespaces will be restored and recovered to perform the Flashback Database operation and a switch is then performed between the current tablespaces and the new restored-and-recovered tablespaces in the required Pluggable Database. NOTE: All the examples in this article were created using Oracle Public Cloud: Oracle Database 12c EE Extreme Perf Release 12.2.0.1.0 - 64bit Production PL/SQL Release 12.2.0.1.0 - Production CORE 12.2.0.1.0 Production TNS for Linux: Version 12.2.0.1.0 - Production NLSRTL Version 12.2.0.1.0 – Production Enabling Flashback: Local Undo is used in this example: SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED'; PROPERTY_NAME PROPERTY_VALUE -------------------- --------------- LOCAL_UNDO_ENABLED TRUE To read more about Local Undo and Shared Undo the following articles are recommended: Oracle DB 12.2 Local Undo: PDB undo tablespace creation , How to Enable and Disable Local Undo in Oracle 12.2 . Flashback cannot be enabled at the PDB level in 12.1 and 12.2.0.1. Flashback must be enabled at the CDB level. 
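Before enabling anything, it is worth confirming the current Flashback status at the CDB root. This is a minimal sketch; FLASHBACK_ON is a standard column of V$DATABASE and returns YES, NO, or RESTORE POINT ONLY, and the query assumes you are connected to the root container as a privileged user:

SQL> SELECT flashback_on FROM v$database;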
Before you can enable Flashback in your CDB, you have to ensure that enough space is available to store the Flashback Logs. Oracle recommends using the following generic formula to set up your Fast Recovery Area space:

Target FRA = (Current FRA) + [DB_FLASHBACK_RETENTION_TARGET x 60 x Peak Redo Rate (MB/sec)]

After setting up the FRA space properly, Flashback may be enabled:

SQL> alter database flashback on;
Database altered.

Creating a table and some rows

To test the result of the Flashback Database operation, I will create a table with some rows in it; that data will be used to flash back the database and verify that the database was thereby successfully reverted to a past time.

SQL> alter session set container=nuvolapdb2;
Session altered.

SQL> create table deiby.piece (piece_name varchar2(20));
Table created.

SQL> insert into deiby.piece values ('King');
SQL> insert into deiby.piece values ('Queen');
SQL> insert into deiby.piece values ('Rook');
SQL> insert into deiby.piece values ('Bishop');
SQL> insert into deiby.piece values ('Knight');
SQL> insert into deiby.piece values ('Pawn');
SQL> commit;
Commit complete.

SQL> select * from deiby.piece;

PIECE_NAME
--------------------
King
Queen
Rook
Bishop
Knight
Pawn

6 rows selected.

Restore Point creation

To perform a Flashback Database operation, a restore point, a guaranteed restore point, an SCN or a timestamp is required. In this example a normal restore point is used.

SQL> create restore point before_resetlogs for pluggable database Nuvola2;
Restore point created.

SQL> SELECT name, pdb_restore_point, scn, time FROM V$RESTORE_POINT;

NAME              PDB        SCN TIME
----------------- --- ---------- -------------------------------
BEFORE_RESETLOGS  YES    3864200 09-JAN-17 08.12.56.000000000 PM

Truncating and dropping the table

Now let's assume a user error: a DBA, developer, or end user truncates a table and then drops it. This is a simple example, but you can make this "logical error" as complex as you want, so long as a physical error is not involved and NOLOGGING is not used.

Truncating the table:

SQL> truncate table deiby.piece;
Table truncated.

Drop the table with purge:

SQL> drop table deiby.piece purge;
Table dropped.

Open the database with resetlogs:

To make it more interesting, I will simulate an incomplete recovery (until a specific SCN) in order to perform a resetlogs operation:

RMAN> recover pluggable database nuvolapdb2 until scn 3864712;
Starting recover at 09-JAN-17
current log archived
using channel ORA_DISK_1
starting media recovery
media recovery complete, elapsed time: 00:00:00
Finished recover at 09-JAN-17

Opening the Pluggable Database with resetlogs:

RMAN> alter pluggable database nuvolapdb2 open resetlogs;
Statement processed

We can verify that indeed a new incarnation was created for the PDB:

SQL> select con_id, db_incarnation# db_inc#, pdb_incarnation# pdb_inc#, status, incarnation_scn from v$pdb_incarnation where con_id=4;

    CON_ID    DB_INC#   PDB_INC# STATUS  INCARNATION_SCN
---------- ---------- ---------- ------- ---------------
         4          1          5 CURRENT         3864712
         4          1          0 PARENT                1

Flashback the database

Now it's time for the magic, the new feature introduced in Oracle Database 12.2 called "Flashback Pluggable Database". To use Flashback Database at the Pluggable Database level, the PDB must first be closed.

SQL> alter pluggable database nuvolapdb2 close;
Pluggable database altered.

Then Flashback PDB may be used:

SQL> flashback pluggable database nuvolapdb2 to restore point before_resetlogs;
Flashback complete.
After a Flashback PDB operation, the PDB must be opened with resetlogs:

SQL> alter pluggable database nuvolapdb2 open resetlogs;
Pluggable database altered.

Verifying the data: Once the Flashback PDB has completed successfully, the data that existed before the truncate, drop, and resetlogs (and earlier, if you want) can be queried again:

SQL> alter session set container=nuvolapdb2;
Session altered.

SQL> select * from deiby.piece;

PIECE_NAME
--------------------
King
Queen
Rook
Bishop
Knight
Pawn

6 rows selected.

A quick look at the incarnations shows that a new incarnation was created for the PDB (incarnation #6) and the former incarnation (#5) was made orphan.

SQL> select con_id, db_incarnation# db_inc#, pdb_incarnation# pdb_inc#, status, incarnation_scn
     from v$pdb_incarnation where con_id=4;

    CON_ID    DB_INC#   PDB_INC# STATUS  INCARNATION_SCN
---------- ---------- ---------- ------- ---------------
         4          1          6 CURRENT         3864201
         4          1          0 PARENT                1
         4          1          5 ORPHAN          3864712

Conclusion: Flashback Database has several use cases and is a very useful feature that DBAs should keep "in their pocket", ready to use whenever they need to revert a database to a time in the past. It allows you to test upgrades, activate a physical standby, undo user errors, and test applications, all with far less worry. I'm sure Oracle will keep improving this feature; perhaps a future version will let us enable Flashback at the PDB level, among other functions. For now, the enhancements made in 12.1 and 12.2 are enough to work with non-CDBs, CDBs, and PDBs.

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala who holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. He currently works for Nuvola Consulting Group (www.nuvolacg.com), a Guatemalan company that provides consulting services for Oracle products. He is the winner of the SELECT Journal Editor's Choice Award 2016. Deiby has spoken at Collaborate in Las Vegas, USA, and at Oracle OpenWorld in San Francisco, USA, and São Paulo, Brazil.

Blog Post: The Most Important Tool for SQL Tuning

SQLT is a tool that collects comprehensive information on all aspects of a SQL performance problem. SQL tuning experts know that EXPLAIN PLAN is only the proverbial tip of the iceberg, but that fact is not well recognized by the Oracle database community, so much evangelization is necessary.

I remember trying to solve a production problem a long time ago. I did not have any tools, but I was good at writing queries against the Oracle data dictionary. How does one find the PID of an Oracle dedicated server process? Try something like this:

select spid
from   v$process
where  addr = (select paddr from v$session where sid = '&sid');

My boss was not amused. After the incident, he got me a license for Toad. Writing queries against the data dictionary is macho, but it is not efficient. Tools are in.

Fast forward many years. Now one of my specialties is tuning SQL statements. There is a general lack of appreciation for the vast spectrum of reasons that may lie behind poor performance of a SQL statement. Therefore, the opening gambit is usually "Here is the EXPLAIN PLAN. What's wrong?" First of all, note that EXPLAIN PLAN does not take bind variables into account, so the EXPLAIN PLAN may not be the plan that was used at run time. Further, plans can change at any time for dozens of reasons: bind variables, statistics, cardinality feedback, adaptive query optimization, and so on. What if the query plan is just fine but the environment is having difficulties? And don't expect me to tune a SQL statement without all the information about the tables and indexes involved in the query, all the information that feeds into plan generation, the execution history of the SQL statement, and detailed information about the environment in which the statement is executed. If the query refers to views or PL/SQL functions, I need information on them too. Why not give me a test case, perhaps even with data?

Carlos Sierra was a genius who worked in the SQL performance team of Oracle Support for many, many years. He realized that the process of collecting the necessary information was unsystematic and long and drawn out. He therefore created a tool called SQLT (the T stands for Trace) to collect all the required information and present it in the form of an HTML report. View definitions? Check. Optimizer settings? Check. Environment information? Check. Execution history? Check. There's even a 10053 trace and a test case that I can run in my own environment. Here's a screen shot of the HTML report that I mentioned. Isn't that amazing? Outlines and patches and profiles and baselines and directives, oh my! Security policies? You've gotta be kidding. Since the SQLT tool was produced by Oracle Support, there's a Support note on it. There's even a whole book on it, written by Stelios Charlambides, one of Carlos's partners in crime.

The Case of the Foolish Fix: Performance of many queries had worsened when the database was upgraded from version 11.2.0.3 to 12.1.0.2. In this case, the problem was definitely the query plan. In version 11.2.0.3, the cost-based optimizer picked the right driving table, but not in 12.1.0.2. I used a SQLT feature called XPLORE to solve the problem. First, I set up a test case in my own environment. Then I unleashed XPLORE on it. It systematically changed the values of optimizer settings and "fix controls" to see how the optimizer would respond in each case. After an hour and a half of processing (or so it seemed), it hit pay dirt.
A certain fix, first introduced in version 11.2.0.4, had corrected the "single table cardinalities" in the presence of transitive predicates. Unfortunately, the CBO was producing worse plans with the corrected cardinalities. While this is certainly cause for consternation, the problem had been identified. Restoring performance was now simply a matter of disabling the fix.

The Case of the Puzzling Policy: A certain query executed lightning fast in the development environment and dog-slow in the production environment. The problem had been puzzling everybody for months. When I reviewed the SQLT report, I saw the curious filter "NULL IS NOT NULL" in the development environment. This told me that a security policy was in effect and that it was disabling part of the query plan in the development environment. 'Nuff said.

The Case of the Curious Constraints: The DBA insisted that there was no difference between two environments, and yet the query performed better, much better, in one of them. I used the COMPARE feature of SQLT but could not find any differences between the two environments that could explain the oddity. It would seem that SQLT had failed me. But I was able to write simple queries on the data that SQLT had collected from the two environments, and I found that three table constraints in the faster environment were disabled in the slower environment. Why would disabling constraints make queries run slower? Join elimination, you see. The DBA said that he was aware of the disabled constraints but had not considered them important enough to mention. Well, live and learn.

The Case of the Immense Indexes: The customer was loading hundreds of billions of rows of new data into a table using INSERT statements, a process that was expected to last weeks. However, the insertion rate had been slowing down dramatically as the days passed, and what had been expected to take weeks would likely take months. It was clear from the wait-event information in the SQLT report that Oracle was spending a big chunk of time updating two indexes, but what was the reason for the slowdown? There was enough execution-history information in the AWR report for me to graph the slowdown over time and prove that it was directly proportional to the size of the indexes. It was too late to redesign the loading strategy, but at least I was able to explain the problem to the customer.

Always a But: Unfortunately, SQLT is not perfect. The biggest problem is that it is written in PL/SQL which must be loaded into the target database, and it creates lots of tables in a special schema. This is anathema to production environments. My other pet peeve about SQLT is that it does not support STATSPACK. The creators of SQLT believed that most customers are licensed for AWR, but I do not believe that's the case. Finally, SQLT is only available in My Oracle Support, so you're out of luck if you do not have access to it.

And a Word of Caution: It is easy to get very excited about SQLT, but it's not the be-all and end-all. There will always be cases where SQLT is not enough. For example, you might find the answer you are looking for in an AWR report instead. Remember to keep an open mind.

Taking the Plunge: Read the Support note first, then download SQLT and install it in a test environment and play with it. Don't forget the COMPARE and XPLORE features. If you've got a few spare dollars left over from Christmas shopping, I recommend the book by Stelios Charlambides. Make learning about SQLT your New Year's resolution.
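For readers who have never run it, here is a minimal sketch of what a typical SQLT XTRACT run looks like. The script name and directory layout are taken from the standard SQLT distribution, but the SQL_ID shown is purely a hypothetical example, and the exact prompts vary by SQLT version, so treat this as an illustration rather than a definitive recipe:

-- connect as the application user that owns the problem SQL,
-- from the directory where the SQLT zip was extracted
SQL> START sqlt/run/sqltxtract.sql 0w2qpuc6u2zsp
-- you will be prompted for the password of the SQLTXPLAIN schema
-- chosen at install time; the run produces a zip file containing
-- the HTML report, the 10053 trace, and the test case scripts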
Happy learning. Iggy Fernandez is the editor of the NoCOUG Journal, the print journal of the oldest-running Oracle user group in the world, which celebrated thirty years last year. He was an Oracle DBA for many years but now specializes in SQL tuning. He used to speak at Oracle conferences all over the country but is now completely bored with that gig, except for RMOUG Training Days, which treats its speakers so well, even picking them up at the airport and taking them back.

Blog Post: A new icon category in Apex 5.1 - Font Apex

Apex 5.1 now comes with a large set of icons, something we really needed for our applications! These icons are contained in a font called Font Apex, which offers a great variety of technically oriented icons. The font contains more than 1000 icons. Some of the icons are based on Font Awesome v4.7 and the others are designed specifically for Apex.

To enable Font Apex, go to the theme properties: Shared Components > Themes > your specific theme (in my case Universal Theme 42) > Icons.

The icons based on Font Awesome have thicker outlines, as if they were in bold, as you can see in the menu icons of the demo app: the font using Font Awesome is on the left and the one using Font Apex is on the right. The HTML used for Font Apex is exactly the same as for Font Awesome. Font Apex also provides the .fa and .fa-name classes. Because Font Apex is a superset of Font Awesome v4.7, we can switch between the two fonts as long as we do not use the Font Apex-only icons.

There are three categories of icons:
- Font Awesome: icons based on Font Awesome 4.7 with the same names
- Apex: mainly technically oriented icons
- Emojis: emoji icons (the little faces we use so much)

A sample of the Font Apex icons and of the emoji icons follows. Every new version of APEX gives us more resources! You can find a list of the available icons on this page.

Blog Post: Kill a Data Pump Job Running Inside the Database

Even if you interrupt a Data Pump job at the shell, the job keeps running inside the database, so you have to do the following. Connect as / as sysdba and run the query below to identify the job name and owner:

select * from dba_datapump_jobs;

After checking the status of each job, run the following block, substituting the job name and owner you found:

DECLARE
  h1 NUMBER;
BEGIN
  -- attach to the running Data Pump job
  h1 := DBMS_DATAPUMP.ATTACH('JOB_NAME','OWNER_NAME');
  -- stop it immediately (immediate => 1) and drop the master table (keep_master => 0)
  DBMS_DATAPUMP.STOP_JOB(h1, 1, 0);
END;
/

Thanks
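A quick follow-up check against the same view confirms whether the job has actually stopped:

SQL> SELECT owner_name, job_name, state FROM dba_datapump_jobs;

A stopped job should either disappear from the view or briefly show a state of NOT RUNNING while its master table is cleaned up.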

Blog Post: Toad Tips and Techniques

Toad Tips and Techniques

Hi, 2017 is upon us and Toad ventures into another year! Toad can do so many things, but I think people sometimes forget the little things it has to offer. Let's start with one of my favorite topics for people new to the Oracle RDBMS, such as business analysts. They know their data but they don't know how to access it, and they sometimes don't know SQL at all. I call this SQL without typing.

Using the Schema Browser, or simply typing in a table name and pressing F4 (either with the name highlighted or the mouse on it), you get the describe panel. This panel shows you quite a bit of information, including the data; sometimes it's nice to see some of the data. Clicking the 'View/Edit Schema Browser Query' button (shown in the red circle below) shows the SELECT statement that created this data. Again, Toad allows the user to create SQL with very few keystrokes. You can change this SQL and change the content of the Data display, or you can copy/paste this SQL into the SQL Editor for your own uses (perhaps a report or script). These techniques allow you to create valid SQL statements without typing! The Data tab is another data grid; data can be added, changed, and deleted from here as well.

Toad contains bits of code called Code Snippets. These snippets contain most of the SQL functions, date formats, hints, and other bits of SQL and PL/SQL code that are of interest to the Oracle developer, and even the data analyst! Simply drag and drop these code fragments into your SQL statement. Code Snippets is a dockable palette that allows easy access to various bits of code, SQL functions, date formats, hints, and the like. It is found on the menu item View → Code Snippets. Use the push-pin to auto-hide the palette, and use the drop-down menu on the palette to select other code snippets. Toad options allow additional items to be added or existing items to be changed; the snippet text itself can be modified, and categories and the options within each category can be added, changed, or deleted.

Toad has an ER Diagrammer. I think data analysts should use this feature all the time when learning about their data and the important data relationships. Use the ER Diagrammer button; this tool makes a nice ERD picture of your objects and their relationships. Often it is these relationships that are difficult to follow. The ER Diagrammer can be accessed from the menu item Database → Report → ER Diagram. This tool makes use of the Object Palette, which is accessible from View → Object Palette. Drag the table objects from the Object Palette and drop them into the main window; there is also an Object Palette button on the ER Diagrammer toolbar. The ER Diagrammer is shown below.

To work these examples, start the Object Palette and change the user to SCOTT. Drag and drop the EMP and DEPT tables from the Object Palette to the canvas area. Notice the relationships: the DEPT table has the primary key (gold key) that is related to a foreign key in the EMP table. This tells the user that the DEPT table should be the lead table in the Master/Detail Browser.

The SQL Modeler/Query Builder is a little different. It is started using the SQL Modeler button on the main toolbar or from the ER Diagrammer. The SQL Modeler can also be found under Database → Report → Query Builder.
It works in much the same way, using the Object Palette, although the ER Diagrammer can start the Query Builder and transfer all the selected objects to it. The main difference between the two modelers is that the Query Builder will create a SQL statement based on the relationships; this is the Query Builder capability. Click on the button to paint in the relationships. Notice that as objects are put onto the canvas and columns are selected by clicking the box next to them, the grid fills in on the left and the SQL builds in the bottom pane! Use the W and H buttons (see the circle above) to add WHERE clause items and HAVING clause items to the SQL. The query can be executed from here, saved, and even moved to a tab in the SQL Editor!

A nice drag-and-drop feature for column names is called Toad Insights. How many times have you tried to build a SQL statement where the column names are long and you are not very good at typing? I would have loved to have had this feature in yesteryear!

***Note*** Did you know that Oracle 12.2 allows table and column names of up to 128 bytes? Yes, PL/SQL object names too. Features like this and the ability to create SQL without typing will soon become even more important.

Toad Insights displays object column names after you enter a '.'. Enter the table name followed by a '.', or a table alias followed by a '.'. Double-click on a column name to automatically paste it into the Editor at the position of the cursor. Select multiple column names using Shift+click and Ctrl+click, then hit Return, and all the selected columns are put into the Editor with commas between the fields.

There are more ways of producing SQL without typing, such as Auto Replace and Code Templates. Watch for future blog postings on these features and how they can help you personalize Toad for your programming and data research needs.

Dan Hotka
Author/Instructor/Oracle Expert

Blog Post: Unable to perform initial elastic configuration on Exadata X6

I had the pleasure of deploying another Exadata in the first week of 2017, and I got my first issue of the year. As we know, starting with Exadata X5, Oracle introduced the concept of Elastic Configuration. Apart from allowing you to mix and match the number of compute nodes and storage cells, Oracle also changed how the IP addresses are assigned on the admin (eth0) interface. Prior to X5, Exadata had default IP addresses set at the factory in the range 192.168.1.1 to 192.168.1.203, but since this could collide with the customer's network, Oracle changed the way those IPs are assigned. In short, the eth0 IP address on the compute nodes and storage cells is assigned within the range 172.16.2.1 to 172.16.7.254. The first time a node boots, it assigns its hostname and IP address based on the IB ports it is connected to.

Now to the real problem. I was doing the usual stuff: changing ILOMs, setting up the Cisco and IB switches, and I was about to perform the initial elastic configuration (applyElasticConfig.sh), so I had to upload all the files I needed for the deployment to the first compute node. I changed my laptop address to an IP within the same range and was surprised to get "connection timed out" when I tried to ssh to the first compute node (172.16.2.44). I thought this was an unfortunate coincidence, since I had rebooted the IB switches almost at the same time I powered on the compute nodes, but I was wrong. For some reason, ALL servers had failed to get their eth0 IP addresses assigned and hence were not accessible. I was very puzzled about what was causing this issue and spent the afternoon troubleshooting it. I thought Oracle had changed the way the IP addresses are assigned, but the scripts hadn't been changed for a long time.

It didn't take long before I found out what was causing it. Three lines in the /sbin/ifup script were the reason the eth0 interface wasn't coming up with its 172.16.x.x IP address:

if ip link show ${DEVICE} | grep -q "UP"; then
    exit 0
fi

These lines check whether the interface is already UP before proceeding to bring it up. The catch is that the eth0 interface is already brought UP by the elastic configuration script to check whether there is a link on the interface. Then, at the end of that script, when ifup is invoked to bring the interface up, it stops execution because the interface is already UP.

The solution is really simple: comment out the three lines (lines 73-75) in /sbin/ifup and reboot each node.

This wasn't the first X6 I had deployed, and I had never had this problem before, so I did some further investigation. The /sbin/ifup script is part of the initscripts package. It turns out that the check for the interface being UP was introduced in one minor version of the package and then removed in the latest package. Unfortunately, the last entry in the changelog is from Apr 12 2016, so that's not very helpful, but here's a summary:

initscripts-9.03.53-1.0.1.el6.x86_64.rpm      11-May-2016 19:49   947.9 K   <-- not affected
initscripts-9.03.53-1.0.1.el6_8.1.x86_64.rpm  12-Jul-2016 16:42   948.0 K   <-- affected
initscripts-9.03.53-1.0.2.el6_8.1.x86_64.rpm  13-Jul-2016 08:26   948.1 K   <-- affected
initscripts-9.03.53-1.0.3.el6_8.2.x86_64.rpm  23-Nov-2016 05:06   948.3 K   <-- latest version, not affected

I have had this problem on three Exadata machines so far. So, if you are deploying a new Exadata in the next few days or weeks, it's very likely that you will be affected, unless your Exadata was factory deployed after 23rd Nov 2016.
That's the day the latest initscripts package was released.

Blog Post: Database Vault: Realm violation for GRANT on RESOURCE

A friend wrote: “I enabled Database Vault (explained here) in my database. Now the management wants me to create a new user with connect as well as resource privileges.

“However, when I connect as a user with the DBV_ACCTMGR role and try to create the new user, I get the following error:

connect dave@proddb

grant connect, resource to SALESUSER2
*
ERROR at line 1:
ORA-47410: Realm violation for GRANT on RESOURCE

“I checked dba_role_privs and dba_sys_privs for the roles and privileges granted to dave, and he has been granted the connect role as well as the DBV_ACCTMGR role.

“Do any more privileges or roles need to be granted to the user Dave?”

The answer: No, this is perfectly normal. A user with the DBV_ACCTMGR role can only grant the CONNECT role, since a realm (protection zone) is in place. If you want to grant the RESOURCE role, you need to grant it as SYS.
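To make the fix concrete, here is a minimal sketch of what the answer above implies, assuming SYS (or another account authorized for the realm protecting RESOURCE) performs the grant; the user name and connect string are the hypothetical ones from the question:

-- the account manager creates the user and grants CONNECT as usual
connect dave@proddb
GRANT connect TO salesuser2;

-- the protected RESOURCE role is granted by SYS instead
connect sys@proddb as sysdba
GRANT resource TO salesuser2;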

Blog Post: Explicit Semantic Analysis setup using SQL and PL/SQL

In my previous blog post I introduced the new Explicit Semantic Analysis (ESA) algorithm and gave an example of how you can build an ESA model and use it. Check out this link for that blog post. In this blog post I will show you how you can manually create an ESA model. The reason I'm showing you this way is that the workflow (in ODMr and its scheduler) may not be for everyone. You may want to automate the creation or recreation of the ESA model from time to time based on certain business requirements.

In my previous blog post I showed how you can set up a training data set. This comes with ODMr 4.2, but you may need to expand this data set or use an alternative data set that is more in keeping with your domain.

Set up the ODM settings table: As with all ODM algorithms, we need to create a settings table. This table stores the various parameters and their values that will be used by the algorithm.

-- Create the settings table
CREATE TABLE ESA_settings (
   setting_name    VARCHAR2(30),
   setting_value   VARCHAR2(30));

-- Populate the settings table
-- Specify ESA. By default, Naive Bayes is used for classification.
-- Specify ADP. By default, ADP is not used; it needs to be turned on.
BEGIN
   INSERT INTO ESA_settings (setting_name, setting_value)
   VALUES (dbms_data_mining.algo_name, dbms_data_mining.algo_explicit_semantic_analys);

   INSERT INTO ESA_settings (setting_name, setting_value)
   VALUES (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_on);

   INSERT INTO ESA_settings (setting_name, setting_value)
   VALUES (dbms_data_mining.odms_sampling, dbms_data_mining.odms_sampling_disable);

   commit;
END;
/

These are the minimum settings needed to run the ESA algorithm. There are several other ESA-specific algorithm settings as well.

Set up the Oracle Text policy: You also need to set up an Oracle Text policy and a lexer for the stopwords.

DECLARE
   v_policy_name    VARCHAR2(30);
   v_lexer_name     VARCHAR2(30);
   v_stoplist_name  VARCHAR2(50);
BEGIN
   v_policy_name   := 'ESA_TEXT_POLICY';
   v_lexer_name    := 'ESA_LEXER';

   ctx_ddl.create_preference(v_lexer_name, 'BASIC_LEXER');

   v_stoplist_name := 'CTXSYS.DEFAULT_STOPLIST';  -- default stop list

   ctx_ddl.create_policy(policy_name => v_policy_name,
                         lexer       => v_lexer_name,
                         stoplist    => v_stoplist_name);
END;
/

Create the ESA model: Once we have the settings table created with the parameter values set for the algorithm and the Oracle Text policy in place, we can create the model. To ensure that the Oracle Text policy is applied to the text we want to analyse, we need to create a transformation list and add the Text policy to it. We can then pass the transformation list as a parameter to the CREATE_MODEL procedure.

DECLARE
   v_xlst         dbms_data_mining_transform.TRANSFORM_LIST;
   v_policy_name  VARCHAR2(130) := 'ESA_TEXT_POLICY';
   v_model_name   VARCHAR2(50)  := 'ESA_MODEL_DEMO_2';
BEGIN
   v_xlst := dbms_data_mining_transform.TRANSFORM_LIST();

   DBMS_DATA_MINING_TRANSFORM.SET_TRANSFORM(v_xlst,
      '"TEXT"', NULL, '"TEXT"', '"TEXT"',
      'TEXT(POLICY_NAME:'||v_policy_name||')(MAX_FEATURES:3000)(MIN_DOCUMENTS:1)(TOKEN_TYPE:NORMAL)');

   DBMS_DATA_MINING.DROP_MODEL(v_model_name, TRUE);

   DBMS_DATA_MINING.CREATE_MODEL(
      model_name           => v_model_name,
      mining_function      => DBMS_DATA_MINING.FEATURE_EXTRACTION,
      data_table_name      => 'WIKISAMPLE',
      case_id_column_name  => 'TITLE',
      target_column_name   => NULL,
      settings_table_name  => 'ESA_SETTINGS',
      xform_list           => v_xlst);
END;
/

NOTE: Yes, we could have merged all of the above code into one PL/SQL block.

Use the ESA model: We can now use the FEATURE_COMPARE function with the model we just created, just like I did in my previous blog post.
SELECT FEATURE_COMPARE(ESA_MODEL_DEMO_2
          USING 'Oracle Database is the best available for managing your data' text
          AND USING 'The SQL language is the one language that all databases have in common' text) similarity
FROM dual;

Go give the ESA algorithm a go and see where you could apply it within your applications.
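If you want to confirm that the model was actually built before calling FEATURE_COMPARE, a quick look at the standard mining-model dictionary view is enough; the model name below is the one created in the example above:

SQL> SELECT model_name, mining_function, algorithm
     FROM   user_mining_models
     WHERE  model_name = 'ESA_MODEL_DEMO_2';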

Blog Post: Help Me Help You!

Hello everyone, I hope the holidays went well for you; best wishes from my family to yours! It is a new year and we all have New Year's resolutions! After years of creating blogs for you all, I have decided to turn it up a notch! As many of you already know, I am […]