
Blog Post: New issue of OTech Magazine is out

If you are not familiar with OTech Magazine yet, now is a great time to acquaint yourself with this great resource for everybody in the Oracle community. The latest issue is 111 pages of top-notch content by a veritable who's who in the Oracle world:

Sten Vesterli – The Spiritual Programmer
Scott Weseley – APEX 5.0 New Features
Patrick Barel – Dear Patrick
Emma Groomes & Crystal Walton – KScope 2014
Anar Godjaev – How to protect your sensitive data using Oracle Data Vault
Debra Lilley – Women in IT Initiative
Lonneke Dikmans – What’s new in Oracle Case Management 12c?
Mahir M Quluzade – Oracle Data Guard 12c: Cross platform transport non-CDB to PDB using RMAN Backup Sets
Lucas Jellema – The Next Generation: Oracle SOA Suite 12c
Ric Van *** – Adaptive Query Optimization: Will the real plan please stand up!
Simon Haslam & Ronald van Luttikhuizen – Provisioning Using Chef and Puppet, Part II
Kim Berg Hansen – External Data From Within
Bertrand Drouvot – Graphing ASM Metrics
Osama Mustafa Hussein – Upgrade OBIEE and Enable Mobile Designer

Blog Post: YesSQL! a celebration of SQL and PL/SQL: OOW14 event 29 September

First the key details:

What: YesSQL! A celebration of SQL and PL/SQL
When: 6:30 PM on Monday, 29 September
Where: Location to be announced soon
Why: Because SQL and PL/SQL are amazing technologies, and the people who use them to deliver applications are amazing technologists

For many, many years - since 1979, in fact - Oracle Database software and other relational solutions have been at the core of just about every significant human development, whether it be based in private enterprise, government, or the world of NGOs. SQL, relational technology, Oracle Database: they have been incredibly, outrageously successful. And SQL in particular is a critical layer within the technology stack that runs the systems that run the world.

SQL is a powerful yet relatively accessible interface between algorithmic processing and data. Rather than write a program to extract, manipulate and save your data, you describe the set of data that you want (or want to change) and leave it to the underlying database engine to figure out how to get the job done. It's an incredibly liberating approach, and I have no doubt that SQL and Oracle Database are two of the defining technologies that made possible the Information Era and the Internet/Mobile Era.

Sure, you could argue that if Oracle hadn't come along, some other company would have taken its place. But Oracle did come along, and from 1979 through 2014, it has continually improved the performance and capabilities of Oracle SQL, providing innovation after innovation. Let me repeat that, because I think that so many of us have lost perspective on the impact Oracle technology – and we Oracle Database developers* – have had on the world: SQL and Oracle Database are two of the most important software technologies of the last forty years. And all of you, all of us, played a key role in applying that technology to implement user requirements: literally, to build the applications upon which modern human civilization functions. Us. We did that, and we do it every day. How cool is that?

Very cool... and deserving of special note. So we are going to note that and much more at the first-ever YesSQL! A celebration of SQL and PL/SQL. Co-hosted by Tom Kyte and Steven Feuerstein, YesSQL! celebrates SQL, PL/SQL, and the people who both make the technology and use it. No feature PowerPoints. No demos. Instead, special guests Andy Mendelsohn, Maria Colgan, Andrew Holdsworth, Graham Wood and others will share our stories with you, and invite you to share yours with us, because.... YesSQL! is an open mic night. Tell us how SQL and PL/SQL - and the Oracle experts who circle the globe sharing their expertise - have affected your life!

Bottom line: If developing applications against Oracle Database is a big part of your life, join us for a fun and uplifting evening.

Share Your Stories! I hope you can join us at the event (you'll be able to sign up for YesSQL! just like for a regular OOW session). But if you can't (or even if you can), you can share your story with us, right here (and on the PL/SQL Challenge, in our latest Roundtable discussion). How has SQL and/or PL/SQL and/or Oracle Database changed your life, personally, professionally or otherwise? We will select some of your stories to read at the YesSQL! event, and if you are attending, you can tell the story yourself.
* I used to talk about PL/SQL developers and APEX developers and ADF developers and JavaScript developers and so on, but I have recently come to realize that very, very few Oracle technologists can be “pigeon-holed” that way. Sure, I know and use only PL/SQL (and SQL), but just about everyone else on the planet relies on a whole smorgasbord of tools to build applications against Oracle Database. So I’m going to start referring to all of us simply as Oracle Database Developers.

Blog Post: Your Top Tips for PL/SQL Developers?

I will be giving a presentation to Oracle Corporation PL/SQL developers (that is, Oracle employees who build internal apps using PL/SQL) in September, many of them new to the language and starting new development. The focus of the presentation is on best practices: what are the top recommendations I want to give developers to ensure that they write high-quality, maintainable, performant code? I plan to turn this into a checklist developers can use to stay focused on the Big Picture as they write their code. So if you could pick just THREE recommendations to developers to help them write great code, what would they be?

Blog Post: Using sequences as column default values

Up to version 11g it was not possible to assign a sequence as the default value of a table column. If I try to do so in an Oracle 11g database, this is what happens:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create sequence prueba_secuencia_seq start with 1;

Sequence created.

SQL> create table prueba_secuencia (
  2  c1 number(10) default prueba_secuencia.nextval,
  3  c2 varchar2(10)
  4  )
  5  /
c1 number(10) default prueba_secuencia.nextval,
                      *
ERROR at line 2:
ORA-00984: column not allowed here

SQL>

Starting with version 12c it is possible to designate a sequence as the default value of a table column. Of course, the sequence must already exist at the moment I create the table. So I run the same code that failed in 11g and check what happens in 12c.

PDB1@ORCL> create sequence prueba_secuencia_seq start with 1;

Sequence created.

PDB1@ORCL> create table prueba_secuencia (
  2  c1 number(10) default prueba_secuencia_seq.nextval,
  3  c2 varchar2(10)
  4  )
  5  /

Table created.

PDB1@ORCL> insert into prueba_secuencia (c2) values ('hola');

1 row created.

PDB1@ORCL> insert into prueba_secuencia (c2) values ('chau');

1 row created.

PDB1@ORCL> select * from prueba_secuencia;

        C1 C2
---------- ----------
         1 hola
         2 chau

Excellent! Welcome to this new Oracle 12c feature! But... careful! Keep in mind that in the table definition I have only said that I will use the sequence as the default value; nothing prevents us from overwriting that content:

PDB1@ORCL> insert into prueba_secuencia values (null, 'ojito');

1 row created.

PDB1@ORCL> insert into prueba_secuencia values (4, 'ojazo');

1 row created.

PDB1@ORCL> select * from prueba_secuencia;

        C1 C2
---------- ----------
         1 hola
         2 chau
           ojito
         4 ojazo

PDB1@ORCL>

While the first insert ('ojito') can be handled with the ON NULL clause, the second insert ('ojazo') is a "latent danger". In short, if we are thinking of using this technique to generate the primary keys of our tables, let us keep these potential scenarios in mind. I hope you enjoyed this post.
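As a follow-up sketch (not part of the original post), the ON NULL variant mentioned above looks like this in 12c, so that an explicitly inserted NULL still draws the next sequence value; the table name prueba_secuencia2 is illustrative:

create table prueba_secuencia2 (
  c1 number(10) default on null prueba_secuencia_seq.nextval,
  c2 varchar2(10)
);

-- an explicit NULL no longer bypasses the default
insert into prueba_secuencia2 values (null, 'ojito');

An explicit value such as 4 ('ojazo') would still override the default; if that must be prevented as well, a 12c identity column (GENERATED ALWAYS AS IDENTITY) is the stricter option.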

Comment on Your Top Tips for PL/SQL Developers?

1) speed almost as fast as native C, 2) an easy learning curve, 3) the possibility to write object code

Wiki Page: Getting Started with Oracle XQuery for Hadoop

By Deepak Vohra

XML is the standard medium of data exchange on the web and is used by web services. XQuery (an XML Query Language) is a query language based on the XML structure for defining queries on data sources, structured and semi-structured. The data queried by XQuery could be stored as XML or stored in another format, such as a relational database, CSV, JSON, or Avro, and viewed as XML by the middleware. XQuery is a W3C standard (http://www.w3.org/TR/2014/REC-xquery-30-20140408/).

Oracle XQuery for Hadoop is a transformation engine with which transformations defined in XQuery are translated into a MapReduce job and run on a Hadoop cluster. The XQuery transformations are used to extract a subset of data from a dataset. A developer is not required to specify the MapReduce job details; only the XQuery transformations need to be specified. Oracle XQuery for Hadoop may be used on data stored in HDFS or in Oracle NoSQL Database. The HDFS data may be in one of several supported formats: text, XML, JSON, Avro, and Sequence files. In this article we shall introduce Oracle XQuery for Hadoop, including the setup, and run the Hello World example. This article has the following sections:

Setting the Environment
Starting HDFS
Adding Data for XQuery
Creating an XQuery Transformation Script
Running an XQuery Transformation Script

Setting the Environment

We have installed Oracle XQuery for Hadoop on Oracle Linux 6, which is a VM on Oracle VirtualBox 4.3.10. Download the following software:

Oracle Database 11g from http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html
Oracle Loader for Hadoop (OLH) from http://www.oracle.com/technetwork/database/database-technologies/bdc/big-data-connectors/downloads/index.html
Oracle XQuery for Hadoop (OXH) from http://www.oracle.com/technetwork/database/database-technologies/bdc/big-data-connectors/downloads/index.html

Oracle Loader for Hadoop is not used in this article but is required if the XQuery result is loaded into Oracle Database. Create a virtual machine for Oracle Linux 6 on Oracle VM VirtualBox.

Create a directory in Oracle Linux 6 for Oracle XQuery for Hadoop.

mkdir /oxh
chmod 777 /oxh

Copy the oraloader-3.0.0-h2.x86_64.zip and oxh-3.0.0-cdh4.6.0.zip files to the /oxh directory. Run the following commands to extract the files.

unzip oraloader-3.0.0-h2.x86_64.zip
unzip oxh-3.0.0-cdh4.6.0.zip

Download CDH 4.6 Hadoop 2.0.0 and extract the tar.gz file.

wget http://archive.cloudera.com/cdh4/cdh/4/hadoop-2.0.0-cdh4.6.0.tar.gz
tar -xvf hadoop-2.0.0-cdh4.6.0.tar.gz

Next, set the environment variables for OXH, CDH 4.6, OLH and Oracle Database 11g in the bash shell.

vi ~/.bashrc

Set the following environment variables.

export HADOOP_PREFIX=/oxh/hadoop-2.0.0-cdh4.6.0
export JAVA_HOME=/usr/lib/jvm/jre-1.7.0-openjdk
export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=ORCL
export OLH_HOME=/oxh/oraloader-3.0.0-h2
export OXH_HOME=/oxh/oxh-3.0.0-cdh4.6.0
export HADOOP_MAPRED_HOME=/oxh/hadoop-2.0.0-cdh4.6.0/bin
export HADOOP_HOME=/oxh/hadoop-2.0.0-cdh4.6.0/share/hadoop/mapreduce2
export HADOOP_CLASSPATH=$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$OLH_HOME/jlib/*:$OXH_HOME/lib/*
export PATH=$PATH:$HADOOP_HOME:$HADOOP_HOME/bin:$HADOOP_MAPRED_HOME:$ORACLE_HOME/bin

Create a symlink /oxh/hadoop-2.0.0-cdh4.6.0/share/hadoop/mapreduce2/bin as a pointer to the /oxh/hadoop-2.0.0-cdh4.6.0/bin directory.
ln -s /oxh/hadoop-2.0.0-cdh4.6.0/bin /oxh/hadoop-2.0.0-cdh4.6.0/share/hadoop/mapreduce2/bin

Next, configure the Hadoop configuration files core-site.xml and hdfs-site.xml. Specify the following configuration properties:

fs.defaultFS (core-site.xml) – the NameNode URI
hadoop.tmp.dir (core-site.xml) – the temp directory
dfs.permissions.superusergroup (hdfs-site.xml) – the group of superusers
dfs.namenode.name.dir (hdfs-site.xml) – the NameNode storage directory
dfs.replication (hdfs-site.xml) – the default block replication
dfs.permissions (hdfs-site.xml) – permission checking in HDFS

To specify the NameNode URI, the IP address of the VM on VirtualBox is required. Find the IP address of a VM installed on VirtualBox with the following command.

"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" guestproperty get OracleLinux6 "/VirtualBox/GuestInfo/Net/0/V4/IP"

The output indicates the IP address of the VM in VirtualBox as 10.0.2.15.

To modify the core-site.xml configuration file run the following command.

vi /oxh/hadoop-2.0.0-cdh4.6.0/etc/hadoop/core-site.xml

Specify the fs.defaultFS and hadoop.tmp.dir configuration properties in core-site.xml.

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.0.2.15:8020/catalog</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:///var/lib/hadoop-0.20/cache</value>
  </property>
</configuration>

Create the directory specified in the hadoop.tmp.dir property and set its permissions.

mkdir -p /var/lib/hadoop-0.20/cache
chmod -R 700 /var/lib/hadoop-0.20/cache

To modify the hdfs-site.xml file run the following command.

vi /oxh/hadoop-2.0.0-cdh4.6.0/etc/hadoop/hdfs-site.xml

Set the dfs.permissions.superusergroup, dfs.namenode.name.dir, dfs.replication and dfs.permissions configuration properties.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.permissions.superusergroup</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/1/dfs/nn</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

Create the directory specified in the dfs.namenode.name.dir property and set its permissions.

mkdir -p /data/1/dfs/nn
chmod -R 700 /data/1/dfs/nn

Starting HDFS

Next, start the HDFS NameNode and DataNode. Format the NameNode with the following command.

hadoop namenode -format

Start the NameNode with the following command.

hadoop namenode

Start the DataNode with the following command.

hadoop datanode

Adding Data for XQuery

Having set the environment for running an Oracle XQuery for Hadoop application, next we shall add data to HDFS. OXH provides adapters for various data sources and sinks. The following adapters are provided for data sources in HDFS:

Avro File Adapter – Avro container files
JSON File Adapter – JSON files
Sequence File Adapter – Sequence files
Text File Adapter – Text files
XML File Adapter – XML files

In addition to HDFS data, OXH may also be used with Oracle NoSQL Database, for which the Oracle NoSQL Database Adapter is provided. To load data into Oracle Database, the Oracle Database adapter is provided.
To load data into Solr servers, the Solr Adapter is provided.

We shall be using the Text File adapter to query a text file in HDFS. Create a text file hello.txt with the text “Hello”.

echo "Hello" > hello.txt

With HDFS started, run the following commands to create a /catalog directory in HDFS and put the hello.txt file in the /catalog directory.

hdfs dfs -mkdir hdfs://10.0.2.15:8020/catalog
hadoop dfs -put hello.txt hdfs://10.0.2.15:8020/catalog

OXH is also required at runtime in HDFS. To put the OXH software into HDFS, first create the directory for OXH and subsequently run the -put command to put the OXH software into HDFS.

hdfs dfs -mkdir hdfs://10.0.2.15:8020/oxh
hdfs dfs -put /oxh/oxh-3.0.0-cdh4.6.0 hdfs://10.0.2.15:8020/oxh

Creating an XQuery Transformation Script

To run Oracle XQuery for Hadoop, a query script is required in which the XQuery transformation functions are used to query and transform input data. Each of the OXH adapters provides functions specific to the adapter. In addition to the adapter functions, the following module functions may also be used:

Standard Query Functions – standard query and math functions
Hadoop Functions – Hadoop-specific functions
Duration, Date and Time Functions – functions used to parse duration, date and time values
String Processing Functions – functions used to add and remove white space surrounding data values

Create an XQuery query script, hello.xq. Import the Text File adapter with the following import statement.

import module "oxh:text";

The Text File adapter provides the following functions:

text:collection – accesses a collection of text files in HDFS. Parameters: $uris (text file URIs), $delimiter (a delimiter, the default being the newline character).
text:collection-xml – accesses a collection of text files in HDFS; each section in a file must be XML. Parameters: $uris (text file URIs), $delimiter (a delimiter, the default being the newline character).
text:put – puts a line of text in the output directory of the query, with the lines of text spread across multiple files. Parameter: $value (the value to put).
text:put-xml – puts a line of text as XML in the output directory of the query, with the lines of text spread across multiple files. Parameter: $value (the XML value to put).
text:trace – puts a line of text in a trace-* file in the output directory; a trace file can be used to identify certain rows or lines of data. Parameter: $value (the value to put).

In hello.xq, invoke the text:collection function to access the /catalog/hello.txt file in HDFS. Using a for loop, iterate over the lines in the hello.txt file.

for $line in text:collection("/catalog/hello.txt")

Invoke the text:put function to put the line of text, appended with “ World!”, in the output directory. Also use a return statement to return the value put.

return text:put($line || " World!")

The hello.xq file is listed below.

import module "oxh:text";
for $line in text:collection("/catalog/hello.txt")
return text:put($line || " World!")

The XQuery must have one of the forms FLOWR or (FLOWR1, FLOWR2, FLOWR3, … FLOWRn). The FLOWR clauses are as follows:

F – for – iterates over a collection function (required)
L – let – binds values to variables (optional)
O – order by – used to order (optional)
W – where – used for filtering (optional)
R – return – returns a result from invoking a put function (required)

We have used only the required clauses, for and return, in hello.xq.
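As an illustrative sketch (this query is not part of the original article), a script that also uses the optional let, where and order by clauses against the same text adapter might look like the following; the transformation applied (uppercasing and dropping empty lines) is an assumption for the example:

import module "oxh:text";

for $line in text:collection("/catalog/hello.txt")
let $upper := fn:upper-case($line)
where fn:string-length($line) > 0
order by $upper
return text:put($upper)

The shape is the same as hello.xq: a collection function feeds the for clause and a put function produces the output, with the optional clauses transforming, filtering, and ordering the lines in between.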
The directory structure of the /oxh directory should include the Oracle Loader for Hadoop directory, the CDH 4.6 Hadoop directory, the Oracle XQuery for Hadoop directory, the hello.xq script, and the directories used in the configuration properties for CDH 4.6. The text file hello.txt copied to HDFS could also be listed, but it is not required in the local filesystem as the file is input from HDFS.

Running the XQuery Script

In this section we shall run the XQuery transformation script hello.xq using Oracle XQuery for Hadoop. The syntax of the hadoop command to run Oracle XQuery for Hadoop is as follows.

hadoop jar $OXH_HOME/lib/oxh.jar [generic options] query.xq -output directory [-clean] [-ls] [-print] [-skiperrors] [-version]

The following generic options may be specified:

-conf job_config.xml – specifies the job configuration file. Oracle XQuery for Hadoop, Oracle Loader for Hadoop, and Oracle NoSQL Adapter configuration properties may be specified in the job configuration file.
-D property=value – specifies an Oracle XQuery for Hadoop configuration property.
-files – OXH supports accessing auxiliary job data for a join query from files in the Hadoop distributed cache facility; the -files option is used to add files to the distributed cache.

The following XQuery for Hadoop configuration properties may be specified with -D or in the job configuration file:

oracle.hadoop.xquery.output – the HDFS output directory for the query. The default output directory is /tmp/oxh-user_name/output, in which oxh-user_name is the OXH user name.
oracle.hadoop.xquery.scratch – the HDFS temp directory for temp files. The default value is /tmp/oxh-user_name/scratch.
oracle.hadoop.xquery.timezone – the time zone used in date, time and datetime values if required.
oracle.hadoop.xquery.skiperrors – skips errors if set to true, the default value being false.
oracle.hadoop.xquery.skiperrors.counters – groups errors by error code if set to true, which is also the default value.
oracle.hadoop.xquery.skiperrors.max – sets the maximum number of errors a single MapReduce job can recover from, the default being unlimited.
oracle.hadoop.xquery.skiperrors.log.max – sets the maximum number of errors a single MapReduce job logs, the default being 20.
log4j.logger.oracle.hadoop.xquery – configures a log4j logger for each task. No log4j logger is configured by default.

The Oracle XQuery for Hadoop command line options are as follows:

query.xq – the query file with the XQuery transformations.
-ls – lists the contents of the output directory.
-output directory – specifies the output directory. Equivalent of the oracle.hadoop.xquery.output configuration property.
-print – prints the contents of all the files in the output directory.
-skiperrors – errors are skipped but are counted, and the total error count, including the error messages for the first 20, is logged after the query completes.
-version – displays the OXH version.

Next, run OXH with some of the options discussed, using the hello.xq script. Specify the output directory with the -output option. Specify the -print option to print the value output.

hadoop jar $OXH_HOME/lib/oxh.jar hello.xq -output /output_dir -print

A MapReduce job runs to query the input file hello.txt in HDFS, puts a value into the output directory, and also returns the same value.
If the output directory is not specified the default output directory is output=hdfs://10.0.2.15:8020/tmp/oxh-root/output . The output from the Map Reduce application is listed:   [root@localhost oxh]# hadoop jar $OXH_HOME/lib/oxh.jar hello.xq -output /output_dir -print 14/05/15 13:58:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 14/05/15 13:58:55 INFO hadoop.xquery: OXH: Oracle XQuery for Hadoop 3.0.0 (build 3.0.0-cdh4.6.0-mr1 @mr2). Copyright (c) 2014, Oracle. All rights reserved. 14/05/15 13:58:56 INFO hadoop.xquery: Executing query "hello.xq". Output path: "hdfs://10.0.2.15:8020/output_dir" 14/05/15 13:59:00 INFO hadoop.xquery: Submitting map-reduce job "oxh:hello.xq#0" id="6ebcd51b-611c-40a9-9a37-508cf533658a.0", inputs=[hdfs://10.0.2.15:8020/catalog/hello.txt], output=hdfs://10.0.2.15:8020/output_dir 14/05/15 13:59:00 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId= 14/05/15 13:59:01 INFO input.FileInputFormat: Total input paths to process : 1 14/05/15 13:59:02 INFO mapreduce.JobSubmitter: number of splits:1 14/05/15 13:59:03 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local276091315_0001 14/05/15 13:59:06 INFO mapred.LocalDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/mapred/local/-7852968862261405288/apache-xmlbeans.jar - /oxh/apache-xmlbeans.jar 14/05/15 13:59:06 INFO mapred.LocalDistributedCacheManager: Localized hdfs://10.0.2.15:8020/oxh/oxh-3.0.0-cdh4.6.0/lib/apache-xmlbeans.jar as file:/var/lib/hadoop-0.20/cache/mapred/local/-7852968862261405288/apache-xmlbeans.jar 14/05/15 13:59:08 INFO mapred.LocalDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/mapred/local/4563230233275462579/avro-mapred-1.7.4-hadoop2.jar - /oxh/avro-mapred-1.7.4-hadoop2.jar 14/05/15 13:59:09 INFO mapred.LocalDistributedCacheManager: Localized hdfs://10.0.2.15:8020/oxh/oxh-3.0.0-cdh4.6.0/lib/avro-mapred-1.7.4-hadoop2.jar as file:/var/lib/hadoop-0.20/cache/mapred/local/4563230233275462579/avro-mapred-1.7.4-hadoop2.jar 14/05/15 13:59:10 INFO mapred.LocalDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/mapred/local/-6477759411341546269/orai18n-collation.jar - /oxh/orai18n-collation.jar 14/05/15 13:59:10 INFO mapred.LocalDistributedCacheManager: Localized hdfs://10.0.2.15:8020/oxh/oxh-3.0.0-cdh4.6.0/lib/orai18n-collation.jar as file:/var/lib/hadoop-0.20/cache/mapred/local/-6477759411341546269/orai18n-collation.jar 14/05/15 13:59:10 INFO mapred.LocalDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/mapred/local/-6181840295004161549/orai18n-mapping.jar - /oxh/orai18n-mapping.jar 14/05/15 13:59:10 INFO mapred.LocalDistributedCacheManager: Localized hdfs://10.0.2.15:8020/oxh/oxh-3.0.0-cdh4.6.0/lib/orai18n-mapping.jar as file:/var/lib/hadoop-0.20/cache/mapred/local/-6181840295004161549/orai18n-mapping.jar 14/05/15 13:59:10 INFO mapred.LocalDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/mapred/local/-7656083089921869325/orai18n-utility.jar - /oxh/orai18n-utility.jar 14/05/15 13:59:10 INFO mapred.LocalDistributedCacheManager: Localized hdfs://10.0.2.15:8020/oxh/oxh-3.0.0-cdh4.6.0/lib/orai18n-utility.jar as file:/var/lib/hadoop-0.20/cache/mapred/local/-7656083089921869325/orai18n-utility.jar 14/05/15 13:59:10 INFO mapred.LocalDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/mapred/local/683725889178713768/orai18n.jar - /oxh/orai18n.jar 14/05/15 13:59:10 
INFO mapred.LocalDistributedCacheManager: Localized hdfs://10.0.2.15:8020/oxh/oxh-3.0.0-cdh4.6.0/lib/orai18n.jar as file:/var/lib/hadoop-0.20/cache/mapred/local/683725889178713768/orai18n.jar 14/05/15 13:59:10 INFO mapred.LocalDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/mapred/local/5437536891430674867/oxh-core.jar - /oxh/oxh-core.jar 14/05/15 13:59:10 INFO mapred.LocalDistributedCacheManager: Localized hdfs://10.0.2.15:8020/oxh/oxh-3.0.0-cdh4.6.0/lib/oxh-core.jar as file:/var/lib/hadoop-0.20/cache/mapred/local/5437536891430674867/oxh-core.jar 14/05/15 13:59:10 INFO mapred.LocalDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/mapred/local/-1472330726504555316/oxh-mapreduce.jar - /oxh/oxh-mapreduce.jar 14/05/15 13:59:10 INFO mapred.LocalDistributedCacheManager: Localized hdfs://10.0.2.15:8020/oxh/oxh-3.0.0-cdh4.6.0/lib/oxh-mapreduce.jar as file:/var/lib/hadoop-0.20/cache/mapred/local/-1472330726504555316/oxh-mapreduce.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/mapred/local/-8109287830358941682/oxquery.jar - /oxh/oxquery.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: Localized hdfs://10.0.2.15:8020/oxh/oxh-3.0.0-cdh4.6.0/lib/oxquery.jar as file:/var/lib/hadoop-0.20/cache/mapred/local/-8109287830358941682/oxquery.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/mapred/local/3010151278133886831/stax2-api-3.1.1.jar - /oxh/stax2-api-3.1.1.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: Localized hdfs://10.0.2.15:8020/oxh/oxh-3.0.0-cdh4.6.0/lib/stax2-api-3.1.1.jar as file:/var/lib/hadoop-0.20/cache/mapred/local/3010151278133886831/stax2-api-3.1.1.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/mapred/local/-2795232699729042578/woodstox-core-asl-4.2.0.jar - /oxh/woodstox-core-asl-4.2.0.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: Localized hdfs://10.0.2.15:8020/oxh/oxh-3.0.0-cdh4.6.0/lib/woodstox-core-asl-4.2.0.jar as file:/var/lib/hadoop-0.20/cache/mapred/local/-2795232699729042578/woodstox-core-asl-4.2.0.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/mapred/local/-5419867076189885154/xmlparserv2_sans_jaxp_services.jar - /oxh/xmlparserv2_sans_jaxp_services.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: Localized hdfs://10.0.2.15:8020/oxh/oxh-3.0.0-cdh4.6.0/lib/xmlparserv2_sans_jaxp_services.jar as file:/var/lib/hadoop-0.20/cache/mapred/local/-5419867076189885154/xmlparserv2_sans_jaxp_services.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/mapred/local/-4840593832632600525/xqjapi.jar - /oxh/xqjapi.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: Localized hdfs://10.0.2.15:8020/oxh/oxh-3.0.0-cdh4.6.0/lib/xqjapi.jar as file:/var/lib/hadoop-0.20/cache/mapred/local/-4840593832632600525/xqjapi.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: file:/var/lib/hadoop-0.20/cache/mapred/local/-7852968862261405288/apache-xmlbeans.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: file:/var/lib/hadoop-0.20/cache/mapred/local/4563230233275462579/avro-mapred-1.7.4-hadoop2.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: file:/var/lib/hadoop-0.20/cache/mapred/local/-6477759411341546269/orai18n-collation.jar 14/05/15 13:59:11 INFO 
mapred.LocalDistributedCacheManager: file:/var/lib/hadoop-0.20/cache/mapred/local/-6181840295004161549/orai18n-mapping.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: file:/var/lib/hadoop-0.20/cache/mapred/local/-7656083089921869325/orai18n-utility.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: file:/var/lib/hadoop-0.20/cache/mapred/local/683725889178713768/orai18n.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: file:/var/lib/hadoop-0.20/cache/mapred/local/5437536891430674867/oxh-core.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: file:/var/lib/hadoop-0.20/cache/mapred/local/-1472330726504555316/oxh-mapreduce.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: file:/var/lib/hadoop-0.20/cache/mapred/local/-8109287830358941682/oxquery.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: file:/var/lib/hadoop-0.20/cache/mapred/local/3010151278133886831/stax2-api-3.1.1.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: file:/var/lib/hadoop-0.20/cache/mapred/local/-2795232699729042578/woodstox-core-asl-4.2.0.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: file:/var/lib/hadoop-0.20/cache/mapred/local/-5419867076189885154/xmlparserv2_sans_jaxp_services.jar 14/05/15 13:59:11 INFO mapred.LocalDistributedCacheManager: file:/var/lib/hadoop-0.20/cache/mapred/local/-4840593832632600525/xqjapi.jar 14/05/15 13:59:11 INFO mapreduce.Job: The url to track the job: http://localhost:8080/ 14/05/15 13:59:11 INFO hadoop.xquery: Waiting for map-reduce job oxh:hello.xq#0 14/05/15 13:59:11 INFO mapreduce.Job: Running job: job_local276091315_0001 14/05/15 13:59:11 INFO mapred.LocalJobRunner: OutputCommitter set in config null 14/05/15 13:59:11 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter 14/05/15 13:59:12 INFO mapred.LocalJobRunner: Waiting for map tasks 14/05/15 13:59:12 INFO mapred.LocalJobRunner: Starting task: attempt_local276091315_0001_m_000000_0 14/05/15 13:59:12 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ] 14/05/15 13:59:12 INFO mapred.MapTask: Processing split: hdfs://10.0.2.15:8020/catalog/hello.txt:0+6 14/05/15 13:59:12 INFO mapreduce.Job: Job job_local276091315_0001 running in uber mode : false 14/05/15 13:59:12 INFO mapreduce.Job: map 0% reduce 0% 14/05/15 13:59:13 INFO mapred.LocalJobRunner: 14/05/15 13:59:13 INFO mapred.Task: Task:attempt_local276091315_0001_m_000000_0 is done. And is in the process of committing 14/05/15 13:59:13 INFO mapred.LocalJobRunner: 14/05/15 13:59:13 INFO mapred.Task: Task attempt_local276091315_0001_m_000000_0 is allowed to commit now 14/05/15 13:59:13 INFO output.FileOutputCommitter: Saved output of task 'attempt_local276091315_0001_m_000000_0' to hdfs://10.0.2.15:8020/output_dir/_temporary/0/task_local276091315_0001_m_000000 14/05/15 13:59:13 INFO mapred.LocalJobRunner: map 14/05/15 13:59:13 INFO mapred.Task: Task 'attempt_local276091315_0001_m_000000_0' done. 14/05/15 13:59:13 INFO mapred.LocalJobRunner: Finishing task: attempt_local276091315_0001_m_000000_0 14/05/15 13:59:13 INFO mapred.LocalJobRunner: Map task executor complete. 
14/05/15 13:59:13 INFO mapreduce.Job: map 100% reduce 0% 14/05/15 13:59:13 INFO mapreduce.Job: Job job_local276091315_0001 completed successfully 14/05/15 13:59:13 INFO mapreduce.Job: Counters: 23 File System Counters FILE: Number of bytes read=12539 FILE: Number of bytes written=15062498 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number of bytes read=14730692 HDFS: Number of bytes written=13 HDFS: Number of read operations=167 HDFS: Number of large read operations=0 HDFS: Number of write operations=3 Map-Reduce Framework Map input records=1 Map output records=0 Input split bytes=104 Spilled Records=0 Failed Shuffles=0 Merged Map outputs=0 GC time elapsed (ms)=144 CPU time spent (ms)=0 Physical memory (bytes) snapshot=0 Virtual memory (bytes) snapshot=0 Total committed heap usage (bytes)=23703552 File Input Format Counters Bytes Read=6 File Output Format Counters Bytes Written=0 14/05/15 13:59:13 INFO hadoop.xquery: Finished executing "hello.xq". Output path: "hdfs://10.0.2.15:8020/output_dir" Hello World!   Run the following command to output the text in the file generated in the output directory.   hdfs dfs -cat /output_dir/part-m-00000   The value put into the part-m-00000 file in the output directory by hello.xq is returned.     In this article we introduced the Oracle XQuery for Hadoop.

Blog Post: How to selectively optimize an Oracle Text index

One of the largest systems I have deployed that uses Oracle Text for searching unstructured data stores close to one billion documents. One of our queries that used to run very fast had started being sluggish.

To make it easier to follow, let's say the Text index looked something like this:

exec ctx_ddl.create_section_group('text_namegroup','BASIC_SECTION_GROUP');
exec ctx_ddl.add_field_section ('text_namegroup','V1','V1');
exec ctx_ddl.add_field_section ('text_namegroup','V2','V2');
exec ctx_ddl.add_field_section ('text_namegroup','V3','V3');

CREATE INDEX name_text ON test_xml(xml_descriptors)
INDEXTYPE IS Ctxsys.Context
PARAMETERS('section group text_namegroup');

The first thing I had to check was the overall fragmentation level of the index, as this is a typical cause of performance problems. We typically optimize the Text index a little bit at a time at regular intervals.

select avg(tfrag)
from ( select /*+ ORDERED USE_NL(i) INDEX(i DR$NAME_TEXT$X) */
              i.token_text,
              (1-(least(round((sum(dbms_lob.getlength(i.token_info))/3800)+(0.50 - (1/3800))),count(*))/count(*)))*100 tfrag
       from ( select token_text, token_type
                from dr$NAME_TEXT$i sample(0.149)
               where rownum <= 100 )
            t, dr$NAME_TEXT$i i
       where i.token_text = t.token_text
         and i.token_type = t.token_type
       group by i.token_text, i.token_type);

AVG(TFRAG)
----------
42.1145939

This does not look too bad, and it is well below the threshold at which Oracle best practices suggest an optimization.

I then had to check the fragmentation for one of the specific queries the users were complaining about:

select count (*) from test_xml where contains(xml_descriptors,'JAMES within V1') > 0;

select ctx_report.token_type('NAME_TEXT','FIELD V1 TEXT') from dual;

CTX_REPORT.TOKEN_TYPE('NAME_TEXT','FIELDV1TEXT')
------------------------------------------------
                                              16

select /*+ ORDERED USE_NL(i) INDEX(i DR$NAME_TEXT$X) */
       (1-(least(round((sum(dbms_lob.getlength(i.token_info))/3800)+(0.50 - (1/3800))),count(*))/count(*)))*100 tfrag
  from dr$NAME_TEXT$i i
 where i.token_text = 'JAMES'
   and i.token_type = 16
 group by i.token_text, i.token_type;

AVG(TFRAG)
----------
83.3333333

This is a much different picture, of course.

At that point we could not really optimize the entire index, as that would take days and take a toll on the system. Also, our current plan of gradual optimization would not solve the problem soon enough. And of course the same problem would likely reoccur in the future.

The solution was to frequently optimize the most fragmented tokens for each field section (token type) while at the same time allowing our standard process to slowly optimize the rest of the index.
Here is how this is done:

declare
   max_rows number := 1000;
   counter number := 0;
begin
   for c in ( select token_text, count(*) cntr
                from dr$NAME_TEXT$i
               group by token_text
               order by cntr desc ) loop
       ctx_ddl.optimize_index('NAME_TEXT', 'TOKEN', null, c.token_text, null, 16);
       counter := counter + 1;
       exit when counter > max_rows;
   end loop;
end;
/

declare
   max_rows number := 1000;
   counter number := 0;
begin
   for c in ( select token_text, count(*) cntr
                from dr$NAME_TEXT$i
               group by token_text
               order by cntr desc ) loop
       ctx_ddl.optimize_index('NAME_TEXT', 'TOKEN', null, c.token_text, null, 17);
       counter := counter + 1;
       exit when counter > max_rows;
   end loop;
end;
/

declare
   max_rows number := 1000;
   counter number := 0;
begin
   for c in ( select token_text, count(*) cntr
                from dr$NAME_TEXT$i
               group by token_text
               order by cntr desc ) loop
       ctx_ddl.optimize_index('NAME_TEXT', 'TOKEN', null, c.token_text, null, 18);
       counter := counter + 1;
       exit when counter > max_rows;
   end loop;
end;
/
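Since the three anonymous blocks above differ only in the token type passed to ctx_ddl.optimize_index, a more compact variant could drive both loops from the list of field-section token types. This is only a sketch of that consolidation, not the script from the production system; the token_type filter on the inner cursor is an assumption about the intent (optimize the heaviest tokens of each field section), and the mapping of 17 and 18 to the V2 and V3 sections is assumed.

declare
   max_rows constant pls_integer := 1000;
   counter  pls_integer;
begin
   -- 16 was shown above to be the V1 field section; 17 and 18 are assumed to be V2 and V3
   for t in ( select column_value as token_type
                from table(sys.odcinumberlist(16, 17, 18)) ) loop
      counter := 0;
      for c in ( select token_text, count(*) cntr
                   from dr$NAME_TEXT$i
                  where token_type = t.token_type
                  group by token_text
                  order by cntr desc ) loop
         ctx_ddl.optimize_index('NAME_TEXT', 'TOKEN', null, c.token_text, null, t.token_type);
         counter := counter + 1;
         exit when counter > max_rows;
      end loop;
   end loop;
end;
/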

Blog Post: Managing Oracle Database 12c with Enterprise Manager – Part II

Friends, In the last post, we saw what the discovery of an Oracle 12.1 Container Database (CDB) would look like in Oracle Enterprise Manager 12c , with the CDB and component PDBs all being discovered by the EM Agent that has been installed on the target database server.  Drilling down to a CDB, the following screenshot illustrates how the CDB Home page would appear. The Home Page shows the list of Pluggable Databases in this Container Database. Enterprise Manager 12c allows you to easily open/close component Pluggable Databases from the Container Database Menu. Pluggable Databases can be opened “ Read Only ” or for “ Read Write ”. The “ Open ” in the menu below indicates a normal open, i.e. for Read Write. It is also possible to provision new Pluggable Databases directly from Enterprise Manager, from the Container Database menu as seen below. This brings up the Provision Pluggable Databases page. On this page, you can migrate an existing non-CDB database to be a new pluggable database. You can create a new pluggable database either from a seed database, or by cloning an existing pluggable database,  or from an unplugged database.  You can also unplug a pluggable database, or delete it totally along with the datafiles. In the next blog post, we will talk more about the Enterprise Manager licensing requirements for this page. These aspects need to be clearly understood. Regards, Porus.

Blog Post: Tnsnames.ora Parser

Have you ever wanted to use a tool to parse the manually typed up “stuff” that lives in a tnsnames.ora file, to be absolutely certain that it is correct? Ever wanted a tool to check that all the opening and closing brackets match? I may just have the very thing for you. Download the binary file Tnsnames.Parser.zip and unzip it. Source code is also available on Github. When unzipped, you will see the following files:

README – this should be obvious!
tnsnames_checker.sh – Unix script to run the utility.
tnsnames_checker.cmd – Windows batch file to run the utility.
antlr-4.4-complete.jar – Parser support file.
tnsnames_checker.jar – Parser file.
tnsnames.test.ora – a valid tnsnames.ora to test the utility with.

The README file is your best friend! All the utility does is scan the supplied input file, passed via standard input, and write any syntax or semantic problems out to standard error.

Working Example

There are no errors in the tnsnames.test.ora file, so the output looks like the following:

./tnsnames_checker.sh

Non-Working Example

After a bit of fiddling, there are now some errors in the tnsnames.test.ora file, so the output looks like the following:

./tnsnames_checker.sh

You can figure out where and what went wrong from the messages produced. Have fun.
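As a footnote (this example is not from the original post), here is the shape of a minimal, well-formed tnsnames.ora entry of the kind the utility validates; the alias, host and service name are illustrative:

MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = mydb.example.com)
    )
  )

Dropping or doubling any one of those parentheses is exactly the kind of problem the checker reports on standard error.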

Blog Post: Prevent alter table

Many times in real project life, modifications of any kind made without proper testing (and, worst of all, with little or no announcement or documentation) lead to very bad situations which are very hard to fix without loss of service. One of them is altering a table.

The problem

A classic case of altering a table is when a user, for instance, drops a column. At that moment many packages are invalidated automatically, and some of them may stay in that state because they cannot be recompiled without editing their content (the dropped column is referenced explicitly in them). Not to mention the many Java classes (which were the real problem in my case) that also have to be edited, which cannot be done without a redeploy and loss of service. Another dangerous situation is carelessly adding an index, which might solve the problem at hand but at the same time lead to many other problems that are not so easy to spot, because the new index might change other (until now!) perfectly good SQL plans and turn them into performance problems from now on. If you have faced that kind of trouble, you'll know what I mean.

To address this, I initially created a DDL trigger which monitored these activities. The trigger allowed me to see who did what in the past. This was a pretty bad workaround because I was unable to prevent tables/indexes from being altered; I could only see what had happened, when it happened. Worst of all, every DBA intervention came after the damage was done, so we were just fixing the problems, not preventing them, as would be much nicer and better for any production system. Adding table auditing is also a solution for monitoring, but not for preventing, table alterations, because it too just records the change rather than preventing it.

The solution

Like many of us, I thought that was all I could do... until today, when I found the alter table ... disable table lock statement in the original Oracle documentation, which prevents any kind of DDL operation on the table. Best of all, as you see, it was introduced in Oracle 10.1 and has been with us as a solution for more than a decade... and largely unknown. This was it: the better solution for my problems. Let me show how easy and at the same time elegant this solution is. Usage is more than easy... a few simple examples will explain more than a thousand words.

Let's create a demo table, called SPECIAL_TABLE:

SQL> CREATE TABLE SPECIAL_TABLE (COL1  NUMBER, COL2  VARCHAR2(10 CHAR));

Table created.

Alter the table to disable table locks:

SQL> alter table SPECIAL_TABLE disable table lock;

Table altered.

SQL>

Now your table SPECIAL_TABLE is secured against any kind of DDL action. The following examples show what is prohibited from now on:

The table can't be dropped (!!!)
SQL> drop table SPECIAL_TABLE;
drop table SPECIAL_TABLE
           *
ERROR at line 1:
ORA-00069: cannot acquire lock -- table locks disabled for SPECIAL_TABLE

SQL> drop table SPECIAL_TABLE cascade constraints;
drop table SPECIAL_TABLE cascade constraints
           *
ERROR at line 1:
ORA-00069: cannot acquire lock -- table locks disabled for SPECIAL_TABLE

An index can't be added

SQL> CREATE INDEX SPECIAL_TABLE_IX ON SPECIAL_TABLE
  2  (COL1)
  3  LOGGING
  4  PCTFREE    0
  5  ;
CREATE INDEX SPECIAL_TABLE_IX ON SPECIAL_TABLE
                                  *
ERROR at line 1:
ORA-00069: cannot acquire lock -- table locks disabled for SPECIAL_TABLE

Columns can’t be modified

SQL> alter table SPECIAL_TABLE modify col2 varchar2(100 CHAR);
alter table SPECIAL_TABLE modify col2 varchar2(100 CHAR)
*
ERROR at line 1:
ORA-00069: cannot acquire lock -- table locks disabled for SPECIAL_TABLE

Columns can’t be dropped

SQL> alter table SPECIAL_TABLE drop column col2;
alter table SPECIAL_TABLE drop column col2
*
ERROR at line 1:
ORA-00069: cannot acquire lock -- table locks disabled for SPECIAL_TABLE

This obviously means that any action that needs the (now disabled) table lock cannot be executed. But at the same time almost everything needed for normal work is still allowed:

Columns can be added!

SQL> alter table SPECIAL_TABLE add col3 varchar2(10 CHAR);

Table altered.

SQL> desc SPECIAL_TABLE
 Name                 Null?    Type
 -------------------- -------- ------------------
 COL1                          NUMBER
 COL2                          VARCHAR2(10 CHAR)
 COL3                          VARCHAR2(10 CHAR)

INSERT/UPDATE/DELETE/SELECT FOR UPDATE DML commands are allowed:

SQL> insert into SPECIAL_TABLE values (1,'One');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from SPECIAL_TABLE for update;

      COL1 COL2
---------- ----------
         1 One

SQL> update SPECIAL_TABLE set col2='change one' where col1=1;

1 row updated.

SQL> commit;

Commit complete.

SQL> select * from SPECIAL_TABLE;

      COL1 COL2
---------- ----------
         1 change one

SQL> delete SPECIAL_TABLE;

1 row deleted.

SQL> commit;

Commit complete.

The table can be truncated

SQL> truncate table SPECIAL_TABLE drop storage;

Table truncated.

The end

As always in the Oracle world, every new day is a good day to learn something. This is a really nice feature to implement in cases like the one described: clean and efficient. Best of all, even the DBA (sysdba) role cannot alter the table before re-enabling table locks, so this works at the level of very low kernel calls, which ensures that altering is really prevented. If you need to run some DDL (like adding an index), just re-enable table locks:

SQL> alter table SPECIAL_TABLE enable table lock;

Table altered.

and do the job. Afterwards, disable table locks again to place the table in its protected state. Experiment further with other special situations that you might have. Hope this helps someone. Cheers!!!
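One small addition (not from the original post): you can see at a glance which tables currently have table locks disabled by querying the TABLE_LOCK column of DBA_TABLES (or USER_TABLES):

select owner, table_name, table_lock
  from dba_tables
 where table_lock = 'DISABLED'
 order by owner, table_name;

This makes it easy to audit which tables are in the protected state before a planned maintenance window.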

Blog Post: User not authorized to execute service (SBL-EAI-04308)

When I tried to integrate EBS and Siebel, I got the error below in Siebel:

Error invoking service ‘XXACE_CREATE_RECEIPTS_SIEBEL_PortType_2′, method ‘XXACE_RECEIPTS’ at step ‘Transport’.(SBL-BPR-00162) – Operation ‘XXACE_RECEIPTS’ of Web Service ‘ http://xmlns.oracle.com/apps/ar/soaprovider/plsql/xxace_create_receipts_siebel/.XXACE_CREATE_RECEIPTS_SIEBEL_Service’ at port ‘XXACE_CREATE_RECEIPTS_SIEBEL_Port’ failed with the following explanation: “User not authorized to execute service.”.(SBL-EAI-04308)

Solution: In EBS Integrated SOA Gateway, choose the particular interface (Interface type – PL/SQL – Receivable), revoke the SYSADMIN grant, and then grant the SYSADMIN privilege to this object again.

Blog Post: The Hitchhiker’s Guide to the EXPLAIN PLAN Part 15: Oracle-in-the-box

Previous installment: Damn the Cost, Full Speed Ahead
First installment: DON’T PANIC

I use the HR sample schema in all my demonstrations. It is one of the sample schemas used in the examples found in the Oracle documentation. I also use the OTN Developer Day VM (virtual machine) in my experiments and demonstrations. The Developer Day VM uses Linux as the operating system and is pre-loaded with a fully-functional Oracle 12c database with all the options. I don’t have to go through any installation headaches, my laptop is not messed up, and—best of all—I can take a snapshot of a known good state and revert to it at will. The OTN Developer Day VM is indeed the greatest thing since sliced bread. And, of course, you don’t need any licenses because “all [Oracle] software downloads are free, and most come with a Developer License that allows you to use full versions of the products at no charge while developing and prototyping your applications, or for strictly self-educational purposes.” ( http://www.oracle.com/technetwork/indexes/downloads/index.html )

You can download and install the Developer Day VM from http://www.oracle.com/technetwork/database/enterprise-edition/databaseappdev-vm-161299.html . A virtual machine runs inside a “hypervisor” so the first step is to download and install Oracle VirtualBox (Sun VirtualBox). The next step is to import the virtual machine into the hypervisor. If you follow all the instructions and fire up the VM, you will see the Gnome desktop. Were you worried about getting lost in a sea of Linux-speak? Well, worry no more. Click on the SQL Developer icon on the desktop and click on “Connections” to create a new connection. Enter the following information and click on “Connect.”

Username “hr”
Password “oracle”
Connection Type “Basic”
Role “default”
Hostname “localhost”
Service Name “PDB1”

How easy was that? You can just as easily use SQL*Plus in a terminal window if you prefer it to SQL Developer.

[oracle@localhost ~]$ sqlplus hr/oracle@pdb1

SQL*Plus: Release 12.1.0.1.0 Production on Mon Aug 25 21:50:00 2014

Copyright (c) 1982, 2013, Oracle. All rights reserved.

ERROR:
ORA-28002: the password will expire within 7 days

Last Successful login time: Mon Aug 25 2014 21:49:35 -04:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

PDB1@ORCL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
REGIONS
COUNTRIES
LOCATIONS
DEPARTMENTS
JOBS
EMPLOYEES
JOB_HISTORY

7 rows selected.

PDB1@ORCL> select table_name from user_tables order by table_name;

TABLE_NAME
--------------------------------------------------------------------------------
COUNTRIES
DEPARTMENTS
EMPLOYEES
JOBS
JOB_HISTORY
LOCATIONS
REGIONS

7 rows selected.

PDB1@ORCL> exit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Before you go any further, create a snapshot of the virtual machine. Select “Machine” from the menu of the container window in which the VM is running, then select “Take Snapshot.” You will be able to revert to this known good state at will. Once you have tried out the Developer Day VM, I predict that you will never go back to the old way of installing Oracle Database on your laptop.
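Since this series is about the explain plan, a quick way to confirm that the freshly imported VM is ready for the rest of the installments is to display a plan for a simple HR query. This snippet is a suggested sanity check, not part of the original installment; any HR query will do:

explain plan for
select e.last_name, d.department_name
from   employees e join departments d
on     e.department_id = d.department_id;

select * from table(dbms_xplan.display);

If the plan output appears, everything needed for the experiments in this series is in place.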
Previous installment: Damn the Cost, Full Speed Ahead First installment: DON’T PANIC Iggy Fernandez

Blog Post: Getting Python to play with Oracle using cxOracle on Mint and Ubuntu

“We need to go through Tow-ces-ter”, suggested Deb. “It’s pronounced Toast-er”, I corrected gently. “Well, that’s just silly”, came the indignant response, “I mean, why can’t they just spell it as it sounds?” At this point I resisted the temptation of pointing out that, in her Welsh homeland, placenames are, if anything, even more difficult to pronounce if you’ve only ever seen them written down. Llanelli is a linguistic trap for the unwary, let alone the intriguingly named Betws-Y-Coed. Instead, I reflected on the fact that, even when you have directions, things can sometimes be a little less than straightforward. Which brings me to the wonderful world of Python. Having spent some time playing around with this language, I wanted to see how easy it is to plug it into Oracle. To do this, I needed the cxOracle Python library. Unfortunately, installation of this library proved somewhat less than straightforward – on Linux Mint at least. What follows are the gory details of how I got it working, in the hope that it will help anyone else struggling with this particular conundrum.

My Environment

The environment I’m using to execute the steps that follow is Mint 13 (with the Cinnamon desktop). The database I’m connecting to is Oracle 11gXE. In Mint, as with most other Linux distros, Python is part of the base installation. In this particular distro version, the default version of Python is 2.7. If you want to check which version is currently the default on your system :

which python
/usr/bin/python

This will tell you what file gets executed when you invoke python from the command line. You should then be able to do something like this :

ls -l /usr/bin/python
lrwxrwxrwx 1 root root 9 Apr 10 2013 python -> python2.7

One other point to note is that, if you haven’t got it already, you’ll probably want to install the Oracle Client. The steps you follow to do this will depend on whether you’re running a 32-bit or 64-bit OS. To check this, open a Terminal Window and type :

uname -i

If this comes back with x86_64 then you are running 64-bit. If it’s i686 then you are on a 32-bit OS. In either case, you can find the instructions for installation of the Oracle Client on Debian based systems here . According to cxOracle’s official SourceForge site , the next bit should be simple. Just by entering the magic words…

pip install cx_Oracle

…you can wire up your Python scripts to the Oracle Database of your choice. Unfortunately, there are a few steps required on Mint before we can get to that point.

Installing pip

This is simple enough. Open a Terminal and :

sudo apt-get install python-pip

However, if we then run the pip command…

pip install cx_Oracle
cx_Oracle.c:6:20: fatal error: Python.h: No such file or directory

It seems that, in order to run this, there is one further package you need…

sudo apt-get install python-dev

Another point to note is that you need to execute the pip command as sudo. Even then, we’re not quite there….
sudo pip install cx_Oracle
Downloading/unpacking cx-Oracle
  Running setup.py egg_info for package cx-Oracle
    Traceback (most recent call last):
      File "<string>", line 14, in <module>
      File "/home/mike/build/cx-Oracle/setup.py", line 135, in <module>
        raise DistutilsSetupError("cannot locate an Oracle software " \
    distutils.errors.DistutilsSetupError: cannot locate an Oracle software installation
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 14, in <module>
      File "/home/mike/build/cx-Oracle/setup.py", line 135, in <module>
        raise DistutilsSetupError("cannot locate an Oracle software " \
    distutils.errors.DistutilsSetupError: cannot locate an Oracle software installation
----------------------------------------
Command python setup.py egg_info failed with error code 1
Storing complete log in /home/mike/.pip/pip.log

So, whilst we now have all of the required software, it seems that sudo does not recognize the $ORACLE_HOME environment variable. You can confirm this as follows. First of all, check that this environment variable is set in your session :

echo $ORACLE_HOME
/usr/lib/oracle/11.2/client64

That looks OK. However….

sudo env |grep ORACLE_HOME

…returns nothing.

Persuading sudo to see $ORACLE_HOME

At this point, the solution presented here comes to the rescue. In the terminal run…

sudo visudo

Then add the line :

Defaults env_keep += "ORACLE_HOME"

Hit CTRL+X then confirm the change by selecting Y(es). If you now re-run the visudo command, the text you get should look something like this :

#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:$
Defaults env_keep += "ORACLE_HOME"
# Host alias specification
# User alias specification
# Cmnd alias specification
# User privilege specification
[ Read 30 lines ]
^G Get Help ^O WriteOut ^R Read File ^Y Prev Page ^K Cut Text  ^C Cur Pos
^X Exit     ^J Justify  ^W Where Is  ^V Next Page ^U UnCut Text ^T To Spell

You can confirm that your change has had the desired effect…

sudo env |grep ORACLE_HOME
ORACLE_HOME=/usr/lib/oracle/11.2/client64

Finally installing the library

At last, we can now install the cxOracle library :

sudo pip install cx_Oracle
Downloading/unpacking cx-Oracle
  Running setup.py egg_info for package cx-Oracle
Installing collected packages: cx-Oracle
  Running setup.py install for cx-Oracle
Successfully installed cx-Oracle
Cleaning up...

To make sure that the module is now installed, you can now run :

python
Python 2.7.3 (default, Feb 27 2014, 19:37:34)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
help('modules')

Please wait a moment while I gather a list of all available modules...

If all is well, you should be presented with the following list :

ScrolledText copy_reg ntpath tty
SgiImagePlugin crypt nturl2path turtle
SimpleDialog csv numbers twisted
SimpleHTTPServer ctypes oauth types
SimpleXMLRPCServer cups opcode ubuntu_sso
SocketServer cupsext operator ufw
SpiderImagePlugin cupshelpers optparse unicodedata
StringIO curl os unittest
SunImagePlugin curses os2emxpath uno
TYPES cx_Oracle ossaudiodev unohelper
TarIO datetime packagekit

Finally, you can confirm that the library is installed by running a simple test. What test is that ?, I hear you ask….
Testing the Installation
A successful connection to Oracle from Python results in the instantiation of a connection object. This object has a property called version, which holds the version number of the Oracle database you have connected to. So, from the command line, you can invoke Python… python Python 2.7.3 (default, Feb 27 2014, 19:58:35) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. … and then run import cx_Oracle con = cx_Oracle.connect('someuser/somepwd@the-db-host-machine/instance_name') print con.version 11.2.0.2.0 You’ll need to replace someuser/somepwd with the username and password of an account on the target database. The db-host-machine is the name of the server that the database is sitting on. The instance name is the name of the database instance you’re trying to connect to. Incidentally, things are a bit easier if you have an Oracle client on your machine with the TNS_ADMIN environment variable set. To check this: env |grep -i oracle LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib TNS_ADMIN=/usr/lib/oracle/11.2/client64/network/admin ORACLE_HOME=/usr/lib/oracle/11.2/client64 Assuming that your tnsnames.ora includes an entry for the target database, you can simply use a TNS connect string: import cx_Oracle con = cx_Oracle.connect('username/password@database') print con.version 11.2.0.2.0
Useful Links
Now you’ve got cx_Oracle up and running, you may want to check out some rather useful tips on how best to use it: Oracle Technet Article A further quick guide to cxOracle Filed under: Linux , Oracle Tagged: cxOracle , pip install cxOracle , python , python-dev , uname , visudo , which
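One extra sanity check is worth a mention here (my own addition rather than part of the post above): you can confirm that the module imports cleanly, and that it found the Oracle client libraries, without supplying any database credentials at all. The second command assumes the module was built against the Instant Client installed earlier:
python -c "import cx_Oracle; print cx_Oracle.version"
ldd $(python -c "import cx_Oracle; print cx_Oracle.__file__") | grep -i libclnt
If either of these errors out, it is the build or the client installation that needs revisiting, not your connect string.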

Blog Post: Renaming an Oracle Apache Target in EM12c

When installing Enterprise Manager 12c, the host value can come from a number of places for different applications/tiers. For most, it comes from the environment variable $ORACLE_HOSTNAME (for Windows servers, %ORACLE_HOSTNAME%). The OHS1 target, aka Oracle Apache, in the middle tier of the EM12c environment pulls its value from the /etc/hosts file (for Unix as well as Windows), and so it is vulnerable when a virtual host name is used or a host name change occurs. It can, however, be updated post installation when the OHS1 target fails to return an active status in the EM12c console.
Update the Configuration File
The file that controls the configuration of the OHS1 target is topology.xml, located at $OMS_HOME\user_projects\domains\GCDomain\opmn\topology.xml. Edit the topology.xml file and replace/add the following entries (shown in bold in the original post), replacing Virtual_Cluster_name with the name of the cluster: <ias-instance id="instance1" oracle-home="G:\app\aime\product\em12c\Oracle_WT" instance-home="G:\app\aime\product\gc_inst\WebTierIH1" host="Virtual_Cluster_name" port="6701"> <ias-component id="ohs1" type="OHS" mbean-class-name="oracle.ohs.OHSGlobalConfig" mbean-interface-name="oracle.ohs.OHSGlobalConfigMXBean" port="9999" host="Virtual_Cluster_name"> Save the file with the new changes.
Remove the OHS1 Target
Log into your EM12c console as the SYSMAN user (or another user with appropriate privileges) and click on All Targets. Either do a search for the OHS1 target or just scan down and double-click on it. The target will show as down and display the incorrect associated targets with the HTTP Server. You will need to remove and re-add the target so that EM12c picks up the topology.xml configuration update to the new host name. To do this, click on Oracle HTTP Server – Target Setup – Remove Target. The target for the Oracle Apache server/HTTP Server, along with its dependents, has now been removed.
Refresh the Weblogic Domain
To re-add the OHS1 target, we are going to use a job already built into EM12c. Go back to All Targets via the Targets drop-down. At the very top you will commonly see the EMGC_GCDomain (Grid Control Domain; yes, it’s still referred to as that… :)). Log into this target. There are two “levels” to this target, the parent and then the farm. Either one will offer you a job in the drop-down to Refresh Weblogic Domain. Once you click on this job, it will ask you to remove or add targets. You can simply choose to Add Targets and the job will first search for any missing targets that need to be re-added. Commonly it will locate 12 and display a list of the targets it wishes to add. You will note that the OHS1 target now displays the CORRECT host name. Close the window and complete the wizard steps to add these targets to the Weblogic domain. Return to All Targets and access the OHS1 target to verify that it now displays an active status; it may take up to one collection interval for the target status to update. Copyright © DBA Kevlar [ Renaming an Oracle Apache Target in EM12c ], All Rights Reserved. 2014.
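Before and after making the edit, it can be reassuring to see exactly which host values topology.xml currently contains. The following one-liner is my own addition rather than part of the original write-up, and it assumes a Unix-style install (on Windows you could open the file in a text editor or use findstr instead):
grep -o 'host="[^"]*"' $OMS_HOME/user_projects/domains/GCDomain/opmn/topology.xml | sort -u
Any entry still showing the old host name after the edit means the change did not take, or that a different copy of the file was edited.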

Blog Post: Big Data and the Resurrection of the Command Line

Big Data means working with huge volumes of data, and we find ourselves needing to manipulate very large files. Unfortunately, the "double click" does not scale; we need to go back to the old, but classic and powerful, command-line tools. Here is a set of Unix/Linux commands that should never be missing from the data scientist's toolbox when it comes to analysing and preprocessing files.
The wc command (Word Count)
The wc command lets me count the number of words, lines or characters in a file. In the following example I use a file called ejemplos.txt: $ wc ejemplos.txt  20  60 315 ejemplos.txt My ejemplos.txt file has 20 lines, 60 words and 315 characters. I can use the -l option to count only the lines: $ wc -l ejemplos.txt 20 ejemplos.txt In the same way I could have used the -w or -c options to count words or characters respectively. I can even combine the options.
The head command
The head command lets me read the first rows of a file. By default, the result is shown on the screen (standard output). $ head ejemplos.txt linea 1 Fernando linea 2 Martin linea 3 Pedro linea 4 Eduardo linea 5 Marcelo linea 6 Clarisa linea 7 Mariana linea 8 Agustina linea 9 Zaida linea 10 Pedro By default it reads the first ten lines. I can change this behaviour using the -n option, where n is the number of rows I want to read. $ head -2 ejemplos.txt linea 1 Fernando linea 2 Martin
The tail command
With the tail command I can read the last lines of a file. As always, by default, the result goes to standard output (the screen). $ tail ejemplos.txt linea 11 Jesus linea 12 Carlos linea 13 Marisa linea 14 Candido linea 15 Eleuterio linea 16 Manuel linea 17 Teresa linea 18 Candela linea 19 Martina linea 20 Luciana As you can see in the example, the tail command read the last 10 rows of the file. I can change this behaviour with the -n option, where n is the number of lines I want to read. $ tail -3 ejemplos.txt linea 18 Candela linea 19 Martina linea 20 Luciana A very interesting option of the tail command is -f. With this option the command does not return the prompt to the user and, as the file grows, tail shows the new rows that are appended to it. It is a very useful option when we want to monitor the growth of a file in real time.
The grep command
I use the grep command to search for text strings in a file. Example: $ grep Martina ejemplos.txt linea 19 Martina In this example I searched for the string "Martina" in the file ejemplos.txt. The grep command returned the complete line containing that string. The grep command, combined with regular expressions, is perhaps one of the most powerful commands that Linux and Unix operating systems offer us. If I wanted the list of people who have the character "a" as the second letter of their name: $ grep  "^linea [0-9][0-9]* .a" ejemplos.txt linea 2 Martin linea 5 Marcelo linea 7 Mariana linea 9 Zaida linea 12 Carlos linea 13 Marisa linea 14 Candido linea 16 Manuel linea 18 Candela linea 19 Martina The wc, head, tail and grep commands are just a very basic starting list. There are many more commands that we will look at in future posts (sed, awk, paste, sort, etc.). I hope you found this post useful.
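The real power of these tools appears when you chain them together with pipes. The two examples below are my own addition to the original post, using the same ejemplos.txt file shown above:
$ grep "Mar" ejemplos.txt | wc -l
5
$ head -8 ejemplos.txt | tail -3
linea 6 Clarisa
linea 7 Mariana
linea 8 Agustina
The first pipeline counts how many lines contain the string "Mar"; the second reads only lines 6 through 8 by feeding the first eight lines of the file into tail.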

Blog Post: The Hitchhiker’s Guide to the EXPLAIN PLAN Part 16: Practice Exam I

Previous installment: Oracle-in-the-box First installment: DON’T PANIC To prepare for the final exam, let’s do some practice exams. First, let’s review the study material: “An Oracle EXPLAIN PLAN is a tree structure corresponding to a relational algebra expression. It is printed in pre-order sequence (visit the root of the tree, then traverse each subtree—if any—in pre-order sequence) but should be read in post-order sequence (first traverse each subtree—if any—in post-order sequence, then only visit the root of the tree).” ( The Hitchhiker’s Guide to the EXPLAIN PLAN Part 7: Don’t pre-order your EXPLAIN PLAN ) Let’s look at the query plan for the following query, which joins the Employees, Jobs, Departments, Locations, Countries, and Regions tables in the HR sample schema. I have included hints that force Oracle to use a particular join order. I have also included the GATHER_PLAN_STATISTICS hint to request the collection of “rowsource execution statistics” during the execution of the query. SELECT /*+ GATHER_PLAN_STATISTICS LEADING (e j d l c r) USE_NL(j) USE_NL(d) USE_NL(l) USE_NL(c) USE_NL(r) */ e.first_name, e.last_name, e.salary, j.job_title, d.department_name, l.city, l.state_province, c.country_name, r.region_name FROM employees e, jobs j, departments d, locations l, countries c, regions r WHERE e.department_id = 90 AND j.job_id = e.job_id AND d.department_id = e.department_id AND l.location_id = d.location_id AND c.country_id = l.country_id AND r.region_id = c.region_id; Remember that the only legit way of displaying a query plan is to execute the query to completion first and then use dbms_xplan.display_cursor ( The Hitchhiker’s Guide to the EXPLAIN PLAN Part 13: The Great Pretender ). In other words, “explain plan for” and “set autotrace on” are not legit ways of displaying query plans. They were legit in the days of the rule-based optimizer but those days are long gone. As Tom Kyte once said , “It ain’t so much the things we don’t know that get us into trouble. It’s the things you know that just ain’t so or just ain’t so anymore or just ain’t always so.” The full command to print the query plan is: select * from table(dbms_xplan.display_cursor( sql_id => null, cursor_child_no => null, format => 'typical iostats last -rows -bytes -cost' )); I’m sure that you want an explanation. The null value for the sql_id parameter indicates that we are interested in the query that was most recently executed by our session. The null value for the cursor_child_no parameter means that we have not restricted the output to a particular child number. The value “typical iostats last -rows -bytes -cost” of the format option indicates that we are interested only in the last execution; that is, we don’t want a summary of all previous executions. Also, we want the “rowsource execution statistics” but not the rows, bytes, and cost estimates produced by the optimizer.
This is what I got when I executed the dbms_xplan.display_cursor command: SQL_ID 7qdcjk3rqfbz9, child number 0 ------------------------------------- SELECT /*+ GATHER_PLAN_STATISTICS LEADING (e j d l c r) USE_NL(j) USE_NL(d) USE_NL(l) USE_NL(c) USE_NL(r) */ e.first_name, e.last_name, e.salary, j.job_title, d.department_name, l.city, l.state_province, c.country_name, r.region_name FROM employees e, jobs j, departments d, locations l, countries c, regions r WHERE e.department_id = 90 AND j.job_id = e.job_id AND d.department_id = e.department_id AND l.location_id = d.location_id AND c.country_id = l.country_id AND r.region_id = c.region_id Plan hash value: 1703224538 --------------------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Time | A-Rows | A-Time | Buffers | --------------------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 3 |00:00:00.01 | 26 | | 1 | NESTED LOOPS | | 1 | | 3 |00:00:00.01 | 26 | | 2 | NESTED LOOPS | | 1 | 00:00:01 | 3 |00:00:00.01 | 23 | | 3 | NESTED LOOPS | | 1 | 00:00:01 | 3 |00:00:00.01 | 21 | | 4 | NESTED LOOPS | | 1 | 00:00:01 | 3 |00:00:00.01 | 19 | | 5 | NESTED LOOPS | | 1 | 00:00:01 | 3 |00:00:00.01 | 14 | | 6 | NESTED LOOPS | | 1 | 00:00:01 | 3 |00:00:00.01 | 9 | | 7 | TABLE ACCESS BY INDEX ROWID BATCHED| EMPLOYEES | 1 | 00:00:01 | 3 |00:00:00.01 | 4 | |* 8 | INDEX RANGE SCAN | EMP_DEPARTMENT_IX | 1 | 00:00:01 | 3 |00:00:00.01 | 2 | | 9 | TABLE ACCESS BY INDEX ROWID | JOBS | 3 | 00:00:01 | 3 |00:00:00.01 | 5 | |* 10 | INDEX UNIQUE SCAN | JOB_ID_PK | 3 | | 3 |00:00:00.01 | 2 | | 11 | TABLE ACCESS BY INDEX ROWID | DEPARTMENTS | 3 | 00:00:01 | 3 |00:00:00.01 | 5 | |* 12 | INDEX UNIQUE SCAN | DEPT_ID_PK | 3 | | 3 |00:00:00.01 | 2 | | 13 | TABLE ACCESS BY INDEX ROWID | LOCATIONS | 3 | 00:00:01 | 3 |00:00:00.01 | 5 | |* 14 | INDEX UNIQUE SCAN | LOC_ID_PK | 3 | | 3 |00:00:00.01 | 2 | |* 15 | INDEX UNIQUE SCAN | COUNTRY_C_ID_PK | 3 | | 3 |00:00:00.01 | 2 | |* 16 | INDEX UNIQUE SCAN | REG_ID_PK | 3 | | 3 |00:00:00.01 | 2 | | 17 | TABLE ACCESS BY INDEX ROWID | REGIONS | 3 | 00:00:01 | 3 |00:00:00.01 | 3 | --------------------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 8 - access("E"."DEPARTMENT_ID"=90) 10 - access("J"."JOB_ID"="E"."JOB_ID") 12 - access("D"."DEPARTMENT_ID"=90) 14 - access("L"."LOCATION_ID"="D"."LOCATION_ID") 15 - access("C"."COUNTRY_ID"="L"."COUNTRY_ID") 16 - access("R"."REGION_ID"="C"."REGION_ID") Answer the following questions: Which line is the root of the tree? How many children does Line 0 have? Name them. How many children does Line 1 have? Name them. How many children does Line 7 have? Name them. Which line in the plan is executed first? Which line in the plan is executed last? Which lines are leaf nodes? (Hint: there are seven leaf nodes) Once you have figured out the answers to all the above questions, you are ready to reel off the order in which the above query plan is meant to be read. Using the post-order method, we get 8, 7, 10, 9, 6, 12, 11, 5, 14, 13, 4,15, 3, 16, 2, 17, 1, and 0. Here is a pictorial version of the query plan; click on it to see it in full resolution. 
Convince yourself that dbms_xplan.display_cursor has printed the nodes of the query plan in pre-order sequence while the query plan is actually meant to be read in post-order sequence. Notice the left-leaning shape of the tree. Such a tree is called a left-deep tree. It clearly illustrates the fundamental strategy by which a SQL query is converted into a series of join operations: first a “driving” table is selected from amongst the tables involved in the query and then the remaining tables are joined to it. The selection of the driving table and the order in which the remaining tables are joined is the tricky problem that the query optimizer has to solve. In future posts, you will see other types of trees. P.S. The above query plan contains a very curious oddity. All the tables except the Countries table require an “index scan” operation followed by a “table access by index rowid” operation. The Countries table apparently requires only an “index scan” operation. Can you explain why? Hint: Use SQL Developer to examine the table-creation DDL of the Countries table. Previous installment: Oracle-in-the-box First installment: DON’T PANIC Iggy Fernandez
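A practical aside that is not part of the original article: if anything else runs in your session between your query and the call to dbms_xplan.display_cursor (GUI tools such as SQL Developer often issue statements of their own behind the scenes), passing null for sql_id will pick up the wrong cursor. In that case you can look up the sql_id yourself and pass it explicitly; the values used below are the ones shown in the output above:
select sql_id, child_number from v$sql where sql_text like 'SELECT /*+ GATHER_PLAN_STATISTICS%';
select * from table(dbms_xplan.display_cursor( sql_id => '7qdcjk3rqfbz9', cursor_child_no => 0, format => 'typical iostats last -rows -bytes -cost' ));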

Blog Post: Introduction to ASM Filter Driver (AFD)

It all started when I finished my ASM session at the OTN Tour 2014 Brazil. After the talk I took five minutes of questions about the presentation, and at that moment Fernando Simon asked me about the new "ASM Filter Driver (AFD)"; his question was "will ASM Filter Driver replace ASMLib?" I couldn't answer that question because at that moment I didn't know anything about ASM Filter Driver. The reason was that the OTN Tour in Brazil was on August 2, and ASM Filter Driver was released in 12.1.0.2 just two weeks before my session, so the concept was completely new to me. As I promised him, I am now writing these articles in order to introduce the new ASM Filter Driver to the community: Introduction to ASM Filter Driver (AFD) How to migrate disks from ASMLib to ASM Filter Driver (GI Standalone Environment) How to migrate disks from ASM Filter Driver to ASMLib (GI Standalone Environment) How to migrate disks from ASMLib to ASM Filter Driver (GI Cluster Environment) How to migrate disks from ASM Filter Driver to ASMLib (GI Cluster Environment) While I was investigating ASM Filter Driver I read two articles written by @flashdba that I recommend reading as well: Oracle 12.1.0.2 ASM Filter Driver: First Impressions Oracle 12.1.0.2 ASM Filter Driver: Advanced Format Fail Fernando Simon: regarding your question, I will let the Oracle documentation answer you: Oracle ASM Filter Driver is the recommended replacement for ASMLIB http://docs.oracle.com/database/121/OSTMG/ostmg_gloss.htm#OSTMG95298 I saw that Oracle says "replacement for ASMLib" ONLY in that link. @flashdba says the same in his first article; however, the ASM Filter Driver documentation itself doesn't say anything about it. It even says that you should decide what to use, either ASM Filter Driver or ASMLib, and there is also a section inside the documentation with information to help you make that decision. Well, Fernando Simon, let's trust the " Oracle® Automatic Storage Management Administrator's Guide " and say that ASM Filter Driver is the replacement for ASMLib. Before reading my articles about how to configure ASM Filter Driver, be sure that you are using an spfile for the ASM instance. When I started my investigation into AFD I got the following error: ORA-32001: write to SPFILE requested but no SPFILE is in use (DBD ERROR: OCIStmtExecute) NOTE: Executing /u01/app/grid/product/12.1.0/grid//bin/kfod op=getclstype.. NOTE: unable to execute kfod... 25-Aug-14 20:47 in flex mode disabled 25-Aug-14 20:47 Printing the connection string ENV{ORACLE_HOME} = /u01/app/grid/product/12.1.0/grid/ ENV{ORACLE_SID} = +ASM contype = sysasm driver = dbi:Oracle: instanceName = ServiceName = +ASM 25-Aug-14 20:47 Successfully connected to ASM instance +ASM 25-Aug-14 20:47 /* ASMCMD */ . alter system set asm_diskstring='/dev/oracleasm/disks/*' SCOPE=BOTH 25-Aug-14 20:47 ORA-32001: write to SPFILE requested but no SPFILE is in use (DBD ERROR: OCIStmtExecute) 25-Aug-14 20:47 Unhandled Exception non-interactive mode Exception Caught Type :asmcmdexceptions Message :General Exception File :/u01/app/grid/product/12.1.0/grid//lib/asmcmdshare.pm Line :3343 That was because I wasn't using an spfile for my ASM instance. You can follow the instructions in the metalink note "ORA-32001, no spfile used when ASM instance is started.
( Doc ID 1902956.1 )". My steps for configuring an spfile for the ASM instance were: [grid@db12102 ~]$ sqlplus / as sysasm SQL> show parameters spfile NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ spfile string SQL> exit No spfile was used. SQL> create spfile='/u01/app/grid/product/12.1.0/grid/dbs/spfile+ASM.ora' from memory; File created. SQL> exit Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production With the Automatic Storage Management option [grid@db12102 ~]$ asmcmd spset /u01/app/grid/product/12.1.0/grid/dbs/spfile+ASM.ora [grid@db12102 ~]$ asmcmd spget /u01/app/grid/product/12.1.0/grid/dbs/spfile+ASM.ora [grid@db12102 ~]$ srvctl stop asm -f [grid@db12102 ~]$ srvctl start asm [grid@db12102 ~]$ sqlplus / as sysasm SQL*Plus: Release 12.1.0.2.0 Production on Mon Aug 25 20:58:19 2014 Copyright (c) 1982, 2014, Oracle. All rights reserved. Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production With the Automatic Storage Management option SQL> show parameters spfile NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ spfile string /u01/app/grid/product/12.1.0/g rid/dbs/spfile+ASM.ora SQL>

Blog Post: How to migrate disks from ASMLib to ASM Filter Driver (GI Standalone Environment)

Well, in this blog post you will find information about how to migrate your disks in order to use AFD. Keep in mind that in my environment I am using only one database and only one diskgroup; if you are planning to migrate from ASMLib to AFD I recommend migrating all your diskgroups. To know which diskgroups I have: [grid@db12102 trace]$ asmcmd lsdg State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name MOUNTED EXTERN N 512 4096 1048576 31455 29491 0 29491 0 N DATA / What is the path of each disk used by my diskgroups? Remember to list all the disks used by all your diskgroups. [grid@db12102 trace]$ asmcmd lsdsk -G data Path /dev/oracleasm/disks/ASMDISK1 [grid@db12102 trace]$ srvctl status asm ASM is running on db12102 You should be using an spfile at this point: [grid@db12102 trace]$ asmcmd spget /u01/app/grid/product/12.1.0/grid/dbs/spfile+ASM.ora The next step in the documentation is the following, but it doesn't work here because AFD has never been used in this environment (and you are probably in the same situation right now), so you can skip this step and continue: [grid@db12102 trace]$ $ORACLE_HOME/bin/asmcmd afd_label disk_path label --migrate If you execute it, you will see that AFD is not loaded: [grid@db12102 trace]$ asmcmd afd_label ASMDISK1 /dev/oracleasm/disks/ASMDISK1 --migrate ASMCMD-9520: AFD is not 'Loaded' Configuring.... [grid@db12102 trace]$ asmcmd dsset '/dev/oracleasm/disks/*','AFD:*' If you get the following error, you should read my first article regarding AFD: [grid@db12102 trace]$ asmcmd dsset '/dev/oracleasm/disks/*','AFD:*' ORA-32001: write to SPFILE requested but no SPFILE is in use (DBD ERROR: OCIStmtExecute) Stopping all the databases: [grid@db12102 trace]$ srvctl stop database -db orcl [grid@db12102 trace]$ srvctl stop asm -f [root@db12102 dev]# crsctl stop has CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db12102' CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'db12102' CRS-2673: Attempting to stop 'ora.evmd' on 'db12102' CRS-2673: Attempting to stop 'ora.asm' on 'db12102' CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'db12102' succeeded CRS-2677: Stop of 'ora.evmd' on 'db12102' succeeded CRS-2677: Stop of 'ora.asm' on 'db12102' succeeded CRS-2673: Attempting to stop 'ora.cssd' on 'db12102' CRS-2677: Stop of 'ora.cssd' on 'db12102' succeeded CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db12102' has completed CRS-4133: Oracle High Availability Services has been stopped. [root@db12102 dev]# [root@db12102 dev]# asmcmd afd_configure Connected to an idle instance. AFD-627: AFD distribution files found. AFD-636: Installing requested AFD software. AFD-637: Loading installed AFD drivers. AFD-9321: Creating udev for AFD. AFD-9323: Creating module dependencies - this may take some time. AFD-9154: Loading 'oracleafd.ko' driver. AFD-649: Verifying AFD devices. AFD-9156: Detecting control device '/dev/oracleafd/admin'. AFD-638: AFD installation correctness verified. Modifying resource dependencies - this may take some time. ASMCMD-9524: AFD configuration failed 'ERROR: OHASD start failed' ------Ignore this; @flashdba and I have both received this error, but it doesn't affect the AFD behavior [root@db12102 dev]# [grid@db12102 trace]$ asmcmd afd_state Connected to an idle instance. ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'DEFAULT' on host 'db12102.oraworld.com' [grid@db12102 trace]$ As you can see above, even with ASMCMD-9524, AFD is working well.
[root@db12102 dev]# crsctl start has CRS-4123: Oracle High Availability Services has been started. [grid@db12102 trace]$ asmcmd afd_dsset /dev/oracleafd/disks/* [grid@db12102 trace]$ srvctl status diskgroup -diskgroup data Disk Group data is running on db12102 [grid@db12102 trace]$ Starting up all databases: [grid@db12102 trace]$ srvctl start database -db orcl [grid@db12102 trace]$ srvctl status database -db orcl Database is running. Do you remember that with ASMLib our disks were owned by grid:asmadmin? Well, AFD is interesting: now the disks are owned by root:root. [grid@db12102 trace]$ ls -ltr /dev/oracleafd/disks/ -rw-r--r-- 1 root root 10 Aug 26 04:45 ASMDISK1 While I was doing my practice runs with AFD I hit some issues because of the following parameters, so I am leaving the extra information here to help you avoid running into the same thing: [grid@db12102 trace]$ asmcmd dsget parameter:/dev/oracleasm/disks/*, AFD:* profile:/dev/oracleasm/disks/*,AFD:* [grid@db12102 trace]$ asmcmd spget /u01/app/grid/product/12.1.0/grid/dbs/spfile+ASM.ora [root@db12102 dev]# asmcmd afd_dsget Connected to an idle instance. AFD discovery string: '/dev/oracleafd/disks/ASMDISK1' by Deiby Gómez
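A quick way to double-check the result (my own addition, not part of the original post, run as the grid user) is to list the disks AFD has labeled and to look at the diskgroup's disk path again:
asmcmd afd_lsdsk
asmcmd lsdsk -G data
If the migration worked, the Path reported for the DATA diskgroup should now come through AFD (an AFD:ASMDISK1-style label) rather than the /dev/oracleasm/disks/ASMDISK1 path shown at the start of this post; if it doesn't, revisit the discovery strings set above.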

Blog Post: How to migrate disks from ASM Filter Driver to ASMLib (GI Standalone Environment)

What if you didn't like AFD and you want to go back to ASMLib? Well, just follow these steps and you will have all your diskgroups using ASMLib again. [root@db12102 ~]# crsctl stop has CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db12102' CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'db12102' CRS-2673: Attempting to stop 'ora.evmd' on 'db12102' CRS-2673: Attempting to stop 'ora.asm' on 'db12102' CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'db12102' succeeded CRS-2677: Stop of 'ora.evmd' on 'db12102' succeeded CRS-2677: Stop of 'ora.asm' on 'db12102' succeeded CRS-2673: Attempting to stop 'ora.cssd' on 'db12102' CRS-2677: Stop of 'ora.cssd' on 'db12102' succeeded CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db12102' has completed CRS-4133: Oracle High Availability Services has been stopped. [root@db12102 ~]# acfsload stop [root@db12102 ~]# asmcmd afd_deconfigure Connected to an idle instance. AFD-632: Existing AFD installation detected. AFD-634: Removing previous AFD installation. AFD-635: Previous AFD components successfully removed. Modifying resource dependencies - this may take some time. ASMCMD-9525: AFD deconfiguration failed 'ERROR: OHASD start failed' ------Ignore this; @flashdba and I have both received this error, but it doesn't affect the AFD behavior [root@db12102 ~]# acfsload start ACFS-9391: Checking for existing ADVM/ACFS installation. ACFS-9392: Validating ADVM/ACFS installation files for operating system. ACFS-9393: Verifying ASM Administrator setup. ACFS-9308: Loading installed ADVM/ACFS drivers. ACFS-9154: Loading 'oracleoks.ko' driver. ACFS-9154: Loading 'oracleadvm.ko' driver. ACFS-9154: Loading 'oracleacfs.ko' driver. ACFS-9327: Verifying ADVM/ACFS devices. ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'. ACFS-9156: Detecting control device '/dev/ofsctl'. ACFS-9322: completed [root@db12102 ~]# crsctl start has CRS-4123: Oracle High Availability Services has been started. [grid@db12102 ~]$ asmcmd afd_state Connected to an idle instance. ASMCMD-9526: The AFD state is 'NOT INSTALLED' and filtering is 'DEFAULT' on host 'db12102.oraworld.com' (wait until the ASM instance is running) [grid@db12102 ~]$ asmcmd dsset /dev/oracleasm/disks/* [grid@db12102 dev]$ ls /dev/oracleasm/disks ls: cannot access /dev/oracleasm/disks: No such file or directory [grid@db12102 dev]$ oracleasm status Checking if ASM is loaded: no Checking if /dev/oracleasm is mounted: no [root@db12102 ~]# /etc/init.d/oracleasm configure Configuring the Oracle ASM library driver. This will configure the on-boot properties of the Oracle ASM library driver. The following questions will determine whether the driver is loaded on boot and what permissions it will have. The current values will be shown in brackets ('[]'). Hitting ENTER without typing an answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface [grid]: Default group to own the driver interface [asmadmin]: Start Oracle ASM library driver on boot (y/n) [ y ]: Scan for Oracle ASM disks on boot (y/n) [ y ]: Writing Oracle ASM library driver configuration: done Initializing the Oracle ASMLib driver: [ OK ] Scanning the system for Oracle ASMLib disks: [ OK ] [root@db12102 ~]# su - grid [grid@db12102 trace]$ oracleasm init [grid@db12102 ~]$ oracleasm status Checking if ASM is loaded: yes Checking if /dev/oracleasm is mounted: yes [grid@db12102 ~]$ oracleasm listdisks ASMDISK1 [grid@db12102 ~]$ oracleasm querydisk ASMDISK1 Disk "ASMDISK1" is a valid ASM disk [grid@db12102 ~]$ ls /dev/oracleasm/disks ASMDISK1 [grid@db12102 ~]$ srvctl start diskgroup -diskgroup data [grid@db12102 ~]$ [grid@db12102 ~]$ sqlplus / as sysasm SQL> select name, state from v$asm_diskgroup; NAME STATE ------------------------------ ----------- DATA MOUNTED [grid@db12102 ~]$ srvctl start database -db orcl [grid@db12102 ~]$ srvctl status database -db orcl Database is running. [grid@db12102 ~]$ by Deiby Gómez
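As a last check (my own addition, not part of the original post), you can confirm that the diskgroup is once again discovering its disk through the ASMLib path, exactly as it did before the migration to AFD:
asmcmd lsdsk -G data
You should see the same Path value shown at the start of the previous article, /dev/oracleasm/disks/ASMDISK1, which means everything is back where it started.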

Wiki Page: Rolling Out-of-Place Patching for Oracle Grid Infrastructure 12c and 11gR2: Cloning on Steroids

In this article published on Toad World, the procedure for performing Rolling Out-of-Place patching for Oracle Grid Infrastructure is described in detail: it starts by cloning the active GI home directory and applying the desired patch on top of the clone. These steps are repeated on each and every node of the cluster we are working on, but what happens if we have a large number of nodes? Even if we can carry out these tasks in parallel, it still takes time and effort to repeat the procedure over and over again. In this article we will look at how to apply a patch to Oracle Grid Infrastructure (GI from here on), whether it is version 11.2 or 12.1, with the emphasis no longer only on minimizing the service outage, but also on minimizing the effort needed to apply the patch. First, let's take the following initial assumptions into account: Original GI home directory: /u01/app/grid/12.1.0/grid_1. Cloned GI home directory: /u01/app/grid/12.1.0/grid_2. The GI version in use is 12.1.0.1.0, and we want to apply PSU 12.1.0.1.4. The most recent version of OPatch has been downloaded and installed correctly. The active GI home directory has already been cloned, and the desired patch applied to the clone, on one of the nodes of the cluster (node1); it remains to be done on the other nodes.
Creating the Golden Image
1. On the first node of the cluster (node1), run steps 1 through 4 detailed in the article Rolling Out-of-Place patching para Oracle Grid Infrastucture, after which we will have a cloned GI home directory with the patch duly applied, ready to replace the previous, still active one (the switch of GI home directories). 2. Instead of repeating steps 1 through 4 on each and every node, we will use the cloned GI home directory we already have as the basis for a Golden Image, starting by getting rid of all the files that are unnecessary or specific to the first node (node1). [root@node1 ~]# export GRID_HOME=/u01/app/grid/12.1.0/grid_2 [root@node1 ~]# export THIS_NODE=`hostname -s` [root@node1 ~]# cd ${GRID_HOME} [root@node1 grid_2]# find . -name "*${THIS_NODE}*" -exec rm -rf {} \; [root@node1 grid_2]# rm -rf root.sh* [root@node1 grid_2]# rm -rf cdata/* [root@node1 grid_2]# rm -rf crf/* [root@node1 grid_2]# rm -rf crs/init/* [root@node1 grid_2]# rm -rf crs/install/s_crsconfig_${THIS_NODE}_env.txt [root@node1 grid_2]# rm -rf crs/install/crsconfig_addparams [root@node1 grid_2]# rm -rf inventory/backup/* [root@node1 grid_2]# rm -rf log/diag/* [root@node1 grid_2]# rm -rf network/admin/*.ora [root@node1 grid_2]# rm -rf rdbms/audit/* [root@node1 grid_2]# rm -rf rdbms/log/* [root@node1 grid_2]# find . -name '*.ouibak' -exec rm {} \; [root@node1 grid_2]# find . -name '*.ouibak.1' -exec rm {} \; [root@node1 grid_2]# find cfgtoollogs -type f -exec rm -f {} \; [root@node1 grid_2]# find gpnp -type f -exec rm -f {} \; 3. We will now take a copy of the cloned GI home directory, which becomes our Golden Image. In this case we will create a compressed tarball. [root@node1 ~]# export GRID_HOME=/u01/app/grid/12.1.0/grid_2 [root@node1 ~]# cd ${GRID_HOME} [root@node1 grid_2]# tar -zcpf /stage/gridHome121014.tgz .
Cloning the Golden Image
Instead of copying the active GI home directory of each remaining node of the cluster and applying the desired patch to it, we will use the Golden Image built from the work already done on the first node of the cluster (node1); this way we save time and effort. 4.
Transfer and unpack the Golden Image on the next node in the list, taking special care to copy the "crsconfig_params" file from the active GI home directory of the node we are working on. [root@node2 ~]# export GRID_HOME=/u01/app/grid/12.1.0/grid_2 [root@node2 ~]# export GRID_HOME_OLD=/u01/app/grid/12.1.0/grid_1 [root@node2 ~]# export THIS_NODE=`hostname -s` [root@node2 ~]# mkdir ${GRID_HOME} [root@node2 ~]# chown -R oracle:oinstall ${GRID_HOME} [root@node2 ~]# chmod -R 775 ${GRID_HOME} [root@node2 ~]# cd ${GRID_HOME} [root@node2 grid_2]# tar -zxpf /stage/gridHome121014.tgz . [root@node2 grid_2]# /bin/cp -rfp ${GRID_HOME_OLD}/crs/install/crsconfig_params ${GRID_HOME}/crs/install/. 5. Unlock the cloned GI home directory, bearing in mind that the commands differ somewhat depending on the GI version, but with the same result: changing the files owned by "root" so that they belong to "oracle". GI 12.1 [root@node2 ~]# export GRID_HOME=/u01/app/grid/12.1.0/grid_2 [root@node2 ~]# /usr/bin/perl ${GRID_HOME}/crs/install/rootcrs.pl -prepatch -dstcrshome ${GRID_HOME} Using configuration parameter file: /u01/app/grid/12.1.0/grid_2/crs/install/crsconfig_params 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/log/node2/crfmond' does not exist 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/crf/admin/run/crfmond' does not exist 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/log/node2/agent' does not exist 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/log/node2/crsd' does not exist 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/crf/admin' does not exist 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/log/node2/agent/crsd' does not exist 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/log/node2/ctssd' does not exist 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/log/node2/gnsd' does not exist 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/log/node2/crflogd' does not exist 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/log/node2/agent/ohasd' does not exist 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/auth/ohasd/node2' does not exist 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/auth/evm/node2' does not exist 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/auth/crs/node2' does not exist 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/log/node2/ohasd' does not exist 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/crf/db/node2' does not exist 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/log/node2' does not exist 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/crf/db' does not exist 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/crf/admin/run/crflogd' does not exist 2014/07/22 03:22:48 CLSRSC-46: Error: '/u01/app/grid/12.1.0/grid_2/auth/css/node2' does not exist 2014/07/22 03:22:49 CLSRSC-347: Successfully unlock /u01/app/grid/12.1.0/grid_2 GI 11.2 [root@node2 ~]# export GRID_HOME=/u01/app/grid/11.2.0/grid_2 [root@node2 ~]# /usr/bin/perl ${GRID_HOME}/OPatch/crs/patch112.pl -unlock -desthome ${GRID_HOME} Prototype mismatch: sub main::trim: none vs ($) at /u01/app/grid/11.2.0/grid_2/OPatch/crs/patch112.pl line 401.
opatch auto log file location is /u01/app/grid/11.2.0/grid_2/crs/install/../../cfgtoollogs/opatchauto2014-07-22_06-45-14.log Detected Oracle Clusterware install Using configuration parameter file: /u01/app/grid/11.2.0/grid_2/crs/install/crsconfig_params Successfully unlock /u01/app/grid/11.2.0/grid_2 6. Clone the installation with the clone.pl utility. [oracle@node2 ~]$ export GRID_HOME=/u01/app/grid/12.1.0/grid_2 [oracle@node2 ~]$ export THIS_NODE=`hostname -s` [oracle@node2 ~]$ /usr/bin/perl ${GRID_HOME}/clone/bin/clone.pl \ ORACLE_BASE=/u01/app/oracle \ ORACLE_HOME=$GRID_HOME \ ORACLE_HOME_NAME=OraGI12Home2 \ INVENTORY_LOCATION=/u01/app/oraInventory \ "CLUSTER_NODES={node1,node2,node3,node4,node5,node6,node7,node8}" \ LOCAL_NODE=${THIS_NODE} \ SHOW_ROOTSH_CONFIRMATION=false \ CRS=true ./runInstaller -clone -waitForCompletion "ORACLE_BASE=/u01/app/oracle" "ORACLE_HOME=/u01/app/grid/12.1.0/grid_2" "ORACLE_HOME_NAME=OraGI12Home2" "INVENTORY_LOCATION=/u01/app/oraInventory" "CLUSTER_NODES={node1,node2,node3,node4,node5,node6,node7,node8}" "LOCAL_NODE=node2" "SHOW_ROOTSH_CONFIRMATION=false" "CRS=true" -silent -paramFile /u01/app/grid/12.1.0/grid_2/clone/clone_oraparam.ini Starting Oracle Universal Installer... Checking Temp space: must be greater than 500 MB. Actual 5938 MB Passed Checking swap space: must be greater than 500 MB. Actual 1020 MB Passed Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-07-22_02-44-42AM. Please wait ...You can find the log of this install session at: /u01/app/oraInventory/logs/cloneActions2014-07-22_02-44-42AM.log .................................................. 5% Done. .................................................. 10% Done. .................................................. 15% Done. .................................................. 20% Done. .................................................. 25% Done. .................................................. 30% Done. .................................................. 35% Done. .................................................. 40% Done. .................................................. 45% Done. .................................................. 50% Done. .................................................. 55% Done. .................................................. 60% Done. .................................................. 65% Done. .................................................. 70% Done. .................................................. 75% Done. .................................................. 80% Done. .................................................. 85% Done. .................................................. 90% Done. .................................................. 95% Done. Copy files in progress. Copy files successful. Link binaries in progress. Link binaries successful. Setup files in progress. Setup files successful. Setup Inventory in progress. Setup Inventory successful. Finish Setup successful. The cloning of OraGI12Home2 was successful. Please check '/u01/app/oraInventory/logs/cloneActions2014-07-22_02-44-42AM.log' for more details. As a root user, execute the following script(s): 1. /u01/app/grid/12.1.0/grid_2/root.sh Execute /u01/app/grid/12.1.0/grid_2/root.sh on the following nodes: [node2] .................................................. 100% Done. Even though at the end of the cloning process we are told to execute the "root.sh" script, we must ignore it. 7.
Copy the cluster configuration files that are specific to the node we are working on. Note: this must also be done on the first node (node1), because those files were deleted as part of the step prior to generating the Golden Image. [root@node2 ~]# export GRID_HOME=/u01/app/grid/12.1.0/grid_2 [root@node2 ~]# export GRID_HOME_OLD=/u01/app/grid/12.1.0/grid_1 [root@node2 ~]# export THIS_NODE=`hostname -s` [root@node2 ~]# /bin/cp -rfp ${GRID_HOME_OLD}/cdata/${THIS_NODE}.olr ${GRID_HOME}/cdata/. [root@node2 ~]# /bin/cp -rfp ${GRID_HOME_OLD}/crs/install/s_crsconfig_${THIS_NODE}_env.txt ${GRID_HOME}/crs/install/. [root@node2 ~]# /bin/cp -Rfp ${GRID_HOME_OLD}/gpnp/* ${GRID_HOME}/gpnp/. 8. Repeat steps 4 through 7 on each and every one of the remaining nodes of the cluster. They can be run on several (or all) nodes in parallel, because we are working on the cloned GI home directory and therefore the availability of the services is not affected.
Switching GI Home Directories
9. Once every node of the cluster has the cloned GI home directory with the patch duly applied, perform the switch of GI home directories, one node at a time, starting with the first node (node1), as described in step 7 of the article Rolling Out-of-Place patching para Oracle Grid Infrastucture. Congratulations! If you got this far, you now have a cluster running with the patch you planned to apply, with minimal effort, and you only had to suspend the service for a very few minutes.
Conclusions
The patching process for Oracle Grid Infrastructure can be enormously simplified by using a Golden Image, since creating a cloned GI home directory and applying the desired patch to it is done only once and can be reused as many times as necessary. Keep in mind that a Golden Image can be used not only to simplify the patch application procedure, but also for upgrades, for extending a cluster to more nodes, and even for creating new clusters, saving a great deal of time and effort, so it is highly recommended to become familiar with its creation and use.
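A final practical note, which is my own addition and not part of the original procedure: once the switch has been completed on every node, you can confirm which version the clusterware is actually running and which patches the new home carries with a few standard commands run against the cloned home:
crsctl query crs activeversion
crsctl query crs softwareversion
/u01/app/grid/12.1.0/grid_2/OPatch/opatch lsinventory
The two crsctl queries should succeed against the new home on every node, and the opatch lsinventory output, run from the cloned home, should list the PSU you rolled out (12.1.0.1.4 in the scenario assumed above) among the installed patches.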

