
Blog Post: Why my execution plan has not been shared – Part I

For most people, troubleshooting SQL query performance problems is a necessity rather than a preference. Very often a trial-and-error strategy is used to overcome those frustrating situations. Unfortunately, trial and error is not reliable, particularly because of the burgeoning number of possible performance root causes. When a query deviates from its usual accepted response time, this is very probably due to a change in its execution plan. Fortunately, Oracle has implemented an internal piece of code that tells us the reason why an existing execution plan has not been shared and a new one has therefore been hard parsed. This article examines a few of the most common reasons for an execution plan not being shared, outlines the criteria under which they appear, and demonstrates their occurrence via reproducible examples.

Tanel Poder script (nonshared.sql)

As stated above, Oracle externalises the non-sharing reasons of an execution plan into a dedicated view named v$sql_shared_cursor, having the following description:

SQL> desc v$sql_shared_cursor
     Name                              Type
     --------------------------------- ---------------
  1  SQL_ID                            VARCHAR2(13)
  2  ADDRESS                           RAW(8)
  3  CHILD_ADDRESS                     RAW(8)
  4  CHILD_NUMBER                      NUMBER
  5  UNBOUND_CURSOR                    VARCHAR2(1)
  6  SQL_TYPE_MISMATCH                 VARCHAR2(1)
  7  OPTIMIZER_MISMATCH                VARCHAR2(1)
  8  OUTLINE_MISMATCH                  VARCHAR2(1)
  9  STATS_ROW_MISMATCH                VARCHAR2(1)
 10  LITERAL_MISMATCH                  VARCHAR2(1)
 11  FORCE_HARD_PARSE                  VARCHAR2(1)
 12  EXPLAIN_PLAN_CURSOR               VARCHAR2(1)
 13  BUFFERED_DML_MISMATCH             VARCHAR2(1)
 14  PDML_ENV_MISMATCH                 VARCHAR2(1)
 15  INST_DRTLD_MISMATCH               VARCHAR2(1)
 16  SLAVE_QC_MISMATCH                 VARCHAR2(1)
 17  TYPECHECK_MISMATCH                VARCHAR2(1)
 18  AUTH_CHECK_MISMATCH               VARCHAR2(1)
 19  BIND_MISMATCH                     VARCHAR2(1)
 20  DESCRIBE_MISMATCH                 VARCHAR2(1)
 21  LANGUAGE_MISMATCH                 VARCHAR2(1)
 22  TRANSLATION_MISMATCH              VARCHAR2(1)
 23  BIND_EQUIV_FAILURE                VARCHAR2(1)
 24  INSUFF_PRIVS                      VARCHAR2(1)
 25  INSUFF_PRIVS_REM                  VARCHAR2(1)
 26  REMOTE_TRANS_MISMATCH             VARCHAR2(1)
 27  LOGMINER_SESSION_MISMATCH         VARCHAR2(1)
 28  INCOMP_LTRL_MISMATCH              VARCHAR2(1)
 29  OVERLAP_TIME_MISMATCH             VARCHAR2(1)
 30  EDITION_MISMATCH                  VARCHAR2(1)
 31  MV_QUERY_GEN_MISMATCH             VARCHAR2(1)
 32  USER_BIND_PEEK_MISMATCH           VARCHAR2(1)
 33  TYPCHK_DEP_MISMATCH               VARCHAR2(1)
 34  NO_TRIGGER_MISMATCH               VARCHAR2(1)
 35  FLASHBACK_CURSOR                  VARCHAR2(1)
 36  ANYDATA_TRANSFORMATION            VARCHAR2(1)
 37  PDDL_ENV_MISMATCH                 VARCHAR2(1)
 38  TOP_LEVEL_RPI_CURSOR              VARCHAR2(1)
 39  DIFFERENT_LONG_LENGTH             VARCHAR2(1)
 40  LOGICAL_STANDBY_APPLY             VARCHAR2(1)
 41  DIFF_CALL_DURN                    VARCHAR2(1)
 42  BIND_UACS_DIFF                    VARCHAR2(1)
 43  PLSQL_CMP_SWITCHS_DIFF            VARCHAR2(1)
 44  CURSOR_PARTS_MISMATCH             VARCHAR2(1)
 45  STB_OBJECT_MISMATCH               VARCHAR2(1)
 46  CROSSEDITION_TRIGGER_MISMATCH     VARCHAR2(1)
 47  PQ_SLAVE_MISMATCH                 VARCHAR2(1)
 48  TOP_LEVEL_DDL_MISMATCH            VARCHAR2(1)
 49  MULTI_PX_MISMATCH                 VARCHAR2(1)
 50  BIND_PEEKED_PQ_MISMATCH           VARCHAR2(1)
 51  MV_REWRITE_MISMATCH               VARCHAR2(1)
 52  ROLL_INVALID_MISMATCH             VARCHAR2(1)
 53  OPTIMIZER_MODE_MISMATCH           VARCHAR2(1)
 54  PX_MISMATCH                       VARCHAR2(1)
 55  MV_STALEOBJ_MISMATCH              VARCHAR2(1)
 56  FLASHBACK_TABLE_MISMATCH          VARCHAR2(1)
 57  LITREP_COMP_MISMATCH              VARCHAR2(1)
 58  PLSQL_DEBUG                       VARCHAR2(1)
 59  LOAD_OPTIMIZER_STATS              VARCHAR2(1)
 60  ACL_MISMATCH                      VARCHAR2(1)
 61  FLASHBACK_ARCHIVE_MISMATCH        VARCHAR2(1)
 62  LOCK_USER_SCHEMA_FAILED           VARCHAR2(1)
 63  REMOTE_MAPPING_MISMATCH           VARCHAR2(1)
 64  LOAD_RUNTIME_HEAP_FAILED          VARCHAR2(1)
 65  HASH_MATCH_FAILED                 VARCHAR2(1)
 66  PURGED_CURSOR                     VARCHAR2(1)
 67  BIND_LENGTH_UPGRADEABLE           VARCHAR2(1)
 68  USE_FEEDBACK_STATS                VARCHAR2(1)
 69  REASON                            CLOB
 70  CON_ID                            NUMBER

This view contains 64 possible reasons for an execution plan (aka CHILD_NUMBER) to not be shared. This first article of the instalment examines a couple of reasons which I have seen kicking in very often in running systems. Typically, when an execution plan cannot be shared, the corresponding non-sharing VARCHAR2(1) column value is set to 'Y'. The view has not been engineered to make it easy to find the non-shared reason with a simple select statement; fortunately Tanel Poder has developed a SQL script (nonshared.sql) which we will be using throughout this article.

Having described the v$sql_shared_cursor view, let's now embark on the explanation of a few very common reasons, starting with the LOAD_OPTIMIZER_STATS reason.

LOAD_OPTIMIZER_STATS

Oracle defines this reason as follows: (Y|N) A hard parse is forced in order to initialize extended cursor sharing.

Extended cursor sharing is a feature introduced by Oracle starting from release 11g. Simply put, this feature aims to compile, for a bind-aware cursor, an optimal execution plan for each query execution. Extensive details about this feature can be found in Chapter 4 of the upcoming book I have co-authored. A reproducible example being worth a thousand words, let's see below how to produce this kind of non-sharing reason:

SQL> select banner from v$version where rownum=1;

BANNER
-----------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production

SQL> create table t1
       as select
            rownum n1,
            trunc((rownum-1)/5) n2,
            case mod(rownum, 100) when 0 then 'Hundred' else 'NotHundred' end v1
          from dual
          connect by level <= 1e4;

SQL> create index t1_ind on t1(v1);

SQL> begin
       dbms_stats.gather_table_stats(user, 't1', method_opt => 'for all columns size skewonly');
     end;
     /

The model consists of a heap table having a single-column b-tree index and a column that is not evenly distributed, as shown below:

SQL> desc t1
     Name              Type
     ----------------- ------------
  1  N1                NUMBER
  2  N2                NUMBER
  3  V1                VARCHAR2(10)

SQL> select v1, count(1) from t1 group by v1;

V1           COUNT(1)
---------- ----------
Hundred           100
NotHundred       9900

Collecting statistics on the t1 table and its columns will obviously compute a frequency histogram for the v1 column, as shown in the following:

SQL> begin
       dbms_stats.gather_table_stats(user, 't1', method_opt => 'for all columns size skewonly');
     end;
     /

PL/SQL procedure successfully completed.

SQL> select column_name, histogram
       from user_tab_col_statistics
      where table_name = 'T1'
        and column_name = 'V1';

COLUMN_NAM HISTOGRAM
---------- ---------------
V1         FREQUENCY

At this stage of the investigation we are ready, via the following piece of code, to give life to the LOAD_OPTIMIZER_STATS reason:

SQL> var v1 varchar2(10)
SQL> exec :v1 := 'Hundred'
SQL> select count(1) from t1 where v1 = :v1;

  COUNT(1)
----------
       100

SQL> select * from table(dbms_xplan.display_cursor);

SQL_ID d2h2phry5d881, child number 0
-------------------------------------
--------------------------------------------
| Id  | Operation         | Name   | Rows  |
--------------------------------------------
|   0 | SELECT STATEMENT  |        |       |
|   1 |  SORT AGGREGATE   |        |     1 |
|*  2 |   INDEX RANGE SCAN| T1_IND |   100 |
--------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("V1"=:V1)

As can be noticed through the above execution plan, the first select against the t1 table has been honored via an index range scan path, materialized by child cursor number 0.
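At this point it can also be instructive to check whether the cursor has been marked bind sensitive. The quick check below is not part of the original demonstration; it simply reads the IS_BIND_SENSITIVE and IS_BIND_AWARE columns of v$sql for our statement:

SQL> select child_number, is_bind_sensitive, is_bind_aware
       from v$sql
      where sql_id = 'd2h2phry5d881';

A cursor built against a column with a histogram and a peeked bind is expected to show IS_BIND_SENSITIVE = 'Y' at this stage, while IS_BIND_AWARE typically flips to 'Y' only after the subsequent executions shown below.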
Let's now change the bind variable value so that a new execution plan will be hard parsed, as shown below (we will have to run the query twice before seeing extended cursor sharing kick in, as explained in great detail in the above-mentioned book):

SQL> exec :v1 := 'NotHundred'
SQL> select count(1) from t1 where v1 = :v1;

  COUNT(1)
----------
      9900

SQL> select count(1) from t1 where v1 = :v1;

  COUNT(1)
----------
      9900

SQL> select * from table(dbms_xplan.display_cursor);

SQL_ID d2h2phry5d881, child number 1
------------------------------------------------
| Id  | Operation             | Name   | Rows  |
------------------------------------------------
|   0 | SELECT STATEMENT      |        |       |
|   1 |  SORT AGGREGATE       |        |     1 |
|*  2 |   INDEX FAST FULL SCAN| T1_IND |  9900 |
------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("V1"=:V1)

Spot how a new optimal execution plan has been compiled so that it better suits the new bind variable value. And, finally, here is the corresponding non-sharing reason stored by Oracle in the v$sql_shared_cursor view:

SQL> @nonshared d2h2phry5d881
Show why existing SQL child cursors were not reused (V$SQL_SHARED_CURSOR)...

SQL_ID               : d2h2phry5d881
ADDRESS              : 00007FFB7B4B2188
CHILD_ADDRESS        : 00007FFB7B4B0CD8
CHILD_NUMBER         : 0
LOAD_OPTIMIZER_STATS : Y
REASON               : 0 40 Bind mismatch(25) 0x0 extended_cursor_sharing
CON_ID               : 1
-----------------
SQL_ID               : d2h2phry5d881
ADDRESS              : 00007FFB7B4B2188
CHILD_ADDRESS        : 00007FFB7B487268
CHILD_NUMBER         : 1
REASON               :
CON_ID               : 1
-----------------

PL/SQL procedure successfully completed.

Thanks to the above demonstration, if you come across a switch of execution plan due to the LOAD_OPTIMIZER_STATS reason, you will now clearly know why Oracle decided to hard parse your query.

HASH_MATCH_FAILED

Oracle defines this reason as follows: (Y|N) No existing child cursors have the unsafe literal bind hash values required by the current cursor.

To say the least, this definition is not obvious at all. Let's first try to reproduce it and then attempt to reverse engineer a clearer definition for this reason.

SQL> alter system flush shared_pool;
SQL> exec :v1 := 'Hundred'
SQL> select count(1) from t1 where v1 = :v1;

  COUNT(1)
----------
       100

SQL> select * from table(dbms_xplan.display_cursor);

SQL_ID d2h2phry5d881, child number 0
Plan hash value: 2603166377
--------------------------------------------
| Id  | Operation         | Name   | Rows  |
--------------------------------------------
|   0 | SELECT STATEMENT  |        |       |
|   1 |  SORT AGGREGATE   |        |     1 |
|*  2 |   INDEX RANGE SCAN| T1_IND |   100 |
--------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("V1"=:V1)

Let's suppose that we no longer want the above query to run via an index range scan path and that a full table scan is what we prefer.
We can use a SQL Plan Baseline to achieve this, as shown in the following:

SQL> select /*+ full(t1) */ count(1) from t1 where v1 = :v1;

  COUNT(1)
----------
       100

SQL> select * from table(dbms_xplan.display_cursor);

SQL_ID 34z8wv6bsyu6u, child number 0
Plan hash value: 3724264953
-------------------------------------------
| Id  | Operation          | Name | Rows  |
-------------------------------------------
|   0 | SELECT STATEMENT   |      |       |
|   1 |  SORT AGGREGATE    |      |     1 |
|*  2 |   TABLE ACCESS FULL| T1   |   100 |
-------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("V1"=:V1)

Since we have a full table scan execution plan in memory, we can attach it to the original query so that it will always run via this full table scan plan (Transferspm.sql is at the bottom of this article):

SQL> @Transferspm
Enter value for original_sql_id: d2h2phry5d881
Enter value for modified_sql_id: 34z8wv6bsyu6u
Enter value for plan_hash_value: 3724264953

PL/SQL procedure successfully completed.

With this set-up in place, if we execute the initial query with a bind variable value that favours an index range scan, we will see that Oracle forces the query to use the SPM full table scan baselined plan, as demonstrated in the following:

SQL> select count(1) from t1 where v1 = :v1;

  COUNT(1)
----------
       100

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------
SQL_ID d2h2phry5d881, child number 0

An uncaught error happened in prepare_sql_statement : ORA-01403: no data found

NOTE: cannot fetch plan for SQL_ID: d2h2phry5d881, CHILD_NUMBER: 0
      Please verify value of SQL_ID and CHILD_NUMBER;
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)

The above error indicates that Oracle has come up with an execution plan whose Phv2 is not equal to the PlanId of the SPM baselined plan, and that it has inserted this new plan into the SPM baseline for future evolution. Re-executing the same query shows that the SPM plan is now indeed used:

SQL> select count(1) from t1 where v1 = :v1;
SQL> select * from table(dbms_xplan.display_cursor);

SQL_ID d2h2phry5d881, child number 0
-------------------------------------
select count(1) from t1 where v1 = :v1

Plan hash value: 3724264953
-------------------------------------------
| Id  ...
   - SQL plan baseline SQL_PLAN_3yfxpf3gpd2c5616acf47 used for this statement

And finally, using Tanel Poder's script, here is the corresponding non-sharing reason:

SQL> @nonshared d2h2phry5d881
Show why existing SQL child cursors were not reused (V$SQL_SHARED_CURSOR)...

SQL_ID            : d2h2phry5d881
ADDRESS           : 00007FFB809C75C8
CHILD_ADDRESS     : 00007FFB7EC96358
CHILD_NUMBER      : 0
HASH_MATCH_FAILED : Y
REASON            :
CON_ID            : 1
-----------------

This situation is not all that usual. We can see that the initial query (sql_id d2h2phry5d881) has been forced to use a different execution plan which was compiled for a different sql_id (34z8wv6bsyu6u). And this is fairly likely what the HASH_MATCH_FAILED reason means: a parent cursor is forced to run via an execution plan coming from a different or modified sql_id.

Summary

A large part of a SQL tuning strategy resides in understanding the reason that pushes the Oracle optimizer to compile a new execution plan.
We saw in this first part of the series how, using Tanel Poder's SQL script, we can easily get the non-sharing reason from the dedicated v$sql_shared_cursor view. We then explained two very common hard parse reasons, LOAD_OPTIMIZER_STATS and HASH_MATCH_FAILED. The next part of the instalment will examine three other reasons.

*Transferspm.sql

declare
  v_sql_text clob;
  ln_plans   pls_integer;
begin
  select replace(sql_fulltext, chr(00), ' ')
    into v_sql_text
    from gv$sqlarea
   where sql_id = trim('&original_sql_id')
     and rownum = 1;
  -- create sql_plan_baseline for original sql using plan from modified sql
  ln_plans := dbms_spm.load_plans_from_cursor_cache(
                sql_id          => trim('&modified_sql_id'),
                plan_hash_value => to_number(trim('&plan_hash_value')),
                sql_text        => v_sql_text);
  dbms_output.put_line('Plans Loaded: '||ln_plans);
end;
/
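As a side note, if Tanel Poder's nonshared.sql is not at hand, a rough equivalent check can be made directly against v$sql_shared_cursor. The sketch below is a minimal example rather than a replacement for the script: it lists only the two flags discussed in this article together with the REASON column for a given sql_id (adjust the column list to the reason you are chasing):

SQL> select child_number,
            load_optimizer_stats,
            hash_match_failed,
            reason
       from v$sql_shared_cursor
      where sql_id = 'd2h2phry5d881';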

Blog Post: Mi Recorrido con Tecnologia Oracle (Parte 4)

Note: This article was written originally in Spanish. If you are reading this article in another language, it is because of the automatic translator of ToadWorld.

In Part 3 of my journey with Oracle technology I finished with the following words: "So what comes next? Well, the next step will be to write an Oracle book. I have always dreamed of writing my own book and putting into it everything I have researched, read, studied and practised; a book I can be proud of. That will be my next step. I also plan to become an OCM 12c and to give a talk at Oracle OpenWorld in San Francisco."

Well, let's start from the back and work forward. When you start working with Oracle technologies you learn that the most important event in the world is the famous Oracle OpenWorld held in San Francisco. Thousands of people from all over the world attend it, and there are so many attendees and so many simultaneous sessions that you are left in awe. There are sessions of every kind: applications, databases, high availability, development, and so on. The technical stars of the Oracle world attend this event, and it is so important that even the CEO of Oracle Corporation himself, the famous Larry Ellison, appears at it. The event is also known for the bands that play there; Oracle OpenWorld has featured bands (booked exclusively for this event) of the stature of Aerosmith, Maroon 5, Pearl Jam and Elton John, among many others.

Now, can you imagine being part of this event, not as someone who simply buys a ticket (an attendee), but as a speaker, sharing your knowledge in an official Oracle OpenWorld session? Can you imagine it? I think it is the dream of every Oracle DBA. My dream was similar: when I started working with Oracle technology I dreamed of attending Oracle OpenWorld; being a speaker at such an important event never even crossed my mind. Back then, if someone had told me "Hey Deiby, you are going to Oracle OpenWorld", I would have been the happiest person in the world. I never imagined this dream would come true, never imagined that a few years later I would be attending Oracle OpenWorld in San Francisco, and much less that the first time I attended it would be as a speaker. Yes, a speaker. It is rightly said that if you work hard on your dreams, sooner or later you see them come true. Life is not about luck; it is about having dreams, many dreams, but also about staying awake into the small hours, day after day, to fulfil them. And that is how Oracle OpenWorld 2015 became etched in my mind: already an Oracle ACE Director and an Oracle Certified Master 11g, there I was representing Guatemala (I even included the Guatemalan flag in my presentation) and Latin Americans.

Oracle OpenWorld has two editions: the "worldwide" edition held in San Francisco, USA, where Larry Ellison appears, and the Latin American edition held in Brazil, which is attended by all the Spanish-speaking countries, draws around 15,000 people and lasts 4 days. I was also invited to take part in this edition of Oracle OpenWorld as a speaker. 2015 was a great year!
As you can see, at Oracle OpenWorld Brazil I spoke about undo data management and how to avoid the "Snapshot too old" error. It was wonderful to spend time with other Latin American experts such as Nelson Calero (Uruguay) and Edelweiss Kammermann (Uruguay), René Antunez, Rolando Carrasco and Arturo Viveros (Mexico), Gustavo Gonzalez (Argentina), Alex Zaballa, Rodrigo Mufalani and David Siqueira (Brazil), among many others.

The days went by and I kept writing articles, supporting the community, learning from everyone, teaching the little I know, and spending time with my family and circle of friends. In parallel with all of that, I had begun serious preparation for my new goal: to become an Oracle Certified Master 12c, one of the first in the world. Every day I tried to study some Oracle 12c topic, practise it and go deeper into it. There were many nights when I watched the dawn arrive, many cups of coffee turned into many SQL*Plus statements, many Metalink notes read. I had the opportunity to share every topic I found interesting with the community, and some of the tips I discovered along the way I also shared through my articles. And so, after a great deal of effort, in April 2016 I finally saw my goal become reality: I became an Oracle Certified Master 12c, one of the first in the world. At the time, I had seen only about 13 profiles published by Oracle.

I had studied and practised so hard in my eagerness to become an OCM 12c that it never crossed my mind that, by achieving it in April 2016, I was also becoming the youngest OCM 12c in the world (26 years old), the second in Latin America and the first in Central America. Many people from everywhere congratulated me on this achievement: friends, family, people who read my articles, and so on. Knowing that there are people who support you and are happy for your achievements leaves you with a feeling of joy and satisfaction, but it also leaves you with a great responsibility, because you know you have to keep up the pace and not let them down; they expect you to keep giving your all and to keep making Latin Americans proud.

Keeping my goals firmly in sight, I began to ask other Oracle book authors about the process of writing a book. They put me in contact with several important publishers such as Apress and Oracle Press, and I consulted several friends in search of advice. Most of the people who are already authors had one piece of advice in common: they recommended that I first gain experience of the whole process involved in a book by becoming a technical reviewer. As a technical reviewer you have a very large responsibility, because you have to ensure that all the content of the book is technically correct. The authors may write very good topics and provide very good content, but since their main objective is "to write and develop a topic", they sometimes forget some technical details, and that is where the role of technical reviewer comes into play.
As a technical reviewer your duty is to review the content written by the authors several times. This means reading the same chapter three, four or more times in search of technical errors. But your work is not limited to hunting for technical errors; you also have the responsibility of giving feedback from a reader's perspective. If the technical reviewer sees that a topic is not complete, or does not have a technical level appropriate for the readers, or that the content adds no value for the readers, he or she must say so, must talk to the author of that chapter, explain why the chapter does not meet the required content quality and provide recommendations: rewrite a paragraph, explain a concept from a different angle, or even completely rewrite the chapter, whatever is necessary for the chapter to add value for the readers and thus make the book one worth buying. There is another advantage too: you are involved in all the phases of the book, you get to know the whole process and you see how delivery dates and both technical and grammatical reviews are handled. This gives you good experience of how to write a book.

Curiously, while I was finding out how to write a book, a good friend contacted me: Anton Els. Anton is from New Zealand but was a speaker at the 2015 OTN Tour in Guatemala, and that is where our friendship began. Anton is an Oracle Certified Master 11g, and he told me that he was just starting to write a book together with two other people, Franck Pachot (Switzerland) and Vit Spinka (Czech Republic). And guess what? Anton invited me to be the official technical reviewer of that book, and of course I accepted. After several weeks of work (and it is still in progress), the book has already been published on Amazon for pre-sale, but it will only be released once Oracle releases version 12cR2. If you want to buy the book in pre-sale you can find it here.

Why buy this book? It is the first book on the market about Oracle 12cR2 Multitenant. But the best reason I can give you is that the content of this book is experience written down: each of the authors used all the experience they have and put their own thoughts, their own ideas and recommendations into text. Some of these ideas even go beyond the limits of the documentation Oracle provides and challenge that documentation with innovative ideas. It is not one of those books where, when you start reading it, you say "But I could find all of this in the Oracle documentation, completely free." This book is about learning from the experts, from their experiences, from everything they have gone through in their professional careers, from their ideas and concepts, and seeing recommendations based on experience rather than on concept alone. One of the things I like about this book, and which I strongly believe makes it unique, is that it was written by three Oracle Certified Masters (OCM), two of them OCM 12c and one OCM 11g, and it was reviewed by a fourth Oracle Certified Master 12c (me). We could therefore say that this book is the result of the joint work of four OCMs. The book is purely technical; it was written for intermediate DBAs, but it also provides advanced concepts for all those DBAs who like to go "further".
This book was written by: Anton Els - Oracle ACE & Oracle Certified Master 11g (New Zealand); Franck Pachot - Oracle ACE Director & Oracle Certified Master 12c (Switzerland); Vit Spinka - Oracle Certified Master 12c (Czech Republic); and technically reviewed by: Deiby Gómez - Oracle ACE Director & Oracle Certified Master 12c (Guatemala).

Continuing with the story... I like to share the little I have learned through my articles, and I publish them on several websites and in magazines. One of them is the IOUG SELECT Journal, a magazine maintained by the Independent Oracle User Group (IOUG) and distributed to several countries around the world to everyone who holds a membership at www.ioug.org. Back in April 2015 I wrote an article about how B-Tree and Bitmap indexes work internally. Well, it turns out that one day I woke up to an email informing me that this article had won the "SELECT Journal Editor's Choice Award". This award is given for excellence in article writing, and the people who have won it are none other than very well-known names in the worldwide Oracle community, people of the stature of Tanel Poder, Arup Nanda, Steven Feuerstein and Jonathan Lewis, among others. The award was presented to me at the official IOUG welcome at the Collaborate16 event in Las Vegas, United States. I am very grateful to IOUG for this great recognition of my work and of all the effort and dedication I have put into every article I have written. I also thank all the people who read my articles, who leave me messages saying "Hey, thanks for your article, it helped me", all those who have always trusted me since the beginning of my professional career, and those who always encourage me to keep going and not give up, to keep working hard even on the days I feel tired, because thanks to all of you this award became a reality.

And finally I would like to address all the Latin American readers, to whom this series of articles "Mi recorrido con tecnología Oracle" is dedicated and for whom I have written this series in Spanish, because it is to them, my Latin American friends, that I want to say: keep going, work hard for what you want, but also share what you learn. Put aside the famous Latin American phrase "if it was hard for me, let it be hard for him/her too"; let us be a single region and help one another. Let us show that in Latin America there are brilliant, talented people. If you are a reader who is just starting your professional career with Oracle and you do not write articles because you think you have nothing to teach, I remind you of one of my favourite phrases: "We all have something to learn, and we all have something to teach." Write and share, even if they are simple things; I am sure you will always have something to teach others and something to learn from others. If anyone is interested in downloading the article that earned me this award, you can do so from the following link.
Well, this brings to a close the fourth part of this series, "mi recorrido con tecnología Oracle". I invite you, as I have, to show that there is talent in Latin America, to show that we can indeed achieve our goals, and I invite you to share what you know. Come on, it can be done! And what comes next? A few years may well go by before the fifth part of this series, because by then I hope to tell you how I finally wrote my book, and perhaps talk about the OakTable network or about how I started studying for my PhD; or perhaps there will be no fifth part. Either way, whether or not there is a fifth part, I wish you all the best; be happy. Also read: Mi Recorrido con Tecnologia Oracle (Parte 1), Mi Recorrido con Tecnologia Oracle (Parte 2), Mi Recorrido con Tecnologia Oracle (Parte 3), Mi Recorrido con Tecnologia Oracle (Parte 4). Follow me:

Blog Post: DAMA Ireland: Data Protection Event 5th May

We have our next DAMA Ireland event/meeting coming up on the 5th May, and it will be in our usual venue of Bank of Ireland, 1 Grand Canal Dock. Our meeting will cover two topics.

The main topic for the evening will be Data Protection. We have Daragh O'Brien (MD of Castlebridge Associates) presenting on this. Daragh is also the Global Data Privacy Officer for DAMA International. He has also been involved in contributing to the next version of the DMBOK, which is coming out very soon.

We also have Katherine O'Keefe, who will be talking about the DAMA Certified Data Management Professional (CDMP) certification. Katherine has been working with DAMA International on updates to the new CDMP certification.

To check out more details of the event/meeting, click on the Eventbrite image below. This will take you to the event/meeting details, where you can also register for the meeting.

Cost: FREE
When: 5th May
Where: Bank of Ireland, 1 Grand Canal Dock

PS: Please register in advance of the meeting, as we would like to know who and how many are coming, to allow us to make any necessary arrangements.

Blog Post: Expresiones Regulares en Oracle (. + ? *)

Today we dive right into examples of using regular expressions in Oracle. Let's look at the first list of metacharacters: dot (.), plus (+), asterisk (*), question mark (?). A few days ago I published a brief introduction to the use of regular expressions in Oracle. Today it is time to try out this small list of metacharacters. Remember that a metacharacter, unlike a literal, has a special meaning. So, what is the special meaning of each of the metacharacters we have just listed? Let's see. (In the query output below, the literals 'Hay coincidencia' and 'No hay coincidencia' mean 'Match' and 'No match'.)

The dot (".") matches any single character. The regular expression "a.b" matches the text "abb", "acb" and "adb", but it does not match "acc". Example:

select texto, 'a.b' regexp,
       case when regexp_like(texto, 'a.b') then 'Hay coincidencia' else 'No hay coincidencia' end "COINCIDENCIA?"
  from prueba_regexp
/

TEXTO               REGEXP        COINCIDENCIA?
------------------- ------------- -------------------
acb                 a.b           Hay coincidencia
adb                 a.b           Hay coincidencia
acc                 a.b           No hay coincidencia
abb                 a.b           Hay coincidencia

The plus sign ("+") matches one or more occurrences of the preceding subexpression. The expression "a+" matches the text "a", "aa", "aaa", "ab" or "ba", but it does not match the texts "bb" or "bc". Example:

select texto, 'a+' regexp,
       case when regexp_like(texto, 'a+') then 'Hay coincidencia' else 'No hay coincidencia' end "COINCIDENCIA?"
  from prueba_regexp
/

TEXTO                REGEXP       COINCIDENCIA?
-------------------- ------------ ----------------------
a                    a+           Hay coincidencia
aa                   a+           Hay coincidencia
aaa                  a+           Hay coincidencia
ab                   a+           Hay coincidencia
ba                   a+           Hay coincidencia
bb                   a+           No hay coincidencia
bc                   a+           No hay coincidencia

The asterisk ("*") matches zero or more occurrences of the preceding subexpression. The expression "ab*c" matches the text "ac", "abc" and "abbc", but does not match the texts "abb" (the literal "c" is missing) or "bbc" (the literal "a" is missing). Let's test it:

select texto, 'ab*c' regexp,
       case when regexp_like(texto, 'ab*c') then 'Hay coincidencia' else 'No hay coincidencia' end "COINCIDENCIA?"
  from prueba_regexp
/

TEXTO                     REGEXP       COINCIDENCIA?
------------------------- ------------ -----------------------
ac                        ab*c         Hay coincidencia
abc                       ab*c         Hay coincidencia
abbc                      ab*c         Hay coincidencia
abb                       ab*c         No hay coincidencia
bbc                       ab*c         No hay coincidencia

The last metacharacter in the list is the question mark ("?"). This metacharacter matches zero or one occurrence of the preceding subexpression. The regular expression "ab?c" matches the text "ac" and "abc", but does not match "abbc" (there is an extra "b"), "abb" (the literal "c" is missing) or "bbc" (the literal "a" is missing). Let's verify:

select texto, 'ab?c' regexp,
       case when regexp_like(texto, 'ab?c') then 'Hay coincidencia' else 'No hay coincidencia' end "COINCIDENCIA?"
  from prueba_regexp
/

TEXTO                  REGEXP       COINCIDENCIA?
---------------------- ------------ -------------------
ac                     ab?c         Hay coincidencia
abc                    ab?c         Hay coincidencia
abbc                   ab?c         No hay coincidencia
abb                    ab?c         No hay coincidencia
bbc                    ab?c         No hay coincidencia

And with that we complete the first list of metacharacters. In the next article we will keep experimenting with more regular expressions. See you!
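As a small extension beyond the original post, the same quantifiers can also be combined with REGEXP_SUBSTR to extract the matching fragment rather than merely test for a match. The sketch below assumes the same prueba_regexp table used in the examples above:

select texto,
       regexp_substr(texto, 'ab?c') fragmento
  from prueba_regexp
/

Rows that do not match simply return NULL in the second column, which makes this a handy way to see exactly which part of each string the pattern captured.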

Blog Post: Why Brian Hitchcock and I (probably) know more about Oracle Database than you

Click on the picture to download the latest issue of the NoCOUG Journal

Brian Hitchcock’s secret is simple. For twelve years, he has been reading books on Oracle Database. And taking extensive notes. And publishing them in the NoCOUG Journal.

My secret is simple. For twelve years, I have been reading the extensive book notes that Brian has been publishing in the NoCOUG Journal. I’m the editor of the NoCOUG Journal but that’s not the point. It’s that simple.

Brian’s notes on Keeping Up with Oracle Database 12c Multitenant—Book One have been published in the latest issue of the NoCOUG Journal. After reading it, you might want to register for the spring conference on Friday, May 13 at PayPal and attend the hands-on lab on Oracle Database 12c Multitenant that has been organized by NoCOUG.

The lab offers three different sections for three levels of expertise and interest. If you are new to Oracle Multitenant, you should start with Section 1, which covers provisioning of pluggable databases (including create, clone, unplug, and plug-in). If you are already familiar with Oracle Multitenant, you can start directly with Section 2, which covers advanced concepts in security, backup and recovery, resource management, and performance tuning in the context of multitenancy. Section 3 covers Oracle Multitenant Self-Service Provisioning, a web-based application available for download from Oracle Technology Network.

Did I mention that the conference is free for members and their guests, first-time NoCOUG conference attendees, students, and PayPal employees? Bring a friend with you.

Blog Post: Accessing the R datasets in ORE and SQL

When you install R you also get a set of pre-compiled datasets. These are great for trying out many of the features that are available with R and all the new packages that are being produced on an almost daily basis. The exact list of data sets available will depend on the version of R that you are using. To get the list of available data sets in R you can run the following:

> library(help="datasets")

This command will list all the data sets that you can reference and start using immediately. I'm currently running the latest version of Oracle R Distribution, version 3.2. See the listing at the end of this blog post for the available data sets.

But are these data sets available to you if you are using Oracle R Enterprise (ORE)? The answer is yes, of course they are. And are they accessible on the Oracle Database server? Yes they are, as you have R installed there and you can use ORE to access and use the data sets. But how? How can I list what is on the Oracle Database server using R? Simple: use the following ORE code to run an embedded R execution function using the ORE R API. What does that mean? Using the R installation on your client machine, you can use ORE to send some R code to the Oracle Database server. The R code will be run on the Oracle Database server and the results will be returned to the client, so what you see on the client is the output generated on the server. Try the following code (a small end-to-end sketch using these calls follows the dataset listing at the end of the post):

ore.doEval(function() library(help="datasets"))

# let us create a function for this code
myFn <- function() library(help="datasets")

# Now send this function to the DB server and run it there.
ore.doEval(myFn)

# create an R script in the Oracle Database that contains our R code
ore.scriptDrop("inDB_R_DemoData")
ore.scriptCreate("inDB_R_DemoData", myFn)

# Now run the R script, stored in the Oracle Database, on the Database server
# and return the results to my client
ore.doEval(FUN.NAME="inDB_R_DemoData")

Simple, right? Yes it is. You have shown us how to do this in R using the ORE package, but what if I'm a SQL developer: can I do this in SQL? Yes you can. Connect to your schema using SQL Developer/SQL*Plus/SQLcl or whatever tool you use to run SQL, then run the following SQL:

select * from table(rqEval(null, 'XML', 'inDB_R_DemoData'));

This SQL code will return the results in XML format. You can parse this to extract and display the results, and when you do you will get something like the following listing, which is exactly the same as what is produced when you run the R code given above. What this means is that even if you have an empty schema with no data in it, as long as you have the privileges to run embedded R execution, you actually have access to all these different data sets. You can use these to try out R using the ORE SQL APIs too.

Information on package ‘datasets’

Description:

Package:     datasets
Version:     3.2.0
Priority:    base
Title:       The R Datasets Package
Author:      R Core Team and contributors worldwide
Maintainer:  R Core Team
Description: Base R datasets.
License:     Part of R 3.2.0
Built:       R 3.2.0; ; 2015-08-07 02:20:26 UTC; windows

Index:

AirPassengers           Monthly Airline Passenger Numbers 1949-1960
BJsales                 Sales Data with Leading Indicator
BOD                     Biochemical Oxygen Demand
CO2                     Carbon Dioxide Uptake in Grass Plants
ChickWeight             Weight versus age of chicks on different diets
DNase                   Elisa assay of DNase
EuStockMarkets          Daily Closing Prices of Major European Stock Indices, 1991-1998
Formaldehyde            Determination of Formaldehyde
HairEyeColor            Hair and Eye Color of Statistics Students
Harman23.cor            Harman Example 2.3
Harman74.cor            Harman Example 7.4
Indometh                Pharmacokinetics of Indomethacin
InsectSprays            Effectiveness of Insect Sprays
JohnsonJohnson          Quarterly Earnings per Johnson & Johnson Share
LakeHuron               Level of Lake Huron 1875-1972
LifeCycleSavings        Intercountry Life-Cycle Savings Data
Loblolly                Growth of Loblolly pine trees
Nile                    Flow of the River Nile
Orange                  Growth of Orange Trees
OrchardSprays           Potency of Orchard Sprays
PlantGrowth             Results from an Experiment on Plant Growth
Puromycin               Reaction Velocity of an Enzymatic Reaction
Theoph                  Pharmacokinetics of Theophylline
Titanic                 Survival of passengers on the Titanic
ToothGrowth             The Effect of Vitamin C on Tooth Growth in Guinea Pigs
UCBAdmissions           Student Admissions at UC Berkeley
UKDriverDeaths          Road Casualties in Great Britain 1969-84
UKLungDeaths            Monthly Deaths from Lung Diseases in the UK
UKgas                   UK Quarterly Gas Consumption
USAccDeaths             Accidental Deaths in the US 1973-1978
USArrests               Violent Crime Rates by US State
USJudgeRatings          Lawyers' Ratings of State Judges in the US Superior Court
USPersonalExpenditure   Personal Expenditure Data
VADeaths                Death Rates in Virginia (1940)
WWWusage                Internet Usage per Minute
WorldPhones             The World's Telephones
ability.cov             Ability and Intelligence Tests
airmiles                Passenger Miles on Commercial US Airlines, 1937-1960
airquality              New York Air Quality Measurements
anscombe                Anscombe's Quartet of 'Identical' Simple Linear Regressions
attenu                  The Joyner-Boore Attenuation Data
attitude                The Chatterjee-Price Attitude Data
austres                 Quarterly Time Series of the Number of Australian Residents
beavers                 Body Temperature Series of Two Beavers
cars                    Speed and Stopping Distances of Cars
chickwts                Chicken Weights by Feed Type
co2                     Mauna Loa Atmospheric CO2 Concentration
crimtab                 Student's 3000 Criminals Data
datasets-package        The R Datasets Package
discoveries             Yearly Numbers of Important Discoveries
esoph                   Smoking, Alcohol and (O)esophageal Cancer
euro                    Conversion Rates of Euro Currencies
eurodist                Distances Between European Cities and Between US Cities
faithful                Old Faithful Geyser Data
freeny                  Freeny's Revenue Data
infert                  Infertility after Spontaneous and Induced Abortion
iris                    Edgar Anderson's Iris Data
islands                 Areas of the World's Major Landmasses
lh                      Luteinizing Hormone in Blood Samples
longley                 Longley's Economic Regression Data
lynx                    Annual Canadian Lynx trappings 1821-1934
morley                  Michelson Speed of Light Data
mtcars                  Motor Trend Car Road Tests
nhtemp                  Average Yearly Temperatures in New Haven
nottem                  Average Monthly Temperatures at Nottingham, 1920-1939
npk                     Classical N, P, K Factorial Experiment
occupationalStatus      Occupational Status of Fathers and their Sons
precip                  Annual Precipitation in US Cities
presidents              Quarterly Approval Ratings of US Presidents
pressure                Vapor Pressure of Mercury as a Function of Temperature
quakes                  Locations of Earthquakes off Fiji
randu                   Random Numbers from Congruential Generator RANDU
rivers                  Lengths of Major North American Rivers
rock                    Measurements on Petroleum Rock Samples
sleep                   Student's Sleep Data
stackloss               Brownlee's Stack Loss Plant Data
state                   US State Facts and Figures
sunspot.month           Monthly Sunspot Data, from 1749 to "Present"
sunspot.year            Yearly Sunspot Data, 1700-1988
sunspots                Monthly Sunspot Numbers, 1749-1983
swiss                   Swiss Fertility and Socioeconomic Indicators (1888) Data
treering                Yearly Treering Data, -6000-1979
trees                   Girth, Height and Volume for Black Cherry Trees
uspop                   Populations Recorded by the US Census
volcano                 Topographic Information on Auckland's Maunga Whau Volcano
warpbreaks              The Number of Breaks in Yarn during Weaving
women                   Average Heights and Weights for American Women
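As a closing illustration that is not in the original post, here is a minimal sketch of how one of these bundled data sets could be summarised on the database server through embedded R execution. It assumes an ORE connection has already been established with ore.connect:

# run a quick summary of the bundled 'cars' data set on the database server
ore.doEval(function() {
  data(cars)      # load the pre-compiled data set in the server-side R session
  summary(cars)   # the last expression is returned to the client
})

The same function could equally be stored with ore.scriptCreate and invoked from SQL via rqEval, exactly as shown for inDB_R_DemoData above.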

Blog Post: Why should Oracle Database professionals care about NoSQL and where to start?

Dr. Edgar Codd, the inventor of relational theory, wrote in 1986 (thirty years ago): “Only if the performance requirements are extremely severe should buyers rule out present relational DBMS products on this basis.” Of course, he was comparing relational databases with the pre-relational databases which had the performance advantage over relational databases in 1986. But Dr. Codd’s argument is equally valid today in the SQL vs. NoSQL debate.

Oracle Corporation openly admits that NoSQL database management systems have the performance advantage over relational database management systems “when data access is ‘simple’ in nature and application demands exceed the volume or latency capability of traditional data management solutions.” In other words, Oracle Corporation has admitted that there are use cases which current relational database management systems cannot handle well. Click-stream data from high-volume web sites, high-throughput event processing, social networking communications, monitoring online retail behavior, accessing customer profiles, pulling up appropriate customer ads, and storing and forwarding real-time communication are the examples listed by Oracle Corporation. Database professionals should therefore look seriously at NoSQL technology.

Where to start? I recommend that you read my article The Rise and Fall of the NoSQL Empire. It is very negative about NoSQL, but I recommend that you treat it as the starting point of your investigation. My article is based on a twelve-part series of blog posts called “The Twelve Days of NoSQL” (also recommended reading) written in December 2013. Chris Date had some comments on my article, which you can read in the May 2015 issue of the NoCOUG Journal.

The chief concept that you need to understand is “functional segmentation,” but I find that it is hardly—if ever—used in discussions of NoSQL. If you don’t understand functional segmentation, you will not understand NoSQL. Another reason to start with my article is that NoSQL database management systems are rapidly changing in ways that you will find extremely hard to understand if you are not well grounded in what NoSQL is about. For example, Amazon DynamoDB looks very different from its parent Amazon Dynamo and supports what it calls “tables,” but each such “table” is strictly a “functional segment.” There are even NoSQL query languages that mimic SQL: CQL (Cassandra Query Language), N1QL (Non-1st Normal Form Query Language, pronounced Nickel), and UnQL (Unstructured Query Language, pronounced Uncle).

Once you have a proper grounding in NoSQL fundamentals, you are ready to get your feet wet. You could take Oracle NoSQL Database for a spin. Check out the resources page. There is also a Five-Minute Quickstart.

Blog Post: How is data modeled in NoSQL?

The first question that you will have when you start your NoSQL journey is “How is data modeled in NoSQL?” The important thing to understand is that the data does not change just because it is managed differently. If the data does not change, then the entities and the relationships contained in the data cannot change either. The entities and the relationships between them have not changed since the dawn of time. They were the same in the days of the network database management systems which came before relational database management systems, they stayed the same when object-oriented database management systems came along, and they are the same now that we have NoSQL databases.

The key NoSQL concept is that of a functional segment. As I explained in The Rise and Fall of the NoSQL Empire, Amazon’s pivotal design decision was to break its monolithic enterprise-wide database service into simpler component services such as a best-seller list service, a shopping cart service, a customer preferences service, a sales rank service, and a product catalog service. This avoided a single point of failure. In an interview for the NoCOUG Journal, Amazon’s first database administrator, Jeremiah Wilton, explains the rationale behind Amazon’s approach: “The best availability in the industry comes from application software that is predicated upon a surprising assumption: The databases upon which the software relies will inevitably fail. The better the software’s ability to continue operating in such a situation, the higher the overall service’s availability will be. But isn’t Oracle unbreakable? At the database level, regardless of the measures taken to improve availability, outages will occur from time to time. An outage may be from a required upgrade or a bug. Knowing this, if you engineer application software to handle this eventuality, then a database outage will have less or no impact on end users. In summary, there are many ways to improve a single database’s availability. But the highest availability comes from thoughtful engineering of the entire application architecture.” As an example, the shopping cart service should not be affected if the checkout service is unavailable or not performing well.

I said that this was the pivotal design decision made by Amazon, and I cannot emphasize it enough. If you resist functional segmentation, you are not ready for NoSQL. If you miss the point, you will not understand NoSQL. Note that functional segmentation results in simple hierarchical schemas. Here is an example of a simple hierarchical schema from Ted Codd’s 1970 paper on the relational model (meticulously reproduced in the 100th issue of the NoCOUG Journal). This schema stores information about employees, their children, their job histories, and their salary histories:

employee      (man#, name, birthdate)
children      (man#, childname, birthyear)
jobhistory    (man#, jobdate, title)
salaryhistory (man#, jobdate, salarydate, salary)

In NoSQL-land, all the data about a single employee will be “blobbed” together, for example as a JSON structure (a sketch follows below). Functional segmentation is the underpinning of NoSQL technology, but it does not present a conflict with the relational model; it is simply a physical database design decision. Amazon originally envisaged that functional segments would be managed by different servers. However, that was soon realized to be unnecessary. Nowadays, functional segments are variously referred to as “tables” (DynamoDB), “collections” (MongoDB), or “table families” (Oracle Database 12c Release 2 sharding).
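To make the “blobbing” concrete, here is a hypothetical sketch of how the four hierarchical relations above might collapse into a single JSON document per employee. The field names simply mirror Codd's columns and the values are invented; nothing here is taken from any particular NoSQL product:

{
  "man#": 1234,
  "name": "Smith",
  "birthdate": "1950-03-14",
  "children": [
    { "childname": "Anne", "birthyear": 1975 }
  ],
  "jobhistory": [
    { "jobdate": "1970-01-01",
      "title": "Programmer",
      "salaryhistory": [
        { "salarydate": "1970-01-01", "salary": 9000 }
      ] }
  ]
}

Note how jobhistory nests under the employee and salaryhistory nests under each job, exactly mirroring the keys (man#, jobdate, salarydate) of the original hierarchical schema.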

Wiki Page: PUSH_PRED hint

The PUSH_PRED hint has undocumented parameters which control which specific predicates are pushed into an inline view.
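For context, the documented form of the hint takes just the view alias. The sketch below is a generic illustration of pushing a join predicate into an inline view (the EMP and DEPT tables are the usual demo tables, used here only for illustration); it does not demonstrate the undocumented parameters mentioned above:

select /*+ push_pred(v) */ e.ename, v.dname
  from emp e,
       (select d.deptno, d.dname from dept d) v
 where e.deptno = v.deptno(+);

With the hint in place, the optimizer is encouraged to push the join predicate e.deptno = v.deptno inside the inline view v instead of joining to the view as a whole.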

Blog Post: Fix - TFA Failed to start listening for commands

Today I was trying to play for a while with TFA; however, when I tried to start it up I received some errors which, according to the note (Doc ID 2112640.1), are fixed by downloading the latest version of TFA and patching or re-installing it. I will show you, however, that this is not always true; sometimes the fix is easier.

Checking the status of TFA:

[root@rac2 ~]# /u01/app/12.1.0/grid/bin/tfactl print status
TFA-00002 : Oracle Trace File Analyzer (TFA) is not running

Well, TFA is not running, so let's start it up.

Starting TFA up:

[root@rac2 ~]# /u01/app/12.1.0/grid/bin/tfactl start
Starting TFA..
start: Job is already running: oracle-tfa
Waiting up to 100 seconds for TFA to be started..
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Successfully started TFA Process..
. . . . .
TFA-00002 : Oracle Trace File Analyzer (TFA) is not running
TFA Failed to start listening for commands
. . . . .
Successfully started TFA Process..
. . . . .
TFA-00002 : TFA Failed to start listening for commands
[root@rac2 ~]#

Those are the errors I was receiving. I took a look at the following note: When Trying to run TFA Error Occurs TFA-00002 : Oracle Trace File Analyzer (TFA) Is Not Running (Doc ID 2112640.1). The note uses the same binary version that I am using (12.1.0.2) and it also says that this occurs on the first start of TFA, which is my case. However, I wanted to be sure that the note in fact applies to my case, and the last thing left to review to confirm that was the log file. The note says the error is related to a corrupted installation of TFA, some missing directories to be precise, and that messages like the following should appear:

ls: /u01/app/12.1.0.2/grid/tfa/machine_name/tfa_home/server.jks: No such file or directory
ls: /u01/app/12.1.0.2/grid/tfa/machine_name/tfa_home/client.jks: No such file or directory
ls: /u01/app/12.1.0.2/grid/tfa/machine_name/tfa_home/internal/ssl.properties: No such file or directory

We might think the note matches my case, because it is the same binary version and my case is also about the first start of TFA; however, what we will see shows something different. I decided to check the TFA log file. You can find the TFA log in the following path:

$ORACLE_BASE/tfa/<hostname>/log/

In my case:

/u01/app/grid/tfa/rac2/log/tfa.04.30.2016-02.01.00.log

And look at this, the messages are different:

2016-04-30 02:04:11.946 TFA Hosts : [rac1, rac2]
2016-04-30 02:04:11.946 BuildId : 12120020140619094932
2016-04-30 02:04:21.962 [ERROR] [Thread-7] Cannot establish Socket connection to Host : rac1
2016-04-30 02:04:21.962 isClusterNodesUpgraded :false
2016-04-30 02:06:11.693 Starting first rediscovery...

So although most of the symptoms were the same, this shows something completely different: the issue is related to connectivity to the other node. This TFA configuration is part of a RAC, so TFA is trying to connect to the TFA process on the other node. After fixing the connectivity, TFA was able to start normally:

[root@rac2 grid]# ping rac1
PING rac1.oraworld (192.168.1.110) 56(84) bytes of data.
64 bytes from rac1.oraworld (192.168.1.110): icmp_seq=1 ttl=64 time=1.12 ms
64 bytes from rac1.oraworld (192.168.1.110): icmp_seq=2 ttl=64 time=0.311 ms

Starting TFA after fixing connectivity to node "rac1":

[root@rac1 ~]# /u01/app/12.1.0/grid/tfa/bin/tfactl start
Starting TFA..
TFA is already running, so will be restarted
oracle-tfa stop/waiting
TFA Daemon is running waiting 5 seconds
. . . . .
TFA Daemon is running waiting 5 seconds
. . . . .
TFA Daemon is running waiting 5 seconds
. . . . .
TFA Daemon is running waiting 5 seconds
. . . . .
oracle-tfa start/running, process 21872
Waiting up to 100 seconds for TFA to be started..
. . . . .
Successfully started TFA Process..
. . . . .
TFA Started and listening for commands
[root@rac1 ~]#

Checking the status of TFA:

[root@rac1 ~]# /u01/app/12.1.0/grid/tfa/bin/tfactl print status
.------------------------------------------------------------------------------------.
| Host | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+------+---------------+-------+------+------------+----------------------+------------------+
| rac1 | RUNNING       | 11909 | 5000 | 12.1.2.0.0 | 12120020140619094932 | COMPLETE         |
| rac2 | RUNNING       | 3685  | 5000 | 12.1.2.0.0 | 12120020140619094932 | COMPLETE         |
'------+---------------+-------+------+------------+----------------------+------------------'
[root@rac1 ~]#

Follow me:
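Since the root cause here was node-to-node connectivity rather than a damaged installation, a quick sanity check of name resolution and of the TFA port (5000 in the status output above) from each node can save an unnecessary reinstall. The commands below are a generic operating system sketch, not TFA-specific tooling, and assume netcat is installed:

[root@rac2 ~]# ping -c 2 rac1
[root@rac2 ~]# nc -zv rac1 5000

If the port probe fails while ping succeeds, look at firewall rules or the TFA process on the remote node before touching the installation itself.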

Blog Post: Oracle Data Visualisation : Setting up a Connection to your DB

Using Oracle Data Visualisation is just the same as, or very similar to, using the Cloud version of the tool. In this blog post I will walk you through the steps you need to perform the first time you use the Oracle Data Visualisation client tool, and show how to quickly create some visualizations.

Step 1 - Create a Connection to your Oracle DB and Schema

After opening the Oracle Data Visualisation client tool, click on the Data Sources icon that is displayed along the top of the screen. Then click on the 'Connection' button. You need to create a connection to your schema in the Oracle Database. Other options exist to create a connection to files etc., but for this example click on 'From Database'. Enter your connection details for your schema in your Oracle Database. This is exactly the same kind of information that you would enter when creating a SQL Developer connection. Then click the Save button.

Step 2 - Defining the data source for your analytics

You need to select the tables or views that you are going to use to build up your data visualizations. In the Data Sources section of the tool (see the first image above) click on the 'Create New Data Source' button and then select 'From Database'. The following window (or one like it) will be displayed. This will contain all the schemas in the DB for which you have some privileges. You may just see your own schema, or others as well. Select your schema from the list. The window will be updated to display the tables and views in the schema. You can change the layout from icon based to a list. You can also define a query that contains the data you want to analyse using the SQL tab. When you have selected the table or view to use, or have defined the SQL for the data set, a window will be displayed showing you a sample of the data. You can use this window to quickly perform a visual inspection of the data to make sure it is the data you want to use. The data source you have defined will now be listed in the Data Sources part of the tool. You can click on the option icon (3 vertical dots) on the right hand side of the data source and then select Create VA Project from the pop-up menu.

Step 3 - Create your Oracle Data Visualization project

When the Visual Analyser part of the tool opens, you can click and drag the columns from your data set onto the workspace. The data will be automatically formatted and displayed on the screen. You can also quickly generate lots of graphics and again click and drag the columns on the graphics to define various elements.

Wiki Page: Oracle 12c: SEED (PDB$SEED) pluggable database is in unusable state? Here's how you can recover/recreate it.

Introduction Oracle 12c has introduced the multi-tenant architecture for Oracle database, where a single container database (CDB$ROOT) can have multiple pluggable databases (PDBs). This new architecture is introduced to ease the management of Oracle databases, where by we can consolidate multiple Oracle databases into a single container database (CDB). In the multi-tenant architecture, ideally we use the SEED template pluggable database (PDB$SEED) to create any new pluggable database within the container database (CDB$ROOT). The SEED pluggable database (PDB$SEED) acts as a template for creating fresh pluggable databases and we are not allowed to alter the configuration of SEED pluggable database (by default opens up in READ ONLY mode). There are possibilities, that the SEED pluggable database (PDB$SEED) may become corrupt or unusable due to file system issues or due to any other unforeseen reasons. In that case, we can't use the seed pluggable database (PDB$SEED) for creating new pluggable databases in the respective container. In this article, I will discuss about the different methods that we can follow to recover or recreate the seed pluggable database (PDB$SEED) in the event of seed being in a unusable state (corrupted) Recover PDB$SEED using backup Backup is the first place of defense to recover or restore a database. It is always recommended to backup the databases to be able to restore it when required. In the multi-tenant architecture, we have the option of taking backup of individual pluggable databases or all the pluggable databases together along with the container database. If the seed pluggable database becomes unusable, we can restore it using the backup. Here is the pictorial representation of the overall process involved in restoring a seed pluggable database from a backup. Since, I am simulating the failure of the seed pluggable database (PDB$SEED), let me take a backup of the seed pluggable database (PDB$SEED) for this demonstration. We will use this backup to restore/reover the seed database later. ##--- ##--- Taking backup of seed pluggable database (PDB$SEED) ---## ##--- [oracle@labserver2 ~]$ rman target / Recovery Manager: Release 12.1.0.2.0 - Production on Thu Apr 28 23:16:24 2016 Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved. 
connected to target database: ORPCDB2 (DBID=2428270533) RMAN> backup database "pdb$seed" format '/backup/orpcdb2/seed/orpcdb2_seed_%U.bkp'; Starting backup at 28-APR-16 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=25 device type=DISK channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00007 name=/data/oracle/orpcdb2/pdbseed/users01.dbf input datafile file number=00002 name=/data/oracle/orpcdb2/pdbseed/system01.dbf input datafile file number=00004 name=/data/oracle/orpcdb2/pdbseed/sysaux01.dbf channel ORA_DISK_1: starting piece 1 at 28-APR-16 channel ORA_DISK_1: finished piece 1 at 28-APR-16 piece handle= /backup/orpcdb2/seed/orpcdb2_seed_02r470qb_1_1.bkp tag=TAG20160428T231627 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07 Finished backup at 28-APR-16 Starting Control File Autobackup at 28-APR-16 piece handle=/app/oracle/product/12.1.0.2/dbs/c-2428270533-20160428-00 comment=NONE Finished Control File Autobackup at 28-APR-16 RMAN> For the purpose of demonstration, I am deleting the system datafile belonging to the seed pluggable database (PDB$SEED) as shown below. This will make the seed pluggable database unusable and we will not be able to use the seed pluggable database for creating new pluggable databases in the respective container database (CDB$ROOT). In a real-world scenario, the seed pluggable database may become unusable due to a number of unforeseen reasons. ##--- ##--- deleting system datafile from seed pluggable database ---## ##--- [oracle@labserver2 ~]$ rm /data/oracle/orpcdb2/pdbseed/system01.dbf [oracle@labserver2 ~]$ ls -lrt /data/oracle/orpcdb2/pdbseed/system01.dbf ls: /data/oracle/orpcdb2/pdbseed/system01.dbf: No such file or directory I have deleted the system datafile belonging to the seed pluggable database (PDB$SEED). Let's try to create a new pluggable database using the seed pluggable database. ---// ---// create pluggable database using seed (PDB$SEED) //--- ---// SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 CDB2_PDB_1 READ WRITE NO SQL> create pluggable database CDB2_PDB_2 admin user pdb_admin identified by oracle 2 file_name_convert=('/data/oracle/orpcdb2/pdbseed/','/data/oracle/orpcdb2/cdb2_pdb_2/') 3 ; create pluggable database CDB2_PDB_2 admin user pdb_admin identified by oracle * ERROR at line 1: ORA-00604: error occurred at recursive SQL level 2 ORA-01116: error in opening database file 2 ORA-01110: data file 2: '/data/oracle/orpcdb2/pdbseed/system01.dbf' ORA-27041: unable to open file Linux-x86_64 Error: 2: No such file or directory Additional information: 3 As expected, we are not able to create a new pluggable database using the seed pluggable database (PDB$SEED). Let's use the seed pluggable database backup to restore/recover the seed pluggable database. We need to close the seed pluggable database to be able to restore the missing file. Let's close the seed pluggable database (PDB$SEED). ---// ---// trying to close seed pluggable database //--- ---// SQL> alter pluggable database "pdb$seed" close; alter pluggable database "pdb$seed" close * ERROR at line 1: ORA-65017: seed pluggable database may not be dropped or altered As we can see, we are not allowed to alter the state of the seed pluggable database (PDB$SEED).
However, there is a workaround: we can set the hidden parameter _oracle_script to TRUE, and that will allow us to change the seed pluggable database (PDB$SEED) state as shown below. ---// ---// closing PDB$SEED by setting _oracle_script to TRUE //--- ---// SQL> alter session set "_oracle_script"=true; Session altered. SQL> alter pluggable database "pdb$seed" close; Pluggable database altered. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED MOUNTED 3 CDB2_PDB_1 READ WRITE NO Let's restore the missing seed datafile using the backup that was taken earlier. ##--- ##--- validating the availability of backup ---## ##--- RMAN> list backup of datafile 2; using target database control file instead of recovery catalog List of Backup Sets =================== BS Key Type LV Size Device Type Elapsed Time Completion Time ------- ---- -- ---------- ----------- ------------ --------------- 1 Full 221.38M DISK 00:00:06 28-APR-16 BP Key: 1 Status: AVAILABLE Compressed: NO Tag: TAG20160428T231627 Piece Name: /backup/orpcdb2/seed/orpcdb2_seed_02r470qb_1_1.bkp List of Datafiles in backup set 1 File LV Type Ckp SCN Ckp Time Name ---- -- ---- ---------- --------- ---- 2 Full 664727 13-FEB-16 /data/oracle/orpcdb2/pdbseed/system01.dbf ##--- ##--- restoring missing seed datafile (file# 2) ---## ##--- [oracle@labserver2 seed]$ rman target / Recovery Manager: Release 12.1.0.2.0 - Production on Thu Feb 11 14:33:46 2016 Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved. connected to target database: orpcdb2 (DBID=3871804100) RMAN> restore datafile 2; Starting restore at 28-APR-16 allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=130 device type=DISK channel ORA_DISK_1: starting datafile backup set restore channel ORA_DISK_1: specifying datafile(s) to restore from backup set channel ORA_DISK_1: restoring datafile 00002 to /data/oracle/orpcdb2/pdbseed/system01.dbf channel ORA_DISK_1: reading from backup piece /backup/orpcdb2/seed/orpcdb2_seed_02r470qb_1_1.bkp channel ORA_DISK_1: piece handle=/backup/orpcdb2/seed/orpcdb2_seed_02r470qb_1_1.bkp tag=TAG20160428T231627 channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:00:07 Finished restore at 28-APR-16 We have restored the missing seed datafile. Let's open the seed database in its intended READ ONLY state. ##--- ##--- trying to open seed pluggable database after datafile restore ---## ##--- RMAN> alter pluggable database "pdb$seed" open read only; RMAN-00571: =========================================================== RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS =============== RMAN-00571: =========================================================== RMAN-03002: failure of sql statement command at 04/28/2016 23:32:20 ORA-65017: seed pluggable database may not be dropped or altered As expected, we are not allowed to alter the seed pluggable database state. As earlier, we need to use the hidden parameter _oracle_script to be able to open the seed pluggable database as shown below.
##--- ##--- opening PDB$SEED by setting _oracle_script to TRUE ---## ##--- RMAN> alter session set "_oracle_script"=true; Statement processed RMAN> alter pluggable database "pdb$seed" open read only; Statement processed ##--- ##--- validate PDB$SEED is opened in READ ONLY mode ---## ##--- RMAN> select con_id,name,open_mode,restricted from v$pdbs; CON_ID NAME OPEN_MODE RES ---------- ------------------------------ ---------- --- 2 PDB$SEED READ ONLY NO 3 CDB2_PDB_1 READ WRITE NO ##--- ##--- reset _oracle_script to FALSE ---## ##--- RMAN> alter session set "_oracle_script"=false; Statement processed We have successfully restored and recovered the seed pluggable database (PDB$SEED). Now, we should be able to create new pluggable databases using the seed pluggable database as shown below. ---// ---// creating pluggable database using PDB$SEED //--- ---// SQL> create pluggable database CDB2_PDB_2 admin user pdb_admin identified by oracle 2 file_name_convert=('/data/oracle/orpcdb2/pdbseed/','/data/oracle/orpcdb2/cdb2_pdb_2/') 3 ; Pluggable database created. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 CDB2_PDB_1 READ WRITE NO 4 CDB2_PDB_2 MOUNTED Recover PDB$SEED using existing PDB (without backup) In the previous section, we explored the method of recovering a seed pluggable database using a VALID seed pluggable database backup. In this section, we will explore the process of recovering or recreating a seed pluggable database when there is no backup available. In the event of the seed pluggable database being in an unusable state (corrupted/datafile missing), we can recreate the seed pluggable database using any of the existing pluggable databases (local or remote) as represented by the following diagram. Let me demonstrate this method with a quick example. In the following example, the seed pluggable database file is missing and we are not able to use it for the creation of new pluggable databases. ---// ---// not able to create PDB due to missing PDB$SEED datafile //--- ---// SQL> create pluggable database CDB2_PDB_3 admin user pdb_admin identified by oracle 2 file_name_convert=('/data/oracle/orpcdb2/pdbseed/','/data/oracle/orpcdb2/cdb2_pdb_3/') 3 ; create pluggable database CDB2_PDB_3 admin user pdb_admin identified by oracle * ERROR at line 1: ORA-19505: failed to identify file "/data/oracle/orpcdb2/pdbseed/system01.dbf" ORA-27037: unable to obtain file status Linux-x86_64 Error: 2: No such file or directory Additional information: 3 Since we do not have a backup available for the seed pluggable database, we can't restore the missing files as demonstrated in the earlier section. We need to drop and recreate a fresh seed pluggable database using any of the existing pluggable databases (local or remote). Let's drop the seed pluggable database PDB$SEED from the container database. ---// ---// trying to drop seed pluggable database //--- ---// SQL> drop pluggable database "pdb$seed" including datafiles; drop pluggable database "pdb$seed" including datafiles * ERROR at line 1: ORA-65025: Pluggable database PDB$SEED is not closed on all instances. As the error suggests, we need to close the seed pluggable database to be able to drop it from the container. Let's close the seed pluggable database first.
---// ---// trying to close the seed pluggable database //--- ---// SQL> alter pluggable database "pdb$seed" close; alter pluggable database "pdb$seed" close * ERROR at line 1: ORA-65017: seed pluggable database may not be dropped or altered As we have seen earlier, we can't alter the seed pluggable database state in the default mode. We need to set the hidden parameter _oracle_script to TRUE to be able to change the seed pluggable database state. ---// ---// dropping PDB$SEED by setting _oracle_script to TRUE //--- ---// SQL> alter session set "_oracle_script"=true; Session altered. SQL> alter pluggable database "pdb$seed" close; Pluggable database altered. SQL> drop pluggable database "pdb$seed" including datafiles; Pluggable database dropped. ---// ---// validate PDB$SEED is dropped //--- ---// SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 3 CDB2_PDB_1 READ WRITE NO 4 CDB2_PDB_2 MOUNTED ---// ---// reset _oracle_script to FALSE //--- ---// SQL> alter session set "_oracle_script"=false; Session altered. We have dropped the unusable seed pluggable database (PDB$SEED) from the container. Now, we can recreate it using any of the existing pluggable databases. However, to be able to recreate the seed pluggable database from an existing pluggable database, the existing pluggable database needs to be in READ-ONLY mode. We will basically clone an existing pluggable database to create the seed pluggable database. Let's put one of the existing pluggable databases in READ-ONLY mode. ---// ---// putting existing PDB in READ-ONLY mode for cloning //--- ---// SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 3 CDB2_PDB_1 READ WRITE NO 4 CDB2_PDB_2 READ WRITE NO SQL> alter pluggable database CDB2_PDB_2 close; Pluggable database altered. SQL> alter pluggable database CDB2_PDB_2 open read only; Pluggable database altered. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 3 CDB2_PDB_1 READ WRITE NO 4 CDB2_PDB_2 READ ONLY NO We have kept the existing pluggable database CDB2_PDB_2 in READ-ONLY mode as a prerequisite for cloning. Let's recreate the seed pluggable database using this existing pluggable database as shown below. ---// ---// recreate PDB$SEED by cloning existing PDB //--- ---// SQL> create pluggable database "pdb$seed" from CDB2_PDB_2 2 file_name_convert=('/data/oracle/orpcdb2/cdb2_pdb_2/','/data/oracle/orpcdb2/pdbseed/') 3 ; Pluggable database created. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED MOUNTED 3 CDB2_PDB_1 READ WRITE NO 4 CDB2_PDB_2 READ ONLY NO SQL> select name from v$datafile where con_id=2; NAME ------------------------------------------------------------ /data/oracle/orpcdb2/pdbseed/system01.dbf /data/oracle/orpcdb2/pdbseed/sysaux01.dbf /data/oracle/orpcdb2/pdbseed/users01.dbf ---// ---// put the existing PDB back in READ-WRITE mode //--- ---// SQL> alter pluggable database CDB2_PDB_2 close; Pluggable database altered. SQL> alter pluggable database CDB2_PDB_2 open read write; Pluggable database altered. We have successfully recreated the seed pluggable database by cloning an existing pluggable database. Now, we need to put the seed pluggable database (PDB$SEED) in READ-ONLY mode to be able to use it for other pluggable database creation.
However, we cannot directly put the new seed database in READ-ONLY mode. We need to first open it in READ-WRITE mode for data dictionary synchronization. ---// ---// set _oracle_script to TRUE to be able to alter PDB$SEED state //--- ---// SQL> alter session set "_oracle_script"=true; Session altered. ---// ---// READ-ONLY open is not allowed for the first time //--- ---// SQL> alter pluggable database PDB$SEED open read only; alter pluggable database PDB$SEED open read only * ERROR at line 1: ORA-65085: cannot open pluggable database in read-only mode ---// ---// open PDB$SEED in READ-WRITE mode for dictionary synchronization //--- ---// SQL> alter pluggable database PDB$SEED open read write; Pluggable database altered. ---// ---// put PDB$SEED back in READ-ONLY mode after dictionary synchronization //--- ---// SQL> alter pluggable database PDB$SEED close; Pluggable database altered. SQL> alter pluggable database PDB$SEED open read only; Pluggable database altered. ---// ---// validate PDB$SEED state //--- ---// SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 CDB2_PDB_1 READ WRITE NO 4 CDB2_PDB_2 READ WRITE NO ---// ---// reset _oracle_script to FALSE //--- ---// SQL> alter session set "_oracle_script"=false; Session altered. At this stage, we have completely recovered (recreated) the seed pluggable database and it can now be used to create new pluggable databases. Recover PDB$SEED from another PDB$SEED (without backup) In the previous section, we explored the method for recreating a missing/corrupted seed pluggable database by cloning an existing pluggable database. However, that method requires the existing pluggable database to be kept in READ-ONLY mode for the purpose of cloning. When cloning an existing pluggable database, we will also clone everything in it, probably including application-specific objects which we do not want to keep in the seed pluggable database. Therefore, it may not be a good idea to recreate the seed pluggable database by cloning an existing pluggable database. We have another option for recreating a seed pluggable database, which can be used to avoid cloning an existing pluggable database and prevent copying application-related objects into the seed pluggable database. In this method, we can use a seed pluggable database (PDB$SEED) from another (compatible) container database to recreate the unusable seed pluggable database (PDB$SEED) in the required container database as shown in the following diagram. In the following example, I will use the seed pluggable database (PDB$SEED) from the remote container database ORPCDB1 to recreate the seed pluggable database in the local container database ORPCDB2. ---// ---// creating DB Link from local to remote container //--- ---// SQL> create database link remote_seed_link 2 connect to system identified by oracle 3 using '//labserver1.oraclebuffer.com:1521/orpcdb1' 4 ; Database link created. ---// ---// validate the remote database link //--- ---// SQL> select name,cdb from v$database@remote_seed_link; NAME CDB --------------- --- ORPCDB1 YES SQL> select con_id,name,open_mode from v$pdbs@remote_seed_link where name='PDB$SEED'; CON_ID NAME OPEN_MODE ---------- --------------- ---------- 2 PDB$SEED READ ONLY We have created a database link from the local container database (ORPCDB2) to the remote container database (ORPCDB1).
Now, we will use this link to create the XML manifest file, which will represent the structure of the remote seed pluggable database (PDB$SEED). ---// ---// create XML manifest file representing remote PDB$SEED structure //--- ---// SQL> exec DBMS_PDB.DESCRIBE(pdb_descr_file => ' /home/oracle/seed_orpcdb1.xml ', pdb_name => ' pdb$seed@REMOTE_SEED_LINK '); PL/SQL procedure successfully completed. SQL> !ls -lrt /home/oracle/seed_orpcdb1.xml -rw-r--r-- 1 oracle dba 5344 Apr 29 01:04 /home/oracle/seed_orpcdb1.xml We have generated the XML manifest file describing the remote seed pluggable database structure. Now, we need to copy the datafiles belonging to the remote seed pluggable database over to the local container database server. Let's identify the remote seed pluggable database's datafiles that need to be copied. ---// ---// identify remote PDB$SEED's datafiles to be copied //--- ---// SQL> select name from v$datafile@REMOTE_SEED_LINK where con_id=2; NAME ------------------------------------------------------------ /data/oracle/orpcdb1/pdbseed/system01.dbf /data/oracle/orpcdb1/pdbseed/sysaux01.dbf /data/oracle/orpcdb1/pdbseed/users01.dbf SQL> select name from v$tempfile@REMOTE_SEED_LINK where con_id=2; NAME ------------------------------------------------------------ /data/oracle/orpcdb1/pdbseed/temp01.dbf Let's copy the files from the remote container database server to the local container database server (under the location where we want the local seed's datafiles to be present). ---// ---// copying remote seed pluggable database's datafiles to local container database server //--- ---// [oracle@labserver2 ~]$ scp oracle@labserver1:/data/oracle/orpcdb1/pdbseed/*.dbf /data/oracle/orpcdb2/pdbseed/ oracle@labserver1's password: sysaux01.dbf 100% 220MB 44.0MB/s 00:05 system01.dbf 100% 225MB 32.1MB/s 00:07 temp01.dbf 100% 62MB 20.7MB/s 00:03 users01.dbf 100% 500MB 38.5MB/s 00:13 [oracle@labserver2 ~]$ Now, we can recreate the seed pluggable database in the local container database (ORPCDB2) using the remote pluggable database's XML manifest file that we had generated earlier. Let's recreate the seed pluggable database in the local container database. ---// ---// create local seed pluggable database using remote seed's XML manifest file //--- ---// SQL> create pluggable database "pdb$seed" using ' /home/oracle/seed_orpcdb1.xml ' 2 source_file_name_convert=('/data/oracle/orpcdb1/pdbseed/','/data/oracle/orpcdb2/pdbseed/') 3 NOCOPY 4 TEMPFILE REUSE 5 ; Pluggable database created. ---// ---// validate PDB$SEED creation //--- ---// SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED MOUNTED 3 CDB2_PDB_1 READ WRITE NO 4 CDB2_PDB_2 READ WRITE NO At this stage, we have recreated the seed pluggable database (PDB$SEED) using another (remote) seed pluggable database. We can now open the seed pluggable database for dictionary synchronization, followed by keeping the seed in READ-ONLY mode as shown in the previous section. ---// ---// set _oracle_script to TRUE to be able to alter PDB$SEED state //--- ---// SQL> alter session set "_oracle_script"=true; Session altered.
---// ---// READ-ONLY open is not allowed for the first time //--- ---// SQL> alter pluggable database PDB$SEED open read only; alter pluggable database PDB$SEED open read only * ERROR at line 1: ORA-65085: cannot open pluggable database in read-only mode ---// ---// open PDB$SEED in READ-WRITE mode for dictionary synchronization //--- ---// SQL> alter pluggable database PDB$SEED open read write; Pluggable database altered. ---// ---// put PDB$SEED back in READ-ONLY mode after dictionary synchronization //--- ---// SQL> alter pluggable database PDB$SEED close; Pluggable database altered. SQL> alter pluggable database PDB$SEED open read only; Pluggable database altered. ---// ---// validate PDB$SEED state //--- ---// SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 CDB2_PDB_1 READ WRITE NO 4 CDB2_PDB_2 READ WRITE NO ---// ---// reset _oracle_script to FALSE //--- ---// SQL> alter session set "_oracle_script"=false; Session altered. We have now completely recreated the seed pluggable database and kept it in the desired state (READ-ONLY). We should now be able to create new pluggable databases using this seed pluggable database (PDB$SEED). Conclusion In this article, we have explored different methods available for restoring or recreating a seed pluggable database (PDB$SEED) in the event of the seed database being in an unusable state. It is always recommended to take periodic backups of the seed pluggable database even though it is just a template for creating other pluggable databases. In the presence of a VALID backup, the seed pluggable database restoration/recovery becomes hassle-free and straightforward. However, we still have ways (as discussed throughout this article) to recreate the seed pluggable database in the absence of a VALID backup.
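The conclusion's advice about periodic seed backups is easy to act on by scheduling the same RMAN commands already shown in the transcripts above. The following is only a minimal sketch reusing the exact command and backup location from the first section; treat the format string and destination path as placeholders for your own environment:

##--- Minimal sketch: periodic PDB$SEED backup, reusing the commands shown earlier ---##
##--- Run from "rman target /" while connected to the root container ---##
RMAN> backup database "pdb$seed" format '/backup/orpcdb2/seed/orpcdb2_seed_%U.bkp';
RMAN> list backup of datafile 2;

Keeping the LIST BACKUP check in the same job makes it obvious straight away if the backup piece ever stops being produced.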

Forum Post: RE: Mapping of US States with the two character State ID

Hmm. The reply I sent by email appears to be somewhat truncated above. I wonder why? Never mind, here you are, this might help. There may be some "states" that are overseas territories, but you can either sort them out or leave them in accordingly. The following should give you what you want: CREATE TABLE STATES_TABLE ( STATE_CODE CHAR(2) NOT NULL, STATE_NAME VARCHAR2(35) NOT NULL ); INSERT INTO STATES_TABLE VALUES ('AL', 'Alabama'); INSERT INTO STATES_TABLE VALUES ('AK', 'Alaska'); INSERT INTO STATES_TABLE VALUES ('AS', 'American Samoa'); INSERT INTO STATES_TABLE VALUES ('AZ', 'Arizona'); INSERT INTO STATES_TABLE VALUES ('AR', 'Arkansas'); INSERT INTO STATES_TABLE VALUES ('CA', 'California'); INSERT INTO STATES_TABLE VALUES ('CO', 'Colorado'); INSERT INTO STATES_TABLE VALUES ('CT', 'Connecticut'); INSERT INTO STATES_TABLE VALUES ('DE', 'Delaware'); INSERT INTO STATES_TABLE VALUES ('DC', 'District Of Columbia'); INSERT INTO STATES_TABLE VALUES ('FM', 'Federated States Of Micronesia'); INSERT INTO STATES_TABLE VALUES ('FL', 'Florida'); INSERT INTO STATES_TABLE VALUES ('GA', 'Georgia'); INSERT INTO STATES_TABLE VALUES ('GU', 'Guam'); INSERT INTO STATES_TABLE VALUES ('HI', 'Hawaii'); INSERT INTO STATES_TABLE VALUES ('ID', 'Idaho'); INSERT INTO STATES_TABLE VALUES ('IL', 'Illinois'); INSERT INTO STATES_TABLE VALUES ('IN', 'Indiana'); INSERT INTO STATES_TABLE VALUES ('IA', 'Iowa'); INSERT INTO STATES_TABLE VALUES ('KS', 'Kansas'); INSERT INTO STATES_TABLE VALUES ('KY', 'Kentucky'); INSERT INTO STATES_TABLE VALUES ('LA', 'Louisiana'); INSERT INTO STATES_TABLE VALUES ('ME', 'Maine'); INSERT INTO STATES_TABLE VALUES ('MH', 'Marshall Islands'); INSERT INTO STATES_TABLE VALUES ('MD', 'Maryland'); INSERT INTO STATES_TABLE VALUES ('MA', 'Massachusetts'); INSERT INTO STATES_TABLE VALUES ('MI', 'Michigan'); INSERT INTO STATES_TABLE VALUES ('MN', 'Minnesota'); INSERT INTO STATES_TABLE VALUES ('MS', 'Mississippi'); INSERT INTO STATES_TABLE VALUES ('MO', 'Missouri'); INSERT INTO STATES_TABLE VALUES ('MT', 'Montana'); INSERT INTO STATES_TABLE VALUES ('NE', 'Nebraska'); INSERT INTO STATES_TABLE VALUES ('NV', 'Nevada'); INSERT INTO STATES_TABLE VALUES ('NH', 'New Hampshire'); INSERT INTO STATES_TABLE VALUES ('NJ', 'New Jersey'); INSERT INTO STATES_TABLE VALUES ('NM', 'New Mexico'); INSERT INTO STATES_TABLE VALUES ('NY', 'New York'); INSERT INTO STATES_TABLE VALUES ('NC', 'North Carolina'); INSERT INTO STATES_TABLE VALUES ('ND', 'North Dakota'); INSERT INTO STATES_TABLE VALUES ('MP', 'Northern Mariana Islands'); INSERT INTO STATES_TABLE VALUES ('OH', 'Ohio'); INSERT INTO STATES_TABLE VALUES ('OK', 'Oklahoma'); INSERT INTO STATES_TABLE VALUES ('OR', 'Oregon'); INSERT INTO STATES_TABLE VALUES ('PW', 'Palau'); INSERT INTO STATES_TABLE VALUES ('PA', 'Pennsylvania'); INSERT INTO STATES_TABLE VALUES ('PR', 'Puerto Rico'); INSERT INTO STATES_TABLE VALUES ('RI', 'Rhode Island'); INSERT INTO STATES_TABLE VALUES ('SC', 'South Carolina'); INSERT INTO STATES_TABLE VALUES ('SD', 'South Dakota'); INSERT INTO STATES_TABLE VALUES ('TN', 'Tennessee'); INSERT INTO STATES_TABLE VALUES ('TX', 'Texas'); INSERT INTO STATES_TABLE VALUES ('UT', 'Utah'); INSERT INTO STATES_TABLE VALUES ('VT', 'Vermont'); INSERT INTO STATES_TABLE VALUES ('VI', 'Virgin Islands'); INSERT INTO STATES_TABLE VALUES ('VA', 'Virginia'); INSERT INTO STATES_TABLE VALUES ('WA', 'Washington'); INSERT INTO STATES_TABLE VALUES ('WV', 'West Virginia'); INSERT INTO STATES_TABLE VALUES ('WI', 'Wisconsin'); INSERT INTO STATES_TABLE VALUES ('WY', 'Wyoming'); COMMIT; -- 
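To answer the original mapping question once this lookup is in place, a simple join does the rest. A quick illustrative sketch only; the CUSTOMERS table and its STATE_CODE column are hypothetical stand-ins for whatever table holds your two-character IDs:

-- Hypothetical usage: CUSTOMERS and its STATE_CODE column are placeholders.
SELECT c.customer_id,
       c.state_code,
       s.state_name
FROM   customers    c
JOIN   states_table s
ON     s.state_code = c.state_code;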
Cheers, Norm. [TeamT]

Forum Post: RE: Mapping of US States with the two character State ID

Sorry, the web page appears to have fragged the line spacing. Sigh.

Forum Post: RE: Mapping of US States with the two character State ID

And I wouldn't bother putting an index or primary key on the above. Oracle will probably never use it as the table will fit into one or two 8k blocks and will always be read with a full scan. :-)

Wiki Page: The Oracle User Experience Rapid Development Kit

If you have seen Oracle Cloud Applications, you will be familiar with the simplified UI that is the first view every user gets of the application. Based on Oracle's extensive usability research, the simplified UI combines a number of best practices with a modern and attractive visual appearance. If you want to build applications with the same look, Oracle is helping you with the User Experience Rapid Development Kit (UX RDK). Getting the UX RDK You can download the UX RDK from the Usable Apps web site. The kit consists of the following: The e-book Oracle Simplified User Experience Design Patterns for Oracle Applications Cloud Service, Release 10 The e-book Using the Oracle Rapid Development Kit to Build and Deploy Oracle Applications Cloud Simplified UIs, Release 10 The UX RDK Wireframe Template, Release 10 The AppsCloudUIKit sample application version 1.0.1 The User Experience Design Patterns The UX Design Patterns book explains how to build applications that your users can instantly use. The idea of design patterns is to build your application like other applications, so your users can leverage existing skills and instantly recognize specific layouts, button placements, and terminology. All developers involved in design workshops with the users should read chapters 1, 2 and 3 to understand the basic UX design philosophy, page types and page layouts. They should also familiarize themselves with the contents of chapters 4 through 7 to understand what components are recommended in which situations, referring back to the book as necessary when developing actual pages. Chapters 8 and 9 cover specialized topics that you can read if necessary, and Appendix A contains a lot of real-life examples of the design patterns in practice as implemented in Oracle Cloud Applications. The Wireframe Template Part of the process of building applications that really meet the needs of your users is to have a workshop with the users, sketching out screens that gather the data your users need in a way that fits their way of working. The first few iterations are typically done by hand on flipcharts, but once your page flow and individual page design start to crystallize, you normally create a detailed wireframe. This wireframe shows in some detail what the users can expect the final application to look like. To create these, you can use the wireframe template PowerPoint. This presentation contains PowerPoint illustrations of most of the pages and components in the AppsCloudUIKit sample application, illustrating many of the UX design patterns found in the UX Design Patterns e-book. You can take copies of these pages as starting points for your own wireframes, cutting and pasting together your own wireframe from Oracle's examples. The Rapid Development Kit The rapid development kit ZIP file unpacks into a self-contained Oracle JDeveloper workspace that you can immediately open and run to see a number of fully implemented examples. It is based on JDeveloper 11.1.1.9.0, and will not work in earlier versions. You should start by renaming the AppsCloudUIKit folder to the name of your project, and also change the name of the AppsCloudUIKit.jws file in the folder to the same name. Then open the application workspace in JDeveloper. It contains the following projects: DemoCRM DemoData DemoFIN DemoHCM DemoMaster UIKitCommon These projects are related in a proper enterprise architecture following best practice for ADF development as described in my book Oracle ADF Enterprise Application Development Made Simple.
The foundation layer in the UX RDK sample application consists of the UIKitCommon and DemoData projects. The UIKitCommon project contains two page templates with their associated Java code as well as five ADF declarative components with code. These components implement functionality that is commonly used in applications built with the simplified UI, but which is not part of the ADF standard component library, for example a CardViewListViewDC component that implements the Card/List View UX design pattern. It also contains the ADF skin the demo application uses, together with all the necessary icons and other images. The DemoData project contains Java classes that serve as the data layer of the sample application. In your application, your data will typically come from ADF business components based on an Oracle database, but in order to allow the demo application to run without a database, it uses simple Java classes as data sources. The subsystems are DemoCRM, DemoFIN, and DemoHCM. The DemoCRM project contains the contacts task flow with associated pages, the opportunities flow with associated pages, and five CRM infolets used on the infolets page of the sample application. The DemoFIN project contains the financial reports flow with its page, the global sales revenue flow with a visualization page, and three financial infolets for the infolets page. The DemoHCM project contains the My Teams flow with its page, a number of task flows and pages implementing the Team Performance part of the application, and three HCM infolets for the infolets page. The master layer is the DemoMaster project. This project contains the application unbounded task flow with the welcome springboard page (with a grid of icons) and the filmstrip page (with icons along the top). It also contains a number of bounded task flows used on these pages. To run the application, you open the master project, right-click on the welcome page and choose Run. Building from Examples When you first start out with the UX RDK, you are most likely to be building fairly simple applications. Since the sample application contains a lot of pretty advanced functionality, you will mainly be removing functionality from existing pages, adding a little of your own. On the other hand, the sample application uses Java classes as data sources, whereas your application is likely to be using business components connected to the application using ADF bindings. In a later article, I'll get back to how to strip out unneeded parts from the UX RDK and how to connect ADF Business Components. Using the Building Blocks Once you have accumulated some experience with the UX RDK, you might want to build your own pages from scratch instead of copying the pre-built pages from Oracle. To do this, you simply add the UIKitCommon ADF library to your project and use the templates, skin, images, and declarative components from the library.

Blog Post: Loading Data From CSV File Into MySQL Table

This is my first ever post on MySQL; the reason is that I just started working in MySQL very recently. My requirement of the day was very simple: I had to load a set of data rows from a csv file into a table. I was very sure that there should be something similar to sqlldr in MySQL, and this is what I found to complete my work. 1) There is a default data_dir from which this “LOAD DATA” command picks the data (this is found in the data_dir parameter in /etc/mysql/my.cnf) 2) Copy your CSV file into the /var/lib/mysql/ directory Note:- Each schema in your database will have a directory of its own. ubuntu@omegha-erp:~$ mysql -uroot -p*** Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 6784 Server version: 5.5.47-0ubuntu0.14.04.1 (Ubuntu) Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> use erp; Reading table information for completion of table and column names You can turn off this feature to get a quicker startup with -A Database changed mysql> LOAD DATA INFILE 'Stock1.csv' INTO TABLE stockmaster; ERROR 1452 (23000): Cannot add or update a child row: a foreign key constraint fails (`erp`.`stockmaster`, CONSTRAINT `stockmaster_ibfk_1` FOREIGN KEY (`categoryid`) REFERENCES `stockcategory` (`categoryid`)) 3) Since my data load failed with a foreign key constraint violation and I was yet to prepare the master table data for load, I decided to disable the constraints to proceed with my data load, and this is how I disabled the constraint check. mysql> SET foreign_key_checks = 0; Query OK, 0 rows affected (0.01 sec) 4) LOAD DATA again and this time it was successful. But all my CSV values in the csv file got loaded into the first column itself, so I decided to delete the rows and LOAD DATA again correctly. mysql> LOAD DATA INFILE 'Stock1.csv' INTO TABLE stockmaster; Query OK, 161 rows affected, 4669 warnings (0.10 sec) Records: 161 Deleted: 0 Skipped: 0 Warnings: 4669 mysql> delete from stockmaster; Query OK, 164 rows affected (0.08 sec) mysql> LOAD DATA INFILE 'Stock1.csv' INTO TABLE stockmaster COLUMNS TERMINATED BY ',' LINES TERMINATED BY '\n'; Query OK, 161 rows affected, 214 warnings (0.10 sec) Records: 161 Deleted: 0 Skipped: 0 Warnings: 214 mysql>
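One cleanup the transcript above does not show is turning the foreign key checks back on, and a couple of extra clauses make the load more robust if your file has quoted fields and a header row. This is only a sketch: Stock1.csv and stockmaster come from the example above, while the quoting and header-row assumptions are mine and should be adjusted to your actual file.

-- Re-enable constraint checking once the master data is in place.
SET foreign_key_checks = 1;

-- A more defensive load: handles quoted fields and skips a header line.
LOAD DATA INFILE 'Stock1.csv'
INTO TABLE stockmaster
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;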

Blog Post: When SQLT is not enough

When tuning a SQL query, I rely on a SQLT report because it has all kinds of pertinent information including—to name just a few—optimizer settings, indexes, statistics, plan history, and view definitions. However, recently I came across a case where the SQLT report was not enough. A newly created environment was not performing as well as the environment it was intended to replace. I looked at the SQLT report for one of the queries and could not find any explanation. I eventually found the answer in the AWR report. It showed that the bulk of processing time was due to recursive queries performed during query parsing. This was obviously the root cause. SQL ID 47r1y8yn34jmj SELECT default$ FROM col$ WHERE rowid=:1 SQL ID 4b4wp0a8dvkf0 SELECT executions, end_of_fetch_count, elapsed_time/px_servers elapsed_time, cpu_time /px_servers cpu_time, buffer_gets /executions buffer_gets FROM (SELECT SUM(executions_delta) AS EXECUTIONS, SUM( CASE WHEN px_servers_execs_delta > 0 THEN px_servers_execs_delta ELSE executions_delta END) AS px_servers, SUM(end_of_fetch_count_delta) AS end_of_fetch_count, SUM(elapsed_time_delta) AS ELAPSED_TIME, SUM(cpu_time_delta) AS CPU_TIME, SUM(buffer_gets_delta) AS BUFFER_GETS FROM DBA_HIST_SQLSTAT s, V$DATABASE d, DBA_HIST_SNAPSHOT sn WHERE s.dbid = d.dbid AND bitand(NVL(s.flag, 0), 1) = 0 AND sn.end_interval_time > (SELECT systimestamp at TIME ZONE dbtimezone FROM dual ) - 7 AND s.sql_id = :1 AND s.snap_id = sn.snap_id AND s.instance_number = sn.instance_number AND s.dbid = sn.dbid AND parsing_schema_name = :2 ) SQL ID frjd8zfy2jfdq SELECT executions, end_of_fetch_count, elapsed_time/px_servers elapsed_time, cpu_time /px_servers cpu_time, buffer_gets /executions buffer_gets FROM (SELECT SUM(executions) AS executions, SUM( CASE WHEN px_servers_executions > 0 THEN px_servers_executions ELSE executions END) AS px_servers, SUM(end_of_fetch_count) AS end_of_fetch_count, SUM(elapsed_time) AS elapsed_time, SUM(cpu_time) AS cpu_time, SUM(buffer_gets) AS buffer_gets FROM gv$sql WHERE executions > 0 AND sql_id = :1 AND parsing_schema_name = :2 ) Google and M.O.S quickly led us to the solution; we simply used the three SQL IDs as the search strings. My point is that sometimes a SQLT report is not enough to solve a SQL performance problem.
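If you want to quantify how much of the overhead those recursive statements are responsible for, the AWR history can be queried directly. A minimal sketch, assuming the AWR retention covers the period of interest and using the three SQL IDs quoted above; the snapshot-range bind variables are placeholders to fill in:

-- Rough sizing of the recursive SQL overhead from AWR history.
-- :begin_snap and :end_snap are placeholders for the snapshot range of interest.
SELECT s.sql_id,
       SUM(s.executions_delta)                     AS execs,
       ROUND(SUM(s.elapsed_time_delta) / 1e6, 1)   AS elapsed_secs,
       ROUND(SUM(s.cpu_time_delta)     / 1e6, 1)   AS cpu_secs
FROM   dba_hist_sqlstat s
WHERE  s.sql_id IN ('47r1y8yn34jmj', '4b4wp0a8dvkf0', 'frjd8zfy2jfdq')
AND    s.snap_id BETWEEN :begin_snap AND :end_snap
GROUP  BY s.sql_id
ORDER  BY elapsed_secs DESC;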

Blog Post: Snap Clone Via Self Service Portal – Part 1

A friend asked: “I am setting up the ‘PDB as a service portal’ in Enterprise Manager Cloud Control 13c. I want to ask how I can create a profile for a PDB, as this is a requirement for creating a template for the self service portal. “The CDB with the PDB is stored on ZFS storage and this has a PDB called ‘zecorp_snap_clone_pdb’. “However, in my Enterprise Manager screen titled ‘Create Database Provisioning Profile: Reference Target’, the box for snap cloning is not selectable and the choice for the reference target is only the CDB ‘zecorp’, and not the PDB. I looked in the documentation and I think I have all the prerequisites configured. Can you advise?” The answer was: Apologies, the first thing to understand is that Snap Clone via the Enterprise Manager self service portal is currently limited to databases only. The ability to Snap Clone PDBs via the Self Service Portal is not in the current Enterprise Manager 13c (13.1) release. However, you can still perform a snap clone of PDBs via the admin-driven flows. The documentation here has more details.

Wiki Page: Exadata – Discover Cluster and Database in OEM 12c

Introduction To manage an Exadata Database Machine you first discover the Exadata machine in OEM 12c. Discovering an Exadata Machine in OEM 12c consists of the following steps: Installing Enterprise Manager 12c Agent on the Compute nodes. Using Agent Automation Kit: http://www.toadworld.com/platforms/oracle/w/wiki/11175.installing-enterprise-manager-12c-agent-on-exadata-using-agent-automation-kit Discover Exadata Database Machine in OEM 12c. Running Guided Discovery Process: http://www.toadworld.com/platforms/oracle/w/wiki/11418.discover-exadata-database-machine-in-oracle-enterprise-manager-12c Post Discovery Steps. http://www.toadworld.com/platforms/oracle/w/wiki/11554.exadata-configure-cisco-switch-and-pdu-snmp-for-oem-12c-monitoring-post-exadata-discovery-setups Discover the Cluster and Oracle Databases. In this article I will demonstrate how to “Discover Cluster and Oracle Databases” on Exadata Database Machines. Once the Exadata database machine has been discovered, the next step is to discover the cluster and databases running on the machine. Assumptions A fully functional OEM 12c server environment OEM 12c Agent is installed on all Exadata Compute Nodes Exadata Database Machine has been discovered in OEM 12c. Oracle user password OEM 12c credentials (SYSMAN or any other privileged user) Environment Exadata Model X5-2 Full Rack HC 4TB Exadata Components Storage Cell (14), Compute node (8) & Infiniband Switch (2) Exadata Storage cells DBM01CEL01 – DBM01CEL14 Exadata Compute nodes DBM01DB01 – DBM01DB08 Exadata Software Version 12.1.2.1.1.150316.2 Exadata DB Version 11.2.0.4 BP15 Steps to Discover the Cluster and Oracle Databases in OEM 12c Discover Cluster Login to Oracle Enterprise Manager 12c https://oem12c.mydomain.com:1159/em On the OEM 12c home page, select Setup --> Add Targets --> Add Targets Manually. Select “Add Target Using Guided Process”, from the drop down list select “Oracle Cluster and High Availability Services” and click on the “Add Using Guided Process” button. Click the Search symbol, enter the hostname of the first compute node and click Search. Select the Compute node name and click the Select button. Click the “Discover Target” button. Now the cluster discovery process will start. It will take a few minutes to complete. On this page verify that the Cluster properties and host names are correct. Click on “Set Target Global Properties” to set the contact, location, line of business and so on. This will apply the necessary monitoring templates if defined. After making changes, click “Save”. The Cluster and HA targets will now have a status of pending discovery. On the home page, enter the target name (example: dm01) and search for the target; it will show that the targets are pending discovery. We can see that the cluster status is up, and all the cluster high availability services have a status of UP. This completes the discovery of the Cluster and High Availability services. Discover Database Login to Oracle Enterprise Manager 12c https://oem12c.mydomain.com:1159/em On the OEM 12c home page, select Setup --> Add Targets --> Add Targets Manually. Select “Add Target Using Guided Process”, from the drop down list select “Oracle Database, Listener and Automatic Storage Management” and click on the “Add Using Guided Process” button. Click the Search symbol, enter the hostname of the first compute node and click Search. Select the Compute node name and click the Select button.
Check the “on all hosts in the cluster” radio button and click Next. “Target Discovery is in progress” is now displayed. The Database Discovery Result page shows the list of databases, ASM instances and Listeners running on the cluster. Verify the list, enter the DBSNMP user’s password for all databases and the ASMSNMP user’s password for the ASM instance (the default ASMSNMP password on Exadata is welcome). Click Next. Click on “Set Target Global Properties” to set the contact, location, line of business and so on. This will apply the necessary monitoring templates if defined. After making changes, click “Save”. Review the page and click Save. The cluster database and instances will now have a status of pending discovery. On the home page, enter the target name (example: dbm01) and search for the target; it will show that the targets are pending discovery. We can see that the cluster database is up, and all the database instances have a status of UP. This completes the discovery of the database, Listener and Automatic Storage Management. Conclusion In this article we have learnt how to perform “Discovery of Cluster and Databases” on Exadata Database Machines. We have configured the Cluster, Database, ASM and Listener for Oracle Enterprise Manager 12c monitoring.
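One small pre-flight check that can save a discovery retry: the DBSNMP account whose password the wizard asks for is locked by default on many installations. A quick check, assuming SQL*Plus access to each database being discovered; unlocking the account, if needed, is a separate and deliberate step:

-- Confirm the monitoring account is usable before entering its password in the wizard.
SELECT username, account_status, lock_date
FROM   dba_users
WHERE  username = 'DBSNMP';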