We live in exciting times: Oracle Database 12.2 for Exadata was released earlier today. The 12.2 database had already been available on the Exadata Express Cloud Service and Database as a Service for a few months; today it was released for Exadata on-premises, five days ahead of Oracle's initially announced date of 15th February. The documentation says that to run a 12.2 database you need at least Exadata storage software 12.1.2.2.1, but you are better off with the recommended version, 12.2.1.1.0, which was released just a few days ago. Here are a few notes on running a 12.2 database on Exadata:
- Recommended Exadata storage software: 12.2.1.1.0 or higher.
- Supported Exadata storage software: 12.1.2.2.1 or higher.
- Full Exadata offload functionality for Database 12.2 (including Smart Scan offloaded filtering and storage indexes) and IORM support for Database 12.2 container databases and pluggable databases require Exadata 12.2.1.1.0 or higher.
- The current Oracle Database and Grid Infrastructure version must be 11.2.0.3, 11.2.0.4, 12.1.0.1 or 12.1.0.2. Upgrades from 11.2.0.1 or 11.2.0.2 directly to 12.2.0.1 are not supported.
There is a completely new note on how to upgrade to 12.2 GI and RDBMS on Exadata: 12.2 Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running 11.2.0.3 and later on Oracle Linux (Doc ID 2111010.1). The 12.2 GI and RDBMS binaries are available from MOS as well as edelivery:
- Patch 25528839: Grid Software clone version 12.2.0.1.0
- Patch 25528830: Database Software clone version 12.2.0.1.0
The recommended Exadata storage software for running the 12.2 RDBMS is described in Exadata 12.2.1.1.0 release and patch (21052028) (Doc ID 2207148.1). For more details about the 12.2.1.1.0 Exadata storage software refer to this slide deck, and of course there is a link to the 12.2 documentation here, as we all need to start getting familiar with it. Also, last week Oracle released the February version of OEDA to support the new Exadata SL6 hardware. It does not support the 12.2 database yet, so I expect another release in February to support 12.2 GI and RDBMS. Happy upgrading!
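Before starting an upgrade it is worth confirming that the source environment is actually on one of the supported releases listed above. A minimal sanity check, assuming a SQL*Plus session on the database you plan to upgrade (these are standard dictionary views, not part of the MOS note itself):

-- Direct upgrade to 12.2.0.1 is only supported from 11.2.0.3, 11.2.0.4,
-- 12.1.0.1 or 12.1.0.2, so check the current release first.
SQL> SELECT instance_name, version FROM v$instance;
SQL> SELECT banner FROM v$version;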
Blog Post: Oracle Database 12.2 released for Exadata on-premises
Blog Post: Oracle Database 12cR2 Documentation is here
Finally, the Oracle 12cR2 documentation is here. "All Books for Oracle® Database Online" is here. Cheers, Osama
Wiki Page: Near Zero Downtime PDB Relocation in Oracle Database 12cR2
Introduction

Oracle introduced Pluggable Databases in Oracle 12.1.0.1 back in 2013, and it has been enhancing Multitenant ever since. Oracle 12.1.0.1 provided features to convert non-CDBs to PDBs and some features to copy and move pluggable databases, and 12.1.0.2 added several cloning features (mostly remote). Before 12.2, however, all those features to create, copy, clone and move pluggable databases required downtime, because the source PDB had to be read-only. Downtime is a bad word for companies. To read how to clone, move and create PDBs in 12.1 you can read the following set of articles:
- How to create a PDB from a remote Non-CDB
- Cómo clonar una PDB desde un CDB remoto
- PDB Subset Cloning from a Remote non-CDB
- Creating a PDB with no data using PDB Metadata Cloning
- How to move a non-CDB into a PDB with DBMS_PDB
Beginning with Oracle Database 12.2.0.1 those features were enhanced and the downtime was replaced with the words "hot" or "online". Two features that I really like are "Hot Cloning" and "Online Relocation". They are essentially the same local and remote cloning features as in 12.1.0.2, but now they can be performed online: the source PDB can remain in read-write. First, let me explain the difference between Hot Cloning and Online Relocation:
- Hot Cloning: allows you to clone a pluggable database, either locally or remotely, without putting the source PDB in read-only mode. It can stay in read-write, receiving DMLs.
- Online Relocation: allows you to relocate a pluggable database to another CDB with near zero downtime. After the PDB is transferred, the source PDB in the source CDB is removed, which is why it is called a "relocation".
In this article we will discuss the "Near Zero Downtime PDB Relocation" feature. It depends on another new feature introduced in Oracle 12.2 called "Local Undo". If you don't know what Local Undo is, you can read some of my articles about it:
- ¿A bug after configuring Local Undo in Oracle 12.2?
- Oracle DB 12.2 Local Undo: PDB undo tablespace creation
- How to Enable and Disable Local Undo in Oracle 12.2
Online PDB Relocation uses a database link created in the target CDB that points to the CDB$ROOT of the source CDB. There are some privileges to grant, but those are covered later in the examples. Once the database link is created, the statement "CREATE PLUGGABLE DATABASE" is executed with the clause "RELOCATE" and the optional clause "AVAILABILITY (NORMAL|MAX|HIGH)". When RELOCATE is used, Oracle creates an identical pluggable database in the target CDB while the source PDB is still open in read-write. While the new PDB is being created in the target CDB, you can keep executing DMLs against the source PDB as if nothing were happening; that is why it is called "online". When the statement completes you have two identical PDBs, one in the source CDB and one in the target CDB. During this time the source PDB keeps generating redo, which is applied when the final "switch" is performed. That switch happens when the new PDB in the target CDB is opened in read-write: the source PDB is paused while the pending redo is applied to the new PDB and, once both are fully synchronized, Oracle applies undo data in the new PDB to roll back the uncommitted transactions that were running in the source PDB. Once the undo is applied, the source PDB is deleted (all of its datafiles) and client sessions can be redirected to the new PDB. Even if new sessions are created during this short step, Oracle can redirect them to the new PDB when the "AVAILABILITY" clause is used.
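In outline, the whole relocation comes down to three statements executed on the target CDB. The sketch below uses placeholder names (c##clone_admin, src_link, SRCCDB, mypdb); the walkthrough that follows uses the real environment:

-- On the target CDB: a database link pointing to CDB$ROOT of the source CDB
SQL> create database link src_link connect to c##clone_admin identified by secret using 'SRCCDB';

-- Copy phase: creates an identical PDB in the target CDB while the source stays read-write
SQL> create pluggable database mypdb from mypdb@src_link relocate availability max;

-- Relocation phase: opening the new PDB read-write applies the pending redo,
-- rolls back uncommitted transactions and removes the source PDB's datafiles
SQL> alter pluggable database mypdb open;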
With this feature, PDBs can now be relocated from one CDB to another with near zero downtime. In this article I will explain step by step how it works. First, let me show you the environment I am using:
- Source CDB: NuvolaCG
- Target CDB: Nuvola2
- Source PDB: pdbsource
- Database version: 12.2.0.1 (in both CDBs)
Both CDBs are running on Oracle Public Cloud (EE). The article has the following sections: Preparation, Copy Phase, Relocation Phase, Known Issues, Conclusion.

Preparation

Create a common user in the source CDB:
SQL> create user c##deiby identified by deiby container=all;
User created.
Grant privileges in the source CDB:
SQL> grant connect, sysoper, create pluggable database to c##deiby container=all;
Grant succeeded.
Create a database link in the target CDB pointing to CDB$ROOT of the source CDB:
SQL> create database link dblinktosource connect to c##deiby identified by deiby using 'NUVOLACG';
Database link created.
Both the source CDB and the target CDB are in archivelog mode and have Local Undo enabled:
SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';
PROPERTY_NAME PROPERTY_V
-------------------- ----------
LOCAL_UNDO_ENABLED TRUE

Copy Phase

Before the relocation process, the service your clients connect to is running in the source CDB:
[oracle@NuvolaDB Hash]$ lsnrctl service
Service "pdbsource.gtnuvolasa.oraclecloud.internal" has 1 instance(s).
Instance "NuvolaCG", status READY, has 1 handler(s) for this service...
Handler(s): "DEDICATED" established:0 refused:0 state:ready LOCAL SERVER
The command completed successfully
[oracle@NuvolaDB Hash]$
In another terminal I created a session in the source PDB and performed two INSERT operations, one committed and one left uncommitted. With this I will show you how committed and uncommitted transactions are handled:
SID SERIAL# USERNAME MACHINE
------ ---------- ---------- --------------------
367 44360 DGOMEZ NuvolaDB.compute-gtn
SQL> insert into test values ('Guatemala');
SQL> commit;
Commit complete.
SQL> insert into test values ('USA');
1 row created.
Start the copy phase. In this phase the datafiles are created in the new CDB and some redo is also applied:
SQL> create pluggable database pdbsource from pdbsource@dblinktosource keystore IDENTIFIED BY "Nuv0la#1" relocate availability max;
Pluggable database created.
- pdbsource: the name of the new PDB, which must be the same as the name of the source PDB.
- dblinktosource: the name of the database link.
- keystore identified by: not needed on-premises, but this is Oracle Public Cloud.
- relocate: the clause that makes this operation a PDB relocation.
- availability max: redirects new connections to the new PDB.
You can see that the status of the source PDB is now "RELOCATING":
SQL> select pdb_name, status from cdb_pdbs
PDB_NAME STATUS
-------------------- ----------
PDB$SEED NORMAL
PDBSOURCE RELOCATING
In this phase some redo is also applied, as you can see below:
create pluggable database pdbsource from pdbsource@dblinktosource keystore identified by * relocate availability max
Pluggable Database PDBSOURCE with pdb id - 3 is created as UNUSABLE.
local undo-1, localundoscn-0x000000000011f500
Applying media recovery for pdb-4099 from SCN 1206636 to SCN 1206649
thr-1, seq-39, logfile-/u03/app/oracle/fast_recovery_area/NUVOLACG/foreign_archivelog/PDBSOURCE/2017_02_12/o1_mf_1_39_2572397703_.arc, los-1199468, nxs-18446744073709551615
PDBSOURCE(3): Media Recovery Start
PDBSOURCE(3): Serial Media Recovery started
PDBSOURCE(3): Media Recovery Log /u03/app/oracle/fast_recovery_area/NUVOLACG/foreign_archivelog/PDBSOURCE/2017_02_12/o1_mf_1_39_2572397703_.arc
PDBSOURCE(3): Incomplete Recovery applied until change 1206649 time 02/12/2017 08:20:48
PDBSOURCE(3): Media Recovery Complete (Nuvola2)
Completed: create pluggable database pdbsource from pdbsource@dblinktosource keystore identified by * relocate availability max
Once this phase has completed there are two PDBs, one in the source CDB and one in the target CDB. The source PDB can still receive transactions; the transactions executed after the copy phase generate redo data that will be applied in the relocation phase.

Relocation Phase

In the terminal where I have the session open, I perform a couple more INSERTs. Note that these statements are executed after the copy phase, to show that the source PDB can still receive DMLs; that is why this is called "online":
SQL> rollback;
Rollback complete.
SQL> insert into test values ('Canada');
SQL> commit;
Commit complete.
SQL> insert into test values ('Nicaragua');
1 row created.
The relocation phase is performed when you open the new target PDB in read-write. The source PDB is paused, the new PDB is opened, and the source PDB is closed and its datafiles are deleted. (You may have to execute this twice; check the Known Issues section at the end of this article.)
SQL> alter pluggable database pdbsource open;
Pluggable database altered.
After the relocation phase is completed, in the source CDB you can still see the source PDB, but only its metadata; its datafiles were physically removed and the status of the PDB is "RELOCATED":
SQL> select pdb_name, status from cdb_pdbs
PDB_NAME STATUS
------------ ----------
PDB$SEED NORMAL
PDBSOURCE RELOCATED
And in the target CDB you will see the new PDB, opened in read-write and ready to receive sessions:
SQL> select pdb_name, status from cdb_pdbs
PDB_NAME STATUS
----------- ----------
PDB$SEED NORMAL
PDBSOURCE NORMAL
Now let's confirm that the PDB was indeed relocated online. The value 'Guatemala' was committed. The value 'USA' was rolled back (after the copy phase). The value 'Canada' was committed, and the value 'Nicaragua' was never committed (the transaction was left open). So only 'Guatemala' and 'Canada' should be present in the new PDB, since all the uncommitted transactions were rolled back in the relocation phase:
SQL> alter session set container=pdbsource;
Session altered.
SQL> select * from dgomez.test;
VALUE
--------------------
Guatemala
Canada
In this phase the redo generated after the copy phase is applied, and all the uncommitted transactions are rolled back using undo data.
There are some validations, and the service is relocated as well:
alter pluggable database pdbsource open
PDBSOURCE(3): Deleting old file#6 from file$
PDBSOURCE(3): Deleting old file#7 from file$
PDBSOURCE(3): Deleting old file#8 from file$
PDBSOURCE(3): Deleting old file#9 from file$
PDBSOURCE(3): Adding new file#6 to file$(old file#6)
PDBSOURCE(3): Adding new file#7 to file$(old file#7)
PDBSOURCE(3): Adding new file#8 to file$(old file#8)
PDBSOURCE(3): Adding new file#9 to file$(old file#9)
PDBSOURCE(3): Successfully created internal service pdbsource.gtnuvolasa.oraclecloud.internal at open
****************************************************************
Post plug operations are now complete.
Pluggable database PDBSOURCE with pdb id - 3 is now marked as NEW.
****************************************************************
PDBSOURCE(3):Pluggable database PDBSOURCE dictionary check beginning
PDBSOURCE(3):Pluggable Database PDBSOURCE Dictionary check complete
PDBSOURCE(3):Database Characterset for PDBSOURCE is US7ASCII
Pluggable database PDBSOURCE opened read write
Completed: alter pluggable database pdbsource open
In the middle of that output Oracle applies a semi-patching step. The service that our customers use to connect is now running in the new CDB. This is totally transparent to the customers; you don't have to send them a new connection string.
[oracle@NuvolaDB trace]$ lsnrctl service
Service "pdbsource.gtnuvolasa.oraclecloud.internal" has 2 instance(s).
Instance "Nuvola2", status READY, has 1 handler(s) for this service...
Handler(s): "DEDICATED" established:0 refused:0 state:ready LOCAL SERVER

Known Issues

Privileges: The documentation says "SYSDBA" or "SYSOPER"; however, I did a couple of tests with "SYSDBA" and it didn't work. I received the error "ORA-01031: insufficient privileges".
Target CDB in Shared Undo mode: The target CDB doesn't have to be in Local Undo mode; in that case the PDB being relocated is converted to Shared Undo. But you could have an issue here: if your source PDB is receiving several DMLs, when you try to open the new PDB in read-write you may get a message saying "unrecovered txns found". In that case you must clear those unrecovered transactions yourself and then re-execute "alter pluggable database open".
alter pluggable database pdbsource open
Applying media recovery for pdb-4099 from SCN 1207258 to SCN 1207394
thr-1, seq-39, logfile-/o1_mf_1_39_2572397703_.arc, los-1199468, nxs-18446744073709551615
PDBSOURCE(3):Media Recovery Start
PDBSOURCE(3):Serial Media Recovery started
PDBSOURCE(3):Media Recovery Log /o1_mf_1_39_2572397703_.arc
PDBSOURCE(3):Incomplete Recovery applied until change 1207394 time 02/12/2017 08:41:11
PDBSOURCE(3):Media Recovery Complete (Nuvola2)
PDBSOURCE(3):Zero unrecovered txns found while converting pdb(3) to shared undo mode, recovery not necessary
PDB PDBSOURCE(3) converted to shared undo mode, scn: 0x000000008a2f90c0
Applying media recovery for pdb-4099 from SCN 1207394 to SCN 1207446
PDBSOURCE(3):Media Recovery Start
PDBSOURCE(3):Serial Media Recovery started
PDBSOURCE(3):Media Recovery Log /u03/app/oracle/fast_recovery_area/NUVOLACG/foreign_archivelog/PDBSOURCE/2017_02_12/o1_mf_1_39_2572397703_.arc
PDBSOURCE(3):Incomplete Recovery applied until change 1207446 time 02/12/2017 08:41:19
PDBSOURCE(3):Media Recovery Complete (Nuvola2)
In my case I had a session open with active transactions (the third terminal I was using to perform DMLs), so I simply killed the session :)
SQL> alter system kill session '367,44360' immediate;
System altered.
After killing that session the new PDB opened successfully.
Open in Read-Write: After the copy phase the new PDB must be opened in read-write. If you try to open the new PDB in another mode right after the copy phase, you will get errors:
SQL> alter pluggable database pdbsource open read only;
alter pluggable database pdbsource open read only
*
ERROR at line 1:
ORA-65085: cannot open pluggable database in read-only mode
Another name for the new PDB: The name of the target PDB must be the same as the name of the source PDB. If you try to use another name you will get an error:
SQL> create pluggable database relocatedPDB from pdbsource@dblinktosource relocate availability max;
create pluggable database relocatedPDB from pdbsource@dblinktosource relocate availability max
*
ERROR at line 1:
ORA-65348: unable to create pluggable database
The following image can be useful:
Deadlock in the first "alter pluggable database open": The new PDB has to be opened twice, because the first open fails due to a bug (undocumented so far):
SQL> alter pluggable database pdbsource open;
alter pluggable database pdbsource open
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
An investigation shows that this deadlock is due to a row cache lock. I found some bugs already documented for 12.2 while opening a database; there is no workaround for any of them, only applying a patch. However, the bugs I found are not about PDB relocation. Here is an extract of the trace generated by the deadlock:
-------------------------------------------------------------------------------
Oracle session identified by:
{
instance: 1 (nuvola2.nuvola2)
os id: 18582
process id: 8, oracle@NuvolaDB (TNS V1-V3)
session id: 10
session serial #: 32155
pdb id: 3 (PDBSOURCE)
}
is waiting for 'row cache lock' with wait info:
{
p1: 'cache id'=0x0
p2: 'mode'=0x0
p3: 'request'=0x5
time in wait: 0.186033 sec
timeout after: never
wait id: 2670
blocking: 0 sessions
current sql: alter pluggable database pdbsource open
wait history:
* time between current wait and wait #1: 0.000327 sec
1.
event: 'db file sequential read'
time waited: 0.000260 sec
wait id: 2669
p1: 'file#'=0x1e
p2: 'block#'=0xeb2
p3: 'blocks'=0x1
* time between wait #1 and #2: 0.000824 sec
2.
event: 'db file sequential read'
time waited: 0.000235 sec
wait id: 2668
p1: 'file#'=0x1e
p2: 'block#'=0xbff
p3: 'blocks'=0x1
* time between wait #2 and #3: 0.002020 sec
3.
event: 'db file sequential read'
time waited: 0.000250 sec
wait id: 2667
p1: 'file#'=0x1e
p2: 'block#'=0xd4e
p3: 'blocks'=0x1
}
and is blocked by the session at the start of the chain.
-------------------------------------------------------------------------------
There is another "deadlock" plus "ORA-600" in Oracle 12.2 related to Local Undo and Shared Undo. If you want to read the workaround, see this article: ¿A bug in Local Undo mode in Oracle 12.2?
Closing the source PDB: I closed the source PDB right after the copy phase and before the relocation phase, just for fun, and I got an ORA-65020 ;). But don't do that...
SQL> alter pluggable database pdbsource open;
alter pluggable database pdbsource open
*
ERROR at line 1:
ORA-17628: Oracle error 65020 returned by remote Oracle server
ORA-65020: pluggable database already closed

Conclusion

Oracle Database 12.2 brings several new features for working with pluggable databases completely online, by taking advantage of redo data and locally generated undo data (Local Undo mode). It's time to relocate PDBs! Enjoy!
Follow me:
About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala and holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. He currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the "SELECT Journal Editor's Choice Award 2016". Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in Sao Paulo, Brazil.
Blog Post: Join the Oracle Scene Editorial Team
Are you a member of UKOUG? How would you like to join the editorial team of Oracle Scene magazine as a deputy editor? If you are interested, we are looking to recruit one deputy editor to cover the Applications area and two deputy editors to cover the Tech area (DBA, Developer, BA, etc.).
How much time is required? About 4 hours per edition, or maybe less.
What does a deputy editor do? As part of the editorial team you will be expected to take part in:
- Article Review: Articles submitted are uploaded to the review panel on Basecamp. During this time the editors should become familiar with the articles and have an idea of which ones would be appropriate for publication. Time: approx. 1.5 hrs over a 2-week period.
- Editorial Call: After the review period has closed, the editors come together for an editorial call (approx. 1 hr) to go through the feedback received on the articles. It is the editors' job to validate any comments and select which articles should be chosen for publication. Time: approx. 1 hr. Some articles may need further rework by the authors, and the editors provide comments and instructions on the amendments needed; in some cases the editors will take on the amendments themselves or, if they hold the relationship with the author, they may wish to approach them directly. If any articles have been held over from the previous edition, the editors will look at them again and advise if any of the content needs to be updated. If we do not have enough articles submitted at this stage, the editors may need to source some additional content.
- Editorial Review: Once the selected articles are edited, they are passed to the designer for layout. The editors then receive a first copy of the magazine, where they read the articles relevant to them (Apps or Tech), marking up on the PDF any errors found in the text or images. We try to build in time over a weekend for this, with comments due by 9am on the Monday. This is generally the last time the editors see the magazine, the next time being the digital version. Time: approx. 2 hrs.
- Promotion: When the digital version is ready to be sent out, the editors and review panel are notified to help raise awareness of the magazine among their network.
- Article Sourcing: The call for articles is open all year, as we simply hold anything submitted between planning timelines over to the next edition. If there are particular topics that we feel would make good articles, the editors are expected to help source potential authors and, of course, if they see good presentations, to encourage those speakers to turn their presentation into text.
- Flying the flag: Throughout the year the editors are expected to positively "fly the flag" of Oracle Scene. As a volunteer this includes, at the annual conference, taking part in the community networking to encourage future authors amongst the community.
If you are interested in a deputy editor role then submit your application now. Check out the UKOUG webpage for more details.
Blog Post: Oracle 12.2 is Now Available on E-delivery
We were waiting for Oracle Database 12cR2, and now the dream has come true. You can download Oracle Database 12cR2 Enterprise Edition from Oracle E-delivery here. Follow the steps. Enjoy, Osama
Blog Post: How to Create an Oracle Database Instance on Amazon RDS - Part 1
In the times we live in, seeing how technology advances and how everything is moving to the cloud made me think it was time to start getting into this subject, so I have had the opportunity to learn a bit about the services Amazon AWS offers, in particular the RDS service. In this article I want to share, step by step, how to get a test and study instance of the Oracle Database on Amazon RDS.
First we need to know what Amazon RDS (Relational Database Service) is. Amazon RDS is a web service that lets us set up, operate and scale a relational database in the cloud very easily. It provides cost-effective capacity that we can resize as needed and, at the same time, it handles the tedious database administration tasks, allowing us to focus on our applications and our business. Amazon RDS offers six popular database engines to choose from: Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle and Microsoft SQL Server.
A very important question to ask when moving to the cloud is: why do we want a managed database service? First of all, it takes care of the tedious tasks of managing a relational database. When we buy a server, what we get is CPU, memory, storage and IOPS (input/output operations per second) all bundled together. With Amazon RDS these components are separated so they can be scaled independently: if we need more CPU, fewer IOPS or more storage, we can easily allocate them. Another very important point is that Amazon RDS manages backups, software patching, automatic failure detection and recovery. Amazon RDS does not give us shell access to the DB instance, and it restricts access to certain system procedures and tables that require advanced privileges. We can take automated backups when we need them or create our own backup snapshots; these backups can be used to restore a database, and the Amazon RDS restore process works reliably and efficiently. We can get high availability with a primary instance and a synchronous secondary instance that can take over when problems or failures occur. We can also control who can access our RDS databases with AWS IAM, which lets us define users and permissions. With Amazon RDS we only pay for what we use, and there is no minimum or setup cost. (Later we will see how to estimate the cost of the service using the calculator Amazon provides.)
Setting up Amazon RDS
Amazon RDS is one of the 70 web services Amazon AWS offers. To use the RDS service we first need to create an AWS account. The interesting thing about this service is that with AWS we only pay for the individual services we need, for as long as we need them, with no long-term contracts or complex licensing. AWS pricing is similar to water or electricity rates: we only pay for what we consume and, once we cancel the service, there are no additional costs or cancellation fees.
Free Tier for Amazon RDS
The AWS Free Tier for Amazon RDS lets us use, free of charge, micro Single-AZ DB instances running MySQL, MariaDB, PostgreSQL, Oracle ("Bring-Your-Own-License" [BYOL] licensing model) and SQL Server Express Edition. The free usage tier includes 750 instance hours per month. Customers also receive 20 GB of database storage, 10 million I/Os and 20 GB of backup storage per month for free.
Configuration for Amazon RDS
To get an Oracle database instance, we have to carry out the following setup tasks for Amazon RDS:
1) Create an Amazon AWS account
2) Create an IAM user
3) Determine the requirements
4) Provide access to the database instance (Part 2 of this article)
1) Create an Amazon AWS account
Amazon Web Services uses the information from our Amazon.com account to identify us and give us access to Amazon Web Services. We create the sign-in credentials and click the "Create Account" button. We move to the Contact Information page, fill in the information and click "Create account and continue". We must then enter our payment information. It is worth noting that we will use the free tier, i.e. we get 750 free hours per month; beyond that, they will start charging for the service depending on what we are using. We click "Continue". Next we perform the identity verification: we click the "Call me now" button and receive an automated call, during which we enter, from our smartphone, the PIN displayed on the screen. Once the PIN has been entered, the identity verification completes and we then select a support plan. In my case I select the Basic Support Plan, which is included in the price. We click "Continue". Finally we reach the "Welcome to Amazon Web Services" page; to access the AWS Management Console we click "My Account". It will ask for the sign-in credentials again. We access the AWS Management Console. To reach the RDS service we can expand all services and then select RDS, which brings up the Amazon Relational Database Service start page.
2) Create an IAM user
Services in AWS, such as Amazon RDS, require us to provide credentials when we access them. Amazon recommends using IAM (Identity and Access Management) to access our AWS services. We will create an IAM user and then assign it administration permissions; afterwards we can sign in with the IAM user's credentials through a special URL. To create an IAM user we go to the IAM console: in the Services area we select IAM (https://console.aws.amazon.com/iam/), which takes us to the welcome page, where we are given a special URL containing numbers that we can customize with an alias, name or company to identify our account in the URL. We click Customize and set the alias we prefer for our account. As we can see on the IAM welcome page, there is a list of tasks we must complete so that they all show up in green.
Enable MFA (Multi-Factor Authentication) on our account
For extra security, Amazon recommends configuring Multi-Factor Authentication to protect the resources of our Amazon AWS account. MFA adds additional security because it requires users to enter a unique authentication code from an approved authentication device, or from an SMS text message, when they access AWS websites or services.
- Token-based security. This type of MFA requires assigning an MFA device (hardware or virtual) to the IAM user or the AWS root account. A virtual device is a software application, running on a phone or other mobile device, that emulates a physical device. Either way, the device generates a six-digit numeric code based on a time-synchronized one-time password algorithm. The user must enter a valid code from the device on a second web page during sign-in. Each MFA device assigned to a user must be unique; a user cannot enter a code from another user's device to authenticate.
- SMS text message-based. This type of MFA requires configuring the IAM user with the phone number of the user's SMS-capable mobile device. When the user signs in, AWS sends a six-digit numeric code by SMS text message to the user's mobile device and requires the user to enter that code on a second web page during sign-in. Note that SMS-based MFA is only available for IAM users; we cannot use this type of MFA with the AWS root account.
We click the arrow to expand the second item in the "Security Status" area and then click the "Manage MFA" button. A modal window opens where we select the type of MFA we want to use; in my case: A virtual MFA device. We click "Next Step". The next screen tells us we must install a compatible application on our device, whether a smartphone, PC or another device, and once it is installed we can continue the setup. To install the application we follow the link provided in the modal window: https://aws.amazon.com/es/iam/details/mfa/
Virtual MFA applications:
- Android: Google Authenticator; Authy 2-Factor Authentication
- iPhone: Google Authenticator
- Windows Phone: Authenticator
- Blackberry: Google Authenticator
In my case I use Android, so I will install the "Google Authenticator" application on my smartphone. The first thing we need to do is configure SMS text and voice messages.
Enable 2-Step Verification
By enabling 2-Step Verification (also known as two-factor authentication) we add an extra layer of security to our account. We sign in with something we know (our password) and something we have (a code sent to our phone). To set up 2-Step Verification: we go to the 2-Step Verification settings page (we may need to sign in to our Google account), select "Get started" and follow the step-by-step setup process. We select the verification type, SMS or call; in my case I choose SMS. Then we click "Try it", a 6-digit code is sent to our device, we enter it, and once we see that it works we turn on 2-Step Verification.
We go back to the "Manage MFA Device" page and click "Next Step". We take our smartphone, open the application we just installed and use "Scan a barcode"; with the camera we scan the barcode shown in the modal window, which adds the account, and we press DONE. Once the barcode has been scanned, the app shows 6 digits, which we enter in the Authentication Code 1 field; then the next 6 digits appear, which we enter in the Authentication Code 2 field. We click "Activate Virtual MFA" and a modal window tells us that our device has been associated successfully. We refresh the IAM page and see that the second task is now ticked in green.
Create an IAM user
Amazon advises against using the AWS root account for day-to-day interaction with AWS, since the root account provides unrestricted access to our AWS resources. We expand the arrow and click the "Manage Users" button, which takes us to the user administration page. To add a new user we click "Add user". This starts a 4-step wizard: in the first step we enter the user name and the access type, which in my case is access to the AWS Management Console; the password can be custom or auto-generated, and we choose whether the user must change the password at first sign-in. In step 2 we configure the user's permissions. Since we have no group created yet, we first need to create one, so we select the first option, "Add user to group", and click "Create group". Another modal window appears to create the group:
- Group name: Administrador
- Filter: we type admin and refresh to filter the existing policies.
- We tick the "AdministratorAccess" policy, which allows full access to all AWS resources.
We click "Create group", then "Next: Review", verify the information entered in step 3 and click "Create user". At this point we have completed four of the tasks in the Security Status area; let's finish the last one.
Apply an IAM password policy
We click the "Manage Password Policy" button. The Password Policy page is displayed, where we configure the password policies we want to use. We apply the password policies and return to the dashboard. Finally, all the tasks are marked in green.
3) Determine the requirements
In this section we need to establish the requirements for our database instance. There is plenty of information available on this subject; I will use a basic configuration for testing. You can find more information in this guide: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-ug.pdf
4) Provide access to the database instance
Amazon RDS supports DB instances running several versions and editions of the Oracle database. We can use the following versions and editions:
- Oracle 12c: version 12.1.0.2, version 12.1.0.1
- Oracle 11g: version 11.2.0.4, version 11.2.0.3, version 11.2.0.2
In the second part of this article we will create the Oracle database instance, so that we have it ready to learn and try out Oracle in the Amazon cloud.
Blog Post: How to Create an Oracle Database Instance on Amazon RDS - Part 2
In the first part of this article we saw how to create an Amazon AWS account to get access to the RDS service, and we created the IAM user. To sign in to the AWS management console, we go to the customized URL using the IAM user we created:
- Account: (our alias)
- User name: (the user we created as administrator in IAM)
- Password: the IAM user's password
We open the RDS service and it shows the "Amazon Relational Database Service" start page. To create an Oracle database instance we click the "Get Started Now" button. The page for selecting the database engine appears, and we select Oracle. Amazon offers several options:
- Oracle Standard Edition One
- Oracle Standard Edition Two
- Oracle Standard Edition
- Oracle Enterprise Edition
Both Oracle SE One and Oracle SE Two come with the database license included. In my case I select Oracle SE One. We select the Dev/Test instance type and click "Next Step". On the next page we select the database version, the instance type and the storage type, among other settings, as shown in the following image. For the database version we can choose 11 or 12; in my case we will use 11.2.0.4.v10.
Note: if we already own a database license, we can use it here so the instance qualifies for the free tier, by selecting "bring-your-own-license". In my case I will use license-included, which has costs associated with the use of the instance on Amazon RDS. To estimate the usage costs we can use the calculator Amazon provides at the following link; in it we can estimate the costs we will incur depending on what we select. For example, selecting the first, basic option:
Details: db.t2.micro
- Type: Micro Instance - Current Generation
- vCPU: 1 vCPU
- Memory: 1 GiB
- EBS Optimized: No
- Network Performance: Low
- Free Tier Eligible: Yes
The monthly cost would be around 25.62 dollars. Once we have selected the instance specifications, we must enter the following settings at the bottom of the page:
- DB Instance Identifier: we must specify a name that is unique across all DB instances owned by our AWS account in the current region. The DB instance identifier is case insensitive, but it is stored entirely in lowercase, for example "mydbinstance".
- Master Username: we must specify an alphanumeric string that defines the login ID for the master user. We use the master user login to start defining all users, objects and permissions in the databases of our DB instance. The master user name must begin with a letter, for example "awsuser".
- Master Password: we must specify a string that defines the password for the master user. The master password must be at least eight characters long, for example "mypassword".
- Confirm Password: we repeat the password entered above.
We click "Next Step" and move on to the Advanced Settings page, where we select options according to our needs.
For Publicly Accessible we select Yes if we want EC2 instances and devices outside the VPC hosting the DB instance to be able to connect to it. If we select No, Amazon RDS will not assign a public IP address to the DB instance, and no EC2 instance or device outside the VPC will be able to connect. If we select Yes, we must also select one or more VPC security groups that specify which EC2 instances and devices can connect to the DB instance.
For VPC Security Group(s) we select the group or groups that contain the security rules authorizing connections from all EC2 instances and devices that need access to the data stored in the DB instance. By default, security groups do not authorize any connection; we must specify rules for every instance and device that will connect to the DB instance. We also configure backups and the other options available at the bottom of the page. We scroll to the end of the page and click "Launch DB Instance". The instance creation process takes a few minutes; from our Amazon RDS dashboard we can watch the status of the instance.
Create a VPC security group
We open our Amazon RDS dashboard, where we can see our instance's settings. If "Supported Platforms" shows VPC, as in the screenshot below, our AWS account in the current region uses the EC2-VPC platform and has a default VPC; the name of the default VPC is shown below the supported platform. To provide access to a DB instance created on the EC2-VPC platform, we must create a VPC security group. Since we already have a default VPC, we go straight to creating a VPC security group. To keep our Amazon RDS DB instance private, we need to create a second security group for private access. From the Amazon AWS management console we open the VPC service, or go to the following link: https://console.aws.amazon.com/vpc/. We select Security Groups. The Security Groups page appears and we click "Create Security Group":
- Name tag: customname-db-securitygroup
- Group name: customname-db-securitygroup
- Description: Custom Name DB Instance Security Group
- VPC: we select the VPC that was created by default.
We click "Yes, Create".
Add inbound rules to the security group
From our AWS Management Console page we open Amazon VPC. The Amazon VPC start page appears; there we select "Security Groups" and choose the "customname-db-securitygroup" security group we created in the previous procedure. At the bottom we select the "Inbound Rules" tab and choose "Edit". We set the following values:
- Type: Oracle (1521)
- Source: the identifier of the customname-db-securitygroup security group we created earlier, for example sg-xxxxxxxxxxx
We click "Save". A security group acts like a firewall, defining which inbound and outbound connections are allowed. The default security group is configured not to allow any inbound connections from the outside.
That means that if we do not connect from a virtual server inside the VPC but from our own PC, which is what we are going to do, the connection will be refused. So we edit the default security group and, on the "Inbound" tab, click "Edit" and add the following rule:
- Type: Custom TCP Rule
- Protocol: TCP
- Port Range: 1521
- Source: Anywhere
We click "Save".
Connect to our DB instance running the Oracle database engine
Once Amazon RDS has provisioned our DB instance, we can use any standard SQL client application to connect to it. In our example we will connect to the database using Oracle's command-line utility, SQL*Plus. We open the Amazon RDS console and select the instance. In the instance row we click the arrow to expand it and access the instance information. The Endpoint field contains part of the connection information for our DB instance. It has two parts separated by a colon (:): the part before the colon is the DNS name of the instance, and the part after the colon is the port. On my laptop I have SQL*Plus installed, since I run an Oracle Express Edition instance for testing and for the tutorials I write. Using SQL*Plus, we open a DOS command window and enter the following connect string:
PROMPT>sqlplus USER/PASSWORD@LONGER-THAN-63-CHARS-RDS-ENDPOINT-HERE:1521/DATABASE_IDENTIFIER
For example:
PROMPT>sqlplus user/password@orcl.xxxxxxxx.us-west-2.rds.amazonaws.com:1521/orcl
As we can see, we have connected from our PC to our database instance on Amazon RDS.
Connect to the Oracle DB instance from SQL Developer
We can also use SQL Developer to connect to the instance. We open the tool, create a new connection, enter the instance endpoint as the Hostname, and connect. We can see that we have no objects created in our database yet. We can create a sample table:
CREATE TABLE prueba (id number, Nombre varchar2(100), Cargo varchar2(100));
We refresh and can see the table we created. In this very simple way we have an Oracle instance available in the Amazon cloud to learn and practice with.
Options for Oracle DB instances
Oracle Application Express is considered a feature of the Oracle database, so if we want to use it we need to enable it on our Amazon RDS instance. These are the versions supported on Amazon RDS:
- Oracle 11g: Oracle APEX version 4.1.1, Oracle APEX Listener 1.1.4
- Oracle 12c: Oracle APEX version 4.2.6, Oracle REST Data Services (ORDS) (the APEX listener)
Amazon RDS does not yet offer the latest version of APEX, as we can see in the documentation, but since the services are continually being updated it probably will not be long before the latest APEX version is available; we just have to wait. :)
Blog Post: Enhanced Whitelist Management in 12.2
Way back in Oracle Database 12c Release 1, the PL/SQL team added whitelisting to the language. This means you can use the ACCESSIBLE BY clause to specify the "white list" of program units that are allowed to invoke another program unit (schema-level procedure, function, package). For details on the 12.1 ACCESSIBLE BY feature set, check out my Oracle Magazine article, When Packages Need to Lose Weight. In that article, I step through the process of breaking up a large package body into "sub" packages whose access is restricted through use of the ACCESSIBLE BY feature. I'll wait while you read the article. Tick, tock, tick, tock.... OK, all caught up now? Great! In 12.2, there are two enhancements:
1. You can now specify whitelisting for a subprogram within a package. This is a very nice fine-tuning and is sure to come in handy.
2. You can specify the "unit kind" (program unit type) of the whitelisted program unit. This is useful when you have a trigger with the same name as a function, procedure or package (they do not, unfortunately, share the same namespace) and you need to distinguish which you want to include in the white list. Chances are, this will not be an issue for you, assuming you follow some common-sense naming conventions for your program units.
Let's go exploring with code - all of which can be executed at LiveSQL. First, I create a package spec and body that demonstrate the new functionality. I use ACCESSIBLE BY not at the package level, but with individual subprograms. Notice that the first two usages include the unit kind (PROCEDURE and TRIGGER). The third usage does not include a unit kind. And the fourth usage tries to specify a packaged subprogram for whitelisting. I say "tries" because as you will soon see, that's not supported.

CREATE TABLE my_data (n NUMBER);

CREATE OR REPLACE PACKAGE pkg AUTHID DEFINER
IS
   PROCEDURE do_this;
   PROCEDURE this_for_proc_only ACCESSIBLE BY (PROCEDURE generic_name);
   PROCEDURE this_for_trigger_only ACCESSIBLE BY (TRIGGER generic_name);
   PROCEDURE this_for_any_generic_name ACCESSIBLE BY (generic_name);
   PROCEDURE this_for_pkgd_proc1_only ACCESSIBLE BY (PROCEDURE pkg1.myproc1);
END;
/

Package created.

CREATE OR REPLACE PACKAGE BODY pkg
IS
   PROCEDURE do_this IS BEGIN NULL; END;
   PROCEDURE this_for_proc_only ACCESSIBLE BY (PROCEDURE generic_name) IS BEGIN NULL; END;
   PROCEDURE this_for_trigger_only ACCESSIBLE BY (TRIGGER generic_name) IS BEGIN NULL; END;
   PROCEDURE this_for_any_generic_name ACCESSIBLE BY (generic_name) IS BEGIN NULL; END;
   PROCEDURE this_for_pkgd_proc1_only ACCESSIBLE BY (PROCEDURE pkg1.myproc1) IS BEGIN NULL; END;
END;
/

Package Body created.

So I now try to compile a trigger that calls the "trigger-only" procedure, and that works just fine. But if I try to use the "procedure-only" procedure, I get a compilation error.

CREATE OR REPLACE TRIGGER generic_name
   BEFORE INSERT ON my_data FOR EACH ROW
DECLARE
BEGIN
   pkg.this_for_trigger_only;
END;
/

Trigger created.

CREATE OR REPLACE TRIGGER generic_name
   BEFORE INSERT ON my_data FOR EACH ROW
DECLARE
BEGIN
   pkg.this_for_proc_only;
END;
/

PLS-00904: insufficient privilege to access object THIS_FOR_PROC_ONLY

Now I show the same thing for a procedure: it can't call the trigger-only version, but it can invoke the procedure-only subprogram.

CREATE OR REPLACE PROCEDURE generic_name AUTHID DEFINER
IS
BEGIN
   pkg.this_for_proc_only;
END;
/

Procedure created.
CREATE OR REPLACE PROCEDURE generic_name AUTHID DEFINER
IS
BEGIN
   pkg.this_for_trigger_only;
END;
/

PLS-00904: insufficient privilege to access object THIS_FOR_TRIGGER_ONLY

And now you can see that both the trigger and the procedure can invoke the subprogram that did not include a "unit kind."

CREATE OR REPLACE TRIGGER generic_name
   BEFORE INSERT ON my_data FOR EACH ROW
DECLARE
BEGIN
   pkg.this_for_any_generic_name;
END;
/

Trigger created.

CREATE OR REPLACE PROCEDURE generic_name AUTHID DEFINER
IS
BEGIN
   pkg.this_for_any_generic_name;
END;
/

Procedure created.

Finally, I try to invoke the subprogram whose ACCESSIBLE BY clause specified "(PROCEDURE pkg1.myproc1)". Unfortunately, this is not yet supported. You can only list program units, not subprograms, in the list. So while the package named "pkg" compiles, you will find it impossible to execute that subprogram from anywhere.

CREATE OR REPLACE PACKAGE pkg1 AUTHID DEFINER
IS
   PROCEDURE myproc1;
END;
/

Package created.

CREATE OR REPLACE PACKAGE BODY pkg1
IS
   PROCEDURE myproc1 IS BEGIN pkg.this_for_pkgd_proc1_only; END;
END;
/

PLS-00904: insufficient privilege to access object THIS_FOR_PKGD_PROC1_ONLY
Blog Post: Speaking at the Annual Hotsos Symposium
I'll be speaking at the annual Hotsos Symposium in Dallas, TX, February 27th and 28th. To quote their website: Hotsos is the best conference in the Americas devoted to Oracle system performance. The combination of topic focus, small audience size, and large speaker list make the Hotsos Conference an unsurpassed networking opportunity. The Hotsos Symposium reaches for a new technical high with presentations focused on techniques, experimental results, and field-tested scripts that attendees can take home and apply to real-world tasks. The speakers routinely offer an excellent balance between database principles and problem-solving techniques. The speakers also show you how to determine the difference between reliable information and bad advice around Oracle performance optimization.
Here are my presentation topics, dates and times:
Optimizing Data Warehouse AdHoc Queries against Star Schemas
Monday, February 27, 2017, 1:00pm to 2:00pm CST, Hall B & Virtual Hall B
Attendees will learn optimal techniques for designing, monitoring and tuning "Star Schema" data warehouses in Oracle 11g and 12c. While there are numerous books and papers on data warehousing with Oracle, they generally provide a 50,000-foot overview focusing on hardware and software architectures, with some database design. This presentation provides the ground-level, detailed recipe for successfully querying tables whose sizes exceed a billion rows. Issues covered will include table and index designs, partitioning options, statistics and histograms, Oracle initialization parameters and star transformation explain plans. Attendees should be DBAs familiar with "Star Schema" database designs who have experience with Oracle 11g and some exposure to Oracle 12c.
Successful Dimensional Modeling of Very Large Data Warehouses
Tuesday, February 28, 2017, 1:00pm to 2:00pm CST, Hall B & Virtual Hall B
Attendees will learn successful techniques for dimensional modeling of very large data warehouses using traditional entity-relationship diagramming tools. While there are numerous new modeling conventions and tools, entity-relationship modeling tools have proven best at real-world database design and implementation. This presentation provides the ground-level, detailed recipe for the optimal dimensional modeling of tables whose sizes exceed 500 million rows. Issues covered will include "Star Schemas", fact and dimension tables, plus aggregate table design and population. Attendees should be DBAs or senior developers familiar with Oracle database design and any form of data modeling.
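One of the topics mentioned above, star transformation, is easy to experiment with on your own. Below is a minimal, hypothetical sketch; the sales_fact, date_dim and store_dim tables and the index names are invented for illustration and are not taken from the presentation:

-- A tiny star schema: a fact table with bitmap indexes on its dimension keys
CREATE TABLE date_dim   (date_id  NUMBER PRIMARY KEY, fiscal_year NUMBER);
CREATE TABLE store_dim  (store_id NUMBER PRIMARY KEY, region VARCHAR2(30));
CREATE TABLE sales_fact (date_id  NUMBER, store_id NUMBER, amount NUMBER);
CREATE BITMAP INDEX sales_fact_date_bix  ON sales_fact (date_id);
CREATE BITMAP INDEX sales_fact_store_bix ON sales_fact (store_id);

-- Enable star transformation for the session and inspect the resulting plan
ALTER SESSION SET star_transformation_enabled = 'TRUE';
EXPLAIN PLAN FOR
  SELECT d.fiscal_year, s.region, SUM(f.amount)
    FROM sales_fact f, date_dim d, store_dim s
   WHERE f.date_id  = d.date_id
     AND f.store_id = s.store_id
     AND d.fiscal_year = 2017
     AND s.region = 'WEST'
   GROUP BY d.fiscal_year, s.region;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);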
Blog Post: Speaking at the Annual IOUG Collaborate Conference
I'll be speaking at the annual IOUG Collaborate Conference in Las Vegas, April 3rd and 5th. To quote their website: The Independent Oracle Users Group (IOUG), the Oracle Applications Users Group (OAUG) and Quest International Users Group (Quest) present COLLABORATE 17: Technology and Applications Forum for the Oracle Community. As an educational conference, COLLABORATE 17 helps users of the full family of Oracle business applications and database software gain greater value from their Oracle investments. Created by and for customers, COLLABORATE 17 offers an expert blend of customer-to-customer interaction and insights from technology visionaries and Oracle strategists. Expand your network of contacts by interacting with Oracle customers, solutions providers, consultants, developers and representatives from Oracle Corporation at COLLABORATE 17.
Here are my presentation topics, dates and times:
Oracle Utilities as Vital as Ever
Apr 3, 2017, 12:00 PM–12:30 PM, Palm D, Session ID: 762
Why are books like "Advanced Oracle Utilities: The Definitive Reference" imperative for today's DBAs and senior developers? The answer: Oracle has become so robust and complex that many new features are implemented not as SQL commands, but as PL/SQL packages and/or standalone utilities. Thus, to best leverage all that Oracle has to offer, one must know all those non-SQL command interfaces for those new features. This presentation will demonstrate a few critical examples of such features and their usage. The goal is to inspire attendees to realize that SQL alone is no longer sufficient - that there are many other commands and/or PL/SQL packages to master.
Productivity Enhancing SQL*Plus Scripting Techniques
Apr 5, 2017, 12:00 PM–12:30 PM, Palm D, Session ID: 759
No matter what other Oracle or 3rd-party database tools Oracle DBAs and developers might embrace, SQL*Plus remains a mainstay. However, many don't realize its true efficiency potential because they have not mastered advanced techniques such as Dynamic SQL Scripting (i.e. scripts that generate and run scripts). Oracle ACE Bert Scalzo will demonstrate such techniques and provide example scripts to increase any SQL*Plus user's productivity. Furthermore, some useful, advanced Unix/Linux shell scripting techniques will be highlighted as well.
Flash Based Methods to Speed Up DW & BI Oracle Databases
Apr 5, 2017, 11:00 AM–12:00 PM, Jasmine D, Session ID: 761
DW & BI databases are growing ever bigger while analytical users continue demanding fast performance. While Exadata may be a great solution, not everyone can afford such a capital investment. Oracle ACE Bert Scalzo will highlight simple yet effective methods of using flash-based storage technology to significantly improve performance with the least cost, effort and interruption. In some cases it is possible to get near-Exadata performance without as many zeros added to the cost.
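As a small taste of the Dynamic SQL Scripting technique mentioned above (a SQL*Plus script that generates and then runs another script), here is a minimal sketch; it is not taken from the presentation, and the generated file name is arbitrary:

-- Generate a script that gathers optimizer statistics for every table in the
-- current schema, then run the generated script.
SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF TRIMSPOOL ON
SPOOL gather_all.sql
SELECT 'EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, ''' || table_name || ''');'
  FROM user_tables
 ORDER BY table_name;
SPOOL OFF
SET PAGESIZE 14 FEEDBACK ON
@gather_all.sql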
↧
Blog Post: Create Blackout for Multiple Targets on EM13c
The notification emails for a “planned” downtime are one of the annoying things about monitoring systems. You forget to create a blackout, you start a maintenance work, and you get lots of notification mails. Most of the time, the target that goes down also affects other targets, so it multiplies the number of unwanted notification mails. The good thing is, it’s very easy to create blackouts on EM13c. We can do it through the web console, emctl or the EMCLI tool. A one-liner is enough:

emctl start blackout BlackOutName -nodeLevel

The above command creates a blackout for all targets on the server. We can achieve the same thing with EMCLI:

emcli create_blackout -name="BlackOutName" -reason="you know we need to do it" -add_targets="myserver:host" -propagate_targets -schedule="duration:-1" # indefinite

If we use EMCLI, we have more options, such as creating repeating blackouts, entering a reason for the blackout, and enabling a blackout for a group of targets (which may reside on different hosts). What if we need to create a blackout for multiple targets? As I mentioned, EMCLI can be used to create blackouts for groups: we can create groups on EM13c and, instead of passing the names of all targets in a group, give just the group name. Another option is a small EMCLI script that reads the targets from a file:

#
# Sample EMCLI Python script file to create blackout for multiple targets
#

# check number of arguments
if len(sys.argv) <> 2:
    print "Usage to start a blackout: emcli @multiblackout.py targets.csv blackout_name"
    print "Usage to stop a blackout: emcli @multiblackout.py stop blackout_name"
    exit()

blackout_name = sys.argv[1].upper()

# login to OMS
login( username="SYSMAN", password="yoursupersecretpassword" )

if sys.argv[0].lower() == "stop":
    stop_blackout( name = blackout_name )
    # comment below line to keep old blackouts
    delete_blackout( name = blackout_name )
    print "%s blackout stopped and deleted." % blackout_name
    exit()

# open file for reading
f = open( sys.argv[0], 'r' )

# variable to keep all targets
alltargets = ""

# loop for each line of the file
for target in f:
    # build alltargets string
    alltargets += target.replace("\n","") + ";"

create_blackout( name = blackout_name, add_targets=alltargets, reason=blackout_name, schedule="duration:-1" )

The script accepts two parameters: the first is the path of the file containing the targets, the second is the name of the blackout. The targets file should look like this:

DBNAME1:oracle_database
DBNAME2:oracle_database
MYHOST5:host

After you create a blackout, you can stop (and delete) it by running the script again, but this time you enter “stop” instead of the file name:

./emcli @multiblackout.py /home/oracle/mytargets.txt TESTBLACKOUT
./emcli @multiblackout.py stop TESTBLACKOUT

If you have any questions about the script, please do not hesitate to ask. By the way, I’m aware that the script lacks error handling and could be written more efficiently, but I’m not trying to provide a script library to you; I’m sharing a simple version so you can write your own (and better) script.
↧
Blog Post: ORA-12537: TNS:connection closed – Oracle 12cR1 RAC
We performed an EBS R12.2 RAC-to-RAC clone, and after successful completion of the cloning we were not able to connect to the database using sqlplus; it was giving the error below:

[oracle@racnode1 12.1.0]$ sqlplus apps@EBSRAC1
SQL*Plus: Release 12.1.0.2.0 Production on Tue Feb 14 08:25:38 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Enter password:
ERROR:
ORA-12537: TNS:connection closed

Solution: There is an issue with the permissions on the oracle binary, and it's not allowing connections.

[oracle@racnode1 12.1.0]$ ls -lrt bin/oracle
-rwxr-x--x. 1 oracle oinstall 324002305 Dec 19 13:34 bin/oracle

Change the permissions as shown below (restoring the setuid/setgid bits) and it will work:

[oracle@racnode1 12.1.0]$ chmod 6751 bin/oracle
[oracle@racnode1 12.1.0]$ ls -lrt bin/oracle
-rwsr-s--x. 1 oracle oinstall 324002305 Dec 19 13:34 bin/oracle
[oracle@racnode1 12.1.0]$

Thanks for reading.
regards,
X A H E E R
↧
Blog Post: My Upcoming Speaker Events for H1 2017
I'm very excited to share with you that I've been accepted to speak at two Oracle events that will take place in the first half of 2017: OUGN and IOUG Collaborate.
OUGN 17
This year the Oracle User Group Norway event will take place from 8-11 February in Norway. The title of my presentation is "Winning Performance Challenges in Oracle Multitenant Architecture". In this session we will cover how to effectively monitor and diagnose performance issues in Oracle Multitenant environments and, if needed, implement resource management plans to ensure high QoS (Quality of Service) for the pluggable databases. Here is a link to my session: https://guidebook.com/guide/85471/event/15235506/
IOUG Collaborate 17
This year the IOUG Collaborate event will take place from 2-6 April in Las Vegas. The title of my presentation is "Database Consolidation Using the Oracle Multitenant Architecture". In this session we will explore the new Oracle Multitenant architecture as well as some tools and best practices that will help you consolidate your databases and ensure a high SLA for each pluggable database. I've already presented this session at IOUG COLLABORATE 16 and OOW 16, so I'm very happy to present this interesting topic again at IOUG Collaborate 17. Here is a link to my session: https://app.attendcollaborate.com/event/member/318780
Hope to see you there!
↧
Blog Post: Oracle Cloud - DBaaS - How-to Guide
by Ajith Narayanan

Introduction
This article explains the step-by-step approach for creating a database service using Oracle Cloud DBaaS, during which we will create the Storage Containers that can be used by the Database Cloud Service for backup purposes. We will also learn how to create the SSH keys used during the database creation steps. After the database has been created, you will be able to connect to the database image using the SSH private key and create a new user in the database instance.

Benefits of Oracle Cloud - DBaaS
For any business application, the database is the core part of the tech stack. In any product development phase, database provisioning (creation, refreshing, cloning and configuration) is a time-consuming task. DBaaS is gaining more traction these days because it enables businesses to deploy new databases quickly, securely, and cheaply.

SSH Public and Private Key Creation

Step 1: Create a new SSH public and private key
The assumption is that you have already registered for an Oracle Cloud account and have the login information needed to connect to your Oracle Public Cloud domain. Now we will create a new SSH public and private key pair. Run the secure shell keygen command to create a new key called MyKey by entering the following command into the terminal window:

ssh-keygen -b 2048 -t rsa -f MyKey

When prompted to enter a passphrase, hit return twice. This removes the need for a passphrase (password) when the private key is used in later steps. Entering a passphrase would provide more security when sharing the private key, but for the purposes of this article we will keep it simple.

ubuntu@ajithn:~$ ssh-keygen -b 2048 -t rsa -f MyKey
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in MyKey.
Your public key has been saved in MyKey.pub.
The key fingerprint is:
e8:7a:f9:a5:d1:5e:5d:4c:39:47:ef:c1:e8:55:a0:1e ubuntu@ajithn
The key's randomart image is:
+--[ RSA 2048]----+
| ..o|
| .o.+|
| E. *+|
| . ....++|
| . S .. +|
| . . . . |
| ... o . . |
| .o = . |
| .. .o . |
+-----------------+
ubuntu@ajithn:~$

Account Setup Steps

Storage Setup

Step 2: Login to your Oracle Cloud account
Go to https://cloud.oracle.com and click Sign In in the upper right hand corner of the browser. Under Cloud Account, select Public Cloud Services - EMEA (or any region close to your location) from the drop-down list and click on My Services. Provide your Service Account information, and click Sign In.
NOTE: The User Name, Identity Domain and Password values would have been sent via email after you registered for Oracle Cloud. Enter the identity domain name, followed by the username and password. On your first login, you can change the default password to one of your choice and answer a few security questions, as shown below. After the password change and security questions, you will be presented with a dashboard displaying the various cloud services available to this account.

Step 3: Create Storage Containers
For Database Cloud Service, the use of a storage container for backups is required. Before running through the Create Instance wizard you will first need to create storage containers to be used by the Cloud Services. You can also perform this step using REST APIs (not explained in this article). On the left pane, select the Oracle Storage Cloud Service option as shown below.
After reading the replication policy, select the region for the replication policy (in my case, I selected Chicago (us2)). Once the replication region is selected and the Set Policy button is clicked, we are all set to create new storage containers as shown below. Click on the Create Container button and create a storage container named AlphaDBCS_SC as shown below. The newly created storage container will be added to the list of storage containers immediately, as shown below.

Database Cloud Service Setup

Step 4: Create Database Cloud Service
If you do not already have a browser tab open and pointing to cloud.oracle.com, open a second tab and enter the following URL: http://cloud.oracle.com
Click Sign In in the upper right hand corner of the browser window.
Enter your Username, Password and Identity Domain, then click Sign-In.
Under My Services, change the Data Center to Public Cloud Services and click on Sign In to My Services.
From the Cloud Dashboard, click Open Service Console for the Database Cloud Service.
On the left panel, select Oracle Database Cloud Service as shown in the screenshot below.
Click on Create Service.
For Subscription Type, Software Release and Software Edition keep the defaults and click Next.
Select Oracle Database Cloud Service for Subscription Type.
Select Monthly as the metering frequency and click Next.
Select Oracle Database 12c Release 1 for Software Release and click Next.
Select Enterprise Edition for Software Edition and click Next.

Step 5: Identify the Instance Configuration
This next step is very important when provisioning or creating a service instance. In this step you will shape the service and provide an identity.
INSTANCE CONFIGURATION: When providing a name, please note you might already have another service instance created in your account, so the name must be unique.
Enter the following for Instance Configuration:
DB Name (SID) = ORCL
PDB Name = AlphaDBCS
Click on the Edit button to browse for the MyKey.pub file.
Enter the following values for Database Configuration Administrator:
Administration Password =
Enter the following values for Backup and Recovery Configuration:
Backup Destination = Both Cloud Storage and Block Storage
Cloud Storage Container = Storage- /AlphaDBCS_SC
Cloud Storage User Name =
Cloud Storage Password =
Notes about the information entered: The SSH Public Key will use the public key you created using the ssh-keygen command. The Administration Password in the Database Configuration section will be used for the Oracle “sys” and “system” users.
Confirm everything is correct and click Create. You should now see your new Database Cloud Service instance AlphaDBCS. To monitor the progress, click on the In Progress link and view the current status. After about 45 minutes the instance creation should complete. Click Refresh, and when the VM Status is no longer displayed, your instance is ready.

Step 6: Connect to the newly created database
After the database is successfully created, you will need to load some pre-created data into the alpha schema. Click on the service instance named AlphaDBCS to view its details and copy down the IP address. We will now SSH to the AlphaDBCS VM image:

ssh -i ./lab/MyKey oracle@

If you are on Windows, you may use the PuTTY utility to connect to the VM.
We will now create the alpha schema and grant rights by entering the following commands:

sqlplus system/ @PDB1
create user alpha identified by oracle;
grant connect,dba to alpha;
exit

Conclusion
In this article, you learned how to provision a cloud database using the Oracle Cloud DBaaS option. Oracle Cloud DBaaS gives us the capability of quickly provisioning databases, with enhanced security and centralized management of all the databases in your organization.

About the Author
Ajith Narayanan is currently the Chief Technology Officer (CTO) of InfraStack-Labs, Bangalore, India, with a total of 14 years of work experience as an Oracle DBA, Oracle Apps DBA, Oracle ERP Platform Architect and OpenStack Cloud Architect, and expertise in infrastructure architecture, capacity planning and performance tuning of mid-size to large Oracle and OpenStack Cloud environments.
Oracle ACE Associate
Oracle RAC SIG Board Member
Author of "Oracle SQL Developer"
https://infrastack-labs.com
https://omegha.infrastack-labs.com
https://oracledbascriptsfromajith.blogspot.in
HAPPY LEARNING!
↧
Blog Post: The new Statement of Direction for APEX 5.2 has just been released!!!
David Peake, Product Manager for Application Express, shared the new Statement of Direction for APEX 5.2. These are the improvements that are planned!!!
Oracle Application Express 5.2
Oracle Application Express 5.2 will focus on both new features and enhancements to existing functionality, and is planned to incorporate the following:
Remote SQL Data Access ‐ Extend common components such as Interactive Grids, Classic Reports and Charts to interface with data stored in remote databases using ORDS and REST.
REST Service Consumption ‐ Provide declarative methods to define references to external REST APIs and generic JSON data feeds and to use these references as data sources for Interactive Grids, Reports and Forms.
Declarative App Features ‐ Introduce a new Create App Wizard that allows for adding components and app features to new and existing applications.
Interactive Grid enhancements ‐ Add additional reporting capabilities such as Group-By and Pivot, support for subscriptions and computations, flexible row height and general UI improvements.
Page Designer ‐ Provide client-side wizards for the creation of more complex components like Dynamic Actions, Forms and Shared Components; user interface improvements.
Upgrade Oracle JET and jQuery ‐ Adopt the most recent versions of the integrated JavaScript libraries to take advantage of new Data Visualizations such as Gantt charts and new Form widgets and controls.
New REST Workshop ‐ Provide declarative methods to support the development of ORDS-enabled REST web services, taking advantage of the latest features and functionality of ORDS.
Packaged Applications ‐ Improved framework and enhancements to the packaged applications.
You can download the Statement of Direction from this link.
↧
Blog Post: Here's a great way to put an infinite loop into your code.
Isn't that something you always wanted to do? :-) No, it's not. And I did that yesterday in my dev environment (well, of course, such a thing could never make it to production!). It is an enormous pain. You press the Run button. The process doesn't return in the usual 2 seconds. You think back over the changes you just made and feel sweat break out on your forehead. Because you can see right away what you did and.....oh, how could I be so stupid? Well, not stupid. Just in too much of a hurry. And careless. And over-confident. And thinking about too many things at once. You know, the sorts of things "gurus" do all the time as a way of maintaining their high level of excellence to show to the world. :-(

So yes, I did this yesterday, and I thought I'd share my mistake with you to hopefully help you avoid doing the same thing in the future. I am writing a program to automatically generate workouts for the Oracle Dev Gym (which will soon take over from the PL/SQL Challenge as an "expertise through exercise" learning platform). I am relying heavily on collections (PL/SQL arrays). Now, I don't know about all of you, but I often go through several iterations of the design of those collections: Use a collection of IDs? No, a collection of records. Use an integer-indexed array? Hmmm, no, wait, maybe it should be string-indexed...? Oh, here's a great opportunity to use a nested collection! And so on. It's all great fun, and the end result is usually less code and a cleaner algorithm. But along the way, it's kind of messy.

In this particular instance of an infinite loop, I had started out with a nested table to hold comma-delimited lists of quiz IDs. This nested table was densely filled, and so my loop looked like this:

PROCEDURE create_workouts_for_sets (
   resource_in    IN ov.ov_resources_external%ROWTYPE,
   quiz_sets_in   IN quiz_sets_t)
IS
BEGIN
   FOR indx IN 1 .. quiz_sets_in.COUNT
   LOOP
      create_workout;
      parse_list (quiz_sets_in (indx), l_quizzes);
      load_workout_actitivies (l_quizzes);
   END LOOP;
END;

Except that I didn't actually create those nested subprograms (create_workout, etc.). Instead, the body of the loop contained all the logic and extended for 100+ lines of code (thereby violating one of my personal favorite best practices: keep your executable sections tiny and highly readable). This point will become important in a moment.

OK, so as I built more of the algorithm, I realized that I needed to make sure I wasn't generating multiple workouts with the same list of quizzes. How to check for duplication? I suppose I could compare those comma-delimited lists....but, wait! Why am I creating a comma-delimited list to begin with? Why not have a collection of the selected quizzes? And, another brainstorm: why not use that comma-delimited list instead as the index for the array? Then it is transparently easy to tell if there is duplication: does an element exist at that location in my now string-indexed array? That sounds like fun!
So I switched to a collection of records indexed by string (associative array):

SUBTYPE quiz_list_index_t IS VARCHAR2 (4000);

TYPE quiz_set_rt IS RECORD (
   maximum_time    INTEGER,
   difficulty_id   INTEGER,
   quizzes         numbers_nt
);

TYPE quiz_sets_t IS TABLE OF quiz_set_rt
   INDEX BY quiz_list_index_t;

Then I changed the loop as follows:

PROCEDURE create_workouts_for_sets (
   resource_in    IN ov.ov_resources_external%ROWTYPE,
   quiz_sets_in   IN quiz_sets_t)
IS
   l_index   quiz_list_index_t := quiz_sets_in.FIRST;
BEGIN
   WHILE l_index IS NOT NULL
   LOOP
      create_workout;
      parse_list (quiz_sets_in (l_index), l_quizzes);
      load_workout_actitivies (l_quizzes);
   END LOOP;
END;

And then, after making a whole bunch more edits and getting the package to compile, I decided to try it out. I executed the parent procedure of create_workouts_for_sets....and it disappeared into NeverLand, never to return.

Can you see the problem? Hopefully it was instantly clear for you, since the executable section above is so small: I never change the value of l_index. Now that, dear friends, is one tight little infinite loop, right there. In my program, however, because I had not yet refactored the 120-line body into nested subprograms, the END LOOP was "off the page", out of view, and therefore out of thought. I needed to move on to the next-defined element in the collection, as follows:

PROCEDURE create_workouts_for_sets (
   resource_in    IN ov.ov_resources_external%ROWTYPE,
   quiz_sets_in   IN quiz_sets_t)
IS
   l_index   quiz_list_index_t := quiz_sets_in.FIRST;
BEGIN
   WHILE l_index IS NOT NULL
   LOOP
      create_workout;
      parse_list (quiz_sets_in (l_index), l_quizzes);
      load_workout_actitivies (l_quizzes);
      l_index := quiz_sets_in.NEXT (l_index);
   END LOOP;
END;

You saw that, right? If not, you see it now, correct? And that, readers, brings me to the point of this post: when you are switching from dense to sparse collections, you will also likely need to shift from a numeric FOR loop to a simple or WHILE loop to iterate through the collection. When you make that change, you must not only change the header of the loop, but also add the necessary code to cause loop termination. Or, as is often said in programming circles: D'oh!
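To see the pattern in isolation, here is a minimal, self-contained sketch of my own (the type, variable and key names are made up for illustration and are not from the post); run it in SQL*Plus with SERVEROUTPUT ON:

-- Minimal sketch of the FIRST/NEXT iteration pattern for a
-- string-indexed associative array (hypothetical names).
DECLARE
   TYPE counts_t IS TABLE OF PLS_INTEGER
      INDEX BY VARCHAR2 (100);

   l_counts   counts_t;
   l_index    VARCHAR2 (100);
BEGIN
   l_counts ('1,7,42') := 3;
   l_counts ('2,9')    := 2;
   l_counts ('5')      := 1;

   -- A numeric FOR loop cannot be used here: the index is a string
   -- and the collection is sparse. Walk it with FIRST and NEXT.
   l_index := l_counts.FIRST;

   WHILE l_index IS NOT NULL
   LOOP
      DBMS_OUTPUT.put_line (l_index || ' -> ' || l_counts (l_index));

      -- Without this line you re-create exactly the infinite loop
      -- described above.
      l_index := l_counts.NEXT (l_index);
   END LOOP;
END;
/

The last assignment in the loop body is the one that matters; remove it and the WHILE condition can never become false.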
↧
Blog Post: 12cR2 new features for Developers and DBAs - Here is my pick (Part 1)
Since the announcement of 12cR2 on-premises availability, the Oracle community has become energetic and busy tweeting and blogging about the new features and demonstrating installations and upgrades. Hence, I have decided to pick my favorite list of 12cR2 new features for Developers and DBAs. Here is the high-level summary, until I write a detailed post for each feature (excerpt from the Oracle 12cR2 new features document).
Command history for SQL*Plus: Pre-12cR2, this could be achieved through a workaround; now, the HISTORY command does the magic for you.
Materialized Views: Real-Time Materialized Views: Materialized views can be used for query rewrite even if they are not fully synchronized with the base tables and are considered stale. Using materialized view logs for delta computation together with the stale materialized view, the database can compute the query and return correct results in real time. For materialized views that can be used for query rewrite all of the time, with the accurate result being computed in real time, the result is optimized and fast query processing for best performance. This alleviates the stringent requirement of always having to have fresh materialized views for the best performance.
Materialized Views: Statement-Level Refresh: In addition to ON COMMIT and ON DEMAND refresh, materialized join views can be refreshed when a DML operation takes place, without the need to commit such a transaction. This is predominantly relevant for star schema deployments. The new ON STATEMENT refresh capability provides more flexibility to application developers to take advantage of materialized view rewrite, especially for complex transactions involving multiple DML statements. It offers built-in refresh capabilities that can replace customer-written trigger-based solutions, simplifying an application while offering higher performance.
Oracle Data Guard Database Compare: This new tool compares data blocks stored in an Oracle Data Guard primary database and its physical standby databases. Use this tool to find disk errors (such as lost writes) that cannot be detected by other tools like the DBVERIFY utility.
Subset Standby: A subset standby enables users of Oracle Multitenant to designate a subset of the pluggable databases (PDBs) in a multitenant container database (CDB) for replication to a standby database.
Automatically Synchronize Password Files in Oracle Data Guard Configurations: This feature automatically synchronizes password files across Oracle Data Guard configurations. When the passwords of SYS, SYSDG, and so on are changed, the password file at the primary database is updated and then the changes are propagated to all standby databases in the configuration.
Preserving Application Connections to an Active Data Guard Standby During Role Changes: Currently, when a role change occurs and an Active Data Guard standby becomes the primary, all read-only user connections are disconnected and must reconnect, losing their state information. This feature enables a role change to occur without disconnecting the read-only user connections. Instead, the read-only user connections experience a pause while the state of the standby database is changed to primary. Read-only user connections that use a service designed to run in both the primary and physical standby roles are maintained. Users connected through a physical standby only role continue to be disconnected.
Oracle Data Guard for Data Warehouses: The use of NOLOGGING for direct loads on a primary database has always been difficult to correct on an associated standby database. On a physical standby database the data blocks were marked unrecoverable, and any SQL operation that tried to read them would return an error. For a logical standby database, SQL apply would stop upon encountering the invalidation redo.
Rolling Back Redefinition: There is a new ROLLBACK parameter for the FINISH_REDEF_TABLE procedure that tracks DML on a newly redefined table so that changes can be easily synchronized with the original table using the SYNC_INTERIM_TABLE procedure. The new V$ONLINE_REDEF view displays runtime information related to the current redefinition procedure being executed, based on a redefinition session identifier.
Online Conversion of a Nonpartitioned Table to a Partitioned Table: Nonpartitioned tables can be converted to partitioned tables online. Indexes are maintained as part of this operation and can be partitioned as well. The conversion has no impact on ongoing DML operations.
Online SPLIT Partition and Subpartition: The partition maintenance operations SPLIT PARTITION and SPLIT SUBPARTITION can now be executed as online operations for heap-organized tables, allowing concurrent DML operations alongside the ongoing partition maintenance operation.
Online Table Move: Nonpartitioned tables can be moved as an online operation without blocking any concurrent DML operations. A table move operation now also supports automatic index maintenance as part of the move.
Oracle Database Sharding: Sharding in Oracle Database 12c Release 2 (12.2) is an architecture suitable for online transaction processing (OLTP) applications in which data is horizontally partitioned across multiple discrete Oracle databases, called shards, which share no hardware or software. The collection of shards is presented to an application as a single logical Oracle database.
A couple of the online maintenance features above are sketched right after this list. Stay tuned for Part 2..
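For a flavor of the syntax, here is a minimal sketch of two of the online maintenance features above, plus the new SQL*Plus history command. The table, column, tablespace and partition names are hypothetical, and the exact clauses should be verified against the 12.2 documentation:

-- New SQL*Plus command history (12.2): turn it on, then list what you have run.
SET HISTORY ON
HISTORY

-- Online move of a nonpartitioned table (hypothetical names); concurrent DML
-- is allowed and indexes are maintained automatically.
ALTER TABLE orders MOVE ONLINE TABLESPACE users;

-- Online conversion of a nonpartitioned table to a partitioned table
-- (hypothetical partitioning scheme); indexes are maintained as part of it.
ALTER TABLE orders
   MODIFY PARTITION BY RANGE (order_date)
   ( PARTITION p2016 VALUES LESS THAN (DATE '2017-01-01'),
     PARTITION p2017 VALUES LESS THAN (DATE '2018-01-01') )
   ONLINE;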
↧
Blog Post: Blog posts on Oracle Advanced Analytics features in 12.2
A couple of days ago Oracle finally provided us with an on-premises download for the Oracle 12.2 Database. Go and download it from here, or download the Database App Development VM with 12.2 (this is what I did). Over the past couple of months I've been using the DBaaS version of 12.2, trying out some of the new Advanced Analytics option features, along with other new features. Here are the links to my blog posts on these new 12.2 features; there will be more coming over the next few months.
New OAA features in Oracle 12.2 Database
Explicit Semantic Analysis in Oracle 12.2c Database
Explicit Semantic Analysis setup using SQL and PL/SQL
and, slightly related, the new SQL Developer 4.2 / Oracle Data Miner 4.2 New Features
↧
Blog Post: Database Administrators and the Cloud
There’s no fighting progress. Decades ago, database administrators managed and controlled everything: the OS, network, storage and database. Times have changed, and DBAs have had to evolve toward (i.e. accept) well-established principles of economics, namely specialization. Thus we have network administrators, storage administrators, virtualization administrators and database administrators. While it’s rarely very comforting to “give up control”, DBAs have done so, even if begrudgingly. So now we have “the cloud”. Once more things are evolving, and DBAs have to again embrace a new way of doing things. And as with each and every evolutionary step, fighting the inevitable is a losing battle. The planets have aligned, the gods have spoken, the train has left the station, or whatever saying you prefer: we all will be deploying more and more cloud databases going forward. So just go with the flow. Better yet, embrace it and “ride the wave”. So unless you’re really close to retirement and basically don’t care, you will need to “have your head in the clouds”. ;-) But just as with every other evolutionary step where DBAs were worried about job security, the cloud does not eliminate the need for database administrators. It merely requires them to focus on other key aspects of managing a database. So while something critical like backup and recovery may be simply a questionnaire during cloud database instantiation, the DBA still has to know what choices to make and why. In short, DBAs will be required to focus more on what vs. how. Moreover, since everything in the cloud has a monthly cost, DBAs will need to pay strict attention to capacity planning for storage and all other chargeable resources (e.g. CPU) in order to better assist management with controlling cloud costs. And as we all know, “money talks”. So the DBA is just as important as ever. :-)
↧
Blog Post: Lucky Breaks While Performance Tuning Oracle
Hi,
This article includes a couple of great tips: one using a hint I work with in my SQL Tuning class, and the other solving a performance issue with partitioning using the PL/SQL dynamic EXECUTE IMMEDIATE feature!
First, a little background. This assignment was one of my common 3-day performance tuning assignments. I can usually find the performance issues. I can usually solve the performance issues as well. Sometimes I'm just giving advice on how to solve the performance issue. Sometimes the clients can't change the source code. This is where flexibility becomes very important. This client had a VERY large Oracle partitioned database. At the time, it was Oracle9. They loaded 10 million rows an hour to this database! It could never be taken down. There was no test or QA database either. There were a rolling 365 partitions, no indexes, no stats. The analysts typically looked at either a 3-day or a 5-day window for their data reporting needs.
Day 1 of Client Performance Tuning
A Materialized View refresh was taking over 4 hours to run. I reviewed the code and associated Explain Plan. The SQL included a 2-table join, and the Explain Plan showed that the Cost-Based Optimizer (CBO) was selecting a Merge Join operation, as the row counts were in the hundreds of millions, particularly on the one side. My standard practice when I see a Merge Join operation is to add the USE_NL hint to get it to do a Nested Loop operation. Sometimes you also need the LEADING or ORDERED hint to tell the CBO in which order to process the tables in the FROM clause. I like the LEADING hint better because you specify the tables or table alias names in the hint and don't have to change the SQL text. The ORDERED hint is an older hint that says take the tables in the order found in the FROM clause. Usually I don't need either of these, because the CBO usually gets the tables in the right order (based on statistics).
Make sure to check out the Gather Plan Stats blog I submitted this month. It discusses the importance of looking at the actual row counts associated with each line of the execution plan, as opposed to the Explain Plan you asked for (using a variety of methods, including the Explain Plan button in Toad); the plan you asked for is not necessarily the one the SQL executed with. When working with problem SQL, you need the actual row counts from when the SQL executed. You can get these in a number of ways, including the method shown in the Gather Plan Stats blog and a SQL trace, which Toad can also do.
Without asking a bunch of questions, we added the hint. If the CBO takes the hint, one of two things will happen: performance will be much better, or performance will be much worse. I wrote a Toad World article a year ago that shows how you can tell if your hints were considered, if they contained errors, and if they were used. I didn't give the environment a lot of thought. I assumed they would be running a test. Nope. The refresh happens sometime after midnight each day, and this was live code! The next morning, the client was just thrilled because the MV took about an hour and a half to run, a significant savings in time. I commented on putting it into production. She said 'That was production...' Wow, news to me! That was my first Lucky Break on this site.
Day 2 of Client Performance Tuning
This day's task involved looking at why they were consistently getting Buffer Busy Waits wait events on their large data loads. The massive data loads were quite intrusive upon the system.
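Before digging into the waits, it helps to see how the SGA is currently carved up. The query below is my own minimal sketch, not from the original engagement, and the NAME/POOL values can vary slightly by release; the author's shared pool script appears a little further down:

-- Rough look at the default buffer cache and the free memory in each pool
-- (minimal sketch; adjust the NAME/POOL filters for your release).
column pool format a15
column name format a30
SELECT pool, name, ROUND (bytes / 1024 / 1024) AS mb
  FROM v$sgastat
 WHERE name IN ('buffer_cache', 'free memory')
 ORDER BY pool, name;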
If one load didn’t get done inside of an hour, another kicked off, and they would start getting these Buffer Busy Waits wait events, which means the buffer cache was not big enough to hold all the new rows, and space was not getting cleared up fast enough either. Commit points always come up too; I'm not sure they would have helped with this particular issue, but these applications were all designed for maybe hundreds of thousands of rows, not millions.
Shared Pool Info Script (you can ask me for the code at dan@danhotka.com):

column psize format 999,999,999 heading 'Shared Pool Size'
column fspace format 999,999,999 heading 'Free Space'
column pfree format 99 heading '% Free'

SELECT to_number(p.value) psize,
       s.bytes fspace,
       (s.bytes/p.value) * 100 pfree
  FROM v$parameter p, v$sgastat s
 WHERE p.name = 'shared_pool_size'
   AND s.name = 'free memory'
   AND s.pool = 'shared pool'
/

You can run this in Toad. You can run it using the Quest Script Runner as well. My scripting tool of choice is SQL*Plus; I'm a Neanderthal when it comes to data processing (what we used to call IT). Yes, Mr. Character-mode himself. Don't get me wrong, I love today's tools, but back then what I loved best about Windows 3.1 (which came out around Oracle7) was two VT200 emulators running on a single device! Back in the day, PCs cost close to $2000, and these VT100 and VT200 dumb ASCII green-screen direct-connect terminals were cheap at $400 each.
Anyway, with a little formatting, this script will show the size of the shared pool and how much is currently in use. We ran the script and it showed a shared pool size of 2GB, with about 500MB of SQL text in use in the course of their operations. So there was easily a gig and a half available. The Lucky Break was for the client this time: we found a considerable amount of memory that was already allocated to the Oracle RDBMS environment but was not currently being used. My recommendation was to take 1GB of memory from the shared pool and give it to the default buffer cache to help with these massive data loads. They did take it under advisement. I do not know if they tried this when they next recycled the database. Real-time changes to these caches weren't available until Oracle10.
Day 3 of Client Performance Tuning involved using Dynamic SQL to solve a pesky partitioned SQL problem. I'll cover this one in a future PL/SQL problem-solving blog. Luck did show up here again: I was able to use Dynamic SQL to solve a common coding problem with SQL going against partitioned objects. Maybe this is more of my skills in being able to solve difficult SQL and PL/SQL performance issues. Stay tuned!!!
Dan Hotka
Author/Instructor/Oracle Expert
www.DanHotka.com
Dan@DanHotka.com
↧