
Blog Post: “Back It Up!!!”

“Expectations is the place you must always go to before you get to where you're going.” ― Norton Juster, The Phantom Tollbooth

In a recent post in the Oracle General database forum the following question was asked:

Hi, I have 3 schema's of almost equal sizes, off which only two are creating the backup's (Hot backup). One of the schema's is not creating any backup, by not creating I mean, the backup file that is generated size is too small than the other two files. The size of the other two backup files is almost 20 GB while the third one is only 54 Bytes !!! Below are the commands I am using for backup, alter tablespace SCHEMA begin backup; ( DB level ) tar cpvf - SCHEMA_TS | compress -f > backup.tar.Z ( OS level ) The DB (Oracle Database 11g) is in Archive log mode and no error is being thrown while running the above commands. Could you please help me in solving this issue. Any reference related to this would also be of great help. Thanks in advance !!

There are a number of issues with that sequence of statements, the first being calling it a ‘backup’ at all. It’s highly likely that, after the tablespace files are restored, the recovery will fail and the database will be left in an unusable state. The obvious omission is the archivelogs; nowhere in that command sequence is there any statement using tar to copy the archivelogs generated before, during and after that ‘backup’ is completed. Since no recovery testing is reported, it’s very possible that this ‘backup’ is taken on faith, and unfortunately faith isn’t going to be of much help here. Yet another problem is the lack of any query to determine the actual datafiles associated with the given tablespace; a ‘backup’ missing that important information means that not all required datafiles will be copied, making the tablespace incomplete and Oracle unable to recover it. This again leads to a down database with no hope of opening.

It was suggested several times in the thread that the poster stop using this ‘backup’ and move to RMAN to create dependable, reliable and recoverable backups. Why this method was in use was explained with this post:

I am new to Oracle DB and the guy who worked before me, wrote a (backup) script where he just created a tar of the table space files.

which leads one to wonder how this DBA thought he or she would restore these tablespaces to a useful and usable state should the time come. The poster added:

I want to use RMAN but thought of solving this issuse first (worst case scenario) and then create a new backup script using RMAN.

Honestly this is not the problem that needs to be solved; the problem is generating a reliable backup, and RMAN has been proven time and again as the tool for that job. Further discussion led to the realization that not all files were being sent to tar, which explained the size discrepancy but didn’t truly address the recoverability issue. Anyone can take a so-called ‘backup’ using any number of tools and operating system utilities; it’s restoring and recovering from those ‘backups’ that tells the tale of success or failure, and failure in restoring and recovering a production database isn’t an option.

Sometimes you don’t get what you expect.

Filed under: General
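For contrast with the quoted approach, here is a minimal sketch of the two pieces it is missing: a query to list every datafile that belongs to the tablespace, and an RMAN backup that also captures the archivelogs. The tablespace name simply reuses the poster's placeholder; adjust for your own environment.

-- Which datafiles actually make up the tablespace being 'backed up'?
SELECT file_name
  FROM dba_data_files
 WHERE tablespace_name = 'SCHEMA_TS';

-- A basic, recoverable RMAN backup (run from the RMAN prompt, connected to the target)
RUN {
  BACKUP DATABASE PLUS ARCHIVELOG;
  DELETE NOPROMPT OBSOLETE;
}

Unlike the tar script, a backup like this can be exercised ahead of time with RMAN's RESTORE ... VALIDATE, so you are not relying on faith when the restore is needed.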

Blog Post: Slides from the OUG Ireland meet-ups

I've finally gotten the time (and the permissions from the presenters) to make the slides from the first two OUG Ireland meet-ups available. I've posted them on SlideShare and I've embedded them in this blog post too. OUG Ireland Meet-up 12th January - DevOps and Oracle Cloud from Brendan Tierney 20th Oct. 2016 OUG Ireland Meet-up - Updates from Oracle Open World 2016 from Brendan Tierney

Blog Post: Find duplicate SQL statements with PL/Scope in 12.2

PL/Scope is a compiler tool that gathers information about identifiers (as of 11.1) and SQL statements (as of 12.2) in your PL/SQL code. You can do all sorts of amazing deep-dive analysis of your code with PL/Scope, answering questions like:
Where is a variable assigned a value in a program?
What variables are declared inside a given program?
Which programs call another program (that is, you can get down to a subprogram in a package)?
Find the type of a variable from its declaration.
Show where specific columns are referenced.
Find all program units performing specific DML operations on a table (and help you consolidate such statements).
Locate all SQL statements containing hints.
Find all dynamic SQL usages – ideal for getting rid of SQL injection vulnerabilities.
Show all locations in your code where you commit or rollback.
And my latest favorite: locate multiple appearances of the same "canonical" SQL statement.

What does this mean and why does it matter? One great feature of PL/SQL is that the PL/SQL compiler automatically "canonicalizes" all static SQL statements in your code. This means that it upper-cases all keywords, removes extraneous white space, and so on. It standardizes the format of your SQL statements. This standardization is important because it reduces the number of times that Oracle Database will "hard parse" your SQL statement when it is executed. That's because standardization of format raises the likelihood that SQL statements which "look" different (different case, line breaks, spaces) but are actually the same "under the surface" end up as identical text and are treated as a single statement. So canonicalization of SQL can improve performance.

Now on to another benefit gained from this process: PL/Scope compares all the canonicalized SQL statements and assigns the same SQL_ID to matching statements. Consider the following two statements. I turn on PL/Scope to gather both identifier and statement information. Then I compile two procedures. Clearly, they were written by two different developers on my team, each with their own formatting and naming standards. Sigh....welcome to the real world, right?

ALTER SESSION SET plscope_settings='identifiers:all, statements:all'
/

CREATE OR REPLACE PROCEDURE p1 (p_id NUMBER, p_name OUT VARCHAR2)
IS
BEGIN
   SELECT last_name INTO p_name FROM employees WHERE employee_id = p_id;
END;
/

CREATE OR REPLACE PROCEDURE p2 (id_in NUMBER, name_out OUT VARCHAR2)
IS
BEGIN
   SELECT last_name
     INTO name_out
     FROM EMPLOYEES
    WHERE employee_id = id_in;
END;
/

Now let's analyze the PL/Scope data:

SELECT signature, sql_id, text
  FROM all_statements
 WHERE object_name IN ('P1', 'P2')
 ORDER BY line, col
/

517C7D44CC74C7BD752899158B277868 641rpxvq1qu8n SELECT LAST_NAME FROM EMPLOYEES WHERE EMPLOYEE_ID = :B1
DFD0209075761780F18552DE6661B4E7 641rpxvq1qu8n SELECT LAST_NAME FROM EMPLOYEES WHERE EMPLOYEE_ID = :B1

Brilliant! The signatures are different (no big surprise there; that's a value generated by PL/Scope that is guaranteed to be unique across all statements and identifiers). But notice that the SQL_IDs are the same - and the SQL statements are the same, too. There you see the canonicalization at work. Why am I so excited about this? One of the worst things you can do in your code is repeat stuff: copy/paste algorithms, magic values...and, most damaging of all, SQL statements. If you repeat the same SQL statements across your application, it is much harder to optimize and maintain that code. Now, with PL/Scope 12.2, we have an awesome and easy-to-use tool at our disposal to identify all duplicates of SQL. 
We can then decide which of those should be moved into functions (SELECTs) or procedures (non-query DML), so that the subprogram can be invoked in multiple places, and the SQL can be managed in one place. Here's a query that tells you precisely where duplication of SQL occurs: SELECT owner, object_name, line, text FROM all_statements WHERE sql_id IN ( SELECT sql_id FROM all_statements WHERE sql_id IS NOT NULL GROUP BY sql_id HAVING COUNT (*) > 1) / Cool stuff. Try it out at LiveSQL , our free, 24x7 playground for Oracle Database 12c Release 2 SQL and PL/SQL (and a code library). And check out the extensive doc on PL/Scope, with lots of examples and insights.
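Once the duplicates are identified, a single-row lookup like the one above can be wrapped in a function so that the SQL itself lives in exactly one place. A minimal sketch of that refactoring (the function name is mine, not part of the original post):

CREATE OR REPLACE FUNCTION employee_last_name (employee_id_in IN employees.employee_id%TYPE)
   RETURN employees.last_name%TYPE
   AUTHID DEFINER
IS
   l_last_name   employees.last_name%TYPE;
BEGIN
   -- The one and only copy of this query; p1 and p2 can now both call this function.
   SELECT last_name
     INTO l_last_name
     FROM employees
    WHERE employee_id = employee_id_in;

   RETURN l_last_name;
END;
/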

Comment on Getting Started with Kubernetes

esheetal, you should start the Docker daemon like this (CentOS): systemctl start docker. The exact command depends on your Linux distribution.
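On other systemd-based distributions the same pattern generally applies; a small sketch, assuming systemd and sudo access:

# Start Docker now and have it start automatically at boot
sudo systemctl start docker
sudo systemctl enable docker
# Confirm the daemon is running before retrying the Kubernetes steps
sudo systemctl status docker
sudo docker info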

Blog Post: New APEX 5.1 Feature: Greatly Simplified Wizards

In Oracle Application Express 5.1, the wizards have been simplified with smarter defaults and fewer steps, allowing developers to create components faster than ever.

Create Application Wizard: The Create Desktop Application wizard now supports creating Interactive Grid pages such as reports, forms and master detail pages. When we create an application from a spreadsheet, the wizard now supports creating an Interactive Grid as a single page or as a report and form page. In addition, the wizard has gone from 8 steps down to just 5!

Create Page Wizard: The Create Page wizard has a more consistent and streamlined interface made up of fewer steps. The Master Detail wizards now support embedding an Interactive Grid region in a single page or in two pages. Tabular forms, AnyChart charts and the old calendar now appear under the Legacy page type. This new release comes with everything!

Blog Post: Import One Table to a New Name in the Same Schema

I just want to share the following case in case anyone needs it: development wanted to import an existing table in the Siebel schema under a new name to check data integrity. Oracle provides an attribute called REMAP_TABLE; read more about it here . Just run the following command and change the variables depending on your environment:
directory : the name of an existing directory object pointing to the dump location
dumpfile : the name of the dump file
remap_table : allows you to rename tables during an import operation
tables : the name of the table you want to restore
impdp directory=DMP_BAK dumpfile=31-01-2017_PRDSBL.dmp remap_table=siebel.CX_PERMITS:CX_PERMITS_NEW tables=CX_PERMITS
Thanks
Osama
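As a follow-up to the example above: once the import completes, a quick sanity check is to compare the two tables. The query below follows the same names used in the example; the row-count comparison is my own addition, not part of the original post.

-- Confirm the remapped copy has the same number of rows as the original
SELECT (SELECT COUNT(*) FROM siebel.CX_PERMITS)     AS original_rows,
       (SELECT COUNT(*) FROM siebel.CX_PERMITS_NEW) AS remapped_rows
  FROM dual;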

Blog Post: Upgrading to Exadata 12.1.2.2.0 or later – mind the 32bit packages

This might not be relevant anymore, shame on me for keeping it as a draft for a few months. However, there are still people running older versions of the Exadata storage software and it might still help someone out there. With the release of Exadata storage software 12.1.2.2.0, Oracle announced that some 32bit (i686) packages will be removed from the OS as part of the upgrade. This happened to me in the summer last year ( blog post ) and I thought back then that someone had messed up the dependencies. After seeing it again for another customer a month later I thought it might be something else. So after checking the release notes for all the recent patches, I found this for the 12.1.2.2.0 release: Note that several i686 packages may be removed from the database node when it is updated. Run dbnodeupdate.sh with the -N flag at prereq check time to see exactly what rpms will be removed from your system. Now, this will all be OK if you haven’t installed any additional packages. If, however, like many other customers you have packages such as LDAP or Kerberos installed, then your dbnodeupdate pre-check will fail with “'Minimum' dependency check failed.” and broken dependencies, since all i686 packages will be removed as part of the upgrade. The way around that is to run dbnodeupdate with the -N flag and check the logs for what packages will be removed and what will be impacted. Then remove any packages you installed yourself. After the Exadata storage software update, you’d need to install the relevant versions of those packages again. Having said that, I need to mention the note below on what software is allowed to be installed on Exadata: Is it acceptable / supported to install additional or 3rd party software on Exadata machines and how to check for conflicts? (Doc ID 1541428.1)
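Before kicking off the upgrade it also helps to have your own inventory of the 32bit packages on the node, so the dbnodeupdate output can be compared against something. A small sketch (the rpm query is generic; the remaining dbnodeupdate.sh arguments depend on your patching setup and are deliberately left out):

# List every i686 (32bit) package currently installed on the database node
rpm -qa --queryformat '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | grep '\.i686$' | sort

# Run the dbnodeupdate prereq check with -N, as the release notes suggest,
# then review its log for the rpms flagged for removal
# ./dbnodeupdate.sh ... -N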

Wiki Page: Oracle APEX 5.1 Introduces Three New Productivity Apps

In this article we will look at the three new apps that come with Oracle APEX 5.1, which can be found in the Packaged Applications module: Competitive Analysis, Quick SQL and REST Client Assistant.

Competitive Analysis
We can use this application to compare any number of products side by side. Comparisons are created online in a browser and can be filled in by many users simultaneously. Once completed, comparisons can be published online. Comparisons can be scored and displayed as an aggregate chart, and they can also be shown in a longer, more detailed text form. The format and the content attributes displayed are customizable by end users. Filtering makes it easy to highlight the differences between the products.

Facts about the application: We can use this application to compare products or versions of the same product. We can compare any number of products and view up to 8 products at a time. The comparison is organized by levels and attributes. Levels are used to structure the document, and attributes are the part of the comparison that can be scored or annotated. Each comparison has a level 1 and an optional level 2 of attribute groupings. Each line (attribute) has a weight. Each intersection of a product and an attribute can be scored. A product's total score is the sum of all its scores multiplied by the attribute weights.

Features: tracking and management of the competitive analysis; analyzing product strengths and weaknesses by category; allowing multiple users to build and view the same comparison at the same time; adding URL links and attachments; flexible access control (reader, contributor, administrator model); comparing any number of products; the ability to customize view options; download to a spreadsheet.

Quick SQL
This application provides a quick and intuitive way to generate a SQL data model from an indented text document. This tool can dramatically reduce the time required to create a relational data model. In addition, the tool offers many options for generating the SQL, including generating triggers, APIs and history tables. It is not designed to be a 100% replacement for data modeling; it is simply a quick start. Once the SQL is generated it can be adjusted and extended.

Facts about the application. Use cases: rapid generation of SQL syntax from markdown-style text to quickly create basic data models; creating create and insert SQL statements from data copied and pasted from a spreadsheet; random data generation; learning the SQL syntax for create table, select, insert, index, trigger, PL/SQL package and view using the examples provided with the app.

Features: generation of create table SQL statements from a list of table and column names; the ability to share data models with others; the ability to save data models under a name; automatic data types and size suggestions based on column names; random data generation; spreadsheet copy-and-paste data loading with conversion to SQL, including the table creation and insert statements; shorthand data type declarations such as vc20 for varchar2(20); indexing a column with an indexed suffix; adding a foreign key with the /fk [table name] syntax; optional (on by default) automatic addition of a primary key column named "ID" to each table; automatic foreign key detection by defining a column ending in "ID" preceded by a corresponding table name; multiple levels of indentation, so you can create parent, child and grandchild table structures simply by indenting; automatic indexing of foreign keys; simplified syntax for data types, check constraints, not-null conditions, etc.; no need to include underscores in table or column names; trigger generation; options for several primary key, trigger or identity data type syntaxes; API generation (optional); history table generation (optional); table prefixes, optionally adding a user-defined table prefix to all objects automatically; duplicate column names are removed - if there are two occurrences of column A in a table, the first value is used and the others are ignored; columns ending in _YN automatically get a generated check constraint restricting the domain of acceptable values to Y and N.

REST Client Assistant
This application highlights the RESTful service capabilities of Oracle Application Express. It lets you access the RESTful services defined in your workspace, as well as public services. The application provides metadata-based mapping from service response data to SQL result set columns. The generated SQL and PL/SQL code can be used in your own applications.

Prerequisites: To run this application successfully, the following prerequisites must be met: RESTful services must be configured on the instance (for more information, see the Oracle Application Express Installation Guide); network services must be enabled in the database (for more information, see Enabling Network Services in Oracle Database 11g in the Oracle Application Express Installation Guide); 'Enable RESTful Services' must be set to 'Yes' at the instance and workspace levels (see Administration > Manage Instance > Feature Configuration); 'Allow RESTful Access' must be set to 'Yes' at the instance level (for more information, see Controlling RESTful Access in the Oracle Application Express Installation Guide).

Features: stores metadata for external or internal REST services; metadata-based mapping from a REST service response to a SQL query result set; auto-detection of data columns for services powered by ORDS 3.0 or higher; supports JSON and XML response formats; supports DML (POST, PUT, DELETE) for ORDS REST services; supports HTTP Basic and OAuth2 Client Credentials authentication methods; generates SQL code to access the REST service for use in your own applications.

With these ready-to-use applications we can learn how to implement these capabilities in our own APEX applications.
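To give a feel for the indented Quick SQL syntax described above, here is a small illustrative sketch: a made-up model and a simplified version of the kind of DDL the app generates from it. The table names, the shorthand choices and the generated output (identity clauses, constraint names, indexes) are assumptions for illustration only and may differ from what Quick SQL actually emits.

-- Indented model text (vc50 is the shorthand for varchar2(50))
-- departments
--     name vc50
--     employees
--         name vc50
--         hire_date date

-- Simplified sketch of the kind of SQL generated from the model
CREATE TABLE departments (
   id    NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY PRIMARY KEY,
   name  VARCHAR2(50)
);

CREATE TABLE employees (
   id             NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY PRIMARY KEY,
   department_id  NUMBER REFERENCES departments,  -- parent key inferred from the indentation
   name           VARCHAR2(50),
   hire_date      DATE
);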

Blog Post: Consolidation Planning for the Cloud - Part VII

We started to detail the concept and workings of Consolidation Planning for the Cloud, using Oracle Enterprise Manager Cloud Control 13c, in the previous parts of this article series. This is Part VII. To recap, in the previous part of the series, we looked at consolidating to virtual machines when we created a new project for a P2V (Physical to Virtual) consolidation. One important point we noted was that only Oracle Virtual machines are considered in this case; if you are using other virtualization techniques such as VMware, then those virtual machines need to be treated as physical machines for the sake of consolidation. We then started to look at the benchmark rate, the SPECint®_base_rate2006 benchmark being used by the Consolidation Planner for database or application hosts, or hosts with a mixed workload. The SPECjbb®2005 benchmark is used for middleware platforms. The technique to refresh these rates was to go to the Host Consolidation Planner homepage in Enterprise Manager, via Enterprise | Consolidation | Host Consolidation Planner, and then select Actions | View Data Collection. On this page, examine the section titled “Load Benchmark CSV File”. The built-in benchmark rates can be updated with a downloaded Comma Separated Values (CSV) file. The latest data is available from the spec.org published page http://www.spec.org/cgi-bin/osgresults . As per the instructions shown on the Enterprise Manager page, the new spec rates can be downloaded from SPEC.org in the form of CSV files, and then uploaded to the repository. And where are these stored inside the Enterprise Manager repository? The full list of all the SPEC rates that are being used in Enterprise Manager can be found in the repository table EMCT_SPEC_RATE_LIB (owned by the SYSMAN user in the repository database). The host consolidation plan results can then be reviewed, and the consolidation ratio, mapping, target server utilization, and excluded servers examined. The Host Consolidation planner we have seen so far is essentially a server consolidation tool and is based on CPU, Memory and I/O Metrics at the O/S level. These metrics are collected by Oracle Enterprise Manager for Linux, Solaris, HP-UX, AIX and Windows and this means the planner would work on these platforms. Note that if a Phantom Exadata server is used as the destination candidate in a P2P project and scenario, the Host Consolidation Planner (which was the only consolidation planner available in the previous Enterprise Manager Cloud Control 12c version) does not by itself take into account the Oracle Exadata features such as Smart Scan, which could potentially have a positive impact on performance by reducing the CPU utilization of the databases and allowing more servers to be consolidated on the Exadata server. However, in the latest Enterprise Manager Cloud Control 13c version, we have a new “Database Consolidation Workbench” that takes into consideration the effects of the Exadata features on consolidation. This is described in the next section.

Database Consolidation Workbench

Oracle Enterprise Manager Cloud Control 13c offers the new capability of a “Database Consolidation Workbench”. This is actually a feature of the Oracle Real Application Testing (RAT) Database Option, so it is mandatory to hold the RAT option license if you intend to use the Database Consolidation workbench. 
The workbench is accessed via Enterprise | Consolidation | Database Consolidation Workbench in Enterprise Manager, and is a new comprehensive end-to-end solution for managing database consolidation. The analysis is based on historical workload data—database and host metrics, in combination with AWR data. The workbench provides automation in all consolidation phases, from planning to deployment and validation. Guesswork and human errors in database consolidation can be largely eliminated if you use this workbench. Database versions 10.2 and above are supported, as is consolidation to the Oracle Private/Public Cloud or to Exadata. The workbench also supports high availability options for the implementation to minimize downtime, depending on what platform and version of database is present at the source and destination. Note that for the Database Consolidation workbench, if you have the RAT option, you can run SQL Performance Analyzer (SPA)’s Exadata simulation, which will help assess the benefit of Exadata smart scans for phantom servers. SPA’s Exadata simulation runs on existing hardware and uses the init.ora parameter cell_simulation_enabled to assess the benefit. You can perform this by going to each database’s home page in Enterprise Manager, and selecting Performance | SQL | SQL Tuning Sets from the menu to create a SQL Tuning set. Next, select Performance | SQL | SQL Performance Analyzer Home, and “Exadata Simulation” on that page. The Database Consolidation workbench has three main phases: Plan – Migrate – Validate. In the Plan phase, consolidation advice is given by identifying candidate databases for the designated consolidation platform, and Automatic Workload Repository (AWR) data gathered from the Oracle databases is used for this phase. Since this uses AWR, the Oracle Enterprise Manager Diagnostics Pack License is also required. In the Migrate phase, the consolidation plan is actually implemented, by migrating the databases to the new consolidation platform using Enterprise Manager’s provisioning features. To perform the actual provisioning using Enterprise Manager, an additional Enterprise Manager pack needs to be licensed. This is the Database Lifecycle Management pack (DBLM) for Oracle Database. If you use Real Application Clusters (RAC) or Active Data Guard in the migration phase, those database options need to be licensed as well. In the Validate phase, the consolidation plan is validated with SQL Performance Analyzer (SPA) (a component of the Real Application Testing Option) by running test workloads on the consolidated databases. The RAT option license is required. Conflicts are identified based on workload characteristics, and you are also told if the workload is not suitable for Exadata consolidation. Storage/Platform advice is made available—such as the impact of using compression on I/O and storage, and the impact of the I/O offloading and Flash Cache features of Exadata. Note that for licensing database options such as RAT or Active Data Guard, and the Enterprise Manager Packs such as Diagnostics or DBLM, the Enterprise Edition (EE) of the database is required. Other editions of the Oracle Database such as standard edition (SE) cannot be used with these options or Enterprise Manager packs. We’ll have a look at the new Database consolidation workbench in the continuation of this Consolidation Planning for the Cloud article series in Part VIII.
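If you want to inspect what the planner is working from, the repository table named above can be queried directly in the repository database. A minimal sketch (a simple count is used here as a sanity check, since the article does not list the table's columns):

-- Run against the Enterprise Manager repository database
SELECT COUNT(*) AS spec_rate_rows
  FROM sysman.EMCT_SPEC_RATE_LIB;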

Wiki Page: Consolidation Planning for the Cloud - Part VI

by Porus Homi Havewala

We introduced the concept of Consolidation Planning in the previous parts of this article series. Part V was here . This is Part VI.

Consolidating to Virtual Machines

In the same way, we can create a new project for a P2V (Physical to Virtual) consolidation. This allows you to consolidate your existing Physical Machines to one or more Virtual Machines. The consolidation project only applies to Oracle Virtual Machines. Consolidating to non-Oracle Virtual Machines, such as VMware, is not supported unless you treat the VMware machines as Physical Machines and use the earlier P2P consolidation method. When creating the project, simply select the Consolidation Type to be "From physical servers to Oracle Virtual Servers (P2V)". The other screens are mostly the same. You can select source candidates, then add existing virtual servers (current Enterprise Manager targets) as destination candidates. If you do not specify the destination candidates, only phantom (new) virtual servers will be used. In the Pre-configured Scenarios step of the process, select New (Phantom) servers. This is seen in Figure 1-34. Note that there is no engineered systems option at this stage; however, engineered systems can be chosen later on when creating the scenario. Figure 1-34. Phantom Servers in the case of P2V Projects. If you do not intend to use engineered systems as the destination, you can manually specify the CPU capacity, Memory, and Disk Storage of your phantom virtualized servers, along with optional entries for reserved CPU and reserved memory. Finish creating and submitting the P2V Project. Once the project is ready, you can create a scenario. In the case of a P2V scenario, a list of Exalogic configurations is provided, and you can select from that. We have selected the Exalogic Elastic Cloud X5-2 (Eight Rack) as shown in Figure 1-35. Figure 1-35. Selecting an Exalogic X5-2 (Eight Rack) for the P2V Scenario. The rest of the scenario calculations and mapping work in the same way, and the physical source servers are mapped to the destination phantom Eight Rack. In this manner, the Consolidation Planner allows you to play a number of what-if scenarios, for P2P as well as P2V consolidations, on the basis of sound mathematical calculations. You can specify which metrics will be analysed, and this results in the calculation of the resource requirements for every source server. Each resource is aggregated to a 24-hour pattern based on a different formula, depending on whether Conservative, Medium or Aggressive has been selected as the algorithm. Constraints can also be specified as to which server workloads can co-exist together, and which workloads should be placed on different target servers for business or technical reasons.

Updating the Benchmark Rates

Now for a look at the Benchmark rates. The SPECint®_base_rate2006 benchmark is used by the Consolidation Planner for database or application hosts, or hosts with a mixed workload. The SPECjbb®2005 benchmark is used for middleware platforms. The benchmarks are used as a representation of the processing power of CPUs - including Intel Itanium, Intel Xeon, SPARC T3, SPARC64, AMD Opteron, as well as IBM POWER. The SPEC rates in Enterprise Manager can also be updated by users in the following manner. Go to the Host Consolidation Planner home page via Enterprise | Consolidation | Host Consolidation Planner, and select Actions | View Data Collection. On this page, see the section titled “Load Benchmark CSV File”. 
The following information is seen on the page: “The built-in benchmark rates can be updated with a downloaded Comma Separated Values (CSV) file. To download the latest data, go to http://www.spec.org/cgi-bin/osgresults and choose either SPECint2006 Rates or SPEC JBB2005 option for "Available Configurations" and "Advanced" option for "Search Form Request", then click the "Go!" button. “In the Configurable Request section, keep all default settings and make sure the following columns are set to "Display": For SPECint2006: Hardware Vendor, System, # Cores, # Chips, # Cores Per Chip, # Threads Per Core, Processor, Processor MHz, Processor Characteristics, 1st Level Cache, 2nd Level Cache, 3rd Level Cache, Memory, Base Copies, Result, Baseline, Published; for SPEC JBB2005: Company, System, BOPS, BOPS per JVM, JVM, JVM Instances, # cores, # chips, # cores per chip, Processor, CPU Speed, 1st Cache, 2nd Cache, Memory, Published. Choose "Comma Separated Values" option for "Output Format", then click "Fetch Results" button to show retrieved results in CSV format and save the content via the "Download" link in a CSV file, which can be loaded to update the built-in rates”. As per the above instructions, the latest spec rates can be downloaded from SPEC.org in the form of CSV files, and then uploaded to the repository. We have continued this article series in Part VII here . (This article is an excerpt from Chapter I of the new book by the author titled “Oracle Database Cloud Cookbook with Oracle Enterprise Manager 13c Cloud Control” published by Oracle Press in August 2016. For more information on the book, see here .)

Blog Post: How NOT to Handle Exceptions

Oracle Database raises an exception when something goes wrong (examples: divide by zero, duplicate value on unique index, value too large to fit in variable, etc.). You can also raise exceptions when an application error occurs (examples: balance too low, person not young enough, department ID is null). If that exception occurs in your code, you have to make a decision: Should I handle the exception or let it propagate out unhandled?

You should let it propagate unhandled (that is, not even trap it and re-raise) if handling it in that subprogram or block will not "add value" - there are no local variables whose values you need to log, for example. The reason I say this is that at any point up the stack of subprogram invocations, you can always call DBMS_UTILITY.FORMAT_ERROR_BACKTRACE and it will trace back to the line number on which the error was originally raised.

If you are going to handle the exception, you have to make several decisions:

Should I log the error? Yes, because you want to be able to go back later (or very soon) and see if you can diagnose the problem, then fix it.

Should I notify anyone of the error? This depends on your support infrastructure. Maybe depositing a row in the log table is enough.

What should I log along with the error information? Generally, you the developer should think about the application state - local variables, packaged/global variables, table changes - for which a "snapshot" (logged values) would help you understand what caused the error.

Should I then re-raise the same exception or another after logging? Almost always, yes. Sometimes, it is true, you can safely "hide" an error - very common for a NO_DATA_FOUND on a SELECT-INTO, when a lack of data simply indicates need for a different action, not an error - but for the most part you should always plan to re-raise the exception back out to the enclosing block.

So that's a high-level Q&A. Now let's dive into some anti-patterns (common patterns one finds in PL/SQL code that are "anti" - not good things to do) to drive these points home. You can see and run all of this code on LiveSQL. I also put together a YouTube playlist if you prefer video. OK, here we go.

1. Worst Exception Handler Ever

CREATE OR REPLACE PROCEDURE my_procedure (value_in IN INTEGER) AUTHID DEFINER IS BEGIN /* Lots of code executing and then... */ RAISE PROGRAM_ERROR; EXCEPTION WHEN OTHERS THEN NULL; END;

Completely swallows up and ignores just about any error the Oracle Database engine will raise. DO NOT DO THIS. Want to ignore the error? Make it explicit and log it anyway:

CREATE OR REPLACE PROCEDURE my_procedure (name_in IN VARCHAR2) AUTHID DEFINER IS e_table_already_exists EXCEPTION; PRAGMA EXCEPTION_INIT (e_table_already_exists, -955); BEGIN EXECUTE IMMEDIATE 'CREATE TABLE ' || name_in || ' (n number)'; EXCEPTION /* A named handler */ WHEN e_table_already_exists THEN /* Even better: write a message to log. */ NULL; /* Checking SQLCODE inside WHEN OTHERS */ WHEN OTHERS THEN IF SQLCODE = -955 THEN /* ORA-00955: name is already used by an existing object */ NULL; ELSE RAISE; END IF; END;

If you can anticipate a certain error being raised and "That's OK", then handle it explicitly, either with an "IF SQLCODE = " inside WHEN OTHERS or by declaring an exception, associating it with the error code and then handling it by name.

2. A Handler That Only Re-Raises: Why Bother?

CREATE OR REPLACE PROCEDURE my_procedure (value_in IN INTEGER) AUTHID DEFINER IS BEGIN /* Lots of code executing and then... 
*/ RAISE PROGRAM_ERROR; EXCEPTION WHEN OTHERS THEN RAISE; END; This handler doesn't hide the error - it immediately passes it along to the outer block. Why would you do this? Not only is there no "added value", but by re-raising, any calls to DBMS_UTILITY.FORMAT_ERROR_BACKTRACE will trace back to that later RAISE; and not to the line on which the error was *originally* raised. Takeaway: don't handle unless you want to do something inside the handler, such as log the error information, raise a different exception or perform some corrective action. 3. "Log" Error with DBMS_OUTPUT? No Way! CREATE OR REPLACE PROCEDURE my_procedure (value_in IN INTEGER) AUTHID DEFINER IS BEGIN /* Lots of code executing and then... */ RAISE PROGRAM_ERROR; EXCEPTION WHEN OTHERS THEN DBMS_OUTPUT.put_line ('Program failed!'); END; OK, so I don't do *nothing* (NULL) in this handler, but I come awfully close. I do not re-raise, so the error is swallowed up. But I also simply write non-useful information out to the screen. TO THE SCREEN. If this code is running in production (the most important place from which to gather useful error-related diagnostics), can you see output to the screen via DBMS_OUTPUT? I bet not. And even if you could, surely you'd like to show more than a totally useful static piece of text? 4. Display Error and Re-Raise - Better But Still Pathetic CREATE OR REPLACE PROCEDURE my_procedure (value_in IN INTEGER) AUTHID DEFINER IS BEGIN /* Lots of code executing and then... */ RAISE PROGRAM_ERROR; EXCEPTION WHEN OTHERS THEN DBMS_OUTPUT.put_line (SQLERRM); END; OK, now I display the current error message, but still stuck with output to the screen, and anyway (a) we recommend you call DBMS_UTILITY.format_error_stack instead, since it avoids some possible truncation issues with SQLERRM (for very long error stacks) and (b) you really do need to see more than that! At least, though, I do re-raise the error. 5. Do Not Convert Exceptions to Status Codes CREATE OR REPLACE PROCEDURE my_procedure (value_in IN INTEGER, status_out OUT INTEGER) AUTHID DEFINER IS BEGIN IF value_in > 100 THEN /* All is fine */ /* Execute some code */ /* Set status to "ok" */ status_out := 0; ELSE RAISE PROGRAM_ERROR; END IF; EXCEPTION WHEN OTHERS THEN DBMS_OUTPUT.put_line (DBMS_UTILITY.format_error_stack); status_out := SQLCODE; END; This is a common technique in some other programming languages. For example, in C, many people only write functions and the function's return value is the status. If the status is not 0 or some other magic value indicating success, then you must abort. But, oh, the resulting code! DECLARE l_status INTEGER; BEGIN my_procedure (100, l_status); IF l_status <> 0 THEN /* That didn't go well. Need to stop or do *something*! */ RAISE program_error; END IF; my_procedure (110, l_status); IF l_status <> 0 THEN /* That didn't go well. Need to stop or do *something*! */ RAISE program_error; END IF; END; 6. 
Write to a Log Table, But Not This Way

CREATE TABLE error_log ( id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY, title VARCHAR2 (200), info CLOB, created_on DATE DEFAULT SYSDATE, created_by VARCHAR2 (100), callstack CLOB, errorstack CLOB, errorbacktrace CLOB ) / CREATE OR REPLACE PROCEDURE my_procedure (value_in IN INTEGER) AUTHID DEFINER IS BEGIN RAISE PROGRAM_ERROR; EXCEPTION WHEN OTHERS THEN INSERT INTO error_log (title, info, created_by, callstack, errorstack, errorbacktrace) VALUES ('Program failed', 'value_in = ' || value_in, USER, DBMS_UTILITY.format_call_stack, DBMS_UTILITY.format_error_stack, DBMS_UTILITY.format_error_backtrace); RAISE; END; /

You should write error information to a log table. Here's a very simple example. Even better is to download and use the open source and popular Logger . It's good that I write my errors to a table, but terrible that I do it this way. You should never "hard code" the inserts right inside the handler. First, that's bad because if you ever need to change the table structure you (might) have to go back to each handler and change it. Second, in this way, the error log entry becomes a part of your session's transaction. If the process ends with an unhandled exception, your log entry is rolled back along with the "bad" transaction. Error information lost!

7. Get Fancy with Savepoints?

CREATE OR REPLACE PROCEDURE my_procedure (value_in IN INTEGER) AUTHID DEFINER IS BEGIN SAVEPOINT my_transaction; UPDATE employees SET salary = value_in; RAISE PROGRAM_ERROR; EXCEPTION WHEN OTHERS THEN ROLLBACK TO my_transaction; INSERT INTO error_log (title, info, created_by, callstack, errorstack, errorbacktrace) VALUES ('Program failed', 'value_in = ' || value_in, USER, DBMS_UTILITY.format_call_stack, DBMS_UTILITY.format_error_stack, DBMS_UTILITY.format_error_backtrace); SAVEPOINT error_logged; RAISE; END;

Um, no. Yes, you could use savepoints to make sure that the log entry is not rolled back - but then any rollbacks that occur "higher up" in the callstack have to know to rollback only to this savepoint. It's messy and hard to ensure consistency. The autonomous transaction feature is much to be preferred.

Best Approach: Reusable Error Logger

CREATE OR REPLACE PROCEDURE log_error (title_in IN error_log.title%TYPE, info_in IN error_log.info%TYPE) AUTHID DEFINER IS PRAGMA AUTONOMOUS_TRANSACTION; BEGIN INSERT INTO error_log (title, info, created_by, callstack, errorstack, errorbacktrace) VALUES (title_in, info_in, USER, DBMS_UTILITY.format_call_stack, DBMS_UTILITY.format_error_stack, DBMS_UTILITY.format_error_backtrace); COMMIT; END;

This is a very simple example; again, the Logger project is a MUCH better example - and code you can use "out of the box". The key points are: (a) move the insert into the procedure so it appears just once and can be modified there as needed; (b) use the autonomous transaction pragma to ensure that I can commit this insert without affecting the unsaved changes in my "outer" transaction/session.

CREATE OR REPLACE PROCEDURE my_procedure (value_in IN INTEGER) AUTHID DEFINER IS l_local_variable DATE; BEGIN l_local_variable := CASE WHEN value_in > 100 THEN SYSDATE - 10 ELSE SYSDATE + 10 END; UPDATE employees SET salary = value_in; RAISE PROGRAM_ERROR; EXCEPTION WHEN OTHERS THEN log_error ( 'my_procedure failed', 'value_in = ' || value_in || ' | l_local_variable = ' || TO_CHAR (l_local_variable, 'YYYY-MM-DD HH24:MI:SS')); RAISE; END;

In my exception handler, I call the generic logging routine. 
In that call, I include the values of both my parameter and local variable, so that I can use this information later to debug my program. Note that the local variable's value is lost unless I handle and log here. Finally, I re-raise the exception to ensure that the enclosing block is aware of the "inner" failure. Lots of Ways To Go Wrong.... But it's also not that hard to do things right. Just remember: Do not swallow up errors. Make sure you at least log the error, if you truly do not want or need to stop further processing. Log all local application state before propagating the exception to the outer block. Write all error log information to a table through a reusable logging procedure, which is defined as an autonomous transaction. And why write one of these yourself? Logger does it all for you already!

Blog Post: Oracle Behavior Differences Between Releases

$
0
0
A question came to me from a DBA friend asking about SQL differences between releases of the database, i.e., behavioral differences in SQL processing. The question posed by this person was that they felt that some older SQL was not forward compatible with the newer Oracle databases. They were not able to produce such an example, though. What I know is that there have been subtle and important changes in how the data is returned, mostly involving the GROUP BY clause. This goes back a ways, so bear with me…but…it might be pertinent to your current applications; who knows?

Prior to Oracle10.2, Oracle RDBMS always did a sort on the GROUP BY clause. It was redundant to include the ORDER BY clause in your code. Maybe we got complacent. Maybe we didn’t think about it at all. Oracle10.2 changed the internal sort mechanism and the GROUP BY clause no longer did a sort unless the ORDER BY clause was specified. I would say that this issue should have shown up in migration testing, though. It is good practice, and now very important, to have an ORDER BY clause associated with the same fields in the GROUP BY clause. Another subtle change, implemented in Oracle9i, was that the sort now took into account the national language assigned to the database. So, the same data from different countries might not appear in the same order on a report/screen/form even when using an ORDER BY clause.

The only other change over the years and releases has been the cost-based optimizer (CBO) taking over from the rule-based optimizer (RBO). The old rule-based optimizer may have performed better with some SQL in older releases of the database (prior to Oracle10.2). I say this because release 10.2 was a HUGE change for the CBO, and for the good! Oracle11 removed references to the rule-based optimizer but both Oracle11 and Oracle12 still support the RBO when a /*+ RULE */ hint is applied to the SQL. The RBO had a list of 19 rules that mostly revolved around an index on one or more WHERE clause items. The RBO also drove off the last table in the FROM clause, as both optimizers still read backwards through the SQL text, so the last table in the FROM clause was always the first one it encountered. This would be the leading table on a nested-loop or a sort-merge operation. Hash joins are only CBO operations. The first thing I used to do to tune RBO SQL was to shuffle the tables on the FROM clause to find the best-performing SQL. I still recommend this technique in my tuning class (see my video offerings with discount codes or instructor-led offerings at www.DanHotka.com ) using the LEADING hint. I have all kinds of SQL performance tips I share in this class. The CBO, of course, has lots of ways it interprets the submitted SQL, and my point here is that the behavioral and possible performance differences between the two optimizers are great. I see no good reason to be using the RBO anymore. None. Convert your code and fix the poor-performing SQL.

So, SQL between releases performed differently but still produced the same results, but perhaps not in the same amount of time. I know of no SQL code that used to work in prior releases of the Oracle database that doesn’t work in Oracle12.1.0.2.

Dan Hotka
Author/Instructor/Oracle Expert
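To make the GROUP BY point above concrete, a minimal example (the EMP/DEPTNO names are just the classic demo schema, not taken from the post):

-- Before 10.2 the GROUP BY sort often returned these rows ordered as a side effect;
-- from 10.2 onward the order is not guaranteed unless you ask for it.
SELECT deptno, COUNT(*) AS emp_count
  FROM emp
 GROUP BY deptno
 ORDER BY deptno;  -- state the ordering you rely on explicitly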

Blog Post: AWS Big Data Services in the Cloud Part 1: Amazon Redshift

By Wissem El Khlifi

I. Introduction

In the IT industry we go through a tedious and long process of acquiring infrastructure (hardware, software and licenses). Typically, projects are held back due to infrastructure delays in definition, contracts, support and licenses. Oftentimes, projects have been cancelled due to the infrastructure cost and the inability to scale. Cloud computing addresses all these difficulties and more. Teams can now focus on the more important aspects of the projects: business, functionality, and applications; and pay little heed to the infrastructure, which is totally or partially managed by the cloud provider. Everything from storage for petabytes of data to high compute capacity is at your disposal. You have the software and hardware on demand, available everywhere and anytime, with the ability to scale, and at lower cost. This article is the first of a series on Big Data in the cloud. In it, we will introduce the Big Data warehouse service in AWS. We will learn the Redshift architecture and good practices to maximize the usage of the Big Data Warehouse service in the cloud.

II. Amazon Redshift

Amazon Redshift is a very powerful and fast relational data warehouse in the AWS cloud. Easy to scale to petabytes of data, it is optimized for high-performance analysis and reporting of very large datasets. Amazon Redshift is specifically designed for online analytic processing (OLAP) and business intelligence (BI) applications, which require complex queries against large datasets. Amazon Redshift offers fast querying using the most commonly used query language: SQL. Clients use ODBC or JDBC to connect to Amazon Redshift. Amazon Redshift is based on PostgreSQL. Amazon Redshift and PostgreSQL have a number of very important differences that you must be aware of as you design and develop your data warehouse applications; for example, PostgreSQL 9.x includes some features that are not supported in Amazon Redshift. Refer to http://docs.aws.amazon.com/redshift/latest/dg/c_unsupported-postgresql-features.html for a complete list of non-supported PostgreSQL features in Amazon Redshift. Amazon Redshift can be scaled out easily by resizing the cluster. Redshift will create a new cluster and migrate data from the old cluster to the new one. During a resize operation, the database becomes read only. The Amazon Redshift service offers automation of administrative tasks such as snapshot backups and patching, along with tools to monitor the cluster and to recover from failures.

III. Amazon Redshift Cluster Architecture

The main component of Amazon Redshift is the cluster, which is composed of a leader node and one or more compute nodes. When a client or application establishes a connection to the Amazon Redshift cluster, it connects directly to the leader node only. For each SQL query issued by the users, the leader node calculates the plan to retrieve the data. It passes that execution plan to each compute node, and each slice processes its portion of the data. The leader node manages the distribution of data and query processing tasks to the compute nodes. Amazon Redshift currently supports six different node types; each node type differs from the others in vCPU, memory and storage capacity. The cost of your cluster depends on the region, node type, number of nodes, and whether the nodes are reserved in advance. The six node types are grouped into two categories: Dense Compute and Dense Storage. 
The Dense Compute node types support clusters up to 326 TB using fast SSDs. The Dense Storage nodes support clusters up to 2 PB using large magnetic disks. The following tables describe every node type. Check http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html for more details about the node types.

Dense Compute Node Types
Node Size     vCPU  ECU  RAM (GB)  Slices Per Node  Storage Per Node  Node Range  Total Capacity
dc1.large     2     7    15        2                160 GB SSD        1–32        5.12 TB
dc1.8xlarge   32    104  244       32               2.56 TB SSD       2–128       326 TB

Dense Storage Node Types
Node Size     vCPU  ECU  RAM (GB)  Slices Per Node  Storage Per Node  Node Range  Total Capacity
ds1.xlarge    2     4.4  15        2                2 TB HDD          1–32        64 TB
ds1.8xlarge   16    35   120       16               16 TB HDD         2–128       2 PB
ds2.xlarge    4     13   31        2                2 TB HDD          1–32        64 TB
ds2.8xlarge   36    119  244       16               16 TB HDD         2–128       2 PB

The following diagram explains the architecture of Amazon Redshift. Each Redshift cluster is a container of one or many databases. The data of each table is distributed across the compute nodes. The storage of every compute node is divided into slices. The number of slices per node depends on the node size of the cluster. For example, each DS1.XL compute node has two slices, and each DS1.8XL compute node has 16 slices. When the application issues a query against the Redshift cluster, all the compute nodes participate in parallel processing.

IV. Amazon Redshift Cluster Setup

In this exercise, we will set up a 4-node Redshift cluster from the AWS console.
1. First log in to the console: open the AWS Management Console and log in at http://console.aws.amazon.com
2. Find the Redshift service link from the Services menu and click on Redshift.
3. Click on Launch Cluster.
4. A cluster details page will show up; insert the cluster identifier and the database name that you want to be created during the cluster setup process. The port number by default is 5439; you need to allow inbound traffic in the security group so that users can connect to your cluster. Choose the admin user and password, then click on Continue.
5. Choose the node type according to your workload, the multi-node cluster type and the number of compute nodes. Click on Continue.
6. At this step, it is recommended to set up your cluster in your own VPC; do not allow public access to your cluster and choose an appropriate security group to filter and secure the traffic into your cluster.
7. In this step, you will see a summary of your cluster setup. Click on Launch Cluster.
8. It will take about 10 to 20 minutes to get your cluster up and running. Take a note of the JDBC URL. We will use the JDBC URL to connect remotely to the Redshift cluster.

V. Connect to the Redshift Cluster using SQL Workbench/J

In this section, we will use SQL Workbench/J, a free SQL client tool.
1. Install the SQL Workbench/J tool: In order to connect to your Amazon Redshift cluster, you need to download SQL Workbench/J (for example, the generic package for all systems, which you can find here: http://www.sql-workbench.net/downloads.html ) and the Amazon Redshift JDBC driver (you can download it from here: http://docs.aws.amazon.com/redshift/latest/mgmt/configure-jdbc-connection.html#download-jdbc-driver ).
2. Connect to the Redshift Cluster:
- Open SQL Workbench/J, click File, and then click Connect window. Then click Create a new connection profile.
- In the New profile box, type a name for the profile. For example, Amazon_Redshift .
- Click Manage Drivers. The Manage Drivers dialog opens. In the Name box, type a name for the driver, example: Amazon Redshift . 
- In URL, copy the JDBC URL from the Amazon Redshift console.
- In Username, type the name of the master user; in our case it is admin.
- In Password, type the password associated with the master user account.
- Select the Autocommit box.
- Click on Test to test the connection.

VI. Amazon Redshift Cluster Backup

Amazon Redshift cluster backup is automatically enabled using snapshots. You can also manually back up your databases by taking a snapshot. Sign in to the AWS Management Console and open the Amazon Redshift console at https://console.aws.amazon.com/redshift/ . In the cluster Configuration tab, click on Backup and select Take Snapshot.

VII. Conclusion

In this article we introduced the Amazon Redshift architecture and cluster build; we have seen the simplicity and power of the Big Data warehouse service in the Amazon cloud. In the next article of the series, we will explore Amazon Redshift database design and examples of query performance.

Blog Post: Oracle 12.2 In-memory New Features

I was able to attend the Oracle Open World 2016 event this past fall. Last April, I posted a nice series on In-Memory, material I had gathered from Oracle PM Andy Revenes (heck of a nice guy). Here are the Oracle 12.2 new features for In-Memory from VP Tirthankar Lahiri, presented at OOW16.

He clearly stated where you can currently get Oracle 12.2. His slide simply said "Oracle is presenting features for Oracle Database 12c Release 2 on Oracle Cloud. Features and enhancements related to the on-premises versions of Oracle Database 12c Release 2 are not being announced at this time." As of this writing, the documentation for Oracle 12.2 is available online, but there still isn't a Windows or Linux version of Oracle 12.2 available. You can review the online documentation at this link: http://docs.oracle.com/database/122/

In-memory Performance Information

Mr. Lahiri said overall performance was much better, in the 3x range with mixed workloads (row and column store processing). He claimed 10x faster when using table joins, and said there is a new Join Group feature that is quite performant as well. He noted that JSON columns can now participate in a column store query, with a 20 to 60 times improvement in access times. He also pointed out that in-memory features now support Active Data Guard (see further discussion and syntax below). There is also a column store Fast Start feature, again discussed in detail below.

In-memory Join Group Syntax

CREATE INMEMORY JOIN GROUP v_deptno (DEPT(deptno), EMP(deptno));

Create the join group on the common columns being joined between two tables, such as the illustrated EMP and DEPT.

In-memory Expressions

CREATE TABLE CUSTOMER_SALES (
  CUST_NO    number,
  PRICE      number,
  QTY        number,
  TAX        number,
  TOTAL_SALE as (PRICE * QTY + TAX))
INMEMORY;

The column expressions must have a one-to-one relationship to the rows in the table or column store. You can also use DECODE, UPPER, LOWER, etc. Expressions can be manually defined (as shown in the above syntax) or automatically created by the Expression Statistics Store (ESS), which monitors workloads. When it notices repeating SQL expressions, it will use DBMS_INMEMORY.IME_CAPTURE to capture the repeating expressions and DBMS_INMEMORY.IME_POPULATE to create the in-memory virtual columns.

In-memory and Exadata

Exadata flash cache is managed by the keyword CELLMEMORY.

ALTER TABLE emp CELLMEMORY;
ALTER TABLE emp NO CELLMEMORY;

CREATE TABLE CUSTOMER_SALES (
  CUST_NO    number,
  PRICE      number,
  QTY        number,
  TAX        number,
  TOTAL_SALE as (PRICE * QTY + TAX))
CELLMEMORY MEMCOMPRESS FOR QUERY;

You can use this feature on tables, partitions, subpartitions, and materialized views. The MEMCOMPRESS clause supports FOR QUERY LOW and FOR CAPACITY LOW. The NO PRIORITY clause will populate this into in-memory upon first request.

This feature also supports automatic data optimization with this policy syntax:
- INMEMORY or NO INMEMORY
- Alter MEMCOMPRESS level
- After ... of no access
- After ... of creation
- After ... of no modification
- On ...

These policies can also be run manually (as in a batch job perhaps) using DBMS_ILM.EXECUTE_ILM. Some examples include:

ALTER TABLE emp ILM ADD POLICY NO INMEMORY AFTER 6 months OF CREATION;
ALTER TABLE emp ILM ADD POLICY NO INMEMORY AFTER 10 days no access;
ALTER TABLE emp ILM ADD POLICY MEMCOMPRESS FOR QUERY after 5 days of creation;
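Pulling the pieces above together, here is a minimal sketch of a typical workflow: enable two tables for the column store, create the join group on the join column (using the syntax shown above), and confirm population. The EMP/DEPT tables are the usual demo schema, and the V$IM_SEGMENTS check is a standard dictionary view; verify the column names against your 12.2 documentation.

-- Mark both tables for the in-memory column store
ALTER TABLE emp  INMEMORY;
ALTER TABLE dept INMEMORY;

-- Create the join group on the common join column
CREATE INMEMORY JOIN GROUP v_deptno (DEPT(deptno), EMP(deptno));

-- Without a PRIORITY clause, population starts on first access
SELECT COUNT(*) FROM emp;
SELECT COUNT(*) FROM dept;

-- Confirm what has been populated into the column store
SELECT segment_name, populate_status, inmemory_size
FROM   v$im_segments;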
In-memory Fast Start

Another feature that will help with database startup and the initialization of the in-memory column store is the ability to save the column store from the prior operational system. You simply enable this feature and name the tablespace where the column store is to be saved when the database is shut down.

In-memory Fast Start syntax:

BEGIN
  DBMS_INMEMORY_ADMIN.FASTSTART_ENABLE('<tablespace_name>');
END;

The tablespace listed should be 2x larger than the INMEMORY_SIZE parameter. Use SHOW SGA or SHOW PARAMETER INMEMORY_SIZE to see the total in-memory size currently assigned. The data is then checkpointed and stored in DBIMFS_LOGSEG$, and the metadata about this feature is stored in the SYSAUX tablespace. The column store is then loaded from this area on database restart rather than rebuilt from scratch. This should save a considerable amount of startup time for a database with a larger in-memory column store.

In-memory Dynamic Allocation

The in-memory size can now be adjusted dynamically! You can increase the size of the allocation, but you cannot decrease it without recycling the database (shutdown/startup). Use the syntax 'alter system set inmemory_size = 512M scope = both;' to change the size both on the fly and for future recycles of the database. Again, use SHOW ALL or SHOW PARAMETER INMEMORY_SIZE to see the current and then the new allocation (after executing the syntax). There are some prerequisites, however:
- The column store must be enabled
- The database compatibility level must be set to 12.2.0 or higher
- The database has to have been started using the SPFILE option
- The new value of INMEMORY_SIZE must be at least 128M greater than its current setting

In-memory and Active Data Guard

Oracle 12.2 also supports in-memory features for Data Guard standby databases. There are three configuration options to consider:
- Identical column stores in both the primary and standby databases
- A column store in just the standby database
- Different column store configurations between the primary and standby databases

Identical column stores means both the primary database and its standby have the same column stores on the same tables. This ensures the same level of query performance when connected to either environment, and is convenient when the standby database is used as a reporting database. These settings need to be set:
- INMEMORY_SIZE is set in both instances, to the same size
- INMEMORY_ADG_ENABLED is set to TRUE on the standby instance
- The INMEMORY clause is used on the same objects in both the primary and standby instances

Standby-only in-memory is where you are probably using the standby database as a reporting database and do not wish to use the in-memory features in the primary database. These settings need to be set:
- INMEMORY_SIZE is set in the standby instance and set to 0 in the primary database
- INMEMORY_ADG_ENABLED is set to TRUE on the standby instance
- The INMEMORY and DISTRIBUTE FOR SERVICE clauses are used for the standby instance objects only

You can also have a mix of the above, using in-memory features in both the primary and standby instances but on different objects. You might want the current quarter in-memory in the primary instance but a different (prior) quarter of data in-memory in the standby instance. These settings need to be set:
- INMEMORY_SIZE is set in both instances, but does not necessarily need to be set to the same value
- INMEMORY_ADG_ENABLED is set to TRUE on the standby instance
- The INMEMORY clause is again used to tell the instance which objects to load into the column store, and the DISTRIBUTE FOR SERVICE clause is used for the standby instance only
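As a sketch of the standby-only (reporting) configuration described above, the statements below show one possible setup. The 4G size, the reporting_svc service name and the SALES table are hypothetical, and the exact steps and restart requirements should be checked against the In-Memory deployment guide linked below.

-- Primary: column store disabled
ALTER SYSTEM SET inmemory_size = 0 SCOPE=SPFILE;

-- Standby: size and enable the column store (followed by a restart of the standby)
ALTER SYSTEM SET inmemory_size = 4G SCOPE=SPFILE;
ALTER SYSTEM SET inmemory_adg_enabled = TRUE SCOPE=SPFILE;

-- Issued on the primary (the DDL is shipped to the standby via redo):
-- populate SALES only where the reporting service runs
ALTER TABLE sales INMEMORY DISTRIBUTE FOR SERVICE reporting_svc;

With this in place, queries against the reporting service on the standby can use the column store while the primary carries no in-memory overhead.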
You can review this level of detail in the Oracle 12.2 In-Memory deployment guide at this link: http://docs.oracle.com/database/122/INMEM/deploying-im-column-store-with-adg.htm#INMEM-GUID-F5934C5A-34DE-46BA-ABD2-727E548B8D9F

Dan Hotka
Author/Instructor/Oracle Expert

…check out the current promotions on my video-on-demand courses: www.DanHotka.com

Blog Post: Oracle 12.2 New Feature: Longer object/column names

Hi,

Maybe you have wanted to use longer object names to be more descriptive. The Oracle RDBMS now supports up to 128 bytes for both object and column names! The previous limit was 30. The other naming rules still apply, such as:
- Starts with a character
- No spaces
- Underscores are acceptable
- Special characters are too…just be careful out there!

Please don't get carried away. I know that using Toad's Schema Browser I can filter objects by short acronyms, perhaps. We don't need index names like 'Index_on_EMP_table_drop_after_1_use_as_this_was_just_a_test'…where the index never gets renamed or dropped. I can see this happening. So from Oracle 12.2 on, you can give your tables/indexes/materialized views/columns/PL/SQL packages/procedures/functions much longer and more meaningful names.

Oracle employee Chris Saxon shared this code with us at OOW16, to display the indexes associated with tables and crop the names shorter to fit on your display:

Select TABLE_NAME,
       Listagg(INDEX_NAME, ',' on overflow truncate)
         within group (order by INDEX_NAME) INDS
From   USER_INDEXES
Group by TABLE_NAME;

Oracle employee Connor McDonald expanded on the topic with this code, which would prevent longer object names if you do not have DBA privileges. Perhaps you can use this code to modify/monitor/enforce your company naming conventions.

create or replace trigger ddl_trigger
before create or alter on demo.SCHEMA
declare
  l_obj varchar2(128);
  l_dba int;
begin
  l_obj := ora_dict_obj_name;

  select count(*)
  into   l_dba
  from   dba_role_privs
  where  grantee = USER
  and    granted_role = 'DBA';

  if l_dba = 0 and length(l_obj) > 30 then
    raise_application_error(-20000, 'Identifier "'||l_obj||'" is too long');
  end if;
end;

Dan Hotka
Author/Instructor/CEO

Blog Post: Players for Logic Annual Championship for 2016

The following players will be invited to participate in the Logic Annual Championship for 2016, currently scheduled to take place on 4 April. The number in parentheses after each name is the number of championships in which that player has already participated, and all players listed qualified via the Top 50 ranking. Congratulations to all listed below on their accomplishment and best of luck in the upcoming competition!

1. Pavel Zeman (2) - Czech Republic
2. SteliosVlasopoulos (3) - Belgium
3. Marek Sobierajski (1) - Poland
4. mentzel.iudith (3) - Israel
5. Vyacheslav Stepanov (3) - No Country Set
6. James Su (3) - Canada
7. Rytis Budreika (3) - Lithuania
8. JasonC (3) - United Kingdom
9. Cor (2) - Netherlands
10. Köteles Zsolt (2) - Hungary
11. Kuvardin Evgeniy (2) - Russia
12. NickL (2) - United Kingdom
13. Chad Lee (3) - United States
14. NeilC (0) - United Kingdom
15. TZ (1) - Lithuania
16. D. Kiser (2) - United States
17. ted (3) - United Kingdom
18. MarkM. (3) - Germany
19. Elic (3) - Belarus
20. mcelaya (1) - Spain
21. Sandra99 (3) - Italy
22. tonyC (2) - United Kingdom
23. seanm95 (3) - United States
24. Talebian (2) - Netherlands
25. richdellheim (3) - United States
26. Arūnas Antanaitis (1) - Lithuania
27. ratte2k4 (1) - Germany
28. umir (3) - Italy
29. Kanellos (2) - Greece
30. NielsHecker (3) - Germany
31. Andrii Dorofeiev (2) - Ukraine
32. Mehrab (3) - United Kingdom
33. JustinCave (3) - United States
34. krzysioh (2) - Poland
35. Stanislovas (0) - Lithuania
36. Vladimir13 (1) - Russia
37. danad (3) - Czech Republic
38. RalfK (2) - Germany
39. YuanT (3) - United States
40. Mike Tessier (1) - Canada
41. Vijay Mahawar (3) - No Country Set
42. Eric Levin (2) - United States
43. whab@tele2.at (1) - Austria
44. puzzle1fun (0) - No Country Set
45. Sartograph (1) - Germany
46. tonywinn (1) - Australia
47. dovile (0) - Lithuania
48. Jeff Stephenson (0) - No Country Set
49. craig.mcfarlane (2) - Norway
50. Paresh Patel (0) - No Country Set

Wiki Page: Pre-Purchase Building & Pest inspections

We provide pre-purchase building and pest inspections, dilapidation reports, and completion reports with aerial photography for building inspections in Sydney. http://www.expertbuildinginspections.com.au

Wiki Page: Changing DB Parameters for Oracle on Amazon RDS

Nowadays, a DBA has to know how to administer a database not only on-premise but also in the cloud. The cloud is not the future; it is the present. Many databases are being moved from on-premise to the cloud. Since we are in this transition phase, a DBA has to know how to shut down, start, create tablespaces, change database parameters, and perform several other tasks on Amazon RDS, Oracle Public Cloud, Microsoft Azure, other providers and, of course, on-premise. Cloud providers have several things in common, but they also have different ways to perform the same task. In this article I will cover how to change the values of database parameters in Amazon RDS. If you are an experienced on-premise Oracle DBA you might think that changing database parameters is the easiest thing, and it is, indeed, but it is different on Amazon RDS, and you will see why.

I have broken the article up into the following sections:
- The on-premise approach
- The research
- How to change DB parameters in Amazon RDS using the AWS Management Console
- How to change DB parameters in Amazon RDS using the CLI
- Conclusion

For this article I have used the following environment:
- Database Name: Oracle
- Database Type: Amazon RDS - Single Instance
- Database version: 12.1.0.2

The on-premise approach

An experienced on-premise DBA would want to execute an "ALTER SYSTEM SET"; after all, the user that Amazon RDS provides has the "DBA" role, and the "ALTER SYSTEM" privilege is included in that role. Let's follow that approach and see what happens:

SQL> alter system set statistics_level='BASIC' scope=spfile;
alter system set statistics_level='BASIC'
*
ERROR at line 1:
ORA-01031: insufficient privileges

So, here is the first problem we have. If we were using an on-premise database with a user that has the "DBA" role, the same statement works:

SQL> alter system set statistics_level='BASIC' scope=spfile;

System altered.

The only difference is that the first database was on Amazon RDS and the second was on-premise. So in this "difference" there should be the "reason" for that "insufficient privileges" error. Now let's move on to the research part.
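Before looking at role definitions, a quick sanity check is possible from the RDS session itself. This is a sketch (not part of the original article) that uses only the standard SESSION_PRIVS view:

-- Which of the two system privileges of interest does this session actually hold?
SELECT privilege
FROM   session_privs
WHERE  privilege IN ('ALTER SYSTEM', 'ALTER DATABASE')
ORDER  BY privilege;

-- On an Amazon RDS master user this typically returns no rows, which is
-- consistent with the ORA-01031 above.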
The research

The first question we have to clarify is why a user with the "DBA" role doesn't have the privilege to execute "ALTER SYSTEM". The reason is that in Amazon RDS, the "DBA" role has two privileges fewer than in an on-premise database.

In Amazon RDS:

SQL> select count(*) from dba_sys_privs where grantee='DBA' and privilege like 'ALTER%';

  COUNT(*)
----------
        32

On-premise database (same version, 12.1.0.2):

SQL> select count(*) from dba_sys_privs where grantee='DBA' and privilege like 'ALTER%';

  COUNT(*)
----------
        34

I found that the two privileges that were removed are:
- ALTER DATABASE
- ALTER SYSTEM

So that's the reason why we cannot execute "ALTER SYSTEM SET" in our Oracle Amazon RDS instance. Now you might think: why not use SYS? And here is the second thing that an Oracle database on Amazon RDS does differently compared with on-premise. An Oracle database on Amazon RDS doesn't allow you to use the SYS and SYSTEM users, as per the Amazon documentation: "The SYS user, SYSTEM user, and other administrative accounts are locked and cannot be used." I also recommend reading the following notes:
- Oracle Database Support for Amazon AWS EC2 (Doc ID 2174134.1)
- Amazon RDS Support for AIA (Doc ID 2003294.1)

The Amazon documentation says two things: locked, and cannot be used. It is not correct regarding the first property; the SYS and SYSTEM users are not locked:

SQL> select username, account_status from dba_users where username in ('SYS','SYSTEM');

USERNAME      ACCOUNT_STATUS
------------- -----------------
SYSTEM        OPEN
SYS           OPEN

But the Amazon documentation is correct when it says that they cannot be used, and you will see why. I tried to change the password of SYS just for fun and the output was this:

SQL> alter user sys identified by Manager1;
alter user sys identified by Manager1
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20900: ALTER USER SYS not allowed.
ORA-06512: at "RDSADMIN.RDSADMIN", line 208
ORA-06512: at line 2

That was my first meeting with RDSADMIN.RDSADMIN. When I saw "RDSADMIN.RDSADMIN" I thought: "I could change the password of SYS, but this package RDSADMIN.RDSADMIN is not allowing it." Without this package, that statement should work. Of course, this is a customized package created by Amazon RDS; it is not created by default by Oracle Database. Then I took a look at the triggers that call it:

SQL> select owner, trigger_name, trigger_type, status, triggering_event, trigger_body from dba_triggers where owner='RDSADMIN' and triggering_event like '%DDL%';

OWNER    TRIGGER_NAME     TRIGGER_TYPE STATUS  TRIGG TRIGGER_BODY
-------- ---------------- ------------ ------- ----- -------------------------------
RDSADMIN RDS_DDL_TRIGGER  BEFORE EVENT ENABLED DDL   BEGIN rdsadmin.secure_ddl; END;
RDSADMIN RDS_DDL_TRIGGER2 BEFORE EVENT ENABLED DDL   BEGIN rdsadmin.secure_ddl; END;

First of all, I don't understand why there are two triggers with the same trigger type, the same triggering event, the same status (enabled), calling exactly the same procedure (rdsadmin.secure_ddl); it looks as if the Amazon RDS developers were playing on production. Who knows! Anyway, I tried to disable that trigger; at the end of the day, my user has the "DBA" role, right? :)

SQL> alter trigger RDSADMIN.RDS_DDL_TRIGGER disable;
alter trigger RDSADMIN.RDS_DDL_TRIGGER disable
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20900: RDS restricted DDL found: ALTER TRIGGER SYS.RDS_DDL_TRIGGER
ORA-06512: at "RDSADMIN.RDSADMIN", line 407
ORA-06512: at line 2

Again, our friend RDSADMIN.RDSADMIN....
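To get an idea of what Amazon ships in that schema, a simple inventory query helps. This is a sketch that is not part of the original article; the exact object list varies by engine version:

-- Inventory of the RDSADMIN schema
SELECT object_type, COUNT(*) AS objects
FROM   dba_objects
WHERE  owner = 'RDSADMIN'
GROUP  BY object_type
ORDER  BY object_type;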
I did some research on the package RDSADMIN.RDSADMIN and, after several tests with different inputs and some cups of coffee, by observation I found the following:
- It is a customized package, included in any Oracle database on Amazon RDS, that is in charge of verifying what is allowed and what is not.
- If you are SYS, SYSTEM or RDSADMIN, the validations performed by this package are likely not applied. But if you are another user, several validations are performed on the statement you are trying to execute.
- It doesn't allow you to grant the privileges ALTER SYSTEM, ALTER DATABASE, GRANT ANY PRIVILEGE, DATAPUMP_EXP_FULL_DATABASE, DATAPUMP_IMP_FULL_DATABASE, IMP_FULL_DATABASE or EXP_FULL_DATABASE. If you try to do so, you will get the error "ORA-20997".
- It validates which objects are touched by the statement you are executing; if those objects belong to the schemas SYS, SYSTEM or RDSADMIN, it cancels your statement and you will receive the error "ORA-20900" (for example, when I tried to disable the trigger).
- It doesn't allow you to alter or drop the schema RDSADMIN.
- It doesn't allow you to revoke any privilege from the RDSADMIN user. (This is the first time that, with a user that has the DBA role, I have felt so unprivileged.)
- It doesn't allow you to ALTER or DROP the tablespace RDSADMIN. (Yes, there is a tablespace called RDSADMIN created by default, and also a user profile called RDSADMIN. RDSADMIN is everywhere!)
- It doesn't allow you to add datafiles to any tablespace specifying a full path; you must let OMF handle that.
- It does allow you to create new tablespaces. Finally!
- It allows you to compile packages, procedures, functions, triggers and views. Only to compile.

So when someone asks you why it is not possible to change a DB parameter with "ALTER SYSTEM", you will know what to say :)
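It is worth adding that Amazon compensates for the missing privileges with its own admin package. The calls below are a sketch based on the rdsadmin.rdsadmin_util procedures that AWS documents for RDS Oracle; the grantee name is hypothetical and the exact procedure list should be verified for your engine version.

-- Operations that would normally need ALTER SYSTEM / ALTER DATABASE
EXEC rdsadmin.rdsadmin_util.switch_logfile;   -- instead of ALTER SYSTEM SWITCH LOGFILE
EXEC rdsadmin.rdsadmin_util.checkpoint;       -- instead of ALTER SYSTEM CHECKPOINT

-- Granting access to a SYS-owned object to an application user
BEGIN
  rdsadmin.rdsadmin_util.grant_sys_object(
    p_obj_name  => 'V_$SESSION',
    p_grantee   => 'APPUSER',
    p_privilege => 'SELECT');
END;
/

None of these replace ALTER SYSTEM SET for initialization parameters, though; for that, the Parameter Group mechanism described next is the supported route.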
How to change DB parameters in Amazon RDS using the AWS Management Console

In order to change a DB parameter on Amazon RDS, you must use a "Parameter Group". I like the concept behind this because it is a cloud-oriented concept, which is good. A Parameter Group, as the name says, is a set of parameters identified by a name. The good thing about this is that a Parameter Group can be shared by several Oracle database instances. Instead of managing separate parameters for every single database instance as on-premise, Amazon allows you to create one single group and re-use it across several database instances. That way you keep your databases standardized. You can have a Parameter Group for all your Dev databases, another Parameter Group for all your Test databases, and so on.

You can set a Parameter Group for your database when you are creating the Oracle RDS instance, or you can create a Parameter Group any time after the database creation. In the latter case, at creation time Amazon will assign a default Parameter Group, usually called "default.oracle-ee-12.1". The problem with this is that the default Parameter Group cannot be modified; you cannot change the value of any parameter inside it.

Since a default Parameter Group cannot be modified, we must create another one. To do so, go to AWS Management Console -> Parameter Groups, click on the button "Create Parameter Group" and follow the instructions:
- Select a "Parameter Group Family", which is basically the kind of database engine you are creating the Parameter Group for.
- Provide a name for the Parameter Group and a description as well.
- Click on the button "Create".

When the Parameter Group is created, you can modify the values of the parameters. For a non-default Parameter Group, go to AWS Management Console -> Parameter Groups -> [your non-default Parameter Group] and then click on the button "Edit Parameters". There will be a page where you will find all the parameters and will be able to change the values. Once you have modified all the required parameters, click on the button "Save Changes".

Be aware that every parameter is either "dynamic" or "static". All the dynamic parameters are applied immediately, regardless of the "Apply Immediately" setting. For static parameters, however, you will have to reboot the Oracle RDS instance.

How to change DB parameters in Amazon RDS using the CLI

Amazon provides a command line tool (CLI). It allows you to make changes faster than in the AWS Management Console.

To list the Parameter Groups:

Deibys-MacBook-Pro$ aws rds describe-db-parameter-groups
{
    "DBParameterGroups": [
        {
            "DBParameterGroupArn": "arn:aws:rds:us-west-2:062377963666:pg:default.oracle-ee-12.1",
            "DBParameterGroupName": "default.oracle-ee-12.1",
            "DBParameterGroupFamily": "oracle-ee-12.1",
            "Description": "Default parameter group for oracle-ee-12.1"
        }
    ]
}

To list all the parameters of a Parameter Group:

Deibys-MacBook-Pro$ aws rds describe-db-parameters --db-parameter-group-name default.oracle-ee-12.1
{
    "Parameters": [
        {
            "ApplyMethod": "pending-reboot",
            "Description": "_allow_level_without_connect_by",
            "DataType": "boolean",
            "AllowedValues": "TRUE,FALSE",
            "Source": "engine-default",
            "IsModifiable": true,
            "ParameterName": "_allow_level_without_connect_by",
            "ApplyType": "dynamic"
        },
        {
            "ApplyMethod": "pending-reboot",
            "Description": "_always_semi_join",
            "DataType": "string",
            "AllowedValues": "CHOOSE,OFF,CUBE,NESTED_LOOPS,MERGE,HASH",
            "Source": "engine-default",
            "IsModifiable": true,
            "ParameterName": "_always_semi_join",
            "ApplyType": "dynamic"
        },
        {.......}
    ]
}

The output can be produced in a different format:

Deibys-MacBook-Pro$ aws rds describe-db-parameters --db-parameter-group-name default.oracle-ee-12.1 --output table

or

Deibys-MacBook-Pro$ aws rds describe-db-parameters --db-parameter-group-name default.oracle-ee-12.1 --output text

To filter one single parameter in a Parameter Group:

Deibys-MacBook-Pro$ aws rds describe-db-parameters --db-parameter-group-name default.oracle-ee-12.1 --output text --query "Parameters[?ParameterName=='statistics_level']"

or

Deibys-MacBook-Pro$ aws rds describe-db-parameters --db-parameter-group-name default.oracle-ee-12.1 --output table --query "Parameters[?ParameterName=='statistics_level']"

And finally, you change a parameter value with the following command:

Deibys-MacBook-Pro$ aws rds modify-db-parameter-group --db-parameter-group-name nuvolaparameters --parameters "ParameterName=statistics_level,ParameterValue=basic,ApplyMethod=pending-reboot"
{
    "DBParameterGroupName": "nuvolaparameters"
}
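Whichever route you use, the change can be verified from inside the database once it has been applied (and, for static parameters, after the instance reboot). This check is not in the original article but uses only standard dictionary views:

-- Confirm the new value picked up by the instance
SELECT name, value, isdefault
FROM   v$parameter
WHERE  name = 'statistics_level';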
Conclusion

To finish the article, I would say that using a Parameter Group is a good approach for cloud databases, because a group can be shared across several databases and that allows you to standardize. Some closing observations:
- Both the AWS Management Console and the CLI provide a fast way to change parameters.
- I didn't like the RDSADMIN.RDSADMIN package at first because, as a DBA, I am used to having control of everything inside the database. But I understand Amazon's security perspective, and I understand that RDS is exactly that, a "Relational Database Service", which means others have the control and others maintain the database. It wouldn't make sense if, with a DaaS, I still had to maintain the database myself; by using a DaaS, I can focus on other areas such as SQL tuning, instance tuning, reporting for the board of directors, capacity planning, monitoring, and so on. So in the end, the package is fine.
- I think Amazon should ask whether we want to apply changes in memory, in the spfile or in both, instead of applying them immediately whenever possible. +1 for Oracle Public Cloud :)
- Always create your own Parameter Group before creating a database, so that you can use it from the beginning. The problem with using the default Parameter Group is that when you need to change a parameter value you will have to create a new Parameter Group, stop the database and assign the new Parameter Group, and that means downtime, especially when your database is already being used by clients.

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the "SELECT Journal Editor's Choice Award 2016". Deiby has been a speaker at Collaborate in Las Vegas, USA, and at Oracle Open World in San Francisco, USA, and in São Paulo, Brazil.

Comment on Oracle Text Index at a glance

I have been looking at Oracle Text for several years and this is the best overview that I've found! Can you write more about the circumstances under which the $P and $S tables are created? Thanks!

